ConcurrentModificationException in transactional operations

I'm trying to do simple puts on a transactional cache with these settings. The puts do not conflict with each other on their keys.
connection.setAutoCommit(false);
connection.setIsolationLevel(Isolation.READ_COMMITTED);
connection.setEager(false);

As soon as a few transactions have completed, I start seeing these errors. After that, all other transactions also fail with confusing exceptions - some about ClassNotFound and some about transactions already being committed.
java.util.ConcurrentModificationException
     at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
     at java.util.AbstractList$Itr.next(AbstractList.java:343)
     at com.tangosol.coherence.transaction.internal.TranscriptImpl.getModifiedTables(TranscriptImpl.java:66)
     at com.tangosol.coherence.transaction.internal.storage.Session.getModifiedTables(Session.java:79)
     at com.tangosol.coherence.transaction.internal.storage.Session.queueOperation(Session.java:35)
     at com.tangosol.coherence.transaction.internal.router.LocalDirector.route(LocalDirector.java:41)
     at com.tangosol.coherence.transaction.internal.Endpoint.dispatch(Endpoint.java:105)
     at com.tangosol.coherence.transaction.internal.OptimisticNamedCacheImpl.invoke(OptimisticNamedCacheImpl.java:708)
     at com.tangosol.coherence.transaction.internal.OptimisticNamedCacheImpl.put(OptimisticNamedCacheImpl.java:370)
java.lang.IllegalStateException: Operation called in illegal state: COMMITTED
     at com.tangosol.coherence.transaction.internal.TransactionImpl.assertStateChange(TransactionImpl.java:313)
     at com.tangosol.coherence.transaction.internal.TransactionImpl.setCommitted(TransactionImpl.java:140)
     at com.tangosol.coherence.transaction.internal.component.ClientAutoCommit.invoke(ClientAutoCommit.java:76)
     at com.tangosol.coherence.transaction.internal.Endpoint.dispatch(Endpoint.java:95)
     at com.tangosol.coherence.transaction.internal.router.StaticEndpointConcern.apply(StaticEndpointConcern.java:48)
     at com.tangosol.coherence.transaction.internal.router.OperationFilter.apply(OperationFilter.java:51)
     at com.tangosol.coherence.transaction.internal.router.OutboundRouterImpl.route(OutboundRouterImpl.java:36)
     at com.tangosol.coherence.transaction.internal.Endpoint.dispatch(Endpoint.java:105)
     at com.tangosol.coherence.transaction.internal.router.StaticEndpointConcern.apply(StaticEndpointConcern.java:48)
     at com.tangosol.coherence.transaction.internal.router.OperationFilter.apply(OperationFilter.java:51)
     at com.tangosol.coherence.transaction.internal.router.OutboundRouterImpl.route(OutboundRouterImpl.java:36)
     at com.tangosol.coherence.transaction.internal.Endpoint.dispatch(Endpoint.java:105)
     at com.tangosol.coherence.transaction.internal.router.StaticEndpointConcern.apply(StaticEndpointConcern.java:48)
     at com.tangosol.coherence.transaction.internal.router.OperationFilter.apply(OperationFilter.java:51)
     at com.tangosol.coherence.transaction.internal.router.OutboundRouterImpl.route(OutboundRouterImpl.java:36)
     at com.tangosol.coherence.transaction.internal.Endpoint.dispatch(Endpoint.java:105)
     at com.tangosol.coherence.transaction.internal.router.StaticEndpointConcern.apply(StaticEndpointConcern.java:48)
     at com.tangosol.coherence.transaction.internal.router.OperationFilter.apply(OperationFilter.java:51)
     at com.tangosol.coherence.transaction.internal.router.OutboundRouterImpl.route(OutboundRouterImpl.java:36)
     at com.tangosol.coherence.transaction.internal.router.LocalDirector.route(LocalDirector.java:56)
     ... 22 more

Sometimes:
java.lang.IllegalStateException: Operation called in illegal state: ERROR
...

Any ideas?
Thanks,
Ashwin.

I suppose this is the correct way to use optimistic, transactional caches:
/**
 * Author: Ashwin Jayaprakash / Date: 1/4/12 / Time: 12:32 PM
 */
import com.tangosol.coherence.transaction.Connection;
import com.tangosol.coherence.transaction.DefaultConnectionFactory;
import com.tangosol.coherence.transaction.Isolation;
import com.tangosol.coherence.transaction.OptimisticNamedCache;
import com.tangosol.net.CacheFactory;

import java.io.Serializable;

public class TxnTest {
    protected static final ThreadLocal<Connection> connections = new ThreadLocal<Connection>();

    protected static DefaultConnectionFactory connectionFactory;

    public static void main(String[] args) throws Exception {
        connectionFactory = new DefaultConnectionFactory();
        for (int i = 0; i < 5; i++) {
            runTest();
        }
    }

    private static void runTest() throws InterruptedException {
        Runnable job = new Runnable() {
            @Override
            public void run() {
                String name = Thread.currentThread().getName();
                for (int i = 0; i < 100; i++) {
                    for (int j = 0; j < 3; j++) {
                        if (j > 0) {
                            System.out.printf("Txn [%s-%d] retry attempt [%d]%n", name, i, j);
                        }
                        Connection connection = connectionFactory.createConnection();
                        connection.setAutoCommit(false);
                        connection.setIsolationLevel(Isolation.READ_COMMITTED);
                        connection.setEager(false);
                        OptimisticNamedCache users = connection.getNamedCache("tx-users");
                        try {
                            users.put("bill", new User("bill-" + name, i));
                            users.put("mary", new User("mary-" + name, i));
                            users.put("john", new User("john-" + name, i));
                            connection.commit();
                            break;
                        } catch (Exception e) {
                            connection.rollback();
                        } finally {
                            connection.close();
                        }
                    }
                }
            }
        };

        Thread t1 = new Thread(job);
        Thread t2 = new Thread(job);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("Entries: " + CacheFactory.getCache("tx-users").size());
        for (Object o : CacheFactory.getCache("tx-users").values()) {
            System.out.println("   " + o);
        }
    }

    public static class User implements Serializable {
        String name;
        int age;

        public User() {
        }

        public User(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() {
            return name;
        }

        public int getAge() {
            return age;
        }

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder();
            sb.append(getClass().getName());
            sb.append("{name='").append(name).append('\'');
            sb.append(", age=").append(age);
            sb.append('}');
            return sb.toString();
        }
    }
}
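Stripping out the Coherence classes, the retry-on-conflict pattern this test is going for can be sketched with plain `java.util.concurrent` primitives (everything here, class and method names included, is illustrative and not Coherence API): read a snapshot, compute against it, commit only if nothing changed, and retry on conflict.

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticRetrySketch {
    // Optimistic concurrency in miniature: read a value, compute a new one,
    // and commit only if nobody else changed it in the meantime; on conflict, retry.
    static int addOptimistically(AtomicReference<Integer> ref, int delta, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Integer current = ref.get();          // snapshot read
            Integer proposed = current + delta;   // work done against the snapshot
            if (ref.compareAndSet(current, proposed)) {
                return proposed;                  // "commit" succeeded
            }
            // "commit" failed: another writer got in first; loop and retry
        }
        throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws InterruptedException {
        final AtomicReference<Integer> counter = new AtomicReference<>(0);
        Runnable job = () -> {
            for (int i = 0; i < 1000; i++) {
                addOptimistically(counter, 1, 1_000_000);
            }
        };
        Thread t1 = new Thread(job);
        Thread t2 = new Thread(job);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("Final value: " + counter.get()); // prints 2000
    }
}
```

The key difference from the failing test above: every piece of shared mutable state is confined to the atomic reference, and nothing transactional is shared across threads.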

Similar Messages

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction (with Optimistic Read-Committed settings) that involves multiple caches - some set to write-through and others to write-behind - when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, com.tangosol.net.cache.ReadWriteBackingMap) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.
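    For reference, the <rollback-cachestore-failures> knob mentioned above sits in the cache configuration. A minimal write-through sketch (the scheme name and store class here are invented for illustration):

```xml
<read-write-backing-map-scheme>
  <scheme-name>example-rw-scheme</scheme-name>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.ExampleCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- zero write-delay = synchronous write-through; with rollback enabled,
       a CacheStore exception is propagated back to the committing client -->
  <write-delay>0s</write-delay>
  <rollback-cachestore-failures>true</rollback-cachestore-failures>
</read-write-backing-map-scheme>
```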

  • Is Operation Undo/Reversal possible in SAP ME?

    Hello Experts!
    I have reported/completed an operation for an SFC in a POD and its status changed to In-Queue for next operation. I was wondering if there is any way to undo the reporting on the previous operation so that we can redo it at a later time?
    And, would this undo also send a reversal of confirmation to ERP?
    Thanks,
    Venkat

    Venkat,
    Just to point out that unless the operation that was accidentally completed was the last operation then nothing would move to inventory as the product would still be WIP.
    If it was the last operation ME would mark the SFC as DONE and the appropriate goods movements would be carried out in ECC to move the part into inventory / stock. For this scenario I would push the part back into ME using a rework order as this would move the part back out of stock and back to WIP.
    Assuming the SFC wasn't at the last operation then have a look at "SFC Step Status" which allows you to move a product to any operation on the current router but be careful as you can cause lots of trouble with this command.
    From a business process perspective I would suggest the best solution would be raise a non conformance with a disposition that allows you to go back to a previous operation as you should be tracking how often users transact operations incorrectly and tackle the root cause.
    Hope this helps
    Kevin

  • System.Transactions.TransactionAbortedException with no reason

    Occasionally I am receiving the following error:
    System.Transactions.TransactionAbortedException: The transaction has aborted. ---> System.InvalidOperationException: The requested operation cannot be completed because the connection has been broken.
    The exception is happening in the Dispose method of TransactionScope, although nothing special happened: no timeout occurred and the SQL Server is OK.
    This method is executed every 10 minutes and the problem occurs occasionally; nothing seems to be different when the problem does not occur.
    The stack trace is:
    --- End of inner exception stack trace ---
    at System.Transactions.TransactionStateAborted.EndCommit(InternalTransaction tx)
    at System.Transactions.CommittableTransaction.Commit()
    at System.Transactions.TransactionScope.InternalDispose()
    at System.Transactions.TransactionScope.Dispose()
    I am having the following code:
    using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required,
                                                      new TimeSpan(0, 0, 30)))
    using (m_SQLConnection = new SqlConnection(m_SQLConnectionstring))
    {
        m_SQLConnection.Open();
        SQLCommand.Connection = m_SQLConnection;
        SQLCommand.CommandTimeout = 1;
        for (int i = 0; i < 2; i++)
            SQLCommand.ExecuteNonQuery();
        ts.Complete();
    }
    As I said, a timeout does not occur, since all those commands take about 10 ms to execute; the command timeout is 1 second and the transaction timeout is 30 seconds.

    Hello Tal,
    Have you resolved your problem? If yes, please can you share what the problem was, because I am getting the same exception:
    System.Transactions.TransactionAbortedException was caught
      Message=The transaction has aborted.
      Source=System.Transactions
      StackTrace:
           at System.Transactions.TransactionStateAborted.EndCommit(InternalTransaction tx)
           at System.Transactions.CommittableTransaction.Commit()
           at System.Transactions.TransactionScope.InternalDispose()
           at System.Transactions.TransactionScope.Dispose()
         at MCCT.Application.Planning.BusinessManagers.CCombineUnCombineJobManagerWrite.CombineJob(IEntityCollectionBase oPartMovesCollection) in F:\MATSFP\MCCT.Application\MCCT.Application.Planning\MCCT.Application.Planning.BusinessManagers\CCombineUnCombineJobManagerWrite.cs:line 330
      InnerException: System.Data.SqlClient.SqlException
           Message=The transaction operation cannot be performed because there are pending requests working on this transaction.
           Source=.Net SqlClient Data Provider
           ErrorCode=-2146232060
           Class=16
           LineNumber=1
           Number=3981
           Procedure=""
           Server=QEDSVR04
           State=1
           StackTrace:
                at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
                at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
                at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
                at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
                at System.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel,
    Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj, Boolean isDelegateControlRequest)
                at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransactionYukon(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction,
    Boolean isDelegateControlRequest)
                at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransaction(TransactionRequest transactionRequest, String name, IsolationLevel iso, SqlInternalTransaction internalTransaction, Boolean
    isDelegateControlRequest)
                at System.Data.SqlClient.SqlDelegatedTransaction.SinglePhaseCommit(SinglePhaseEnlistment enlistment)
           InnerException:
    Thanks & regards,
    Anand Jagtap.
    Anddy

  • Transactions Problem in JPA

    Hi,
    In our application I want to use bean-managed transactions instead of container-managed transactions (I am using Oracle TopLink JPA). For this purpose there are two transaction types available: ResourceLocal transactions and JTA transactions.
    Which one is advantageous when I use bean-managed transactions, and what are the requirements for it?

    Actually my requirement is to combine different methods in one transaction. If any method in the transaction fails, the preceding methods must be rolled back.
    For this purpose I wrote a getTransaction method in my session bean (JPA).
    I call that method in the backing bean; with it I get the UserTransaction object from the session bean. After that I perform transaction operations - begin, commit, rollback - in my backing bean (combining different methods in one transaction).
    Please advise whether this is the correct approach. For this I used container-managed transactions and JTA as the transaction type.

  • Transaction Sync and Database Size

    Hello,
    We're using BDB (via the Java interface) as the persistent store for a messaging platform. In order to achieve high performance, the transactional operations are configured to not sync, i.e., TransactionConfig.setSync(false) . While we do achieve better performance, the size of the database does seem rather large. We checkpoint on a periodic basis, and each time we checkpoint, the size of the database grows, even though records (messages in our world) are being deleted. So, if I were to add, say 10000 records, delete all of them and then checkpoint, the size of the database would actually grow! In addition, the database file, while being large, is also very sparse - a 30GB file when compressed reduces in size to 0.5 GB.
    We notice that if we configure our transactional operations to sync, the size is much smaller, and stays constant, i.e., if I were to insert and subsequently delete 10000 records into a database whose file is X MB, the size of the database file after the operations would be roughly X MB.
    I understand that transaction logs are applied to the database when we checkpoint, but should I be configuring the behaviour of the checkpointing (via CheckpointConfig)?
    Also, I am checkpointing periodically from a separate thread. Does BDB itself spawn any threads for checkpointing?
    Our environment is as follows:
    RedHat EL 2.6.9-34.ELsmp
    Java 1.5
    BDB 4.5.20
    Thanks much in advance,
    Prashanth

    Hi Prashanth,
    If your delete load is high, your workload should benefit from setting the DB_REVSPLITOFF flag, which keeps the structure of the btree around regardless of records being deleted. The result should be fewer splits and merges, and therefore better concurrency.
    Here you can find some documentation that should help you:
    Access method tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/tune.html
    Transaction tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/tune.html
    If you are filling the cache with dirty pages, you can indeed call checkpoint() periodically in the application, or you can create a memp_trickle thread. See the following sections of the documentation:
    - Javadoc: Environment -> trickleCacheWrite: http://www.oracle.com/technology/documentation/berkeley-db/db/java/com/sleepycat/db/Environment.html#trickleCacheWrite(int)
    Some related thread for the "database size issue", can be found here: http://forums.oracle.com/forums/thread.jspa?threadID=534371&tstart=0
    Bogdan Coman

  • How does Berkeley realize the transaction?

    I heard the Berkeley DB supports transaction feature, and I'd like to know more about it. In my mind, there are two cases about transaction.
    I. One case is to do the transaction on one table
    II. Another case is to do the transaction on multiple tables, which means those tables have relations, and transaction needs be bound on those tables
    Which case is supported by the Berkeley database?
    Please help me to understand it.

    Hi Ashok,
    Let me clarify my example first. In that example, there are three threads (thread-1, thread-2, and thread-3) trying to access the database with transactions.
    For thread-1, it starts a transaction, and updates A, but because of the thread (task) schedule, thread-2 gets the CPU and starts another transaction. Similarly, when thread-2 updates B, thread-3 gets the CPU and starts the third transaction. After thread-3 commits/rollbacks the third transaction, thread-2 acquires the CPU again, and continues its transaction, then thread-1 acquires the CPU, and continues its transaction also.
    Of course, for a database, it can be designed to DO NOT allow transaction-crossing, which means when a thread is doing the transaction on DB-A, other threads cannot do the transaction on DB-A, until the former thread finished the transaction operation. However, it is a technology issue, not a requirement issue. From the technology perspective, a database can be designed to DO allow transaction-crossing, which means when a thread is doing the transaction on DB-A, other threads still can do other transactions on DB-A. Certainly, in order to realize the transaction-crossing, it needs more efforts, includes well defined data structures, the algorithm to correlate multiple transactions, the policy to store the data during a transaction, etc... So, I'd like to collect some feedbacks from people who are using or designing the database, and make a decision if the transaction-crossing function is necessary for a database product. If most of designers and users do not think the transaction-crossing function is useful, then I am over considering it.
    Kind Regards,
    Kevin.
    BTW, in my example, symbol "x--------> y" means thread-switching (from thread x to y), symbol ".........." is used to make the text be aligned well, it has not too much meaning, symbol "x<----------y" means thread-switching (from thread y to x)

  • DB_APPEND on a queue within a transaction

    Hello,
    With your help, I got my test program working. It appends data to a queue.
    But now I have problems doing the same thing within a transaction. I get an "Invalid argument" error.
    Thanks for a little help.
    Here is my small test program :
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <db.h>

    DB_ENV *env;
    int ret;
    DB *db;
    const int QUEUE_RECORD_SIZE = 50;
    int pad_char = 35; /* '#' */
    char *dbName = "queue-test-file.db";
    u_int32_t db_flags;
    u_int32_t env_flags =
        DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG |
        DB_INIT_MPOOL | DB_THREAD | DB_RECOVER | DB_REGISTER;
    db_recno_t recno;
    DBT key, data;
    char buf[1024];
    u_int32_t len;
    typedef enum { FALSE = (0 == 1), TRUE = (1 == 1) } Boolean;
    Boolean FirstTime = TRUE;

    int main(void) {
        DB_TXN *l_txn = NULL;
        int i = 0;

        ret = db_env_create(&env, 0);
        if (ret != 0) {
            printf("error in db_env_create\n");
        }
        ret = env->open(env, "/data/test/", env_flags, 0);
        if (ret != 0) {
            printf("error in env open\n");
        }
        ret = db_create(&db, env, 0);
        if (ret != 0) {
            printf("error in db_create\n");
        }
        ret = db->set_re_len(db, QUEUE_RECORD_SIZE);
        if (ret != 0) {
            printf("error in set_re_len\n");
        }
        ret = db->set_re_pad(db, pad_char);
        if (ret != 0) {
            printf("error in set_re_pad\n");
        }
        db_flags = DB_CREATE;
        ret = db->open(db, NULL, dbName, NULL, DB_QUEUE, db_flags, 0);
        if (ret != 0) {
            printf("database opening failed (%s) Error = %s\n", dbName, db_strerror(ret));
        }
        memset(&key, 0, sizeof(DBT));
        memset(&data, 0, sizeof(DBT));

        /* writing */
        while (TRUE) {
            printf("record #%lu> ", (u_long)recno); fflush(stdout);
            fgets(buf, sizeof(buf), stdin);
            if (!strncmp(buf, "quit", 4)) {
                if (i > 0) ret = l_txn->commit(l_txn, 0);
                break;
            }
            if ((len = strlen(buf)) <= 1) continue;
            key.data = &recno;
            key.flags = DB_DBT_USERMEM;
            key.ulen = sizeof(recno); /* for the check out */
            data.data = buf;
            data.size = len - 1;
            if (FirstTime) ret = env->txn_begin(env, NULL, &l_txn, 0);
            if (i == 2) {
                ret = l_txn->commit(l_txn, 0);
                ret = env->txn_begin(env, NULL, &l_txn, 0);
                i = 0;
            }
            FirstTime = FALSE;
            switch (ret = db->put(db, NULL, &key, &data, DB_APPEND)) {
            case 0:
                printf("OK\n");
                ++i;
                break;
            default:
                db->err(db, ret, "DB->put");
                break;
            }
        }

        db->close(db, 0);
        if (ret != 0) {
            printf("database close failed (%s) Error = %s\n", dbName, db_strerror(ret));
        }
        ret = env->close(env, 0);
        if (ret != 0) {
            printf("database close failed (%s) Error = %s\n", dbName, db_strerror(ret));
        }
        return (0);
    } /* end of main */

    To perform transactional operations on a Berkeley DB database, the DB->open call must be done in a transaction. The simplest way to do this is to change:
    db_flags = DB_CREATE;

    to:

    db_flags = DB_CREATE | DB_AUTO_COMMIT;

    Regards,
    Michael Cahill, Oracle Berkeley DB.

  • Large DML (update) transaction and slow capture/enqueue - performance -help

    Hi all.
    I have a situation on our 10.1 database with a single Streams queue (local capture process) where everything runs fine during normal small transactional operations. As soon as someone runs a large DML statement (say 500,000+ rows), the capture process spends days mining the logs; i.e., the capture and propagation processes grind to a halt and the whole replication system is now in jeopardy.
    Is this a configuration issue? We have streams_pool_size set to 200M and SGA_MAX at 1GB, and we only have around 10 concurrent users in UAT at present. How do I find/tune our Streams environment? What is causing the contention/significant slowdown of the capture process mining the redo and adding the relevant LCRs to the queue tables?
    Please provide any useful advice/scripts/queries/ideas to aid diagnostics!

    Hi all
    Has anyone got any info to add here? We are really confused as to why the performance of the Streams capture process is so dismal, and how to tune/diagnose the actual issue. I can raise a TAR, but I would prefer to get ideas first from people who actually use the technology.
    thanks

  • Using Transaction type

    Hi
    Our app's code is in core Java and we do not use EJB. I am implementing common functionality that will be used to log errors into a database table. In case of a failure condition, all database writes other than the error log entry should be rolled back.
    I was wondering if there is something like the EJB transaction type (REQUIRES_NEW) in core Java that I can use to make the call to the logging function a separate transaction. Or can I accomplish it by using EJB as a wrapper? (I am still reading up on how to use EJB just as a wrapper.)
    Any help or direction would be appreciated.
    Thanks

    Hi parvin,
    In EJB, the way this is typically done is by declaring a separate business method with tx attribute TX_REQUIRES_NEW that performs the transactional operation you want done independent of the other work. When this method is called, its work is executed within a new transaction and then committed. When control returns to the first method, its transaction is resumed; the work performed there will then either commit or roll back accordingly.
    For this kind of application your best bet is to use Java EE rather than Java SE. There's nothing in the core Java SE API that is the equivalent of EJB tx attributes or JTA distributed transactions. There are other Java-based frameworks, such as Spring, that have similar functionality, but I haven't used them so I can't speak to the details.
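    In an EJB 2.x deployment descriptor, the RequiresNew attribute described above is declared per method in the assembly descriptor. A minimal sketch (the bean and method names ErrorLogger/logError are made up for illustration):

```xml
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>ErrorLogger</ejb-name>
      <method-name>logError</method-name>
    </method>
    <!-- Suspends the caller's transaction: logError commits in its own
         transaction even if the caller later rolls back -->
    <trans-attribute>RequiresNew</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```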
    --ken

  • Transaction dswp "Socket Error 10049 {Thd 5516} [socket #22A0BC, 0.0.0.0:19

    Hi,
    I just install SM 7.0 SR3 on HP-UX/Oracle 10.2. The installation finished ok.
    I have done all the configuration steps with the satellite systems (ECC .0 Development, ECC 6.0 Test, EC 6.0 Productive), users, RFCs, ... and all of these steps finished OK. So, I have created a Solution with the three satellite systems.
    I'm able to log on to the SAP Solution Manager system; I've also checked the Java stack, and both are working.
    The problem is when I go to the dswp transaction, Operations -> Solution Monitoring -> System Monitoring/Administration: the systems don't appear, and I get this message:
    Socket Error
    Socket Error 10049 {Thd 3476} [socket #44078, 0.0.0.0:2347 to :0]
    Please could you help me?
    Thanks and regards
    Raul

    At transaction sm21, only one error appears for today: Run-time error "GETWA_NOT_ASSIGNED" occurred at 9:21. And if I reproduce the error, it doesn't generate another run-time error like this. So, I don't think the run-time error that occurred at 9:21 was related to my problem.
    About SMICM - Goto --> Tracefile --> Display:
    [Thr 15] Tue May  6 17:24:18 2008
    [Thr 15] *** WARNING => IcmNetCheck: NiServToNo(8080) took 2 seconds [icxxman_mt.c 4594]
    [Thr 15] *** WARNING => IcmNetCheck: 1 possible network problems detected - please check the network/DNS settings [icxxman_mt.c
    [Thr  1] Tue May  6 17:25:33 2008
    [Thr  1] HttpSAPR3SetParam: switch j2ee http port from  to: 50000
    [Thr  1] HttpSAPR3SetParam: switch j2ee https port from  to: 50001
    [Thr  1] Tue May  6 17:26:51 2008
    [Thr  1] HttpSAPR3SetParam: Switched j2ee status to: 1
    The log lines refer to "May 6", nine days ago, and they don't reflect any error.
    Also, in transaction sm04, selecting the entry for the user, menu User -> Trace -> Activate ... I reproduce the error ... and after returning to sm04, menu User -> Trace -> Display, the following appears:
    Step of user SOLMANADM , seession 1 , step 9
    D  *** ERROR => invalid AREA_PIXEL data: window height = 0 [diaginp.c    1782]
    Step of user SOLMANADM , seession 1 , step 10
    Step of user SOLMANADM , seession 1 , step 11
    Step of user SOLMANADM , seession 1 , step 12
    Step of user SOLMANADM , seession 1 , step 13
    D    texts >RFC Connection Error<>F<><
    D    texts >Display Errors Only<>D<><
    D    texts >Display Errors Only<>D<><
    User trace of the next session for user SOLMANADM , Session 1 , step 1
    User trace of the next session for user SOLMANADM , Session 1 , step 1
    Step of user SOLMANADM , seession 1 , step 2
    But it doesn't give any indication about the error.
    Raul

  • Upgrading from Weblogic 6.1sp5/TopLink 3.6.3 to Weblogic 8.1/TopLink 9.0.4

    Hi.
    We are doing the migration from Weblogic 6.1sp5/TopLink 3.6.3 to Weblogic 8.1 & TopLink 9.0.4
    We have been reading the available documentation and have started the migration. We have not had any problems with the package renaming issues and XML DOCTYPE changes.
    As we have a lot of amended queries, we prefer to use a Java class as our project mapping descriptor rather than a huge XML file.
    We are currently porting all our queries to the new system.
    As we are making a lot of changes, we are also updating our EJB 1.1 entity beans to EJB 2.0 (maybe in the future we will deploy our EJBs with local interfaces too).
    The main problem we are facing right now is with all the queries we had in our toplink-cmp-<bean>.xml files. Even if most of the finders are "REDIRECT" queries that had to be amended, we have a good number of "EXPRESSION" queries among them.
    As far as we can see, we should move them to the queries section of the TopLink project Java file. The question is: is it possible to declare them in the ejb-jar.xml file inside the <ejb-ql> tag? We have seen that the TopLinkImplemented string appears in <ejb-ql>, telling the container who should handle the query; so if Weblogic can be "tricked" into delegating that "ejb-ql", it might be possible to do the same with a TopLink expression, mightn't it?
    Another issue we are not clear on is whether all queries must now be either ejb-ql (compliant with the EJB 2.0 standard) or named queries (either TopLink expressions or SQL queries) defined in the TopLink project Java or XML files. What happened to the redirect queries?
    I would like to point out another issue we have "solved", though we are not really satisfied with how we solved it.
    We are using timestamp locking with our entity beans. shouldBindAllParameters is set to true, and we have logging activated, so we have been able to track down the problem.
    The problem seems to be something like this:
    You create a new entity bean and TopLink issues a statement like this against the DB (Oracle 8.1.7; I should say we have also configured the login session to use Oracle8PlatformSupport):
    INSERT INTO FOO(OID,OCA) VALUES (?, ?)
    bind => [1, 2004-04-26 13:16:45.251]
    As far as we know, Oracle 8.1.7 does not store milliseconds (we are using the DATE type in our Oracle tables), so the ".251" value is stored as "0".
    Then we try to delete the entity bean and TopLink sends this statement to the database:
    DELETE FROM FOO WHERE ((OID = ? AND OCA = ?))
    bind => [1, 2004-04-26 13:16:45.251]
    Then a TOPLINK-5003 (the object has been modified or deleted since last read) is raised and the transaction is rolled back.
    We tried without binding and it works perfectly (it seems that the timestamp is then treated as YYYY-MM-DD SSSSS by the "to_date" function that is issued).
    As we would like to keep binding all parameters in order to optimize our database accesses, we changed the default JDBC driver (sun.jdbc...) that is shown when reading the corresponding property of the login session to oracle.jdbc.driver.OracleDriver. This latter driver seems to solve the problem with the fractional part of the timestamps, but it raises a doubt.
    If we have configured two datasources in WebLogic's config.xml file (one for transactional operations, which is also an XA driver since we need to coordinate RDBMS operations with a JMS service, and the other a "standard" oracle.jdbc.driver.OracleDriver), why is the driver used by default at login that "weird" sun.jdbc... driver? Shouldn't TopLink use one of the drivers defined in WebLogic's datasources (the XA one, I would hope)?
    1. Is the timestamp issue we are seeing a known bug/problem?
    2. Is there a better way to solve it than changing the driver (we are afraid of the "new" issues changing it could raise)?
    I have seen that with TopLink 3.6.3 and the "default" driver used at login (it is the same by default in both TopLink 3.6.3 and TopLink 9.0.4) the bound timestamps are truncated to second resolution and everything works without any problem.
    Thanks in advance.
    Ignacio

    Not sure on all of the issues, but can provide some information on some of them.
    Re-directors:
    Support was added for redirectors on core queries, so your named queries can now make use of a redirector if they require advanced dynamic execution. So all redirector queries can become named queries.
    Timestamp locking:
    In general we would always suggest numeric version locking if you have the choice. I'm not clear on what you are saying about the driver having an effect on the problem; however, in general Oracle 8 does not support milliseconds, so if you use timestamp locking with local timestamps they will have milliseconds and the database values will not match the ones in memory.
    To resolve this you can:
    - Use server timestamps TimestampLockingPolicy.useServerTime()
    - Clear the timestamp's milli/nano second value through an aboutToInsert event.
    - Extend Oracle8Platform.convertObject to clear the timestamp's milli/nano second value.
    - If you can use Oracle 9 or 10 there is now a TIMESTAMP type that does support milliseconds.
    If a different driver seems to solve this, it is most likely that the database ignores the milliseconds in the comparison, whereas the driver you are currently using sends the milliseconds to the database when binding.
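    The "clear the milli/nano second value" options above amount to truncating the timestamp to second resolution before it is compared against the DATE column. A minimal sketch of that truncation (the class and method names here are invented for illustration; this is not TopLink API):

    ```java
    import java.sql.Timestamp;

    public class TruncateTimestamp {
        // Drop the sub-second part so the in-memory timestamp matches
        // what an Oracle 8 DATE column actually stores.
        static Timestamp truncateToSecond(Timestamp ts) {
            Timestamp t = new Timestamp((ts.getTime() / 1000L) * 1000L);
            t.setNanos(0); // redundant after the millisecond truncation, but explicit
            return t;
        }

        public static void main(String[] args) {
            Timestamp ts = Timestamp.valueOf("2004-04-26 13:16:45.251");
            System.out.println(truncateToSecond(ts)); // prints 2004-04-26 13:16:45.0
        }
    }
    ```

    A helper like this could be invoked from an aboutToInsert event or a convertObject override, as suggested above.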

  • Looking for help to increase performance on a DB XML database.

    I'll try to answer all the questions in the Performance Questionnaire from here.
    1) I'm primarily concerned with insertion performance. The best I've seen so far is about 6,000 inserts per second. This is running inside a VMware VM with 3 GB of RAM. The VM is set up with 2 CPUs, each with 2 cores. The host machine has 8 GB of RAM with a dual-core 2.67 GHz i7 (2 logical cores per CPU). The best performance I've seen is by running 2 threads of execution. A single thread only gets me about 2,500 inserts per second.
    This is all within a very simple, isolated program. I'm trying to determine how to re-architect a more complicated system, but if I can't hope to hit 10k inserts per second with my sample, I don't see how it's possible to expand this out to something more complicated.
    2) Versions: BDBXML version 2.5.26 no special patches or config options
    3) BDB version 4.8.26, no special patches
    4) 2.67 GHz dual-core, hyperthreaded Intel i7 (4 logical processors)
    5) Host: Windows 7 64-bit, Guest: RHEL5 64-bit
    6) Underlying disk is a 320 GB Western Digital Barracuda (SATA). It's a laptop hard drive; I believe it's only 5400 RPM. Although the VM does not have exclusive access to the drive, it is not the same drive as the host system drive (i.e. Windows runs off of the C drive; this is the D drive). The VM has a 60 GB slice of this drive.
    7) Drive is NTFS formatted for the host. Guest, ext3
    8) Host 8 GB, guest 3 GB (total usage when running tests is low, i.e. no swapping by guest or host)
    9) not currently using any replication
    10) Not using remote filesystem
    11) db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12) Using the C++ API for DBXML, and the C API for BDB
    using gcc/g++ version 4.1.2
    13) not using app server or web server
    14) flags to 'DB_ENV->open()': DB_SYSTEM_MEM | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_RECOVER | DB_THREAD
    other env flags explicitly set:
    DB_LOG_IN_MEMORY 1
    DB_LOG_ZERO 1
    set_cachesize(env, 1, 0, 1) // 1GB cache in single block
    DB_TXN_NOSYNC 1
    DB_TXN_WRITE_NOSYNC 1
    I am not using a DB_CONFIG file at this time.
    15) For the container config:
    transactional true
    transactionsNotDurable true
    containertype wholedoc
    indexNodes Off
    pagesize 4096
    16) In my little test program, I have a single container.
    16.1) flags are the same as listed above.
    16.2) I've tried with an empty container, and one with documents already inside and haven't noticed much difference at this point. I'm running 1, 2, 3, or 4 threads, each inserting 10k documents in a loop. Each insert is a single transaction.
    16.3) Wholedoc (I've tried both node & wholedoc; I believe wholedoc was slightly faster).
    16.4) The best performance I've seen is with a smaller document that is about 500 bytes.
    16.5) I'm not currently using any document data.
    17)sample document:
    <?xml version='1.0' encoding='UTF-8' standalone='no'?>
    <Record xmlns='http://someurl.com/test' JID='UUID-f9032e9c-7e9a-4f2c-b40e-621b0e66c47f'>
    <DataType>journal</DataType>
    <RecordID>f9032e9c-7e9a-4f2c-b40e-621b0e66c47f</RecordID>
    <Hostname>test.foo.com</Hostname>
    <HostUUID>34c90268-57ba-4d4c-a602-bdb30251ec77</HostUUID>
    <Timestamp>2011-11-10T04:09:55-05:00</Timestamp>
    <ProcessID>0</ProcessID>
    <User name='root'>0</User>
    <SecurityLabel>unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023</SecurityLabel>
    </Record>
    18. As mentioned, I'm looking to get at least 10k documents per second on insertion. Updates are much more infrequent and can run slower. I am not doing any partial updates or replacing documents. In the actual system, there are minor updates that happen to document metadata, but again, these can be slower.
    19. I'm primarily concerned with insertion rate, not query.
    20. Xquery samples are not applicable at the moment.
    21. I am using transactions, with no special flags aside from setting them all to 'not durable'.
    22. Log files are currently stored on the same disk as the database.
    23. I'm not using AUTO_COMMIT
    24. I don't believe there are any non-transactional operations
    25. best performance from 2 threads doing insertions
    26. The primary way I've been measuring performance is with 'clock_gettime(CLOCK_REALTIME)' calls inside my test program. The test program spawns 1 or more threads, and each thread inserts 10k documents. The main thread waits for all the threads to complete, then exits. I'm happy to send the source code for this program if that would be helpful.
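    For illustration, the timing scheme described in 26 can be sketched like this (in Java rather than the C++ of the actual test program, with the insert replaced by a placeholder loop; InsertTimer and rate are names invented here):

    ```java
    public class InsertTimer {
        // Ops-per-second from a count and an elapsed wall-clock duration.
        static double rate(int ops, double seconds) {
            return ops / seconds;
        }

        public static void main(String[] args) {
            final int docs = 10_000;
            long sink = 0;

            long start = System.nanoTime();   // analogue of clock_gettime(CLOCK_REALTIME)
            for (int i = 0; i < docs; i++) {
                sink += i;                    // stand-in for one single-transaction insert
            }
            long end = System.nanoTime();

            double seconds = (end - start) / 1e9;
            System.out.printf("%d inserts in %.6f s (%.0f inserts/sec)%n",
                    docs, seconds, rate(docs, seconds));
        }
    }
    ```

    In the real program each thread would run such a loop around container insertions, and the main thread would divide the total document count by the longest elapsed time.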
    27. As mentioned, I'm hoping to get at least 10k inserts per second.
    28. db_stat outputs:
    28.1 db_stat -c:
    93 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    1000 Maximum number of locks possible
    1000 Maximum number of lockers possible
    1000 Maximum number of lock objects possible
    40 Number of lock object partitions
    0 Number of current locks
    166 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    35 Maximum number of lockers at any one time
    0 Number of current lock objects
    95 Maximum number of lock objects at any one time
    3 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    565631 Total number of locks requested
    542450 Total number of locks released
    0 Total number of locks upgraded
    29 Total number of locks downgraded
    22334 Lock requests not available due to conflicts, for which we waited
    23181 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    784KB The size of the lock region
    10098 The number of partition locks that required waiting (0%)
    866 The maximum number of times any partition lock was waited for (0%)
    6 The number of object queue operations that required waiting (0%)
    7220 The number of locker allocations that required waiting (2%)
    0 The number of region locks that required waiting (0%)
    3 Maximum hash bucket length
    ====================
    28.2 db_stat -l:
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    0 Records entered into the log
    0 Log bytes written
    0 Log bytes written since last checkpoint
    0 Total log file I/O writes
    0 Total log file I/O writes due to overflow
    0 Total log file flushes
    7 Total log file I/O reads
    1 Current log file number
    28 Current log file offset
    1 On-disk log file number
    28 On-disk log file offset
    0 Maximum commits in a log flush
    0 Minimum commits in a log flush
    160KB Log region size
    0 The number of region locks that required waiting (0%)
    ======================
    28.3 db_stat -m
    1GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    1GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    11212 Current total page count
    11212 Current clean page count
    0 Current dirty page count
    131071 Number of hash buckets used for page location
    4096 Assumed page size used
    1142798 Total number of times hash chains searched for a page
    1 The longest hash chain searched for a page
    1127988 Total number of hash chain entries checked for page
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    4 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    11218 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: temp.dbxml
    4096 Page size
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    =================================
    28.4 db_stat -r (n/a, no replication)
    28.5 db_stat -t
    0/0 No checkpoint LSN
    Tue Oct 30 15:05:29 2012 Checkpoint timestamp
    0x8001d4d5 Last transaction ID allocated
    100 Maximum number of active transactions configured
    0 Active transactions
    5 Maximum active transactions
    120021 Number of transactions begun
    0 Number of transactions aborted
    120021 Number of transactions committed
    0 Snapshot transactions
    0 Maximum snapshot transactions
    0 Number of transactions restored
    48KB Transaction region size
    1385 The number of region locks that required waiting (0%)
    Active transactions:

    Replying with output from iostat & vmstat (including the output exceeded the character count).
    =============================
    output of vmstat while running 4 threads, inserting 10k documents each. It took just under 18 seconds to complete. I ran vmstat a few times while it was running:
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 896904 218004 1513268 0 0 14 30 261 83 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 889588 218004 1520500 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 882892 218012 1527124 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 896664 218012 1533284 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 890456 218012 1539748 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 884256 218020 1545800 0 0 14 30 261 86 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 878304 218020 1551520 0 0 14 30 261 86 1 1 98 0 0
    $ sudo vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 871980 218028 1558108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 865780 218028 1563828 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 859332 218028 1570108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 586756 218028 1572660 0 0 14 30 261 88 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 2 0 788032 218104 1634624 0 0 14 31 261 88 1 1 98 0 0
    ================================
    sda1 is mounted on /boot
    sda2 is mounted on /
    sda3 is swap space
    output for iostat, same scenario, 4 threads inserting 10k documents each:
    $ iostat -x 1
    Linux 2.6.18-308.4.1.el5 (localhost.localdomain) 10/30/2012
    avg-cpu: %user %nice %system %iowait %steal %idle
    27.43 0.00 4.42 1.18 0.00 66.96
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.09 0.00 15.94 0.00 0.00 57.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.95 0.00 29.72 0.00 0.00 43.32
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.90 0.00 32.16 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.51 0.00 27.85 0.00 0.00 31.65
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.50 0.00 26.75 0.50 0.00 32.25
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    30.63 0.00 32.91 0.00 0.00 36.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.57 0.00 32.83 0.00 0.00 37.59
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.65 0.00 32.41 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    46.70 0.00 26.40 0.00 0.00 26.90
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.72 0.00 33.25 0.00 0.00 34.04
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.08 0.00 31.83 0.00 0.00 36.09
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.75 0.00 31.50 0.00 0.00 34.75
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.00 0.00 31.99 0.25 0.00 34.76
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    53.62 0.00 21.70 0.00 0.00 24.69
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.92 0.00 22.11 0.00 0.00 43.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    8.53 0.00 4.44 0.00 0.00 87.03
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    5.58 0.00 2.15 0.00 0.00 92.27
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 1.56 12.50 0.00 85.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.49 0.00 11.90 0.00 0.00 86.61
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.26 0.00 21.82 0.00 0.00 77.92
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 20.48 0.00 0.00 79.52
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    9.49 0.00 13.33 0.00 0.00 77.18
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    20.35 0.00 4.77 0.00 0.00 74.87
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    6.32 0.00 13.22 1.72 0.00 78.74
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.83 0.00 5.49 1.22 0.00 91.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    ===================

  • Getting realistic performance expectations.

    I am running tests to see whether I can use the Oracle Berkeley DB XML database as a backend for a web application, but I am running into query response performance limitations. As per the suggestions for performance-related questions, I have pulled together answers to the series of questions that need to be addressed; they are given below. The basic issue at stake, however, is: am I being realistic about what I can expect to achieve with the database?
    Regards
    Geoff Shuetrim
    Oracle Berkeley DB XML database performance.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    I am using the database as a back end to a web application that is expected
    to field a large number of concurrent queries.
    The database scale is described below.
    Current performance involves responses to simple queries taking 1-2 minutes (this improves after a few similar queries have been run, presumably because of caching, but not to a point that is acceptable for web applications).
    Desired performance is for queries to execute in milliseconds rather than minutes.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    Berkeley DB XML Version: 2.4.16.1
    Configuration flags: enable-java -b 64 prefix=/usr/local/BerkeleyDBXML-2.4.16
    No special patches have been applied.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    Berkeley DB Version? 4.6.21
    Configuration flags: None. The Berkeley DB was built and installed as part of the
    Oracle Berkeley XML database build and installation process.
    No special patches have been applied.
    4. Processor name, speed and chipset?
    Intel Core 2 CPU 6400 @ 2.13 GHz (1066 FSB) (4MB Cache)
    5. Operating System and Version?
    Ubuntu Linux 8.04 (Hardy) with the 2.6.24-23 generic kernel.
    6. Disk Drive Type and speed?
    300 GB 7200RPM hard drive.
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    Memory: 3.8GB DDR2 SDRAM
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
    describe the network you are using, and the number of Replicas.
    No.
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No.
    11. What type of mutexes do you have configured? Did you specify
    --with-mutex=? Specify what you find in your config.log; search
    for db_cv_mutex.
    I did not specify --with-mutex when building the database.
    config.log indicates:
    db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    I am using the Java API.
    I am using the gcc 4.2.4 compiler.
    I am using the g++ 4.2.4 compiler.
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
    I am using the Tomcat 5.5 application server.
    It is not using the Apache Portable Runtime library.
    It is being run using a 64 bit version of the Sun Java 1.5 JRE.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    15. Please provide your Container Configuration Flags?
    My container configuration is done using the Java API.
    The container creation code is:
    XmlContainerConfig containerConfig = new XmlContainerConfig();
    containerConfig.setStatisticsEnabled(true);
    XmlContainer container = xmlManager.createContainer("container",containerConfig);
    I am guessing that this means that the only flag I have set is the one
    that enables recording of statistics to use in query optimization.
    I have no other container configuration information to provide.
    16. How many XML Containers do you have?
    I have one XML container.
    The container has 2,729,465 documents.
    The container is a node container rather than a wholedoc container.
    Minimum document size is around 1Kb.
    Maximum document size is around 50Kb.
    Average document size is around 2Kb.
    I am using document data as part of the XQueries being run. For
    example, I condition query results upon the values of attributes
    and elements in the stored documents.
    The database has the following indexes:
    xmlIndexSpecification = dataContainer.getIndexSpecification();
    xmlIndexSpecification.replaceDefaultIndex("node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"fragment","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"data","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"xptr","node-element-presence");
    xmlIndexSpecification.addIndex("","stub","node-attribute-presence");
    xmlIndexSpecification.addIndex("","index", "unique-node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21LinkNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex(Constants.GenericLabelNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex("","name","node-attribute-substring-string");
    xmlIndexSpecification.addIndex("","parentIndex", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","uri", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","type", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetDocumentURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetPointerValue", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","absoluteHref", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","id","node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","arcroleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","roleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","name", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetNamespace", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","contextRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","unitRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","scheme", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21Namespace,"identifier", "node-element-equality-string");           
    xmlIndexSpecification.addIndex(Constants.XMLNamespace,"lang","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"from","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"to","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"type","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"arcrole","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"role","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"language","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"code","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"value","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"encoding","node-element-equality-string");
    17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    The following provides the basic information about the shape of all documents
    in the data store.
    <ns:fragment xmlns:ns="..." attrs...(about 20 of them)>
      <ns:data>
        Single element that varies from document to document but that
        is rarely more than a few small elements in size and (in some cases)
        a lengthy section of string content for the single element.
      </ns:data>
    </ns:fragment>
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    Document insertion rates are not a first order performance criteria.
    I do no document modifications using XmlModify.
    When doing updates I replace the original document.
    19. What is the query rate required/expected?
    Not sure how to provide metrics for this, but a single web page being
    generated can involve hundreds of queries, each of which should be
    trivial to execute given the indexing strategy in use.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
              I am using DBXML_INDEX_NODES by default because I
              am using a node container rather than a whole document
              container.
    3. Display the indices you have defined for the specific query.
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    2. collection('browser')/*[@stub]
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <PresenceQP container="browser" index="node-attribute-presence-none" operation="eq" child="stub"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    3. qplan "collection('browser')/*[@type='org.xbrlapi.impl.ConceptImpl' or @parentIndex='asdfv_3']"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <UnionQP>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type" value="org.xbrlapi.impl.ConceptImpl"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="asdfv_3"/>
            </UnionQP>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    4.
    setnamespace xlink http://www.w3.org/1999/xlink
    qplan "collection('browser')/*[@uri='http://www.xbrlapi.org/my/uri' and */*[@xlink:type='resource' and @xlink:label='description']]"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <NodePredicateFilterQP uri="" name="#tmp8">
            <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
              <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
                <NodePredicateFilterQP uri="" name="#tmp1">
                  <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="label:http://www.w3.org/1999/xlink"
                    value="description"/>
                  </StepQP>
                  <AttributeJoinQP>
                    <VariableQP name="#tmp1"/>
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type:http://www.w3.org/1999/xlink"
                    value="resource"/>
                  </AttributeJoinQP>
                </NodePredicateFilterQP>
              </StepQP>
            </StepQP>
            <AttributeJoinQP>
              <VariableQP name="#tmp8"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="uri" value="http://www.xbrlapi.org/my/uri"/>
            </AttributeJoinQP>
          </NodePredicateFilterQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    21. Are you running with Transactions? If so please provide any
    transactions flags you specify with any API calls.
    I am not running with transactions.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    The log files are stored on the same disk as the container.
    23. Do you use AUTO_COMMIT?
    Yes. I think that it is a default feature of the DocumentConfig that
    I am using.
    24. Please list any non-transactional operations performed?
    I do document insertions and I do XQueries.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    One thread is updating. Right now one thread is running queries. I am
    not yet testing the web application with concurrent users given the
    performance issues faced with a single user.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
    I have loaded approximately 7 GB data into the container and then tried
    to run the web application using that data. This involves running a broad
    range of very simple queries, all of which are expected to be supported
    by indexes to ensure that they do not require XML document traversal activity.
    Querying performance is insufficient, with even the most basic queries
    taking several minutes to complete.
    27. What performance level do you hope to achieve?
    I hope to be able to run a web application that simultaneously handles
    page requests from hundreds of users, each of which involves a large
    number of database queries.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h database environment -c
    1038     Last allocated locker ID
    0x7fffffff     Current maximum unused locker ID
    9     Number of lock modes
    1000     Maximum number of locks possible
    1000     Maximum number of lockers possible
    1000     Maximum number of lock objects possible
    155     Number of current locks
    157     Maximum number of locks at any one time
    200     Number of current lockers
    200     Maximum number of lockers at any one time
    13     Number of current lock objects
    17     Maximum number of lock objects at any one time
    1566M     Total number of locks requested (1566626558)
    1566M     Total number of locks released (1566626403)
    0     Total number of locks upgraded
    852     Total number of locks downgraded
    3     Lock requests not available due to conflicts, for which we waited
    0     Lock requests not available due to conflicts, for which we did not wait
    0     Number of deadlocks
    0     Lock timeout value
    0     Number of locks that have timed out
    0     Transaction timeout value
    0     Number of transactions that have timed out
    712KB     The size of the lock region
    21807     The number of region locks that required waiting (0%)
    % db_stat -h database environment -l
    0x40988     Log magic number
    13     Log version number
    31KB 256B     Log record cache size
    0     Log file mode
    10Mb     Current log file size
    0     Records entered into the log
    28B     Log bytes written
    28B     Log bytes written since last checkpoint
    1     Total log file I/O writes
    0     Total log file I/O writes due to overflow
    1     Total log file flushes
    0     Total log file I/O reads
    1     Current log file number
    28     Current log file offset
    1     On-disk log file number
    28     On-disk log file offset
    1     Maximum commits in a log flush
    0     Minimum commits in a log flush
    96KB     Log region size
    0     The number of region locks that required waiting (0%)
    % db_stat -h database environment -m
    500MB     Total cache size
    1     Number of caches
    1     Maximum number of caches
    500MB     Pool individual cache size
    0     Maximum memory-mapped file size
    0     Maximum open file descriptors
    0     Maximum sequential buffer writes
    0     Sleep after writing maximum sequential buffers
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    1550811     Clean pages forced from the cache
    19568     Dirty pages forced from the cache
    3     Dirty pages written by trickle-sync thread
    62571     Current total page count
    62571     Current clean page count
    0     Current dirty page count
    65537     Number of hash buckets used for page location
    1751M     Total number of times hash chains searched for a page (1751388600)
    8     The longest hash chain searched for a page
    3126M     Total number of hash chain entries checked for page (3126038333)
    4535     The number of hash bucket locks that required waiting (0%)
    278     The maximum number of times any hash bucket lock was waited for (0%)
    1     The number of region locks that required waiting (0%)
    0     The number of buffers frozen
    0     The number of buffers thawed
    0     The number of frozen buffers freed
    1633189     The number of page allocations
    4301013     The number of hash buckets examined during allocations
    259     The maximum number of hash buckets examined for an allocation
    1570522     The number of pages examined during allocations
    1     The max number of pages examined for an allocation
    184     Threads waited on page I/O
    Pool File: browser
    8192     Page size
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    % db_stat -h database environment -r
    Not applicable.
    % db_stat -h database environment -t
    Not applicable.
    vmstat
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 4 40332 773112 27196 1448196 0 0 173 239 64 1365 19 4 72 5
    iostat
    Linux 2.6.24-23-generic (dell)      06/02/09
    avg-cpu: %user %nice %system %iowait %steal %idle
    18.37 0.01 3.75 5.67 0.00 72.20
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sda 72.77 794.79 1048.35 5376284 7091504
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No other significant applications are running on the system.
    I am not using Berkeley DB outside of Berkeley DB XML.
    The application is a web application that organises the data in
    the stored documents into hypercubes that users can slice/dice and analyse.
    Edited by: Geoff Shuetrim on Feb 7, 2009 2:23 PM to correct the appearance of the query plans.

    Hi Geoff,
    Thanks for filling out the performance questionnaire. Unfortunately the forum software seems to have mangled some of your queries - you might want to use \[code\] and \[/code\] tags to mark up your queries and query plans next time.
    Geoff Shuetrim wrote:
    Current performance involves responses to simple queries that involve 1-2
    minute turn around (this improves after a few similar queries have been run,
    presumably because of caching, but not to a point that is acceptable for
    web applications).
    Desired performance is for queries to execute in milliseconds rather than
    minutes.
    I think that this is a reasonable expectation in most cases.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in your DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    If you are performing concurrent reads and writes, you need to enable transactions in both the environment and the container.
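    For reference, a minimal sketch of what that might look like with the Java API. This is a configuration sketch only, not a drop-in replacement for your setup - the class and flag names are from the com.sleepycat.db / com.sleepycat.dbxml packages, and envHome is assumed to be your environment directory:

    ```java
    // Sketch: enable transactions in BOTH the environment and the container.
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(true);
    envConfig.setInitializeCache(true);
    envConfig.setInitializeLocking(true);
    envConfig.setInitializeLogging(true);
    envConfig.setTransactional(true);          // was false in your configuration
    Environment env = new Environment(envHome, envConfig);

    XmlManager xmlManager = new XmlManager(env, XmlManagerConfig.DEFAULT);
    XmlContainerConfig containerConfig = new XmlContainerConfig();
    containerConfig.setTransactional(true);    // the container must be transactional too
    XmlContainer container = xmlManager.openContainer("container", containerConfig);
    ```

    With both flags set, concurrent readers and writers go through the lock and log subsystems instead of racing on shared pages.
    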
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
    <QueryPlanToAST>
    <LevelFilterQP>
    <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
    </StepQP>
    </LevelFilterQP>
    </QueryPlanToAST>
    </XQuery>
    I have a few initial observations about this query:
    1) It looks like it could return a lot of results - a query that returns a lot of results will always be slow. If you only want a subset of the results, use lazy evaluation, or put an explicit call to the subsequence() function in the query.
    2) An explicit element name with an index on it often performs faster than a "*" step. I think you'll get faster query execution if you specify the document element name rather than "*", and then add a "node-element-presence" index on it.
    3) Generally descendant axis is faster than child axis. If you just need the document rather than the document (root) element, you might find that this query is a little faster (any document with a "parentIndex" attribute whose value is "none"):
    collection()[descendant::*/@parentIndex='none']
    Similar observations apply to the other queries you posted.
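    As a hedged illustration of suggestions 1) and 2) in Java - the element name "fragment" is an assumption based on the skeleton document you posted, and the namespace prefix binding is invented:

    ```java
    // Sketch: add a presence index on the document element, then evaluate
    // the query lazily and fetch only the first page of results.
    XmlUpdateContext uc = xmlManager.createUpdateContext();
    XmlIndexSpecification spec = container.getIndexSpecification();
    spec.addIndex(Constants.XBRLAPINamespace, "fragment", "node-element-presence");
    container.setIndexSpecification(spec, uc);

    XmlQueryContext qc = xmlManager.createQueryContext();
    qc.setNamespace("ns", Constants.XBRLAPINamespace);
    qc.setEvaluationType(XmlQueryContext.Lazy);   // stream results, don't materialise them all
    XmlResults results = xmlManager.query(
        "subsequence(collection('browser')/ns:fragment[@parentIndex='none'], 1, 20)",
        qc);
    ```

    The explicit element name lets the optimiser use the presence index, and lazy evaluation plus subsequence() means only the first 20 hits are ever pulled from the container.
    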
    Get back to me if you're still having problems with specific queries.
    John

  • OpenMessageQueue and XA

    I don't understand how OpenMessageQueue could participate in a global transaction, when it is quite possible that a commit will fail:
    http://docs.sun.com/app/docs/doc/819-4469/6n6kb5cs3?a=view#gczqs
    It says:
    "In the second case, when the failover occurs during a call to Session.commit, there may be three outcomes:
    1) The transaction is committed successfully and the call to Session.commit does not return an exception. In this case, the application client does not have to do anything.
    2) The runtime throws a TransactionRolledbackException and does not commit the transaction. The transaction is automatically rolled back by the Message Queue runtime. In this case, the client application must retry the transaction as described for the case in which an open transaction is failed-over.
    3) A JMSException is thrown. This signals the fact that the transaction state is unknown: It might have either succeeded or failed. A client application should handle this case by assuming failure, pausing for three seconds, calling Session.rollback, and then retrying the operations. However, since the commit might have succeeded, when retrying the transacted operations, a producer should set application-specific properties on the messages it re-sends to signal that these might be duplicate messages. Likewise, consumers that retry receive operations should not assume that a message that is redelivered is necessarily a duplicate. In other words, to ensure once and only once delivery, both producers and consumers need to do a little extra work to handle this edge case. The code samples presented next illustrate good coding practices for handling this situation."
    If I make the JMS sessions part of a global transaction, how can the OpenMessageQueue / JMS resource guarantee the ACID properties when it is possible that commit throws an exception (suggesting that the transaction did not complete) but has actually committed, as in the 3rd outcome above??
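    The duplicate-detection dance the doc describes is how that tension is resolved in practice: treat the outcome as unknown, retry, and make the retry idempotent via an application-level message ID. A self-contained sketch of that pattern, with a toy in-memory "broker" standing in for Message Queue - every name here is invented for illustration:

    ```java
    import java.util.HashSet;
    import java.util.Set;

    /**
     * Toy model of the "unknown commit outcome" case (outcome 3 above):
     * the broker applies the commit but the acknowledgement is lost, so
     * the client sees an exception. Exactly-once delivery is recovered by
     * retrying with a stable, application-specific message ID that the
     * receiving side uses to discard duplicates.
     */
    public class CommitRetryDemo {

        /** Simulated session whose first commit succeeds but "loses" the ack. */
        static class FlakySession {
            private final Set<String> delivered = new HashSet<>(); // dedupe by app ID
            private boolean dropFirstAck = true;

            void commit(Set<String> pending) throws Exception {
                delivered.addAll(pending);   // broker applies the work...
                pending.clear();
                if (dropFirstAck) {
                    dropFirstAck = false;
                    throw new Exception("transaction state unknown"); // ...but the ack is lost
                }
            }

            int deliveredCount() { return delivered.size(); }
        }

        /** Producer loop: on an unknown outcome, retry with the SAME message ID. */
        public static int produceWithRetry() {
            FlakySession session = new FlakySession();
            Set<String> pending = new HashSet<>();
            String appMessageId = "order-42";   // stable dedupe key, e.g. a message property
            while (true) {
                pending.add(appMessageId);
                try {
                    session.commit(pending);
                    return session.deliveredCount(); // duplicates collapsed: exactly one message
                } catch (Exception unknownOutcome) {
                    pending.clear();            // roll back locally, then retry
                }
            }
        }
    }
    ```

    The commit genuinely happens twice, but because the retry reuses the same application ID, the receiving side ends up with exactly one logical message - which is the "little extra work" the documentation is referring to.
    
    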

