Locking strategies

What options are currently available for CMP entity EJBs for concurrency in OC4J? According to the EJB spec, containers can support three commit options: A, B, and C. Which of those are available?
Thanks

Thanks for your reply, Jeff.
But I think I wasn't clear in my question (English is not my native language!).
My question was: which locking modes will be available in R2?
And as we only intend to use CMP entity beans, I was only interested
in the options for this kind of bean, not BMP (in case the locking options are different).
Meanwhile I downloaded the latest preview, and in orion-ejb.jar, in one of the examples,
I found a new deployment attribute that indicates those new locking modes you mentioned, but no documentation about it.
Can you tell me what they really mean?
Thanks,
Joao Cunha

Similar Messages

  • Need details about "Lock Profiling" tab of JRockit JRA

    Hi,
    I'm experimenting with the JRockit JRA tool: I think this is a very useful tool! It provides very valuable information.
    About locks (the "Lock Profiling" tab): since JRockit manages locks in a very sophisticated manner, it makes it possible to see which monitors are used by the application, which helps in improving performance.
    Nevertheless, the BEA categories (thin/fat, uncontended/contended, recursive, after sleep) are not so clear. A short paper explaining what they mean would greatly help.
    Fat contended monitors cost the most, but maybe 10,000 thin uncontended locks cost the same as 1 fat contended lock does. We don't know.
    So there is a lack of information about the cost (absolute, in ms, or relative: 1 fat lock costs as much as N thin locks) of each kind of monitor. This information would dramatically help people searching for where improvements in lock management are needed in their application.
    Thanks,
    Tony

    great explanation! Thanks
    "ihse" <[email protected]> wrote in message
    news:18555807.1105611467160.JavaMail.root@jserv5...
    About thin, fat, recursive and contended locks in JRockit:
    Let's start with the easiest part: recursive locks. A recursive lock
    is taken in the following scenario:
    synchronized (foo) {   // first time the thread takes the lock
        synchronized (foo) {   // this time, the lock is taken recursively
        }
    }
    The recursive lock taking may also occur in a method call several levels
    down - it doesn't matter. Recursive locks are not necessarily a sign of
    bad programming, at least not if the recursive lock taking is done by a
    separate method.
    The good news is that recursive lock taking in JRockit is extremely fast.
    In fact, the cost of taking a lock recursively is almost negligible. This is
    regardless of whether the lock was originally taken as a thin or a fat lock
    (explained in detail below).
    Now let's talk a bit about contention. Contention occurs whenever a thread
    tries to take a lock, and that lock is not available (that is, it is held
    by another thread). Let me be clear: contention ALWAYS costs in terms of
    performance. The exact cost depends on many factors. I'll get to some more
    details on the costs later on.
    So if performance is an issue, you should strive to avoid contention.
    Unfortunately, in many cases it is not possible to avoid contention -- if
    your application requires several threads to access a single, shared
    resource at the same time, contention is unavoidable. Some designs are
    better than others, though. Be careful that you don't overuse
    synchronized-blocks. Minimize the code that has to be run while holding a
    highly-contended lock. Don't use a single lock to protect unrelated
    resources, if that lock proves to be easily contended.
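    For example, a minimal sketch of those last two suggestions - keep the synchronized section short and give unrelated state its own lock (the class and fields here are made up purely for illustration):
    import java.util.ArrayDeque;
    import java.util.Queue;
    // Hypothetical example class; not from JRockit or any library.
    class OrderStats {
        private final Object counterLock = new Object();         // protects 'count' only
        private final Object queueLock = new Object();            // protects 'pending' only
        private long count;
        private final Queue<String> pending = new ArrayDeque<>();
        void record(String orderId) {
            String formatted = orderId.trim();                    // unsynchronized work stays outside the lock
            synchronized (queueLock) {                            // hold each lock only for the shared mutation
                pending.add(formatted);
            }
            synchronized (counterLock) {
                count++;
            }
        }
    }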
    In principle, that is all you can do as an application developer: design
    your program to avoid contention, if possible. There are some experimental
    flags to change some of the JRockit locking behaviour, but I strongly
    discourage anyone from using these. The default values are carefully
    trimmed, and changing them is likely to result in worse, rather than
    better, performance.
    Still, I understand if you're curious about what JRockit is doing with your
    application. I'll give some more details about the locking strategies in
    JRockit.
    All objects in Java are potential locks (monitors). This potential is
    realized as an actual lock as soon as any thread enters a synchronized
    block on that object. When a lock is "born" in this way, it is a kind of
    lock that is known as a "thin lock". A thin lock has the following
    characteristics:
    * It requires no extra memory -- all information about the lock is stored
    in the object itself.
    * It is fast to take.
    * Other threads that try to take the lock cannot register themselves as
    contending.
    The most costly part of taking a thin lock is a CAS (compare-and-swap)
    operation. It's an atomic instruction, which means that as far as CPU
    instructions go, it is dead slow. Compared to other parts of locking
    (contention in general, and taking fat locks in particular), it is still
    very fast.
    For locks that are mostly uncontended, thin locks are great. There is
    little overhead compared to no locking, which is good since a lot of Java
    code (especially in the class library) uses a lot of synchronization.
    However, as soon as a lock becomes contended, it is no longer
    obvious which strategy is most efficient. If a lock is held for just a very
    short moment of time, and JRockit is running on a multi-CPU (SMP) machine,
    the best strategy is to "spin-lock". This means that the thread that
    wants the lock continuously checks whether the lock is still taken, "spinning"
    in a tight loop. This of course means some performance loss: no actual
    user code is running, and the CPU is "wasting" time that could have been
    spent on other threads. Still, if the lock is released by the other
    threads after just a few cycles in the spin loop, this method is
    preferable. This is what's meant by a "contended thin lock".
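    (As a rough illustration of what a CAS plus spinning looks like, here is a toy spin lock built on java.util.concurrent - only an illustration of the idea, not how JRockit implements thin locks internally:)
    import java.util.concurrent.atomic.AtomicBoolean;
    // Toy spin lock: a single CAS to acquire, spinning while the lock is held.
    class ToySpinLock {
        private final AtomicBoolean held = new AtomicBoolean(false);
        void lock() {
            // compareAndSet is the CAS: atomically flip false -> true only if currently false
            while (!held.compareAndSet(false, true)) {
                Thread.onSpinWait();            // hint to the CPU that we are busy-waiting
            }
        }
        void unlock() {
            held.set(false);
        }
    }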
    If the lock is not going to be released very fast, using this method on
    contention would lead to bad performance. In that case, the lock is
    "inflated" to a "fat lock". A fat lock has the following characteristics:
    * It requires a little extra memory, in the form of a separate list of
    threads wanting to acquire the lock.
    * It is relatively slow to take.
    * One (or more) threads can register as queueing for (blocking on) that
    lock.
    A thread that encounters contention on a fat lock registers itself as
    blocking on that lock, and goes to sleep. This means giving up the rest of
    its time quantum given to it by the OS. While this means that the CPU will
    be used for running real user code on another thread, the extra context
    switch is still expensive, compared to spin locking. When a thread does
    this, we have a "contended fat lock".
    When the last contending thread releases a fat lock, the lock normally
    remains fat. Taking a fat lock, even without contention, is more expensive
    than taking a thin lock (but less expensive than converting a thin lock to
    a fat lock). If JRockit believes that the lock would benefit from being
    thin (basically, if the contention was pure "bad luck" and the lock
    normally is uncontended), it might "deflate" it to a thin lock again.
    A special note regarding locks: if wait/notify/notifyAll is called on a
    lock, it will automatically inflate to a fat lock. A good piece of advice (not only
    for this reason) is therefore not to mix "actual" locking with this kind
    of notification on a single object.
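    One way to follow that advice is to keep the frequently taken data lock separate from the object used for wait/notify, so the hot lock is never inflated by notification (a made-up sketch; the usual pattern is to wait on the same monitor that guards the state, so treat this purely as an illustration of keeping notification traffic off a hot lock):
    // Hypothetical example: the data lock is held only briefly, while a
    // separate 'signal' object absorbs all wait/notifyAll traffic.
    class HandoffBuffer {
        private final Object dataLock = new Object();   // guards 'value'
        private final Object signal = new Object();     // used only for wait/notifyAll
        private String value;
        void put(String v) {
            synchronized (dataLock) {
                value = v;
            }
            synchronized (signal) {
                signal.notifyAll();                      // wake any waiting consumer
            }
        }
        String take() throws InterruptedException {
            while (true) {
                synchronized (dataLock) {
                    if (value != null) {
                        String v = value;
                        value = null;
                        return v;
                    }
                }
                synchronized (signal) {
                    signal.wait(100);                    // bounded wait, then re-check, to avoid missed signals
                }
            }
        }
    }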
    JRockit uses a complex set of heuristics to determine amongst other
    things:
    * When to spin-lock on a thin lock (and how long), and when to inflate it
    to a fat lock on contention.
    * If and when to deflate a fat lock back to a thin lock.
    * If and when to skip fairness on a contended fat lock to improve
    performance.
    These heuristics are dynamically adaptive, which means that they will
    automatically change to what's best suited for the actual application that
    is being run.
    Since the switch between thin and fat locks is done automatically by
    JRockit, to whichever kind of lock maximizes the performance of the application,
    the relative difference in performance between thin and fat locks
    shouldn't really be of any concern to the user. It is impossible to give a
    general answer to this question anyhow, since it differs from system to
    system, depending on how many CPUs you have, what kind of CPUs, the
    performance of other parts of the system (memory, cache, etc.) and similar
    factors. In addition to this, it is also very hard to give a good answer
    to the question even for a specific system. It is especially tricky to
    determine with any accuracy the time spent spinning on contended thin
    locks, since JRockit loops just a few machine instructions a few times
    before giving up, and profiling this is likely to heavily influence the
    time, giving a skewed image of the performance.
    To summarize:
    If you're concerned about performance, and can change your program to
    avoid contention on a lock - then do so. If you can't avoid contention,
    try to keep the code that has to run while contended to a minimum. JRockit will
    then do whatever is in its power to run your program as fast as possible.
    Use the lock information provided by JRA as a hint: fat locks are likely
    to have been contended a lot or for a long time. Put your effort into
    minimizing contention on them.

  • NullPointerException when deleting from table with no non-primary keys

    We have an object mapped to a table in our database.
    This table has two fields that together form a composite key. When we attempt to perform a delete from this table using the mapped object we receive the following exception and stack trace:
    java.lang.NullPointerException
    at oracle.toplink.descriptors.FieldsLockingPolicy.getAllNonPrimaryKeyFields(Unknown Source)
    at oracle.toplink.descriptors.AllFieldsLockingPolicy.getFieldsToCompare(Unknown Source)
    at oracle.toplink.descriptors.FieldsLockingPolicy.buildExpression(Unknown Source)
    at oracle.toplink.descriptors.FieldsLockingPolicy.buildDeleteExpression(Unknown Source)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildDeleteExpression(Unknown Source)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.buildDeleteStatement(Unknown Source)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.prepareDeleteObject(Unknown Source)
    at oracle.toplink.internal.queryframework.StatementQueryMechanism.deleteObject(Unknown Source)
    at oracle.toplink.queryframework.DeleteObjectQuery.execute(Unknown Source)
    at oracle.toplink.queryframework.DatabaseQuery.execute(Unknown Source)
    at oracle.toplink.publicinterface.Session.internalExecuteQuery(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(Unknown Source)
    at oracle.toplink.tools.profiler.PerformanceProfiler.profileExecutionOfQuery(Unknown Source)
    at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
    at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
    at oracle.toplink.internal.sessions.CommitManager.deleteAllObjects(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
    <snip remainder of stack trace>
    The table definition for our table is as follows:
    PK,FK TASK_ID NUMBER NOT NULL
    PK,FK USER_ID NUMBER NOT NULL
    Each field is a foreign key to another table, however this is not a join table for a many to many relationship.
    The mapping definition for the mapped object is as follows:
    public Descriptor buildProConSubscriptionDescriptor( ) {
    Descriptor descriptor = new Descriptor( );
    descriptor.setJavaClass( mil.usmc.mol.procon.domain.ProConSubscription.class );
    descriptor.addTableName( "PRO_CON_SUBSCRIPTION" );
    descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.TASK_ID" );
    descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.USER_ID" );
    // Descriptor properties.
    descriptor.useNoIdentityMap( );
    descriptor.setIdentityMapSize( 1000 );
    descriptor.useRemoteNoIdentityMap( );
    descriptor.setRemoteIdentityMapSize( 1000 );
    descriptor.setAlias( "ProConSubscription" );
    descriptor.useAllFieldsLocking( );
    // Query manager.
    descriptor.getQueryManager( ).checkDatabaseForDoesExist( );
    //Named Queries
    // Event manager.
    // Mappings.
    DirectToFieldMapping taskIdMapping = new DirectToFieldMapping( );
    taskIdMapping.setAttributeName( "taskId" );
    taskIdMapping.setGetMethodName( "getTaskId" );
    taskIdMapping.setSetMethodName( "setTaskId" );
    taskIdMapping.setFieldName( "PRO_CON_SUBSCRIPTION.TASK_ID" );
    descriptor.addMapping( taskIdMapping );
    DirectToFieldMapping userIdMapping = new DirectToFieldMapping( );
    userIdMapping.setAttributeName( "userId" );
    userIdMapping.setGetMethodName( "getUserId" );
    userIdMapping.setSetMethodName( "setUserId" );
    userIdMapping.setFieldName( "PRO_CON_SUBSCRIPTION.USER_ID" );
    descriptor.addMapping( userIdMapping );
    return descriptor;
    }
    Any thoughts as to why this is happening?
    Thanks,
    Andrew Lee

    Hi Andrew,
    AllFieldsLocking isn't designed for this, since you have an object that's composed entirely of primary key data. You should use one of TopLink's other locking policies to make this work. For instance, try useChangedFieldsLocking(), pessimistic locking, or any of the other optimistic locking strategies available within TopLink, based on the design you have outlined.
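    For example, the change in the descriptor from the question would look roughly like this (a sketch only, mirroring the posted code; the remaining settings and mappings stay as they are):
    // Sketch: same descriptor as above, with the locking policy swapped.
    // useAllFieldsLocking( ) compares every non-primary-key field on delete,
    // which is where the posted stack trace fails (getAllNonPrimaryKeyFields).
    public Descriptor buildProConSubscriptionDescriptor( ) {
    Descriptor descriptor = new Descriptor( );
    descriptor.setJavaClass( mil.usmc.mol.procon.domain.ProConSubscription.class );
    descriptor.addTableName( "PRO_CON_SUBSCRIPTION" );
    descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.TASK_ID" );
    descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.USER_ID" );
    descriptor.useChangedFieldsLocking( );   // instead of descriptor.useAllFieldsLocking( )
    // ... identity map settings, query manager and mappings as in the original ...
    return descriptor;
    }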
    Darren

  • Possible deadlock: transactions, cursors, and sequences...

    Hello,
    After making recent changes to our database in response to a previous issue (bug found and patched in BDB; local code fixed by removing both DB_TXN_SNAPSHOT and DB_MULTIVERSION) we've come across a possible deadlock.
    BACKGROUND:
    This database has been running for over a year in production and I've never seen a hard deadlock. We use the default deadlock detection engine internal to BDB, transactions, and our code supports the processing of deadlocks and subsequent retries and final abandonment when necessary. The transactions in question involve cursors and sequences; we're using cursors to flip through entries in an existing database, and should no match be found for an update, we insert a new record. Before doing so, we grab the "next ID" (primary key) from a sequence we have (attached, as all sequences are, to its own different DB, done on advice from online docs: "For this reason, it is often preferable for sequence objects to be stored in their own database.") and finally insert the new record.
    This is a 64-bit Linux machine. There were 4 operational threads at the time; all were waiting on pthread conditions. As I understand it, deadlocks should have been detected internally and DB_LOCK_DEADLOCK returned to us somewhere? Since we're using the default lock detection engine and firing it constantly, we should not require an extra thread to monitor the lock tables and manually reject deadlocked transactions, etc.?
    I wasn't sure what to do when the deadlock occurred, but I found some forum posts referencing db_stat -Co and ran it, along with grabbing a core dump which I still have available. I've reverted the DB binary to the older DB_MULTIVERSION code while I work on figuring this out, but if there's something else crucial I should have done, I can run the new code again and wait for another deadlock to happen to run additional diagnostics.
    Any ideas or assistance is appreciated. Thank you.
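    (For reference, the retry-and-abandon pattern described above looks roughly like the following; this is a generic Java sketch, and DeadlockSignal/TxnWork are stand-ins defined for the example, not Berkeley DB API names.)
    // Generic deadlock retry sketch; not Berkeley DB API.
    class DeadlockSignal extends Exception {}
    interface TxnWork {
        void run() throws DeadlockSignal;           // one attempt: begin txn, do work, commit
    }
    class DeadlockRetry {
        static void runWithRetries(TxnWork work, int maxAttempts)
                throws DeadlockSignal, InterruptedException {
            for (int attempt = 1; ; attempt++) {
                try {
                    work.run();                     // the attempt aborts its own txn on deadlock
                    return;
                } catch (DeadlockSignal e) {
                    if (attempt >= maxAttempts) {
                        throw e;                    // final abandonment after N attempts
                    }
                    Thread.sleep(10L * attempt);    // brief backoff before retrying
                }
            }
        }
    }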
    DEADLOCK INFORMATION:
    THREAD 2: Attempting to 'get' an asset.
    (gdb) bt
    #0 0x00002aec07e27496 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    #1 0x00002aec078ddeed in __db_pthread_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #2 0x00002aec078dda8b in __db_tas_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #3 0x00002aec0795b8f1 in __lock_get_internal () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #4 0x00002aec0795bc42 in __lock_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #5 0x00002aec07987d94 in __db_lget () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #6 0x00002aec07914625 in __ham_get_meta () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #7 0x00002aec07908d4b in __hamc_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #8 0x00002aec07979e8a in __dbc_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #9 0x00002aec0797aa0d in __dbc_pget () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #10 0x00002aec0798654b in __dbc_pget_pp () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #11 0x00002aec078d4dd7 in Dbc::get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    THREAD 5: Attempting to 'set' an asset.
    (gdb) bt
    #0 0x00002aec07e27496 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    #1 0x00002aec078ddeed in __db_pthread_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #2 0x00002aec078dda8b in __db_tas_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #3 0x00002aec079d0c45 in __seq_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #4 0x00002aec078dd02e in DbSequence::get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    THREAD 6: Attempting to 'set' an asset.
    (gdb) bt
    #0 0x00002aec07e27496 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    #1 0x00002aec078ddeed in __db_pthread_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #2 0x00002aec078dda8b in __db_tas_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #3 0x00002aec0795b8f1 in __lock_get_internal () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #4 0x00002aec0795bc42 in __lock_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #5 0x00002aec07987d94 in __db_lget () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #6 0x00002aec079893ee in __db_new () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #7 0x00002aec0798bb7e in __db_poff () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #8 0x00002aec07918eb3 in __ham_add_el () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #9 0x00002aec07907fe1 in __hamc_put () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #10 0x00002aec0797b6e5 in __dbc_put () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #11 0x00002aec0796ecde in __db_put () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #12 0x00002aec07985e6c in __db_put_pp () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #13 0x00002aec078d395c in Db::put () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    THREAD 7: Attempting to 'set' an asset.
    (gdb) bt
    #0 0x00002aec07e27496 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    #1 0x00002aec078ddeed in __db_pthread_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #2 0x00002aec078dda8b in __db_tas_mutex_lock () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #3 0x00002aec0795b8f1 in __lock_get_internal () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #4 0x00002aec0795bc42 in __lock_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #5 0x00002aec07987d94 in __db_lget () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #6 0x00002aec07915b6a in __ham_lock_bucket () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #7 0x00002aec07915dc7 in __ham_get_cpage () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #8 0x00002aec079075f9 in __ham_lookup () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #9 0x00002aec07908f9f in __hamc_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #10 0x00002aec07979e8a in __dbc_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #11 0x00002aec07984045 in __db_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #12 0x00002aec079d0374 in __seq_update () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #13 0x00002aec079d0c29 in __seq_get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    #14 0x00002aec078dd02e in DbSequence::get () from /usr/local/BerkeleyDB.4.7/lib/libdb_cxx-4.7.so
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    env/__db.005 Region name
    0x2b8dbff65000 Original region address
    0x2b8dbff65000 Region address
    0x2b8dbff65138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 225009 allocations, 0 failures, 0 frees, 1 longest
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    808cfad0 READ 1 HELD db1 page 0
    808cfad3 READ 2 HELD db1 page 0
    808cfad5 READ 1 HELD db1 page 0
    33 READ 1 HELD db1 handle 0
    22 READ 1 HELD db2 handle 0
    18 READ 1 HELD db3 handle 0
    808cfad5 READ 1 HELD db1 page 3906
    808cfad3 READ 1 HELD db1 page 6300
    808cfad3 WRITE 1 HELD db1 page 6300
    808cfad0 READ 1 HELD db1 page 8272
    808cfabe WRITE 2 HELD db4 page 82387
    808cfad3 WRITE 1 HELD db4 page 42879
    808cfad3 WRITE 1 HELD db4 page 42878
    808cfad3 WRITE 1 HELD db4 page 42877
    808cfad3 WRITE 1 HELD db4 page 42874
    808cfad3 WRITE 1 HELD db4 page 42873
    808cfad3 WRITE 1 HELD db4 page 42872
    808cfad3 WRITE 1 HELD db4 page 42901
    808cfad3 WRITE 1 HELD db4 page 42897
    808cfad3 WRITE 1 HELD db4 page 42882
    808cfad3 WRITE 1 HELD db4 page 42881
    808cfad3 WRITE 1 HELD db4 page 42880
    808cfad3 WRITE 1 HELD db4 page 42894
    808cfad3 WRITE 1 HELD db4 page 43797
    1a READ 1 HELD db3sequence handle 0
    1c READ 1 HELD db5 handle 0
    20 READ 1 HELD db6 handle 0
    24 READ 1 HELD db7 handle 0
    808cfad3 READ 13 HELD db4 page 0
    808cfabe READ 2 HELD db4 page 0
    808cfabe WRITE 1 WAIT db4 page 0
    808cfad5 READ 1 WAIT db4 page 0
    26 READ 1 HELD db4 handle 0
    808cfabe READ 2 HELD db4sequence page 0
    808cfad0 READ 1 HELD db4sequence page 0
    28 READ 1 HELD db4sequence handle 0
    808cfabe READ 3 HELD db8 page 0
    2a READ 1 HELD db8 handle 0
    808cfabe READ 4 HELD db9 page 0
    808cfad0 READ 1 HELD db9 page 0
    808cfad3 READ 14 HELD db9 page 0
    808cfad5 READ 1 HELD db9 page 0
    808cfabe READ 1 HELD db4sequence page 2
    808cfabe WRITE 1 HELD db4sequence page 2
    808cfad0 READ 1 WAIT db4sequence page 2
    2e READ 1 HELD db9 handle 0
    808cfabe WRITE 2 HELD db9 page 1
    808cfad3 READ 2 HELD db1sequence page 0
    35 READ 1 HELD db1sequence handle 0
    808cfad3 READ 1 HELD db1sequence page 2
    808cfad3 WRITE 1 HELD db1sequence page 2
    808cfad5 READ 1 HELD db9 page 2833
    808cfad0 WRITE 1 HELD db9 page 7946
    808cfad3 WRITE 14 HELD db9 page 8250
    808cfabe WRITE 3 HELD db8 page 13301

    What else could be causing this in terms of the application having a resource locked? I can say that there are no other running threads doing anything related to BDB at all - they are all in similar "no work, sleep until we get some" calls, with the exception being the main thread which is sitting in sigwait(). What types of things could the application be doing that would prevent all 4 BDB threads from being able to obtain mutexes that are internal to them and not accessible to the application?
    Other thoughts:
    * On a fifth thread, from time to time, txn_checkpoint() is called. Could this have been left in an unclean state?
    * If DB_RMW is used incorrectly, could lock order be compromised? We are not using CDS, so do we need to specify DB_WRITECURSOR to our db->cursor() calls? We do not; we only provide DB_RMW to pget() calls at present.
    * Why is the thread attempting to call sequence->get also deadlocked? The sequence is in its own database - is it waiting on a more global "locker manager" mutex at a high level?
    As I don't see how anything we do can directly control BDB's locking strategies, my only thought is that we're making a programming error that forces BDB to lock things in an incorrect order in a way that prevents deadlock detection from occurring. Is this possible? Mainly the only thing changing here was our replacing DB_TXN_SNAPSHOT with the appropriate DB_RMW flags when needed, which is why I'm thinking we did something wrong here, but I'm not sure what.
    I'll continue investigation, but any ideas you have in terms of appropriate directions would be helpful. I'll also work on reproducing this if I can by working backwards from the stack information. Thanks.
    Later thought: Why is 808cfad3 not waiting on anything even though stack clearly shows it (thread 5, I'm guessing) in pthread_cond_wait? Can a transaction enter a wait state without showing up in db_stat output?
    Thanks!

  • Handling Concurrency in Oracle Service Bus11g

    Hi,
    I"m searching for how to handle concurrency within OSB.
    Scenario: +I've a proxy service which listens to MQ and whenever a message is picked up, it routes it to READProxy service and followed by CREATEProxy service or UPDATEProxy service.+
    However, the message rate is ~1000 per hour. So there could be a chance such a way two or more messages can be picked up by LISTENERProxy service at different managed servers at a time.
    Could somebody help me how to make sure READProxy service reads right data by considering the nature of concurrency?
    Thank you

    Yes, you're right; this is easy and one of the best strategies. However:
    Locking Strategies: --> [http://www.dba-oracle.com/t_locking_strategies.htm]
    select for update - This holds an exclusive lock on the target row and is 100% reliable. The downside is with disconnected sessions, which may require DBA intervention to release the locks. In general, "select for update" is not used in web-based systems, or in applications with unreliable network connectivity.
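    As a rough illustration of the pessimistic "select for update" approach above (a sketch only; the table and column names are invented for the example, and in OSB this would normally sit behind a database adapter or business service rather than raw JDBC):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    // Hypothetical sketch: lock the row for the message being processed,
    // apply the change, and commit so the row lock is released promptly.
    class MessageLockExample {
        void processExclusively(Connection conn, long messageId) throws Exception {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT status FROM message_queue WHERE id = ? FOR UPDATE")) {
                ps.setLong(1, messageId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        conn.rollback();
                        return;                          // row already gone, nothing to do
                    }
                    // ... CREATE/UPDATE logic goes here while the row is locked ...
                }
            }
            try (PreparedStatement upd = conn.prepareStatement(
                    "UPDATE message_queue SET status = 'PROCESSED' WHERE id = ?")) {
                upd.setLong(1, messageId);
                upd.executeUpdate();
            }
            conn.commit();                               // releases the row lock
        }
    }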
    So we tried to implement timestamp-based locking but couldn't succeed, as it involves milliseconds and requires comparing an XML timestamp to an Oracle timestamp.
    Just try to get help from someone who knows PL/SQL in your org, or go through some tutorials on the internet.
    Thanks for your suggestion. I can manage to understand SQL and PL/SQL at a basic level.
    I'm trying to understand whether I missed any better approach.
    Thank you

  • Eager fetching

    I believe eager fetching could greatly benefit from a more dynamic method than
    predefined fetch groups.
    Different queries pull different parts of the object graph, and only the client using
    the object model knows what's needed for a particular query. Trying to fit
    conflicting data needs into one set of fetch groups is a challenge. While I
    would retain this feature, I would like to propose field-based eager-fetching
    control, which could also be used for detached objects.
    Let's say we have a PO with a POLine collection, and each POLine references a
    Product. If we want to eager fetch POs along with their lines and products,
    we could say something like:
    pm.getFetchConfiguration().addPath(com.peacetech.sales.PO.class, "lines");
    pm.getFetchConfiguration().addPath(com.peacetech.sales.PO.class, "lines/product");
    or
    pm.getFetchConfiguration().addPath(com.peacetech.sales.PO.class, pathList);
    Of course, we would also need:
    pm.getFetchConfiguration().removePath(com.peacetech.sales.PO.class, "lines/product");
    pm.getFetchConfiguration().removePath(com.peacetech.sales.PO.class); // remove all for a given class
    This is just a crude example. In real life we would need a few classes to
    support the idea, plus a main class GraphConfig or something similar, so users can
    configure, build and reuse those GraphConfig instances and use them for
    various purposes like eager fetching and detaching without conflicts.
    I think it would be an enormous help to uncouple so many conflicting issues
    (fetch optimizations, optimistic-locking field groups, detached graphs, etc.)
    from one very limited concept of predefined fetch groups.

    Mark,
    I mentioned this possibility in one of my prior posts. It is not exactly what
    I want (since it lacks inter-class relations, which would let you express hints for
    a deep graph rather than for a class and its fields), but it is close.
    If you maintain your metadata files by hand it is too much effort, but since
    almost all my models and metadata are generated I can easily do (and undo)
    it for each and every class/field, and I will give it a try :-)
    Alex
    "Marc Prud'hommeaux" <[email protected]> wrote in message
    news:[email protected]...
    Alex-
    Well, you could always simulate dynamic fetch groups by defining a
    different custom fetch group for each field (provided your license has
    the capability to use custom fetch groups). For simplicity, you could
    name each fetch group to be "ClassName.fieldName".
    That way, you could do something like this:
    KodoQuery kq = (KodoQuery) pm.newQuery (MyClass.class, "someQuery");
    kq.getFetchConfiguration ().addFetchGroup ("MyClass.fieldA");
    kq.getFetchConfiguration ().addFetchGroup ("MyClass.fieldB");
    kq.getFetchConfiguration ().addFetchGroup ("MyClass.fieldC");
    kq.getFetchConfiguration ().addFetchGroup ("MyClass.fieldD");
    Collection results = (Collection) kq.execute ();
    It would then be pretty simple to make a static helper method that will
    do things like includeAllFieldsInFetchGroup() or
    includeNoFieldsInFetchGroup().
    In article <[email protected]>, Alex Roytman wrote:
    David Tinker says multiple fetch groups are the solution:
    Hi Alex
    You can create as many fetch groups as you like in the meta data (with JDO
    Genie anyway). I think it is much better to get tuning information like this
    out of the code. It should be added externally, much like you add indexes to
    database tables in response to query search requirements. I understand that
    sometimes you will need programmatic control, but mostly it is better not to
    pollute the code.
    JDO 2.0 is going to standardize the concept of a use-case. This will delimit
    business operations to the JDO implementation so it can apply appropriate
    meta-data defined fetch groups or locking strategies. You will be able to
    use a vendor-supplied tool to analyze the performance of your application
    and construct fetch groups for different use-cases (business operations). No
    change to the code is necessary.
    Here is an example:
    CODE
    pm.beginUseCase("com.peacetech.sales.displayOrder");
    POClass o = (POClass)pm.getObjectById(oid, true);
    .... other JDO code ....
    pm.endUseCase();
    The meta data for the use-case will specify the fetch group to use when
    looking up the instance (and locking behaviour etc.). Other use-cases can
    have different settings. Only the begin/end calls are needed in the code to
    make this happen.
    Note that this is a very long way from being finalized. I have just made up
    this example, and how it works in JDO 2.0 is likely to be different.
    Use-case support will show up in JDO Genie beta releases soon. We already
    have powerful performance monitoring and analysis support in our GUI
    Workbench and very flexible fetch groups.
    Cheers
    David
    Here's my opinion
    David,
    Thank you for your response. Multiple fetch groups do give more flexibility
    (I do not believe Kodo supports them, though) but do not alter my position.
    I do not want to pollute my metadata with tons of fetch groups, creating
    dependencies between my code and those fetch groups. The JDOQL language, being
    what it is, already creates an unavoidable dependency between field names and filter
    strings, so I want to keep it the same way with other things which are
    essentially references to persistent fields (disconnected instance depth
    control, eager fetching, ...).
    As for polluting my code: I want to assure you everything configurable will
    not be in my code but in JNDI or config files, as long as JDO gives a clear
    API for expressing graph path selection.
    The use-case concept ties nicely with the graph path selection concept. In fact, this
    GraphPaths class along with some additional options is your use case.
    It is much easier for me to refactor and keep field names and my
    GraphPath in sync than fetch groups.
    Alex
    "Abe White" <[email protected]> wrote in message
    news:[email protected]...
    I think the basic idea you've suggested is a good one. We'll certainly
    consider it for a future release. The JDO 2 spec team is also
    pondering
    these problems...
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • BMP Entity caching

    Hi,
    We are using BMP Entity beans on OC4J 9.0.2 and we have following problem:
    Every access to an entity bean invokes ejbLoad even if the bean is already loaded. Moreover, it seems like OC4J re-uses instances of the beans instead of creating new instances. I mean that, for example, if we have a bean that was loaded with PK=1, and we're trying to load another bean with PK=2, OC4J invokes ejbLoad on the first bean with the new PK.
    I've tried to increase max-instances and max-instances-per-pk without any success.
    I've changed validity-timeout to a big number, again without success.
    I cannot change exclusive-write-access to true. OC4J always puts "false" there.
    I found a couple of posts in this forum about BMP caching, but there was no solution.
    Is there any solution to this problem?
    Thanks

    Stas -- There were a number of issues with EJB locking strategies in v1022x. These included some scalability limits that we believed needed to be removed for enterprise systems, as well as being able to have true multi-JVM concurrency, which was not easy to do with CMP in v1022x. Much of the exclusive-write-access code for non-read-only beans relied on these old mechanisms. A side effect of these changes was that for a small set of applications these changes might have some performance impact. We are looking to see how we might change this in the future, but for the time being three methods exist to work around these issues. The first is to use TopLink for BMP. TopLink can provide caching that takes the place of the caching you were relying on in v1022x. The second is to use a caching mechanism like JCache that Avi described. The last, and probably least desirable, is to continue to use Oracle9iAS v1022x for your application.
    Realize that these changes were made to increase enterprise scalability beyond what was available in v1022x, not to negatively impact enterprise scalability.
    Lastly, it would be good to know if you are using multiple VMs in your application, if you manage multi-VM locking, and if the impact you are seeing is as great when you start to scale your application beyond a single VM.
    Thanks -- Jeff

  • Can Repository 6i (release3) be used independently without designer

    Hi we are evaluating version control software
    Does anybody know how to configure/use Repository 6i (release 3) for file based version control:
    This is my requirement:
    We have 15 developers who are sharing/making changes to an application which has about 100 packages.
    I would like to upload the packages (spec & body script files) into repository folders and then, as repository owner, check in all the packages for the first time.
    Then I would like all developers to log in (all of them given access to select, insert, update on the package files) to do regular check-in/check-out of the files.
    Can this be done through Repository 6i (Release 3)?
    I do not wish to load all the related database objects into the Designer design capture, because it would be too big a task, as our application cross-links with several Oracle Apps (11.03) schema objects and multi-level views.
    All I need is file (script) based repository management wherein at any time we know which developer is working on which package (as he would have checked it out / checked it in), thus avoiding more than one developer changing the same package and overriding others' changes.
    Let me know how to configure Repository 6i Release 3 for the above requirement?
    Thanks

    Gajarajan,
    Yes, you can use the 6i Repository without using Designer. If you install the Repository only, then the Designer client and the Designer model are not installed. Once you do this, you would use the Repository Object Navigator or the command line to do your software configuration management tasks. There isn't one way of using the software, as this depends on the way your team wants to work. But it is possible to upload your code line and then allow developers to check out and check in code. There are various locking strategies, and the concept of branching to isolate one developer's (or team of developers') work from others'. There is also a security model that controls which files developers can see and what operations they can perform on them. There is a free web course on the Oracle Learning Network that introduces all these concepts. OLN can be accessed through the OTN site.
    Regards
    Mark

  • Lock Up Your Data for Up to 90% Less Cost than On-Premises Solutions with NetApp AltaVault

    June 2015
    Explore
    Data-Protection Services from NetApp and Services-Certified Partners
    Whether delivered by NetApp or by our professional and support services certified partners, these services help you achieve optimal data protection on-premises and in the hybrid cloud. We can help you address your IT challenges for protecting data with services to plan, build, and run NetApp solutions.
    Plan Services—We help you create a roadmap for success by establishing a comprehensive data protection strategy for:
    Modernizing backup for migrating data from tape to cloud storage
    Recovering data quickly and easily in the cloud
    Optimizing archive and retention for cold data storage
    Meeting internal and external compliance regulations
    Build Services—We work with you to help you quickly derive business value from your solutions:
    Design a solution that meets your specific needs
    Implement the solution using proven best practices
    Integrate the solution into your environment
    Run Services—We help you optimize performance and reduce risk in your environment by:
    Maximizing availability
    Minimizing recovery time
    Supplying additional expertise to focus on data protection
    Rachel Dines
    Product Marketing, NetApp
    The question is no longer if, but when you'll move your backup-and-recovery storage to the cloud.
    As a genius IT pro, you know you can't afford to ignore cloud as a solution for your backup-and-recovery woes: exponential data growth, runaway costs, legacy systems that can't keep pace. Public or private clouds offer near-infinite scalability, deliver dramatic cost reductions and promise the unparalleled efficiency you need to compete in today's 24/7/365 marketplace.
    Moreover, an ESG study found that backup and archive rank first among workloads enterprises are moving to the cloud.
    Okay, fine. But as a prudent IT strategist, you demand airtight security and complete control over your data as well. Good thinking.
    Hybrid Cloud Strategies Are the Future
    Enterprises, large and small, are searching for the right blend of availability, security, and efficiency. The answer lies in achieving the perfect balance of on-premises, private cloud, and public services to match IT and business requirements.
    To realize the full benefits of a hybrid cloud strategy for backup and recovery operations, you need to manage the dynamic nature of the environment— seamlessly connecting public and private clouds—so you can move your data where and when you want with complete freedom.
    This begs the question of how to integrate these cloud resources into your existing environment. It's a daunting task. And, it's been a roadblock for companies seeking a simple, seamless, and secure entry point to cloud—until now.
    Enter the Game Changer: NetApp AltaVault
    NetApp® AltaVault® (formerly SteelStore) cloud-integrated storage is a genuine game changer. It's an enterprise-class appliance that lets you leverage public and private clouds with security and efficiency as part of your backup and recovery strategy.
    AltaVault integrates seamlessly with your existing backup software. It compresses, deduplicates, encrypts, and streams data to the cloud provider you choose. AltaVault intelligently caches recent backups locally while vaulting older versions to the cloud, allowing for rapid restores with off-site protection. This results in a cloud-economics–driven backup-and-recovery strategy with faster recovery, reduced data loss, ironclad security, and minimal management overhead.
    AltaVault delivers both enterprise-class data protection and up to 90% less cost than on-premises solutions. The solution is part of a rich NetApp data-protection portfolio that also includes SnapProtect®, SnapMirror®, SnapVault®, NetApp Private Storage, Cloud ONTAP®, StorageGRID® Webscale, and MetroCluster®. Unmatched in the industry, this portfolio reinforces the data fabric and delivers value no one else can provide.
    Figure 1) NetApp AltaVault Cloud-integrated Storage Appliance.
    Source: NetApp, 2015
    The NetApp AltaVault Cloud-Integrated Storage Appliance
    Four Ways Your Peers Are Putting AltaVault to Work
    How is AltaVault helping companies revolutionize their backup operations? Here are four ways your peers are improving their backups with AltaVault:
    Killing Complexity. In a world of increasingly complicated backup and recovery solutions, financial services firm Spot Trading was pleased to find its AltaVault implementation extremely straightforward—after pointing their backup software at the appliance, "it just worked."
    Boosting Efficiency. Australian homebuilder Metricon struggled with its tape backup infrastructure and rapid data growth before it deployed AltaVault. Now the company has reclaimed 80% of the time employees formerly spent on backups—and saved significant funds in the process.
    Staying Flexible. Insurance broker Riggs, Counselman, Michaels & Downes feels good about using AltaVault as its first foray into public cloud because it isn't locked in to any one approach to cloud—public or private. The company knows any time it wants to make a change, it can.
    Ensuring Security. Engineering firm Wright Pierce understands that if you do your homework right, it can mean better security in the cloud. After doing its homework, the firm selected AltaVault to securely store backup data in the cloud.
    Three Flavors of AltaVault
    AltaVault lets you tap into cloud economics while preserving your investments in existing backup infrastructure, and meeting your backup and recovery service-level agreements. It's available in three form factors: physical, virtual, and cloud-based.
    1. AltaVault Physical Appliances
    AltaVault physical appliances are the industry's most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Companies deploy AltaVault physical appliances in the data center to protect large volumes of data. These datasets typically require the highest available levels of performance and scalability.
    AltaVault physical appliances are built on a scalable, efficient hardware platform that's optimized to reduce data footprints and rapidly stream data to the cloud.
    2. AltaVault Virtual Appliances for Microsoft Hyper-V and VMware vSphere
    AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They're also perfect for enterprises that want to safeguard branch offices and remote offices with the same level of protection they employ in the data center.
    AltaVault virtual appliances deliver the flexibility of deploying on heterogeneous hardware while providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto VMware vSphere or Microsoft Hyper-V hypervisors—so you can choose the hardware that works best for you.
    3. AltaVault Cloud-based Appliances for AWS and Microsoft Azure
    For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are key to enabling cloud-based recovery.
    On-premises AltaVault physical or virtual appliances seamlessly and securely back up your data to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance in AWS or Azure and recover data in the cloud. Usage-based, pay-as-you-go pricing means you pay only for what you use, when you use it.
    AltaVault solutions are a key element of the NetApp vision for a Data Fabric; they provide the confidence that—no matter where your data lives—you can control, integrate, move, secure, and consistently manage it.
    Figure 2) AltaVault integrates with existing storage and software to securely send data to any cloud.
    Source: NetApp, 2015
    Putting AltaVault to Work for You
    Four common use cases illustrate the different ways that AltaVault physical and virtual appliances are helping companies augment and improve their backup and archive strategies:
    Backup modernization and refresh. Many organizations still rely on tape, which increases their risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability. AltaVault serves as a tape replacement or as an update of old disk-based backup appliances and virtual tape libraries (VTLs).
    Adding cloud-integrated backup. AltaVault makes a lot of sense if you already have a robust disk-to-disk backup strategy, but want to incorporate a cloud option for long-term storage of backups or to send certain backup workloads to the cloud. AltaVault can augment your existing purpose-built backup appliance (PBBA) for a long-term cloud tier.
    Cold storage target. Companies want an inexpensive place to store large volumes of infrequently accessed file data for long periods of time. AltaVault works with CIFS and NFS protocols, and can send data to low-cost public or private storage for durable long-term retention.
    Archive storage target. AltaVault can provide an archive solution for database logs or a target for Symantec Enterprise Vault. The simple-to-use AltaVault management platform can allow database administrators to manage the protection of their own systems.
    We see two primary use cases for AltaVault cloud-based appliances, available in AWS and Azure clouds:
    Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, AltaVault cloud-based appliances are key to enabling cloud-based disaster recovery. Via on-premises AltaVault physical or virtual appliances, data is seamlessly and securely protected in the cloud.
    Protect cloud-based workloads.  AltaVault cloud-based appliances offer an efficient and secure approach to backing up production workloads already running in the public cloud. Using your existing backup software, AltaVault deduplicates, encrypts, and rapidly migrates data to low-cost cloud storage for long-term retention.
    The benefits of cloud—infinite, flexible, and inexpensive storage and compute—are becoming too great to ignore. AltaVault delivers an efficient, secure alternative or addition to your current storage backup solution. Learn more about the benefits of AltaVault and how it can give your company the competitive edge you need in today's hyper-paced marketplace.
    Rachel Dines is a product marketing manager for NetApp where she leads the marketing efforts for AltaVault, the company's cloud-integrated storage solution. Previously, Rachel was an industry analyst for Forrester Research, covering resiliency, backup, and cloud. Her research has paved the way for cloud-based resiliency and next-generation backup strategies.

    You didn't say what phone you have - but you can set it to update and backup and sync over wifi only - I'm betting that those things are happening "automatically" using your cellular connection rather than wifi.
    I sync my email automatically when I have a wifi connection, but I can sync manually if I need to.  Downloads happen for me only on wifi, photo and video backup are only over wifi, app updates are only over wifi....check your settings.  Another recent gotcha is Facebook and videos.  LOTS of people are posting videos on Facebook and they automatically download and play UNLESS you turn them off.  That can eat up your data in a hurry if you are on FB regularly.

  • How can I make Numbers respect the row and column locks in an Excel workbook opened in Numbers???

    I have a Windows server app that generates Excel workbooks to be emailed to political campaign volunteers to be loaded into Numbers on an iPad, edited, then emailed back to be posted to the server database.  There are two problems encountered:
    1.  The Excel workbook has the first row (column headings) and first column (route identifier) of cells locked, so that they will not scroll off the screen, but Numbers doesn't respect the locks, so when the user scrolls horizontally or vertically, the column headings and/or the route identifier scroll off the screen.
    2.  The Excel workbook has pop-up "tool-tip" type comments in certain column headings in order to provide the user with the acceptable entries for those columns, but Numbers does not respect those.  When the user touches any of the commented column heading cells, a context menu appears instead of the comment.
    What must I do in the Excel workbook sheets, or what settings can be made in Numbers to correct the above?

    I imported a Numbers '09 file into Numbers on the iPad.  All comments were removed during import. Frozen header row and column were retained.
    Thank you for your responses. I must ask, however: when you refer to "importing" the Excel file, are you referring to a two-step process whereby the Excel file is first converted by some other process into Numbers format and then opened in the Numbers application - which is what I have to do in my PC application to generate the Excel file, and reverse that process to convert the Excel back into my database format - or are you simply referring to opening the file in Numbers as "importing" it? And please excuse any ignorance, as I'm not at all familiar with Apple's terminology. In fact, I don't own an iPad myself, but rather I have to depend on one of my clients to do the testing for me.
    I imported an XLSX file into Numbers on the iPad.  The file used "freeze panes" to "freeze" the first column and row. Only warning on import was that it changed fonts. It imported without the first row and column frozen and with no comments. Nothing I can do about the missing comments but it was a simple matter to turn the first column & row into headers and freeze them.
    Unfortunately this would not be an efficient solution, since the end users are, for the most part, elderly political campaign volunteers who are fairly computer illiterate. These workbooks are actually canvassing lists - known as walklists. Their purpose is for the volunteers to interview voters, record the results of the interviews, and post the results to a database, which provides the campaigns with valuable strategizing capabilities. Also, these workbooks have multiple pages - as many as 10 or more - and from what I infer from the above, the setting changes would have to be made on each page.
    My whole intent in developing this iPad/Tablet methodology was to significantly reduce volunteer's work - which is a recruitment benefit - and eliminate paper.  While the latter would be accomplished, the former would not, and in fact would tend to increase it.  It's necessary to keep the first row - column headings - and the first column - the route identifier - from scrolling off the page, so that the volunteer won't have to keep scrolling up and down and right and left to know what the data are.
    Conclusion: Comments are not supported on the iPad version of Numbers.  Frozen headers are not imported from Excel but can be recreated easily.
    I was previously directed to the Apple website http://www.apple.com/ipad/from-the-app-store/apps-by-apple/numbers.html which extols the wonders of the Numbers application. About halfway down the page there's a section regarding "Sliders, steppers and pop-ups". The web page states that pop-ups can be set up but, being a marketing site, gives no indication whatsoever as to how it's done. I was hoping someone could tell me if there's any way to carry them over from an Excel file.

  • Transaction Locking during multiple Webservice - persistent webs sessions

    Hi All,<br>
    <br>
    Yesterday evening we had a discussion concerning ESA architecture. We want to create (web)services for accessing the SAP business objects (using XI) and use these (web)services via visual composer, webdynpro or custom java development.<br>
    <br>
    It does not seem a big problem to perform creations and reads of transaction, but when we want to change objects, we saw some problems concerning locking/commiting and rollbacks.<br>
    <br>
    From our GUI we would like to be able to go in edit mode and from that moment on, the transaction should be locked. We then want to change certain parameters and commit only when we push the save button.<br>
    <br>
    We can invoke a webservice wich tries to lock the transaction, but at the moment the XI scenario is completed (=the lock is created), the program at SAP side (=proxy in our case) is also finished and the lock is automaticly removed. How can we do locking, when using webservices via XI?<br>
    <br>
    The problem of the rollback and commit we can partially solve by putting more logic in the GUI, but we don't want to do that. How can we do a change of a business object and remember this change without doing a commit on the SAP system.... . Same problem for the rollback.<br>
    <br>
    Is there a away to keep a session "alive" during multiple webservice calls or to simulate it? Every webservice invokation happens in a different context...isn't it?<br>
    <br>
    <br>
    <b>Just to make it a bit more clear.</b><br>
    <br>
    Suppose we create 6 service related to the business object bupa (business partner).<br>
    - read<br>
    - change<br>
    - commit<br>
    - rollback<br>
    - lock<br>
    - unlock.<br>
    <br>
    We create a GUI which uses these services.<br>
    <br>
    <b>Step1:</b> we want to see bupa in detail, so the read webservice is called and the retrieved details are shown in the GUI<br>
    <b>Step2:</b> we want to go in edit mode, so the lock webservice is called to lock the bupa. The bupa should stay locked, untill the unlock is called. Here occurs the problem. The webservice lock is called, XI will trigger the proxy on the SAP system. This proxy will lock the bupa. As soon as the proxy-program is completed, the bupa lock will automaticly be removed ... . We want to keep this lock!<br>
    <b>Step3:</b> we change the bupa using the change webservice. Only the user who locked the bupa should be able to change it.<br>
    Problem concerning the locking occurs: standard we don't know who locked the bupa (this is done by the generic RFC user, configured in sm59). Should we pass some kind of GUID towards the proxy and build some additional logic to know which end-user in fact locked it... . Using the userid isn't sufficient, because a user could logon multiple time simultanously.<br>
    <br>
    Another problem is that we want to change the bupa, without having to do a commit yet.De commit should be called only when pushing the save button. When the proxy is ended and we did not do a commit, the changes are lost normally ... .<br>
    <br>
    What we in fact want to do is Simulate the bsp behaviour.<br>
    <b>Step4:</b>We want to perform a save of the things we changed or a reset. This means the commit or rollback webservice is called.<br>
    <b>Step5:</b> We want to unlock the bupa by calling the unlock webservice.<br>
    <br>
    <br>
    Please give me your comments.<br>
    <br>
    Kind regards<br>
    Joris<br>
    <br>
    Note: Transaction Locking during multiple Webservice "sessions".
    Message was edited by:
            Joris Verberckmoes
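    One way to let a lock survive stateless service calls is a server-side lock registry keyed by a client-generated token (the GUID idea above), with an expiry so that abandoned locks do not linger. The sketch below only illustrates that idea in Java; the class and method names are hypothetical and not part of any SAP, XI or JCo API, and in a real landscape the registry would have to live on the SAP side, for example as a lock table maintained by the proxies.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: a lock registry keyed by a client-held token, so a "lock"
    // can outlive a single stateless webservice invocation.
    public class BupaLockRegistry {

        private static final long LOCK_TIMEOUT_MS = 15 * 60 * 1000; // expire abandoned locks after 15 minutes

        private static final class LockEntry {
            final String token;
            final long takenAt;
            LockEntry(String token, long takenAt) { this.token = token; this.takenAt = takenAt; }
        }

        private final Map<String, LockEntry> locks = new ConcurrentHashMap<String, LockEntry>();

        /** Called by the "lock" service: returns a token the GUI must keep, or null if already locked. */
        public synchronized String lock(String bupaId) {
            LockEntry current = locks.get(bupaId);
            if (current != null && !isExpired(current)) {
                return null; // somebody else holds the lock
            }
            String token = UUID.randomUUID().toString();
            locks.put(bupaId, new LockEntry(token, System.currentTimeMillis()));
            return token;
        }

        /** Called by the "change" and "commit" services to verify the caller still owns the lock. */
        public synchronized boolean owns(String bupaId, String token) {
            LockEntry current = locks.get(bupaId);
            return current != null && !isExpired(current) && current.token.equals(token);
        }

        /** Called by the "unlock" service (and after commit or rollback). */
        public synchronized void unlock(String bupaId, String token) {
            LockEntry current = locks.get(bupaId);
            if (current != null && current.token.equals(token)) {
                locks.remove(bupaId);
            }
        }

        private boolean isExpired(LockEntry entry) {
            return System.currentTimeMillis() - entry.takenAt > LOCK_TIMEOUT_MS;
        }
    }

    The GUI keeps the token for the lifetime of its edit session and passes it to the change, commit, rollback and unlock services; calls whose token does not match are rejected, which also removes the ambiguity of the same user being logged on twice.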

    There are multiple strategies to resolve this. They require that the last change time is available on the changed object, and that the client keeps the change time it saw when it read the data.
    1. First one wins
    Immediately before posting its changes, the client reads the current change time from the server. If it differs from the time stored in the client buffer, the client changes are discarded.
    Example:
    1. Client A reads data
    2. Client B reads data
    3. Client B changes its buffer
    4. Client B checks if server change time has changed (result is no)
    5. Client B writes its changes to the server
    6. Client A changes its buffer
    7. Client A checks if server change time has changed (result is yes)
    8. Client A discards its changes
    2. Last one wins
    Easy. The client just writes its changes to the server, overwriting any changes that may have occurred since it read the data.
    Example:
    1. Client A reads data
    2. Client B reads data
    3. Client B changes its buffer
    4. Client B writes its changes to the server
    5. Client A changes its buffer
    6. Client A writes its changes to the server -> changes from client B are lost
    3. Everybody wins
    Most complicated. In case of concurrent changes, the client is responsible for merging its changes with the changes from other clients and for resolving any conflicts.
    Example:
    1. Client A reads data
    2. Client B reads data
    3. Client B changes its buffer
    4. Client B checks if server change time has changed (result is no)
    5. Client B writes its changes to the server
    6. Client A changes its buffer
    7. Client A checks if server change time has changed (result is yes)
    8. Client A merges its changes with changes from client B
    9. Client A writes its changes to the server
    "Last one wins" is definitely not water-proof. But even with the other strategies, data can potentially get lost in the short timeframe when the change time is checked and the actual update.
    To make it more secure, server support is required. E.g. the client could pass the change time from its read access to the server. The server can then reliably reject the update if the change data has been updated in beetween by another client.
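    A minimal sketch of that server-side check in Java, assuming the changed object carries a last-change timestamp (the class and method names are illustrative only, not an SAP or XI API):

    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: the client sends back the change time it read, and the
    // server rejects the update if someone else has written in the meantime.
    public class OptimisticBupaStore {

        public static class Record {
            public final String payload;
            public final Instant changedAt;
            Record(String payload, Instant changedAt) { this.payload = payload; this.changedAt = changedAt; }
        }

        public static class StaleDataException extends Exception {
            StaleDataException(String message) { super(message); }
        }

        private final Map<String, Record> store = new ConcurrentHashMap<String, Record>();

        /** The client remembers record.changedAt from this read and sends it back with its update. */
        public Record read(String bupaId) {
            return store.get(bupaId);
        }

        /** "First one wins": the update only succeeds if nobody has written since the caller's read. */
        public synchronized void change(String bupaId, String newPayload, Instant changedAtWhenRead)
                throws StaleDataException {
            Record current = store.get(bupaId);
            if (current != null && !current.changedAt.equals(changedAtWhenRead)) {
                throw new StaleDataException("bupa " + bupaId + " was changed by someone else - please re-read");
            }
            store.put(bupaId, new Record(newPayload, Instant.now()));
        }
    }

    Because the comparison and the write happen in one synchronized step on the server, the small window mentioned above disappears; an "everybody wins" client would catch StaleDataException, merge its changes with the current server state and retry, instead of giving up.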

  • Weblogic threads locked

    We are noticing a performance issue in our production environment, which has a cluster of 6 WebLogic instances; the servers hang very often and the entire system goes down. I have attached a thread dump collected from one of the WebLogic servers. Any idea where the issue is? Can you help, please?
    "ExecuteThread: '19' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x04f92f38 nid=0xb8c runnable [0x0713e000..0x0713fd94]
         at java.lang.Throwable.fillInStackTrace(Native Method)
         - waiting to lock <0x164ec610> (a java.lang.ClassNotFoundException)
         at java.lang.Throwable.<init>(Throwable.java:217)
         at java.lang.Exception.<init>(Exception.java:59)
         at java.lang.ClassNotFoundException.<init>(ClassNotFoundException.java:65)
         at java.net.URLClassLoader$1.run(URLClassLoader.java:199)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
         - locked <0x173b3428> (a sun.misc.Launcher$AppClassLoader)
         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
         - locked <0x173b3428> (a sun.misc.Launcher$AppClassLoader)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:282)
         - locked <0x173c1900> (a weblogic.utils.classloaders.GenericClassLoader)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:282)
         - locked <0x17387ed8> (a weblogic.utils.classloaders.GenericClassLoader)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:282)
         - locked <0x17dc9028> (a weblogic.utils.classloaders.GenericClassLoader)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:282)
         - locked <0x17db3a88> (a weblogic.utils.classloaders.ChangeAwareClassLoader)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
         at weblogic.utils.classloaders.GenericClassLoader.loadClass(GenericClassLoader.java:224)
         at weblogic.utils.classloaders.ChangeAwareClassLoader.loadClass(ChangeAwareClassLoader.java:41)
         - locked <0x17db3a88> (a weblogic.utils.classloaders.ChangeAwareClassLoader)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
         - locked <0x17db3a88> (a weblogic.utils.classloaders.ChangeAwareClassLoader)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:141)
         at org.apache.axis.utils.ClassUtils$2.run(ClassUtils.java:197)
         at java.security.AccessController.doPrivileged(Native Method)
         at org.apache.axis.utils.ClassUtils.loadClass(ClassUtils.java:171)
         at org.apache.axis.utils.ClassUtils.forName(ClassUtils.java:112)
         at org.apache.axis.encoding.ser.BaseDeserializerFactory.getDeserializerMethod(BaseDeserializerFactory.java:201)
         at org.apache.axis.encoding.ser.BaseDeserializerFactory.getGetDeserializer(BaseDeserializerFactory.java:289)
         at org.apache.axis.encoding.ser.BaseDeserializerFactory.getSpecialized(BaseDeserializerFactory.java:171)
         at org.apache.axis.encoding.ser.BaseDeserializerFactory.getDeserializerAs(BaseDeserializerFactory.java:115)
         at org.apache.axis.encoding.ser.SimpleDeserializerFactory.getDeserializerAs(SimpleDeserializerFactory.java:107)
         at org.apache.axis.encoding.DeserializationContextImpl.getDeserializer(DeserializationContextImpl.java:452)
         at org.apache.axis.encoding.DeserializationContextImpl.getDeserializerForType(DeserializationContextImpl.java:467)
         at org.apache.axis.message.MessageElement.getValueAsType(MessageElement.java:571)
         at com.axeda.sdk.webservices.handlers.LoginHandler.findLogin(LoginHandler.java:153)
         at com.axeda.sdk.webservices.handlers.LoginHandler.handleRequest(LoginHandler.java:99)
         at com.axeda.sdk.webservices.handlers.LoginHandler.invoke(LoginHandler.java:78)
         at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:71)
         at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:150)
         at org.apache.axis.SimpleChain.invoke(SimpleChain.java:120)
         at org.apache.axis.server.AxisServer.invoke(AxisServer.java:287)
         at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:854)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:339)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1077)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:465)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:348)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:7047)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3902)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2773)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)

    Try setting the 'Inactive Connection Timeout' value on the JDBC data source; it looks like your data source hangs. It would also be a good idea to enable 'Test Connections On Reserve' with a test table SQL.
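    For what it's worth, the stack above is busy inside class loading rather than JDBC: Class.forName is called from Axis' BaseDeserializerFactory, a ClassNotFoundException is being constructed deep in the classloader delegation chain, and several WebLogic classloader locks are held while that happens, so concurrent requests can queue up behind this path. If that turns out to be the bottleneck, one common mitigation is to cache the result of the lookup, including the negative result, so the expensive path is taken only once. A minimal, hypothetical sketch (written against a current JDK for brevity; the cache is an assumption, not part of Axis or WebLogic):

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: remember both successful and failed Class.forName lookups,
    // so a missing class does not trigger a new ClassNotFoundException (and classloader
    // lock traffic) on every request.
    public final class ClassLookupCache {

        private static final Map<String, Optional<Class<?>>> CACHE =
                new ConcurrentHashMap<String, Optional<Class<?>>>();

        private ClassLookupCache() { }

        public static Class<?> lookup(String className) throws ClassNotFoundException {
            Optional<Class<?>> cached = CACHE.computeIfAbsent(className, name -> {
                try {
                    return Optional.<Class<?>>of(
                            Class.forName(name, false, Thread.currentThread().getContextClassLoader()));
                } catch (ClassNotFoundException e) {
                    return Optional.<Class<?>>empty(); // remember the miss as well
                }
            });
            if (cached.isPresent()) {
                return cached.get();
            }
            throw new ClassNotFoundException(className);
        }
    }

    If the class is actually supposed to be on the classpath, the more direct fix is of course to find out why it is missing from the deployment in the first place.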

  • Active release strategies

    Dear SAP GURUS,
       Is there a transaction code where I can have a look at all the active release strategies?
    Thanks,
    Fayazuddin Syed.

    Dear Syed Fayazuddin,
    You can check this in CL24N or CL20N.
    Example:
    Transaction CL24N
    Class        => fill in the class name
    Class Type:  032
    If the release strategy is active, you will see
    status field = 1     Released, with the green tick icon.
    Otherwise:
    2     Locked
    3     Incomplete
    Sometimes, if your release strategy is inconsistent or corrupted, these transactions will also show a message indicating the inconsistency.
    Thanks
    Loke Foong

  • How to lock Logic Pro?

    My studio can be quite busy with people walking through. Though I have never had a problem with anyone tampering with my control surface, it does worry me sometimes. Is there any way to fully lock Logic Pro so that changes can't be made from my control surface? When I lock the Mac, my control surface is still fully operational, as is Logic; they are just not shown on the display.
    Thanks!

    Strategic placement of two guillotines does it for me!
    Cheers!

  • IE7 locks onto Mobil layout vs desktop layout

    I am developing a new site using the fluid grid layout.  I have tested it on my laptop in Firefox v18, Internet Explorer 9, and Chrome v24.  Everything was fine until I tested it on an XP desktop.  Everything looked fine in Firefox v18 on the desktop; however, Internet Explorer 7 did not show Content1 and Content2 as the side-by-side columns that all the other browsers showed.  The IE Developer Toolbar shows that the division style for them is width=100%.  In reviewing my CSS, the only place where Content1 and Content2 have width=100% is in the Mobile Layout for 480px and below.  The Tablet and Desktop Layouts both set those widths below 50%.
    Here is my html:
    <!doctype html>
    <!--[if lt IE 7]> <html class="ie6 oldie"> <![endif]-->
    <!--[if IE 7]>    <html class="ie7 oldie"> <![endif]-->
    <!--[if IE 8]>    <html class="ie8 oldie"> <![endif]-->
    <!--[if gt IE 8]><!-->
    <html class="">
    <!--<![endif]-->
    <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Conestoga Wagon Web Service - Your Business Ally</title>
    <meta name="author" content="Conestoga Wagon Web Service">
    <meta name="description" content="Conestoga Wagon Web Service, your business ally; where the mystique of yesteryear meets the technolgy of today and provides small business and organizations access to affordable professional web site design and hosting.">
    <meta name="keywords" content="cheap website design,website design,affordable website design,custom website design,small businesses,website  solutions,idaho website design,website designers,website hosting,web hosting">
    <link href="css/boilerplate.css" rel="stylesheet" type="text/css">
    <link href="css/fluid.css" rel="stylesheet" type="text/css">
        <script src="js/jquery-1.4.1.min.js" type="text/javascript">
    </script>
        <script src="js/jquery.jcarousel.pack.js" type="text/javascript">
    </script>
        <script src="js/jquery-func.js" type="text/javascript"></script>
    <!--
    To learn more about the conditional comments around the html tags at the top of the file:
    paulirish.com/2008/conditional-stylesheets-vs-css-hacks-answer-neither/
    Do the following if you're using your customized build of modernizr (http://www.modernizr.com/):
    * insert the link to your js here
    * remove the link below to the html5shiv
    * add the "no-js" class to the html tags at the top
    * you can also remove the link to respond.min.js if you included the MQ Polyfill in your modernizr build
    -->
    <!--[if lt IE 9]>
    <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
    <![endif]-->
    <script src="js/respond.min.js">
    </script>
    </head>
    <body>
    <div class="gridContainer clearfix">
        <div id="Header">
        <!-- Start of Header -->
        <img src="images/header.png" alt="Conestoga Wagon Web Service Header Image">
        <!-- End of Header -->
        </div>
        <div id="Menu">
        <!-- Start of the Navigation -->
                        <ul>
                            <li><a href="index.html">Home</a></li>
                            <li><a href="about.html">About Us</a></li>
                            <li><a href="design.html">Design</a></li>
                            <li><a href="hosting.html">Hosting</a></li>
                            <li><a href="contact.html">Contact Us</a></li>
                            <li><a href="faq.html">FAQ</a></li>
                        </ul>
        <!-- End of the Navigation -->   
        </div>
        <!-- Start of Content Area -->
        <div id="Content1">
            <h2>This is a test of H2 on Home page</h2>
      This is the content for Layout Division Tag "Content"
      <p>Combined with optimal use of human resources, big is no longer impregnable to ensure that non-operating cash outflows are assessed. An important ingredient of business process reengineering the vitality of conceptual synergies is of supreme importance the new golden rule gives enormous power to those individuals and units. Exploitation of core competencies as an essential enabler, through the  adoption of a proactive stance, the astute manager can adopt a position at the vanguard.</p>
      <p>As knowledge is fragmented into specialities from binary cause and effect to complex patterns, an important ingredient of business process reengineering. Building flexibility through spreading knowledge and self-organization, benchmarking against industry leaders, an essential process, should be a top priority at all times defensive reasoning, the doom loop and doom zoom. Maximization of shareholder wealth through separation of ownership from management the vitality of conceptual synergies is of supreme importance as knowledge is fragmented into specialities. The balanced scorecard, like the executive dashboard, is an essential tool building a dynamic relationship between the main players. An important ingredient of business process reengineering in a collaborative, forward-thinking venture brought together through the merging of like minds.</p>
      <p>By moving executive focus from lag financial indicators to more actionable lead indicators, working through a top-down, bottom-up approach, the strategic vision - if indeed there be one - is required to identify. Organizations capable of double-loop learning, that will indubitably lay the firm foundations for any leading company to focus on improvement, not cost. By moving executive focus from lag financial indicators to more actionable lead indicators, benchmarking against industry leaders, an essential process, should be a top priority at all times building flexibility through spreading knowledge and self-organization. To ensure that non-operating cash outflows are assessed.</p>
      <p>Measure the process, not the people. As knowledge is fragmented into specialities combined with optimal use of human resources, in order to build a shared view of what can be improved. Exploiting the productive lifecycle the components and priorities for the change program motivating participants and capturing their expectations. Big is no longer impregnable as knowledge is fragmented into specialities.</p>
      <p>By moving executive focus from lag financial indicators to more actionable lead indicators, organizations capable of double-loop learning, building a dynamic relationship between the main players. To focus on improvement, not cost, quantitative analysis of all the key ratios has a vital role to play in this exploitation of core competencies as an essential enabler. From binary cause and effect to complex patterns, the new golden rule gives enormous power to those individuals and units, to experience a profound paradigm shift.</p>
        </div>
        <div id="Content2">This is the content for Layout Div Tag "Content2"
          <p>Love's not time's fool, though rosy lips and cheeks oh, no, it is an ever fixed mark or bends with the remover to remove. Love alters not with his brief hours and weeks, it is the star to every wand'ring bark, within his bending sickle's compass come. Admit impediments; love is not love that looks on tempests and is never shaken; oh, no, it is an ever fixed mark. If this be error and upon me proved, whose worth's unknown, although his height be taken. But bears it out even to the edge of doom.</p>
          <p>It is the star to every wand'ring bark, which alters when it alteration finds. Love's not time's fool, though rosy lips and cheeks within his bending sickle's compass come; which alters when it alteration finds. Admit impediments; love is not love if this be error and upon me proved, oh, no, it is an ever fixed mark. Love alters not with his brief hours and weeks, whose worth's unknown, although his height be taken. That looks on tempests and is never shaken; admit impediments; love is not love love's not time's fool, though rosy lips and cheeks.</p>
          <p>Let me not to the marriage of true minds it is the star to every wand'ring bark, within his bending sickle's compass come. Which alters when it alteration finds, oh, no, it is an ever fixed mark love's not time's fool, though rosy lips and cheeks. Admit impediments; love is not love let me not to the marriage of true minds that looks on tempests and is never shaken. Within his bending sickle's compass come; if this be error and upon me proved, I never writ, nor no man ever loved. Love's not time's fool, though rosy lips and cheeks love alters not with his brief hours and weeks, oh, no, it is an ever fixed mark.</p>
          <p>Let me not to the marriage of true minds but bears it out even to the edge of doom. Oh, no, it is an ever fixed mark that looks on tempests and is never shaken; or bends with the remover to remove. Within his bending sickle's compass come; but bears it out even to the edge of doom.</p>
          <p>Admit impediments; love is not love love alters not with his brief hours and weeks, love's not time's fool, though rosy lips and cheeks. It is the star to every wand'ring bark, or bends with the remover to remove. Within his bending sickle's compass come; oh, no, it is an ever fixed mark whose worth's unknown, although his height be taken. Love's not time's fool, though rosy lips and cheeks.</p>
        </div>
      <!-- End of Content Area -->
        <div id="Footer"> <hr class="divider">
        <!-- Start of Footer Area -->
      <script type="text/javascript">
    now=new Date();
    year=now.getFullYear();
    </script>Copyright &copy;  2012-<script type="text/javascript">
    document.write(year);
    </script>
    <strong> Conestoga Wagon Web Service</strong><br>
    |  <a class="active" href="index.html">Conestoga Wagon Web Service</a> | <a href="hosting.html">Conestoga Wagon Web Hosting</a> | <a href="proposal.html">Request Proposal</a> | <a href="tos.html">TOS</a>  |  <a href="privacy.html">Privacy Policy</a> |  <a href="hostagreement.html">Web Hosting Agreement</a> | <br>
    <strong>An Idaho owned and operated Web Design and Hosting company.</strong>
        <!-- End of Footer Area -->
        </div>
    </div>
    </body>
    </html>
    Here is my css:
    @charset "utf-8";
    /* Simple fluid media
       Note: Fluid media requires that you remove the media's height and width attributes from the HTML
       http://www.alistapart.com/articles/fluid-images/
    */
    img, object, embed, video {
        max-width: 100%;
    }
    /* IE 6 does not support max-width so default to width 100% */
    .ie6 img {
        width:100%;
    }
    /*
        Dreamweaver Fluid Grid Properties
        dw-num-cols-mobile:        5;
        dw-num-cols-tablet:        8;
        dw-num-cols-desktop:    12;
        dw-gutter-percentage:    10;
        Inspiration from "Responsive Web Design" by Ethan Marcotte
        http://www.alistapart.com/articles/responsive-web-design
        and Golden Grid System by Joni Korpi
        http://goldengridsystem.com/
    */
    /* Mobile Layout: 480px and below. */
    .gridContainer {
        margin-left: auto;
        margin-right: auto;
        width: 98.1818%;
        padding-left: 0.909%;
        padding-right: 0.909%;
    }
    #Header {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
    }
    #Menu {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
    }
    #Menu ul{
        list-style-type: none;
        text-align: center;
        font-weight: bold;
        float: right;
        padding-top: 0;
        padding-right: 0px;
        padding-bottom: 0;
        padding-left: 0;
    }
    #Menu ul li{
        float: left;
        display: inline;
    }
    #Menu ul li a{
        float: left;
        display: inline;
        width: 151px;
        height: 100px;
        background: url(../images/nav.png);
        text-decoration: none;
        line-height: 67px;
        color: #ffb400;
    }
    #Menu ul li a.active,
    #Menu ul li a:hover{ color:#fff; background: url(../images/nav-active.png) }
    #Content1 {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
        text-align: justify;
    }
    #Footer {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
        color: #ffb400;
        text-align: center;
        padding-top: 0px;
        padding-bottom: 10px;
    }
    #Footer a {color: #ffb400; }
    #Content2 {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
        text-align: justify;
    }
    /* Tablet Layout: 481px to 768px. Inherits styles from: Mobile Layout. */
    @media only screen and (min-width: 481px) {
    .gridContainer {
        width: 98.8636%;
        padding-left: 0.5681%;
        padding-right: 0.5681%;
    }
    #Header {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
    }
    #Menu {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
    }
    #Content1 {
        clear: both;
        float: left;
        margin-left: 0;
        width: 49.4252%;
        display: block;
    }
    #Footer {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
    }
    #Content2 {
        clear: none;
        float: left;
        margin-left: 1.1494%;
        width: 49.4252%;
        display: block;
    }
    }
    /* Desktop Layout: 769px to a max of 1232px.  Inherits styles from: Mobile Layout and Tablet Layout. */
    @media only screen and (min-width: 769px) {
    .gridContainer {
        width: 99.2424%;
        max-width: 1232px;
        padding-left: 0.3787%;
        padding-right: 0.3787%;
        margin: auto;
    }
    #Header {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
    }
    #Menu {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
    }
    #Content1 {
        clear: both;
        float: left;
        margin-left: 0;
        width: 49.6183%;
        display: block;
    }
    #Footer {
        clear: both;
        float: left;
        margin-left: 0;
        width: 100%;
        display: block;
    }
    #Content2 {
        clear: none;
        float: left;
        margin-left: 2%;
        width: 48%;
        display: block;
    }
    }
    It just seems like IE7 is locking onto the very first (mobile) section of the CSS and never moving on to the tablet or desktop layouts. I just haven't been able to solve this, so any assistance would be greatly appreciated.
    Thanks,
    Verne

    I have been doing a lot of looking around for possible solutions to this IE7 issue with media queries (IE8 and earlier do not understand CSS media queries natively, so they only ever apply the base mobile rules).  While there were a lot of diverse views on the subject, I came across an article that offered an easier solution than everything else.  I downloaded the js file it recommended, then copied and pasted
    <!-- css3-mediaqueries.js for IE less than 9 -->
    <!--[if lt IE 9]>
    <script src="http://css3-mediaqueries-js.googlecode.com/svn/trunk/css3-mediaqueries.js"></script>
    <![endif]-->
    into my document and now IE 7 shows the web page just like FF, IE9, and Chrome does.  It was a super easy fix, even for a novice.
    I hope this helps someone else like it has helped me.
    Thanks,
    Verne
    Message was edited by: in-idaho
