Turning On Snapshot Isolation Gotchas

Hello Experts,
We have been experiencing a high number of deadlocks while using the MERGE statement, and turning on Snapshot Isolation completely solves our problem; our throughput and concurrency were not affected at all.
We did load testing and monitored the tempdb version store size, and it was nothing significant; we have 64 GB of memory allocated on the production server. Our team did its reading and research primarily from the online sources below.
My question is: "Are there any gotchas in turning on Snapshot Isolation that you won't see right away?" I want to learn from others' experiences before we venture into turning it on in our production environment. I saw some folks who ended up with a 60 GB version store because of a three-month-old active transaction.
What kind of preventive and maintenance scripts would be useful to monitor the system and take corrective action?
I have a few scripts to monitor the tempdb version store size and the Perfmon transaction counters. Are there any better scripts/tools available?
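For illustration, here is a minimal sketch of the kind of check I mean, using the standard version-store DMVs (thresholds and alerting are left out):

    -- Current version store size, from the Transactions perfmon counters
    SELECT cntr_value AS version_store_kb
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Version Store Size (KB)';

    -- Longest-running snapshot transactions, which keep the version store from shrinking
    SELECT TOP (10)
           st.session_id,
           st.elapsed_time_seconds,
           at.name,
           at.transaction_begin_time
    FROM sys.dm_tran_active_snapshot_database_transactions AS st
    JOIN sys.dm_tran_active_transactions AS at
         ON at.transaction_id = st.transaction_id
    ORDER BY st.elapsed_time_seconds DESC;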
Kimberly Tripp video on isolation levels:
http://download.microsoft.com/download/6/7/9/679B8E59-A014-4D88-9449-701493F2F9FD/HDI-ITPro-TechNet-mp4video-MCM_11_SnapshotIsolationLecture(4).m4v
Kendra Little on Snapshot Isolation:
http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
Microsoft links: https://msdn.microsoft.com/en-us/library/ms188277(v=sql.105).aspx
https://msdn.microsoft.com/en-us/library/bb522682.aspx
SQL Team link: http://www.sqlteam.com/article/transaction-isolation-and-the-new-snapshot-isolation-level
Idera short article on tempdb: http://sqlmag.com/site-files/sqlmag.com/files/uploads/2014/01/IderaWP_Demystifyingtempdb.pdf
Jim Gray example by Craig Freedman: http://blogs.msdn.com/b/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
Thanks in advance.
~I90Runner

It is not clear which option you have enabled: RCSI or SI?
Downsides:
Excessive tempdb usage due to version store activity. Think about a session that deletes 1M rows: all of those rows must be copied to the version store, regardless of that session's transaction isolation level and regardless of whether any other sessions are running in optimistic isolation levels at the moment the deletion starts.
Extra fragmentation – SQL Server adds a 14-byte version tag (version store pointer) to rows in the data files when they are modified. This tag stays until the index is rebuilt.
Development challenges – again, error 3960 with the snapshot isolation level. Another example, in both isolation levels: trigger- or code-based referential integrity. You can usually solve that by adding a WITH (READCOMMITTEDLOCK) hint where needed.
While switching to RCSI can be a good emergency technique to remove blocking between readers and writers (if you can live with the overhead AND readers are using read committed), I would suggest finding the root cause of the blocking.
Confirm that you actually have locking issues – check whether there are shared lock waits in the wait stats, that there are no lock escalations blocking readers, that queries are optimized, and so on; for example, with the checks sketched below.
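For illustration, a rough sketch of those checks against the standard DMVs (not a complete audit):

    -- Cumulative lock waits since the last restart
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'LCK_M_%'
    ORDER BY wait_time_ms DESC;

    -- Lock escalations per index in the current database
    SELECT OBJECT_NAME(ios.object_id) AS table_name,
           ios.index_id,
           ios.index_lock_promotion_count
    FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
    WHERE ios.index_lock_promotion_count > 0
    ORDER BY ios.index_lock_promotion_count DESC;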
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence

Similar Messages

  • Please tell me: what is the difference between the snapshot isolation level of MSSQL and Oracle's isolation level?

    Hi,
    In MSSQL I am using the following setup.
    I have two databases, D1 and D2, and I am using snapshot isolation (ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON) in both databases.
    The situation is as follows:
    1) There is one SP, sp1 (it can be in either database, d1 or d2), which updates d2 from d1.
    2) d2 is used for reading by the web, except for the SP sp1 above.
    3) d1 gets updates from the web under read committed isolation.
    4) Both databases are on the same instance of MSSQL.
    Q1) I wanted to know how to implement the same thing in Oracle 11x Express Edition.
    Q2) Is there any difference between the snapshot isolation level of MSSQL and Oracle's?
    Any link would be helpful.
    yours sincerely
    yours sincerely


  • Snapshot isolation transaction aborted due to update conflict

    Hi Forum,
    Can anyone help me with a solution to the problem below?
    We are developing an MVC3 application with SQL Server 2008, and we are getting this error:
    Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.Tb_M_Print' directly or indirectly in database 'DB_Production' to update, delete, or insert the row that has been modified or deleted
    by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
    Please tell me how to proceed with the above problem.
    Rama

    change the isolation level for the update/delete statement .
    The error message already mentions the solution.
    See also MSDN
    Lesson 1: Understanding the Available Transaction Isolation Levels => Update Conflicts
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
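    One common way to act on that advice, sketched loosely here (the key and column names are placeholders, not the real schema), is to take an update lock on the row before modifying it, so a concurrent writer waits instead of failing with error 3960:

        DECLARE @Id int = 1;                       -- placeholder key value

        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            -- UPDLOCK turns the potential update conflict into ordinary blocking
            SELECT 1
            FROM dbo.Tb_M_Print WITH (UPDLOCK)
            WHERE PrintId = @Id;                   -- placeholder column

            UPDATE dbo.Tb_M_Print
            SET    PrintedOn = SYSDATETIME()       -- placeholder column
            WHERE  PrintId = @Id;
        COMMIT;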

  • Segmentation fault when using snapshot isolation with Berkeley DB 6.1.19 and 5.1.29

    Hello,
    I have been experimenting with snapshot isolation with Berkeley DB, but I find that it frequently triggers a segmentation fault when write transactions are in progress.  The following test program reliably demonstrates the problem in Linux using either 5.1.29 or 6.1.19. 
    https://anl.app.box.com/s/3qq2yiij2676cg3vkgik
    Compilation instructions are at the top of the file.  The test program creates a temporary directory in /tmp, opens a new environment with the DB_MULTIVERSION flag, and spawns 8 threads.  Each thread performs 100 transactional put operations using DB_TXN_SNAPSHOT.  The stack trace when the program crashes generally looks like this:
    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffff7483700 (LWP 11871)]
    0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    (gdb) where
    #0  0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #1  0x00007ffff7883c30 in __bam_get_root ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #2  0x00007ffff7883dca in __bam_search ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #3  0x00007ffff7870246 in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #4  0x00007ffff787468f in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #5  0x00007ffff79099f4 in __dbc_iput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #6  0x00007ffff7906c10 in __db_put ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #7  0x00007ffff79191eb in __db_put_pp ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #8  0x0000000000400f14 in thread_fn (foo=0x0)
        at ../tests/transactional-osd/bdb-snapshot-write.c:154
    #9  0x00007ffff7bc4182 in start_thread (arg=0x7ffff7483700)
        at pthread_create.c:312
    #10 0x00007ffff757f38d in clone ()
        at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    I understand that this test program, with 8 concurrent (and deliberately conflicting) writers, is not an ideal use case for snapshot isolation, but this can be triggered in other scenarios as well.
    You can disable snapshot isolation by toggling the value of the USE_SNAP #define near the top of the source, and the test program then runs fine without it.
    Can someone help me to identify the problem?
    many thanks,
    -Phil

    Hi Phil,
    We have taken a look at this in more detail and there was a bug in the code. We have fixed it, and we will roll the fix into our next 6.1 release. If you would like an early patch that goes on top of 6.1.19, please email me at [email protected], reference this forum post, and I can get a patch sent out to you. It will be a .diff file that you apply to the source code before rebuilding the library. Once again, thanks for finding the issue and for providing a great test program, which helped tremendously in getting this resolved.
    thanks
    mike

  • Snapshot isolation level usage

    Dear All,
    There are some transaction tables in which more than one user adds and updates records (only).
    Whatever they add or update in the transaction tables, based on that entry they also add a record to table A1.
    Table A1 has two columns: one keeps the name of the transaction table and the other keeps the PK (primary key) of the row in that transaction table.
    So table A1 only ever gets inserts, it gets an entry only for transaction tables, and only when a transaction table gets an entry.
    At the same time there is a process (ts) which reads table A1 on a schedule: it picks up all records from table A1, reads the data from the transaction tables on the basis of the PKs stored there, and then inserts all the rows it read into a new temp table.
    At the end of the transaction it deletes the records from table A1.
    After some time it again picks up new records from table A1 and repeats the process.
    For process (ts) I want to use ALLOW_SNAPSHOT_ISOLATION so that users can keep on entering records.
    Q1) The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before one can start a transaction that uses the SNAPSHOT isolation level. I wanted to know whether I should set the option to OFF after process (ts) completes and switch it on again when process (ts) starts again. That is, will keeping it on all the time affect the database in any way?
    Q2) Will ALLOW_SNAPSHOT_ISOLATION ON affect other isolation levels' transactions, or only snapshot isolation transactions? That is, I have old stored procedures and front-end applications (web or Windows, on .NET) which use the default isolation level.
    Q3) Is my choice of isolation level for process (ts) correct, or is there another solution?
    Note: the information here is quite brief, but I won't be able to give full details.
    yours sincerely

    >Q1) should i set the option to OFF after the process(ts) is complete
    No, keep it on.
    >Q2) ALLOW_SNAPSHOT_ISOLATION  ON , will affect other isolation level's transactions
    No, it will not affect any other transaction isolation level.
    >Q3) is my choice of isolation level for process(ts) is correct or there can be any other solution.
    Seems fine, although there are probably many other solutions.
    David
    David http://blogs.msdn.com/b/dbrowne/
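    For what it's worth, a minimal sketch of what process (ts) could look like once the option is on (table and column names are placeholders for the ones described above):

        ALTER DATABASE d1 SET ALLOW_SNAPSHOT_ISOLATION ON;   -- one-time; leave it on

        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            -- grab the current batch of pointers from A1
            SELECT TableName, PkValue                         -- placeholder columns
            INTO   #work
            FROM   dbo.A1;

            -- ... read the referenced transaction-table rows and insert them
            --     into the staging/temp table here ...

            -- remove only the rows this batch processed; newer inserts are untouched
            DELETE a
            FROM   dbo.A1 AS a
            JOIN   #work AS w
              ON   w.TableName = a.TableName
             AND   w.PkValue   = a.PkValue;
        COMMIT;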

  • (If snapshot isolation is enabled in the database) Is a version chain generated when a read committed transaction executes, or is a version chain only generated while a snapshot transaction is running?

    hi,
    I have enabled the snapshot isolation level in my database. All queries execute under read committed isolation; only one big transaction uses snapshot isolation.
    Q1) I wanted to know: if no snapshot isolation transaction is running but the database is enabled for snapshot, will normal queries using read committed create versions or not?
    yours sincerely.

    Enabling the snapshot isolation level at the database level does not change the behavior of queries running in any other isolation level. With that option you eliminate blocking between readers and writers, and writers do not block each other as long as they do not update the same rows, although it can lead to error 3960 (the data has been modified by another session).
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence
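    Note that once the option is on, version generation is driven by the data modifications themselves (as described in the downsides above), not by the isolation level of the readers. To see how much version store is in use and which objects generate the most versions, a rough sketch:

        -- Version store pages currently reserved in tempdb (run in the context of tempdb)
        SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
        FROM sys.dm_db_file_space_usage;

        -- Objects producing the most row versions
        SELECT database_id, rowset_id, aggregated_record_length_in_bytes
        FROM sys.dm_tran_top_version_generators;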

  • Berkeley DB's Snapshot Isolation

    After reading the reference document, I think snapshot isolation means that a write operation will take a read lock on the data page under a transaction, and that afterwards another transaction with a read operation can read the page. But I coded it like this:
    step1: txn1 updates the page but is not committed
    step2: txn2 reads the page
    step3: commit txn1
    step4: commit txn2
    The program stops in step2 and waits forever. If I change step1 to a read, it executes normally. It seems as if snapshot isolation still takes a write lock on the page. I am confused about how snapshot isolation works.
    If someone can give me an example program, or tell me how it works, I will be very thankful.

    Hi Mike, thanks for your answer. I read the document again and again today. According to the suggestion, we usually use snapshot in a read-only transaction and another update transaction to write. So I recoded my program with a read-only transaction and an update transaction, but it is also blocked like before. I feel very confused. I have put my program here and hope you can help me with it. Thanks for your time.
    #include"db_cxx.h"
    #include<iostream>
    #include<cstring>
    int main()
    {
        u_int32_t env_flags = DB_CREATE |
                              DB_INIT_LOCK |
                              DB_INIT_LOG |
                              DB_INIT_MPOOL |
                              DB_INIT_TXN;
        const char* home = "envHome";
        u_int32_t db_flags = DB_CREATE | DB_AUTO_COMMIT;
        const char* fileName = "envtest.db";
        Db* dbp = NULL;
        DbEnv myEnv(0);
        try{
            myEnv.open(home,env_flags,0);
            myEnv.set_flags(DB_MULTIVERSION,1);
            dbp = new Db(&myEnv,0);
            dbp->open(NULL,      //Txn pointer
                      fileName,  //File name
                      NULL,      //Logical db name
                      DB_BTREE,  //Database type
                      db_flags,  //Open flags
                      0);        //File mode
        }catch(DbException &e){
            std::cerr<<"Error when opening database and Environment:"
                     <<fileName<<","<<home<<std::endl;
            std::cerr<<e.what()<<std::endl;
        }
        //put data normally
        char *key1 = "luffy";
        char *data1 = "op";
        char *key2 = "usopp";
        char *data2 = "brave";
        Dbt pkey1(key1,strlen(key1)+1);
        Dbt pdata1(data1,strlen(data1)+1);
        Dbt pkey2(key2,strlen(key2)+1);
        Dbt pdata2(data2,strlen(data2)+1);
        dbp->put(NULL,&pkey1,&pdata1,0);
        dbp->put(NULL,&pkey2,&pdata2,0);
        //use a txn cursor to read and another cursor to modify before commit
        try{
            DbTxn *txn1 = NULL;
            myEnv.txn_begin(NULL,&txn1,DB_SNAPSHOT);
            Dbc *cursorp = NULL;
            dbp->cursor(txn1,&cursorp,0);
            Dbt tempData1,tempKey2,tempData2;
            tempData2.set_flags(DB_DBT_MALLOC);
            cursorp->get(&pkey1,&tempData1,DB_SET);
            cursorp->get(&tempKey2,&tempData2,DB_NEXT);
            //cout just to see if it is right
            std::cout<<(char*)pkey1.get_data()<<" : "<<(char*)tempData1.get_data()<<std::endl
                     <<(char*)tempKey2.get_data()<<" : "<<(char*)tempData2.get_data()<<std::endl;
            //txn2 to modify
            DbTxn *txn2 = NULL;
            myEnv.txn_begin(NULL,&txn2,0);
            Dbc *temcur = NULL;
            dbp->cursor(txn2,&temcur,0);
            temcur->put(&pkey1,&pdata2,DB_KEYFIRST); //the program stops here and waits forever. if snapshot isolation made a copy before, why does it still block here?
                                                     //without this line there is no deadlock; that means a write lock was put on the page before
            //commit the txns
            txn1->commit(0);
            txn2->commit(0);
        }catch(DbException &e){
            std::cerr<<"Error when using a txn"<<std::endl;
        }
        try{
            dbp->close(0); //dbp should close before environment
            myEnv.close(0);
        }catch(DbException &e){
            std::cerr<<"Error when closing database and environment:"
                     <<fileName<<","<<home<<std::endl;
            std::cerr<<e.what()<<std::endl;
        }
        return 0;
    }

  • Snapshot isolation in combination with service broker

    Hi all,
    I'm currently using the service broker in combination with snapshot isolation on the database.
    The notification request is executed under read committed isolation. The code looks like this:
    SqlDependency dependency = new SqlDependency(command, null, 0);
    dependency.OnChange += eventHandler;
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            command.Transaction = tran;
            command.ExecuteNonQuery();
            tran.Commit();
        }
    }
    The request is successfully created and works fine at first glance.
    Now here is my problem:
    I created a notification request that should monitor two objects. The query (executed under read committed) looks something like this:
    SELECT Id, DataVersion FROM dbo.MyObjects WHERE Id = 1 OR Id = 2
    Afterwards I delete both objects in separate nested transactions. Both of them are running under snapshot isolation. It looks something like this:
    using (SqlConnection conn1 = new SqlConnection(connectionString))
    {
        conn1.Open();
        using (SqlTransaction tran1 = conn1.BeginTransaction(IsolationLevel.Snapshot))
        {
            using (SqlConnection conn2 = new SqlConnection(connectionString))
            {
                conn2.Open();
                using (SqlTransaction tran2 = conn2.BeginTransaction(IsolationLevel.Snapshot))
                {
                    SqlCommand command2 = conn2.CreateCommand();
                    command2.Transaction = tran2;
                    command2.CommandText = "DELETE FROM MyObjects WHERE Id = 2";
                    command2.ExecuteNonQuery();
                    tran2.Commit();
                }
            }
            SqlCommand command1 = conn1.CreateCommand();
            command1.CommandText = "DELETE FROM MyObjects WHERE Id = 1";
            command1.Transaction = tran1;
            command1.ExecuteNonQuery();
            tran1.Commit(); // -> Conflict exception
        }
    }
    A conflict exception is raised during the commit of the last transaction. The conflict seems to occur in the table "sys.query_notification_xxxxxxxxx". This is the exact message:
    An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
    Additional information: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'sys.query_notification_45295271' directly or indirectly in database 'MyDatabase' to update, delete,
    or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
    Is there any restriction for Service Broker that prohibits the use of snapshot isolation?
    Thanks in advance.

    No, the error has nothing to do with Service Broker. Or for that matter, query notifications, which is the feature you are actually using. (Query notifications uses Service Broker, but Service Broker != Query notification.)
    You would get the same error if you had a trigger on MyObjects that tried to update the same row for both deletions. A snapshot transaction gives you a consistent view of the database at a certain point in time. Consider this situation:
    snapshot transaction A, which started at time T, updates a row R at time T2; snapshot transaction B, which started at time T1, updates the same row at time T3. Had they been regular non-snapshot transactions, transaction B would have been blocked as soon as it tried
    to read R, but snapshot transactions do not get blocked. If B were nevertheless permitted to update R, the update from transaction A would be lost. Assume that the update is an incremental one, for instance updating the cash balance of an account; you can see that
    this cannot be permitted.
    In your case, the row R happens to be a row in an internal table for query notifications, but it is the application design that is the problem. There is no obvious reason to use snapshot isolation in your example, since you are only deleting. And there is
    even less reason to have two transactions and connections for the task.
    Erland Sommarskog, SQL Server MVP, [email protected]
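    The effect described here is easy to reproduce without Service Broker at all; for example, a two-session sketch against a throwaway table (names invented for illustration):

        -- setup, in a database with ALLOW_SNAPSHOT_ISOLATION ON
        CREATE TABLE dbo.ConflictDemo (Id int PRIMARY KEY, Val int);
        INSERT dbo.ConflictDemo (Id, Val) VALUES (1, 0);

        -- session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        UPDATE dbo.ConflictDemo SET Val = Val + 1 WHERE Id = 1;
        -- leave the transaction open

        -- session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        UPDATE dbo.ConflictDemo SET Val = Val + 1 WHERE Id = 1;
        -- blocks here, then fails with error 3960 as soon as session 1 commits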

  • Snapshot isolation

    We have setup snapshot level isolation in our Berkeley DB XML database, and started getting the following errors during queries after a while:
    PANIC: Cannot allocate memory
    We set the max lockers at 10,000, max locks at 1,000,000 and max lock objects at 1,000,000 as well. We are also very careful to commit or abort every transaction initiated. All of our operations are done under the context of an explicit transaction. Could there be some memory leak? Should we be aware of some other caveats?
    Thank you,
    Alexander.

    Hi Alexander,
    I would suggest running the application under a memory leak checker/debugger, such as Purify or Valgrind. If you do get something suspicious please report it.
    Though, when running with snapshot isolation you have to be prepared for the cost that MVCC (MultiVersion Concurrency Control) implies, that is, larger cache size requirements.
    Pages are being duplicated when a writer takes a read lock on a page, therefore operating on a copy of that page. This avoids the situation where other writers would block due to a read lock held on the page, but it also means that the cache will fill up faster. You might need a larger cache in order to hold the entire working set in memory.
    Note that the need for more cache is amplified when you have a large number of concurrent, active, long-running transactions, as this increases the volume of active page versions (copies of pages that cannot safely be freed). In such a situation it may be worth running updates at serializable isolation and only running queries at snapshot isolation. The queries will not block updates, or vice versa, and the updates will not force page versions to be kept for long periods.
    You should try keeping the transactions running under snapshot isolation as short as possible.
    Of course, the recommended approach to resolve this issue is to increase the cache size, if possible. You can estimate how large your cache should be by taking a checkpoint, followed by a call to the DB_ENV->log_archive() method. The amount of cache required is approximately double the size of the remaining log files (that is, the log files that cannot be archived).
    Also, along with increasing the cache size you may need to increase the number of maximum active transactions that the application supports.
    Please review the following places for further information:
    [http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_read.html#id1609299]
    [http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/Java/isolation.html#snapshot_isolation]
    Regards,
    Andrei

  • SELECT hangs despite Snapshot Isolation

    - I'm trying to learn isolation levels.  
        SET TRANSACTION ISOLATION LEVEL Read Snapshot
    I thought this would allow me to issue a SELECT on a dirty row (I thought it would show me the clean version of the row). But my test failed - the SELECT hangs/pends. Why?
    - First I dirtied a row:
        if object_id('tblTestTrans') is not null drop table tblTestTrans
        create table tbltestTrans (
            id int
        )
        insert into tbltestTrans (id) values (1)
        Begin Tran
            Update tbltestTrans set id = 2
    Then I ran a SELECT in a new query window:
        ALTER DATABASE TestDb Set Allow_SnapShot_Isolation ON
        SET TRANSACTION ISOLATION LEVEL Snapshot
        select * from tblTestTrans -- hangs
    The select hangs. Why?   
       

    Your test seems a little incomplete. You should set the connection that starts the update transaction to snapshot before running the update. Then you can open another connection, set it to snapshot, and see only the clean version of the table (with
    only one record).
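    For comparison, a version of the test that behaves as expected, assuming the database option is switched on before the update transaction starts and that there are no other open transactions at that point:

        -- one-time setup, with no open transactions in the database
        ALTER DATABASE TestDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- session 1: dirty a row and leave the transaction open
        BEGIN TRAN;
        UPDATE tblTestTrans SET id = 2;

        -- session 2: reads the last committed version instead of blocking
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        SELECT * FROM tblTestTrans;   -- returns id = 1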

  • Can't turn off "snapshot" on adobe pdf

    I've been doing online work and had to use Adobe for that - not my preferred choice, but HMRC UK forces this.
    The option to save at the end of the work wouldn't work, so I decided to take a snapshot of the finished screen. I went to Edit > "Take Snapshot" and clicked - no snapshot was taken that I can locate, and now the cursor won't show on that Adobe page. I click on the Adobe page and the cursor becomes crosshairs, and the page turns blue/burgundy - when I click again the colour goes, but the page won't respond at all and the crosshairs stay.
    I tried to deselect snapshot under Edit by clicking on it again, but nothing happens; the tick is still showing next to "Take Snapshot" - and I think this is why the cursor won't show and the page is not acting as it should. Does anyone know how I can resolve this?

    Found info elsewhere that resolved this but just in case anyone else has this problem - click on the adobe pdf, then click escape!!! Simples!!

  • Turn off snapshot sound

    I want to take snapshots while in video chat without the snapshot sound playing. How can I turn this off? Also, is there a way to completely mute all iChat sounds? I've already turned off all alert sounds. Thanks.

    Hi,
    iChat plays the sounds set in iChat Menu > Preferences > Alerts as if they were System Alerts.
    This means that the Alert Volume under System Preferences > Sound > Sound Effects is the volume at which iChat alert sounds are heard.
    You could reduce this to zero or mute the Master volume.
    The Camera Shutter sound is played from the module that controls the Snapshot and is not a preference within iChat (you have no control other than System Preferences > Sound).
    A quick test shows that the sound is played as a System Alert controlled level.
    I hope this helps.
    7:49 PM Friday; March 27, 2009

  • Is snapshot isolation correct in following situation.

    Hi,
    I am using two databases, d1 and d2.
    From one database (d1) I get data and transfer it to (d2).
    The transferring SP (uspTransfer) is in (d1) and is called by another SP (startuspTrn) from the master database of (d1),
    because startuspTrn is scheduled by
    sp_procoption
    and startuspTrn keeps calling uspTransfer every 10 seconds.
    So I had to use d1.dbo.table1 to d2.dbo.table1 in my query (uspTransfer), which is a dynamic query.
    Please tell me whether snapshot is suitable for this situation.
    Note: tables of d1 are used by the snapshot transaction of uspTransfer for reading, and tables of d2 are used by this snapshot transaction for insertion, deletion, etc. Other sessions in d2 only read data,
    but other sessions (except uspTransfer) in d1 can insert, update, and delete tables of d1 (only).
    yours sincerely

    Can you share details of the application? Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
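    One practical point worth checking, stated as a hedged aside since only you know the full design: a snapshot transaction that reads d1 and writes d2 needs ALLOW_SNAPSHOT_ISOLATION enabled in both databases, otherwise the cross-database access inside the snapshot transaction fails with an error.

        ALTER DATABASE d1 SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE d2 SET ALLOW_SNAPSHOT_ISOLATION ON;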

  • What are the different ways to handle deadlocks?

    Hi,
    May I know what are the ways to solve a deadlock problem?
    Currently, I have the following code to catch the exception:
    catch (XmlException ex)
    {
         try
         {
              ex.printStackTrace();
              txn.abort();
         } catch (DatabaseException DbEx)
         {
              System.err.println("txn abort failed.");
         }
    }
    and the resulting error is:
    com.sleepycat.dbxml.XmlException: Error: DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock, errcode = DATABASE_ERROR
    Is there any other, more efficient way to handle deadlocks?
    Or better ways to prevent deadlocks from happening?
    I am using this environment config
    EnvironmentConfig envConf = new EnvironmentConfig();
                   envConf.setAllowCreate(true); // If the environment does not exits,
                   // create it.
                   envConf.setInitializeCache(true); // Turn on the shared memory
                   // region.
                   // envConf.setCacheSize(25 * 1024 * 1024); // 25MB cache
                   envConf.setInitializeLocking(true); // Turn on the locking
                   // subsystem.
                   envConf.setInitializeLogging(true); // Turn on the logging
                   // subsystem.
                   envConf.setTransactional(true); // Turn on the transactional
                   // subsystem.
                   // envConf.setRunRecovery(true); //Turn on run recovery
                   // envConf.setTxnNoSync(true); // Cause BDB XML to not synchronously
                   // force any log data to disk upon transaction commit
                   envConf.setLogInMemory(true); // specify in-memory logging
                   envConf.setLogBufferSize(60 * 1024 * 1024); // set logging size.
                   // envConf.setTxnWriteNoSync(true); //method. This causes logging
                   // data to be synchronously written to the OS's file system buffers
                   // upon transaction commit.
                   // envConf.setThreaded(true); //default by Java that threaded = true
                   // envConf.setMultiversion(true);
                   envConf.setLockDetectMode(LockDetectMode.DEFAULT); // Reject a
                   // random lock
                   // request
    Thanks in advance for any help!
    :)

    Hi Vyacheslav,
    here is the code:
    package ag;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.db.LockDetectMode;
    import com.sleepycat.dbxml.XmlContainerConfig;
    import com.sleepycat.dbxml.XmlDocumentConfig;
    import com.sleepycat.dbxml.XmlException;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlDocument;
    import com.sleepycat.dbxml.XmlManagerConfig;
    import com.sleepycat.dbxml.XmlTransaction;
    import com.sleepycat.dbxml.XmlUpdateContext;
    import inter.DBInterface;
    import java.io.*;
    import java.util.Properties;
    import cp.CheckPointer;
    public class SaveMessageinDB implements DBInterface
         Environment myEnv;
         XmlManager myManager;
         XmlContainer myContainer;
         XmlTransaction txn;
         XmlContainerConfig cconfig;
         Properties properties;
         // CheckPointer cp;
         int Counter;
         public SaveMessageinDB()
              try
                   properties = new Properties();
                   properties.load(ClassLoader
                             .getSystemResourceAsStream("Aggregator.properties"));
                   setXmlEnvrionment();
                   setXmlManager();
                   setXmlContainer();
                   // cp = new CheckPointer(myEnv);
                   // cp.start();
                   // System.out.println("Checkpointer started....");
                   Counter = 0;
              } catch (Exception ex)
                   ex.printStackTrace();
         public void saveMessage(String docName, String content) throws Exception
              addXMLDocument(docName, content);
         public void setXmlEnvrionment()
              try
                   File envHome = new File(properties.getProperty("DATABASE_LOCATION"));
                   EnvironmentConfig envConf = new EnvironmentConfig();
                   envConf.setAllowCreate(true); // If the environment does not exits,
                   // create it.
                   envConf.setInitializeCache(true); // Turn on the shared memory
                   // region.
                   envConf.setCacheSize(100 * 1024 * 1024); // 100MB cache
                   envConf.setInitializeLocking(true); // Turn on the locking
                   // subsystem.
                   envConf.setInitializeLogging(true); // Turn on the logging
                   // subsystem.
                   envConf.setTransactional(true); // Turn on the transactional
                   // subsystem.
                   // envConf.setRunRecovery(true); // Turn on run recovery
                   // envConf.setTxnNoSync(true); // Cause BDB XML to not synchronously
                   // force any log data to disk upon transaction commit
                   envConf.setLogInMemory(true); // specify in-memory logging
                   envConf.setLogBufferSize(60 * 1024 * 1024); // set logging size.
                   // envConf.setTxnWriteNoSync(true);
                   // This causes logging
                   // data to be synchronously written to the OS's file system buffers
                   // upon transaction commit.
                   envConf.setMultiversion(true); //Turn on snapshot isolation
                   envConf.setLockDetectMode(LockDetectMode.DEFAULT); // Reject a
                   // random lock
                   // request
                   // myEnv = new Environment(envHome, null); //To adopt Environment
                   // already set by others
                   myEnv = new Environment(envHome, envConf);
                   System.out.println("Environment created...");
              } catch (Exception ex)
                   ex.printStackTrace();
         // All BDB XML programs require an XmlManager instance.
         // Create it from the DB Environment, but do not adopt the
         // Environment
         public void setXmlManager()
              try
                   XmlManagerConfig mconfig = new XmlManagerConfig();
                   mconfig.setAllowAutoOpen(true);
                   mconfig.setAdoptEnvironment(true);
                   mconfig.setAllowExternalAccess(true);
                   myManager = new XmlManager(myEnv, mconfig);
                   // myManager = new XmlManager (mconfig);
                   System.out.println("Manager created...");
              } catch (Exception ex)
                   ex.printStackTrace();
         public void setXmlContainer()
              try
                   cconfig = new XmlContainerConfig();
                   cconfig.setNodeContainer(true);
                   cconfig.setIndexNodes(true);
                   cconfig.setTransactional(true); // set transaction need an
                   // cconfig.setAllowValidation(false);
                   // environment
                   // cconfig.setReadUncommitted(true); // This container allow
                   // uncommitted read (able to read dirty data and not set a deadlock
                   // cconfig.setMultiversion(true);
                   myContainer = myManager.openContainer(properties
                             .getProperty("DATABASE_LOCATION")
                             + properties.getProperty("CONTAINER_NAME"), cconfig);
                   System.out.println("Container Opened...");
              } catch (XmlException XmlE)
                   try
                        myContainer = myManager.createContainer(properties
                                  .getProperty("DATABASE_LOCATION")
                                  + properties.getProperty("CONTAINER_NAME"), cconfig);
                        System.out.println("Container Created...");
                   } catch (Exception e)
                        e.printStackTrace();
              } catch (Exception ex)
                   ex.printStackTrace();
         public void addXMLDocument(String docName, String content)
              try
                   txn = myManager.createTransaction(); // no need to create
                   // transaction. auto commit
                   // by the environment
                   XmlDocumentConfig docConfig = new XmlDocumentConfig();
                   docConfig.setGenerateName(true);
                   docConfig.setWellFormedOnly(true);
                   myContainer.putDocument(txn, docName, content, docConfig);
                   // commit the Transaction
                   txn.commit();
                   System.out.println("documents added.....");
                   Counter++;
                   System.out.println("Document no: " + Counter);
                   txn.delete();
              } catch (XmlException ex)
                   try
                        System.out.println("Occuring in addXMLDocument");
                        ex.printStackTrace();
                        txn.abort();
                   } catch (DatabaseException DbEx)
                        System.err.println("txn abort failed.");
         public void cleanup()
              try
                   if (myContainer != null)
                        myContainer.close();
                   if (myManager != null)
                        myManager.close();
                   if (myEnv != null)
                        System.out.println("All cleaned up done..in sm");
                        myEnv.close();
              } catch (Exception e)
                   // ignore exceptions in cleanup
     }
    Thanks!

  • Setting transaction isolation level on a replicated server stored procedure

    I have a SQL Server that has replication turned on to another server, which is our reporting server. Replication is real-time (or close to it). On the report server I have a stored procedure that runs from an SSRS report. My question is whether it is possible or advisable,
    or whether it even makes sense, to set "SET TRANSACTION ISOLATION LEVEL READ COMMITTED" at the beginning of the stored procedure, which selects data from the reporting server database. Is it possible for uncommitted data on the OLTP side of the
    house to be replicated before it is committed? We are having data issues with a report and have exhausted all options, and were wondering whether dirty data may be the issue, since the same parameters work for a report one second and the next they don't.

    Only committed transactions are replicated to the subscriber. But it is possible for the report to see dirty data if it is running in READ UNCOMMITTED or with NOLOCK. You should run your reports in READ COMMITTED or SNAPSHOT isolation, and your replication
    subscriber should be configured with READ COMMITTED SNAPSHOT isolation, e.g.
    alter database MySubscriber set allow_snapshot_isolation on;
    alter database MySubscriber set read_committed_snapshot on;
    as recommended here:
    Enhance General Replication Performance.
    David
    David http://blogs.msdn.com/b/dbrowne/
