Changing Isolation Level Mid-Transaction

Hi,
I have a stateless session bean which, within a single container-managed
transaction, makes numerous database accesses. Under high load, we start
seeing serious contention issues on our MS SQL Server database. In order to
reduce these issues, I would like to relax my isolation requirements in some
of the steps of the transaction. To my knowledge, there are two ways to
achieve this: a) specify the isolation level at the connection level, or
b) use locking hints such as NOLOCK or ROWLOCK in the SQL statements. My
questions are:
1) If all db access is done within a single tx, can the isolation level be changed
back and forth?
2) Is it best to set the isolation level at the JDBC level or to use the MS SQL
locking hints?
Is there any other solution I'm missing?
Thanks,
Sebastien
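
On question 2, a low-risk option is to leave the transaction's isolation level alone and relax individual statements with locking hints, since the JDBC Javadoc makes the result of calling setTransactionIsolation() during an active transaction implementation-defined. A minimal sketch, assuming a plain JDBC connection obtained inside the bean (table and column names are hypothetical):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LockingHintSketch {
    // Reads with a per-statement NOLOCK hint inside an ongoing
    // container-managed transaction: only this statement runs with
    // dirty-read semantics; the rest of the transaction is unaffected.
    static int countPendingOrders(Connection con) throws SQLException {
        String sql = "SELECT COUNT(*) FROM Orders WITH (NOLOCK) WHERE status = 'PENDING'";
        try (Statement s = con.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

The hint scopes the relaxation to exactly the contended reads, which also sidesteps question 1: nothing about the transaction's own isolation level has to change back and forth.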

Galen Boyer wrote:
On Sun, 28 Mar 2004, [email protected] wrote:
Galen Boyer wrote:
On Wed, 24 Mar 2004, [email protected] wrote:
Oracle's serializable isolation level doesn't offer what most
customers I've seen expect it to offer. They typically expect
that a serializable transaction will block any read data from
being altered during the transaction, and Oracle doesn't do
that.

I haven't implemented web systems that employ anything but
the default concurrency control, because a web transaction is
usually very long running and therefore holding a connection
open during its life is unscalable. But your statement did
make me curious, so I tried a quick test case.

IN ONE SQLPLUS SESSION:

SQL> alter session set isolation_level = serializable;
SQL> select * from t1;

        ID FL
---------- --
         1 AA
         2 BB
         3 CC

NOW, IN ANOTHER SQLPLUS SESSION:

SQL> update t1 set fld = 'YY' where id = 1;

1 row updated.

SQL> commit;

Commit complete.

Now, back to the previous session:

SQL> select * from t1;

        ID FL
---------- --
         1 AA
         2 BB
         3 CC

So, your statement is incorrect.

Hi, and thank you for the diligence to explore. No, actually
you proved my point. If you did that with SQL Server or Sybase,
your second session's update would have blocked until you
committed your first session's transaction.

Yes, but this doesn't have anything to do with serializable.
This is the weak behaviour of those systems that say writers
can block readers.

Weak or strong, depending on the customer's point of view. It
does guarantee that the locking tx can continue, read the real
data, and eventually change it if necessary, without fear of
blockage by another tx, etc.
In your example, you were able to change and commit the real
data out from under the first, serializable transaction. The
reason why your first transaction is still able to 'see the old
value' after the second tx committed is not because it's really
the truth (else why did Oracle allow you to commit the other
session?). What you're seeing in the first transaction's repeat
read is an obsolete copy of the data that the DBMS made when you
first read it.

Yes, this is true.

Oracle copied that data at that time into the per-table,
statically defined space that Tom spoke about. Until you commit
that first transaction, some other session could drop the whole
table and you'd never know it.

This is incorrect.

Thanks. Point taken. It is true that you could have done a
complete delete of all rows in the table though..., correct?
That's the fast-and-loose way Oracle implements
repeatable-read! My point is that almost everyone trying to
serialize transactions wants the real data not to change.

Okay, then you have to lock whatever you read, completely.
SELECT FOR UPDATE will do this for your customers, but
serializable won't. Is this the standard definition of
serializable or just customer expectation of it? AFAIU,
serializable protects you from overriding already committed
data.

The definition of serializable is loose enough to allow
Oracle's implementation, but non-changing relevant data is
a typically understood hope for serializable. Serializable
transactions typically involve reading and writing *only
already committed data*. Only DIRTY_READ allows any access to
pre-committed data. The point is that people assume that a
serializable transaction will not have any of its data
re-committed, i.e. altered by some other tx, during the
serializable tx.
Oracle's rationale for allowing your example is the semantic
argument that, in spite of the fact that your first transaction
started first, and could continue indefinitely assuming it was
still reading AA, BB, CC from that table, and even though the
second transaction started later, the two transactions *so far*
could have been serialized.

I believe they rationalize it by saying that the state of the
data at the time the transaction started is the state throughout
the transaction.

Yes, but the customer assumes that the data is the data. The
customer typically has no interest in a copy of the data staying
the same throughout the transaction.

I.e.: if the second tx had started after your first had
committed, everything would have been the same.

This is true!

However, depending on what your first tx goes on to do, and on
what assumptions it makes about the supposedly still-current
contents of that table, it may either be wrong, or eventually do
something that makes the two transactions inconsistent so they
couldn't have been serialized. It is only at this later point
that the first long-running transaction will be told "Oooops.
This tx could not be serialized. Please start all over again."
Other DBMSes will completely prevent that from happening. Their
value is that when you say 'commit', there is almost no
possibility of the commit failing.

But this isn't the argument against Oracle. The failure to
serialize doesn't happen at commit, it happens at the write of
already-changed data. You don't have to wait until issuing
commit, you just have to wait until you update the row already
changed. But, yes, that can be longer than you might wish it to
be.

True. Unfortunately, typical application logic may do stuff
which never changes the read data directly, but makes changes
that are implicitly valid only when the read data is as it was
read. Sometimes the logic is conditional, so it may never write
anything, but may depend on that read data staying the same. The
issue is that some logic wants truly serialized transactions,
which block each other on entry to the transaction, and with
lots of DBMSes the serializable isolation level allows the
serialization to start with a read. Oracle provides "FOR
UPDATE", which can supply this. It is just that most people
don't know they need it.
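
A minimal sketch of that FOR UPDATE pattern, assuming plain JDBC against Oracle and the t1 table from the test case above:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ForUpdateSketch {
    // FOR UPDATE takes row locks on every row the query returns, so the
    // read data genuinely cannot change until this tx commits.
    static void readAndHold(Connection con) throws SQLException {
        con.setAutoCommit(false); // the locks must survive past the SELECT
        try (Statement s = con.createStatement();
             ResultSet rs = s.executeQuery("SELECT id, fld FROM t1 FOR UPDATE")) {
            while (rs.next()) {
                // Logic here can rely on id/fld staying exactly as read.
                System.out.println(rs.getLong("id") + " " + rs.getString("fld"));
            }
        }
        con.commit(); // releases the row locks
    }
}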
With Oracle and serializable, 'you pay your money and take your
chances'. You don't lose your money, but you may lose a lot of
time because of the deferred checking of serializable
guarantees.
Other than that, the clunky way that Oracle saves temporary
transaction-bookkeeping data in statically-defined per-table
space causes odd problems we have to explain, such as when a
complicated query requires more of this memory than has been
allotted to the table(s): the DBMS will throw an exception
saying it can't serialize the transaction. This can occur even
if there is only one user logged into the DBMS.

This one I thought was probably solved by database settings,
so I did a quick search, and Tom Kyte was the first link I
clicked; he seems to have dealt with this issue before.
http://tinyurl.com/3xcb7 HE WRITES: serializable will give you
repeatable read. Make sure you test lots with this, playing
with the initrans on the objects to avoid the "cannot
serialize access" errors you will get otherwise (in other
databases, you will get "deadlocks", in Oracle "cannot
serialize access"). I would bet that, working with some DBAs,
you could have gotten past the issues your client was having
as you described above.

Oh yes, the workaround every time this occurs with another
customer is to have them bump up the amount of that
statically-defined memory.

Yes, this is what I'm saying.
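
Concretely, the knob Tom's post refers to is INITRANS, the number of transaction entries preallocated in each block of a table or index. A hedged sketch, reusing the t1 table from the test case above (the value 10 is arbitrary):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class InitransSketch {
    // Raising INITRANS reserves more per-block transaction slots, the
    // usual fix for spurious ORA-08177 "can't serialize access" errors.
    static void raiseInitrans(Connection con) throws SQLException {
        try (Statement s = con.createStatement()) {
            s.execute("ALTER TABLE t1 INITRANS 10");
            // Only newly formatted blocks pick up the new value; a segment
            // rebuild applies it to existing blocks as well.
            s.execute("ALTER TABLE t1 MOVE");
        }
    }
}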
This could be avoided if Oracle implemented a dynamically
self-adjusting DBMS-wide pool of short-term memory, or used
more complex actual transaction logging.

? I think you are discounting just how complex their logging
is.

Well, it's not the logging that is too complicated, but rather
too simple. The logging is just an alternative source of memory
to use for intra-transaction bookkeeping. I'm just criticising
the too-simpleminded fixed-per-table scratch memory for
stale-read-data-fake-repeatable-read stuff. Clearly they could
grow and release memory as needed for this.

This issue is more just a weakness in Oracle than a deception,
except that the error message becomes laughable/puzzling: the
DBMS "cannot serialize a transaction" when there are no other
transactions going on.

Okay, the error message isn't all that great for this situation.
I'm sure there are all sorts of cases where other DBMSes have
laughable error messages. Have you submitted a TAR?

Yes. Long ago! No one was interested in splitting the current
message into two alternative messages:

"This transaction has just become unserializable because
of data changes we allowed some other transaction to do"

or

"We ran out of a fixed amount of scratch memory we associated
with table XYZ during your transaction. There were no other
related transactions (or maybe even users of the DBMS) at this
time, so all you need to do to succeed in future is to have
your DBA reconfigure this scratch memory to accommodate as much
as we may need for this or any future transaction."
I am definitely not an Oracle expert. If you can describe for
me any application design that would benefit from Oracle's
implementation of the serializable isolation level, I'd be
grateful. There may well be such.

As I've said, I've been doing web apps for a while now, and
I'm not sure these lend themselves to that isolation level.
Most web "transactions" involve client think-time, which would
mean holding a database connection, which would be the death
of a web app.

Oh, absolutely. No transaction, even at default isolation,
should involve human time if you want a generically scalable
system. But even with a no-think-time transaction, there are
definitely cases where read data are required to stay as-is for
the duration. Typically DBMSes ensure this at the
repeatable-read and serializable isolation levels. For those
demanding, in-the-know customers, Oracle provides the SELECT
"FOR UPDATE" workaround.

Yep, I concur here. I just think you are singing the praises of
other DBMSes, because of the way they implement serializable,
when their implementations are really based on something that
Oracle Corp believes is a fundamental weakness in their
architecture: "writers block readers". In Oracle this never
happens, which is probably one of the biggest reasons it is as
world-class as it is, but then its behaviour on serializable
makes you resort to SELECT FOR UPDATE. For me, the trade-off is
easily accepted.

Well, yes and no. Other DBMSes certainly have their share of
faults.
I am not critical only of Oracle. If one starts with Oracle and
works from the start with their performance architecture, you
can certainly do well. I am only commenting on the common
assumptions of migrators to Oracle from many other DBMSes, who
typically share assumptions of transactional integrity of read
data, and are surprised.

If you know Oracle, you can (mostly) do everything, and well. It
is not fundamentally worse, just different from most others. I
have had major beefs about the Oracle approach. For years there
was a TAR about Oracle's serializable isolation level *silently
allowing partial transactions to commit*. This had to do with
txes that inserted a row, then updated it, all in the one tx. If
you were just lucky enough to have the insert cause a page split
in the index, the DBMS would use the old pre-split page to find
the newly-inserted row for the update and, needless to say,
wouldn't find it, so the update merrily updated zero rows! The
support guy I talked to once said the developers wouldn't fix it
"because it'd be hard". The bug request was marked internally as
"must fix next release" and Oracle updated this record for 4
successive releases to set the "next release" field to the next
release! They then 'fixed' it to throw the 'cannot serialize'
exception. They have finally really fixed it (bug #440317), in
case you can access the history. Back in 2000, Tom Kyte
reproduced it in 7.3.4, 8.0.3, 8.0.6 and 8.1.5.

Now my beef is with their implementation of XA and what data
they lock for in-doubt transactions (those that have done the
prepare, but have not yet gotten a commit). Oracle's over-simple
logging/locking currently locks pages instead of rows! This is
almost like Sybase's fatal failure of page-level locking. There
can be logically unrelated data on those pages that is blocked
indefinitely from other, equally unrelated transactions until
the in-doubt tx is resolved. Our TAR has gotten a "We would have
to completely rewrite our locking/logging to fix this, so it's
your fault" response. They insist that the customer should know
to configure their tables so there is only one data row per
page.

So for historical and current reasons, I believe Oracle is
absolutely the dominant DBMS and a winner in the market, but it
got there by being first, selling well, and being good enough. I
wish there were more real market competition and user pressure;
then Oracle and other DBMS vendors would be quicker to make
their products better.
Joe

Similar Messages

  • Isolation Level for Transaction

    Hi
    Under the Advanced Mode in the JDBC Adapter, there is an option "Isolation Level For Transaction". I see two alternatives there, "Serializable" and "Repeatable Read". Which one should we select, and when do we select it?
    Radhika

    Hi Radhika,
    Check the documentation below:
    http://help.sap.com/saphelp_nw70/helpdata/en/22/b4d13b633f7748b4d34f3191529946/frameset.htm
    Regards,
    Swetha.

  • Setting db isolation level on transaction without EJB

              I'm using UserTransaction in the servlet container, to control XA transactions.
              We're not using EJB. How do I set the database isolation level? I'm tempted
              to use java.sql.Connection.setTransactionIsolation(). However, the Sun Javadoc
              for that method says you can't call that after the transaction has started (which
              makes sense). Right now, we're starting the transaction, getting a connection,
              closing the connection, and committing the transaction. I guess that order won't
              work if I want to set the isolation level. Or am I mixing apples and oranges
              here? If I use UserTransaction, is it even appropriate to try to set the isolation
              level on the connection?
              All I really want to do is change the default isolation level. We do not need
              different isolation levels for different use cases. (Not yet, anyway.) We might
              have transactions against two different database instances or other resource managers.
              That's why I want to use UserTransaction and XA transactions.
              Thanks!
              Steve Molitor
              [email protected]
              

    Only committed transactions are replicated to the subscriber. But it is possible for the report to see dirty data if running in READ UNCOMMITTED or with NOLOCK. You should run your reports in READ COMMITTED or SNAPSHOT isolation, and your replication
    subscriber should be configured with READ COMMITTED SNAPSHOT ISOLATION, e.g.
    alter database MySubscriber set allow_snapshot_isolation on;
    alter database MySubscriber set read_committed_snapshot on;
    as recommended here
    Enhance General Replication Performance.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Changing isolation level in a session: is it valid? Please see the following situation, where I have used snapshot.

    hi,
    --DBCC FREEPROCCACHE
    --DBCC DROPCLEANBUFFERS
    CREATE TABLE #temp(ID BIGINT NOT NULL)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ 
    BEGIN TRAN 
    INSERT INTO #temp (id) SELECT wid FROM w WHERE ss=1
    UPDATE w SET ss =0 WHERE wid IN (SELECT id FROM #Temp)
    COMMIT TRAN 
    IF (EXISTS(SELECT * FROM  #temp))
    BEGIN
    SELECT 'P'
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT 
    BEGIN TRAN 
    insert into a  ( a,b,c)
    SELECT a , b ,c FROM  w WHERE wid= 104300001201746884  
    COMMIT TRAN
    END
    Q1) Is changing isolation in this way correct or not?
    Q2) Why have I changed the isolation? Because
    this stmt was also updated by another transaction, and I also wanted to update it, so I made one repeatable read and then snapshot.
    UPDATE w SET ss=0 WHERE wid IN (SELECT id FROM #Temp)
    DROP TABLE #temp
    yours sincerely
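
    On Q1: SET TRANSACTION ISOLATION LEVEL is a session setting that applies to transactions started after it runs, so alternating REPEATABLE READ and SNAPSHOT between transactions, as above, is legal (SNAPSHOT additionally needs ALLOW_SNAPSHOT_ISOLATION ON for the database). A hedged JDBC rendering of the same sequence:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SwitchIsolationSketch {
        static void run(Connection con) throws SQLException {
            try (Statement s = con.createStatement()) {
                s.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
                // BEGIN TRAN ... COMMIT under REPEATABLE READ here.
                s.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT");
                // BEGIN TRAN ... COMMIT under SNAPSHOT here; switching to
                // SNAPSHOT inside an already-open transaction would fail instead.
            }
        }
    }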

    http://blogs.msdn.com/b/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Bug in Oracle's handling of transaction isolation levels?

    Hello,
    I think there is a bug in Oracle 9i database related to serializable transaction isolation level.
    Here is the information about the server:
    Operating System:     Microsoft Windows 2000 Server Version 5.0.2195 Service Pack 2 Build 2195
    System type:          Single CPU x86 Family 6 Model 8 Stepping 10 GenuineIntel ~866 MHz
    BIOS-Version:          Award Medallion BIOS v6.0
    Locale:               German
    Here is my information about the client computer:
    Operating system:     Microsoft Windows XP
    System type:          IBM ThinkPad
    Language for DB access: Java
    Database information:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    The database has been set up using the default settings and nothing has been changed.
    To reproduce the bug, follow these steps:
    1. Create a user in 9i database called 'kaon' with password 'kaon'
    2. Using SQL Worksheet create the following table:
    CREATE TABLE OIModel (
      modelID int NOT NULL,
      logicalURI varchar (255) NOT NULL,
      CONSTRAINT pk_OIModel PRIMARY KEY (modelID),
      CONSTRAINT logicalURI_OIModel UNIQUE (logicalURI)
    );
    3. Run the following program:
    package test;

    import java.sql.*;

    public class Test {
        public static void main(String[] args) throws Exception {
            java.util.Locale.setDefault(java.util.Locale.US);
            Class.forName("oracle.jdbc.OracleDriver");
            Connection connection = DriverManager.getConnection("jdbc:oracle:thin:@schlange:1521:ORCL", "kaon", "kaon");
            DatabaseMetaData dmd = connection.getMetaData();
            System.out.println("Product version:");
            System.out.println(dmd.getDatabaseProductVersion());
            System.out.println();
            connection.setAutoCommit(false);
            connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            int batches = 0;
            int counter = 2000;
            for (int outer = 0; outer < 50; outer++) {
                for (int i = 0; i < 200; i++) {
                    executeUpdate(connection, "INSERT INTO OIModel (modelID,logicalURI) VALUES (" + counter + ",'start" + counter + "')");
                    executeUpdate(connection, "UPDATE OIModel SET logicalURI='next" + counter + "' WHERE modelID=" + counter);
                    counter++;
                }
                connection.commit();
                System.out.println("Batch " + batches + " done");
                batches++;
            }
        }

        protected static void executeUpdate(Connection conn, String sql) throws Exception {
            Statement s = conn.createStatement();
            try {
                int result = s.executeUpdate(sql);
                if (result != 1)
                    throw new Exception("Should update one row, but updated " + result + " rows, query is " + sql);
            } finally {
                s.close();
            }
        }
    }
    The program prints the following output:
    Product version:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Batch 0 done
    Batch 1 done
    java.lang.Exception: Should update one row, but updated 0 rows, query is UPDATE OIModel SET logicalURI='next2571' WHERE modelID=2571
         at test.Test.executeUpdate(Test.java:35)
         at test.Test.main(Test.java:22)
    That is, after several iterations, the executeUpdate() method returns 0, rather than 1. This is clearly an error.
    4. Leave the database as is. Replace the line
    int counter=2000;
    with line
    int counter=4000;
    and restart the program. The following output is generated:
    Product version:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Batch 0 done
    Batch 1 done
    java.sql.SQLException: ORA-08177: can't serialize access for this transaction
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
         at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
         at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
         at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1891)
         at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
         at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
         at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
         at oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:796)
         at test.Test.executeUpdate(Test.java:33)
         at test.Test.main(Test.java:22)
    This is clearly an error - only one transaction is active at a time, so there is no need for serialization of transactions.
    5. You can restart the program as many times as you wish (by changing the initial counter value first). The same error (can't serialize access for this transaction) will be generated.
    6. The error doesn't occur if the transaction isolation level isn't changed.
    7. The error doesn't occur if the UPDATE statement is commented out.
    Sincerely yours
         Boris Motik

    I have a similar problem.
    I'm using Oracle and the serializable isolation level.
    A transaction inserts 4000 objects and then updates about 1000 of these objects.
    The transaction sees the inserted objects but can't update them ("row not found" or "can't serialize access for this transaction" is thrown).
    Out of 3 tries for this transaction, 1 succeeds and 2 fail with one of the above errors.
    No other transactions run concurrently.
    In read committed isolation the error doesn't arise.
    I'm using plain JDBC.
    A similar or even much bigger serializable transaction works perfectly on the same database as a PL/SQL procedure.
    I've tried the oci and thin (Oracle) drivers and the oranxo demo (i-net) driver,
    and this problem arises with all of these drivers.
    This problem has confused me so much :(.
    Maybe one of the Oracle users or developers knows the cause of this strange behaviour.
    Thanks for all answers.

  • Transaction Isolation Level to Read Uncommitted in a Non-OLTP Database

    HI,
    We have a database which is NOT for OLTP processing; it is an OLAP DB. The operations on that DB are only selects and incremental inserts (for DWH), not updates/deletes, and we are performing ROLAP operations in that DB.
    By default the SQL Server isolation level is READ COMMITTED. As our DB is an OLAP SQL Server DB, we need to change the isolation level to READ UNCOMMITTED. We googled it, but we can achieve this
    at the transaction level only, by SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED,
    or with ALLOW_SNAPSHOT_ISOLATION ON or READ_COMMITTED_SNAPSHOT.
    Is there any other way to change the isolation level to READ UNCOMMITTED for the entire database, instead of achieving it at the transaction level or enabling SET ALLOW_SNAPSHOT_ISOLATION ON or READ_COMMITTED_SNAPSHOT?
    Please use Marked as Answer if my post solved your problem and use Vote As Helpful if a post was useful.

    Hi,
    My first question would be: why do you want to change the isolation level to read uncommitted? Are you aware of the repercussions? You will get dirty, possibly wrong, data.
    The isolation level is basically associated with the connection, so it is defined on the connection.
    >> at the transaction level only, by SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, or with ALLOW_SNAPSHOT_ISOLATION ON or READ_COMMITTED_SNAPSHOT
    Be cautious: READ UNCOMMITTED and the snapshot isolation levels are not the same. The former is a pessimistic isolation level and the latter is optimistic. The snapshot isolation levels are totally different from read uncommitted, as snapshot isolation
    uses row versioning. I guess you won't require the snapshot isolation level in an OLAP DB.
    Please read the blog below about setting the isolation level server-wide:
    http://blogs.msdn.com/b/ialonso/archive/2012/11/26/how-to-set-the-default-transaction-isolation-level-server-wide.aspx
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
    My TechNet Wiki Articles
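
    If the underlying goal is simply that readers stop blocking behind writers database-wide, the READ_COMMITTED_SNAPSHOT option mentioned above is the closest thing to changing the default for the whole database. A hedged sketch (the database name is illustrative); note this keeps READ COMMITTED semantics by reading row versions rather than granting dirty reads:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ReadCommittedSnapshotSketch {
        static void enableRcsi(Connection con) throws SQLException {
            try (Statement s = con.createStatement()) {
                // Needs (near-)exclusive access to the database to complete.
                s.execute("ALTER DATABASE MyOlapDb SET READ_COMMITTED_SNAPSHOT ON");
            }
        }
    }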

  • Question Isolation level on performance

    I have a report which is not really realtime-critical (for example, a few minutes of delay are fine). In order to improve performance for the report, I would like to allow the query to read dirty data even when there is a transaction lock on the table. So if I change the isolation level from 3 to 1 or 0, is there any big performance gain?

    Not sure what the functionality of the report is, but you may also look at using the "readpast" query hint, which allows skipping rows on which incompatible locks are held.
    Dirty reads should be carefully evaluated/explained with the users of the report, since sometimes they will approve dirty reads for the performance benefit but won't really understand the implications. Just from my book of experience.
    warm regards,
    sudhir
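
    A minimal sketch of the READPAST idea over JDBC (table and column names are invented): rows holding incompatible locks are skipped rather than waited on, so the report sees only committed data, just possibly not all of it:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ReadPastSketch {
        static void reportSnapshot(Connection con) throws SQLException {
            String sql = "SELECT id, amount FROM Sales WITH (READPAST)"; // hypothetical table
            try (Statement s = con.createStatement();
                 ResultSet rs = s.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getBigDecimal("amount"));
                }
            }
        }
    }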

  • Isolation Level of iViews

    Hi,
    I have an iView, on an EP5 SP6, that currently has Isolation Level 4 enabled. I would like it to go to Isolation Level 3 since the size of the iView changes each day.
    The iView is inherited from a KM iView and I have been told that KM iViews need to run with Isolation Level 4. Is this really correct?
    Thanks in advance.
    Cheers
    Kris Kegel

  • Restore default isolation level fails with connection in pool

    Hi,
    I am developing an application that needs to set the transaction isolation to SERIALIZABLE for one transaction. Setting the transaction isolation is not the problem. After this transaction is committed or rolled back, I set the isolation level back to the default I saved before.
    The code executes and throws no exception. The connection I used is released into the pool. But the next time I get this connection from the pool, the isolation level is still SERIALIZABLE. This is not what I wanted to achieve.
    It has to be possible to change the isolation level per transaction, hasn't it?
    Here is the code that I use. The ConnectionManager gets the connection from a connection pool I configured in the JDBC connector service. Except for this issue, every other operation works fine.
    ConnectionManager connectionManager = new ConnectionManager();
    Connection con = null;
    int transactionIsolationLevel = 0;
    Queue queue = null;
    List list = null;
    try {
        con = connectionManager.getConnection();
        transactionIsolationLevel = con.getTransactionIsolation();
        if (logger.isInfoEnabled())
            logger.info(LOGLOC + "ISOLATION_LEVEL default: " + transactionIsolationLevel);
        // commented out for RE
        con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        con.setAutoCommit(false);
        QueueManager queueManager = new QueueManager();
        list = queueManager.GetQueueEntriesBySizeGroups(con, small, medium, large, serverNode);
        con.commit();
    } catch (ClassNotFoundException cnfe) {
        logger.error(LOGLOC + "Exception setting up transaction context for queue service!", cnfe);
        handleExceptions(queue, cnfe);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } catch (SQLException sqle) {
        logger.error(LOGLOC + "Exception setting up transaction context for queue service!", sqle);
        handleExceptions(queue, sqle);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } catch (QueueManagerException qme) {
        logger.error(LOGLOC + "Exception executing queue manager!", qme);
        handleExceptions(queue, qme);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } finally {
        try {
            con.setAutoCommit(true);
            if (logger.isInfoEnabled())
                logger.info(LOGLOC + "ISOLATION_LEVEL before setting default: " + con.getTransactionIsolation() + " now setting: " + transactionIsolationLevel);
            // commented out for RE
            con.setTransactionIsolation(transactionIsolationLevel);
            con.close();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception setting up transaction context for queue service!", e);
        }
    }
    The datasource is a simple JDBC 1.x Oracle datasource with no special settings.
    In a remote debugging session I saw that the wrapped Connection from the datasource sets the txLevel successfully, but the underlying T4Connection does not get this isolation level. Could this be a bug?
    Any hints or solutions?

  • Please tell me: what is the difference between the snapshot isolation level of MSSQL and Oracle's isolation levels?

    Hi,
    In MSSQL I am using the following things.
    I have two databases, D1 and D2, and I am using snapshot isolation (ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON) in both databases.
    The situation is as follows:
    1) There is one SP, sp1 (it can be in either database, d1 or d2); it updates d2 from d1.
    2) d2 is used for reading by the web, except for the above SP sp1.
    3) d1 gets updates from the web in read committed isolation.
    4) Both databases will be on the same instance of MSSQL.
    Q1) I want to know how to implement the same thing in Oracle 11x Express Edition.
    Q2) Is there any difference between the snapshot isolation level of MSSQL and Oracle's?
    Any link would be helpful.
    yours sincerely
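
    On Q2, a hedged comparison: Oracle's reads are versioned by default, so there is no ALLOW_SNAPSHOT_ISOLATION switch to turn on. Its READ COMMITTED already behaves much like SQL Server's READ_COMMITTED_SNAPSHOT, and Oracle's SERIALIZABLE behaves much like SQL Server's SNAPSHOT (see the long discussion at the top of this thread). Roughly equivalent per-transaction settings, as a sketch:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SnapshotEquivalenceSketch {
        static void beginVersionedTx(Connection con, boolean sqlServer) throws SQLException {
            try (Statement s = con.createStatement()) {
                if (sqlServer) {
                    // Requires ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON once.
                    s.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT");
                } else {
                    // Oracle: must be the first statement of the transaction;
                    // the whole tx then reads one consistent snapshot.
                    s.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
                }
            }
        }
    }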

  • Snapshot isolation level usage

    Dear All,
    There are some transaction tables in which more than one user adds and updates records (only).
    Whatever they add and update in the transaction tables, based on that entry they add a record in Table A1.
    Table A1 has two cols: one keeps the table name of the transaction table, and the other col keeps the PK (primary key) of the transaction tables.
    So Table A1 always gets only inserts,
    and Table A1 gets an entry only for transaction tables, and only when a transaction table gets an entry.
    At the same time there is a process (ts) which reads Table A1 on a timed basis, picks up all records
    from Table A1, and reads data from the transaction tables on the basis of the PKs stored in it. It then inserts all the read records into a
    new temp table,
    and at the end of the transaction it deletes the records from Table A1.
    After some time it again picks up new records from Table A1 and repeats the process.
    For process (ts) I want to use ALLOW_SNAPSHOT_ISOLATION,
    so that users can keep on entering records.
    Q1) The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON
    before one can start a transaction that uses the SNAPSHOT isolation level. I wanted to know: should I set the option to OFF after the process (ts) is complete, and switch
    it on again on the database when the process (ts) starts again?
    That is, will keeping it on all the time affect the database in any way?
    Q2) Will ALLOW_SNAPSHOT_ISOLATION ON affect other isolation levels' transactions, or only snapshot isolation transactions? That is, I have old
    stored procs and front-end applications (web or Windows on .NET) which are using the default isolation levels.
    Q3) Is my choice of isolation level for process (ts) correct, or can there be any other solution?
    Note: "the information is quite limited, but I won't be able to give full information."
    yours sincerely

    >Q1) should i set the option to OFF after the process(ts) is complete
    No keep it on.
    >Q2) ALLOW_SNAPSHOT_ISOLATION  ON , will affect other isolation level's transactions
    No it will not affect any other transaction isolation level.
    >Q3) is my choice of isolation level for process(ts) is correct or there can be any other solution.
    Seems fine, although there are probably many other solutions.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Setting XA isolation level.

    Is there a configuration parameter that controls the default isolation level used by distributed transactions when you configure XA support on Oracle 8i? I know Oracle's default isolation level is READ COMMITTED, but I would like to have SERIALIZABLE as the isolation level for transactions that are initiated from some MS COM+ components accessing the database.
    Thanks,
    Sam

    Ian,
    The default for Oracle (any version) is ReadCommitted. The only other
    isolation level Oracle supports is Serializable but it's implemented in
    such a way that you will be allowed to continue until commit time and
    only then you might get an exception stating the the access for that
    transaction could not be serialized.
    I don't know for the jDriver but if you use the Oracle Thin XA driver
    even if you set the isolation level in your descriptor you will get an
    exception from Weblogic. It is a Weblogic bug and you can contact
    [email protected] to get a patch.
    Regards,
    Dejan
    IJ wrote:
    edocs (http://e-docs.bea.com/wls/docs70/oracle/trxjdbcx.html#1080746) states that,
    if using jDriver for Oracle/XA you cannot set the transaction isolation level
    for a transaction, and that 'Transactions use the transaction isolation level
    set on the connection or the default transaction isolation level for the
    database'. Does this mean that
    you shouldn't try to set it programmatically (fair enough) or that you can't set
    it in the weblogic deployment descriptor either? Also anybody got any idea what
    the default is likely to be if you are using
    an Oracle 9iR2 database?

  • Isolation Level of SQL Server when committing statements

    Dear All,
    Can someone please tell me whether Read Committed and
    Auto Commit mean the same thing for transaction isolation levels?
    I assume that if the SQL default is Auto Commit, then it commits all transactions whether we use
    the "Commit Transaction" cmd or not.
    Thank-you
    SQL75

    We write T-SQL in both, right?
    Sorry, I didn't understand what you meant by that.
    Isolation levels and transaction management are two different aspects of T-SQL.
    The former talks about the level up to which a transaction can proceed without intervention from other parallel
    transactions. In READ COMMITTED, no other transaction will be able to read data that is in use by the current transaction until it is done, i.e. committed/rolled back. But other transactions may introduce additional data that falls in the range of data which the first transaction
    currently holds, causing phantom reads.
    Transaction management determines whether one needs to explicitly specify the start and end of a transaction,
    or whether the system will implicitly determine the start and end of the transaction.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
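
    The two settings are orthogonal, which a small JDBC sketch may make plainer: autocommit decides where transactions end, while the isolation level decides what concurrent transactions can see (the table is hypothetical):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class OrthogonalKnobsSketch {
        static void demo(Connection con) throws SQLException {
            // Isolation level: visibility between concurrent transactions.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // Autocommit off: statements group into one unit of work.
            con.setAutoCommit(false);
            try (Statement s = con.createStatement()) {
                s.executeUpdate("UPDATE Accounts SET balance = balance - 10 WHERE id = 1");
                s.executeUpdate("UPDATE Accounts SET balance = balance + 10 WHERE id = 2");
            }
            con.commit(); // both updates become visible together, at any isolation level
        }
    }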

  • XA and SERIALIZABLE isolation level

    Hi,
    I'm using JDBC with oracle DB 8.1.6. When I get the physical connection and set transaction isolation level to TRANSACTION_SERIALIZABLE then issue start() on the XAResource instance it gives me error number 24776 "cannot start a new transaction". Without setting isolation level, the transaction would go smoothly. Note that this is the only transaction I have in my small test program. You can test it with the example given in oracle documentation.

    Hi,
    > When I get the physical connection and set the transaction isolation level to TRANSACTION_SERIALIZABLE, then issue start() on the XAResource instance, it gives me error number 24776 "cannot start a new transaction".
    I don't think you are allowed to set the transaction isolation level to SERIALIZABLE for XA.
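
    A possible workaround, strictly as a sketch: since setTransactionIsolation() around XAResource.start() provoked ORA-24776 here, request SERIALIZABLE via SQL as the first statement inside the started branch instead. Whether the driver and resource manager accept SET TRANSACTION inside an XA branch is an untested assumption to verify:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.sql.XAConnection;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    public class XaSerializableSketch {
        static void work(XAConnection xac, XAResource xar, Xid xid) throws Exception {
            Connection con = xac.getConnection();
            xar.start(xid, XAResource.TMNOFLAGS);
            try (Statement s = con.createStatement()) {
                // Untested assumption: accepted as the branch's first statement.
                s.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
                // ... transactional work ...
            }
            xar.end(xid, XAResource.TMSUCCESS);
        }
    }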

  • Setting Isolation levels

    We have a Windows service which is responsible for sending SMS to users.
    We have 20 instances running as a Windows service which hit a single table “Tab1”.
    I have implemented the isolation level “SET TRANSACTION ISOLATION LEVEL SERIALIZABLE”, but it is not working, because the 20 instances read a single record at the same time and thus send 20 SMS messages to a single user.
    I want that if 20 instances read the same record, only one should send the SMS and not all.
    Please advise what can be done.
    We have these 20 instances to speed up the process of sending SMS from the table... so we cannot just rely on one service.

    SERIALIZABLE only means that until you commit, SQL Server guarantees that the result set will be the same if you repeat the query. It does not perform any updates or lock a row against reading by other users.
    It sounds like you have some sort of a queue. Thus, you should look into Service Broker.
    In the meanwhile you can try:
    BEGIN TRANSACTION
    SELECT TOP(1) .... FROM SMS_table WITH (READPAST)
    -- Send your SMS here
    UPDATE tbl SET fetched = 1 WHERE ...
    COMMIT TRANSACTION
    Don't use SERIALIZABLE, but stick to READ COMMITTED.
    Erland Sommarskog, SQL Server MVP, [email protected]
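
    Fleshing out Erland's outline as a hedged sketch (table and column names are invented): UPDLOCK is added so the selected row stays locked between the SELECT and the UPDATE, while READPAST makes the other 19 instances skip that row instead of blocking, so each SMS is claimed by exactly one instance:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SmsDequeueSketch {
        static boolean sendOne(Connection con) throws SQLException {
            con.setAutoCommit(false);
            try (Statement s = con.createStatement()) {
                ResultSet rs = s.executeQuery(
                    "SELECT TOP(1) id, phone, body FROM SmsQueue " +
                    "WITH (UPDLOCK, READPAST) WHERE fetched = 0");
                if (!rs.next()) { con.rollback(); return false; } // queue empty
                long id = rs.getLong("id");
                // sendSms(rs.getString("phone"), rs.getString("body")); // hypothetical sender
                rs.close();
                s.executeUpdate("UPDATE SmsQueue SET fetched = 1 WHERE id = " + id);
                con.commit();
                return true;
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }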
