Dirty reads

Hi,
What are dirty reads?
Can someone point me to a good document for understanding dirty reads?
Regards,
S

WIP wrote:
> What are dirty reads?
In simple terms, using a basic example:
Session A locks a row for an update and makes changes to it. Session B wants to read the same row but cannot, because of the way the database is designed: that row is locked. So session B tells the database it wants a dirty read instead - that it does not care whether some rows are locked or not. Session B then reads rows via a cursor, and some of these can be locked rows whose data is dirty - dirty meaning that the data is not committed and can still be rolled back. A dirty read therefore does not return a consistent set of row data.
> Can someone point me to a good document for understanding dirty reads?
Not applicable to Oracle, as there is no such concept as a dirty read in Oracle. All reads in Oracle are consistent - in basic terms, you only ever read committed data (unless you are reading your own uncommitted changes), and writers (processes locking and changing rows) never block readers.
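
To make that concrete, a minimal two-session sketch (hypothetical table t with columns id and val, not from the thread):

-- Session A:
update t set val = 'dirty' where id = 1;   -- row is now locked; the change is uncommitted
-- Session B (does not block, and does not see the uncommitted value):
select val from t where id = 1;            -- returns the last committed value
-- Session A:
rollback;                                  -- session B was never exposed to the discarded change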

Similar Messages

  • Dirty reads vs phantom reads

    Hi,
    What is the difference between a "Dirty Read" and a "Phantom Read"? Can anyone explain briefly or give an example?
    Thanks in advance.

    A dirty read is a read of uncommitted data, which may or may not end up existing in the table. A phantom is a ghost record that doesn't appear in a transaction initially but appears if the query is run again, because some other transaction has inserted rows matching the criteria.
    Here are examples of both:
    --Dirty read example
    create table t1 (c1 int null, c2 varchar(50) null)
    go
    insert t1(c1, c2) values (1, 'one')
    insert t1(c1, c2) values (2, 'two')
    insert t1(c1, c2) values (3, 'three')
    insert t1(c1, c2) values (4, 'four')
    begin tran
    update t1 set c2 = 'zero'
    -- let this run in the current query window, then open a new query window to run the statement below
    --and you will see all 4 rows having the value 'zero' in column c2, which is uncommitted data
    select * from t1 with (nolock)
    --uncomment the rollback below and run it, then rerun the select above
    --and you will see the previous values of one, two, three and four
    --rollback tran
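    As a side note, the nolock hint is per query; the same effect can be requested session-wide. A small sketch on the same t1:
    set transaction isolation level read uncommitted
    select * from t1   -- now reads uncommitted rows, exactly as with (nolock)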
    Here's an example of a phantom read:
    --Phantom example
    create table t1 (c1 int null, c2 varchar(50) null)
    go
    insert t1(c1, c2) values (1, 'one')
    insert t1(c1, c2) values (2, 'two')
    insert t1(c1, c2) values (3, 'three')
    insert t1(c1, c2) values (4, 'four')
    -- run the setup above in the first query window, then open a second query window to run the statements below
    --and you will see the 2 rows whose value in column c2 starts with the character t
    begin tran
    select * from t1
    where c2 like 't%'
    --now insert the new value of ten (matching the query criteria - starts with t) from the first query window
    insert t1(c1, c2) values (10, 'ten')
    --Run the below statement again from the second query window that is open and you will see the new row
    --that got inserted - so 3 rows are seen including the newly inserted ten
    --this new row is a phantom read
    select * from t1
    where c2 like 't%'
    --when you are done, uncomment the rollback below and run it in the second query window
    --rollback tran
    Satish Kartan www.sqlfood.com

  • Dirty read on informix XA datasource

    Hi,
    I'm not able to set the transaction isolation level to dirty read with the Informix XA driver. I tried the following options:
    1. Setting the initSQL - I set the initSQL property to SQL SET ISOLATION TO DIRTY READ
    2. Setting the IFX_ISOLATION_LEVEL - I tried adding the property IFX_ISOLATION_LEVEL=1
    Note: I have used the Informix driver com.informix.jdbcx.IfxXADataSource.
    Setting the IFX_ISOLATION_LEVEL property on a non-XA data source (driver com.informix.jdbc.IfxDriver) works fine, but the same configuration does not work on the XA resource.
    Also, the link http://download.oracle.com/docs/cd/E13222_01/wls/docs90/jdbc_drivers/informix.html#1065880 mentions setting up the data source with the WebLogic driver, but it does not mention the property that needs to be set for the isolation level.
    Can you please help?
    I'm using WebLogic 10.0 server.

    I want to read uncommitted data from the database. I created two data sources, one XA and the other non-XA, and I wrote a small application to look up the data source and print the isolation level of the connection. The non-XA connection prints isolation level 1, which is what I wanted, but for the XA connection the isolation level is printed as 2, which is the default isolation level, READ COMMITTED. Also, I inserted a record in some transaction and tried to read it using the looked-up data source; with the non-XA connection I'm able to read the data, but with the XA connection I'm not - instead I'm getting 'java.sql.SQLException: Could not position within a file via an index'.

  • Dirty read with READ COMMITTED and sql count.

    Hi,
    Under the read committed isolation level, a select count(*) ... returns 1; the select sees an insert made by another, still-open transaction, which is in effect a dirty read. But a select without the count function returns 0. Can someone explain why I get this behaviour?
    The transaction which is making the select count(*) ... has the read committed isolation level.
    The transaction which has inserted the new item has the read uncommitted isolation level.
    Please tell me if I am missing something.
    I am using MaxDB 7.6.
    Thanks

    Hi there,
    OK, I tried again and was actually able to reproduce the behavior.
    The problem here is the special implementation of the count(*) function.
    To get correct results (isolation-level wise), you may use count(<primary key>) instead.
    So for your example,
    select count (IDDI) from NONO.APPA
    should always deliver the correct result.
    The same is true for the other aggregation functions (min/max/avg...) - it's just about the very special count(*) handling here.
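    Side by side, using the table and column from this thread:
    select count(*) from NONO.APPA     -- the special-cased count(*); may see the uncommitted insert
    select count(IDDI) from NONO.APPA  -- honours the isolation level; counts committed rows only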
    I informed our development about this.
    Thanks for pointing this out!
    Lars

  • Able to make dirty read using Oracle 9i and JDBC thin driver v 9.2.0

    I've searched this forum and did not see anything to directly answer my question.
    I checked the Oracle JDBC Frequently Asked Questions...
    ditto (perhaps due to the fact that it was last updated: 22 June 2001).
    So here is my question, and thank you in advance for any insight (apologies if I have missed finding an already answered question):
    Section 19-15 of:
    "JDBC Developer’s Guide and Reference"
    (which is for Oracle 9i database)
    downloadable from:
    http://download-east.oracle.com/docs/cd/B10501_01/java.920/a96654.pdf
    is entitled:
    "Transaction Isolation Levels and Access Modes"
    The section seems to indicate that
    if JDBC connection A is set up with:
    setAutoCommit(false)
    setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED)
    and then used to perform an update on a row (no commit(), rollback(), or close() yet),
    then JDBC connection B (set up in the same way) will be prevented from
    making a dirty read of that same row.
    While this behavior (row-level locking) occurs correctly when using MS SQL Server 2000,
    it is not occurring correctly with Oracle 9i and the Oracle Thin JDBC driver version 9.2.0.
    The test case I have shows that with Oracle, connection B is able to make a dirty read
    successfully in this case.
    Am I doing something wrong here?
    Again, MS SQL Server correctly blocks connection B from making the read until connection A
    has been either committed, rolled back, or closed, at which time connection B is able to
    complete the read because the row is now unlocked.
    Is there a switch I must throw here?
    Again, any help is greatly appreciated.

    Thanks for the response.
    I understand what you are saying...
    that readers don't block writers in Oracle (the same is true in SQL Server 2000).
    However, I don't see how my test case is working correctly with Oracle (the exact same code acts as I think it should with SQL Server, but I still think it is acting incorrectly with Oracle).
    I have transaction A do this:
    update <table> set <column2>=<value> where <column1>='1'
    then I use Thread.sleep() to make that program hang around for a few minutes.
    Meanwhile I sneak off and start another program which begins transaction B. I have transaction B do this:
    select * from <table> where <column1>='1'
    and the read works immediately (no blocking... just as you have said); however, transaction A is still sleeping - it has not called commit() or rollback() yet.
    So what if transaction A were to call rollback()? The value read by transaction B would be incorrect, wouldn't it?
    Both A and B use setAutoCommit(false) to start their transactions, and then call setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED).
    Isn't that supposed to guarantee that a reader can only read what is committed?
    And if a row is in "flux"... in the process of having one or more values changed, then the database cannot say what the value will be?
    I can almost see what you are saying.
    In letting the reader have what it wants without making it wait, I suppose it could be said that Oracle is holding true to "only let committed data be read".
    So if that's it, then what if I want the blocking?
    I want an entire row to be locked until whoever is in the middle of updating, adding, or removing it has finished.
    Do you know if that can be done with Oracle? And how?
    Thanks again for helping me.
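    For the record, Oracle does offer blocking on demand: a SELECT ... FOR UPDATE takes the row lock up front, so it waits until the locking session commits or rolls back. A minimal sketch using the placeholders from the post above:
    -- Session A:
    select * from <table> where <column1> = '1' for update;  -- locks the row
    update <table> set <column2> = <value> where <column1> = '1';
    -- Session B blocks on the next line until A commits or rolls back:
    select * from <table> where <column1> = '1' for update;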

  • How do you implement 'Dirty Read/ Write' concept?

    Hi,
    I need to implement the dirty read/write concept in my procedure and wanted to know how to go about it. Does Oracle provide a way to do this, or is this something to be worked out with some logic manually?
    Can someone suggest the exact logic I should follow or chalk out a simple algorithm.
    Any kind of information on this would be much appreciated.
    Thanks,
    Amrita.

    Sorry for this late reply.
    My first reply should have contained an example of how to implement it, just for kicks. Don't use this code for anything but a test - it's absolutely worthless application-wise. But it proves that some dirty read/write functionality can be obtained if one twists everything that is good. Here goes. First I create two Java classes and two PL/SQL "wrappers". Then simply connect with session 1 and invoke exec dirty_write,
    make no commit ... and let session 2 select dirty_read from dual. You'll notice that the data written by session 1 is read by session 2.
    create or replace and compile
    java source named "FileAppendTest"
    as
    import java.io.File;
    import java.io.FileOutputStream;
    public class FileAppendTest {
      static public void append() {
        try {
          int vSomethingToWrite = 9;
          File vFile = new File("c:\\db_out.txt");
          FileOutputStream vAppendFile = new FileOutputStream(vFile, true);
          vAppendFile.write(vSomethingToWrite);
          vAppendFile.close();
        } catch (Exception e) {
          // let this test hide all errors
        }
      }
    }
    /
    create or replace and compile
    java source named "DirtyReadTest"
    as
    import java.io.File;
    import java.io.FileInputStream;
    public class DirtyReadTest {
      static public int read() {
        int vError = 0;
        try {
          File vFile = new File("c:\\db_out.txt");
          FileInputStream vReadFile = new FileInputStream(vFile);
          return vReadFile.read();
        } catch (Exception e) {
          return vError;
        }
      }
    }
    /
    create or replace procedure dirty_write as
    language java
    name 'FileAppendTest.append()';
    /
    create or replace function dirty_read return number as
    language java
    name 'DirtyReadTest.read() return integer';
    /
    -- as I mentioned earlier: only try this code for the fun of it. Don't consider it for anything remotely usable in an application.
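    Usage, as described above - a quick two-session sketch:
    -- Session 1 (note: no commit is ever issued):
    exec dirty_write
    -- Session 2 - reads what session 1 "wrote", despite the missing commit:
    select dirty_read from dual;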

  • SSIS transaction isolation levels: Dirty read

    Step 1: I set the IsolationLevel property to ReadCommitted at the
    Data Flow Task (please check image 1 below). I can still read the data in SQL Server.
    Step 2: I set the IsolationLevel property to ReadCommitted at the package level (please check image 2 below). I can still read the data in SQL Server.
    Please help me: how do I set this up so that dirty reads are blocked?
    Maheswaran Jayaraman

    Thanks a lot for your reply.
    I'm processing the data in database 'A'. After the processing is done, I'm transferring around 300,000 records from database 'A' to database 'B'. While the data is being transferred, the end user should not read the partial data. How do I do that?
    I tried Chaos & ReadUncommitted; it's still not working. Please help.
    Maheswaran Jayaraman
    Don't play with the isolation levels in this case.
    You just need to encapsulate the operation in a Sequence Container so that if something fails you roll back the whole thing as a unit of work.
    Arthur
    MyBlog
    Twitter

  • Dirty read

    Hello,
    When I update or insert data in my table and do not commit the changes, and then query the same table, I am able to view the uncommitted data I made, but other users are not able to view it.
    That the other session is not able to view my uncommitted data is understandable because of read committed isolation,
    but how am I able to read the uncommitted data in my own session?
    How does it happen?
    Thanks.

    Oracle's read consistency model makes it happen.
    Your server process on the post-update select encounters data in the buffer cache which is uncommitted. Normally, that would prompt your server process to copy those buffers, locate the undo you generated when you first modified them, and then use that undo to rollback the changes you made in the copies of the buffers. The read of the data would then be from the copied buffers, and thus it would appear as if you were reading unchanged, pre-update data.
    What happens in the case you're asking about, however, is that your server process encounters changed data and notices that the session that changed the data is the same session as is querying it... and accordingly doesn't bother with the copy-and-rollback shenanigans that would take place if it was anyone else reading the data.
    It's done on a per-session basis, not a username basis. It's the SID and SERIAL# of your session that determines access to 'dirty' data. If Scott logs on twice, for example, and in one session updates EMP without committing, then in his second session, he will not be able to see the changes... even though it's the same "user", it's not the same session.
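    A small sketch of the Scott example just described (standard SCOTT.EMP demo table):
    -- Session 1 (SCOTT):
    update emp set sal = sal + 100 where empno = 7369;  -- no commit yet
    select sal from emp where empno = 7369;             -- sees the new, uncommitted value
    -- Session 2 (SCOTT again, but a separate session):
    select sal from emp where empno = 7369;             -- still sees the old, committed value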

  • MS ACCESS dirty reads

    I have a JSP page that adds (or updates) a record in an MS Access table and then forwards control to a second JSP page that queries the table. The problem is that the second page does not find the new record (except occasionally). If the page is refreshed, the record appears. Autocommit is set to true.
    Other than delaying the JSP so Access has more time to make the update, does anybody have any suggestions on how this can be solved?
    Thanks
    Richard

    I don't know if it's the same when you use JSP,
    but when used from ordinary applications Access seems to have this bug: the last inserted or updated row doesn't show its change unless
    1) the connection is closed, or
    2) the next select is done on this table.
    2) seems to be the best workaround: simply do a dummy select each time after finishing the changes.
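    A sketch of what workaround 2) could look like, with a hypothetical table name - the dummy select touches the same table right after the change:
    INSERT INTO orders (id, descr) VALUES (42, 'new row');
    SELECT COUNT(*) FROM orders;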

  • Dirty reads within a transaction

    Hello,
    I have a method which inserts a record into a table and returns the primary key of the record, which is generated via a trigger on insert. The problem is that I cannot read the row within the same transaction. Unless I do an explicit commit, the select query keeps returning 0 records.
    I am using the same Statement object to execute both the insert and select queries. How can I read uncommitted data within the same transaction? I thought using the same Statement object would allow me to do that.
    I tried to set the transaction isolation level to read uncommitted before the insert statement, but the Oracle 9i drivers allow only the read committed and serializable isolation levels.
    Any help is appreciated.
    Here is the code.
    //insert
    connection.setAutoCommit(false);
    String insert_query = " insert into employee " .....
    stmt = connection.createStatement();
    stmt.executeUpdate(insert_query);
    //select
    select_query = "select employee_id from employee where ... ";
    ResultSet rs = stmt.executeQuery(select_query);
    int primary_key = 0;
    if (rs.next()) {
        primary_key = rs.getInt(1);
    }
    stmt.close();
    connection.commit();
    connection.setAutoCommit(true);

    I tried the following, using 3 separate statements (pseudo code) and the statement defaults for cursor types. This worked as designed (I was curious about the need for a single statement; it does not look like it is required):
    1) Open 1 connection
    2) Set autocommit = false
    3) Create statement, then select * from mytable (row count = 3)
    4) Create statement, then insert into mytable (update count = 1)
    5) Create statement, then select * from mytable (row count = 4)
    I didn't take this as far as creating the trigger, and yes, I know that could definitely have an effect on the overall behavior. I'm including the code so there is no confusion about what I did.
    It would have been my hypothesis that the trigger would have no effect on reading uncommitted data within the same program / transaction. If there is any way you could post the code that shows the exact problem, I could probably do a better job of reproducing it in my environment.
    Here is the code I used:
    import java.sql.*;
    import java.util.*;
    import java.text.*;
    class dbtest2 {
        public static void main (String args []) throws SQLException {
            try {
                String insert = "INSERT INTO TEST2 VALUES(1,'A',SYSDATE)";
                String select = "SELECT " +
                                    "COL1, " +
                                    "COL2, " +
                                    "TO_CHAR(COL3,'YYYY-MM-DD HH24:MI:SS') COL3DATE " +
                                "FROM TEST2";
                String col1, col2, col3;
                DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
                Connection conn = DriverManager.getConnection (
                                     "jdbc:oracle:thin:@riker:1521:clt12fva",
                                     "clarit",
                                     "clarit");
                conn.setAutoCommit(false);
                //  Select all rows from the table and display them
                Statement statement1 = conn.createStatement();
                ResultSet rs1 = statement1.executeQuery(select);
                int rowctr1 = 0;
                while(rs1.next()) {
                    rowctr1++;
                    col1 = rs1.getString(rs1.findColumn("COL1"));
                    col2 = rs1.getString(rs1.findColumn("COL2"));
                    col3 = rs1.getString(rs1.findColumn("COL3DATE"));
                    System.out.println("Row="+rowctr1+" col1="+col1+" col2="+col2+" col3="+col3);
                }
                // Insert a new row with autocommit = false
                Statement statement2 = conn.createStatement();
                int updateCount = statement2.executeUpdate(insert);
                System.out.println("updateCount="+updateCount);
                //  Select all rows again; the uncommitted insert is visible to this session
                Statement statement3 = conn.createStatement();
                ResultSet rs3 = statement3.executeQuery(select);
                int rowctr3 = 0;
                while(rs3.next()) {
                    rowctr3++;
                    col1 = rs3.getString(rs3.findColumn("COL1"));
                    col2 = rs3.getString(rs3.findColumn("COL2"));
                    col3 = rs3.getString(rs3.findColumn("COL3DATE"));
                    System.out.println("Row="+rowctr3+" col1="+col1+" col2="+col2+" col3="+col3);
                }
                //  Rollback the changes and close the JDBC objects
                conn.rollback();
                rs1.close();
                rs3.close();
                statement1.close();
                statement2.close();
                statement3.close();
            } catch (Exception e) {
                System.out.println("Java Exception caught, error message="+e.getMessage());
            }
        }
    }
    Console Results:
    Row=1 col1=123.123 col2=1.01 col3=2002-12-31 23:04:01
    Row=2 col1=3333.333 col2=5 col3=2002-12-31 23:04:01
    Row=3 col1=5 col2=10000 col3=2002-12-31 23:04:01
    updateCount=1
    Row=1 col1=123.123 col2=1.01 col3=2002-12-31 23:04:01
    Row=2 col1=3333.333 col2=5 col3=2002-12-31 23:04:01
    Row=3 col1=5 col2=10000 col3=2002-12-31 23:04:01
    Row=4 col1=1 col2=A col3=2002-12-27 00:24:10

  • Is the isolation level setting (Dirty Read option) working fine for DB2?

    Hello Gurus,
    We are building OBIEE reports on a DB2 OLTP database. As per my understanding, if we select the isolation level "Dirty Read" it should not lock the tables. But in our case it is locking the tables, causing others (application users) to be unable to update the data. Please let us know if you have faced the same issue or know of any solution. Our production migration is stopped because of this issue.
    Thanks,
    Anil
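    For reference, plain DB2 SQL requests an uncommitted read per statement with the UR isolation clause; a minimal sketch on a hypothetical table:
    select * from app.orders with ur   -- UR = uncommitted read; takes no row locks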

    Just a follow-up: I think the isolation level is perhaps being set to REPEATABLE_READ,
    since that is what seems to be happening. The value from the first read is maintained
    through subsequent reads in the same transaction.
    lance
    "Lance" <[email protected]> wrote:
    >
    >I have a Message Driven Bean (MDB) that is container managed, and its transaction
    >isolation is set to TRANSACTION_READ_COMMITTED in weblogic-ejb-jar.xml, and that
    >seems to work fine. If I look at an entity bean in onMessage which is updated/committed
    >outside the transaction, I can see the updates no problem.
    >
    >Now the problem is this: inside the onMessage method, the MDB creates a new
    >instance of a class. This class starts up its own UserTransaction (using
    >(UserTransaction)new InitialContext().lookup("javax.transaction.UserTransaction"))
    >and goes into a loop working away. Inside the loop it is inspecting a value on an
    >entity bean. The class never sees any updates to this bean which are made outside
    >this new UserTransaction.
    >
    >It looks to me that the UserTransaction that the class is getting has a different
    >isolation level (serialized?). Is there a way to set the isolation level for
    >a UserTransaction?
    >
    >Any help would be great!
    >
    >lance

  • About read consistency

    Dear Guys,
    I am getting confused about Oracle's read-consistent image.
    Please clarify a little.
    Suppose user A fires a query that takes 20 minutes to retrieve data. While it is processing, say 5 minutes in, user B updates some of the rows and commits.
    Now please tell me: after 20 minutes, will A get the updated rows, or the same image as when the query started, i.e. the old row data?

    user11221081 wrote:
    > Suppose user A fires a query that takes 20 minutes to retrieve data. While it is processing, say 5 minutes in, user B updates some of the rows and commits.
    > Now please tell me: after 20 minutes, will A get the updated rows, or the same image as when the query started, i.e. the old row data?
    You should tell us what should happen! Always remember that databases work following the ACID properties, where C stands for consistency. This means that you must never see what is called a dirty read: it must never be the case that you see two different sets of results in a single fetch. If such a situation arises, the database must ensure that you are shown the image of the data that is consistent with the time your query started. This is why you need the read-consistent image, and why your database must follow some transaction isolation level - which in the Oracle database is read committed by default. The result is that a reader never worries about a writer and a writer never worries about a reader (the word is "waits", but I changed it to "worries" to make the point clear).
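    A minimal timeline sketch of your scenario (hypothetical table t; Oracle's read committed behaviour):
    -- 10:00  user A starts the long-running query:
    select * from t;                          -- will take ~20 minutes
    -- 10:05  user B updates some of the rows and commits:
    update t set col = 'new' where id < 100;
    commit;
    -- 10:20  A's query completes: every row it returned reflects the data as it
    --        stood at 10:00 (the old values), reconstructed from undo where necessary.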
    If this doesn't make the things clear to you, read the link that Srini has given to you already.
    HTH
    Aman....

  • Using LockModeType.READ/WRITE

    What circumstances make it useful to lock individual entity instances using LockModeType.READ/WRITE? I figured that setting isolation levels would have been enough, but I guess there's a need for further locking?
    When you access an entity, isn't it locked automatically unless you're using optimistic locking? I'm pretty lost on this issue...
    Could somebody please describe when would you lock individual entities?
    Thanks.

    Hi, this is my understanding, although I am not an expert.
    When you have READ_COMMITTED mode it means T1 can read a value, T2 can modify the same value and then commit. If T1 reads it again it will get the new value, or if it commits its own update it will overwrite the previous change (a non-repeatable read). So you wouldn't get a dirty read, although you would get non-repeatable reads.
    With LockMode you get extra features. If transaction T1 calls lock(entity, LockModeType.READ) on a versioned object, the entity manager must ensure that neither of the following phenomena can occur:
    P1 (Dirty read): Transaction T1 modifies a row. Another transaction T2 then reads that row and obtains the modified value, before T1 has committed or rolled back. Transaction T2 eventually commits successfully; it does not matter whether T1 commits or rolls back and whether it does so before or after T2 commits.
    P2 (Non-repeatable read): Transaction T1 reads a row. Another transaction T2 then modifies or deletes that row, before T1 has committed. Both transactions eventually commit successfully.

  • SimpleValidator only validates versions on updates with repeatable read?!

    I was testing the SimpleValidator, and my example seemed to indicate that it ONLY checks that the enlisted (old) version is the same as the locked (current) version for UPDATED objects if the isolation level is repeatable read (or presumably higher)! I would have expected this check to be done no matter what the isolation level was... I thought it was ONLY reads that were not verified at the read committed isolation level compared to repeatable read...
    It would also be nice to know if one can change how versions are calculated from cache objects simply by overriding the calculateVersion method (my tests indicate that this is possible, but I would like to get it confirmed!). After introducing POF (using separate serializers) I was very happy to avoid having my cached business objects implement Coherence classes or interfaces, and I would not like to break this again by using Versionable....
    /Magnus
    Edited by: MagnusE on Jan 17, 2010 4:09 PM

    I also rewrote the original program using only transaction maps (my first version assumed that I could create detectable conflicts using dirty reads/writes outside of a transaction map just as well as with complete and fully committed transaction maps), but this did not change anything either:
    package txtest;
    import com.tangosol.util.TransactionMap;
    import com.tangosol.util.Versionable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.io.Serializable;
    public class Test1_ver2 {
        public static final class Person implements Versionable, Serializable {
            private int version;
            private final String name;
            private final int age;
            public Person(String name, int age, int version) {
                this.age = age;
                this.name = name;
                this.version = version;
            }
            public int getAge() {
                return age;
            }
            public String getName() {
                return name;
            }
            public Comparable getVersionIndicator() {
                return version;
            }
            public void incrementVersion() {
                version++;
            }
            public String toString() {
                return name + ", version = " + version;
            }
        }
        static final String CACHE_NAME = "dist-test";
        public static void main(String[] args) {
            try {
                // "Create" cache
                NamedCache cache = CacheFactory.getCache(CACHE_NAME);
                // Initialize cache
                cache.put(1, new Person("Foo", 23, 1));
                // Create transaction map 1 and select isolation level
                TransactionMap tx1 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx1.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                // If I use TRANSACTION_GET_COMMITTED no exception is thrown, but if TRANSACTION_REPEATABLE_GET is used
                // the validation throws an exception as expected...
                tx1.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                //tx1.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator1 = new com.tangosol.run.jca.SimpleValidator();
                validator1.setNextValidator(tx1.getValidator());
                tx1.setValidator(validator1);
                // Start transaction
                tx1.begin();
                // Read an object from tx1...
                Person p1 = (Person) tx1.get(1);
                TransactionMap tx2 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx2.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                tx2.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator2 = new com.tangosol.run.jca.SimpleValidator();
                validator2.setNextValidator(tx2.getValidator());
                tx2.setValidator(validator2);
                tx2.begin();
                // Read the same object using tx2, update it and write it back
                Person p2 = (Person) tx2.get(1);
                tx2.put(1, new Person(p2.getName(), p2.getAge() + 1, ((Integer) p2.getVersionIndicator()) + 1));
                tx2.prepare();
                tx2.commit();
                tx1.put(1, new Person("Fum", p1.getAge(), ((Integer) p1.getVersionIndicator()) + 1));
                // Prepare and commit
                tx1.prepare();
                tx1.commit();
            } catch (Throwable t) {
                t.printStackTrace();
            }
        }
    }
    Edited by: MagnusE on Jan 18, 2010 10:41 AM

  • Phantom read

    Phantom reads:
    A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another committed transaction in the meantime.
    It occurs when one transaction begins reading data and another inserts rows into, or deletes rows from, the table being read.
    Question:
    What output do we get when there is a phantom read?
    Let me make a demo example which will mimic the above scenario.
    Say I am a bank customer with an account, and I am searching for my transaction records for the past 2 months via net banking's online statement search option.
    Now, in the meantime (concurrently), a banker deletes some of my transaction records.
    I think I have framed the phantom read problem correctly now.
    The question is: what will the output be when such a situation really occurs?
    Note that I am assuming this is not a dirty read case - that is, the banker has committed his activity.
    So what output would I expect when there is a phantom read?

    A dirty read would be when you read data written by another transaction that is later rolled back.
    A phantom read seems to be reading data that didn't exist at the time the query was made (in the case of inserting). It doesn't sound very dangerous to me compared to a dirty read.
    For your bank example, you would just see one transaction fewer in your records.
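    A minimal sketch of the bank scenario under read committed (hypothetical table and column names):
    -- Customer's session, one transaction:
    select count(*) from account_tx where posted >= date '2011-06-01';  -- say, 40 rows
    -- Banker's session, meanwhile:
    delete from account_tx where tx_id = 1234;
    commit;
    -- Customer's session, same transaction, repeats the search:
    select count(*) from account_tx where posted >= date '2011-06-01';  -- 39 rows: one has vanished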
