Phantom read

Phantom reads:
A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed because another transaction committed in the meantime.
This occurs when one transaction begins reading data and another transaction inserts rows into, or deletes rows from, the table being read and commits.
Question:
What output do we get when there is a phantom read?
Let me make a demo example that mimics the scenario above.
Say I am a bank customer with an account, and I am searching for my transaction records for the past 2 months via the net-banking online statement search option.
Now, in the meantime (concurrently), a banker deletes some of my transaction records.
I think I have framed the phantom read problem correctly now.
The question is: what will the output be when such a situation really occurs?
Note that I am assuming this is not a dirty read case, i.e. the banker has committed his activity.
So what output would I expect when there is a phantom read?

A dirty read would be when you read data written by another transaction that is later rolled back.
A phantom read is reading data that didn't exist when the query was first executed (in the case of an insert), or no longer seeing data that did exist (in the case of a delete), because another transaction committed matching changes between two executions of the query. It doesn't sound very dangerous to me, compared to a dirty read.
For your bank example, you would just see one transaction fewer in your records the second time you run the search.
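For concreteness, here is a minimal sketch of that timeline in SQL Server-style syntax, with a made-up account_tx table; under READ COMMITTED there is no error or warning, the second result set just shrinks:
-- Session 1: the customer's statement search
begin tran
select * from account_tx
where account_no = 12345
  and tx_date >= dateadd(month, -2, getdate())   -- returns, say, 20 rows
-- Session 2: the banker, meanwhile (autocommit, so the delete commits at once)
delete from account_tx where tx_id = 987          -- one of those 20 rows
-- Session 1: repeats the same search inside the same transaction
select * from account_tx
where account_no = 12345
  and tx_date >= dateadd(month, -2, getdate())   -- now returns only 19 rows
commit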

Similar Messages

  • Phantom Reads / Data Concurrency

    I am using OWB 10.2.
    I am encountering phantom reads within some of my OWB maps. These phantom reads are caused by activity on the source system while I am extracting the data from the source tables. The problem is that my map does an update and then an insert. The activity on the source occurs after the update transaction has started but before the insert transaction.
    Does anyone know of a setting in OWB that allows serializable transactions?
    I thought of using the command "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" and changing the Commit Control to Manual for the map, but can I put the SET TRANSACTION command in a Pre-Mapping process? Or does it have to be in a SQL*Plus operator in the Process Flow?
    I cannot find any information about Data Concurrency in the OWB documentation and am surprised that no one has encountered this problem before.
    Help!

    Hi,
    Check the below links:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:27523665852829
    http://www.experts-exchange.com/Database/Oracle/Q_20932242.html
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com
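    If the Pre-Mapping process maps to a PL/SQL procedure (as I believe it does in OWB), one option to try is to issue the statement there. This is only a sketch under that assumption, not a confirmed OWB recipe; note that SET TRANSACTION must be the first statement of the transaction, so Commit Control would indeed need to be Manual:
    -- hypothetical pre-mapping procedure (the name is made up)
    create or replace procedure premap_set_serializable as
    begin
      -- applies to the current transaction only; must run before any DML
      set transaction isolation level serializable;
      -- alternative: applies to every subsequent transaction in this session
      -- execute immediate 'alter session set isolation_level = serializable';
    end;
    /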

  • Nonrepeatable Reads and Phantom Reads

    Guys, I am very confused about the definitions of Nonrepeatable Reads and Phantom Reads when I read the BEA Documentation at http://e-docs.bea.com/workshop/docs81/doc/en/wls/guide/advanced/EJBsAndTransactions.html
    The definitions were quite confusing to me because they seem to describe the same thing, but they are obviously not the same thing. The problem is seeing the difference between them.
    What is that difference? Could I have a practical example of that difference?
    Thanks everyone

    Non-repeatable reads: when a row is updated in the database, two reads of that row within the same transaction may not return the same data.
    Phantom reads occur when a new row is inserted into the database, so two executions of the same SELECT may not return the same set of rows.
    Check the link below for example queries; hope it helps.
    http://en.wikipedia.org/wiki/Isolation_(computer_science)#Phantom_Reads
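    For a concrete contrast, here is a minimal two-session sketch in SQL Server-style syntax (table and values are made up): the UPDATE causes the non-repeatable read, the INSERT causes the phantom.
    create table t1 (c1 int null, c2 varchar(50) null)
    insert t1(c1, c2) values (2, 'two')
    insert t1(c1, c2) values (3, 'three')
    -- Session 1, running under READ COMMITTED
    begin tran
    select c2 from t1 where c1 = 2            -- returns 'two'
    select * from t1 where c2 like 't%'       -- returns 2 rows
    -- Session 2, meanwhile (autocommit, so each statement commits at once):
    update t1 set c2 = 'twenty' where c1 = 2
    insert t1(c1, c2) values (10, 'ten')
    -- Session 1 repeats both queries inside the same transaction:
    select c2 from t1 where c1 = 2            -- now returns 'twenty' -> non-repeatable read
    select * from t1 where c2 like 't%'       -- now returns 3 rows   -> phantom read
    commit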

  • Dirty reads vs phantom reads

    Hi,
    What is the difference between a "Dirty Read" and "Phantom Reads". Can anyone explain in brief or give an example?
    Thanks in advance.

    A dirty read is a read of uncommitted data, which may or may not end up existing in the table. A phantom is a ghost record that doesn't appear in a transaction's initial read but appears when the same query is run again, because some other transaction has inserted rows matching the criteria in the meantime.
    Here are examples of both:
    --Dirty read example
    create table t1 (c1 int null, c2 varchar(50) null)
    go
    insert t1(c1, c2) values (1, 'one')
    insert t1(c1, c2) values (2, 'two')
    insert t1(c1, c2) values (3, 'three')
    insert t1(c1, c2) values (4, 'four')
    begin tran
    update t1 set c2 = 'zero'
    -- leave this UPDATE running (uncommitted) in the current query window, open a new query window and run the statement below:
    --you will see all 4 rows having a value of 'zero' in column c2, which is uncommitted data
    select * from t1 with (nolock)
    --back in the first window, uncomment and run the ROLLBACK below, then rerun the SELECT above
    --and you will see the previous values of one, two, three and four
    --rollback tran
    Here's an example of a phantom read:
    --Phantom read example
    if object_id('t1') is not null drop table t1   -- recreate the table from the dirty read example
    create table t1 (c1 int null, c2 varchar(50) null)
    go
    insert t1(c1, c2) values (1, 'one')
    insert t1(c1, c2) values (2, 'two')
    insert t1(c1, c2) values (3, 'three')
    insert t1(c1, c2) values (4, 'four')
    -- open a second query window and run the transaction below (begin tran plus the first SELECT) there:
    --you will see the 2 rows whose value in column c2 starts with the character t
    begin tran
    select * from t1
    where c2 like 't%'
    --now insert the new value of ten (matching the query criteria - starts with t) from the first query window
    insert t1(c1, c2) values (10, 'ten')
    --Run the below statement again from the second query window that is open and you will see the new row
    --that got inserted - so 3 rows are seen including the newly inserted ten
    --this new row is a phantom read
    select * from t1
    where c2 like 't%'
    --finally, uncomment and run the ROLLBACK below in the second query window to end its transaction
    --rollback tran
    Satish Kartan www.sqlfood.com

  • Nonrepeatable reads & phantom reads

    hi
    What's the difference between the nonrepeatable (fuzzy) read and phantom read phenomena?
    From the docs I understand they both look the same, but nonrepeatable reads signify that
    modified or deleted rows committed by other transactions can be seen by the same query
    when it is run again,
    whereas in the case of a phantom read, when the query is run a second time
    it can see newly inserted rows committed by other transactions.
    Correct me if I am wrong. If possible, illustrate with an example.
    bye
    Sushant

    Non-repeatable reads: when a row is updated in the database, two reads of that row within the same transaction may not return the same data.
    Phantom reads occur when a new row is inserted into the database, so two executions of the same SELECT may not return the same set of rows.
    Check the link below for example queries; hope it helps.
    http://en.wikipedia.org/wiki/Isolation_(computer_science)#Phantom_Reads

  • Read committed isolation level must not produce nonrepeatable read

    Hi,
    I am a SQL Server DBA, but I am trying to improve myself in Oracle too.
    I read about isolation levels in Oracle, and it says that in the read committed isolation level, Oracle guarantees a result set that contains only records committed as of the beginning of the read operation.
    If that is guaranteed, how can a nonrepeatable read occur? It should not occur then.
    I think I misunderstood something.
    Can you explain it to me?
    Thanks

    >
    I read about isolation levels in Oracle, and it says that in the read committed isolation level, Oracle guarantees a result set that contains only records committed as of the beginning of the read operation.
    If that is guaranteed, how can a nonrepeatable read occur? It should not occur then.
    >
    See the 'Multiversion Concurrency Control' section in the database concepts doc. It discusses this and has a simple diagram (can't post it) that shows it.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm
    >
    As a query enters the execution stage, the current system change number (SCN) is determined. In Figure 13-1, this system change number is 10023. As data blocks are read on behalf of the query, only blocks written with the observed SCN are used. Blocks with changed data (more recent SCNs) are reconstructed from data in the rollback segments, and the reconstructed data is returned for the query. Therefore, each query returns all committed data with respect to the SCN recorded at the time that query execution began. Changes of other transactions that occur during a query's execution are not observed, guaranteeing that consistent data is returned for each query.
    Statement-Level Read Consistency
    Oracle Database always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time—the time that the query began. Therefore, a query never sees dirty data or any of the changes made by transactions that commit during query execution. As query execution proceeds, only data committed before the query began is visible to the query. The query does not see changes committed after statement execution begins.
    >
    The first sentence is the key:
    >
    As a query enters the execution stage, the current system change number (SCN) is determined.
    >
    Oracle will only query data AS OF that SCN that was determined.
    If you now rerun the query Oracle repeats the process: it determines the SCN again which could be newer if other users have committed changes.
    That second execution of the query may find that some rows have been modified or even deleted and that new rows have been inserted: nonrepeatable read.
    If you use the SERIALIZABLE isolation level then that second query will use the SCN that was determined at the very START of the transaction. For the simple example above it means the second query would use the SAME SCN that the first query used: so the same data would be returned.
    Table 13-2 in that doc (a few pages down) lists the isolation levels
    >
    Read committed
    This is the default transaction isolation level. Each query executed by a transaction sees only data that was committed before the query (not the transaction) began. An Oracle Database query never reads dirty (uncommitted) data.
    Because Oracle Database does not prevent other transactions from modifying the data read by a query, that data can be changed by other transactions between two executions of the query. Thus, a transaction that runs a given query twice can experience both nonrepeatable read and phantoms.
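    To make the difference concrete, here is a minimal two-session sketch in Oracle-style SQL (the table and values are made up):
    -- Session 1, default READ COMMITTED
    select count(*) from orders where status = 'OPEN';   -- say 10, consistent as of the SCN taken when this query starts
    -- Session 2, in the meantime:
    update orders set status = 'CLOSED' where order_id = 42;
    commit;
    -- Session 1, still in the same transaction, reruns the query; a new SCN is taken
    select count(*) from orders where status = 'OPEN';   -- now 9: a nonrepeatable read
    -- Under SERIALIZABLE, both executions would use the SCN from the start of the
    -- transaction and return the same 10 rows:
    -- set transaction isolation level serializable;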

  • A better way to syncronize objects in servlets

    Hi,
    I am working on a webapp that has a servlet that invokes an instance of an object that accesses our Sybase server.
    What is the best and most secure way to synchronize in order to avoid problems at the database?
    I have looked at all of these, but I am still confused:
    Synchronize the servlet:
    public class LoginHandler extends HttpServlet implements SingleThreadModel
    Synchronize inside the object:
    private DBManager dba = DBManager.getInstance();
    private String GetUserCode(String Username, String Password) {
        synchronized (this) {
            DBResults rsResult = dba.ExecQuery(SQLQuery);
        }
    }
    Synchronize the singleton accessor in the class:
    public class DBManager implements Serializable {
        private static DBManager _instance = new DBManager();
        public static synchronized DBManager getInstance() {
            if (_instance == null) _instance = new DBManager();
            return _instance;
        }
    }
    Synchronize the db connection:
    try {
        synchronized (ds) {
            con = ds.getConnection();
            st = con.createStatement();
            rs = st.executeQuery(sqlQuery);
            if (rs.next()) rsValue = rs.getString(1);
        }
    } catch (Exception e) {
    }
    Synchronize the method:
    public static synchronized String getDBValue(String SQL)
    Thanks very much,
    Lorenzo

    I would not synchronize the servlet with SingleThreadModel
    I would not have all this code in the servlet at all. (Servlets handle HTTP requests, after all.) Would you want to avoid problems at the database if you didn't have a Web interface? I think so. In that case, take care of the problem in the persistence layer, as close to the database as possible.
    You don't want to do anything that merely increases your sense of security while killing your performance.
    I think there's a difference between synchronization and isolation. Read up on the java.sql.Connection.setTransactionIsolation(int level) method. I believe that's pertinent.
    I think you should investigate pessimistic vs optimistic locking, isolation, dirty reads, phantom reads and writes. I'd enlist the help of the database and its admin as much as possible.

  • Locking issues with transaction-isolation levels

              I believe that my program is suffering from some sort of deadlock, and I was hoping
              for some feedback.
              I am helping to develop a trading system
              using EJBs, Oracle 9i, and Bea Weblogic 7.0. The system provides an entity EJB
              called LiveOrder that exposes several finder methods, most of which return java.util.Collections
              of LiveOrder EJBs.
              In weblogic-ejb-jar.xml, I have set the transaction isolation-levels for these
              finders to TRANSACTION_READ_COMMITTED_FOR_UPDATE (b/c TRANSACTION_SERIALIZABLE
              isn't really supported by Oracle), in an effort to eliminate phantom reads, which
              occur frequently if I do not use this isolation level. These finders all use transaction
              attribute 'Required'.
              It is my understanding that any transaction that calls any of these finders either
              will lock the database if no other transaction already owns the lock, or will
              wait until the lock is released if another transaction owns that lock. It also
              is my understanding that a transaction that owns a lock will always release any
              locks acquired upon expiration of that transaction (whether that be via commit
              or via rollback).
              However, this doesn't always appear to be the case: I have noticed occasionally that
              several clients "hang," as they wait for the lock that, for some reason, is not
              being released by its transaction. There do not appear to be any exceptions thrown
              by the system prior to the system hanging, and the Weblogic administration tool
              states that all transactions have been committed.
              If it helps, I have included the general algorithm for the main (i.e., most expensive)
              transaction:
              1. a client calls a stateless session EJB's processOrder method (which should
              implicitly start a new transaction, b/c this method has the attribute 'RequiresNew')
              2. the transaction invokes the LiveOrder finder method (this should lock the DB,
              so subsequent callers should block until the lock is released).
              3. the transaction invokes another LiveOrder finder method, returning a separate
              set of data.
              4. the transaction invokes a finder method from a separate entity EJB (called
              Security), which maps to a "read-only" table in the DB (default transaction-isolation
              level, Required attribute).
              5. the transaction invokes a finder method from yet another separate entity EJB
              (called SecurityMarketValues), which maps to some other table (not read-only)
              in the DB (again, default transaction-isolation level, Required attribute).
              6. the transaction writes to the SecurityMarketValues entity EJB.
              7. the transaction writes to the LiveOrders retrieved from steps 2 and 3.
              8. the transaction ends by exiting method processOrder (thus releasing the locks
              on the LiveOrder table in the DB).
              In the system, there also exist other transactions that occasionally call the
              LiveOrder EJB finder methods, but only briefly to take a "snapshot" of the live
              order table (i.e., these transactions do not make calls to other DB tables, and
              close their transactions almost immediately after starting them).
              Like I mentioned before, the system sometimes works, and sometimes it hangs. Any
              ideas? I'm running out...
              

    Jon,
              If there was an Oracle deadlock the DB would resolve it momentarily and
              will ultimately choose one transaction and throw an exception so it's
              not a DB deadlock.
              Take a thread dump at the very moment your system seems to be hanging
              and look at what the threads are doing.
              From your description it's quite likely that those threads of
              yours that take snapshots of the data are disrupting the other transactions,
              so you may be surprised to find from the thread dumps that this is actually what
              happens -- those snapshot threads wait for some lock while holding locks
              needed by your other threads, and that just slows down the system.
              Regards,
              Dejan
              Jon Gadzik wrote:
              >[original post quoted in full above]

  • Need to create a transaction for multiple select statements?

    Hello,
    I am a newbie and have a question about database transactions, namely whether or not to enclose multiple select statements (and select statements only) in a transaction.
    My database is set to transaction isolation level 2: REPEATABLE READ, where dirty reads and non-repeatable reads are not allowed and only phantom reads are allowed.
    Now, in my code I have a number of methods that contain select statements only. Since they are merely select statements, which don't make any modifications to the data, I am not sure whether I am supposed to enclose them in a transaction.
    However, if I don't put them into a transaction, will the transaction isolation level take effect automatically when another user is modifying the data that I am reading? In other words, I need to make sure the select statements never do either a dirty read or a non-repeatable read. But I am not sure it is necessary to enclose multiple select statements in a transaction, since I believe putting them into a transaction will place locks on the data being read, which may reduce the concurrency of my application.
    Any help/advice would be very much appreciated.
    Duane

    You might want to try asking this on a forum that is specific to your database. I suspect the answer can vary depending on the database and probably requires in-depth knowledge of what the database does.
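    That said, in most databases the usual pattern is simply to bracket the related SELECTs in one explicit transaction so that the configured isolation level applies across them; outside a transaction each SELECT is its own transaction, and there is nothing for REPEATABLE READ to hold between the two reads. A minimal sketch in SQL Server-style syntax (table and column names are made up):
    set transaction isolation level repeatable read;
    begin transaction;
        select balance from account where account_no = 12345;   -- read 1
        -- ... other read-only work ...
        select balance from account where account_no = 12345;   -- read 2 is guaranteed to match read 1
    commit transaction;
    The trade-off the poster worries about is real: under lock-based implementations the shared locks taken by read 1 are held until commit, which can reduce concurrency for writers.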

  • Isolation Level of SQL Server when committing statements

    Dear All,
    Can someone please tell me whether Read Committed and
    Auto Commit mean the same thing with respect to transaction isolation level?
    I assume that if SQL Server's default is Auto Commit, then it commits all transactions whether we use
    the "Commit Transaction" command or not.
    Thank-you
    SQL75

    >We write T-SQL in both, right?
    Sorry, I didn't understand what you meant by that.
    Isolation level and transaction management are two different aspects of T-SQL.
    The former describes how far a transaction can proceed without intervention from other parallel
    transactions. Under Read Committed, no other transaction can read data that is being modified by the current transaction until it is done, i.e. committed or rolled back. But other transactions may still insert additional rows that fall within the range of data the first transaction
    is currently holding, causing phantom reads.
    Transaction management, on the other hand, is about whether you explicitly mark the start and end of a transaction
    or whether the system implicitly determines the start and end of the transaction.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
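    To make the distinction concrete, here is a minimal T-SQL sketch (the table is made up); the isolation level is chosen independently of whether commits are automatic or explicit:
    -- Autocommit (the SQL Server default): each statement is its own transaction and commits on success
    insert into orders(order_id, status) values (1, 'OPEN');
    -- Explicit transaction: under READ COMMITTED, other readers see nothing until COMMIT
    set transaction isolation level read committed;   -- the default level, shown only for clarity
    begin transaction;
        insert into orders(order_id, status) values (2, 'OPEN');
        update orders set status = 'CLOSED' where order_id = 1;
    commit transaction;   -- with ROLLBACK instead, both changes would be undone together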

  • Isolation Level of iViews

    Hi,
    I have an iView, on an EP5 SP6, that currently has Isolation Level 4 enabled. I would like it to go to Isolation Level 3 since the size of the iView changes each day.
    The iView is inherited from a KM iView and I have been told that KM iViews need to run with Isolation Level 4. Is this really correct?
    Thanks in advance.
    Cheers
    Kris Kegel

  • Understanding isolation levels

    I'm having a difficult time understanding isolation levels. I know what problems each isolation level solves (i.e. dirty reads, nonrepeatable reads, phantom reads) and the classic textbook description of each level, but I simply cannot understand how it works. Let me explain my understanding of the various isolation levels:
    READ_UNCOMMITTED:
    The data read by TX 1 is held in a read lock, correct? TX 1 modifies the data. TX 2 therefore can read that data (but cannot write to it, due to the read lock). TX 2 can therefore read uncommitted data.
    READ_COMMITTED:
    The data read by TX 1 is held in a write lock, correct? TX 1 modifies the data. TX 2 cannot read the data because of the write lock, hence solving the dirty read problem. It cannot read the data TX 1 has so much as read during the course of its transaction (true?).
    REPEATABLE_READ:
    This is the biggest source of my confusion. How is it that the nonrepeatable read problem is not solved by READ_COMMITTED? TX 1 reads some data, TX 2 cannot read that data due to the read lock...but somehow it manages to modify TX 1's data so that when TX 1 repeats its query, it gets different results? How is this possible? And apart from the read and write lock of the two previous isolation levels, what does the database do to enforce this new isolation level on top of the other two?
    SERIALIZABLE:
    TX 2 cannot do anything without TX 1 finishing. (But isn't this similar to READ_COMMITTED, whereby TX 2 cannot even read TX 1's data until it has committed.) What is being meant by sequential execution here?
    I think a source of confusion is that I am unaware of whether isolation levels are applied to an entire database, to a transaction, to a query, or some other category. Can one transaction have one isolation level while another transaction has another isolation level?
    Any insight into isolation levels would be appreciated. Thanks.

    It depends upon the database implementation - there are various different ways to solve the problem.
    > I think a source of confusion is that I am unaware of whether isolation levels are applied to an entire database, to a transaction, to a query, or some other category.
    By definition the isolation levels are applied to the transaction. Other transactions can have other isolation levels.
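    For example, in SQL Server-style syntax each transaction chooses its own level independently of what other sessions are doing (a sketch, reusing the made-up t1 table from the earlier phantom-read example):
    -- Session A
    set transaction isolation level serializable;
    begin transaction;
        select * from t1 where c2 like 't%';   -- the matching range is protected for this transaction
    commit;
    -- Session B, running at the same time, can use a different level
    set transaction isolation level read committed;
    begin transaction;
        select * from t1 where c2 like 't%';   -- a repeat of this query may see phantoms
    commit;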

  • Transaction in atg

    Hi,
    What is a transaction in ATG, and why do we need to keep a block of code inside a transaction block?
    What will happen if an error occurs inside the transaction block?
    What will happen if there is no error inside the transaction block?
    Please give me one small example.

    Hi,
    A transaction block is required to avoid concurrent updates or phantom reads.
    Let's see an example:
    You log in to your profile and add item A to your order. At the same time, suppose your friend logs in with your credentials and adds item B. Now when you move to, say, the shipping or billing page you see two items, A and B, but you had ordered only one. This happens because there was no transaction control.
    Please go through below links for more details:
    http://docs.oracle.com/cd/E23507_01/Platform.20073/RepositoryGuide/html/s0502repositoriesandtransactions01.html
    http://docs.oracle.com/cd/E23095_01/Platform.93/ATGCommProgGuide/html/s1014managingtransactionsintheatgcomm01.html
    Regards,
    RahulV
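    ATG demarcates transactions in Java code, but the effect on the underlying database is the same as a plain SQL transaction, so (as a generic illustration only, with a made-up table) the two outcomes asked about above look like this:
    begin transaction;
        insert into order_item(order_id, item) values (100, 'A');
        insert into order_item(order_id, item) values (100, 'B');
    -- if any statement inside the block fails, roll back: neither row remains
    -- rollback transaction;
    commit transaction;   -- if no error occurs, both rows become visible to others together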

  • Foxit PhantomPDF for Mozilla Plugin Can't Update

    When I seek to open a file http://www. ....xyz.pdf within Firefox 17.0.1, message "This plugin is vulnerable and should be updated. Check (here) for updates. Click here (anywhere on screen) to activate the Foxit PhantomPDF for Mozilla plugin."
    When I go to the site that supposedly has links to updates, it takes me to Mozilla's plug-in check page. On that page, to the right of "Foxit PhantomPDF Plugin for Mozilla" it says "Unknown plugin." If I select the "research" option, it opens a Google search on the Foxit topic. The top link takes me to a Foxit FAQ page that doesn't seem to answer my question. It has links to an offer for a "trial" version of Foxit PhantomPDF reader, but I already have a licensed version of same, although the licensed version does not specifically say it includes the plugin for Mozilla.
    I've tried using the Firefox|Options|Applications to keep Firefox from opening "Foxit PhantomPDF Plugin for Mozilla" and instead to use Adobe Acrobat 8.0 for opening Adobe Acrobat 7.0 documents. Selecting Acrobat 8.0 or selecting "Always ask," (for all of the three Acrobat 7 items) doesn't stop Firefox from insisting on opening "Foxit PhantomPDF Plugin for Mozilla" I have tried disabling the plug-in "Foxit PhantomPDF Plugin for Mozilla", but Firefox still insists on using it.
    b.t.w. I run Firefox on another WinXP SR-3 machine that does NOT have licensed copy of Foxit Phantom, and it willingly opens Internet based pdf documents with Adobe Acrobat Reader. I've been unable to get Adobe Acrobat Reader to install on my main machine perhaps because it believes the presence of the license version of Acrobat 8.0 makes the reader unnecessary.
    I tried the manual disabler via "about:config" and went to:
    C:\Program Files\Mozilla Firefox\plugins
    and renamed the file with triple X's to this:
    XXXnpFoxitReaderPlugin.dll
    But FireFox still insists on using the Foxit Phantom Reader for Firefox
    I don't know the nature of the plugin vulnerability, but I like to avoid any possible vulnerability. How do I kill the "Foxit PhantomPDF Plugin for Mozilla" or where do I find a compatible update?
    Thanks in advance.

    In Options > Applications, look for Portable Document Format (PDF) and next to that should be pull-down menu. If you have a plugin that can handle it, there will be an item "Use [whatever] Plug-in", but there should also be an item that says "Use [whatever]". That last item is an application, not a plugin.

  • TRANSACTION_REPEATABLE_READ???   TRANSACTION_SERIALIZABLE???

              I went through the documentation for isolation levels TRANSACTION_REPEATABLE_READ
              and TRANSACTION_SERIALIZABLE, but I am not getting a clear picture of what they
              really mean. It appears that TRANSACTION_READ_COMMITTED already covers
              TRANSACTION_REPEATABLE_READ. I am also not sure when we would use TRANSACTION_REPEATABLE_READ
              and TRANSACTION_SERIALIZABLE.
              I've asked a lot of people, but I didn't get a clear explanation. Can a transaction
              "Guru" throw some light on it?
              thanks,
              Jegan
              Sembium Corporation
              

    Jegan, I'm no guru, but thought that this might help:
              TRANSACTION_SERIALIZABLE
              This level prohibits all of the following types of reads:
              Dirty reads, where a transaction reads a database row containing uncommitted
              changes from a second transaction.
              Nonrepeatable reads, where one transaction reads a row, a second transaction
              changes the same row, and the first transaction rereads the row and gets a
              different value.
              Phantom reads, where one transaction reads all rows that satisfy an SQL
              WHERE condition, a second transaction inserts a row that also satisfies the
              WHERE condition, and the first transaction applies the same WHERE condition
              and gets the row inserted by the second transaction.
              TRANSACTION_REPEATABLE_READ
              This level prohibits dirty reads and nonrepeatable reads, but it allows
              phantom reads.
              TRANSACTION_READ_COMMITTED
              This level prohibits dirty reads, but allows nonrepeatable reads and phantom
              reads.
              TRANSACTION_READ_UNCOMMITTED
              This level allows dirty reads, nonrepeatable reads, and phantom reads.
              "Jegan" <[email protected]> wrote in message
              news:[email protected]...
              >
              > I went through the documentation for isolation levels
              TRANSACTION_REPEATABLE_READ
              > and TRANSACTION_SERIALIZABLE. But i am not getting a clear picture of what
              they
              > really mean. It apprears like, Looks like TRANSACTION_READ_COMMITTED
              already covers
              > TRANSACTION_REPEATABLE_READ. Also not sure when do we use
              TRANSACTION_REPEATABLE_READ
              > and TRANSACTION_SERIALIZABLE.
              >
              > I've asked lot of people, i didn't get a clear explanation. Can a
              transaction
              > "Guru" throw some light on it?
              >
              > thanks,
              >
              > Jegan
              > Sembium Corporation
              
