Query Connection Isolation Levels

I can call SA_CONN_LOCKS and get information about all the connections, so I have all the connection numbers.
I now need a way to determine the ISOLATION LEVEL of each of those connections. We use a non-default ISOLATION LEVEL, and we want to make sure the isolation level is correct on all connections to stop the freezing.
I'm sure I can get the isolation level of the current connection (itself), so:
1. How do I get the isolation level of my own connection?
2. How do I get the isolation levels of all the other connections? (If I have to do it manually for each connection, that's not an issue for me.)
Please help!
We are setting an EXCLUSIVE lock on a table, which causes all the other workstations to freeze. I'm sure we set the isolation level to prevent this when opening all the connections, so for some reason something is getting set back, or the user is creating other custom connections and locking records.
Thank you.
Sybase SQL Anywhere 12.01
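For reference, SQL Anywhere exposes the isolation level as a connection property. A sketch, assuming the documented CONNECTION_PROPERTY function and sa_conn_properties system procedure (verify the column names against your 12.0.1 build):

```sql
-- Isolation level of the current connection:
SELECT CONNECTION_PROPERTY( 'isolation_level' );

-- Isolation level of another connection, by connection id
-- (42 here is a hypothetical conn_id taken from sa_conn_locks):
SELECT CONNECTION_PROPERTY( 'isolation_level', 42 );

-- Isolation level of every connection in one pass:
SELECT Number AS conn_id, Value AS isolation_level
FROM sa_conn_properties()
WHERE PropName = 'isolation_level';
```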

Hi Robert,
I have logged the incident with support.  We haven't moved much towards resolution on that side, but we have on our side.
If you're ever finding that an incident needs more attention to it or that you are not receiving an urgent reply from SAP, there is a mechanism to have the SAP Customer Interaction Center involved and for the incident to be flagged at higher support levels, if the production situation warrants it - see: https://websmp106.sap-ag.de/call1sap
The query runs just fine!   It runs in 3 seconds.
If we put those two lines back, the query freezes and can sit there forever.
On the surface, this doesn't sound like a locking issue, but more of a performance issue with those views/tables. Especially if you are using isolation level 0.
They asked us to do logging (your SAP support) and we did. We gave it to them, and they came back and told us it could be related to a known bug, described as "Issue with parallelism", and that we can test this by setting max_query_tasks to 1.
Yes, after reviewing the incident I can see that we requested, and you performed and provided, some request logging. After reviewing this information, we found that there were large table scans inside queries with intra-query parallelism involved (indicating a performance issue). To diagnose the performance issue further, we have requested graphical plans with statistics from these queries to see in more detail why we are picking those query plans at that time. The optimizer uses statistical analysis to pick plans, so we need to understand the statistics behind its decisions.
The above suggestion was provided as an interim diagnostic test in order to determine if the behaviour is related to known fixed issues related to intra-query parallelism on earlier builds of 12.0.1. Providing the specific build number of 12.0.1 would avoid needing to run the diagnostic test.
However, if you are convinced that locking is truly the root issue of your problem, please provide the full output of sa_locks() at this time - there should be 11 columns, including the 'lock_type' information which would provide the type of object that is being locked:
conn_name,conn_id,user_id,table_type,creator,table_name,index_id,lock_class,lock_duration,lock_type,row_identifier
If you have more than 1000 locks, you will need to increase the 'max_locks' argument to sa_locks().
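For example, a sketch assuming SQL Anywhere's named-parameter CALL syntax and the documented max_locks parameter of sa_locks (check the syntax against your build's documentation; the value 100000 is an arbitrary illustration):

```sql
-- Raise the cap so all locks are returned in one result set:
CALL sa_locks( max_locks = 100000 );
```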
I have also noted that you provided a response to us in the incident shortly after posting here. Please continue to use the support incident to work with us directly on the diagnostic discussion and your resolution options.
Regards,
Jeff Albion
SAP Active Global Support

Similar Messages

  • C How to Set Isolation Level in the Connection String

    How do we set the isolation level in the connection string using the "Microsoft OLE DB Provider for DB2 Version 4.0"?
    We are trying to move from Crystal Reports running against an IBM DB2 database on a mainframe to SSRS reporting. We downloaded the "Microsoft OLE DB Provider for DB2 Version 4.0" and worked with the DB2 administrator to create the
    packages. We only have access to the "Read Uncommitted" (MSUR001) package. We were able to connect and pull data before he removed access to the other packages, but now the connection keeps trying to use
    the "Cursor Stability" (MSCS001) package. How do we change the default to the "Read Uncommitted" (MSUR001) package? Since it keeps defaulting to the other package
    we can't connect at all, so we can't set it in the T-SQL query; it has to be set at the connection-string level.

    Hi Dannyboy1263,
    According to your description, you want to set the Transaction Isolation Level in Connection String. Right?
    In Reporting Services, the connection string for an OLE DB connection can only contain Provider, Data Source, and Initial Catalog. There is no property for setting the transaction isolation level in the connection string. Based on my knowledge, we can
    only set the transaction isolation level at the query level, or set it in code (C#, VB, Java...) via the Connection object's property. So unfortunately your requirement can't currently be achieved.
    Reference:
    OLE DB Connection Type (SSRS)
    Data Connections, Data Sources, and Connection Strings in Reporting Services
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
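To illustrate the query-level option the reply mentions, DB2 SQL accepts an isolation clause on individual statements; a sketch (the schema and table names here are made up for illustration):

```sql
-- Run this one statement at Uncommitted Read, regardless of the package default
SELECT ORDER_ID, STATUS
FROM MYSCHEMA.ORDERS
WITH UR;
```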

  • SQL Query help ( On connect By level clause)

    Hi all,
    I have this query developed with data in with clause.
    With dat As (
      select '@AAA @SSS @DDD' col1 from dual union all
      select '@ZZZ @XXX @TTT @RRR @ZZA' col1 from dual
    )
    Select regexp_substr( col1 , '[^@][A-Z]+', 1, level ) Show
    from dat
    connect by level <= regexp_count(col1, '@');
    Current output :-
    SHOW
    AAA
    SSS
    DDD
    RRR
    ZZA
    TTT
    RRR
    ZZA
    XXX
    DDD
    RRR
    SHOW
    ZZA
    TTT
    RRR
    ZZA
    . . .
    The 1st row comes out fine, but the next rows' data is getting duplicated, and the total record count = 30. I tried a few things but they didn't work.
    Expected output :-
    SHOW
    AAA
    SSS
    DDD
    ZZZ
    XXX
    TTT
    RRR
    ZZA
    I need a change in my query and I am not able to find it. Can anybody build on it, or provide a different solution?
    Thanks!
    Ashutosh

    Hi,
    When you use something like "CONNECT BY LEVEL <= x", then at least one of the following must be true:
    (a) the table has no more than 1 row
    (b) there are other conditions in the CONNECT BY clause, or
    (c) you know what you are doing.
    To help see why, run this query
    SELECT SYS_CONNECT_BY_PATH (dname, '/') AS path
    ,      LEVEL
    FROM   scott.dept
    CONNECT BY LEVEL <= 3;
    and study the results:
    PATH                                     LEVEL
    /ACCOUNTING                                  1
    /ACCOUNTING/ACCOUNTING                       2
    /ACCOUNTING/ACCOUNTING/ACCOUNTING            3
    /ACCOUNTING/ACCOUNTING/RESEARCH              3
    /ACCOUNTING/ACCOUNTING/SALES                 3
    /ACCOUNTING/ACCOUNTING/OPERATIONS            3
    /ACCOUNTING/RESEARCH                         2
    /ACCOUNTING/RESEARCH/ACCOUNTING              3
    /ACCOUNTING/RESEARCH/RESEARCH                3
    /ACCOUNTING/RESEARCH/SALES                   3
    /ACCOUNTING/RESEARCH/OPERATIONS              3
    /ACCOUNTING/SALES                            2
    /ACCOUNTING/SALES/ACCOUNTING                 3
    84 rows selected.
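A common way to get the expected eight rows is to add PRIOR conditions so that each row only connects to itself; a sketch of that well-known idiom (verify the output against your Oracle version and data):

```sql
WITH dat AS (
  SELECT '@AAA @SSS @DDD' col1 FROM dual UNION ALL
  SELECT '@ZZZ @XXX @TTT @RRR @ZZA' col1 FROM dual
)
SELECT regexp_substr( col1, '[^@][A-Z]+', 1, LEVEL ) show
FROM dat
CONNECT BY LEVEL <= regexp_count( col1, '@' )
   AND PRIOR col1 = col1               -- stay within the same source row
   AND PRIOR sys_guid() IS NOT NULL;   -- make each parent node unique, avoiding cycles
```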

  • Connect by level query is taking too long time to run

    Hello,
    I have a query that returns the quarters (YYYYQ) between a begin date and an end date for a specific id, built with a connect by level clause, but the query runs too long. I have used explain plan to see what the query is doing, but there are no silly things to see, just a full table scan, with low costs.
    This is the query:
    select to_char(add_months( cpj.crpj_start_date,3*(level - 1)),'YYYYQ') as sales_quarter
    , cpj.crpj_id as crpj_id
    from mv_gen_cra_projects cpj
    where cpj.crpj_start_date >= to_date('01/01/2009','mm/dd/yyyy')
    and cpj.crpj_start_date <= cpj.crpj_end_date
    and cpj.crpj_routing_type = 'A'
    and ( cpj.crpj_multi_artist_ind = 'N'
    or cpj.crpj_multi_artist_ind is null)
    connect by level <= 1 + ceil(months_between(cpj.crpj_end_date,cpj.crpj_start_date)/3);
    The result has to look like this:
    SALES_QUARTER CRPJ_ID
    20091 100
    20092 100
    20093 100
    20094 100
    20101 100
    20102 100
    Can anyone help me out with this?

    "but no silly things to see, just a full table scan, with low costs."
    Well, maybe an index scan would be faster?
    However:
    You will need to provide us some more details, like:
    - database version (the result of: SQL> select * from v$version;)
    - post the explain plan output (put the tag before and after it, so indentation and formatting are maintained, see the [FAQ|http://forums.oracle.com/forums/help.jspa] for more explanation regarding tags )
    - what are your optimizer settings (the result of: SQL> show parameter optimizer)
    - if applicable: are your table statistics up to date?
    - mv_gen_cra_projects  is a materialized view perhaps?
    Edited by: hoek on Jan 26, 2010 10:50 AM
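One possible cause, echoing the CONNECT BY LEVEL caveat raised in the other replies in this digest: when CONNECT BY LEVEL <= x runs against a multi-row table with no other conditions, rows cross-connect and the row count explodes. A hedged sketch of the usual adjustment (assuming crpj_id uniquely identifies a row; verify the plan and results on your data):

```sql
select to_char(add_months(cpj.crpj_start_date, 3 * (level - 1)), 'YYYYQ') as sales_quarter
, cpj.crpj_id as crpj_id
from mv_gen_cra_projects cpj
where cpj.crpj_start_date >= to_date('01/01/2009','mm/dd/yyyy')
and cpj.crpj_start_date <= cpj.crpj_end_date
and cpj.crpj_routing_type = 'A'
and ( cpj.crpj_multi_artist_ind = 'N' or cpj.crpj_multi_artist_ind is null )
connect by level <= 1 + ceil(months_between(cpj.crpj_end_date, cpj.crpj_start_date) / 3)
   and prior cpj.crpj_id = cpj.crpj_id     -- stay within the same project row
   and prior sys_guid() is not null;       -- make each parent node unique
```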

  • Restore default isolation level fails with connection in pool

    Hi,
    I am developing an application that needs to set the transaction isolation to SERIALIZABLE for one transaction. Setting the transaction isolation is not the problem. After this transaction is committed or rolled back, I set the isolation level back to the default I saved before.
    The code executes and throws no exception. The connection I used is released into the pool. The next time I get this connection from the pool, the isolation level is still SERIALIZABLE. This is not what I wanted to achieve.
    It has to be possible to change the isolation level per transaction, doesn't it?
    Here is the code that I use. The ConnectionManager gets the connection from a connection pool I configured in the JDBC connector service. Except for this issue, every other operation works fine.
    ConnectionManager connectionManager = new ConnectionManager();
    Connection con = null;
    int transactionIsolationLevel = 0;
    Queue queue = null;
    List list = null;
    try {
        con = connectionManager.getConnection();
        transactionIsolationLevel = con.getTransactionIsolation();
        if (logger.isInfoEnabled())
            logger.info(LOGLOC + "ISOLATION_LEVEL default: " + transactionIsolationLevel);
        // commented out for RE
        con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        con.setAutoCommit(false);
        QueueManager queueManager = new QueueManager();
        list = queueManager.GetQueueEntriesBySizeGroups(con, small, medium, large, serverNode);
        con.commit();
    } catch (ClassNotFoundException cnfe) {
        logger.error(LOGLOC + "Exception setting up transaction context for queue service!", cnfe);
        handleExceptions(queue, cnfe);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } catch (SQLException sqle) {
        logger.error(LOGLOC + "Exception setting up transaction context for queue service!", sqle);
        handleExceptions(queue, sqle);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } catch (QueueManagerException qme) {
        logger.error(LOGLOC + "Exception executing queue manager!", qme);
        handleExceptions(queue, qme);
        try {
            con.rollback();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception rolling back transaction!", e);
        }
    } finally {
        try {
            con.setAutoCommit(true);
            if (logger.isInfoEnabled())
                logger.info(LOGLOC + "ISOLATION_LEVEL before setting default: " + con.getTransactionIsolation() + " now setting: " + transactionIsolationLevel);
            // commented out for RE
            con.setTransactionIsolation(transactionIsolationLevel);
            con.close();
        } catch (SQLException e) {
            logger.error(LOGLOC + "Exception setting up transaction context for queue service!", e);
        }
    }
    The datasource is a simple JDBC 1.x Oracle DataSource with no special settings.
    In a remote debugging session I saw that the wrapped Connection from the datasource sets the txLevel successfully, but the underlying T4Connection does not get this isolation level. Could this be a bug?
    Any hints, solutions?


  • Performance issue with connect by level query

    Hi I have a problem with connect by level in oracle.
    My table is :
    J_USER_CALENDAR
    USER_NAME     FROM_DATE     TO_DATE     COMMENTS
    Uma Shankar     2-Nov-09     5-Nov-09     Comment1
    Veera     11-Nov-09     13-Nov-09     Comment2
    Uma Shankar     15-Dec-09     17-Dec-09     Commnet3
    Vinod     20-Oct-09     21-Oct-09     Comments4
    The above table is the user leave calendar.
    Now I need to display the users who are on leave between 01-Nov-2009 to 30-Nov-2009
    The output should look like:
    USER_NAME     FROM_DATE     COMMENTS
    Uma Shankar     2-Nov-09     Comment1
    Uma Shankar     3-Nov-09     Comment1
    Uma Shankar     4-Nov-09     Comment1
    Uma Shankar     5-Nov-09     Comment1
    Veera     11-Nov-09     Comment2
    Veera     12-Nov-09     Comment2
    Veera     13-Nov-09     Comment2
    For this I have tried with following query , but it is taking too long time to execute.
    select FROM_DATE, user_name, comments
    from (SELECT distinct FROM_DATE, user_name, comments
          FROM (SELECT (LEVEL) + FROM_DATE - 1 FROM_DATE, TO_DATE,
                       FIRST_NAME || ' ' || LAST_NAME user_name, COMMENTS
                FROM J_USER_CALENDAR
                WHERE J_USER_CALENDAR.IS_DELETED = 0
                CONNECT BY LEVEL <= TO_DATE - FROM_DATE + 1) a)
    where (FROM_DATE = '01-Nov-2009' or FROM_DATE = '30-Nov-2009'
           or FROM_DATE between '01-Nov-2009' and '30-Nov-2009')
    order by from_Date, lower(user_name)
    Please help me.
    Thanks in advance.
    Regards,
    Phanikanth

    I have not attempted to analyze your SQL statement.
    Here is a test set up:
    CREATE TABLE T1(
      USERNAME VARCHAR2(30),
      FROM_DATE DATE,
      TO_DATE DATE,
      COMMENTS VARCHAR2(100));
    INSERT INTO T1 VALUES ('Uma Shankar', '02-Nov-09','05-Nov-09','Comment1');
    INSERT INTO T1 VALUES ('Veera','11-Nov-09','13-Nov-09','Comment2');
    INSERT INTO T1 VALUES ('Uma Shankar','15-Dec-09','17-Dec-09','Commnet3');
    INSERT INTO T1 VALUES ('Vinod','20-Oct-09','21-Oct-09','Comments4');
    INSERT INTO T1 VALUES ('Mo','20-Oct-09','05-NOV-09','Comments4');
    COMMIT;
    Note that I included one additional row, where the person starts their vacation in the previous month and ends it in the month of November.
    You could approach the problem like this:
    Assume that you would like to list all of the days of a particular month:
    SELECT
      TO_DATE('01-NOV-2009','DD-MON-YYYY')+(ROWNUM-1) MONTH_DAY
    FROM
      DUAL
    CONNECT BY
      LEVEL<=ADD_MONTHS(TO_DATE('01-NOV-2009','DD-MON-YYYY'),1)-TO_DATE('01-NOV-2009','DD-MON-YYYY');
    Note that the above attempts to calculate the number of days in the month of November - if it is known that the month has a particular number of days, 30 for instance, you could rewrite the CONNECT BY clause like this:
    CONNECT BY
      LEVEL<=30
    Now, we need to pick up those rows of interest from the table:
    SELECT *
    FROM
      T1 T
    WHERE
      (T.FROM_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY')
        OR T.TO_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY'));
    USERNAME        FROM_DATE TO_DATE   COMMENTS
    Uma Shankar     02-NOV-09 05-NOV-09 Comment1
    Veera           11-NOV-09 13-NOV-09 Comment2
    Mo              20-OCT-09 05-NOV-09 Comments4
    If we then join the two resultsets, we have the following query:
    SELECT T.*, V.MONTH_DAY
    FROM
      T1 T,
      (SELECT
        TO_DATE('01-NOV-2009','DD-MON-YYYY')+(ROWNUM-1) MONTH_DAY
      FROM
        DUAL
      CONNECT BY
        LEVEL<=ADD_MONTHS(TO_DATE('01-NOV-2009','DD-MON-YYYY'),1)-TO_DATE('01-NOV-2009','DD-MON-YYYY')) V
    WHERE
      (T.FROM_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY')
        OR T.TO_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY'))
      AND V.MONTH_DAY BETWEEN T.FROM_DATE AND T.TO_DATE
    ORDER BY
      USERNAME,
      MONTH_DAY;
    USERNAME        FROM_DATE TO_DATE   COMMENTS   MONTH_DAY
    Mo              20-OCT-09 05-NOV-09 Comments4  01-NOV-09
    Mo              20-OCT-09 05-NOV-09 Comments4  02-NOV-09
    Mo              20-OCT-09 05-NOV-09 Comments4  03-NOV-09
    Mo              20-OCT-09 05-NOV-09 Comments4  04-NOV-09
    Mo              20-OCT-09 05-NOV-09 Comments4  05-NOV-09
    Uma Shankar     02-NOV-09 05-NOV-09 Comment1   02-NOV-09
    Uma Shankar     02-NOV-09 05-NOV-09 Comment1   03-NOV-09
    Uma Shankar     02-NOV-09 05-NOV-09 Comment1   04-NOV-09
    Uma Shankar     02-NOV-09 05-NOV-09 Comment1   05-NOV-09
    Veera           11-NOV-09 13-NOV-09 Comment2   11-NOV-09
    Veera           11-NOV-09 13-NOV-09 Comment2   12-NOV-09
    Veera           11-NOV-09 13-NOV-09 Comment2   13-NOV-09
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How do you determine a process or sessions isolation level.

    We are using COM+ components to issue database statements against an Oracle 8i database. COM+ has a property that allows you to set the isolation level. Is there any tool or query that would allow me to verify the isolation level in use by a session or process? I want to verify this property is actually affecting the connection to the DB.
    Thanks,
    Sam

    FLAG is just one of those columns that Oracle uses. It isn't documented but, as far as I know (which isn't very far), the only use for it is to record the isolation level for the transaction.
    I didn't mention it because I didn't think it helped you, for this reason: we don't get a record in the v$transaction view until the transaction has already started. At which point it is too late to change the ISOLATION_LEVEL for the transaction.
    Although I suppose you could do this:
    DECLARE
       ln NUMBER;
    BEGIN
       UPDATE dummy_table SET col1 = col1;
       -- remember V$TRANSACTION shows all txns
       SELECT count(1) INTO ln
       FROM   v$transaction t, v$session s
       WHERE  bitand(t.flag, 268435456) <> 0
       AND    s.taddr = t.addr
       AND    s.audsid = sys_context('userenv', 'sessionid');
       IF ln = 0
       THEN
          ROLLBACK;
          SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
       END IF;
       --  do_whatever
       COMMIT;
    END;
    Cheers, APC
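Building on APC's flag check, a sketch of a query that lists the isolation level of every session that currently has an active transaction (the bit tested is 268435456 = 2^28, as above; sessions with no open transaction won't appear, and querying V$ views requires the appropriate privileges):

```sql
SELECT s.sid,
       s.username,
       CASE WHEN bitand(t.flag, 268435456) <> 0
            THEN 'SERIALIZABLE'
            ELSE 'READ COMMITTED'
       END AS isolation_level
FROM   v$transaction t,
       v$session     s
WHERE  s.taddr = t.addr;
```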

  • Bug in Oracle's handling of transaction isolation levels?

    Hello,
    I think there is a bug in Oracle 9i database related to serializable transaction isolation level.
    Here is the information about the server:
    Operating System:     Microsoft Windows 2000 Server Version 5.0.2195 Service Pack 2 Build 2195
    System type:          Single CPU x86 Family 6 Model 8 Stepping 10 GenuineIntel ~866 MHz
    BIOS-Version:          Award Medallion BIOS v6.0
    Locale:               German
    Here is my information about the client computer:
    Operaing system:     Microsoft Windows XP
    System type:          IBM ThinkPad
    Language for DB access: Java
    Database information:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    The database has been set up using the default settings and nothing has been changed.
    To reproduce the bug, follow these steps:
    1. Create a user in 9i database called 'kaon' with password 'kaon'
    2. Using SQL Worksheet create the following table:
    CREATE TABLE OIModel (
      modelID int NOT NULL,
      logicalURI varchar(255) NOT NULL,
      CONSTRAINT pk_OIModel PRIMARY KEY (modelID),
      CONSTRAINT logicalURI_OIModel UNIQUE (logicalURI)
    );
    3. Run the following program:
    package test;
    import java.sql.*;
    public class Test {
        public static void main(String[] args) throws Exception {
            java.util.Locale.setDefault(java.util.Locale.US);
            Class.forName("oracle.jdbc.OracleDriver");
            Connection connection = DriverManager.getConnection("jdbc:oracle:thin:@schlange:1521:ORCL", "kaon", "kaon");
            DatabaseMetaData dmd = connection.getMetaData();
            System.out.println("Product version:");
            System.out.println(dmd.getDatabaseProductVersion());
            System.out.println();
            connection.setAutoCommit(false);
            connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            int batches = 0;
            int counter = 2000;
            for (int outer = 0; outer < 50; outer++) {
                for (int i = 0; i < 200; i++) {
                    executeUpdate(connection, "INSERT INTO OIModel (modelID,logicalURI) VALUES (" + counter + ",'start" + counter + "')");
                    executeUpdate(connection, "UPDATE OIModel SET logicalURI='next" + counter + "' WHERE modelID=" + counter);
                    counter++;
                }
                connection.commit();
                System.out.println("Batch " + batches + " done");
                batches++;
            }
        }
        protected static void executeUpdate(Connection conn, String sql) throws Exception {
            Statement s = conn.createStatement();
            try {
                int result = s.executeUpdate(sql);
                if (result != 1)
                    throw new Exception("Should update one row, but updated " + result + " rows, query is " + sql);
            } finally {
                s.close();
            }
        }
    }
    The program prints the following output:
    Product version:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Batch 0 done
    Batch 1 done
    java.lang.Exception: Should update one row, but updated 0 rows, query is UPDATE OIModel SET logicalURI='next2571' WHERE modelID=2571
         at test.Test.executeUpdate(Test.java:35)
         at test.Test.main(Test.java:22)
    That is, after several iterations, the executeUpdate() method returns 0, rather than 1. This is clearly an error.
    4. Leave the database as is. Replace the line
    int counter=2000;
    with line
    int counter=4000;
    and restart the program. The following output is generated:
    Product version:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Batch 0 done
    Batch 1 done
    java.sql.SQLException: ORA-08177: can't serialize access for this transaction
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
         at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
         at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
         at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1891)
         at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
         at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
         at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
         at oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:796)
         at test.Test.executeUpdate(Test.java:33)
         at test.Test.main(Test.java:22)
    This is clearly an error - only one transaction is active at a time, so there is no need to serialize transactions.
    5. You can restart the program as many times as you wish (by changing the initial counter value first). The same error (can't serialize access for this transaction) will be generated.
    6. The error doesn't occur if the transaction isolation level isn't changed.
    7. The error doesn't occur if the UPDATE statement is commented out.
    Sincerely yours
         Boris Motik

    I have a similar problem.
    I'm using Oracle and the serializable isolation level.
    A transaction inserts 4000 objects and then updates about 1000 of them.
    The transaction sees the inserted objects but can't update them ("row not found" or "can't serialize access for this transaction" is thrown).
    Out of 3 tries for this transaction, 1 succeeds and 2 fail with one of the above errors.
    No other transactions run concurrently.
    In read committed isolation the error doesn't arise.
    I'm using plain JDBC.
    A similar, or even much bigger, serializable transaction works perfectly on the same database as a PL/SQL procedure.
    I've tried the oci and thin (Oracle) drivers and the oranxo demo (i-net) driver, and this problem arises with all of these drivers.
    This problem has confused me so much :(.
    Maybe one of the Oracle users or developers knows the cause of this strange behaviour.
    Thanks for all answers.

  • Setting isolation level fails

    Hi,
    I'm using Kodo 3.0.0 on Oracle 8.1.7.
    I tried to define the isolation level in kodo.properties, e.g.:
    kodo.jdbc.TransactionIsolation: serializable
    Unfortunately, Oracle throws an exception which says that "SET TRANSACTION" has to be the first statement called within a transaction. I get this exception on almost every db access.
    java.sql.SQLException: ORA-01453: SET TRANSACTION muss erste Anweisung der Transaktion sein
        at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:42)
        at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:24)
        at kodo.jdbc.schema.LazySchemaFactory.findTable(LazySchemaFactory.java:150)
        at kodo.jdbc.meta.VerticalClassMapping.fromMappingInfo(VerticalClassMapping.java:135)
        at kodo.jdbc.meta.RuntimeMappingProvider.getMapping(RuntimeMappingProvider.java:56)
        at kodo.jdbc.meta.MappingRepository.getMappingInternal(MappingRepository.java:342)
        at kodo.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:297)
        at kodo.jdbc.meta.MappingRepository.getMappingInternal(MappingRepository.java:325)
        at kodo.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:297)
        at kodo.jdbc.meta.MappingRepository.getMappings(MappingRepository.java:272)
        at kodo.jdbc.meta.MappingRepository.getMetaDatas(MappingRepository.java:256)
        at kodo.query.AbstractQuery.internalCompile(AbstractQuery.java:538)
        at kodo.query.AbstractQuery.compile(AbstractQuery.java:502)
        at kodo.datacache.CacheAwareQuery.compile(CacheAwareQuery.java:265)
    -- Wolfgang

    Marc,
    Here you go...
    kodo.util.FatalDataStoreException: ORA-01453: SET TRANSACTION must be first statement of transaction
         at kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:897)
         at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:69)
         at kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:566)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.initTestModel(AbstractConcurrentAction.java:290)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction$InitRunnable.run(AbstractConcurrentAction.java:212)
         at edu.sjsu.recon.util.ConcurrencyUtilities.executeSynchronized(ConcurrencyUtilities.java:20)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.setup(AbstractConcurrentAction.java:75)
         at edu.sjsu.recon.execution.ServerExecutor.beforeExecute(ServerExecutor.java:27)
         at edu.sjsu.recon.execution.AbstractExecutor.execute(AbstractExecutor.java:43)
         at edu.sjsu.recon.execution.DefaultExecutionCoordinator.executeAction(DefaultExecutionCoordinator.java:25)
         at edu.sjsu.recon.server.handler.ExecutionRequestHandler.handleRequest(ExecutionRequestHandler.java:63)
         at edu.sjsu.recon.server.RequestProcessor.run(RequestProcessor.java:90)
    NestedThrowablesStackTrace:
    kodo.util.DataStoreException: ORA-01453: SET TRANSACTION must be first statement of transaction
         at kodo.jdbc.sql.DBDictionary.newDataStoreException(DBDictionary.java:3004)
         at kodo.jdbc.sql.SQLExceptions.getDataStore(SQLExceptions.java:77)
         at kodo.jdbc.sql.SQLExceptions.getDataStore(SQLExceptions.java:63)
         at kodo.jdbc.sql.SQLExceptions.getDataStore(SQLExceptions.java:43)
         at kodo.jdbc.runtime.JDBCStoreManager.connect(JDBCStoreManager.java:871)
         at kodo.jdbc.runtime.JDBCStoreManager.retainConnection(JDBCStoreManager.java:189)
         at kodo.jdbc.runtime.JDBCStoreManager.begin(JDBCStoreManager.java:114)
         at kodo.runtime.DelegatingStoreManager.begin(DelegatingStoreManager.java:95)
         at kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:1004)
         at kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:885)
         at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:69)
         at kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:566)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.initTestModel(AbstractConcurrentAction.java:290)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction$InitRunnable.run(AbstractConcurrentAction.java:212)
         at edu.sjsu.recon.util.ConcurrencyUtilities.executeSynchronized(ConcurrencyUtilities.java:20)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.setup(AbstractConcurrentAction.java:75)
         at edu.sjsu.recon.execution.ServerExecutor.beforeExecute(ServerExecutor.java:27)
         at edu.sjsu.recon.execution.AbstractExecutor.execute(AbstractExecutor.java:43)
         at edu.sjsu.recon.execution.DefaultExecutionCoordinator.executeAction(DefaultExecutionCoordinator.java:25)
         at edu.sjsu.recon.server.handler.ExecutionRequestHandler.handleRequest(ExecutionRequestHandler.java:63)
         at edu.sjsu.recon.server.RequestProcessor.run(RequestProcessor.java:90)
    NestedThrowablesStackTrace:
    java.sql.SQLException: ORA-01453: SET TRANSACTION must be first statement of transaction
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:305)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:623)
         at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:112)
         at oracle.jdbc.driver.T4CStatement.execute_for_rows(T4CStatement.java:474)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1028)
         at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1516)
         at oracle.jdbc.driver.PhysicalConnection.setTransactionIsolation(PhysicalConnection.java:1412)
         at com.solarmetric.jdbc.DelegatingConnection.setTransactionIsolation(DelegatingConnection.java:266)
         at com.solarmetric.jdbc.DelegatingConnection.setTransactionIsolation(DelegatingConnection.java:266)
         at com.solarmetric.jdbc.DelegatingConnection.setTransactionIsolation(DelegatingConnection.java:266)
         at com.solarmetric.jdbc.DelegatingConnection.setTransactionIsolation(DelegatingConnection.java:266)
         at com.solarmetric.jdbc.ConfiguringConnectionDecorator.decorate(ConfiguringConnectionDecorator.java:93)
         at com.solarmetric.jdbc.DecoratingDataSource.decorate(DecoratingDataSource.java:90)
         at com.solarmetric.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:82)
         at com.solarmetric.jdbc.DelegatingDataSource.getConnection(DelegatingDataSource.java:131)
         at kodo.jdbc.schema.DataSourceFactory$DefaultsDataSource.getConnection(DataSourceFactory.java:305)
         at kodo.jdbc.runtime.JDBCStoreManager.connectInternal(JDBCStoreManager.java:887)
         at kodo.jdbc.runtime.JDBCStoreManager.connect(JDBCStoreManager.java:865)
         at kodo.jdbc.runtime.JDBCStoreManager.retainConnection(JDBCStoreManager.java:189)
         at kodo.jdbc.runtime.JDBCStoreManager.begin(JDBCStoreManager.java:114)
         at kodo.runtime.DelegatingStoreManager.begin(DelegatingStoreManager.java:95)
         at kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:1004)
         at kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:885)
         at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:69)
         at kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:566)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.initTestModel(AbstractConcurrentAction.java:290)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction$InitRunnable.run(AbstractConcurrentAction.java:212)
         at edu.sjsu.recon.util.ConcurrencyUtilities.executeSynchronized(ConcurrencyUtilities.java:20)
         at edu.sjsu.recon.contribution.action.jdo.v10.kodo.v32.oracle.v101.simple.concurrency.AbstractConcurrentAction.setup(AbstractConcurrentAction.java:75)
         at edu.sjsu.recon.execution.ServerExecutor.beforeExecute(ServerExecutor.java:27)
         at edu.sjsu.recon.execution.AbstractExecutor.execute(AbstractExecutor.java:43)
         at edu.sjsu.recon.execution.DefaultExecutionCoordinator.executeAction(DefaultExecutionCoordinator.java:25)
         at edu.sjsu.recon.server.handler.ExecutionRequestHandler.handleRequest(ExecutionRequestHandler.java:63)
         at edu.sjsu.recon.server.RequestProcessor.run(RequestProcessor.java:90)
    Marc Prud'hommeaux wrote:
    Cleo-
    Can you post the complete stack (including all the nested stack traces)?
    In article <[email protected]>, Cleo wrote:
    Marc,
    Here is the stack:
    ORA-01453: SET TRANSACTION must be first statement of transaction
    kodo.util.FatalDataStoreException
         at
    kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:897)
         at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:69)
         at
    kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:566)
    This is the code being executed:
    Transaction initTransaction = initPersistenceManager.currentTransaction();
    initTransaction.begin();
    initPersistenceManager.makePersistentAll(model);
    initTransaction.commit(); //EXCEPTION HERE
    initPersistenceManager.close();
    thx
    Marc Prud'hommeaux wrote:
    Cleo-
    Can you post the complete stack trace from the exception? I expect it is
    different from the one posted previously (which was with a much earlier
    version of Kodo).
    In article <[email protected]>, Cleo wrote:
    Has anybody figured out how to solve this?
    I am having the same problem with:
    KODO 3.2
    Oracle JDBC Dirver 10.1.0.3
    thx
    PS: (I am on a deadline for the end of this week)
    Stephen Kim wrote:
    First I would suggest using Kodo 3.0.1. Second I would suggest trying
    to use 9.0.1 drivers which work very well with 8.1.7.
    Wolfgang Hutya wrote:
    Hi,
    I'm using Kodo 3.0.0 on Oracle 8.1.7.
    I tried to define the isolation level in kodo.properties,
    e.g.: kodo.jdbc.TransactionIsolation: serializable
    Unfortunately Oracle throws an exception which says that "SET TRANSACTION" has to be the first statement called within a transaction. I get this exception on almost every db access.
    java.sql.SQLException: ORA-01453: SET TRANSACTION muss erste Anweisung der Transaktion sein ("SET TRANSACTION must be first statement of transaction")
    at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:42)
    at kodo.jdbc.sql.SQLExceptions.getFatalDataStore(SQLExceptions.java:24)
    at kodo.jdbc.schema.LazySchemaFactory.findTable(LazySchemaFactory.java:150)
    at kodo.jdbc.meta.VerticalClassMapping.fromMappingInfo(VerticalClassMapping.java:135)
    at kodo.jdbc.meta.RuntimeMappingProvider.getMapping(RuntimeMappingProvider.java:56)
    at kodo.jdbc.meta.MappingRepository.getMappingInternal(MappingRepository.java:342)
    at kodo.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:297)
    at kodo.jdbc.meta.MappingRepository.getMappingInternal(MappingRepository.java:325)
    at kodo.jdbc.meta.MappingRepository.getMapping(MappingRepository.java:297)
    at kodo.jdbc.meta.MappingRepository.getMappings(MappingRepository.java:272)
    at kodo.jdbc.meta.MappingRepository.getMetaDatas(MappingRepository.java:256)
    at kodo.query.AbstractQuery.internalCompile(AbstractQuery.java:538)
    at kodo.query.AbstractQuery.compile(AbstractQuery.java:502)
    at kodo.datacache.CacheAwareQuery.compile(CacheAwareQuery.java:265)
    -- Wolfgang
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux
    SolarMetric Inc.
    Marc Prud'hommeaux
    SolarMetric Inc.

  • SQLServer Isolation Level Problem?

    Since the MSSQLServer driver has trouble when returning explicit cursors, and since the complexities of one of our procedures require the use of cursors, we have been forced to split a DBMS procedure into two, i.e. we have one method that updates the database and one that retrieves the "return value": a setXXX and then a getXXX method.
    The problem is that we sometimes get an empty resultset from the getXXX method. Note that this behaviour is not due to the server code, as this error is not present if run directly on the SQLServer (i.e., not through the JDBC driver). Testing back and forth suggests that the error is caused by some sort of parallel behaviour or isolation level trouble in the driver, but what exactly? The thing is, if I insert a pause of a few milliseconds between the two calls it works correctly (of course I cannot place a "random" pause there in production code, since the required pause will probably vary depending on the underlying hardware). I have tried setting different TransactionIsolation levels, but get the same result every time. There is only one linear execution of the code as shown below (no multithreading in the Java application).
    What I'm wondering is: are there any "switches" to turn in relation to waiting for a stored procedure to finish execution before the next is called? I would expect this to be implicit, but perhaps the SQLServer runs them in parallel and therefore the second method is called before the first is finished; but then, shouldn't the TransactionIsolationLevel of Serializable hinder this? I would appreciate any help on this one. Sample code is below; the problem is that whereas the while loop should always print the line, it sometimes doesn't, since the result is empty (not null, empty). It isn't the parameter to execute_task either; we get the "missing result" problem with different parameters that all always work when run directly on the SQLServer (via SQL Query Analyzer).
    // Execute the update on the server
    CallableStatement execute_task = connection.prepareCall("{call setXXX(?)}");
    execute_task.setInt(1, 1);
    execute_task.execute();
    // Retrieve the task execution results:
    CallableStatement show_execute_task = connection.prepareCall("{call getXXX}");
    show_execute_task.execute();
    ResultSet show_execute_resultset = show_execute_task.getResultSet();
    while (show_execute_resultset.next()) {
        System.out.println("Result received");
    }

    > Since the MSSQLServer driver has trouble when returning explicit cursors, and since the complexities of one of our procedures require the use of cursors, we have been forced to split a DBMS procedure into two, i.e. we have one method that updates the database and one that retrieves the "return value": a setXXX and then a getXXX method.
    > The problem is that we sometimes get an empty resultset from the getXXX method. Note that this behaviour is not due to the server code, as this error is not present if run directly on the SQLServer (i.e., not through the JDBC driver). Testing back and forth suggests that the error is caused by some sort of parallel behaviour or isolation level trouble in the driver, but what exactly? The thing is, if I insert a pause of a few milliseconds between the two calls it works correctly (of course I cannot place a "random" pause there in production code, since the required pause will probably vary depending on the underlying hardware). I have tried setting different TransactionIsolation levels, but get the same result every time. There is only one linear execution of the code as shown below (no multithreading in the Java application).
    > What I'm wondering is: are there any "switches" to turn in relation to waiting for a stored procedure to finish execution before the next is called? I would expect this to be implicit, but perhaps the SQLServer runs them in parallel and therefore the second method
    this is what i would expect. parallel execution i mean... otherwise you might as well use Access...
    > is called before the first is finished, but then, shouldn't the TransactionIsolationLevel of Serializable hinder this? I would appreciate any help
    i'm not sure... to be perfectly honest, while i understand the different transaction levels on their face level as described in java.sql.Connection, I am not entirely sure WHAT the expected behaviour actually is for each level.
    not to get too OT here, but for example TRANSACTION_REPEATABLE_READ:
    if the first transaction reads row 1 from table A,
    then a second transaction changes row 1 in table A and commits the changes,
    and then the first transaction reads row 1 from table A again... what the heck is supposed to happen? i think it is supposed to read the same values as the first pass, even though committed changes have been made, but i'm not entirely sure.
    > on this one, sample code below, the problem being that whereas the while loop should always print the line, it sometimes doesn't, since the result is empty (not null, empty). It isn't the parameter to execute_task either; we get the "missing result" problem with different parameters that all always work when run directly on the SQLServer (via SQL Query Analyzer).
    > //Execute the update to the server
    > CallableStatement execute_task = connection.prepareCall("{call setXXX(?)}");
    > execute_task.setInt(1, 1);
    > execute_task.execute();
    > //Retrieve the task execution results:
    > CallableStatement show_execute_task = connection.prepareCall("{call getXXX}");
    > show_execute_task.execute();
    > ResultSet show_execute_resultset = show_execute_task.getResultSet();
    > while(show_execute_resultset.next()){
    >     System.out.println("Result received");
    > }
    i don't really follow your code too well here but here is what i would suggest...
    fetch whatever the first procedure returns FIRST.
    this should force your app to wait on this before you execute the second one.
    does that make sense? it seems simple to me but maybe i'm missing something.
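    On the repeatable-read side question raised above: yes, under REPEATABLE_READ a transaction that re-reads row 1 is supposed to see the values from its first read, even if another transaction committed a change in between. Here is a toy, database-free sketch of that snapshot-style behaviour — all class and variable names are illustrative, and a real DBMS enforces this with locks or MVCC rather than a per-transaction cache:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (no real database): under REPEATABLE READ, a transaction
// that re-reads row 1 sees its first-read value even after another
// transaction commits a change. Here we simply cache first reads per
// transaction to mimic that guarantee.
public class RepeatableReadDemo {
    static Map<Integer, String> table = new HashMap<>();

    static class Tx {
        Map<Integer, String> firstReads = new HashMap<>();
        String read(int id) {
            // Return the cached first-read value if present, else read the table.
            return firstReads.computeIfAbsent(id, k -> table.get(k));
        }
    }

    public static void main(String[] args) {
        table.put(1, "AA");
        Tx tx1 = new Tx();
        System.out.println("tx1 first read:  " + tx1.read(1));
        table.put(1, "YY"); // a second transaction updates row 1 and commits
        System.out.println("tx1 second read: " + tx1.read(1)); // still AA
    }
}
```

    Note this only models the read guarantee; it says nothing about whether new rows (phantoms) become visible, which REPEATABLE_READ does not prevent.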

  • Isolation level in OBIEE

    Hi Team,
    In the RPD connection pool properties there are 5 isolation levels we can choose from: Default, Dirty read, Committed read, Repeatable read, and Serializable. In which cases should we use which level? If there is any detailed document or link, please post it.
    Thanks in advance
    Edited by: 799666 on May 16, 2011 6:23 AM

    Hi,
    Committed Read
    * Locks are held while the data is read, to avoid dirty reads. Data can still be changed before the transaction on that connection ends.
    * Allows other transactions to insert new records and to update the records that a transaction is using; those changes are reflected only after commit. In other words, data can be changed before the end of the transaction, resulting in non-repeatable reads.
    Dirty Read
    * Minimal locking: can read uncommitted or dirty data, and values in the data can change during the read process in a transaction. Least restrictive of all types.
    * With this option it is possible to read uncommitted data and to have rows appear and disappear before the end of the transaction.
    Repeatable Read
    * Places locks on all data used in a query so that nobody can update that data. However, new rows can be inserted by other users and will be visible in later reads within the current transaction.
    * Allows inserting new records, and places locks on all the data used in the query to prevent anybody else from updating it.
    Serializable
    * Places a range lock on the data set, preventing other users from inserting or updating rows in the data set until the transaction is complete. Most restrictive of all.
    * Locks the data set and does not allow inserts or updates to it until the transaction is complete.
    Hope this helps.
    Assign points and close the thread if answered.
    Cheers,
    Aravind
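    For cross-reference, four of these five OBIEE levels line up with the standard JDBC isolation constants on java.sql.Connection ("Default" simply defers to the database's own default). This snippet just prints the JDBC constant values:

```java
import java.sql.Connection;

// Print the standard JDBC transaction-isolation constants that the
// OBIEE connection-pool levels roughly correspond to.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("TRANSACTION_NONE=" + Connection.TRANSACTION_NONE);
        System.out.println("TRANSACTION_READ_UNCOMMITTED=" + Connection.TRANSACTION_READ_UNCOMMITTED); // "Dirty read"
        System.out.println("TRANSACTION_READ_COMMITTED=" + Connection.TRANSACTION_READ_COMMITTED);     // "Committed read"
        System.out.println("TRANSACTION_REPEATABLE_READ=" + Connection.TRANSACTION_REPEATABLE_READ);   // "Repeatable read"
        System.out.println("TRANSACTION_SERIALIZABLE=" + Connection.TRANSACTION_SERIALIZABLE);         // "Serializable"
    }
}
```

    These are the values you would pass to Connection.setTransactionIsolation() when setting the level on a JDBC connection directly.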

  • CF10 Datasources and Isolation Level

    I've been looking all over for a solution to this and have been unable to find one.  In CF9 running on JRun, we are able to set the default isolation level on a datasource so that any time it is used it defaults to dirty read; if a record is in a locked state, it still returns the current data.  However, in CF10 with Tomcat, I cannot find any way to configure a datasource through Tomcat, or to set the default isolation level on a datasource created in the CF10 administration panel.  I know we could surround every single query with a <cftransaction> tag that sets the isolation level, but that is unrealistic as this is a very large web service with thousands of queries.
    Can anyone help out with this?  Thanks!

    Hello
    You should be able to see your inserted row in the same session
    Session 1:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> SELECT *  FROM demo;
            ID
            11
             1
             2
             3
             4
             5
             8
             9
            10
    9 rows selected.
    SQL>
    SQL> INSERT INTO demo  VALUES (11);
    1 row created.
    SQL>
    SQL> SELECT *   FROM demo;
            ID
            11
             1
             2
             3
             4
             5
             8
             9
            10
            11
    10 rows selected.
    Session 2: a different session, without committing the insert from the session above
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    SQL> select * from demo;
            ID
            11
             1
             2
             3
             4
             5
             8
             9
            10
    9 rows selected.
    Regards
    Edited by: OrionNet on Jan 4, 2009 9:58 PM

  • Using Isolation Level Serializable

    I use Visual Basic/Windows Forms to create my project.
    I have my form for amending orders ready.
    The flow is:
    Get the order number
    Open the connection
    Run a SqlCommand to check if the order number exists
    If not, return
    Set isolation level Serializable
    Get response to "Delete full order"
    If response is "Yes":
    trans = connlock.BeginTransaction("Delete_OA")
    qry = "delete from ilprod where iorno1 = " & miorno1.Text
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    trans.Commit()
    MessageBox.Show("OA " & miorno1.Text & "Deleted")
    if response is no:
    I display the order details and items.
    if an item is selected, get response to "Delete item?"
    if response is yes,
    trans = connlock.BeginTransaction("Delete_Item")
    qry = "delete from ilprod where csno = " & mcsno.Text & " and itno = " & mitno.Text
    Dim cmd As SqlCommand
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    miorno1.Text = 0
    MessageBox.Show("Item " & miorno1.Text & "/" & mnitno.Text & "Deleted")
    trans.Commit()
    and continue amendment.
    if response is no: proceed to amend item details etc.
    Finally, save amendments:
    qry = "select opamno from opcsno"
    Dim cmd = New SqlCommand(qry, connlock, trans)
    new_amno.Text = cmd.ExecuteScalar() + 1
    qry = "update opcsno set opamno = opamno + 1"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    qry = "update opmain set "
    qry = qry & "DORNO= " & "'" & DORNOTextBox.Text & "'"
    qry = qry & ",DORDT = " & "'" & CDate(DORDTDateTimePicker.Value) & "'"
    qry = qry & ",SIP1FL = " & SIP1FLTextBox.Text
    qry = qry & ",SIP2FL = " & SIP2FLTextBox.Text
    qry = qry & ",SIP3FL = " & SIP3FLTextBox.Text
    qry = qry & ",STAXFL = " & STAXFLCOMBO.SelectedValue
    qry = qry & ",DESTFL = " & DESTFLCOMBO.Text
    qry = qry & ",MODEFL = " & MODEFLCOMBO.SelectedValue
    qry = qry & ",CARRFL = " & CARRFLCOMBO.SelectedValue
    qry = qry & ",PACKFL = " & PACKFL.SelectedValue
    qry = qry & ",PAYTFL = " & PAYTFLCOMBO.SelectedValue
    qry = qry & ",SCDLFL = 4"
    qry = qry & ",AMNO = " & new_amno.Text
    qry = qry & ",IRDT = " & "'" & CDate(IRDTDateTimePicker.Value) & "'"
    qry = qry & " WHERE csno = " & mcsno.Text & ""
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    qry = " delete from opnarr where csno = " & mcsno.Text
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    If vbTrue Then
    qry = "insert into opnarr (csno,narfl,narr,amno) values ("
    qry = qry & mcsno.Text & ","
    qry = qry & "'A',"
    qry = qry & "'" & PAYTFLCOMBO.Text & "'" & ","
    qry = qry & new_amno.Text & ")"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    End If
    If vbTrue Then
    qry = "insert into opnarr (csno,narfl,narr,amno) values ("
    qry = qry & mcsno.Text & ","
    qry = qry & "'D'" & ","
    qry = qry & "'" & STAXFLCOMBO.Text & "'" & ","
    qry = qry & new_amno.Text & ")"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    End If
    If mmcd = "Z998" Or mmcd = "Z999" Then
    qry = "insert into opnarr (csno,narfl,narr,amno) values ("
    qry = qry & mcsno.Text & ","
    qry = qry & "'X'" & ","
    qry = qry & "'" & mname.Text & "'" & ","
    qry = qry & new_amno.Text & ")"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    End If
    For Each row In DataGridView1.Rows
    If row.selected Then
    qry = "update ilprod set qlty = " & row.cells("qlty").value
    qry = qry & ",siz1 = " & row.cells("siz1").value & ",siz2 = " & row.cells("siz2").value & ",siz3 = " & row.cells("siz3").value & ",sppr = " & "'" & row.cells("sppr").value & "'"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    qry = "update ilprod set pqty = 0, pmtr = 0 where csno = " & mcsno.Text & " and itno = " & mitno.Text & " and pqty < 0"
    ' This means that qnty will not be equal to inqty+pqty in such records
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    qry = "delete from opscdl " & "where csno = " & mcsno.Text & " and itno = " & mitno.Text
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    For Each scdlrow In DataGridView2.Rows
    If scdlrow.cells("csno").value = mcsno.Text And scdlrow.cells("itno").value = mitno.Text And scdlrow.cells("sqnty").value <> 0 Then
    qry = "insert into opscdl (csno,itno,sqnty,srmtr,sdate,stype,amno) values (" & scdlrow.cells("csno").value & "," & scdlrow.cells("itno").value & "," & scdlrow.cells("sqnty").value & "," & scdlrow.cells("srmtr").value & "," & "Convert(date,'" & scdlrow.cells("sdate").value & "')" & "," & "'" & scdlrow.cells("stype").value & "'" & "," & new_amno.Text & ")"
    cmd = New SqlCommand(qry, connlock, trans)
    cmd.ExecuteNonQuery()
    End If
    Next
    End If
    Next
    trans.Commit()
    When I run the form, the order is amended successfully, but I do not think the isolation level is set correctly, as I can access the order from another (new query) connection while the amendment is going on and not yet saved.
    There are a large number of commands between accessing the records from SQL and the above save commands.
    Can someone guide me?
    To repeat: the flow is
    SET ISOLATION LEVEL
    GET ORDER NUMBER FROM SCREEN
    DELETE ORDER?
    DELETE ITEM?
    AMEND OA AND SAVE AMENDMENTS
    I apologise if the above has been unclear.
    Mohan
    MohanSQL

    Hi Mohan,
    As the issue is more related to Visual Basic/Windows Forms programming, I would like to recommend you post the question in the
    Visual Basic forum or
    Windows Forms forum. It is appropriate and more experts will assist you.
    Also, you can check the following articles about starting database transactions with the specified isolation level in Visual Basic or other projects.
    SqlConnection.BeginTransaction Method (IsolationLevel)
    http://msdn.microsoft.com/en-us/library/5ha4240h(v=vs.110).aspx
    ADO.NET Transactions and Concurrency in VB.NET
    http://www.dotnetheaven.com/article/ado.net-transactions-and-concurrency-in-vb.net
    Thanks,
    Lydia Zhang

  • Changing Isolation Level Mid-Transaction

    Hi,
    I have a SS bean which, within a single container managed transaction, makes numerous
    database accesses. Under high load, we start having serious contention issues
    on our MS SQL server database. In order to reduce these issues, I would like
    to reduce my isolation requirements in some of the steps of the transaction.
    To my knowledge, there are two ways to achieve this: a) specify isolation at the
    connection level, or b) use locking hints such as NOLOCK or ROWLOCK in the SQL
    statements. My questions are:
    1) If all db access is done within a single tx, can the isolation level be changed
    back and forth?
    2) Is it best to set the isolation level at the JDBC level or to use the MS SQL
    locking hints?
    Is there any other solution I'm missing?
    Thanks,
    Sebastien

    Galen Boyer wrote:
    On Sun, 28 Mar 2004, [email protected] wrote:
    Galen Boyer wrote:
    On Wed, 24 Mar 2004, [email protected] wrote:
    Oracle's serializable isolation level doesn't offer what most
    customers I've seen expect it to offer. They typically expect
    that a serializable transaction will block any read-data from
    being altered during the transaction, and oracle doesn't do
    that.I haven't implemented WEB systems that employ anything but
    the default concurrency control, because a web transaction is
    usually very long running and therefore holding a connection
    open during its life is unscalable. But, your statement did
    make me curious. I tried a quick test case. IN ONE SQLPLUS
    SESSION: SQL> alter session set isolation_level =
    serializable; SQL> select * from t1; ID FL ---------- -- 1 AA
    2 BB 3 CC NOW, IN ANOTHER SQLPLUS SESSION: SQL> update t1 set
    fld = 'YY' where id = 1; 1 row updated. SQL> commit; Commit
    complete. Now, back to the previous session. SQL> select *
    from t1; ID FL ---------- -- 1 AA 2 BB 3 CC So, your
    statement is incorrect.Hi, and thank you for the diligence to explore. No, actually
    you proved my point. If you did that with SQLServer or Sybase,
    your second session's update would have blocked until you
    committed your first session's transaction. Yes, but this doesn't have anything to do with serializable.
    This is the weak behaviour of those systems that say writers can
    block readers.Weak or strong, depending on the customer point of view. It does guarantee
    that the locking tx can continue, and read the real data, and eventually change
    it, if necessary without fear of blockage by another tx etc.
    In your example, you were able to change and commit the real
    data out from under the first, serializable transaction. The
    reason why your first transaction is still able to 'see the old
    value' after the second tx committed, is not because it's
    really the truth (else why did oracle allow you to commit the
    other session?). What you're seeing in the first transaction's
    repeat read is an obsolete copy of the data that the DBMS
    made when you first read it. Yes, this is true.
    Oracle copied that data at that time into the per-table,
    statically defined space that Tom spoke about. Until you commit
    that first transaction, some other session could drop the whole
    table and you'd never know it.This is incorrect.Thanks. Point taken. It is true that you could have done a complete delete
    of all rows in the table though..., correct?
    That's the fast-and-loose way Oracle implements repeatable-read! My point
    is that almost everyone trying to serialize transactions wants the real
    data not to change.

    Okay, then you have to lock whatever you read, completely. SELECT FOR
    UPDATE will do this for your customers, but serializable won't.

    Is this the standard definition of serializable, or just the customer
    expectation of it? AFAIU, serializable protects you from overriding
    already-committed data.

    The definition of serializable is loose enough to allow Oracle's
    implementation, but non-changing relevant data is a typically understood
    hope for serializable. Serializable transactions typically involve reading
    and writing *only already committed data*. Only DIRTY_READ allows any
    access to pre-committed data. The point is that people assume that a
    serializable transaction will not have any of its data re-committed, i.e.
    altered by some other tx, during the serializable tx.
    Oracle's rationale for allowing your example is the semantic argument that,
    even though your first transaction started first and could continue
    indefinitely as long as it was still reading AA, BB, CC from that table,
    the two transactions, *so far*, could have been serialized.

    I believe they rationalize it by saying that the state of the data at the
    time the transaction started is the state throughout the transaction.

    Yes, but the customer assumes that the data is the data. The customer
    typically has no interest in a copy of the data staying the same
    throughout the transaction.
    I.e., if the second tx had started after your first had committed,
    everything would have been the same.

    This is true! However, depending on what your first tx goes on to do, and
    on what assumptions it makes about the supposedly still-current contents of
    that table, it may either be wrong, or eventually do something that makes
    the two transactions inconsistent, so they couldn't have been serialized.
    It is only at this later point that the first, long-running transaction
    will be told "Oooops. This tx could not be serialized. Please start all
    over again". Other DBMSes completely prevent that from happening. Their
    value is that when you say 'commit', there is almost no possibility of the
    commit failing.

    But this isn't the argument against Oracle. The failure to serialize
    doesn't happen at commit; it happens at the write of already-changed data.
    You don't have to wait until issuing commit, you just have to wait until
    you update the already-changed row. But, yes, that can be longer than you
    might wish it to be.

    True. Unfortunately, typical application logic may do stuff which never
    changes the read data directly, but makes changes that are implicitly valid
    only when the read data is as it was read. Sometimes the logic is
    conditional, so it may never write anything, but may depend on that read
    data staying the same. The issue is that some logic wants truly serialized
    transactions, which block each other on entry to the transaction, and with
    lots of DBMSes the serializable isolation level allows the serialization to
    start with a read. Oracle provides "FOR UPDATE", which can supply this. It
    is just that most people don't know they need it.
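    A minimal sketch of the FOR UPDATE workaround being discussed, reusing the t1 table from the earlier transcript (the exact rows and names are illustrative):

```sql
-- Session 1: read the rows AND lock them, so they cannot change underneath you
SELECT fld FROM t1 WHERE id = 1 FOR UPDATE;

-- Session 2: this UPDATE now blocks until session 1 commits or rolls back,
-- instead of committing new data out from under session 1's snapshot
UPDATE t1 SET fld = 'YY' WHERE id = 1;

-- Session 1: safe to act on what was read; COMMIT releases the locks
COMMIT;
```

    This gives the read-data-stays-put behaviour that migrators expect from serializable, at the cost of explicit locking on every read that matters.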
    With Oracle and serializable, 'you pay your money and take your
    chances'. You don't lose your money, but you may lose a lot of
    time because of the deferred checking of serializable
    guarantees.
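    The deferred check plays out something like this (a sketch assuming the t1 table from the earlier transcript):

```sql
-- Session 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM t1;                      -- sees 1/AA, 2/BB, 3/CC

-- Session 2
UPDATE t1 SET fld = 'YY' WHERE id = 1;
COMMIT;                                -- allowed; session 1 keeps its snapshot

-- Session 1, possibly much later
UPDATE t1 SET fld = 'ZZ' WHERE id = 1;
-- fails with ORA-08177: can't serialize access for this transaction
-- note the failure surfaces only at this write, not at session 1's COMMIT
```

    Everything session 1 did before that failing write is wasted work, which is the "lose a lot of time" complaint above.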
    Other than that, the clunky way that Oracle saves temporary
    transaction-bookkeeping data in statically defined per-table space causes
    odd problems we have to explain, such as: when a complicated query requires
    more of this memory than has been allotted to the table(s), the DBMS will
    throw an exception saying it can't serialize the transaction. This can
    occur even if there is only one user logged into the DBMS.

    This one I thought was probably solved by database settings, so I did a
    quick search, and Tom Kyte was the first link I clicked; he seems to have
    dealt with this issue before: http://tinyurl.com/3xcb7 He writes:
    serializable will give you repeatable read. Make sure you test lots with
    this, playing with the initrans on the objects to avoid the "cannot
    serialize access" errors you will get otherwise (in other databases, you
    will get "deadlocks", in Oracle "cannot serialize access"). I would bet
    that, working with some DBAs, you could have gotten past the issues your
    client was having as you described above.

    Oh, yes, the workaround every time this occurs with another customer is to
    have them bump up the amount of that statically-defined memory.

    Yes, this is what I'm saying.
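    Tom's initrans suggestion amounts to pre-allocating more of that per-block transaction bookkeeping space; something like the following (object names and values are illustrative):

```sql
-- Reserve room for more concurrent transaction entries in each block.
-- INITRANS sets the initial count; on older releases MAXTRANS capped it
-- (MAXTRANS is deprecated and ignored from Oracle 10g onward).
ALTER TABLE t1 INITRANS 10;
ALTER INDEX t1_pk INITRANS 10;

-- Note: this affects only newly formatted blocks; existing blocks keep
-- their old setting until the segment is rebuilt, e.g.:
ALTER TABLE t1 MOVE;
```

    In other words, the fix is exactly the static reconfiguration being criticised here, rather than the DBMS growing the space on demand.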
    This could be avoided if Oracle implemented a dynamically self-adjusting,
    DBMS-wide pool of short-term memory, or used more complex actual
    transaction logging.

    I think you are discounting just how complex their logging is.

    Well, it's not that the logging is too complicated, but rather too simple.
    The logging is just an alternative source of memory to use for
    intra-transaction bookkeeping. I'm just criticising the too-simpleminded,
    fixed-per-table scratch memory for stale-read-data-fake-repeatable-read
    stuff. Clearly they could grow and release memory as needed for this.
    This issue is more just a weakness in Oracle than a deception, except that
    the error message becomes laughable/puzzling: the DBMS "cannot serialize a
    transaction" when there are no other transactions going on.

    Okay, the error message isn't all that great for this situation. I'm sure
    there are all sorts of cases where other DBMSes have laughable error
    messages. Have you submitted a TAR?

    Yes. Long ago! No one was interested in splitting the current message into
    two alternative messages:
    "This transaction has just become unserializable because
    of data changes we allowed some other transaction to do"
    or
    "We ran out of a fixed amount of scratch memory we associated
    with table XYZ during your transaction. There were no other
    related transactions (or maybe even users of the DBMS) at this
    time, so all you need to do to succeed in future is to have
    your DBA reconfigure this scratch memory to accommodate as much
    as we may need for this or any future transaction."
    I am definitely not an Oracle expert. If you can describe for me any
    application design that would benefit from Oracle's implementation of the
    serializable isolation level, I'd be grateful. There may well be such.

    As I've said, I've been doing web apps for a while now, and I'm not sure
    these lend themselves to that isolation level. Most web "transactions"
    involve client think-time, which would mean holding a database connection,
    which would be the death of a web app.

    Oh, absolutely. No transaction, even at default isolation, should involve
    human time if you want a generically scalable system. But even with a
    no-think-time transaction, there are definitely cases where read data are
    required to stay as-is for the duration. Typically, DBMSes ensure this at
    the repeatable-read and serializable isolation levels. For those demanding,
    in-the-know customers, Oracle provides the SELECT "FOR UPDATE" workaround.

    Yep. I concur here. I just think you are singing the praises of other
    DBMSes, because of the way they implement serializable, when their
    implementations are really based on something that Oracle Corp believes is
    a fundamental weakness in their architecture: "writers block readers". In
    Oracle, this never happens, and that is probably one of the biggest reasons
    it is as world-class as it is, but then its behaviour on serializable makes
    you resort to SELECT FOR UPDATE. For me, the trade-off is easily accepted.

    Well, yes and no. Other DBMSes certainly have their share of faults.
    I am not critical only of Oracle. If one starts with Oracle and works from
    the start with its performance architecture, one can certainly do well. I
    am only commenting on the common assumptions of migrators to Oracle from
    many other DBMSes, who typically share assumptions about the transactional
    integrity of read data, and are surprised. If you know Oracle, you can
    (mostly) do everything, and well. It is not fundamentally worse, just
    different from most others. I have had major beefs about the Oracle
    approach. For years, there was a TAR about Oracle's serializable isolation
    level *silently allowing partial transactions to commit*. This had to do
    with tx's that inserted a row, then updated it, all in the one tx. If you
    were just unlucky enough to have the insert cause a page split in the
    index, the DBMS would use the old, pre-split page to find the newly
    inserted row for the update and, needless to say, wouldn't find it, so the
    update merrily updated zero rows! The support guy I talked to once said the
    developers wouldn't fix it "because it'd be hard". The bug request was
    marked internally as "must fix next release", and Oracle updated this
    record for 4 successive releases to set the "next release" field to the
    next release! They then 'fixed' it to throw the 'cannot serialize'
    exception. They have finally really fixed it (bug #440317), in case you can
    access the history. Back in 2000, Tom Kyte reproduced it in 7.3.4, 8.0.3,
    8.0.6 and 8.1.5.
    Now my beef is with their implementation of XA and what data they lock for
    in-doubt transactions (those that have done the prepare but have not yet
    gotten a commit). Oracle's over-simple logging/locking currently locks
    pages instead of rows! This is almost like Sybase's fatal failure of
    page-level locking. There can be logically unrelated data on those pages
    that is blocked indefinitely from other, equally unrelated transactions
    until the in-doubt tx is resolved. Our TAR has gotten a "We would have to
    completely rewrite our locking/logging to fix this, so it's your fault"
    response. They insist that the customer should know to configure their
    tables so there is only one data row per page.

    So, for historical and current reasons, I believe Oracle is absolutely the
    dominant DBMS and a winner in the market, but it got there by being first,
    selling well, and being good enough. I wish there were more real market
    competition and user pressure. Then Oracle and other DBMS vendors would be
    quicker to make the product better.
    Joe

  • Locks and Isolation Levels

    Hello,
    I'm new to the Oracle environment and have just started using PL/SQL on Oracle 8i. Where can I find info on how Oracle locks data and how transaction isolation levels are used? We've still not received our manuals, so please, any replies referring to manuals are not welcome!

    You could move the forward balance into its own table. That way, the SELECT will not lock on the original table. Most of the running-balance schemes that I have seen take a snapshot as of a given date (say, the statement printing date) and record both the timestamp and the amount. Next month, you simply get last month's value and timestamp and query on transactions after that timestamp.
    - Saish
