Is snapshot isolation correct in the following situation?

Hi,
I am using two databases, d1 and d2.
From one database (d1) I get data and transfer it to the other (d2).
The transferring SP (uspTransfer) is in d1 and is called by another SP (startuspTrn) from the master database of the d1 instance,
because startuspTrn is scheduled with sp_procoption
and keeps calling uspTransfer every 10 seconds.
So in my query (uspTransfer) I had to use d1.dbo.table1 and d2.dbo.table1, which makes it a dynamic query.
Please tell me whether snapshot isolation is suitable for this situation.
Note: the tables of d1 are used by the snapshot transaction of uspTransfer for reading, and the tables of d2 are used by that same snapshot transaction for insertion, deletion, etc. Other sessions in d2 only read data,
but other sessions (except uspTransfer) can insert, update, and delete in the tables of d1 (only).
yours sincerely

Can you share details of the application? Thanks.
Kalman Toth Database & OLAP Architect
SQL Server 2014 Design & Programming
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
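
For reference, a minimal sketch of the setup described in the question, assuming both databases sit on the same instance and using d1.dbo.table1 / d2.dbo.table1 as stand-ins for the real objects. The transferred flag column is invented purely for illustration; the actual procedure builds this statement dynamically, which behaves the same way because EXEC() runs in the same session and transaction:

    -- One-time setup: a cross-database snapshot transaction requires the option
    -- to be ON in every database it touches.
    ALTER DATABASE d1 SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE d2 SET ALLOW_SNAPSHOT_ISOLATION ON;
    GO

    -- Hypothetical shape of uspTransfer: read d1 under SNAPSHOT, write into d2
    -- in the same transaction.
    CREATE PROCEDURE dbo.uspTransfer
    AS
    BEGIN
        SET NOCOUNT ON;
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

        BEGIN TRAN;
            -- The read sees a consistent snapshot of d1 and does not block the
            -- other sessions that insert/update/delete d1 while it runs.
            INSERT INTO d2.dbo.table1 (col1, col2)
            SELECT col1, col2
            FROM d1.dbo.table1
            WHERE transferred = 0;   -- invented filter, for illustration only
        COMMIT TRAN;
    END;
    GO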

Similar Messages

  • Changing the isolation level within a session: is it valid? Please see the following situation in which I have used snapshot isolation.

    hi,
    --DBCC FREEPROCCACHE
    --DBCC DROPCLEANBUFFERS
    CREATE TABLE #temp(ID BIGINT NOT NULL)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ 
    BEGIN TRAN 
    INSERT INTO #temp (id) SELECT wid FROM w WHERE ss=1
    UPDATE w SET ss =0 WHERE wid IN (SELECT id FROM #Temp)
    COMMIT TRAN 
    IF (EXISTS(SELECT * FROM  #temp))
    BEGIN
    SELECT 'P'
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT 
    BEGIN TRAN 
    insert into a  ( a,b,c)
    SELECT a , b ,c FROM  w WHERE wid= 104300001201746884  
    COMMIT TRAN
    END
    Q1) Is changing the isolation level in this way correct or not?
    Q2) The reason I changed the isolation level is that the statement below updates rows that other transactions also update, and I wanted to update them as well, so I made the first part repeatable read and the second part snapshot.
    UPDATE w SET ss=0 WHERE wid IN (SELECT id FROM #Temp)
    DROP TABLE #temp
    yours sincerely

    http://blogs.msdn.com/b/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
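
    A small sketch of the isolation-level switching itself, assuming ALLOW_SNAPSHOT_ISOLATION is already ON for the database and reusing the table names from the question. Changing the level between transactions in the same session is allowed; the SET statement only affects transactions started after it, and you just cannot switch into SNAPSHOT in the middle of an already open transaction:

        -- Part 1: lock the rows you are about to flag, so concurrent updaters block.
        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
        BEGIN TRAN;
            INSERT INTO #temp (id) SELECT wid FROM w WHERE ss = 1;
            UPDATE w SET ss = 0 WHERE wid IN (SELECT id FROM #temp);
        COMMIT TRAN;

        -- Part 2: a separate transaction under SNAPSHOT for the read/copy step.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;   -- takes effect for the next transaction
        BEGIN TRAN;
            INSERT INTO a (a, b, c)
            SELECT a, b, c FROM w WHERE wid = 104300001201746884;
        COMMIT TRAN;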

  • Please tell me the difference between the snapshot isolation level of MS SQL and Oracle's isolation levels

    Hi,
    In MS SQL I am using the following setup.
    I have two databases, D1 and D2, and I am using snapshot isolation (ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON) in both databases.
    The situation is as follows.
    1) There is one SP, sp1 (it can be in either database, d1 or d2); it updates d2 from d1.
    2) d2 is used for reading by the web, except for the SP sp1 above.
    3) d1 gets its updates from the web under read committed isolation.
    4) Both databases are on the same instance of MS SQL.
    Q1) I want to know how to implement the same thing in Oracle 11g Express Edition.
    Q2) Is there any difference between the snapshot isolation level of MS SQL and Oracle's?
    Any link would be helpful.
    yours sincerely


  • Snapshot isolation

    We have setup snapshot level isolation in our Berkeley DB XML database, and started getting the following errors during queries after a while:
    PANIC: Cannot allocate memory
    We set the max lockers at 10,000, max locks at 1,000,000 and max lock objects at 1,000,000 as well. We are also very careful to commit or abort every transaction initiated. All of our operations are done under the context of an explicit transaction. Could there be some memory leak? Should we be aware of some other caveats?
    Thank you,
    Alexander.

    Hi Alexander,
    I would suggest running the application under a memory leak checker/debugger, such as Purify or Valgrind. If you do get something suspicious please report it.
    Though, when running with snapshot isolation you have to be prepared for the cost that MVCC (MultiVersion Concurrency Control) implies, that is, larger cache size requirements.
    Pages are being duplicated when a writer takes a read lock on a page, therefore operating on a copy of that page. This avoids the situation where other writers would block due to a read lock held on the page, but it also means that the cache will fill up faster. You might need a larger cache in order to hold the entire working set in memory.
    Note that the need for more cache is amplified when you have a large number of concurrent, active, long-running transactions, as that increases the volume of active page versions (copies of pages that cannot safely be freed). In such a situation, it may be worth trying to run updates at serializable isolation and only run queries at snapshot isolation. The queries will not block updates, or vice versa, and the updates will not force page versions to be kept for long periods.
    You should try keeping the transactions running under snapshot isolation as short as possible.
    Of course, the recommended approach to resolve this issue is to increase the cache size, if possible. You can estimate how large your cache should be by taking a checkpoint, followed by a call to the DB_ENV->log_archive() method. The amount of cache required is approximately double the size of the remaining log files (that is, the log files that cannot be archived).
    Also, along with increasing the cache size you may need to increase the number of maximum active transactions that the application supports.
    Please review the following places for further information:
    [http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_read.html#id1609299]
    [http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/Java/isolation.html#snapshot_isolation]
    Regards,
    Andrei

  • Segmentation fault when using snapshot isolation with Berkeley DB 6.1.19 and 5.1.29

    Hello,
    I have been experimenting with snapshot isolation with Berkeley DB, but I find that it frequently triggers a segmentation fault when write transactions are in progress.  The following test program reliably demonstrates the problem in Linux using either 5.1.29 or 6.1.19. 
    https://anl.app.box.com/s/3qq2yiij2676cg3vkgik
    Compilation instructions are at the top of the file.  The test program creates a temporary directory in /tmp, opens a new environment with the DB_MULTIVERSION flag, and spawns 8 threads.  Each thread performs 100 transactional put operations using DB_TXN_SNAPSHOT.  The stack trace when the program crashes generally looks like this:
    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffff7483700 (LWP 11871)]
    0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    (gdb) where
    #0  0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #1  0x00007ffff7883c30 in __bam_get_root ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #2  0x00007ffff7883dca in __bam_search ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #3  0x00007ffff7870246 in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #4  0x00007ffff787468f in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #5  0x00007ffff79099f4 in __dbc_iput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #6  0x00007ffff7906c10 in __db_put ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #7  0x00007ffff79191eb in __db_put_pp ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #8  0x0000000000400f14 in thread_fn (foo=0x0)
        at ../tests/transactional-osd/bdb-snapshot-write.c:154
    #9  0x00007ffff7bc4182 in start_thread (arg=0x7ffff7483700)
        at pthread_create.c:312
    #10 0x00007ffff757f38d in clone ()
        at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    I understand that this test program, with 8 concurrent (and deliberately conflicting) writers, is not an ideal use case for snapshot isolation, but this can be triggered in other scenarios as well.
    You can disable snapshot isolation by toggling the value of the USE_SNAP #define near the top of the source, and the test program then runs fine without it.
    Can someone help me to identify the problem?
    many thanks,
    -Phil

    Hi Phil,
    We have taken a look at this in more detail and there was a bug in the code. We have fixed it and will roll the fix into our next 6.1 release. If you would like an early patch that goes on top of 6.1.19, please email me at [email protected], reference this forum post, and I can get a patch sent out to you. It will be a .diff file that you apply to the source code, after which you rebuild the library. Once again, thanks for finding the issue and providing a great test program, which tremendously helped in getting this resolved.
    thanks
    mike

  • Snapshot isolation level usage

    Dear All,
    There are some transaction tables in which more than one user adds and updates records (only).
    Whatever they add or update in the transaction tables, based on that entry they add a record to Table A1.
    Table A1 has two columns: one keeps the table name of the transaction table and the other keeps the PK (primary key) of the row in that transaction table.
    So Table A1 only ever gets inserts,
    and it gets an entry only for transaction tables, and only when a transaction table gets an entry.
    At the same time there is a process (ts) which reads Table A1 on a timed basis, picks up all records
    from Table A1, and reads the data from the transaction tables on the basis of the PKs stored in it. It then inserts all the rows it has read into a
    new temp table,
    and at the end of the transaction it deletes the processed records from Table A1.
    After some time it again picks up the new records from Table A1 and repeats the process.
    For process (ts) I want to use ALLOW_SNAPSHOT_ISOLATION
    so that users can keep on entering records.
    Q1) The ALLOW_SNAPSHOT_ISOLATION
    database option must be set to ON
    before one can start a transaction that uses the SNAPSHOT isolation level. I wanted to know whether I should set the option back to OFF after the process (ts) is complete, and switch
    it on again on the database when process (ts) starts again;
    that is, will keeping it on all the time affect the database in any way?
    Q2) Will ALLOW_SNAPSHOT_ISOLATION ON affect transactions running at other isolation levels, or only transactions at the snapshot isolation level? That is, I have old
    stored procs and front-end applications (web or Windows, on .NET) which use the default isolation level.
    Q3) Is my choice of isolation level for process (ts) correct, or is there another solution?
    Note: "the information is quite sparse, but I won't be able to give full information."
    yours sincerely

    >Q1) should i set the option to OFF after the process(ts) is complete
    No, keep it on.
    >Q2) ALLOW_SNAPSHOT_ISOLATION  ON , will affect other isolation level's transactions
    No, it will not affect any other transaction isolation level.
    >Q3) is my choice of isolation level for process(ts) is correct or there can be any other solution.
    Seems fine, although there are probably many other solutions.
    David
    David http://blogs.msdn.com/b/dbrowne/
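
    If it helps, a rough sketch of what the (ts) process could look like under SNAPSHOT isolation. The names A1, TableName, PK and TransactionTable1 are placeholders taken from the description above, and #batch / #staged are created on the fly:

        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            -- Grab the pointers that have accumulated so far.
            SELECT TableName, PK
            INTO #batch
            FROM dbo.A1;

            -- Read the referenced rows from a transaction table and stage them.
            -- Users can keep inserting meanwhile: snapshot reads do not block
            -- their writes, and their writes do not block this read.
            SELECT t.*
            INTO #staged
            FROM dbo.TransactionTable1 AS t
            JOIN #batch AS b
              ON b.TableName = 'TransactionTable1' AND b.PK = t.PK;

            -- Remove only the pointers this run picked up; rows inserted after
            -- the snapshot started are left for the next run.
            DELETE a
            FROM dbo.A1 AS a
            JOIN #batch AS b
              ON b.TableName = a.TableName AND b.PK = a.PK;
        COMMIT TRAN;

    Because A1 only ever receives inserts, the DELETE at the end should not run into update conflicts: it only touches rows that existed when the snapshot started and that nobody else modifies.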

  • Snapshot isolation in combination with service broker

    Hi all,
    I'm currently using the service broker in combination with snapshot isolation on the database.
    The notification request is executed under read committed isolation. The code looks like this:
    SqlDependency dependency = new SqlDependency(command, null, 0);
    dependency.OnChange += eventHandler;
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();   // the connection must be open before BeginTransaction
        using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            command.Transaction = tran;
            command.ExecuteNonQuery();
            tran.Commit();
        }
    }
    The request is successfully created and works fine at first glance.
    Now here is my problem:
    I created a notification request that should monitor two objects. The query (executed under read committed) looks something like this:
    SELECT Id, DataVersion FROM dbo.MyObjects WHERE Id = 1 OR Id = 2
    Afterwards I delete both objects in separate nested transactions. Both of them are running under snapshot isolation. It looks something like this:
    using (SqlConnection conn1 = new SqlConnection(connectionString))
    {
        conn1.Open();
        using (SqlTransaction tran1 = conn1.BeginTransaction(IsolationLevel.Snapshot))
        {
            using (SqlConnection conn2 = new SqlConnection(connectionString))
            {
                conn2.Open();
                using (SqlTransaction tran2 = conn2.BeginTransaction(IsolationLevel.Snapshot))
                {
                    SqlCommand command2 = conn2.CreateCommand();
                    command2.Transaction = tran2;
                    command2.CommandText = "DELETE FROM MyObjects WHERE Id = 2";
                    command2.ExecuteNonQuery();
                    tran2.Commit();
                }
            }

            SqlCommand command1 = conn1.CreateCommand();
            command1.CommandText = "DELETE FROM MyObjects WHERE Id = 1";
            command1.Transaction = tran1;
            command1.ExecuteNonQuery();
            tran1.Commit(); // -> Conflict exception
        }
    }
    A conflict exception is raised during the commit of the last transaction. The conflict seems to occur in the table "sys.query_notification_xxxxxxxxx". This is the exact message:
    An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
    Additional information: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'sys.query_notification_45295271' directly or indirectly in database 'MyDatabase' to update, delete,
    or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
    Is there any restriction for Service Broker that prohibits the usage of snapshot isolation?
    Thanks in advance.

    No, the error has nothing to do with Service Broker. Or for that matter, query notifications, which is the feature you are actually using. (Query notifications uses Service Broker, but Service Broker != Query notification.)
    You would get the same error if you had a trigger in MyObjects that tried to update the same row for both deletions. A snapshot transaction gives you a consistent view of the database in a certain point in time. Consider this situation:
    Snapshot transaction A, which started at time T, updates a row R at time T2. Snapshot transaction B, which started at time T1, updates the same row at time T3. Had they been regular non-snapshot transactions, transaction B would have been blocked already when it tried
    to read R, but snapshot transactions do not get blocked. But if B were permitted to update R, the update from transaction A would be lost. Assume that the update is an incremental one, for instance updating the cash balance of an account. You can see that
    this cannot be permitted.
    In your case, the row R happens to be a row in an internal table for query notifications, but it is the application design which is the problem. There is no obvious reason to use snapshot isolation in your example since you are only deleting. And there is
    even less reason to have two transactions and connections for the task.
    Erland Sommarskog, SQL Server MVP, [email protected]
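
    Erland's timeline can be reproduced with two sessions against any small table. A sketch, assuming a hypothetical table dbo.Account (Id, Balance) and ALLOW_SNAPSHOT_ISOLATION already ON; the initial SELECTs are there only to pin each transaction's snapshot:

        -- Session A (time T)
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        SELECT Balance FROM dbo.Account WHERE Id = 1;   -- establishes A's snapshot

        -- Session B (time T1)
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        SELECT Balance FROM dbo.Account WHERE Id = 1;   -- establishes B's snapshot

        -- Session A (time T2): modify row R and commit
        UPDATE dbo.Account SET Balance = Balance + 100 WHERE Id = 1;
        COMMIT TRAN;

        -- Session B (time T3): try to modify the same row R
        UPDATE dbo.Account SET Balance = Balance + 50 WHERE Id = 1;
        -- Msg 3960: Snapshot isolation transaction aborted due to update conflict.
        -- Had B been allowed to proceed, A's +100 would have been silently lost.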

  • Turning On Snapshot Isolation Gotchas

    Hello Experts,
    We have been experiencing a high number of deadlocks while using the MERGE statement, and turning on snapshot isolation solves our problem perfectly; our throughput and concurrency were not affected at all.
    We did load testing and monitored the tempdb version store size, and it was nothing significant; we have 64 GB of memory allocated on the prod server. Our team did the reading and research primarily from the online sources below.
    My question is: are there any gotchas in turning on snapshot isolation that you won't see right away? I want to learn from others' experiences before we venture into turning it on in our production environment. I saw that some folks experienced a 60 GB version store
    because there was a three-month-old active transaction.
    What kind of preventive and maintenance scripts would be useful to monitor the system and take corrective action?
    I have a few scripts to monitor the tempdb version store size and the perfmon transaction counters. Are there any other better scripts/tools available?
    Kimberly Tripp video on isolation levels:
    http://download.microsoft.com/download/6/7/9/679B8E59-A014-4D88-9449-701493F2F9FD/HDI-ITPro-TechNet-mp4video-MCM_11_SnapshotIsolationLecture(4).m4v
    Kendra Little on snapshot isolation:
    http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
    Microsoft Link: https://msdn.microsoft.com/en-us/library/ms188277(v=sql.105).aspx
    https://msdn.microsoft.com/en-us/library/bb522682.aspx
    SQL Team Link : http://www.sqlteam.com/article/transaction-isolation-and-the-new-snapshot-isolation-level
    Idera Short article on TempDB : http://sqlmag.com/site-files/sqlmag.com/files/uploads/2014/01/IderaWP_Demystifyingtempdb.pdf
    Jim Gray Example by Craig Freedman : http://blogs.msdn.com/b/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
    Thanks in advance.
    ~I90Runner

    It is unclear which isolation level you have enabled: RCSI or SI?
    Downsides:
    Excessive tempdb usage due to version store activity. Think about a session that deletes 1M rows. All those rows must be copied to the version store regardless of the session's transaction isolation level and/or whether there are other sessions
    running at optimistic isolation levels at the moment the deletion starts.
    Extra fragmentation – SQL Server adds a 14-byte version tag (version store pointer) to rows in the data files when they are modified. This tag stays until the index is rebuilt.
    Development challenges – again, error 3960 with the snapshot isolation level. Another example, in both isolation levels – trigger or code-based referential integrity. You can always solve it by adding a with (READCOMMITTED) hint
    if needed.
    While switching to RCSI can be a good emergency technique to remove blocking between readers and writers (if you can live with the overhead AND readers are using read committed), I would suggest finding the root cause of the blocking.
    Confirm that you have locking issues – check whether there are shared lock waits in the wait stats, that there are no lock escalations that block readers, check that queries are optimized, etc.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
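
    On the monitoring question, a few queries that are often used as a starting point (a sketch only; thresholds, alerting, and scheduling are up to you):

        -- Overall version store size, from the performance counters.
        SELECT cntr_value AS version_store_kb
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Version Store Size (KB)';

        -- Which rowsets are generating the most versions.
        SELECT TOP (10) database_id, rowset_id, aggregated_record_length_in_bytes
        FROM sys.dm_tran_top_version_generators
        ORDER BY aggregated_record_length_in_bytes DESC;

        -- Long-running snapshot transactions that keep old versions alive
        -- (the "3 month old active transaction" scenario).
        SELECT transaction_id, transaction_sequence_num, elapsed_time_seconds
        FROM sys.dm_tran_active_snapshot_database_transactions
        ORDER BY elapsed_time_seconds DESC;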

  • Snapshot isolation transaction aborted due to update conflict

    Hi Forum,
    Can anyone help me with a solution for the problem below?
    We are developing an MVC3 application with SQL Server 2008, and we are facing
    "Snapshot isolation transaction aborted due to update conflict":
    Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.Tb_M_Print' directly or indirectly in database 'DB_Production' to update, delete, or insert the row that has been modified or deleted
    by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
    Please tell me how to proceed with the above problem.
    Rama

    "Change the isolation level for the update/delete statement."
    The error message already mentions the solution.
    See also MSDN:
    Lesson 1: Understanding the Available Transaction Isolation Levels => Update Conflicts
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
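
    Two ways to act on that advice, sketched against the table named in the error message. The retry loop keeps snapshot isolation and simply re-runs the statement when error 3960 is caught; the column and key names (PrintedOn, PrintId, @PrintId) are placeholders, so adjust them to your schema:

        DECLARE @PrintId int = 1, @tries int = 0;    -- placeholder key value
        WHILE @tries < 3
        BEGIN
            BEGIN TRY
                SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
                BEGIN TRAN;
                    UPDATE dbo.Tb_M_Print
                       SET PrintedOn = GETDATE()
                     WHERE PrintId = @PrintId;
                COMMIT TRAN;
                BREAK;                               -- success, stop retrying
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRAN;
                IF ERROR_NUMBER() <> 3960 BREAK;     -- only retry update conflicts
                SET @tries = @tries + 1;
            END CATCH
        END

    The other option the message offers is to run the modifying transaction under READ COMMITTED instead of SNAPSHOT, so it blocks behind the competing writer and then proceeds, rather than being aborted.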

  • (If the snapshot isolation level is enabled in the DB) Is a version chain generated when any read committed transaction executes, or is a version chain only generated while a snapshot transaction is running?

    hi,
    I have enabled the snapshot isolation level. In my database all queries execute under read committed isolation; only one big transaction uses snapshot isolation.
    q1) I wanted to know: if no snapshot isolation transaction is running but the database is enabled for snapshot, will the normal
    queries using read committed create versions or not?
    yours sincerely.

    Enabling the snapshot isolation level at the database level does not change the behavior of queries running under other isolation levels, although data modifications do keep generating row versions while the option is ON. For the transactions that actually use snapshot isolation, you eliminate blocking between readers and writers (and writers do not block each other as long as they do not update
    the same rows), although it can lead to 3960 errors (data has been modified by another session).
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Berkeley DB's Snapshot Isolation

    After reading the reference document, I think snapshot isolation means that a write operation will take a read lock on the data page under a transaction, and after that another transaction with a read operation can read the page. But I coded it like this:
    step 1: txn1 updates the page but is not committed
    step 2: txn2 reads the page
    step 3: commit txn1
    step 4: commit txn2
    The program stops in step 2 and waits forever. When I change step 1 to a read, it executes normally. It seems like snapshot isolation still takes a write lock on the page. I feel confused about how snapshot isolation works.
    If someone can give me an example program, or tell me how it works, I will thank you quite a lot.

    Hi mike, thanks for your answer. I read the document again and again today. According to the suggestion, we usually use snapshot in a read-only transaction and another update transaction to write. So I recoded my program with a read-only transaction and an update transaction, but it is also blocked like before. I feel so confused. I put my program here and hope you can help me with it. Thanks for your time.
    #include"db_cxx.h"
    #include<iostream>
    #include<cstring>
    int main()
      u_int32_t env_flags =     DB_CREATE |
                                         DB_INIT_LOCK |
                                         DB_INIT_LOG |
                                         DB_INIT_MPOOL |
                                         DB_INIT_TXN;
      const char* home = "envHome";
      u_int32_t db_flags = DB_CREATE | DB_AUTO_COMMIT;
      const char* fileName = "envtest.db";
      Db* dbp = NULL;
      DbEnv myEnv(0);
      try{
           myEnv.open(home,env_flags,0);
           myEnv.set_flags(DB_MULTIVERSION,1);
           dbp = new Db(&myEnv,0);
           dbp->open(
                               NULL,           //Txn pointer
                               fileName,      //File name
                               NULL,           //Logic db name
                               DB_BTREE, //Database type
                               db_flags,      //Open flags
                               0                //file mode
      }catch(DbException &e){
           std::cerr<<"Error when opening database and Environment:"
                          <<fileName<<","<<home<<std::endl;
           std::cerr<<e.what()<<std::endl;
      //put data normally
      char *key1 = "luffy";
      char *data1 = "op";
      char *key2= "usopp";
      char *data2 = "brave";
      Dbt pkey1(key1,strlen(key1)+1);
      Dbt pdata1(data1,strlen(data1)+1);
      Dbt pkey2(key2,strlen(key2)+1);
      Dbt pdata2(data2,strlen(data2)+1);
      dbp->put(NULL,&pkey1,&pdata1,0);
      dbp->put(NULL,&pkey2,&pdata2,0);
      //using txn cursor to read and another cursor to modify before commit
      try{
           DbTxn *txn1 = NULL;
           myEnv.txn_begin(NULL,&txn1,DB_SNAPSHOT);
           Dbc *cursorp = NULL;
           dbp->cursor(txn1,&cursorp,0);
           Dbt tempData1,tempKey2,tempData2;
           tempData2.set_flags(DB_DBT_MALLOC);
           cursorp->get(&pkey1,&tempData1,DB_SET);
           cursorp->get(&tempKey2,&tempData2,DB_NEXT);
           //cout just to see if it is right
           std::cout<<(char*)pkey1.get_data()<<" : "<<(char*)tempData1.get_data()<<std::endl
                              <<(char*)tempKey2.get_data()<<" : "<<(char*)tempData2.get_data()<<std::endl;
           //txn2 to modify
           DbTxn *txn2 = NULL;
           myEnv.txn_begin(NULL,&txn2,0);
           Dbc *temcur = NULL;
           dbp->cursor(txn2,&temcur,0);
           temcur->put(&pkey1,&pdata2,DB_KEYFIRST);          //the program will stop here and wait forever. if the snapshop isolation made a copy before , why still block here?
                                                                                         //without this line,there won't deadlock.that means page was put a write lock before
           //commit the txn
           txn1->commit(0);
           txn2->commit(0);
      }catch(DbException &e){
          std::cerr<<"Error when use a txn"<<std::endl;
      try{
           dbp->close(0); //dbp should close before environment
           myEnv.close(0);
      }catch(DbException &e){
           std::cerr<<"Error when closing database and environment:"
                                    <<fileName<<","<<home<<std::endl;
           std::cerr<<e.what()<<std::endl;
      return 0;

  • Will the following situation be possible?

    Will the following situation be possible?
    On a MacBook Pro, can I replace the hard drive in the laptop with another hard drive that already has an OS installed? Will the machine accept the settings that are already installed on the second hard drive?

    If the OSX installed on the HDD is older than the MBP, it will not work.  Example, a MBP that came with Lion (10.7) will not be able to run Snow Leopard (10.6). 
    You can test this by connecting the HDD to the MBP via USB.  Start the MBP with the OPTION key down.  If it recognizes the external HDD and boots the MBP, then it can be installed internally.
    Ciao.

  • What can I do in the following situation? I think the headphone jack does not work; when I put in any cable that should and can go in, all I hear is low-frequency static sounds.

    What can I do in the following situation? I think the headphone jack does not work; when I put in any cable that should and can go in, all I hear is low-frequency static sounds.

    You might try talking to the Apple store manager and see if you don't get more help...you might also contact Apple.com/support and see if they can give you a contact with the Apple Italy offices.
    Sounds like you have two levels of failure, the first shop that did the repair and broke a wire, and the second that has disabled something else and isn't helping.
    Apple officially refurbished equipment is usually good quality.  Surprising this has happened but you need to press on with complaints and try for resolution.

  • An INSERT EXEC statement cannot be nested. Please tell me a good way of handling the following situation.

    Hi,
    The following query is showing the error "An INSERT EXEC statement cannot be nested".
    CREATE PROCEDURE [dbo].[Procedur3]
    @para1 int
    AS
    BEGIN
    CREATE TABLE #tem
    select * from detialpar where did=@para1
    --this code is quite big and is called from many places, so we kept it inside this SP so that we can call the SP to get the result.
    END
    GO
    CREATE PROCEDURE [dbo].[Procedur2]
    @para1 int,
    @para2 datetime
    AS
    BEGIN
    CREATE TABLE #tem
    insert into #tem (value) exec [dbo].[Procedur3] @para1
    exec ('select * from abc
    left join #tem on id=temid
    where id =' + cast(@para1 as varchar)) -- i do not want to change this big dynamic query, because it has a lot of optional code concatenated using "if then else".
    END
    GO
    CREATE PROCEDURE [dbo].[Procedure1]
    @para1 int,
    @para2 datetime
    AS
    BEGIN
    delete from table1 where id=@para1
    insert into table1 ( col1,col2) exec Procedur2 @para1,@para2
    ……. There are many blocks in this SP where we delete and insert with different SPs.
    select Name,Amount from #Temp1
    END
    GO
    CREATE PROC Procedure
    AS
    BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    SET NOCOUNT ON
    -- LOOP "A" starts here; it gets an id from a table xyz into @para1
    begin try
    begin tran
    exec [Procedure1] @para1
    -- LOOP "A" ends here
    COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION;
    END CATCH;
    END
    GO
    Please tell me a good way of solving the error.
    yours sincerely

    You cannot do it like the above.
    Try the below (not tested). We do not change your code; however, we moved your dynamic execution to a different procedure.
    CREATE PROCEDURE [dbo].[Procedur3]
    @para1 int
    AS
    BEGIN
    CREATE TABLE #tem
    insert into #tem (value)
    select * from detialpar where did=@para1
    --this code is quite big and is called from many places, so we kept it inside this SP so that we can call the SP to get the result.
    END
    GO
    CREATE PROCEDURE [dbo].[Procedur2]
    @para1 int,
    @para2 datetime
    AS
    BEGIN
    CREATE TABLE #tem
    exec [dbo].[Procedur3] @para1
    END
    GO
    CREATE PROCEDURE [dbo].[Procedure1]
    @para1 int,
    @para2 datetime
    AS
    BEGIN
    delete from table1 where id=@para1
    insert into table1 ( col1,col2)
    exec ('select * from abc
    left join #tem on id=temid
    where id =' + cast(@para1 as varchar)) -- i do not want to change this big dynamic query, because it has a lot of optional code concatenated using "if then else".
    ……. There are many blocks in this SP where we delete and insert
    with different SPs.
    select Name,Amount from #Temp1
    END
    GO
    CREATE PROC Procedure
    AS
    BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    SET NOCOUNT ON
    -- LOOP "A" starts here; it gets an id from a table xyz into @para1
    begin try
    begin tran
    exec [Procedure1] @para1
    -- LOOP "A" ends here
    COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION;
    END CATCH;
    END
    GO
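
    The root restriction is that an INSERT ... EXEC cannot call a procedure that itself performs an INSERT ... EXEC. The usual workaround is what the reply above is aiming at: let the caller create the temp table and have the inner procedure write into it, instead of returning a result set that the caller captures with INSERT ... EXEC. A stripped-down illustration (the procedure, table, and column names here are invented):

        CREATE PROCEDURE dbo.InnerProc
            @para1 int
        AS
        BEGIN
            -- #result is created by the caller; a temp table created in an outer
            -- procedure is visible to the procedures it calls.
            INSERT INTO #result (id, value)
            SELECT did, value FROM dbo.detialpar WHERE did = @para1;
        END
        GO

        CREATE PROCEDURE dbo.OuterProc
            @para1 int
        AS
        BEGIN
            CREATE TABLE #result (id int, value varchar(100));
            EXEC dbo.InnerProc @para1;        -- no INSERT ... EXEC needed
            SELECT id, value FROM #result;    -- join or reuse the staged rows here
        END
        GO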

  • SELECT hangs despite Snapshot Isolation

    - I'm trying to learn isolation levels.  
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    I thought this would allow me to issue a SELECT on a dirty row (I thought it would show me the clean version of the row). But my test failed - the SELECT hangs/pends. Why?
    - First I dirtied a row:
        if object_id('tblTestTrans') is not null drop table tblTestTrans
        create table tbltestTrans (
            id int
        )
        insert into tbltestTrans (id) values (1)
        Begin Tran
            Update tbltestTrans set id = 2
    Then I ran a SELECT in a new query window:
        ALTER DATABASE TestDb Set Allow_SnapShot_Isolation ON
        SET TRANSACTION ISOLATION LEVEL Snapshot
        select * from tblTestTrans -- hangs
    The select hangs. Why?   
       

    Your test seems a little incomplete. You should set the connection that is starting the transaction to snapshot before running the update. Then you can open another connection, set it to snapshot, and see only the clean version of the table (with
    only one record).
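
    For completeness, one way to arrange the test so the SELECT returns the last committed row instead of waiting. The key point is to run ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON before the updating transaction is opened; in the test as posted, that ALTER DATABASE statement most likely cannot finish while the update transaction in the first window is still open, so the statements after it never get to run:

        -- Window 1: one-time setup, while no other transactions are open
        ALTER DATABASE TestDb Set Allow_SnapShot_Isolation ON
        GO
        -- Window 1: dirty a row and leave the transaction open
        Begin Tran
            Update tbltestTrans set id = 2

        -- Window 2: read under snapshot isolation
        SET TRANSACTION ISOLATION LEVEL Snapshot
        Begin Tran
            select * from tblTestTrans   -- returns id = 1, the last committed version
        Commit Tran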
