How to check if a row to be inserted will cause a circular loop?

Hi-
I'm using a CONNECT BY clause to do some processing and to check that the data does not cause circular loops (which raise SQLCODE = -1436).
Customers can add data via an API, and this data, once inserted, can cause the problem explained above (a loop). Therefore I need to find a way to check that these new values don't collide with the existing data in the table.
One way that currently works is to insert the data and use my PL/SQL function to check that there are no loops; if there is a loop, I catch the exception, delete the rows I just inserted, and throw the error back to the user.
I find this very ugly and inefficient, but I can't find a way to use CONNECT BY with data that is not yet present in the table. Example:
table my_table contains
parent_id | child_id
111 | 777
777 | 333
and now customer wants to insert:
parent_id | child_id
777 | 111
Also, if customer wants to insert
333 | 111
if I insert the row and run my script, the loop is detected, but only after the insert. Is there any way to validate this without inserting and then removing the row?
the script I'm using is similar to the one posted here:
problems using CONNECT BY
thanks
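One way to test the candidate row without actually inserting it (a sketch, not tested here; :new_parent and :new_child stand for the values the customer wants to add) is to run the CONNECT BY over the existing rows plus the candidate row in an inline view, using NOCYCLE so the query flags the loop instead of raising ORA-01436:

```sql
-- Counts the rows that would close a loop if (:new_parent, :new_child)
-- were added to my_table; a result > 0 means the insert must be rejected.
SELECT COUNT(*) AS loops_found
FROM  (SELECT parent_id, child_id FROM my_table
       UNION ALL
       SELECT :new_parent, :new_child FROM dual)
WHERE  CONNECT_BY_ISCYCLE = 1
CONNECT BY NOCYCLE PRIOR parent_id = child_id;
```

Note this is only a single-session check; the discussion below about concurrent transactions still applies.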

This approach may fail to detect loops introduced by concurrent transactions if one of the transactions uses the serializable isolation level.
For example, assume there are two sessions (A and B), the table and trigger have been created, and the table is empty. Consider the following scenario.
First, session A starts a transaction with serializable isolation level:
A> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
Next, session B inserts a row and commits:
B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
1 row created.
B> COMMIT;
Commit complete.
Now, session A successfully inserts a conflicting row and commits:
A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
1 row created.
A> COMMIT;
Commit complete.
A> SELECT * FROM MY_TABLE;
PARENT_ID  CHILD_ID
      111       777
      777       111
Session A "sees" the table "as of" a point in time before session B inserted. Also, once session B commits, the lock it acquired is released.
An alternative approach that would prevent this could use SELECT...FOR UPDATE and a "table of locks" like this:
SQL> DROP TABLE MY_TABLE;
Table dropped.
SQL> CREATE TABLE MY_TABLE
  2  (
  3      PARENT_ID NUMBER,
  4      CHILD_ID NUMBER
  5  );
Table created.
SQL> CREATE TABLE LOCKS
  2  (
  3      LOCK_ID INTEGER PRIMARY KEY
  4  );
Table created.
SQL> INSERT INTO LOCKS(LOCK_ID) VALUES(123);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> CREATE OR REPLACE TRIGGER MY_TABLE_AI
  2      AFTER INSERT ON my_table
  3  DECLARE
  4      v_count NUMBER;
  5      v_lock_id INTEGER;
  6  BEGIN
  7      SELECT
  8          LOCK_ID
  9      INTO
10          v_lock_id
11      FROM
12          LOCKS
13      WHERE
14          LOCKS.LOCK_ID = 123
15      FOR UPDATE;
16         
17      SELECT
18          COUNT (*)
19      INTO  
20          v_count
21      FROM  
22          MY_TABLE
23      CONNECT BY
24          PRIOR PARENT_ID = CHILD_ID;
25 
26  END MY_TABLE_AI;
27  /
Trigger created.
Now the scenario plays out like this.
First, session A starts a transaction with serializable isolation level:
A> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
Next, session B inserts a row and commits:
B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
1 row created.
B> COMMIT;
Commit complete.
Now, when session A tries to insert a conflicting row:
A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
ERROR at line 1:
ORA-08177: can't serialize access for this transaction
ORA-06512: at "TEST.MY_TABLE_AI", line 5
ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
To show that this still handles other cases:
1. Conflicting inserts in the same transaction:
SQL> TRUNCATE TABLE MY_TABLE;
Table truncated.
SQL> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
1 row created.
SQL> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
ERROR at line 1:
ORA-01436: CONNECT BY loop in user data
ORA-06512: at "TEST.MY_TABLE_AI", line 15
ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
2. Read-committed inserts that conflict with previously committed transactions:
SQL> TRUNCATE TABLE MY_TABLE;
Table truncated.
SQL> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
ERROR at line 1:
ORA-01436: CONNECT BY loop in user data
ORA-06512: at "TEST.MY_TABLE_AI", line 15
ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
3. Conflicting inserts in concurrent, read-committed transactions:
a) First, empty out the table and start a read-committed transaction in one session (A):
A> TRUNCATE TABLE MY_TABLE;
Table truncated.
A> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
Transaction set.
b) Now, start a read-committed transaction in another session (B) and insert a row:
B> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
Transaction set.
B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
1 row created.
c) Now, try to insert a conflicting row in session A:
A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
This is blocked until session B commits, and when it does:
B> COMMIT;
Commit complete.
the insert in session A fails:
INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
ERROR at line 1:
ORA-01436: CONNECT BY loop in user data
ORA-06512: at "TEST.MY_TABLE_AI", line 15
ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
If updates are permitted on the table, then they could cause loops as well, but the trigger could be modified to test for this.
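A sketch of that modification (untested; it reuses the LOCKS row from the example above and fires on updates of either column as well):

```sql
CREATE OR REPLACE TRIGGER my_table_aiu
    AFTER INSERT OR UPDATE OF parent_id, child_id ON my_table
DECLARE
    v_lock_id INTEGER;
    v_count   NUMBER;
BEGIN
    -- Serialize all loop checks on a single lock row
    SELECT lock_id INTO v_lock_id
    FROM   locks
    WHERE  locks.lock_id = 123
    FOR UPDATE;

    -- Raises ORA-01436 if the table now contains a loop
    SELECT COUNT(*) INTO v_count
    FROM   my_table
    CONNECT BY PRIOR parent_id = child_id;
END my_table_aiu;
/
```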

Similar Messages

  • How to check the row level security in TOAD for oracle

    Hi,
    for example, I have 2 types of users:
    a normal user and a super user.
    A super user can see the group set (some column name) created by a normal user,
    but a normal user cannot see the set created by a super user.
    This set creation also has 3 types: 'U', 'P', 'S'.
    'P' and 'S' can be viewed even by a normal user,
    but 'U' should not be.
    So here we have some row level security for the normal user.
    So, in TOAD for Oracle, how do I check that?
    Let me know if I'm not clear.

    Like
    I'm the super user...
    And some records are inserted into a table by different users ('a', 'b', etc.).
    So, if user 'a' logs in, he should be able to see only the records inserted by 'a'.
    How do I see in TOAD where such scripts (filter conditions) are written?

  • How to check the records that were inserted in a day?Please Help!

    HI All,
    How do I check the records that were inserted in a day in a standard SAP table?
    For example : I want retrieve the records that were added in a day in WLK1 table.
    How do i do this?
    Urgent!! Please help!
    Thanks in advance!
    Sandeep Shenoy

    Hi,
        Changes to data within a table can be automatically logged. Such automatic logging of changes is called automatic table history. To turn on logging, tick the Log Data Changes check box on the Technical Settings screen.
    If this is already done for a particular table, you can get the record of the changes made to that particular table as explained under
    <a href="http://64.233.179.104/search?q=cache:pOdVy55jfAIJ:cma.zdnet.com/book/abap/ch06/ch06.htmHISTORYOFUPDATESINADAYINABAP&hl=en&gl=in&ct=clnk&cd=1">Automatic Table History and Change Documents</a>
    If this is helpful, please reward points.
    Regards,
    Anoop

  • How to check if data is being inserted into table?

    Oracle 10.2.0.1.0
    I have a long-running transaction, an insert based on a select, running from SQL*Plus. It has been running for more than 2 hours as the volume of data is extremely high (more than 2 million records).
    How do I check:
    1) The volume of data that is already inserted?
    This is to make sure the session is not "hung", but is actually inserting records into the table.
    Thanks

    I have a long-running transaction, an insert based on a select, running from SQL*Plus. It has been running for more than 2 hours as the volume of data is extremely high (more than 2 million records).
    How do I check:
    1) The volume of data that is already inserted?
    If there is a commit statement, then you can directly query the table for the number of rows inserted.
    This is to make sure the session is not "hung", but is actually inserting records into the table.
    To confirm whether it is still running, you can query a DBA view such as DBA_EXTENTS for that object; the bytes or blocks should increase while your insert is running.
    Hope this helps
    Virendra
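    As a sketch (untested; assumes access to the V$ views and that you know the loading session's SID), the progress can also be watched from another session via the transaction's undo statistics:

```sql
-- Undo records used by the active transaction roughly track the number
-- of rows changed so far. :sid is the SID of the loading session.
SELECT s.sid,
       t.used_urec AS undo_records,
       t.used_ublk AS undo_blocks
FROM   v$session     s
JOIN   v$transaction t ON t.addr = s.taddr
WHERE  s.sid = :sid;
```

    If used_urec keeps growing between runs of this query, the insert is still making progress; V$SESSION_LONGOPS can give similar information for long-running operations.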

  • How to skip a row to be inserted in staging table

    Hi Everyone,
    Actually I am transforming data from a source table to a staging table and then from staging to the final table. I have generated a primary key using a sequence. As I set the insert method of the staging table to truncate/insert, every time the mapping is loaded the staging table is truncated and new data is inserted; but because I am using a sequence in the staging table, it gives new numbers to old data from the source table, and that data gets duplicated into the final target table. For this reason I am using a key lookup on some of the input attributes, and then using an expression I am trying to avoid the duplication. In each output attribute of the expression, I am putting the case statement
    CASE WHEN INGRP1.ROW_ID IS NULL
    THEN
    INGRP1.ID
    END
    Due to this condition I am getting the error
    Warning
    ORA-01400: cannot insert NULL into ("SCOTT"."STG_TARGET_TABLE"."ROW_ID")
    But I am stuck: when the value of ROW_ID is null, what condition or statement should I write to skip inserting that row? I want to insert data only when ROW_ID IS NULL.
    Kindly help me out.
    Thanks
    Regards
    Suhail Dayer

    You don't need the tables to be identical to use MINUS, only the "Select List" must match. Assuming you have the same Business key (one or more columns that uniquely identifies a row from your source data) in both the source and final table you can do the following:
    - Use a Set Operation where the result is the Business Key of the Staging table MINUS the Business Key of the final table
    - The output of the Set Operation is then joined to the Staging table to get the rest of the attributes for these rows
    - The output of the Join is inserted into the final table
    This will make sure only rows with new Business Keys are loaded.
    Hope this helps,
    Roald
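    As a sketch of the steps above (hypothetical table and column names; adjust to your mapping):

```sql
-- Insert only staging rows whose business key is not already present
-- in the final table (Set Operation = MINUS, then join back to staging).
INSERT INTO final_table (business_key, attr1, attr2)
SELECT s.business_key, s.attr1, s.attr2
FROM   stg_table s
JOIN   (SELECT business_key FROM stg_table
        MINUS
        SELECT business_key FROM final_table) new_keys
  ON   new_keys.business_key = s.business_key;
```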

  • How to check all rows and colums runtime executing BW query

    Hi Guys,
    I have a BW query with many calculated key figures. While executing this query the performance is really bad/slow. I need to know in which object the query is taking a long time; is there any table or anything to get this information? I have looked at all the query performance statistics and everything looks good; I just want to figure out what is making the OLAP runtime longer.
    Thanks,
    Kris

    Hi Krish,
    You cannot check the time taken by any particular key figure/characteristic.
    However, if you really want to reduce the time, you can create aggregates proposed by SAP based on your query.
    First go to RSRT, place your query and click on "Execute + Debug".
    Then select "Display Aggregate Found" under Aggs and "Display Statistics Data" under Others in the "Debug Options".
    Now it will show you how you should create your aggregate based on different objects so that the total query execution time will be less.
    Also after clicking on BACK button you can check the time taken by each event during Query execution.
    Hope it helps.
    Thanks,
    Subrat.

  • No width/height set for items. This will cause an infinite loop. Aborting... How do I get out of this?

    How do I get out of this?

    What are you talking about? The width/height of what? Is this some sort of error message? If so from what?

  • How can I create a phase shift that will cause cross-cancellation?

    I recently recorded something using a USB audio input, and after it was done realized that a cellular device had interfered with the signal and I have a terrible hiss, some clicks, cell noise, etc. in the recording. Setting a noise print and running "Reduce Noise" did more to help this file than I ever would have thought possible (thank you Apple!!), but I think I might be able to do even better.
    The left channel has the audio I need, plus all the noise. The right channel has ONLY THE NOISE! Can anyone think of a way that I can use this right channel to create a cross-cancellation of the noise in the left channel? Theoretically, this should create a perfect (or close enough to it for me) file, should it not?
    The phase shifter doesn't seem to have what I would need to do this, but I'm sure some audio genius out there can think of a way I can either do this manually or with a filter or effect.
    Thanks for any suggestions!

    Hi Glen,
    If you were to take two identical signals, sum them in equal amounts, and flip the phase on one of them 180º relative to the other, you would get complete cancellation.
    According to the manual on page 221, Process > Invert will do this.
    Invert
    "Choosing this command inverts the phase of each sample in the audio file or selection.
    Each sample's amplitude is unchanged, but the phase is inverted. In the waveform
    display, the wave's crests become troughs and vice versa."
    If your R channel contains the exact same noise as the noise in your L channel, then this technique could work for you.
    You can test this out with any track: put a copy of it on another track and Invert; the resulting playback will be total silence.

  • Row wise data insertion

    Hi,
    I have a query like this
    SELECT wwv_flow_item.display_saveas (1, txt, 50, 500) "Date"
    FROM (
        WITH all_months AS (
            SELECT ADD_MONTHS ( TRUNC ( TO_DATE ('1-JAN-09', 'dd-MON-YY'), 'MM' )
                              , LEVEL - 1 )  AS dt
                 , LEVEL                     AS rn
            FROM dual
            CONNECT BY LEVEL <= MONTHS_BETWEEN ( TO_DATE ('1-APR-09', 'dd-Mon-YY')
                                               , TO_DATE ('1-JAN-09', 'dd-Mon-YY') ) + 1
        )
        SELECT TRANSLATE ( SYS_CONNECT_BY_PATH ( TO_CHAR (dt, 'Mon-YY'), '/' ), '/', ' ' ) AS txt
        FROM all_months
        WHERE CONNECT_BY_ISLEAF = 1
        START WITH rn = 1
        CONNECT BY rn = PRIOR rn + 1
    )
    The output generated by this query is like this
    Jan-09 Feb-09 Mar-09 Apr-09
    There is a table resource_plan
    which has a structure like this: ID, Dates, Comments.
    Data is getting stored like this: Jan-09 Feb-09 Mar-09 Apr-09
    I need to insert the data in this fashion:
    Jan-09
    Feb-09
    Mar-09
    Apr-09
    How do I split the data and save it? Please suggest.
    This is the insert query i am writing
    insert into resource_plan
    ( id,dates,comments)
    values
    (1,wwv_flow.g_f01,'');
    Please suggest how to split the row data and insert it column-wise.
    Thanks
    Sudhir.

    Sudhir_N wrote:
    My requirement was not with the query. I need to display the data in the same fashion it is getting populated; my only requirement was that when it gets stored into the table, it must be stored in this order:
    01-JAN-09
    01-FEB-09
    01-MAR-09
    01-APR-09
    Thanks
    Sudhir.
    And the following didn't help you in that?
    Message From OP's previous thread:
    Hope the following code helps:
    SQL> WITH test_tab AS
    2       (SELECT '01-Jan-2008' start_date, '01-Dec-2009' end_date
    3          FROM DUAL)
    4  SELECT     ADD_MONTHS (TO_DATE (start_date, 'DD-Mon-YYYY'), LEVEL - 1) date_1
    5        FROM test_tab
    6  CONNECT BY LEVEL <=
    7                  MONTHS_BETWEEN (TRUNC (TO_DATE (end_date, 'DD-Mon-YYYY')),
    8                                  TRUNC (TO_DATE (start_date, 'DD-Mon-YYYY'))
    9                                 )
    10                + 1
    11  /
    DATE_1
    01-JAN-08
    01-FEB-08
    01-MAR-08
    01-APR-08
    01-MAY-08
    01-JUN-08
    01-JUL-08
    01-AUG-08
    01-SEP-08
    01-OCT-08
    01-NOV-08
    DATE_1
    01-DEC-08
    01-JAN-09
    01-FEB-09
    01-MAR-09
    01-APR-09
    01-MAY-09
    01-JUN-09
    01-JUL-09
    01-AUG-09
    01-SEP-09
    01-OCT-09
    DATE_1
    01-NOV-09
    01-DEC-09
    24 rows selected.
    SQL>
    Please explain what you didn't understand.
    Regards,
    Jo

  • How can I make sure my record insert to table was successful?

    Here is the insert statement:
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    Connection con = DriverManager.getConnection("jdbc:odbc:MyDB", "", "");
    String queryStr = "INSERT INTO MyTable(a,b) VALUES(?,?)";
    PreparedStatement pstmt = con.prepareStatement(queryStr);
    pstmt.setString(1,strFirst);
    pstmt.setString(2,strSecond);
    pstmt.executeUpdate();
    How can I check that the record has been inserted properly?
    thanks
    Yahya

    It is working.
    Here is the code:
    int result;
    result = pstmt.executeUpdate();
    if (result == 1) {
        System.out.println("Insert succeeded");
    }
    thanks
    Yahya

  • How to "discard" a row from a view

    It seems to me like a simple thing, but I can't find an answer:
    I want to programmatically remove a row from an executed view's rowset, but I don't want the underlying entities removed?
    [I don't want any pending updates to the entities affected either. It's basically a UI thing - the user does something and I want to make the current row vanish.  I don't need or want to re-execute or alter the database as a result of this particular removal.]
    Anyone know of a vo.removeRowWithoutAffectingEntities method?
    Thanks,
    Mike.

    Found an old thread myself.
    How to remove a row from a rowset
    Will have to give this "hack" a try, I guess. (Sung, has this API been introduced into an unreleased version yet?)
    I had already tried overriding the updateable-entities' remove methods (which are called from row.remove()), but it seems that the view-row hangs around if the entities aren't actually removed. Found some "removeEntityReferences (i.e. set them to null)" method on a QueryCollection and was wondering if calling that followed by a row.remove() might do the trick, but the method's protected anyway. Might still see if I can call it somehow. Any comments on the viability of this, Sung? (Given that the previous work-around was "uncharted territory"....)
    Anyway, I guess I'll give the original work-around a shot. (Or make application non-ideally commit and requery!)
    Mike.

  • Print,Check,Apppend Row,Delete Row,Insert Row buttons actions code

    HI,
    I have an ALV table with Print, Check, Append Row, Delete Row, Insert Row buttons. But the client requirement is that they don't want those buttons in the ALV; they want them above the ALV table.
    Can you please let me know how to hide those buttons in the ALV, and give me the code for the Print, Check, Append Row, Delete Row, Insert Row actions.

    I hope you have instantiated your ALV. Check the below code
    * Instantiate the used component " You can use code wizard to get this code.
      DATA lo_cmp_usage TYPE REF TO if_wd_component_usage.
      lo_cmp_usage =   wd_this->wd_cpuse_usg_alv( ). "usg_alv should be your usage name
      IF lo_cmp_usage->has_active_component( ) IS INITIAL.
        lo_cmp_usage->create_component( ).
      ENDIF.
    * Get Model
      DATA lo_interfacecontroller TYPE REF TO iwci_salv_wd_table .
      lo_interfacecontroller =   wd_this->wd_cpifc_usg_alv( ).
      DATA lo_value TYPE REF TO cl_salv_wd_config_table.
      lo_value = lo_interfacecontroller->get_model( ).
    * Hide Standard buttons on ALV toolbar
      DATA: l_std_func TYPE REF TO if_salv_wd_std_functions.
      l_std_func ?= lo_value                                 .
      l_std_func->set_edit_append_row_allowed( abap_false )  .
      l_std_func->set_edit_insert_row_allowed( abap_false )  .
      l_std_func->set_edit_delete_row_allowed( abap_false )  .
      l_std_func->set_view_list_allowed( abap_false )        .
      l_std_func->set_sort_headerclick_allowed( abap_false ) .
      l_std_func->set_edit_check_available( abap_false )     .
      l_std_func->set_pdf_allowed( abap_false )              .
      l_std_func->set_export_allowed( abap_true )            .
    Radhika.

  • How to check the insert stmt

    Hi,
    How to check whether the following trigger is working properly or not?
    CREATE OR REPLACE TRIGGER employee_ins_t1
       BEFORE INSERT
       ON employee
       FOR EACH ROW
    BEGIN
       SELECT employee_seq.nextval
         INTO :new.id
         FROM dual;
    END;
    Please excuse me if this is a very newbie question

    Yeah, I tried to insert some data into the table but got the following error, so I'm confused about how to check whether the trigger is working:
    SQL> insert into employee(id, ename) values (null, 'abc');
    insert into employee(id, ename) values (null, 'abc')
    ERROR at line 1:
    ORA-08004: sequence EMPLOYEE_SEQ.NEXTVAL exceeds MAXVALUE and cannot be
    instantiated
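    The ORA-08004 means the sequence itself has hit its MAXVALUE; the trigger is firing correctly. A sketch of how to confirm and fix this (the sequence name is taken from the error message):

```sql
-- Check how close the sequence is to its ceiling
SELECT max_value, last_number
FROM   user_sequences
WHERE  sequence_name = 'EMPLOYEE_SEQ';

-- Raise the ceiling, then retest the trigger
ALTER SEQUENCE employee_seq MAXVALUE 999999999999;
INSERT INTO employee (id, ename) VALUES (NULL, 'abc');
SELECT id, ename FROM employee WHERE ename = 'abc';
```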

  • How to write a cursor to check every row of a table which has millions of rows

    Hello every one.
    I need help. please... Below is the script (sample data), You can run directly on sql server management studio.
    Here we need to update PPTA_Status column in Donation table. There WILL BE 3 statuses, A1, A2 and Q.
    Here we need to update the PPTA_status of January donations only. We need to write a cursor. As this is sample data we have only a few donations (rows), but the real table has millions of rows, and every row needs to be checked.
    If i run the cursor for January, cursor should take every row, row by row all the rows of January.
    we have donations in don_sample table, i need to check the test_results in the result_sample table for that donations and needs to update PPTA_status COLUMN.
    We need to check all the donations of January one by one. For every donation, we need to check its 2 previous donations, in the following way:
    To find the previous donations of a donation, first look up the donor of that donation; then we can find that donor's previous donations. Like this we need to check the 2 previous donations.
    If there are 2 previous donations and both have test results, we need to update the PPTA_STATUS column of this donation to 'Q'.
    If the 2 previous donation_numbers have test_code values (9, 10, 11) in the result_sample table, it means those donations have results.
    The BWX72 donor in the sample data is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and that donation has a result in the result_sample table, then set this donation's status to 'A2', after checking the result of this donation also.
    The ZBW24 donor in the sample data is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and it does NOT have a result in the result_sample table, then set this donation's status to 'A1', after checking the result of this donation also.
    The PGH56 donor in the sample data is an example of the above scenario.
    Like this we need to check all the donations in the don_sample table; it has millions of rows every month.
    we need to join don_sample and result_sample by donation_number. And we need to check for test_code column for result.
    -- creating table
    CREATE TABLE [dbo].[DON_SAMPLE](
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [donor_number] [varchar](12) NULL,
    [ppta_status] [varchar](5) NULL,
    [first_time_donation] [bit] NULL,
    [days_since_last_donation] [int] NULL
    ) ON [PRIMARY]
    --inserting values
    Insert into [dbo].[DON_SAMPLE] ([donation_number],[donation_date],[donor_number],[ppta_status],[first_time_donation],[days_since_last_donation])
    Select '27567167','2013-12-11 00:00:00.000','BWX72','A',1,0
    Union ALL
    Select '36543897','2014-12-26 00:00:00.000','BWX72','A',0,32
    Union ALL
    Select '47536542','2014-01-07 00:00:00.000','BWX72','A',0,120
    Union ALL
    Select '54312654','2014-12-09 00:00:00.000','JPZ41','A',1,0
    Union ALL
    Select '73276321','2014-12-17 00:00:00.000','JPZ41','A',0,64
    Union ALL
    Select '83642176','2014-01-15 00:00:00.000','JPZ41','A',0,45
    Union ALL
    Select '94527541','2014-12-11 00:00:00.000','ZBW24','A',0,120
    Union ALL
    Select '63497874','2014-01-13 00:00:00.000','ZBW24','A',1,0
    Union ALL
    Select '95786348','2014-12-17 00:00:00.000','PGH56','A',1,0
    Union ALL
    Select '87234156','2014-01-27 00:00:00.000','PGH56','A',1,0
    --- creating table
    CREATE TABLE [dbo].[RESULT_SAMPLE](
    [test_result_id] [int] IDENTITY(1,1) NOT NULL,
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [test_code] [varchar](5) NULL,
    [test_result_date] [datetime] NULL,
    [test_result] [varchar](50) NULL,
    [donor_number] [varchar](12) NULL
    ) ON [PRIMARY]
    ---SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] ON
    ---- inserting values
    Insert into [dbo].RESULT_SAMPLE( [test_result_id], [donation_number], [donation_date], [test_code], [test_result_date], [test_result], [donor_number])
    Select 278453,'27567167','2013-12-11 00:00:00.000','0009','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 278454,'27567167','2013-12-11 00:00:00.000','0010','2014-01-20 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 278455,'27567167','2013-12-11 00:00:00.000','0011','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 387653,'36543897','2014-12-26 00:00:00.000','0009','2014-01-24 00:00:00.000','N','BWX72'
    Union ALL
    Select 387654,'36543897','2014-12-26 00:00:00.000','0081','2014-01-24 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 387655,'36543897','2014-12-26 00:00:00.000','0082','2014-01-24 00:00:00.000','N','BWX72'
    UNION ALL
    Select 378245,'73276321','2014-12-17 00:00:00.000','0009','2014-01-30 00:00:00.000','N','JPZ41'
    Union ALL
    Select 378246,'73276321','2014-12-17 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 378247,'73276321','2014-12-17 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','JPZ41'
    UNION ALL
    Select 561234,'83642176','2014-01-15 00:00:00.000','0081','2014-01-19 00:00:00.000','N','JPZ41'
    Union ALL
    Select 561235,'83642176','2014-01-15 00:00:00.000','0082','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 561236,'83642176','2014-01-15 00:00:00.000','0083','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 457834,'94527541','2014-12-11 00:00:00.000','0009','2014-01-30 00:00:00.000','N','ZBW24'
    Union ALL
    Select 457835,'94527541','2014-12-11 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 457836,'94527541','2014-12-11 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587345,'63497874','2014-01-13 00:00:00.000','0009','2014-01-29 00:00:00.000','N','ZBW24'
    Union ALL
    Select 587346,'63497874','2014-01-13 00:00:00.000','0010','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587347,'63497874','2014-01-13 00:00:00.000','0011','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 524876,'87234156','2014-01-27 00:00:00.000','0081','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524877,'87234156','2014-01-27 00:00:00.000','0082','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524878,'87234156','2014-01-27 00:00:00.000','0083','2014-02-03 00:00:00.000','N','PGH56'
    select * from DON_SAMPLE
    order by donor_number
    select * from RESULT_SAMPLE
    order by donor_number

    You didn't mention the version of SQL Server.  It's important, because SQL Server 2012 makes the job much easier (and will also run much faster, by dodging a self join).  (As Kalman said, the OVER clause contributes to this answer).  
    Both approaches below avoid needing the cursor at all.  (There was part of your explanation I didn't understand fully, but I think these suggestions work regardless)
    Here's a SQL 2012 answer, using LAG() to lookup the previous 1 and 2 donation codes by Donor:  (EDIT: I overlooked a couple things in this post: please refer to my follow-up post for the final/fixed answer.  I'm leaving this post with my overlooked
    items, for posterity).
    With Results_Interim as
    (
    Select *
    , count('x') over(partition by donor_number) as Ct_Donations
    , Lag(test_code, 1) over(partition by donor_number order by donation_date ) as PrevDon1
    , Lag(test_code, 2) over(partition by donor_number order by donation_date ) as PrevDon2
    from RESULT_SAMPLE
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
    when PrevDon1 in (9, 10, 11) then 'A2'
    when PrevDon1 is not null then 'A1'
    End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
    Order by Donor_Number, donation_date
    And a SQL 2005 or greater version, not using SQL 2012 new features
    With Results_Temp as
    (
    Select *
    , count('x') over(partition by donor_number) as Ct_Donations
    , Row_Number() over(partition by donor_number order by donation_date ) as RN_Donor
    from RESULT_SAMPLE
    )
    , Results_Interim as
    (
    Select R1.*, P1.test_code as PrevDon1, P2.Test_Code as PrevDon2
    From Results_Temp R1
    left join Results_Temp P1 on P1.Donor_Number = R1.Donor_Number and P1.Rn_Donor = R1.RN_Donor - 1
    left join Results_Temp P2 on P2.Donor_Number = R1.Donor_Number and P2.Rn_Donor = R1.RN_Donor - 2
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
    when PrevDon1 in (9, 10, 11) then 'A2'
    when PrevDon1 is not null then 'A1'
    End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
    Order by Donor_Number, donation_date

  • How to display first row value returened from a query as checked as default in a report

    How to display first row value returned from a query as checked as default in a report
    Example
    Parameter 1
    Parameter2
    ABD
    x(checked)
    Test
    DEF
    JMG
    Mudassar

    Hi Mudassar,
    The issue is caused by the order in which the parameters appear in the Report Data tab, which can differ between executions and cause the report to fail. In other words, "Parameter2" is evaluated before parameter "A". We can adjust the parameter order to solve the issue.
    If "Parameter2" is parameter "A", we cannot use that expression, because fields cannot be used in report parameter expressions. If we want to display the first value returned from a query as the default value, we have to fill the "Specify values" text box with the specific value in the Default Values dialog box.
    Regards,
    Alisa Tang
    TechNet Community Support

Maybe you are looking for

  • Using a JCheckBox in the header of a JTable

    I would like to know if it is possible to put a JCheckBox in the header of a column of a JTable. In fact my table has several columns and its first column uses checkboxes to signify the seleced items in the table. I would like to have a checkbox in t

  • Blue Screen Error for Windows 7 Home Basic 64 Bit

    I have a Sony Vaio E Series with WINDOWS 7 64bit home basic preinstalled. It worked perfectly fine for 2 yrs until last month, I needed to replace the hard drive due to bad sectors. Now, after a month of replacing my hard drive fresh from Sony Cent

  • It seems my asset links (mostly images) get placed in the WRONG location after exporting

    Hi There, I've been using the test version of Muse for the part 10 days and have really been enjoying it!  I'm about to launch my first site with it. But!  In my design mode, all the correct images are on the right page, and then often when I go to t

  • Kindle Fire 8.9

    Is there any way to get a 1024 x 768 folio file look good on a Kindle Fire HD 8.9 or do I have to do a completely different resolution for it to look good? Also, is there any time period in which an Android and/or Kindle Fire will support PDF-based F

  • Material Rate of particular period

    Hi all, There is a requirement for me to get the rate of a material for the given period. Ex: Input: Material: 800101 Period  : 01.2008 Output: Unit Rate : 12.36 I can get only the current material rate in the table MBEW. Also I tried history table M