Extract only updated rows

Hi!
I've figured out how to get the rows changed by an UPDATE statement in the Replicat process (I don't need rows where the column values stay unchanged).
Is it possible to filter these rows in the Extract process?

See Managing Conflicts in the Admin guide.
Edit:
You can also use GETUPDATEBEFORES
GETUPDATEBEFORES | IGNOREUPDATEBEFORES
Valid for Extract and Replicat
Use the GETUPDATEBEFORES and IGNOREUPDATEBEFORES parameters to control whether or not
the before images of updated columns are included in the records that are processed by
Oracle GoldenGate. Before images contain column details that existed before a record was
updated. Use the GETUPDATEBEFORES parameter in the Extract parameter file to extract
before images or in the Replicat parameter file to replicate before images.
You can compare before images with after images to identify the net results of a
transaction or perform other delta calculations. For example, if a BALANCE field is $100
before an update and $120 afterward, a comparison would show the difference of $20. You
can use the column-conversion functions of Oracle GoldenGate to perform the comparisons
and calculations.
You also can use GETUPDATEBEFORES to maintain a transaction-history table. For more
information about performing delta calculations and using transaction history, see the
Oracle GoldenGate Windows and UNIX Administrator’s Guide.
The GETUPDATEBEFORES and IGNOREUPDATEBEFORES parameters are table-specific. One
parameter remains in effect for all subsequent TABLE or MAP statements, until the other
parameter is encountered.
Default IGNOREUPDATEBEFORES
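For example, a minimal Extract parameter sketch of this approach (the group, trail, schema, and column names are illustrative, not from this thread):

EXTRACT extupd
USERID ogg, PASSWORD ogg
EXTTRAIL ./dirdat/et
GETUPDATEBEFORES
-- pass an update along only when the monitored column actually changed
TABLE hr.accounts, FILTER (ON UPDATE, BEFORE.balance <> balance);

The BEFORE.column comparison in the FILTER clause is what suppresses updates whose values are unchanged; it requires GETUPDATEBEFORES so that the before image is available to the filter.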

Similar Messages

  • Update only the row in DB where the cursor is positioned using Write Back

    Hello,
    I need the ability to create/change values (insert or update data back into the database), dynamically or by typing, in some columns of the physical DB that are mapped in a report request, so that the new or changed values can be used as filter columns in another report request.
    So, I have created a new report request in a Subject Area where a certain column of the result is editable, making it possible to change the present value and then write it back to the physical DB.
    I have implemented the Write Back value interaction type for the column 'MOV_VALUE', and every time I change any value in this column it updates all rows of the table 'MOVEMENTS' with that same value.
    See below the Template file used in the Write Back table properties:
    <?xml version="1.0" encoding="utf-8" ?>
    <WebMessageTables xmlns:sawm="com.siebel.analytics.web/message/v1">
    <WebMessageTable lang="en-us" system="WriteBack" table="Messages">
    <WebMessage name= "UpdateMovementData">
    <XML>
    <writeBack connectionPool="Connection Pool">
    <insert> </insert>
    <update> UPDATE MOVEMENTS SET MOV_VALUE=@{c4} </update>
    <!-- <postUpdate>COMMIT</postUpdate> -->
    </writeBack>
    </XML>
    </WebMessage>
    </WebMessageTable>
    </WebMessageTables>
    Is there any way to update only the record where the cursor is positioned when using Write Back in the Answers report?
    Is it necessary to add a condition to the WHERE clause of the UPDATE statement? Which conditions?
    Thanks in advance.
    Regards,
    Pedro Resende
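    The UPDATE in the template above has no WHERE clause, which is exactly why every row of MOVEMENTS receives the new value. Write-back templates normally key the statement on the row's primary key; a minimal sketch, assuming a hypothetical key column MOV_ID exposed to the template as @{c0}:
    <update> UPDATE MOVEMENTS SET MOV_VALUE=@{c4} WHERE MOV_ID=@{c0} </update>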

    You might try posting to a BPM forum. BPM is a layered product that happens to use WL JMS, but I think most of the folks on this forum aren't familiar with BPM.
    Tom

  • Only updating specific rows.

    How do I update only certain rows in an AdvancedDataGrid, to avoid updating the whole thing?
    If it's any help, I want to update only the selected cells, or rows containing selected cells.  (allowMultipleSelection="true").
    One possible way is to convert selectedCells row and column values into references to the itemRenderer, and tell each itemRenderer to refresh. But if there is a more direct way, I'll take it. (Note that the documented AdvancedDataGrid.selectedItems property doesn't work.)

    Hi,
    From your description, you have a list with a Hyperlink/Picture column. When you change the Standard View to Datasheet View and perform a bulk upload, the error occurs when there are .jpg files.
    I tried to reproduce via UI or code, all worked as expected.
    Can this error occur if you upload the .jpg files in Standard View?
    What if you upload one single .jpg file to this list in Datasheet View?
    If the issue still exists, I suggest you do the same test on another clean list in case it is a list issue.
    Feel free to reply with the test result if the issue still exists.             
    Best regards
    Patrick Liang
    TechNet Community Support

  • Grant to update only one row

    Hi all,
    I'm working with Oracle Enterprise Edition 10.2.0.3.
    I need to create a new user on my database with permission to update only one row on table.
    Do you know if this is possible? If so, what is the grant to do this?
    Regards,
    dbajug

    Try this:
    SQL> create user usr1 identified by usr1;
    User created.
    SQL> create user usr2 identified by usr2;
    User created.
    SQL> grant connect, resource to usr1,usr2;
    Grant succeeded.
    SQL> conn usr1/usr1
    Connected.
    SQL> create table t (id number);
    Table created.
    SQL> insert into t values(1);
    1 row created.
    SQL> insert into t values(2);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> grant update on t to usr2;
    Grant succeeded.
    SQL> create or replace trigger trg_t
      2  before update on t
      3  for each row
      4  begin
      5  if :old.id>1 then
      6  raise_application_error(-22299,'You cant change the specific value ');
      7  end if;
      8  end;
      9  /
    Trigger created.
    SQL> update usr1.t set id=2 where id=1;
    1 row updated.
    SQL> update usr1.t set id=2 where id=2;
    update usr1.t set id=2 where id=2
    ERROR at line 1:
    ORA-21000: error number argument to raise_application_error of -22299 is out of
    range
    ORA-06512: at "USR1.TRG_T", line 3
    ORA-04088: error during execution of trigger 'USR1.TRG_T'
    SQL>
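    One caveat about the trigger above: raise_application_error only accepts error numbers from -20000 to -20999, which is why the session ends in ORA-21000 ("error number argument ... is out of range") rather than the intended message. The update is still blocked, but only because the trigger itself errors out. A corrected sketch:
    create or replace trigger trg_t
    before update on t
    for each row
    begin
      -- user-defined error numbers must lie in -20000..-20999
      if :old.id > 1 then
        raise_application_error(-20299, 'You cannot change this row');
      end if;
    end;
    /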

  • Cursor only updates every second row

    Hello Helpful Peoples
    I have a table with a set of delivery dates, and am wanting to add a date on which a reminder flag should be raised by the system. The delivery dates are already populated and I'm wanting to add functionality where the end user can set how many days prior they are reminded.
    I've opened a cursor to retrieve all the current due dates from my source table. When I loop through this cursor and update the reminder date and reminder text, my code is only updating every second row.
    I can't work out why this code seems to skip a row every time it updates a record. Is anyone able to assist?
    declare
      v_date rep_delivery.reminderdate%type;
      v_rownum number;
      CURSOR reminder_cur IS
        SELECT DEL.DUEDATE
          FROM REP_DELIVERY DEL
         WHERE DEL.REP_ID = :P212_REP_ID
        FOR UPDATE OF DEL.REMINDERDATE;
    begin
      FOR x IN reminder_cur LOOP
        FETCH reminder_cur INTO v_date;
        EXIT WHEN reminder_cur%NOTFOUND;
        v_date := v_date - to_number(:P212_REMDAYS);
        UPDATE REP_DELIVERY DEL2
           SET DEL2.REMINDERDATE = v_date
         WHERE CURRENT OF reminder_cur;
        UPDATE REP_DELIVERY DEL3
           SET DEL3.REMINDERTEXT = :P212_DESCRIPTION
         WHERE CURRENT OF reminder_cur;
      END LOOP;
      commit;
    end;

    Oolite,
    Use this code:
    DECLARE
       v_date   rep_delivery.reminderdate%TYPE;
       CURSOR reminder_cur
       IS
          SELECT        ROWID, duedate
                   FROM rep_delivery
                  WHERE rep_id = :p212_rep_id
          FOR UPDATE OF reminderdate;
    BEGIN
       FOR x IN reminder_cur
       LOOP
          v_date := x.duedate;
          v_date := v_date - TO_NUMBER (:p212_remdays);
          UPDATE rep_delivery del2
             SET del2.reminderdate = v_date,
                 del2.remindertext = :p212_description
           WHERE ROWID = x.ROWID;
       END LOOP;
       COMMIT;
    END;
    and it will work for you.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://htmldb.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • Extract report only 15 rows at a time

    Hi,
    I need to generate a report; when I export it to a spreadsheet, it must take only 15 rows at a time. How can I do this? Please suggest.
    Thanks
    Sudhir.

    Have a look at the maxbufferrowsize setting.
    But I doubt that it is related. You have probably hit the maximum allowed size for your version of Excel.
    Arthur My Blog

  • How can I modify this script to return only certain rows of my mySQL table?

    Hi there,
    I have a php script that accesses a mySQL database, and it was generated automatically by the Flex Builder wizard. The script works great and there are no problems with it. It allows me to perform CRUD on a table by calling it from my Flex app, and it retrieves all the data and puts it into a nice MXML format.
    My question: currently, when I call "findAll" to retrieve all the data in the table, it retrieves ALL the rows in the table. That's fine, but my table is starting to grow really large, with thousands of rows.
    I want to modify this script so that I can pass a variable into it from Flex so that it only retrieves the rows that match the "$subscriber_id" variable that I pass. In this way the results are not the entire table's data, only the rows that match 'subscriber_id'.
    I know how to pass a variable from Flex into php and the code on the php side to pick it up would look like this:
    $subscriberID = $_POST['subscriberID'];
    Can anyone shed light as to the proper code modification in "findAll" which will take my $subscriberID variable and compare it to the 'subscriber_id' field and then only return those rows that match? I think it has something to do with lines 98 to 101.
    Any help is appreciated.
    <?php
    require_once(dirname(__FILE__) . "/2257safeDBconn.php");
    require_once(dirname(__FILE__) . "/functions.inc.php");
    require_once(dirname(__FILE__) . "/XmlSerializer.class.php");

    /*
     * This is the main PHP file that process the HTTP parameters,
     * performs the basic db operations (FIND, INSERT, UPDATE, DELETE)
     * and then serialize the response in an XML format.
     *
     * XmlSerializer uses a PEAR xml parser to generate an xml response.
     * this takes a php array and generates an xml according to the following rules:
     * - the root tag name is called "response"
     * - if the current value is a hash, generate a tagname with the key value, recurse inside
     * - if the current value is an array, generated tags with the default value "row"
     *
     * for example, we have the following array:
     * $arr = array(
     *      "data" => array(
     *           array("id_pol" => 1, "name_pol" => "name 1"),
     *           array("id_pol" => 2, "name_pol" => "name 2")
     *      ),
     *      "metadata" => array(
     *           "pageNum" => 1,
     *           "totalRows" => 345
     *      )
     * );
     *
     * we will get an xml of the following form
     * <?xml version="1.0" encoding="ISO-8859-1"?>
     * <response>
     *   <data>
     *     <row>
     *       <id_pol>1</id_pol>
     *       <name_pol>name 1</name_pol>
     *     </row>
     *     <row>
     *       <id_pol>2</id_pol>
     *       <name_pol>name 2</name_pol>
     *     </row>
     *   </data>
     *   <metadata>
     *     <totalRows>345</totalRows>
     *     <pageNum>1</pageNum>
     *   </metadata>
     * </response>
     *
     * Please notice that the generated server side code does not have any
     * specific authentication mechanism in place.
     */

    /* The filter field. This is the only field that we will do filtering after. */
    $filter_field = "subscriber_id";

    /* we need to escape the value, so we need to know what it is
     * possible values: text, long, int, double, date, defined */
    $filter_type = "text";

    /*
     * constructs and executes a sql select query against the selected database
     * can take the following parameters:
     * $_REQUEST["orderField"] - the field by which we do the ordering. MUST appear inside $fields.
     * $_REQUEST["orderValue"] - ASC or DESC. If neither, the default value is ASC
     * $_REQUEST["filter"] - the filter value
     * $_REQUEST["pageNum"] - the page index
     * $_REQUEST["pageSize"] - the page size (number of rows to return)
     * if neither pageNum and pageSize appear, we do a full select, no limit
     * returns : an array of the form
     * array (
     *      data => array(
     *           array('field1' => "value1", "field2" => "value2")
     *      ),
     *      metadata => array(
     *           "pageNum" => page_index,
     *           "totalRows" => number_of_rows
     *      )
     * )
     */
    function findAll() {
        global $conn, $filter_field, $filter_type;
        /* the list of fields in the table. We need this to check that the sent value for the ordering is indeed correct. */
        $fields = array('id','subscriber_id','lastName','firstName','birthdate','gender');
        $where = "";
        if (@$_REQUEST['filter'] != "") {
            $where = "WHERE " . $filter_field . " LIKE " . GetSQLValueStringForSelect(@$_REQUEST["filter"], $filter_type);
        }
        $order = "";
        if (@$_REQUEST["orderField"] != "" && in_array(@$_REQUEST["orderField"], $fields)) {
            $order = "ORDER BY " . @$_REQUEST["orderField"] . " " . (in_array(@$_REQUEST["orderDirection"], array("ASC", "DESC")) ? @$_REQUEST["orderDirection"] : "ASC");
        }
        //calculate the number of rows in this table
        $rscount = mysql_query("SELECT count(*) AS cnt FROM `modelName` $where");
        $row_rscount = mysql_fetch_assoc($rscount);
        $totalrows = (int) $row_rscount["cnt"];
        //get the page number, and the page size
        $pageNum = (int)@$_REQUEST["pageNum"];
        $pageSize = (int)@$_REQUEST["pageSize"];
        //calculate the start row for the limit clause
        $start = $pageNum * $pageSize;
        //construct the query, using the where and order condition
        $query_recordset = "SELECT id,subscriber_id,lastName,firstName,birthdate,gender FROM `modelName` $where $order";
        //if we use pagination, add the limit clause
        if ($pageNum >= 0 && $pageSize > 0) {
            $query_recordset = sprintf("%s LIMIT %d, %d", $query_recordset, $start, $pageSize);
        }
        $recordset = mysql_query($query_recordset, $conn);
        //if we have rows in the table, loop through them and fill the array
        $toret = array();
        while ($row_recordset = mysql_fetch_assoc($recordset)) {
            array_push($toret, $row_recordset);
        }
        //create the standard response structure
        $toret = array(
            "data" => $toret,
            "metadata" => array(
                "totalRows" => $totalrows,
                "pageNum" => $pageNum
            )
        );
        return $toret;
    }

    /*
     * constructs and executes a sql count query against the selected database
     * can take the following parameters:
     * $_REQUEST["filter"] - the filter value
     * returns : an array of the form
     * array (
     *      data => number_of_rows,
     *      metadata => array()
     * )
     */
    function rowCount() {
        global $conn, $filter_field, $filter_type;
        $where = "";
        if (@$_REQUEST['filter'] != "") {
            $where = "WHERE " . $filter_field . " LIKE " . GetSQLValueStringForSelect(@$_REQUEST["filter"], $filter_type);
        }
        //calculate the number of rows in this table
        $rscount = mysql_query("SELECT count(*) AS cnt FROM `modelName` $where");
        $row_rscount = mysql_fetch_assoc($rscount);
        $totalrows = (int) $row_rscount["cnt"];
        //create the standard response structure
        $toret = array(
            "data" => $totalrows,
            "metadata" => array()
        );
        return $toret;
    }
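    A hedged sketch of the change being asked about: drive the WHERE clause from the posted subscriber id instead of the generic filter (the exact wiring is an assumption; GetSQLValueStringForSelect comes from the script's own functions.inc.php and escapes the value):
    // inside findAll(), replace the $where block with an equality test
    $subscriberID = @$_POST['subscriberID'];
    $where = "";
    if ($subscriberID != "") {
        $where = "WHERE subscriber_id = " . GetSQLValueStringForSelect($subscriberID, $filter_type);
    }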


  • Selecting only one row at a time

    Hi experts,
    I have the following doubt regarding selecting rows from a db:
    Is there any way of selecting only one row AT A TIME from a database, just to collect the data row by row instead of in a single document containing all the rows?
    I would like you to elaborate on this, as I need to send only one row to the IE, and then another row, and so on... without throwing any error!
    I have seen that there are SELECT SINGLE and SELECT UP TO 1 ROWS, but these two are only useful when retrieving a single row, and that does not match my requirements. I need to process all the rows, but one by one.
    I know that we can use the receiver jdbc adapter as if it was a sender by means of its specific datatype, but how to do it row by row??
    Hope I have explained it well.
    Thanks in advance and best regards,
    David

    Hi kiran,
    Yes, my table has 5 not-null fields, but I am selecting and updating fixed values, so I think I will definitely go for the following solution:
    SELECT * FROM t1 WHERE status='0' and ROWNUM<2;
    UPDATE t1 SET status='1' WHERE status='0' and ROWNUM<2;
    My only concern is whether the update will take the same row as the select... BTW, I think it will.
    What do you guys think?
    I've been trying your proposed queries but received some errors. Your queries are very interesting, but I think the ones above meet my requirements, as the status field will be 0 for unprocessed rows and 1 for processed ones (and the update will look for the row that meets the same WHERE clause as the select, and then, and only then, set status='1').
    The only thing i have to care about is what i questioned before.
    Thanks a lot and best regards,
    David
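    One way to be sure the UPDATE touches exactly the row the SELECT returned is to carry its ROWID along; a minimal sketch, assuming the t1/status columns discussed above:
    SELECT rowid, t1.* FROM t1 WHERE status = '0' AND ROWNUM < 2;
    -- then, binding the rowid fetched above:
    UPDATE t1 SET status = '1' WHERE rowid = :picked_rowid;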

  • Update row in a table based on join on multiple rows in another table

    I am using SQL Server 2005. I have the following update query which is not working as desired.
    UPDATE DocPlant
    SET DocHistory = DocHistory + CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
    FROM PlantDoc PD INNER JOIN PlantAction PA on PD.DocID = PA.DocID AND PD.PlantID = PA.PlantID 
    For each DocID and PlantID in the PlantDoc table there are multiple rows in the PlantAction table. I would like to concatenate the ActionDate and ActionLog information into the DocHistory column of the DocPlant table. But the above update query takes only one row from the PlantAction table, even though there are multiple rows that match on DocID and PlantID.
    DocHistory column is of type NVARCHAR(MAX).
    How do I fix my query to achieve what I want ? Thanks for the help.

    UPDATE DocPlant
    SET DocHistory = DocHistory + CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
    FROM PlantDoc PD INNER JOIN PlantAction PA on PD.DocID = PA.DocID AND PD.PlantID = PA.PlantID 
    We do not use the old Sybase UPDATE..FROM.. syntax. Google it and learn how it does not work. We do not use the old Sybase CONVERT() string function. You are still writing 1950's COBOL with string dates instead of temporal data types.
    You also did not post DDL, so we have to guess about everything. Does your boss make you work without DDL? How do you do it? 
    >> For each DocID and PlantID in PlantDoc table there are multiple rows in PlantAction [singular name?] table. I would like to concatenate ActionDate and ActionLog information into DocHistory column of DocPlant table. <<
    Why? What does this new data element mean? This is like dividing Thursday by Red and expecting a reasonable answer. Now, non-SQL programmers who are still writing COBOL will violate the tiered architecture rule about doing display formatting in the database.
    If you will follow forum rules, we can help you. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
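    Tone aside, the usual SQL Server 2005 way to concatenate many child rows during an UPDATE is a correlated FOR XML PATH('') subquery. A minimal sketch, assuming DocPlant carries DocID and PlantID columns as the post implies:
    UPDATE DP
    SET DocHistory = DP.DocHistory + X.Hist
    FROM DocPlant DP
    CROSS APPLY (SELECT (SELECT CONVERT(VARCHAR(20), PA.ActionDate, 100) + ' - ' + PA.ActionLog + '. '
                         FROM PlantAction PA
                         WHERE PA.DocID = DP.DocID AND PA.PlantID = DP.PlantID
                         FOR XML PATH('')) AS Hist) AS X
    WHERE X.Hist IS NOT NULL;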

  • Cursor and Update rows based on value/date

    SQL Server 2012
    Microsoft SQL Server Management Studio
    11.0.3128.0
    Microsoft Analysis Services Client Tools
    11.0.3128.0
    Microsoft Data Access Components (MDAC)
    6.1.7601.17514
    Microsoft MSXML 3.0 4.0 5.0 6.0 
    Microsoft Internet Explorer
    9.11.9600.16518
    Microsoft .NET Framework
    4.0.30319.18408
    Operating System
    6.1.7601
    The objective is to test the cursor and use it in a production environment after this is fixed. What I would like to do is update rows in a column I duplicated from 'HiredDate' in the AdventureWorks2012 HumanResources.Employee table. I made the duplicate column, called 'DateToChange', and would like to change it based on a date I have picked, which normally returns 2 results (i.e. the date '04/07/2003'). The code runs, but will not change both dates. It did run with an error once, but changed only 1 of the 2 rows, because it said ['nothing available in next fetch'].
    The code to add the columns and perform the query to get the results I am running this against:
    -- ADD column 'DateToChange'
    ALTER TABLE [HumanResources].[Employee] ADD DateToChange Date NOT NULL;
    -- Copy 'HireDate' data to 'DateToChange'
    UPDATE HumanResources.Employee SET DateToChange = HireDate;
    -- Change 'DateToChange' to NOT NULL
    ALTER TABLE [HumanResources].[Employee] ALTER COLUMN DateToChange Date NOT NULL;
    SELECT BusinessEntityID,HireDate, CONVERT( char(10),[DateToChange],101) AS [Formatted Hire Date]
    FROM HumanResources.Employee
    WHERE [DateToChange] = '04/07/2003';
    Code:
    USE AdventureWorks2012;
    GO
    -- Holds output of the CURSOR
    DECLARE @EmployeeID INT
    DECLARE @HiredDate DATETIME
    DECLARE @HiredModified DATETIME
    DECLARE @ChangeDateTo DATETIME
    --Declare cursor
    -- SCROLL CURSOR ALLOWS "for extra options" to pul multiple records: i.e. PRIOR, ABSOLUTE ##, RELATIVE ##
    DECLARE TestCursor CURSOR SCROLL FOR
    -- SELECT statement of what records going to be used by CURSOR
    -- Assign the query to the cursor.
    SELECT /*HumanResources.Employee.BusinessEntityID, HumanResources.Employee.HireDate,*/ CONVERT( char(10),[DateToChange],101) AS [Formatted Hire Date]
    FROM HumanResources.Employee
    WHERE DateToChange = '01/01/1901'
    /*ORDER BY HireDate DESC*/ FOR UPDATE OF [DateToChange];
    -- Initiate CURSOR and load records
    OPEN TestCursor
    -- Get first row from query
    FETCH NEXT FROM TestCursor
    INTO @HiredModified
    -- Logic to tell the Cursor while "@@FETCH_STATUS" 0 the cursor has successfully fetched the next record.
    WHILE (@@FETCH_STATUS = 0 AND @@CURSOR_ROWS = -1)
    BEGIN
    FETCH NEXT FROM TestCursor
    IF (@HiredModified = '04/07/2003')/*05/18/2006*/
    -- Sets @HiredModifiedDate data to use for the change
    SELECT @ChangeDateTo = '01/01/1901'
    UPDATE HumanResources.Employee
    SET [DateToChange] = @ChangeDateTo --'01/01/1901'
    FROM HumanResources.Employee
    WHERE CURRENT OF TestCursor;
    END
    -- CLOSE CURSOR
    CLOSE TestCursor;
    -- Remove any references held by cursor
    DEALLOCATE TestCursor;
    GO
    This query runs successfully, but it does not produce the desired result of changing the dates 04/07/2003 to 01/01/1901.
    I would like the query to essentially run the initial select statement, then update and iterate through the returned results, replacing the necessary column in each row.
    I am also open to changes or a different design altogether.
    For this query I need:
    1. To narrow the initial set of information.
    2. To check whether the information returned, in particular a date, is before [i.e. the current month minus 12 months, or 12 months before the current month].
    3. To replace the dates with the needed date [haven't written this out yet, but it will need to be done].
    4. After all this is done, to update a column on each row if the 'date' is within 12 months of the date checked.
    NOTE: I am new to TSQL and have only been doing this for a few days, but I will understand or read up on what is explained if given enough information. Thank you in advance to anyone who may be able to help.

    The first thing you need to do is forget about cursors. Those are rarely needed. Instead you need to learn the basics of the tsql language and how to work with data in sets. For starters, your looping logic is incorrect. You open the cursor and immediately fetch the first row. You enter the loop, and the first thing in the loop does what? Fetches another row. That means you have "lost" the values from the first row fetched. You also do not test the success of that fetch but immediately try to use the fetched value. In addition, your cursor includes the condition "DateToChange = '01/01/1901'" - by extension you only select rows where HireDate is Jan 1 1901. So the value fetched into @HiredModified will never be anything different - it will always be Jan 1 1901. The IF logic inside your loop will always evaluate to FALSE.
    But forget all that. In words, tell us what you are trying to do. It seems that you intend to add a new column to a table - one that is not null (ultimately) and is set to a particular value based on some criteria. Since you intend the column to be not null, it is simpler to just add the column as not null with a default. Because you are adding the column, the assumption is that you need to set the appropriate value for EVERY row in the table, so the actual default value can be anything. Given the bogosity of the 1/1/1901 value, why not use this as your default, and then set the column based on the Hiredate afterwards. Simply follow the alter table statement with an update statement. I don't really understand what your logic or goal is, but perhaps that will come with a better description. In short:
    alter table xxx add DateToChange date default '19010101'
    update xxx set DateToChange = HireDate where [some unclear condition]
    Lastly, you should consider wrapping everything you do in a transaction so that you can recover from any errors. In a production system, you should consider making a backup immediately before you do anything - strongly consider it, and have a good reason not to do so if that is your choice (and have a recovery plan just in case).
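    For the concrete case in the question, the whole cursor collapses to one set-based statement; a sketch against the post's AdventureWorks2012 table:
    UPDATE HumanResources.Employee
    SET DateToChange = '19010101'
    WHERE DateToChange = '20030407';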

  • Update Rows with info from other Rows in Same Table.

    I'm trying to update rows with information from the same table. The table gets loaded with info from a report that runs, and there has to be a new entry every month, but I would like to carry over some of the info from last month. The statement below runs but updates all rows in the new table load; in my test cases I only made a few match, so only about 5 records should get updated. This is an example of what I'm trying to do. If I add this (C2.COL_INVC_ID = C1.COL_INVC_ID) to the last "Where" statement, I get an invalid identifier for "C2.COL_INVC_ID". So what am I doing wrong here? How can I update only the rows that were also in last month's run?
    Thanks in advance for any help!
    Update OpenIssues OI1
    Set(OI1.Num, OI1.Status, OI1.Code, OI1.LastModifiedDate) =
    (Select OI2.Num, OI2.Status, OI2.Code, OI2.LastModifiedDate
    From OpenIssues OI2
    Where OI2.num = OI1.num and OI2.TableLoadDate = TO_DATE('01/31/2012 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
    Where and OI1.TableLoadDate = TO_DATE('02/29/2012 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
    SQLMe

    Hi,
    Welcome to the forum!
    SQLMe wrote:
    I'm trying to update rows with information from the same table. The table gets loaded with info from a report that runs and it has to be a new entry every month but I would like to carry over some of the info from last month. This statement below runs but updates all rows in the new table load and in my test cases I only made a few match so only like 5 records should get updated. This is an example of what I'm trying to do. If I add this (C2.COL_INVC_ID = C1.COL_INVC_ID) to the last "Where" statement I get an invalid identifier for "C2.COL_INVC_ID".
    If the aliases c1 and c2 aren't defined anywhere, then you can't use them anywhere.
    The WHERE clause of the UPDATE statement can only reference the table being updated, oi1 in this case.
    So what am I doing wrong here??? How can I update only the rows that where also in last months run???
    Thanks in advance for any help!
    Update OpenIssues OI1
    Set(OI1.Num, OI1.Status, OI1.Code, OI1.LastModifiedDate) =
    (Select OI2.Num, OI2.Status, OI2.Code, OI2.LastModifiedDate
    From OpenIssues OI2
    Where OI2.num = OI1.num and OI2.TableLoadDate = TO_DATE('01/31/2012 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
    Where and OI1.TableLoadDate = TO_DATE('02/29/2012 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
    There's a syntax error in the last line. Either something got lost when you posted the code, or you just don't want the keyword AND. You certainly don't want AND immediately after WHERE.
    In general, if it's not obvious how to do an UPDATE, then UPDATE is the wrong tool: you want MERGE instead.
    Whenever you have a problem, please post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) from all tables involved.
    Also post the results you want from that data, and an explanation of how you get those results from that data, with specific examples.
    Simplify the problem as much as possible. Remove all tables and columns that play no role in this problem.
    If you're asking about a DML statement, such as UPDATE, the CREATE TABLE and INSERT statements should re-create the tables as they are before the DML, and the results will be the contents of the changed table(s) when everything is finished.
    Always say which version of Oracle you're using.
    See the forum FAQ {message:id=9360002}
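    For illustration, a MERGE version of the statement above (a sketch only; Num is the join key, so it is not, and need not be, in the SET list):
    MERGE INTO OpenIssues OI1
    USING (SELECT Num, Status, Code, LastModifiedDate
             FROM OpenIssues
            WHERE TableLoadDate = TO_DATE('01/31/2012', 'MM/DD/YYYY')) OI2
    ON (OI1.Num = OI2.Num
        AND OI1.TableLoadDate = TO_DATE('02/29/2012', 'MM/DD/YYYY'))
    WHEN MATCHED THEN UPDATE SET
        OI1.Status = OI2.Status,
        OI1.Code = OI2.Code,
        OI1.LastModifiedDate = OI2.LastModifiedDate;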

  • SQL Server 2012 Undetected Deadlock in a table with only one row

    We migrated our SQL 2000 Enterprise database to SQL 2012 Enterprise a few days ago.
    This is our main database, so most of the applications access it.
    The day after the migration, when users started to run tasks, database access started to experience a total failure.
    That is, all processes in the SQL 2k12 database were in lock with each other. This is a common case of deadlock, but the Database Engine was unable to detect it.
    After some research, we found that the applications were trying to access a very simple table with only one row. This table holds a number that is restarted every day and is used to number all the transactions made against the system. So, client applications start a new transaction, get the current number, increment it by one and commit the transaction.
    The only solution we found was to kill all user processes in SQL Server every time this situation occurs (which is no more than 5 minutes once all clients are accessing the database).
    No client application was changed in this migration, and this process has worked very well for the last 10 years.
    The problem is that SQL 2k12 is unable to handle this situation the way SQL 2k did.
    It seems to occur with other tables too, but as this is an "entry table" the problem shows up here first.
    I have searched the internet, and some suggest workarounds like using table hints to completely lock the table at the beginning of the transaction, but that can't be applied to the other tables.
    Has anyone heard of this being a problem with SQL 2k12? Are there any fixes to make SQL 2k12 behave as well as SQL 2k?

    First off re: "Unfortunatelly, this can't be used in production environment as exclusive table lock would serialize the accesses to tables and there will be other tables that will suffer with this problem."
    This is incorrect. 
    Using a table to generate sequence numbers like this is a bad idea exactly because the access must be serialized. Since you can't switch to a SEQUENCE object, which is the correct solution, the _entire goal_ of this exercise is to find a way to properly serialize access to this table. Using exclusive locking will not be necessary for all the tables; just for the single-row table used for generating sequence values with a cursor.
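    For reference, the SEQUENCE approach mentioned above would look like this (a sketch only; it requires a client change, and the daily reset would need a scheduled ALTER SEQUENCE ... RESTART WITH 1 job):
    CREATE SEQUENCE dbo.TransactionNum AS int START WITH 1 INCREMENT BY 1;
    -- each caller gets the next number atomically, with no shared row to lock:
    SELECT NEXT VALUE FOR dbo.TransactionNum;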
    I converted the sample program to VB.NET:
    Public Class Form1
        Private mbCancel As Boolean = False

        Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
            Dim soConn As ADODB.Connection
            Dim soRst As ADODB.Recordset
            Dim sdData As Date
            Dim slValue As Long
            Dim slDelay As Long

            'create database vbtest
            'go
            'CREATE TABLE [dbo].[ControlNumTest](
            '    [UltData] [datetime] NOT NULL,
            '    [UltNum] [int] NOT NULL,
            '    CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED (
            '        [UltData] Asc
            '    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
            ') ON [PRIMARY]

            mbCancel = False
            Do
                ' Configure the Connection object
                soConn = New ADODB.Connection
                With soConn
                    .ConnectionString = "Provider=SQLNCLI11;Initial Catalog=vbtest;Data Source=localhost;trusted_connection=yes"
                    .IsolationLevel = ADODB.IsolationLevelEnum.adXactCursorStability
                    .Mode = ADODB.ConnectModeEnum.adModeReadWrite
                    .CursorLocation = ADODB.CursorLocationEnum.adUseServer
                    .Open()
                End With

                ' Start a new transaction
                Call soConn.BeginTrans()

                ' Configure the Recordset object
                soRst = New ADODB.Recordset
                With soRst
                    .ActiveConnection = soConn
                    .CursorLocation = ADODB.CursorLocationEnum.adUseServer
                    .CursorType = ADODB.CursorTypeEnum.adOpenForwardOnly
                    .LockType = ADODB.LockTypeEnum.adLockPessimistic
                    .Open("SELECT * FROM dbo.ControlNumTest")
                End With

                With soRst
                    sdData = .Fields!UltData.Value  ' Read the last date (LOCK INFO 1: see comments below)
                    slValue = .Fields!UltNum.Value  ' Read the last number
                    If sdData <> Date.Now.Date Then ' Date has changed?
                        sdData = Date.Now.Date
                        slValue = 1                 ' Restart number
                    End If
                    .Fields!UltData.Value = sdData      ' Update data
                    .Fields!UltNum.Value = slValue + 1  ' Next number
                End With
                Call soRst.Update()
                Call soRst.Close()

                ' End the transaction
                Call soConn.CommitTrans()
                Call soConn.Close()
                soRst = Nothing
                soConn = Nothing

                txtUltNum.Text = slValue + 1 ' Display the last number
                Application.DoEvents()
                slDelay = Int(((Rnd * 250) + 100) / 100) * 100
                System.Threading.Thread.Sleep(slDelay)
            Loop While mbCancel = False

            If mbCancel = True Then
                Call MsgBox("The test was canceled")
            End If
            Exit Sub
        End Sub

        Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
            mbCancel = True
        End Sub
    End Class
    And created the table
    CREATE TABLE [dbo].[ControlNumTest](
        [UltData] [datetime] NOT NULL,
        [UltNum] [int] NOT NULL,
        CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED (
            [UltData] Asc
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY]
    go
    insert into ControlNumTest values (cast(getdate() as date), 1)
    Then ran 3 copies of the program and generated the deadlock:
    <deadlock>
    <victim-list>
    <victimProcess id="processf27b1498" />
    </victim-list>
    <process-list>
    <process id="processf27b1498" taskpriority="0" logused="0" waitresource="KEY: 35:72057594039042048 (a01df6b954ad)" waittime="1970" ownerId="3181" transactionname="implicit_transaction" lasttranstarted="2014-02-14T15:49:31.263" XDES="0xf04da3a8" lockMode="X" schedulerid="4" kpid="9700" status="suspended" spid="51" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-14T15:49:31.267" lastbatchcompleted="2014-02-14T15:49:31.267" lastattention="1900-01-01T00:00:00.267" clientapp="vbt" hostname="DBROWNE2" hostpid="21152" loginname="NORTHAMERICA\dbrowne" isolationlevel="read committed (2)" xactid="3181" currentdb="35" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
    <executionStack>
    <frame procname="adhoc" line="1" stmtstart="80" sqlhandle="0x020000008376181f3ad0ea908fe9d8593f2e3ced9882f5c90000000000000000000000000000000000000000">
    UPDATE [dbo].[ControlNumTest] SET [UltData]=@Param000004,[UltNum]=@Param000005 </frame>
    <frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
    unknown </frame>
    </executionStack>
    <inputbuf>
    (@Param000004 datetime,@Param000005 int)UPDATE [dbo].[ControlNumTest] SET [UltData]=@Param000004,[UltNum]=@Param000005 </inputbuf>
    </process>
    <process id="processf6ac9498" taskpriority="0" logused="10000" waitresource="KEY: 35:72057594039042048 (a01df6b954ad)" waittime="1971" schedulerid="5" kpid="30516" status="suspended" spid="55" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-14T15:49:31.267" lastbatchcompleted="2014-02-14T15:49:31.267" lastattention="1900-01-01T00:00:00.267" clientapp="vbt" hostname="DBROWNE2" hostpid="27852" loginname="NORTHAMERICA\dbrowne" isolationlevel="read committed (2)" xactid="3182" currentdb="35" lockTimeout="4294967295" clientoption1="671156256" clientoption2="128058">
    <executionStack>
    <frame procname="adhoc" line="1" sqlhandle="0x020000003c6309232ab0edbe0a7790a816a09c4c5ac6f43c0000000000000000000000000000000000000000">
    FETCH API_CURSOR0000000000000001 </frame>
    <frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
    unknown </frame>
    </executionStack>
    <inputbuf>
    FETCH API_CURSOR0000000000000001 </inputbuf>
    </process>
    </process-list>
    <resource-list>
    <keylock hobtid="72057594039042048" dbid="35" objectname="vbtest.dbo.ControlNumTest" indexname="PK_CorreioNumTeste" id="lockff6e6c80" mode="U" associatedObjectId="72057594039042048">
    <owner-list>
    <owner id="processf6ac9498" mode="S" />
    <owner id="processf6ac9498" mode="U" requestType="wait" />
    </owner-list>
    <waiter-list>
    <waiter id="processf27b1498" mode="X" requestType="convert" />
    </waiter-list>
    </keylock>
    <keylock hobtid="72057594039042048" dbid="35" objectname="vbtest.dbo.ControlNumTest" indexname="PK_CorreioNumTeste" id="lockff6e6c80" mode="U" associatedObjectId="72057594039042048">
    <owner-list>
    <owner id="processf27b1498" mode="U" />
    <owner id="processf27b1498" mode="U" />
    <owner id="processf27b1498" mode="X" requestType="convert" />
    </owner-list>
    <waiter-list>
    <waiter id="processf6ac9498" mode="U" requestType="wait" />
    </waiter-list>
    </keylock>
    </resource-list>
    </deadlock>
    It's the S lock that comes from the cursor read that's the villain here. U locks are compatible with S locks, so one session gets a U lock and another gets an S lock. But then the session with an S needs a U, and the session with a U needs an X. Deadlock.
    I'm not sure what kind of locks were taken by this cursor code on SQL 2000, but on SQL 2012, this code is absolutely broken and should deadlock.
    The right way to fix this code is to add (UPDLOCK,SERIALIZABLE) to the cursor
    .Open("SELECT * FROM dbo.ControlNumTest with (updlock,serializable)")
    So each session reads the table with a restrictive lock, and you don't mix S, U and X locks in this transaction.  This resolves the deadlock, but requires a code change.
    I tried several things that didn't require a code change, none of which resolved the deadlock:
    1) setting ALLOW_ROW_LOCKS=OFF ALLOW_PAGE_LOCKS=OFF
    2) SERIALIZABLE isolation level
    3) Switching OleDB providers from SQLOLEDB to SQLNCLI11
    Then I replaced the table with a view containing a lock hint:
    CREATE TABLE [dbo].[ControlNumTest_t](
        [UltData] [datetime] NOT NULL,
        [UltNum] [int] NOT NULL,
        CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED (
            [UltData] Asc
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY]
    go
    create view ControlNumTest as
    select * from ControlNumTest_t with (tablockx)
    Which, at least in my limited testing, resolved the deadlock without any client code change.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Jdev Database Adapter - polling updated rows

    Hello, I have a question regarding the polling strategy available in the database adapter.
    I set it up and it works great with newly inserted rows in the table.
    However, it doesn't capture the updated rows!
    For instance, i have the following table:
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    When I insert a new row, it is captured by comparing the last captured ID in the sequencing file.
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    3 -Cindy- 20 <--------- New row
    But when I UPDATE an already existing row, it doesn't load the changed row!
    ID - NAME - AGE
    1 -John- 26 <----------- Age changed, but polling doesn't recognize it!
    2 -Mary- 25
    3 -Cindy- 20
    Is there a way to get this to work? Should I set a special option? Thank you very much.
    I'm using JDeveloper 11g.

    Hi John, it depends on which type of polling strategy you are using to poll the new/updated records. You must have the necessary privileges to add the special field and create triggers, or your db team must have them.
    i. Physical delete polling strategy: cannot capture UPDATE operations on the table. This is because every record the adapter polls is deleted after the polling process. If polled records were retained, the adapter could tell an updated record from a new one; but since each polled record is deleted, any record the adapter encounters looks new to it, even if it was updated before the polling cycle. So physical delete cannot capture UPDATED records.
    ii. Logical delete polling strategy: updates a special field of the table after processing each row (the where clause is modified at runtime to filter out processed rows). The status column must be marked as processed after polling, and the read value must be provided while configuring the adapter. The modified where clause and post-read values are handled automatically by the db adapter.
    Usage:
    <operation name="receive">
      <jca:operation
        ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
        PollingStrategyName="LogicalDeletePollingStrategy"
        MarkReadField="STATUS"
        MarkReadValue="PROCESSED"/>
    </operation>
    This polling strategy captures updated records only if triggers are added. This is because when a record is polled, its status is updated to 'PROCESSED'; to be captured again after an update, its status has to be 'UNPROCESSED'. So a trigger has to be added that sets the status field back to 'UNPROCESSED' whenever the record is updated. Below is an example.
    Ex:
    CREATE OR REPLACE TRIGGER nameOftrigger_modified
    BEFORE UPDATE ON table_name
    REFERENCING NEW AS modifiedRow
    FOR EACH ROW
    BEGIN
      :modifiedRow.STATUS := 'UNPROCESSED';
    END;
    In this example, STATUS is the special field (of the polling table). When the record is updated, the trigger fires and sets the STATUS field to 'UNPROCESSED'. So when the table is polled, since this record's status is unprocessed, the record will be captured during polling.
    Other polling strategies, like the "Sequencing Table Last Updated" and "Sequencing Table Last-Read Id" polling strategies, can also be used to capture updated records. In these strategies you need to add triggers like the one above as well, and they also need an extra helper table to poll.
    Logical delete polling strategy is good enough to poll the updated records.
    Hope this helps.
    Thank you.

  • Execute only one Row

    I have a view object that is made up of 4 entity objects; only one is updateable, and the rest let me show the names for the metadata.
    When a user edits the FK in the update screen of the updateable entity, I lose the labels for the metadata; when I do an execute() they are shown properly again.
    Example:
    Updateable Entity Initial Values
    Code Value = 2 FK shows Fire Department
    The user changes the value to 3; now it should show Police Department. The value is 3 in the updateable entity, but there is nothing in the other entity. When I execute(), things come back....
    Question:
    My understanding of execute() is that it refreshes the view object by redoing the whole query. Is it possible to execute() for just the updated row? Or is there a better way to do this? It seems like a waste to re-query the data every time.
    Cory

    Here is the real issue: when I test this on the Employee/Dept example, generating the business objects, it works fine.
    In our case we have generated all our business components in a central location which everyone uses. When I link entities in one view using the associations generated in that central location, I get the issue described above.
    When I create the associations in my own project and link the entities, things seem to work fine.
    Why does this happen?
    Cory

  • ExecuteBatch(): number of successfully updated rows

    Hello everybody:
    Here is a simple but often-repeated question in Java forums:
    Requirement:
    1.To read a flat file that has many rows of data.
    2.Parse the data and update the database accordingly.
    3.Find the number of successfully updated rows.
    Approach:
    After reading the file and parsing its data,
    - use PreparedStatement
    - use executeBatch()
    I have seen executeBatch() described as inadvisable because its implementation is inherently driver-specific. executeBatch() returns an array of update counts.
    Now, can anyone tell me the best way to trace the number of successfully (and unsuccessfully) updated rows using this count?
    Is there any other way to achieve the same without using executeBatch()?
    Can anyone share a snippet of code for this specific functionality?
    [The need is to log the number of unsuccessful attempts along with their corresponding rows of data.]
    Thanks & regards,
    Venkat Kosigi

    executeBatch submits a batch of commands to the database for execution and if all commands execute successfully, returns an array of update counts. The int elements of the array that is returned are ordered to correspond to the commands in the batch, which are ordered according to the order in which they were added to the batch. The elements in the array returned by the method executeBatch may be one of the following:
    -- A number greater than or equal to zero indicates that the command was processed successfully and is an update count giving the number of rows in the database that were affected by the command's execution
    -- A value of -2 indicates that the command was processed successfully but that the number of rows affected is unknown
    If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch. However, the driver's behavior must be consistent with a particular DBMS, either always continuing to process commands or never continuing to process commands.
    If the driver continues processing after a failure, the array returned by the method BatchUpdateException.getUpdateCounts will contain as many elements as there are commands in the batch, and at least one of the elements will be the following:
    -- A value of -3 indicates that the command failed to execute successfully and occurs only if a driver continues to process commands after a command fails.
    Return values were modified in the Java 2 SDK, Standard Edition, version 1.3 to accommodate the option of continuing to process commands in a batch update after a BatchUpdateException object has been thrown.
    executeBatch throws a BatchUpdateException (a subclass of SQLException) if one of the commands sent to the database fails to execute properly or attempts to return a result set. The BatchUpdateException getUpdateCounts() method lets you identify the element that caused the failure by its -3 value.
    -- So, if you have a successful result, look at the array returned by executeBatch: ( #values >= 0 ) + ( #values == -2 ) = successes.
    -- And if the result is not successful, catch the BatchUpdateException, take the array returned by getUpdateCounts(), and look for the positions where the value is -3. You can take the data at those positions in the batch and log it.
    -- Another way to bulk-load a database is the bcp command (it's not Java; bcp is an independent command) that lets you do bulk inserts from a file and specify an error file; bcp will give you back a file with the lines that were not inserted.
    I hope this helps. ;)
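    To make that counting concrete, a minimal JDBC sketch (the table, column and connection details are placeholders, not from this thread):
    import java.sql.*;
    import java.util.List;

    class BatchTally {
        // Runs the batch and returns the number of successful commands;
        // failed commands are logged with their source row.
        static int runBatch(Connection conn, List<String> values) throws SQLException {
            int ok = 0;
            try (PreparedStatement ps =
                     conn.prepareStatement("UPDATE t SET col = ? WHERE col IS NOT NULL")) {
                for (String v : values) {
                    ps.setString(1, v);
                    ps.addBatch();
                }
                for (int c : ps.executeBatch()) {
                    // >= 0: rows affected; SUCCESS_NO_INFO (-2): succeeded, count unknown
                    if (c >= 0 || c == Statement.SUCCESS_NO_INFO) ok++;
                }
            } catch (BatchUpdateException e) {
                int[] counts = e.getUpdateCounts();
                for (int i = 0; i < counts.length; i++) {
                    if (counts[i] == Statement.EXECUTE_FAILED) { // -3
                        System.err.println("batch row " + i + " failed: " + values.get(i));
                    } else if (counts[i] >= 0 || counts[i] == Statement.SUCCESS_NO_INFO) {
                        ok++;
                    }
                }
            }
            return ok;
        }
    }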
