MRU process: Only touch updated rows

I have an MRU process on an updatable SQL query region. It works fine, but I am not sure whether it updates all the rows or only the ones where the user made a change.
Is there a way to determine and/or control this behaviour?
For example, if the region shows 10 rows and the user changes one row and hits Submit, does the MRU process update all 10 rows with the same values, or only the changed row?
Thanks

Yes, I suppose you could, but I would prefer to do it closer to the data. Triggers are perfect for this, and they would also cover situations where data is updated outside of HTML DB. I follow the Tom Kyte (asktom.oracle.com) school of database design, which dictates that you keep data integrity controls as close to the data as possible. If you can't do it in a check constraint, do it in a trigger. If you can't do it in a trigger, do it in your PL/SQL API.
A lot of posts here can be solved with standard database constructs such as triggers. People ask about getting around them by doing things like populating PKs with HTML DB computations. What happens when you want to update the data outside of HTML DB?
Do it in a trigger if you can.
Tyler
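For illustration only, here is a minimal sketch of the kind of thing Tyler describes, against a hypothetical ORDERS table: a declarative check constraint where the rule allows it, and a trigger for logic a constraint cannot express. Either way the rule holds no matter which tool touches the data.

ALTER TABLE orders
  ADD CONSTRAINT orders_qty_positive CHECK (quantity > 0);

CREATE OR REPLACE TRIGGER orders_biu
BEFORE INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
  -- Procedural rule: a ship date may not precede the order date.
  IF :NEW.ship_date IS NOT NULL AND :NEW.ship_date < :NEW.order_date THEN
    RAISE_APPLICATION_ERROR(-20001, 'Ship date cannot precede order date.');
  END IF;
END;
/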

Similar Messages

  • How to get only the changed rows in h:dataTable

    Hi,
    I am very new to JSF technology.
    For our application, I need to process only the changed rows of the JSF dataTable in the managed bean.
    Users may also enter new rows; we have to consider this as well.
    Please post your suggestions as soon as possible; this is urgent. It would be best if someone could post some code snippets.
    Thanks
    -S

    As klejs said, set the valueChangeListener attribute on each of the input fields. Have a method in your backing bean which would:
    1) get the row index from the component id
    2) get the object at that index from the ArrayList
    3) check if the old value is different from the new value. If yes, set a boolean flag in that object to true, indicating that the row has been modified.
    Sample code below. Please feel free to add checks as you need.
         public void updateDirtyFlag(ValueChangeEvent valueChangeEvent){
             if(valueChangeEvent.getOldValue()==null && valueChangeEvent.getNewValue()==null){
                 return;
             }
             if(valueChangeEvent.getComponent().getId()==null){
                 return;
             }
             // Extract the row index from the component id via the getId helper
             int idVal=getId(valueChangeEvent.getComponent().getClientId(FacesContext.getCurrentInstance()));
             ArrayList dataList=getData(); // data from session or request scope
             if((valueChangeEvent.getOldValue()==null && valueChangeEvent.getNewValue()!=null)
                     || (valueChangeEvent.getOldValue()!=null && valueChangeEvent.getNewValue()==null)){
                 if(dataList!=null && idVal<dataList.size() && idVal>-1){
                     YourBaseValueObject pojo=(YourBaseValueObject)dataList.get(idVal);
                     pojo.setUpdateFlag(true);
                 }
             }
             else if(valueChangeEvent.getOldValue()!=null && valueChangeEvent.getNewValue()!=null
                     && !valueChangeEvent.getOldValue().equals(valueChangeEvent.getNewValue())){
                 if(dataList!=null && idVal<dataList.size() && idVal>-1){
                     YourBaseValueObject pojo=(YourBaseValueObject)dataList.get(idVal);
                     pojo.setUpdateFlag(true);
                 }
             }
         }
     HTH.
    Karthik

  • MRU - trying to restrict UPDATE to only certain users - everyone INSERT

    Application Express 2.2.1.00.04
    I’m looking for a solution to a problem. I have a region that is a multi-row SQL Query (Updateable). Everything works as expected. Now, I’d like to restrict the ability to update ANY rows based upon an authorization scheme.
    I want everyone accessing my application to be able to insert rows (WORKS FINE).
    I only want certain authorized people to be able to update existing rows (CAN’T GET THIS TO WORK).
    Anyone have any ideas?
    Thanks, Mike

    Hi Mike,
    I think you have two possibilities:
    1) Create a page validation with code similar to this:
    DECLARE
        vHasUpdatePriv BOOLEAN := apex_util.public_check_authorization('MY_AUTHORIZATION_SCHEME');
        vExists NUMBER;
    BEGIN
        FOR ii IN 1 .. Apex_Application.g_f01.COUNT
        LOOP
            IF [check if record has changed by comparing against MD5 checksum]
              AND Apex_Application.g_f01(ii) IS NOT NULL -- I assume f01 stores the PK
              AND NOT vHasUpdatePriv
            THEN
                RAISE_APPLICATION_ERROR(-20123, 'You try to update, but you don''t have update privileges!');
            END IF;
        END LOOP;
    END;
    Or, if you don't want to worry about how to check whether a record has changed, take a look at http://inside-apex.blogspot.com/2006/12/plug-play-tabular-form-handling.html
    With that library the code would look like:
    DECLARE
        vHasUpdatePriv BOOLEAN := apex_util.public_check_authorization('MY_AUTHORIZATION_SCHEME');
        vExists NUMBER;
    BEGIN
        FOR ii IN 1 .. ApexLib_TabForm.getRowCount
        LOOP
            IF    ApexLib_TabForm.hasRowChanged(ii)
              AND ApexLib_TabForm.NV('YOUR_PK_COLUMN') IS NOT NULL
              AND NOT vHasUpdatePriv
            THEN
                ApexLib_Error.raiseError
                  ( pError => 'You try to update, but you don''t have update privileges!' );
            END IF;
        END LOOP;
    END;
    2) You could write your own MRU process, which does the same as above, but then you additionally have to take care of lost-update detection, etc.
    Hope that gives you a direction
    Patrick

  • Processing Static (via Automatic Row Processing) & Dynamic fields

    Hi,
    I have a page that has 2 sections. Section S is statically driven, which I'd like to process via an Automatic Row Processing (DML) process. Section D is for dynamic fields, which I process via a PL/SQL script.
    I need to process Section D (dynamic) first.
    Now there are 2 things that I'm noticing when I try this. Can someone please confirm?
    - After my process for Section D runs, it seems to issue a commit. I know this because I have an error in Section S, and yet the values from Section D are committed to the DB. I need to make sure a commit only occurs after all page processes have completed error free.
    - My Automatic Row Processing (DML) process for Section S doesn't seem to work at all. It can't seem to read the values. I know this because I have several columns which are NOT NULL and the corresponding error messages are being raised. The Automatic Row Fetch for Section S does work properly.
    For the time being the workaround is writing a process for the entire page which covers both Section S and Section D. The thing is, I thought HTML DB would be able to help me out a lot with Section S since it has static fields etc.

    Martin - I would try to debug these two processes separately. If the Auto DML process isn't firing, perhaps the button used to submit the page isn't setting the request to one of the standard values recognized by the Auto DML package ('INSERT','CREATE','CREATE_AGAIN','CREATEAGAIN' for inserts and 'SAVE','APPLY CHANGES','UPDATE','UPDATE ROW','CHANGE','APPLY' or like 'APPLY%CHANGES%' for update).
    A commit happens whenever session state is changed, so if your process saves an item value, that would do it. If you think that is not the cause of the commit, let me know the details of the process and I'll take a closer look. There is no way to prevent the commit when session state is updated.
    Scott

  • How to apply the constraint ONLY to new rows

    Hi, Gurus:
       I have one question as follows:
       We need to migrate a legacy system to a new production server. I am required to add two columns to every table in order to record, via triggers, who most recently updated each row, and I should apply a NOT NULL constraint to those columns. However, the legacy system already has data in every table, and the old data has no values for the 2 new columns. If we apply the constraint, all of the existing rows will raise exceptions. I wonder if there is a way to apply the constraint ONLY to new rows that arrive in the future.
    Thanks.
    Sam

       We need to migrate a legacy system to a new production server. I am required to add two columns to every table in order to record, via triggers, who most recently updated each row, and I should apply a NOT NULL constraint to those columns.
    The best suggestion I can give you is that you make sure management documents the name of the person who came up with that harebrained requirement, so they can be suitably punished in the future for the tremendous waste of human and database resources they caused, for which they got virtually NOTHING in return.
    I have seen many systems over the past 25+years that have added columns such as those: CREATED_DATE, CREATED_BY, MODIFIED_DATE, MODIFIED_BY.
    I have yet to see even ONE system where that information is actually useful for any real purpose. Many systems have application/schema users and those users can modify the data. Also, any DBA can modify the data and many of them can connect as the schema owner to do that.
    Many tables also get updated by other applications or bulk load processes and those processes use generic connections that can NOT be tied back to any particular system.
    The net result is that those columns will be populated by user names that are utterly useless for any auditing purposes.
    If a user is allowed to modify a table they are allowed to modify a table. If you want to track that you should implement a proper security strategy using Oracle's AUDIT functionality.
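    For reference, a minimal sketch of the kind of standard auditing being recommended here, against a hypothetical HR.EMPLOYEES table (traditional auditing, so it assumes the AUDIT_TRAIL parameter is enabled):
    AUDIT INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;
    -- The audit trail can then be queried, e.g.:
    SELECT username, action_name, timestamp
    FROM   dba_audit_trail
    WHERE  obj_name = 'EMPLOYEES';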
    Cluttering up ALL, or even many, of your tables with such columns is a TERRIBLE idea. Worse still is adding triggers that serve no purpose other than capturing that useless information; because they are PL/SQL, the performance impact they cause only aggravates the total cost.
    It is certainly appropriate to be concerned about the security and auditability of your important data. But adding columns and triggers such as those proposed is NOT the proper solution to achieve that security.
    Before your organization makes such an idiotic decision you should propose that the same steps be taken before adding that functionality that you should take before the addition of ANY MAJOR structural or application changes:
    1. document the actual requirement
    2. document and justify the business reasons for that requirement
    3. perform testing that shows the impact of that requirement on the production system
    4. determine the resource cost (people, storage, etc) of implementing that requirement
    5. demonstrate how that information will actually be used EFFECTIVELY for some business purpose
    As regards items #1 and #2 above the requirement should be stated in terms of the PROBLEM to be solved, not some preconceived notion of the solution that should be used.
    Your org should also talk to other orgs or other depts in your same org that have used your proposed solution and find out how useful it has been for them. If you do this research you will likely find that it hasn't met their needs at all.
    And in your own org there are likely some applications with tables that already have such columns. Has anyone there EVER used those columns and found them invaluable for identifying and resolving any actual problem?
    If you can't use them and their data for some important process why add them to begin with?
    IMHO it is a total waste of time and resources to add such columns to ALL of your tables. Any such approach to auditing or security should, at most, be limited to those tables with key data that needs to be protected and only then when you cannot implement the proper 'best practices' auditing.
    A migration is difficult enough without adding useless additional requirements like those. You have FAR more important things you can do with the resources you have available:
    1. Capture ALL DDL for the existing system into a version control system
    2. Train your developers on using the version control system
    3. Determine the proper configuration of the new server and system. It is almost a CERTAINTY that settings will get changed and performance will suffer even though you don't think you have changed anything at all.
    4. Validate that the data has been migrated successfully. That can involve extensive querying and comparison to make sure data has not been altered during the migration. The process of validating even a SINGLE TABLE is more difficult if the table structures are not the same. And they won't be if you add two columns to every table; every single query you do will have to specify the columns by name in order to EXCLUDE your two new columns.
    5. Validate the performance of the app on the new system. There WILL BE problems where things don't work like they used to. You need to find those problems and fix them.
    6. Capture the proper statistics after the data has been migrated and all of the indexes have been rebuilt.
    7. Capture the new execution plans to use as a baseline for when things go wrong in the future.
    If it is worth doing it is worth doing right.

  • Problem with multiple MRU processes on a page

    I have a page with 3 report regions. The first 2 are based on the same query with 2 different filters, while the 3rd is based on a view on the same table the first 2 are based on.
    I have created an MRU process for the first 2 regions that is active on both reports, and it works perfectly fine. As soon as I make the 3rd report an updatable SQL report, everything breaks. The 2 updatable queries now throw this error:
    Error in mru internal routine: ORA-20001: Error in MRU: row= 1, ORA-20001: ORA-20001: Current version of data in database has changed since user initiated update process. current checksum = "A13886FC420B931B80182B6DE0409BB5", item checksum = "74D6B272FE94C895054E4FB16E7B7FAB"., update "EIMS"."PLATE" set "PLATE_ID" = :b1, "PLATE_BARCODE" = :b2, "CREATED_BY" = :b3
    Error
    OK
    as well as the 3rd one
    Error in mru internal routine: ORA-20001: Error in MRU: row= 1, ORA-20001: ORA-20001: Current version of data in database has changed since user initiated update process. current checksum = "A13886FC420B931B80182B6DE0409BB5", item checksum = "74D6B272FE94C895054E4FB16E7B7FAB"., update "EIMS"."TAQMAN_PLATE_INFO_V" set "PLATE_ID" = :b1, "PLATE_BARCODE" = :b2, "CREATED_BY" = :b3
    Error
    OK
    the 2 queries look like
    select
    htmldb_item.checkbox(10, p.plate_id) sel,
    htmldb_item.checkbox(5, p.plate_id) parent,
    p.plate_id plateID,
    p.PLATE_ID,
         p.PLATE_BARCODE,
         p.PLATE_TYPE ,
         p.CREATED_BY ,
         p.CREATED_ON ,
         p.STATUS,
    p.UPLOADED_ON,
         decode(p.FILENAME, null, 'N', 'Y') loaded
    from plate p
    where
    plate_type not in ( 'zzz', 'yyy')
    and
    select p.PLATE_ID,
    p.plate_id plateid,
         p.PLATE_BARCODE,
         p.PLATE_TYPE ,
         p.CREATED_BY ,
         p.CREATED_ON ,
         p.UPLOADED_ON,
         nvl(p.FILENAME, 'Not Loaded') filename,
         p.NAME,
    htmldb_item.checkbox(11, p.plate_id) sel,
    htmldb_item.checkbox(7, p.plate_id) parent,
    htmldb_item.checkbox(6, p.plate_id) child,
    p.STATUS
    from plate p
    where
    plate_type = 'yyy'
    the view for the 3rd region looks similar to
    create or replace view zzz_plate_info_v as
    select '***' as select_taqman,
    htmldb_item.checkbox(23, p.plate_id) sel,
    htmldb_item.checkbox(8, p.plate_id) child,
    p.PLATE_ID,
    p.plate_ID "Plate Id",
         p.PLATE_BARCODE,
         p.PLATE_TYPE ,
         p.CREATED_BY ,
         p.CREATED_ON ,
         p.STATUS,
         p.UPLOADED_ON,
         decode(p.FILENAME, null, 'N', 'Y') loaded,
    pr.run_id assay_run
    from plate p, plate_run pr
    where p.plate_type = 'zzz'
    and p.plate_id = pr.plate_id (+)
    and the query is
    select select_taqman,
    PLATE_ID,
    plate_ID "Plate Id",
    plate_barcode,
    sel,
    status,
    child,
    plate_type,
    created_by,
    created_on,
    uploaded_on,
    loaded,
    assay_run
    from zzz_plate_info_v
    This is very puzzling and frustrating. Any hints and suggestions are appreciated.

    Thanks,
    This answers my question. It is interesting to note that I could split the update across 2 queries... but it fails if one adds a 3rd region. Now it would be nice if I could mimic the 3rd MRU using my own PL/SQL code. Is there any example that you know of that can be used?
    --Nabil
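    By way of illustration only, a hand-written page process for the third region might look something like the sketch below. The f01/f02 array numbers and the updated column are hypothetical (they depend on how the updatable columns are generated by the region), and unlike the built-in MRU this does no lost-update/checksum detection:
    BEGIN
      -- Loop over the submitted tabular-form arrays; f01 is assumed to hold PLATE_ID
      -- and f02 the editable PLATE_BARCODE values.
      FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
        UPDATE plate
        SET    plate_barcode = apex_application.g_f02(i)
        WHERE  plate_id      = apex_application.g_f01(i);
      END LOOP;
    END;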

  • Update rows in portions

    Hello,
    I have a batch that creates output files. One file for each id. In a first step it only asigns a file_id to each record. The generation of the file is a second step after approval by a controller and not part of this problem.
    DROP TABLE test_update;
    CREATE TABLE test_update (
         processed VARCHAR2(1)
        ,amount    NUMBER
        ,file_id   VARCHAR2(3)
         ,id        NUMBER
    );
    INSERT INTO test_update (processed,amount,id) VALUES ('n',10,8);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',20,8);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',30,8);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',40,8);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',10,9);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',20,9);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',30,9);
    INSERT INTO test_update (processed,amount,id) VALUES ('n',40,7);
    COMMIT;
    Expected result:
    id 7 file_id 1
    id 8 file_id 2
    id 9 file_id 3
    The current implementation opens a cursor that selects distinct ids with unprocessed rows, and loops.
    For each id it calls a procedure that generates a new file_id and loops in another cursor through each record, inserts the file_id, adds up the amount and counts the rows.
    I would like to do the second part in a single update and use
    DECLARE
        x           NUMBER;
        y           NUMBER;
        new_file_id NUMBER;
    BEGIN
        FOR r IN (
                SELECT  DISTINCT id
                FROM    test_update
                WHERE   processed = 'n'
        )
        LOOP
            SELECT  NVL(MAX(file_id),0) + 1
            INTO    new_file_id
            FROM    test_update;
            UPDATE  test_update
            SET     processed = 'y'
                   ,file_id = new_file_id
            WHERE   processed = 'n'
            AND     id = r.id
            --AND     ROWNUM < 3
            RETURNING SUM(amount)
                    ,COUNT(*)
            INTO x,y;
            -- The following information will be stored in another table for approval
            dbms_output.put_line('id '||r.id||'#new_file_id '||new_file_id||'#SUM '||TO_CHAR(x)||'# COUNT '||TO_CHAR(y));
        END LOOP;
    END;
    id 7#new_file_id 1#SUM 40# COUNT 1
    id 8#new_file_id 2#SUM 100# COUNT 4
    id 9#new_file_id 3#SUM 60# COUNT 3
    It works, but now there is an additional requirement: there is a limit on the number of lines in each file. Assume the limit is 3; then I need two files for id 8.
    id 7#new_file_id 1#SUM 40# COUNT 1
    id 8#new_file_id 2#SUM 60# COUNT 3
    id 8#new_file_id 3#SUM 40# COUNT 1
    id 9#new_file_id 4#SUM 60# COUNT 3
    I could put the content of the loop in a PROCEDURE, restrict the update with ROWNUM, and call it again and again until COUNT is less than the limit. But finding the rows to be updated is expensive, because the actual table has millions of rows, the file limit is 50,000 rows, and there can be up to 400,000 rows for an id. That would mean calling the procedure up to eight times, so I would like to find a way to do it in one call.
    Any idea?
    Regards
    Marcus

    How about :
    SQL> var last_file_id number
    SQL> exec :last_file_id := 0
    PL/SQL procedure successfully completed.
    SQL> var row_per_file number
    SQL> exec :row_per_file := 3
    PL/SQL procedure successfully completed.
    SQL> merge into test_update t
      2  using (
      3    select rid, dense_rank() over(order by id, bucket_id) as file_id
      4    from (
      5      select rowid as rid
      6           , id
      7           , trunc(
      8               (row_number() over(partition by id order by null) - 1) / :row_per_file
      9             ) as bucket_id
    10      from test_update
    11      where processed = 'n'
    12    )
    13  ) v
    14  on ( t.rowid = v.rid )
    15  when matched then update
    16   set t.file_id = :last_file_id + v.file_id
    17     , t.processed = 'y'
    18  ;
    8 rows merged.
    SQL> select * from test_update order by id, file_id;
    P     AMOUNT FIL         ID
    y         40 1            7
    y         10 2            8
    y         20 2            8
    y         30 2            8
    y         40 3            8
    y         20 4            9
    y         30 4            9
    y         10 4            9
    8 rows selected.
    SQL> select id, file_id, sum(amount), count(*)
      2  from test_update
      3  where file_id > :last_file_id
      4  group by id, file_id
      5  order by id
      6  ;
            ID FIL SUM(AMOUNT)   COUNT(*)
             7 1            40          1
             8 2            60          3
             8 3            40          1
             9 4            60          3

  • Jdev Database Adapter - polling updated rows

    Hello, I have a question reguarding the polling strategy available in the database adapter.
    I set it up and it works great with new inserted rows in the table.
    However, it doesn't capture the updated rows!
    For instance, i have the following table:
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    When I insert a new row, it is captured by comparing the last captured ID in the sequencing file.
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    3 -Cindy- 20 <--------- New row
    But when I UPDATE an already existing row, it doesn't load the changed row!
    ID - NAME - AGE
    1 -John- 26 <----------- Age changed, but polling doesn't recognize it!
    2 -Mary- 25
    3 -Cindy- 20
    Is there a way to get this to work? Should I set a special option? Thank you very much.
    I'm using JDeveloper 11g.

    Hi John, it depends on which type of polling strategy you are using to poll the new/updated records. You (or your DB team) must have the necessary privileges to add the special field and create triggers.
    i. Physical delete polling strategy: this cannot capture UPDATE operations on the table. When the adapter listens to the table, every record that is polled is deleted after the polling process. If a polled record were left in place, the adapter could tell on a later cycle that it is an updated one; but here, once a record is polled it is deleted, so whenever the adapter encounters a record it treats it as a new record (even if the record was updated before the polling cycle). So physical delete cannot capture UPDATED records.
    ii. Logical delete polling strategy: the logical delete polling strategy updates a special field of the table after processing each row (the WHERE clause is rewritten at runtime to filter out processed rows). The status column is marked as processed after polling, and the read value must be provided while configuring the adapter. The modified WHERE clause and post-read values are handled automatically by the DB adapter.
    Usage:
    <operation name="receive">
    <jca:operation
    ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    PollingStrategyName="LogicalDeletePollingStrategy"
    MarkReadField="STATUS"
    MarkReadValue="PROCESSED"
    This polling strategy captures updated records only if triggers are added. This is because when a record is polled, its status is updated to 'PROCESSED'. For the record to be captured again after an update, its status has to be 'UNPROCESSED', so a trigger has to be added to set the status field back to 'UNPROCESSED' whenever the record is updated. Below is an example.
    Ex:
    CREATE OR REPLACE TRIGGER nameOftrigger_modified
    BEFORE UPDATE ON table_name
    REFERENCING NEW AS modifiedRow
    FOR EACH ROW
    BEGIN
    :modifiedRow.STATUS := 'UNPROCESSED';
    END;
    In this example, STATUS is the special field (of the polling table). When the record is updated, the trigger fires and sets the STATUS field to 'UNPROCESSED'. So when the table is polled, because this record's status is unprocessed, the record will be captured during polling.
    Other polling strategies, like the "Sequencing Table Last Updated" and "Sequencing Table Last-Read Id" polling strategies, can also be used to capture updated records. In those strategies you likewise need to add triggers like the one above, and they also need an extra helper table to poll.
    Logical delete polling strategy is good enough to poll the updated records.
    Hope this helps.
    Thank you.

  • Cube processing approach when processing only the current partition?

    Could you validate my SSAS processing strategy for the given scenario?
    Background about the cube and data:
    A Sales cube has partitions for each year for the "Sales" measure group, and it is associated with the dimensions "Product" and "Sales Rep". Both are type 1 dimensions.
    From time to time users will re-classify products in the product hierarchy (Product -> Sub Category -> Category); similarly they re-classify Sales Reps (Sales Rep -> District Manager -> Regional Manager).
    Processing strategy:
    1. Process (full process) only the current partition every day.
    2. Perform "Process Update" for all the dimensions (going with Process Update, as a full dimension process would reprocess all the old partitions of the measure groups).
    Questions:
    1. What are the disadvantages of processing only the current partition?
    2. Will the old partitions' data roll up as per the hierarchy changes when I go for the dimension "Process Update" option?
    Thanks,
    Liyasker Samraj K

    1. What are the disadvantages of processing only the current partition?
    2. Will the old partitions' data roll up as per the hierarchy changes when I go for the dimension "Process Update" option?
    The strategy looks good. Partitioning is the way to go to reduce processing time. However, keep in mind that partitions are only supported in the Enterprise edition.
    1. Other than not being able to refresh older data in the other partitions, I don't see a downside to processing only the most recent partition.
    2. Yes. A Process Update should touch all the dependent partitions.
    SS

  • ExecuteBatch(): number of successfully updated rows

    Hello everybody:
    Here is a simple but often a repeated question in java forums:
    Requirement:
    1.To read a flat file that has many rows of data.
    2.Parse the data and update the database accordingly.
    3.Find the number of successfully updated rows.
    Approach:
    After reading the file and parsing its data,
    - use PreparedStatement
    - use executeBatch()
    I have read that it is inadvisable to use executeBatch(), as its implementation is
    inherently driver-specific. executeBatch() returns an array of update counts.
    Now, can anyone tell me the best way to determine the number of successfully
    (and unsuccessfully) updated rows using this count?
    Is there any other way to achieve the same by not using executeBatch()?
    Can any one share a snippet of code to achieve this specific functionality?
    [Need is to log the number of unsuccessful attempts along with their
    corresponding rows of data].
    Thanks & regards,
    Venkat Kosigi

    executeBatch submits a batch of commands to the database for execution and if all commands execute successfully, returns an array of update counts. The int elements of the array that is returned are ordered to correspond to the commands in the batch, which are ordered according to the order in which they were added to the batch. The elements in the array returned by the method executeBatch may be one of the following:
    -- A number greater than or equal to zero indicates that the command was processed successfully and is an update count giving the number of rows in the database that were affected by the command's execution
    -- A value of -2 indicates that the command was processed successfully but that the number of rows affected is unknown
    If one of the commands in a batch update fails to execute properly, this method throws a BatchUpdateException, and a JDBC driver may or may not continue to process the remaining commands in the batch. However, the driver's behavior must be consistent with a particular DBMS, either always continuing to process commands or never continuing to process commands.
    If the driver continues processing after a failure, the array returned by the method BatchUpdateException.getUpdateCounts will contain as many elements as there are commands in the batch, and at least one of the elements will be the following:
    -- A value of -3 indicates that the command failed to execute successfully and occurs only if a driver continues to process commands after a command fails.
    The return values were modified in the Java 2 SDK, Standard Edition, version 1.3 to accommodate the option of continuing to process commands in a batch update after a BatchUpdateException object has been thrown.
    executeBatch throws a BatchUpdateException (a subclass of SQLException) if one of the commands sent to the database fails to execute properly or attempts to return a result set. The BatchUpdateException.getUpdateCounts() method lets you identify the elements that caused the failure by their -3 value.
    -- So, for a successful result, look at the array returned by executeBatch: ( #values >= 0 ) + ( #values == -2 ) = successes.
    -- And for an unsuccessful result, catch the BatchUpdateException, take the array returned by its getUpdateCounts() method, and look for the positions where the array value is -3. You can take the data at those positions in the batch and log it.
    -- Another way to do a bulk load into the database is to use the bcp command (it's not Java; bcp is an independent utility), which lets you do bulk inserts from a file and specify an error file; bcp will give you as a result a file containing the lines that were not inserted.
    I hope this has helped you. ;)

  • My new computer freezes during my iPod Touch update (3.1.3)

    I was installing the new update for my 3rd generation iPod touch (update 3.1.3) and my computer, which runs Windows 7, froze. It is the new i7 processor machine, and the only way to unfreeze it was to press the power button. Afterwards, I reset my iPod by holding its 2 buttons for 10 seconds. When I plugged my iPod into iTunes, it said that the iPod needed a restore and would then install the update... but iTunes said that this was impossible. I think iTunes reported error 1604. Now my iPod screen tells me to plug the iPod into iTunes, but the problem persists.
    Thank you to help me with my problem.
    Philippe Choquette
    Canada

    Welcome to the discussions,
    do you have any firewall or anti virus software active during the restore process? If yes, disable it and try again to restore. Also make sure that Apple Mobile Device Service is installed and active: http://support.apple.com/kb/TS1567

  • How can I update rows in a table based on a match from a select query

    Hello
    How can I update rows in a table based on a match from a select query from two other tables, with an update using SQL*Plus?
    Thanks Glenn
    table1
    attribute1 varchar2 (10)
    attribute2 varchar2 (10)
    processed varchar2 (10)
    table2
    attribute1 varchar2 (10)
    table3
    attribute2 varchar2 (10)
    An example:
    set table1.processed = "Y"
    where (table1.attribute1 = table2.attribute1)
    and (table1.attribute2 = table3.attribute2)
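    For what it's worth, ignoring the NULL handling discussed in the reply below, a straightforward sketch of such an update (using the poster's table and column names, and not worrying about rows already marked 'Y') would be:
    UPDATE table1 t1
    SET    t1.processed = 'Y'
    WHERE  EXISTS (SELECT NULL
                   FROM   table2 t2
                   WHERE  t2.attribute1 = t1.attribute1)
    AND    EXISTS (SELECT NULL
                   FROM   table3 t3
                   WHERE  t3.attribute2 = t1.attribute2);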

    Hi,
    Etbin wrote:
    Hi, Frank
    taking nulls into account, what if some attributes are null ;) then the query should look like
    NOT TESTED !
    update table1 t1
    set    processed = 'Y'
    where  exists(select null
                  from   table2
                  where  lnnvl(attribute1 != t1.attribute1))
    and    exists(select null
                  from   table3
                  where  lnnvl(attribute2 != t1.attribute2))
    and    processed != 'Y'
    Regards
    Etbin
    Yes, you could do that. The OP specifically requested something else:
    wgdoig wrote:
    set table1.processed = "Y"
    where (table1.attribute1 = table2.attribute1)
    and (table1.attribute2 = table3.attribute2)
    This WHERE clause won't be TRUE if any of the 4 attribute columns are NULL. It's debatable what should be done when those columns are NULL.
    But there is no argument about what needs to be done when processed is NULL.
    The OP didn't specifically say whether the UPDATE should or shouldn't be done on rows where processed was already 'Y'. You (quite rightly) introduced a condition that would prevent redo from being generated and triggers from firing unnecessarily; I'm just saying that we have to be careful that the same condition doesn't keep the row from being UPDATEd when it is necessary.

  • I have an iPod touch updated to the last available iOS 4.2.1. How can I get the last compatible versions of the apps?

    I have an iPod touch updated to the last available iOS, 4.2.1.
    How can I get the last compatible versions of the apps?
    In the App Store there are only the latest app versions, and most of them are incompatible with my device.
    I know that previous versions of these apps were compatible with my iPod.
    Can I download them all somehow?
    thanks

    The problem is: I never had the compatible versions. But I want them; I'd buy them if I could.
    So I'm stuck with my iPod, which can't be upgraded any more to higher iOS versions, and with the App Store, which does not offer older versions; versions that have existed and still exist, perhaps in the recycle bin of some users, or somewhere on the App Store servers.
    I know, it's a policy, just to sell more devices. It will not work with me; I'm not buying more iPods.

  • OBIEE Oracle gateway error while updating row count

    Hi ,
    OBIEE server 11.1.1.5, Oracle server 11g installed on Linux 64-bit.
    While updating the row count in the Admin tool I am getting the following error:
    [NQODBC][SQL_STATE:HY000][nQSError:10058] A general error has occurred.
    [nQSError: 43113] Message returned from OBIS.
    [nQSError: 43093] An error occurred while processing the EXECUTE PHYSICAL statement.
    [nQSError: 17003] Oracle gateway error: OCIEnvNlsCreate or OCIEnvInit failed to initialize environment. Please check your Oracle Client installation and make sure the correct version of OCI libraries are in the library path.
    I am able to check the database from SQL*Plus; it is working fine.
    Any suggestions highly appreciated, please.

    Make sure your connection pool is valid and able to import or execute reports.
    If everything is good as said above, then in the Physical layer database properties -> General tab, choose the database version and try it once.
    If not,
    check Doc ID 1271486.1,
    or
    to resolve the issue create a softlink (ln -s) in the <OracleBI>/server/Bin folder to link to the 32-bit Oracle Client driver file.
    The example below shows how to create the softlink from the 64-bit directory:
    cd /u10/app/orcladmin/oracle/OracleBI/server/Bin
    ln -s $ORACLE_HOME/lib32/libclntsh.so.10.1 libclntsh.so.10.1
    If this helps, please mark the answer.

  • IPod Touch Update 1.1.1 - Unknown error occurred, Wiped my iPod

    I attempted the iPod Touch Update 1.1.1. The software downloaded completely, then the progress dialog appeared that it was preparing my iPod for the update. After a lengthy delay I received an error dialog saying that an unknown error had occurred(1602).
    After looking through my logs, the only relevant entry I can find is the following in iPodUpdater 3.log:
    2007-09-27 18:06:56.000 iTunes[432:d03]: device connected (isDFU = 0)
    2007-09-27 18:06:56.000 iTunes[432:d03]: error getting plugin interface for device: 0xe00002be
    2007-09-27 18:06:56.000 iTunes[432:d03]: an erorr occurred handling a connected device: 0x7d1
    2007-09-27 18:06:56.000 iTunes[432:d03]: _AMRecoveryModeDeviceFinalize: 0x14caa540
    This failure left that initial screen of the iPod plugin and iTunes symbol. I closed iTunes and relaunched it, as my iPod wasn't listed in the devices list. Upon relaunch, the iPod was detected but an error dialog appeared and said that my iPod was in a recovery state and had to be restored in order to work. So I restored my iPod and it updated it to 1.1.1.
    I have done nothing special to my iPod, I've only had it for a few days, but this update wiped my iPod clean. This is absolutely unacceptable for an update and I am very disappointed in Apple for this. They claim they are keeping the platform locked so things like this don't occur, but clearly it isn't helping.

    Robert Bjoraker wrote:
    Mine upgraded from 1.1 flawlessly, but I was holding my breath...
    So far, I have never had a glitch like the one described here on my Ipods, but have read similar stories everywhere. This happens on Pocket PC PDAs as well.
    I've used my dubious firmware patching abilities to (as they say) brick a couple of things over the years, including a fairly nice Vaio. There was a time (after walking 10 miles through the snow, uphill both ways) that a failed firmware update meant that your computer was useful only as a doorstop if something went wrong.
