JCA DB Adapter - merge - changing Primary Key

Hello,
I have an OSB business service that does a merge operation using a JCA DB Adapter. I recently changed the primary key of the table it merges into, so I also changed the toplink file of the business service (manually via sbconsole).
However, under certain conditions I'm getting an ORA-00001 error (JCA-11616), i.e. the merge is trying to do an insert that violates the primary key of the table.
Should I also have changed something other than the business service? The table's WSDL and XML don't mention the primary key as far as I can tell...
Any ideas?
Cheers

That is probably a database error. If you changed the primary key in the toplink file to be less restrictive than the compound key in the database, you could potentially get this error. You need to make sure the primary key or unique constraint columns from the database are present in the toplink file. You can then add additional columns to the toplink file based on your business logic, but never remove a database primary key column from the file, because the service could then attempt an invalid insert.
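For illustration (the table and column names below are hypothetical, not taken from the original service), the adapter's merge behaves roughly like a match-then-insert: it looks for a row by the descriptor's key and inserts when no match is found. If that key no longer lines up with the database's primary key or unique constraint, the insert path can fire against a row the database already treats as a duplicate:
-- Hypothetical sketch: the database enforces uniqueness on ORDER_ID,
-- but the descriptor now matches on CUSTOMER_REF only.
CREATE TABLE demo_orders (
  order_id     NUMBER PRIMARY KEY,
  customer_ref VARCHAR2(30),
  amount       NUMBER
);
INSERT INTO demo_orders VALUES (100, 'CUST-A', 10);
-- Roughly what the merge amounts to:
MERGE INTO demo_orders t
USING (SELECT 100 AS order_id, 'CUST-B' AS customer_ref, 20 AS amount FROM dual) s
ON (t.customer_ref = s.customer_ref)   -- descriptor key, not the database key
WHEN MATCHED THEN UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (order_id, customer_ref, amount)
  VALUES (s.order_id, s.customer_ref, s.amount);
-- No row matches on CUSTOMER_REF, so the INSERT branch fires and raises
-- ORA-00001 because ORDER_ID = 100 already exists.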

Similar Messages

  • Urgent - ESB: DB Adapter with composite primary keys no returning any data

    I have a DB Adapter in the ESB that inserts/updates/selects data to/from a table with 2 columns as primary keys, but the table has several columns.
    1. Initially, the db table had constraints for the composite primary key. The DB adapter had valid data coming in, but no result data.
    2. Then I removed the db constraints on the composite primary key, and selected the 2 columns in the DB adapter wizard. Still, valid data is going in, since I am outputting to a file prior to calling this node, but no result data is appearing. The result XML is empty.
    Do I need to do something in Toplink for this?
    The table spec is below. The ACCT_FIELD and ACCT_CODE columns make up the composite primary key.
    CREATE TABLE AFF_DATA_SYNC (
      ACCT_FIELD NUMBER NOT NULL,
      ACCT_CODE VARCHAR2(16) NOT NULL,
      ACCT_EXISTS_FLAG VARCHAR2(1),
      SAVE_ACCT_SEG_XML CLOB,
      LAST_UPDATE_DATE DATE
    );
    The following xml is the request to the DB adapter:
    <top:AffDataSyncReadDBAdapterSelect_accountField_accountCodeInputParameters xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/AffDataSyncReadDBAdapter">
    <top:accountField>8</top:accountField>
    <top:accountCode>0003888</top:accountCode>
    </top:AffDataSyncReadDBAdapterSelect_accountField_accountCodeInputParameters>
    Log output:
    JCA: esb:///ESB_Projects/STRIPES-AFF-Data-Intg_AFF-Data-Integration/AffDataSyncReadDBAdapter.wsdl [ AffDataSyncReadDBAdapter_ptt::AffDataSyncReadDBAdapterSelect_accountField_accountCode(AffDataSyncReadDBAdapterSelect_accountField_accountCode_inparameters,AffDataSyncWipCollection) ] - No XMLRecord headers provided
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> Executing query with arguments [8, 0003917]
    JCA: <oracle.tip.adapter.db.TopLinkLogger log> SELECT ACCT_FIELD, ACCT_CODE, ACCT_EXISTS_FLAG, SAVE_ACCT_SEG_XML, LAST_UPDATE_DATE FROM
    AFF_DATA_SYNC_WIP WHERE ((ACCT_FIELD = ?) AND (ACCT_CODE = ?))
    bind => [8, 0003888]
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> Read the following objects: []

    The Toplink mapping has no errors. Now I have changed my table to have only a single primary key, but for some reason I am still getting no data.
    JDeveloper 10.1.3.3 / SOA Suite (only using ESB) 10.1.3.3 with Oracle DB 10g 10.2.0.3.
    1. What does "No XMLRecord headers provided" mean?
    2. Notice the last log item: Read the following objects: []
    Here are the log contents:
    Invoking next service "AffDataSyncReadDBAdapterSelect_recordId" with payload :
    <top:AffDataSyncReadDBAdapterSelect_recordIdInputParameters xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/AffDataSyncReadDBAdapter">
    <top:recordId>80003888</top:recordId>
    </top:AffDataSyncReadDBAdapterSelect_recordIdInputParameters>
    JCA: esb:///ESB_Projects/STRIPES-AFF-Data-Intg_AFF-Data-Integration/AffDataSyncReadDBAdapter.wsdl [ AffDataSyncReadDBAdapter_ptt::AffDataSyncReadDBAdapterSelect_recordId(AffDataSyncReadDBAdapterSelect_recordId_inparameters,AffDataSyncWipCollection)
    ] - No XMLRecord headers provided
    JCA: esb:///ESB_Projects/STRIPES-AFF-Data-Intg_AFF-Data-Integration/AffDataSyncReadDBAdapter.wsdl [ AffDataSyncReadDBAdapter_ptt::AffDataSyncReadDBAdapterSelect_recordId(AffDataSyncReadDBAdapterSelect_recordId_inparameters,AffDataSyncWipCollection)
    ] - Starting JCA LocalTransaction
    JCA: esb:///ESB_Projects/STRIPES-AFF-Data-Intg_AFF-Data-Integration/AffDataSyncReadDBAdapter.wsdl [ AffDataSyncReadDBAdapter_ptt::AffDataSyncReadDBAdapterSelect_recordId(AffDataSyncReadDBAdapterSelect_recordId_inparameters,AffDataSyncWipCollection)
    ] - Invoking JCA Outbound Interaction
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> executing the NamedQuery: AffDataSyncReadDBAdapter.AffDataSyncWip.AffDataSyncReadDBAdapterSelect
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> Parsing header record element.
    JCA: <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> Executing query with arguments [80003888]
    JCA: <oracle.tip.adapter.db.TopLinkLogger log> SELECT RECORD_ID, ACCT_FIELD, ACCT_CODE, ACCT_EXISTS_FLAG, SAVE_ACCT_SEG_XML, LAST_UPDATE_DATE
    FROM AFF_DATA_SYNC_WIP WHERE (RECORD_ID = ?)
    bind => [80003888]
    JCA: <oracle.tip.adapter.db.DBInteraction executeOutboundRead> Read the following objects: []
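    One hedged sanity check, given that the CREATE TABLE earlier is for AFF_DATA_SYNC while the logged queries read AFF_DATA_SYNC_WIP, is to run the logged query by hand against the exact table the adapter targets and confirm the row really exists there (bind value taken from the last log entry):
    SELECT RECORD_ID, ACCT_FIELD, ACCT_CODE, ACCT_EXISTS_FLAG, SAVE_ACCT_SEG_XML, LAST_UPDATE_DATE
      FROM AFF_DATA_SYNC_WIP
     WHERE RECORD_ID = 80003888;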

  • How to change primary keys of existing InfoCube.

    Greetings everyone!
    I’m trying to change the Key Fields in my reporting architecture as per our new company mandate. I’ve been able to successfully change the primary keys for the DS, DSO and InfoSource. Can any kind soul out there please tell me how to change the primary keys on an existing InfoCube? I will surely appreciate all the assistance I can get. It’s kinda urgent!
    Regards,
    Philips

    Hi,
    Check the possibility with the Remodelling option. If it is not possible with Remodelling, then you can only change the cube by deleting the data.
    With rgds,
    Anil Kumar Sharma .P

  • Issues while changing primary key in table

    Hi
    I have one table in which two fields are primary keys. I want to change the second PK to be an FK only. But when I change this field to an FK, it shows the error 'Primary Key Change not permitted for value Table ZCAUSECATMASTER'. How can I avoid this error?
    Please help me.

    Hi.....
    Remove the primary key flag from the second field and assign your foreign key table to that field.
    Then, when you enter values in that second field, they will be validated against its foreign key table; it is nothing but a check table:
    only values that exist in the foreign key table can be entered.
    regards
    raja

  • Change primary key in ztable

    Hello everyone
    I need some advice.
    I have transparent table ZINVOICE which has data in the production system.
    The fields of the table are
    VBELN  Primary key
    KUNNR
    NAME
    FKART
    FKDAT
    NETWR
    STAT
    TAXINVNUM
    ZFKDAT
    ZNETWR
    Now I want table ZINVOICE to have 2 primary keys (VBELN + TAXINVNUM).
    When I activate, the system shows
    Key is already defined; field TAXINVNUM cannot be in the key
    Message no. AD434
    Diagnosis
    When defining table fields, you added further key fields at the end after entering a block of key fields followed by a block of non-key fields.
    All the key fields of a table must be in a block at the beginning of the table.
    Procedure
    All key fields in a table should be entered in a single block.
    Does this mean that the primary keys must be at the beginning of the table?
    Can I delete fields in table ZINVOICE, re-insert them, and change the position of the fields to
    VBELN  Primary key
    TAXINVNUM Primary key
    KUNNR
    NAME
    FKART
    FKDAT
    NETWR
    STAT
    ZFKDAT
    ZNETWR
    Can I do it like this?
    Will it impact my existing data in SAP?
    Please advise.

    As you are adding a new key, you should not get a problem with duplicate records.
    You should create two transport requests and activate/transport the table twice.
    - In the first request, change the order of the fields in the database table, don't add the primary key yet, activate, adjust the database table if required (it should not be required), and transport the database table to the target system.
    - In the second request, declare the second field as a primary key and proceed the same way.
    DON'T TRANSPORT BOTH REQUESTS TOGETHER
    First step
    VBELN K
    TAXINVNUM
    KUNNR
    NAME
    Second step
    VBELN K
    TAXINVNUM K
    KUNNR
    NAME
    Question: Did you forget to add the client field? If yes, you will have to adjust the table, the records will be copied into every client, and you will have to create and execute a small cleanup program
    Third step
    MANDT K
    VBELN K
    TAXINVNUM K
    KUNNR
    NAME
    Regards,
    Raymond

  • APEX 4.0 -Show and able to change Primary key values for Detail

    In a Master-Detail form, is there any way I can show and be able to change/select my primary keys from a select list field? One of my primary keys on the Detail is also a primary key from another table, which restricts my values for this field. I was able to show the fields but I cannot make changes to this field and save the changes. Is there any way I can both show the field and be able to change the field's value and save it? Please advise. Thank you very much for your help in advance.
    -Grace

    Yes AFAIK Apex (for better or worse) was designed such that the PKs are generated automatically with PL/SQL, by a trigger, or whatever other algorithm that isn't in the control of the end user. It also only seems to allow a composite PK of no more than two columns.
    My usual strategy is to:
    1. Define the PK as a number (some sort of RECORD_ID, RECORD_SEQ, whatever) and populate it via a trigger on-insert.
    2. Define the "business" PK as a separate unique index. This way the user can set and modify it to their heart's content and it also isn't limited to just two columns (if the composite key's business requirement is such that more than two columns are needed).
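    A minimal sketch of that pattern, with purely hypothetical object names (the actual table, columns and sequence would come from your own schema):
    -- Surrogate numeric PK filled by an on-insert trigger; the editable
    -- "business" key is enforced as a separate unique constraint.
    CREATE TABLE order_detail (
      record_id  NUMBER       PRIMARY KEY,
      order_no   VARCHAR2(20) NOT NULL,
      line_code  VARCHAR2(20) NOT NULL,
      qty        NUMBER,
      CONSTRAINT order_detail_uk UNIQUE (order_no, line_code)
    );
    CREATE SEQUENCE order_detail_seq;
    CREATE OR REPLACE TRIGGER order_detail_bi
      BEFORE INSERT ON order_detail
      FOR EACH ROW
      WHEN (new.record_id IS NULL)
    BEGIN
      SELECT order_detail_seq.NEXTVAL INTO :new.record_id FROM dual;
    END;
    /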

  • Changing primary key(s)

    Is it possible to change the primary key of existing persistent objects at
    runtime? (If yes, are references from other tables updated automatically?)
    Regards
    Achim

    Hi,
    AFAIK, Kodo JDO 3.0.1 doesn't support the JDO optional feature:
    javax.jdo.option.ChangeApplicationIdentity
    So assuming you are using Application Identity for your class then you won't
    be able to change the fields that are part of the primary key.
    Not sure if that's what you meant?
    Cheers
    - Keiron
    "Hans-Joachim Oehme" <[email protected]> wrote in message
    news:bujkqd$2bt$[email protected]..
    Is it possible to change the primary key of existing persistent objects at
    runtime? (If yes, are references from other tables updated automatically?)
    Regards
    Achim

  • Change primary key field in Master-Detail Form (Urgent)

    Hi,
    Can some experts share your valuable experience on the problem below?
    We have created a form with a Master-Detail relation. For some reasons, we have to allow the primary key field to be editable, but we couldn't achieve it in Forms 6i. It gives an update error during commit.
    Your kind help will be highly appreciated.
    Regards,
    YM

    Hi there,
    I think the problem is that you are updating the primary key in the master block, but since it is a master-detail form you need to update it in the detail block as well. So what I suggest is: in the PRE-INSERT trigger, assign the value of the master primary key to the detail primary key (a sketch follows below).
    Just check this out; hopefully this should help you.
    Bye
    Atul
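    A hedged sketch of that PRE-INSERT trigger on the detail block (the block and item names here are made up; substitute your own):
    -- Forms PRE-INSERT trigger on the detail block: copy the master's key
    -- into the detail row before it is posted.
    BEGIN
      :DETAIL_BLK.ORDER_ID := :MASTER_BLK.ORDER_ID;
    END;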

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deploy appears to be successful as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when I sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and in the log file ol_sync.log.
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index that is being created and is giving the error is the index created automatically with the Primary Key of the table, and so nothing has been modified in that.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last option, I tried to remove the item from the repository to be able to sync properly again (just like before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (but with almost equal items), and the item was downloaded to my local DB without any problem, which makes this problem even more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    Have you tried uninstalling the client and reinstalling it?
    Schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    I quote from rekounas' instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution on different scenarios. The names for the APIs that they use are now deprecated. Use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT") :
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create the column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing the PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using the DropPublicationItem() method
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To Change a Table Weight =>
    Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server/Synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change the table weight using the SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • Switch the sequence of Primary Key Columns

    Hi MaxDB experts,
    As part of our product upgrade we've made a schema change to 10 of our database tables. Typically these tables contain about 5 million rows each. The change involves switching the order of the columns in the primary key (original order: SampleTime, Id; new order: Id, SampleTime). What's the quickest way to achieve this?
    So far I've tried the following approaches without much luck:
    1. Use the "alter table alter primary key (new sequence of columns)" statement. Takes an average of 1 hour per table.
    2. Copy the original table content into another table with an "insert into newTable (select * from existingTable)" command. Takes the same amount of time as #1.
    3. Tried the "Export Table" option presented by the loadercli. Unfortunately, exported data cannot be imported into a table with a different schema (in our case, a changed primary key column sequence).
    What else can we try? Any advise / direction would be greatly appreciated. Thanks for your time.
    Sincerely,
    Sameer Apte

    Hi there,
    ok, I assume that you figured out that you'd like to have data entries belonging to a specific ID stored close together rather than scattered around the table by TIMESTAMP, or that you have more queries that specify the ID but not the TIMESTAMP.
    Both would be good reasons to perform such a change.
    Concerning the speed: basically things won't get any faster than ALTER TABLE or INSERT (SELECT * FROM)...
    The ALTER TABLE approach would have the advantage to be transactional atomic and simple to use.
    The INSERT approach would enable the use of the PREFETCHING feature for the read-I/O on the source table.
    So if you're just focussing on speed, then I'd recommend to:
    - set up a data cache that can hold both source and target tables in memory
    - enable prefetching by setting the parameter READAHEAD_TABLE_THRESHOLD to, say, 128 (be aware that you have to use MaxDB 7.6.05 or higher, but not 7.7.x, for that!)
    After you've copied each table, make sure to create the secondary indexes one by one, since otherwise the internal parallelism won't be used.
    regards,
    Lars
    p.s.
    it is possible to export/import into different schemas - it's even supported via DB Studio.
    Anyhow, it wouldn't make anything quicker for this case.
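    For reference, a hedged sketch of the two approaches discussed above (table and column names are hypothetical; MaxDB syntax as described in the original post):
    -- 1) In-place change of the key column order:
    ALTER TABLE sample_data ALTER PRIMARY KEY (id, sampletime)
    -- 2) Copy into a new table that already carries the new key order,
    --    then recreate the secondary indexes one by one afterwards:
    CREATE TABLE sample_data_new (
      id         INTEGER   NOT NULL,
      sampletime TIMESTAMP NOT NULL,
      reading    FIXED(15,2),
      PRIMARY KEY (id, sampletime)
    )
    INSERT INTO sample_data_new SELECT * FROM sample_data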

  • Help Needed..... Problems with Primary Key.

    Hello all.
    I've been trying to do this for a while with no luck, so I'm after some advice. I've got a Java program connected to a MySQL database and I want to create a copy of a table, then be able to delete records from the new table. It seems like you cannot update copied tables, because the copy doesn't include the primary keys, and I get
    SQLException: Result Set not updatable (referenced table has no primary keys)
    Is there a way around this?
    Here's what I have:
    public void createTemporyTables(){
        try{
            // Copy the base tables into working copies.
            // Note: MySQL's CREATE TABLE ... SELECT copies the data but not the primary key.
            Statement stmt5 = conn.createStatement();
            int rows1 = stmt5.executeUpdate("CREATE TABLE tempAnimals SELECT * FROM animals");
            Statement stmt6 = conn.createStatement();
            int rows2 = stmt6.executeUpdate("CREATE TABLE tempAnimalsQuestions SELECT * FROM animalsquestions");
            if (rows1 == 0){
                System.out.println("Don't add any row!");
            } else {
                System.out.println(rows1 + " row(s) affected.");
            }
            if (rows2 == 0){
                System.out.println("Don't add any row!");
            } else {
                System.out.println(rows2 + " row(s) affected.");
            }
        } catch(Exception e){
            System.out.println("SQLException: " + e.getMessage());
            System.out.println("Method = createTemporyDatabase Error");
        }
    }
    public void remove(){
        try{
            // Delete matching rows from the copies through updatable result sets.
            Statement stmt13 = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
            ResultSet yesItems = stmt13.executeQuery("SELECT * FROM tempanimals WHERE Q5 = 'y'");
            Statement stmt14 = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
            ResultSet yesQuestions = stmt14.executeQuery("SELECT * FROM tempanimalsquestions WHERE question = '5'");
            while (yesItems.next()){
                yesItems.deleteRow();
            }
            while (yesQuestions.next()){
                yesQuestions.deleteRow();
            }
        } catch(Exception e){
            System.out.println("SQLException: " + e.getMessage());
            System.out.println("Method = removeYes Error");
        }
    }
    any help will be much appreciated..
    Ben

    Hi,
    Your temporary table has no primary key. Specify a primary key field when creating the temporary table. See the statements below, and change the primary key field according to your schema:
    int rows1 = stmt5.executeUpdate("CREATE TABLE tempAnimals (primary key(animalid)) SELECT * FROM animals");
    int rows2 = stmt6.executeUpdate("CREATE TABLE tempAnimalsQuestions (primary key(questionid)) SELECT * FROM animalsquestions");
    Regards,
    Ram.

  • Database Adapter Merge with char/varchar primary key

    Hi guys,
    It seems as though merge statements in BPEL database adapters do not work if the primary key of the table contains a char/varchar. This is in JDeveloper 10.1.3.4.0.
    If I create the table below:
    create table test_merge (
      id        number primary key,
      text      varchar2(255)
    );
    Then the merge operation will update and insert as expected. However, if I create the same table but with id as a char/varchar, then the merge statement will never update.
    It seems like it never finds a record with the same id (if it is a char/varchar), and always attempts to insert, which results in unique key constraint errors for the primary key column.
    Has anyone else encountered this issue and found a way to get the merge statement to work correctly? I can obviously perform the select myself, and then conditionally update/insert, but I would prefer the merge to work as expected.
    Thanks

    After investigating further, it seems that even a database adapter select is not working correctly. Consider the following table:
    create table test_merge (
      id        varchar2(255) primary key,
      text      varchar2(255)
    );
    And I have inserted a record with id = "1" and text="abc"
    If I create a BPEL process and add a database adapter (with only the select checkbox ticked) with the following SQL:
    SELECT ID, TEXT FROM TEST_MERGE WHERE (ID = #id)
    If I invoke this adapter, passing in "1" as the id, then a record is returned, with the correct text (i.e. "abc") BUT the id returned is "-9900000000000000000000000000"
    Can anyone explain why this is happening?
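    As a hedged interim workaround along the lines of the manual select-then-update/insert mentioned in the first post, a hand-written SQL MERGE against the same table behaves correctly with a VARCHAR2 key (a sketch using the test_merge table above; the values are just examples):
    MERGE INTO test_merge t
    USING (SELECT '1' AS id, 'xyz' AS text FROM dual) s
    ON (t.id = s.id)
    WHEN MATCHED THEN UPDATE SET t.text = s.text
    WHEN NOT MATCHED THEN INSERT (id, text) VALUES (s.id, s.text);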

  • IR - if count (primary key) cannot change to count different column

    Apex 4.1.1.00.23 Windows 7 IE8 / Firefox 16
    If I add a Group By and Count to an Interactive report and choose the primary key column as the one to Count, run the report and then edit the Group By to count a different column, the result set does not change, and if I edit the Group By again it shows that the Counted column has reverted back to the primary key column.
    If I initially choose a different column I can change it and rerun the report successfully, but once I choose the primary key column it cannot be changed. This seems to happen on all applications and all browsers. Is it a bug in Apex?
    Thanks,
    Nick.

    I've tried this in 4.1.0.00.32: it works normally. In 4.1.1.00.23, however, I'm getting the weird behaviour: you can count on any column, but the moment you count the same column as the one grouped on, you cannot change the column back to another one anymore. Something which works fine in 4.1.0.00.32.

  • JBO-25014: Another user has changed the row with primary key oracle.jbo.Key

    Hi,
    I am developing a Fusion Web Application using JDeveloper 11.1.2.1.0. I have a home.jspx page that has an ADF table built on the efttBilling View Object. When you click on one of the rows in the table, it takes you to detail.jspx, where you can edit the row and save. When 'save' is clicked, stored procedures are executed to update/insert rows into a few tables, and then we go back to home.jspx, where you need to see the updated content for that row.
    To get down to the exact issue: updates are made to the tables on which the efttBilling View Object is built using a stored procedure. Once this is done, I am trying to requery the view object to see the new content. But I keep getting the JBO-25014: Another user has changed the row with primary key oracle.jbo.Key error. The following are the approaches I tried to query the new results:
    a. Executed the Application Module's Commit method. Created a 'Commit' Action binding and tied it to homePageDef.xml. Called this binding from a view scope bean.
        BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding operationBinding = bindings.getOperationBinding("Commit");
        Object result = operationBinding.execute();
        if (!operationBinding.getErrors().isEmpty()) {
            return null;
        }
    b. Marked 'Refresh on Insert' , 'Refresh on Update', 'Change Indicator' checkboxes for all the attributes in the entities associated with efttBilling View Object.
    c. Tried to requery the View Object. Created a refreshViewObject method in the Application Module Impl.java file, exposed this method to the client interface and created an invokeMethod Action binding in home.jspx.
    Code in the Application Module:
        public void refresheftTransactionsforBillingAccountViewObj1View() {
            System.out.println("In eftTransactionsforBillingAccountViewObj1");
            findViewObject("eftTransactionsforBillingAccountViewObj1").executeQuery();
        }
    Code in the view scope bean:
        DCBindingContainer bindings =
            (DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding operation =
            bindings.getOperationBinding("refresheftTransactionsforBillingAccountViewObj1View");
        operation.execute();
    I have searched the web and the ADF forums and tried the methods suggested there, but with no success.
    Could anyone please provide some insight into this issue? I have been battling with this for quite some time. I can provide you with the log file too.
    Thanks!
    Shai.

    What code does your Commit method have? Can you try using the Commit executable from the AM itself instead?
    Also -
    Shai wrote:
    'Change Indicator' checkboxes for all the attributes in the entities associated with efttBilling View Object.
    Which attributes did you set this property for? It should just be for history columns as such.
    Did you also check if this could be your scenario ?
    Decompiling ADF Binaries: Yet another reason for "JBO-25014: Another user has changed the row with primary key orac…
    OR
    JBO-25014: Another user has changed the row with primary key oracle.jbo.Key
    OR
    Another user has changed the row with primary key -Table changed externally

  • How to change the source type for a primary key on a form?

    Hi,
    At the time of creating a form, I had set the source type for the primary key to an existing sequence.
    Now I want to change the source to a trigger.
    Can anyone suggest how to do it?
    Thanks in advance,
    Annie

    Annie:
    Define the trigger and then delete the page process named 'Get PK'
    Varad
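    For example, a minimal trigger sketch (the table, column and sequence names are hypothetical; the sequence can be the one the form used before):
    CREATE OR REPLACE TRIGGER my_table_bi
      BEFORE INSERT ON my_table
      FOR EACH ROW
      WHEN (new.id IS NULL)
    BEGIN
      SELECT my_table_seq.NEXTVAL INTO :new.id FROM dual;
    END;
    /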
