Rollback in database adapter with delete polling strategy

Hi All,
We have designed a database adapter with the "Delete the Rows That Were Read" after-read polling strategy and the number of auto-retry attempts set to 5. In the BPEL process that receives the DB records, we throw a rollback fault whenever any fault occurs.
The database adapter polling is retried 5 times when a fault occurs, but the data is still deleted from the tables after the 5 retries. Is this the expected behavior of the DB adapter? Shouldn't the rollback fault roll back the complete transaction and leave the failed data in the tables?
Can anyone provide more information on how this polling strategy behaves once the auto-retries are exhausted?
Thanks.
-Pavan

You need to include your BPEL process in the same transaction as the DB adapter.
Use the following properties on the BPEL component to do this (note that the transaction property must be required, not requiresNew, otherwise the BPEL process starts its own transaction and the rollback fault never reaches the adapter's delete):
<property name="bpel.config.transaction" type="xs:string" many="false">required</property>
<property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
Also make sure the connection factory you are using in the DB adapter is XA-enabled.
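
For reference, a minimal sketch of how these two properties sit on the BPEL component in composite.xml (component and file names are illustrative, not from the original post):

<component name="ProcessPolledRows">
  <implementation.bpel src="ProcessPolledRows.bpel"/>
  <!-- join the global (XA) transaction started for the inbound DB adapter poll -->
  <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
  <!-- deliver the inbound one-way message synchronously, on the adapter's thread and transaction -->
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
</component>

With this in place, a rollback fault thrown in the BPEL process rolls back the same global transaction the adapter polled in, so the rows are not deleted and the retry count applies to the whole unit of work.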

Similar Messages

  • Database Adapter using Logical Delete Polling Strategy not updating field

    I have an ESB database adapter defined against a table. The adapter is set to use the Logical Delete polling strategy, and the table has an extra column defined as the "delete" field. When I register the adapter with the ESB and add a record to the table, the adapter reads the contents of the record as expected. However, 15 seconds later (that being the polling interval) it reads it again, and again 15 seconds after that, ad infinitum.
    I defined the logical delete field as a single-character field and set the Read value to "T" (a record is normally inserted with that field null). Results as outlined in the previous paragraph. I then redefined the field as a number and set the Read value to "1" (with a record normally inserted with that field set to "0"). Same results.
    Apparently the update of the logical delete field to the Read value is not occurring. My question (obviously) is: why?
    Thanks for your time.

    Here is the adapter WSDL:
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions
    name="Poll_PM_LOG"
    targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
    xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
    xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
    xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
    xmlns:pc="http://xmlns.oracle.com/pcbpel/"
    xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
    <types>
    <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
    schemaLocation="PollPMLOG_table.xsd"/>
    </schema>
    </types>
    <message name="LogCollection_msg">
    <part name="LogCollection" element="top:LogCollection"/>
    </message>
    <portType name="Poll_PM_LOG_ptt">
    <operation name="receive">
    <input message="tns:LogCollection_msg"/>
    </operation>
    </portType>
    <binding name="Poll_PM_LOG_binding" type="tns:Poll_PM_LOG_ptt">
    <pc:inbound_binding/>
    <operation name="receive">
    <jca:operation
    ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    DescriptorName="PollPMLOG.Log"
    QueryName="Poll_PM_LOG"
    PollingStrategyName="LogicalDeletePollingStrategy"
    MarkReadFieldName="PROCESSED"
    MarkReadValue="T"
    SequencingFieldName="ID"
    MaxRaiseSize="1"
    MaxTransactionSize="unlimited"
    PollingInterval="15"
    NumberOfThreads="1"
    UseBatchDestroy="false"
    ReturnSingleResultSet="false"
    MappingsMetaDataURL="PollPMLOG_toplink_mappings.xml" />
    <input/>
    </operation>
    </binding>
    <service name="Poll_PM_LOG">
    <port name="Poll_PM_LOG_pt" binding="tns:Poll_PM_LOG_binding">
    <jca:address location="eis/DB/PM"
    UIConnectionName="PM"
    ManagedConnectionFactory="oracle.tip.adapter.db.DBManagedConnectionFactory"
    />
    </port>
    </service>
    <plt:partnerLinkType name="Poll_PM_LOG_plt" >
    <plt:role name="Poll_PM_LOG_role" >
    <plt:portType name="tns:Poll_PM_LOG_ptt" />
    </plt:role>
    </plt:partnerLinkType>
    </definitions>
    The field PROCESSED is defined as CHAR(1), and is normally null when a record is inserted. The value of the field is to be set to 'T' when the record is read by the polling adapter.
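    For context, with the logical delete strategy the adapter is expected to issue a mark-read update after each poll, roughly of the following shape (illustrative only: the field name comes from the WSDL above, the table name PM_LOG is assumed from the thread title, and the actual statement is generated by TopLink). This is the update that does not appear to be happening in this case.
    UPDATE PM_LOG
       SET PROCESSED = 'T'   -- MarkReadValue
     WHERE ID = ?             -- key of each row just raised to the ESB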

  • DB Adapter Locking Database Rows in Distributed Delete Polling Strategy

    I am stuck with an issue. To explain it in simple steps:
    I am creating a database polling adapter with the Distributed Delete polling strategy in OSB, to run in a clustered environment.
    We use custom SQL so that the records are not deleted after they are fetched; only a status column is updated.
    The polling query and "delete" SQL are as follows:
    <query name="ReqJCAAdapterSelect" xsi:type="read-all-query">
    <criteria operator="equal" xsi:type="relation-expression">
    <left name="Status" xsi:type="query-key-expression">
    <base xsi:type="base-expression"/>
    </left>
    <right xsi:type="constant-expression">
    <value>READY</value>
    </right>
    </criteria>
    <reference-class>ReqJCAAdapter.ItemTbl</reference-class>
    <refresh>true</refresh>
    <remote-refresh>true</remote-refresh>
    <lock-mode>lock-no-wait</lock-mode>
    <container xsi:type="list-container-policy">
    <collection-type>java.util.Vector</collection-type>
    </container>
    </query>
    <delete-query>
    <call xsi:type="sql-call">
    <sql>update ITEM_TBL
    set STATUS = 'IN_PROCESS'
    where ID = #ID</sql>
    </call>
    </delete-query>
    In case of any error, the Service Error Handler updates the status to ERROR.
    The problem I am facing: in the request pipeline, if we try to update the same record, we find that the row is locked and the update is not allowed, so the process cannot proceed.
    Likewise, if an error occurs in the request pipeline, the Service Error Handler is supposed to update the status to ERROR, but the same locking happens and the process cannot proceed.
    In the response pipeline, however, we can successfully update the status of the same record.
    We have tried both XA and non-XA datasources, but no luck.
    Any help is appreciated.
    Regards,
    Dilip
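    A note on what is probably happening (a sketch based on the lock-no-wait setting in the query above, not on the actual SQL TopLink generates): with distributed polling the candidate rows are selected with a pessimistic lock, and that lock is held until the polling transaction completes, so any other transaction touching the same rows is blocked.
    -- roughly what lock-mode "lock-no-wait" amounts to for the polled rows
    SELECT ID, STATUS
      FROM ITEM_TBL
     WHERE STATUS = 'READY'
       FOR UPDATE NOWAIT;   -- rows stay locked until the poller's transaction ends
    -- an update issued from a different transaction (e.g. the request pipeline or
    -- the service error handler) then waits on that row lock, or fails straight
    -- away if it is attempted with a NOWAIT option
    UPDATE ITEM_TBL SET STATUS = 'ERROR' WHERE ID = 1;
    This would also explain why the update succeeds from the response pipeline, if by that point the polling transaction has already completed and released the lock.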

  • Poll Database Adapter Physicial Delete strategy

    Guys,
    The database adapter should delete the record from the table only if the record was processed successfully in ESB or BPEL. But as far as I have tested, it deletes the polled record regardless of success or failure.
    Is there any setting I need to apply to force the delete only when a successful instance is created?

    The DB adapter should participate in the transaction, so upon failure the delete should be rolled back.
    Can you make sure you aren't using any MCF properties, and that your process doesn't have any dehydration point (e.g. a receive/onMessage)?
    Regards,
    Chintan

  • How DB adapter works when polling strategy is "remove row selected"?

    How does the database adapter work when the polling strategy is "delete the rows that were read"? This strategy polls for changes to a table and removes records after new records are inserted or existing ones are changed. I want to understand how the DB adapter identifies which records to pick up and delete.
    For example:
    There is a table EMP with 100 records. I deploy a BPEL process with a DB adapter polling table EMP for changes, so that any changed records are picked up, using the "delete the rows that were read" strategy.
    Now I insert 9 new records and update one existing record. These 10 records should be picked up and then deleted by the DB adapter. Out of the 109 records, how does the DB adapter identify these 10 when it polls? How does it differentiate old records from new ones when there is no flag field and no sequence id to identify new or modified records?
    Please let me know.
    Why do I want to know this?
    Sometimes a customer may not allow the BPEL process to make any modifications to the source table or source database. In that case the options provided by the database adapter wizard are not useful.
    If there were a mechanism to identify new or modified records without a FLAG field or a sequencing table, then an option to only read the changes from the table, rather than deleting the records after reading them, would meet this requirement.
    Please let me know if there is any way to do this.
    thanks
    Arureddy

    Once a record has been read it is deleted. Therefore, you can update a row in this table as many times as you like before it is read; once it has been read there is nothing left to update, as it has been deleted.
    If you don't want to use a sequencing table, you can use a sequencing file. You can only use this if the key increments sequentially, e.g. 1, 2, 3, 4. If your key is random, e.g. 2, 1, 3, 5, 7, 4, then your options are delete or a processing flag.
    The other option is to create a trigger that inserts a key into a separate polling table whenever an insert or update occurs. You can then use the delete strategy against that polling table, which is the most desirable approach as it uses database locking. See the sketch below.
    cheers
    James
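    To illustrate the trigger-plus-polling-table approach, here is a minimal sketch (table, column and trigger names are made up for the example; the source table EMP is taken from the question):
    -- separate polling table that the DB adapter polls with the delete-after-read strategy
    CREATE TABLE emp_poll (
      emp_id      NUMBER      NOT NULL,
      change_type VARCHAR2(1) NOT NULL,   -- 'I' = insert, 'U' = update
      changed_on  DATE        DEFAULT SYSDATE
    );
    -- trigger on the source table writes a key into the polling table,
    -- so the source table itself is never modified or deleted from by the adapter
    CREATE OR REPLACE TRIGGER emp_capture_changes
    AFTER INSERT OR UPDATE ON emp
    FOR EACH ROW
    BEGIN
      INSERT INTO emp_poll (emp_id, change_type)
      VALUES (:NEW.empno, CASE WHEN INSERTING THEN 'I' ELSE 'U' END);
    END;
    /
    The adapter then polls emp_poll and deletes from it as rows are read, while EMP itself stays untouched.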

  • Database Adapter with sql server connection cannot import user tables

    Hi,
    In JDeveloper 11g I created a SQL Server database connection. The connection works: I can see the tables and navigate through the data in the Database Navigator. But when I try to import the tables in the Database Adapter wizard to set up a polling operation, it displays only system tables, like this:
    syscolumns (dbo)
    syscomments (dbo)
    sysdepends (dbo)
    sysfilegroups (dbo)
    sysfiles (dbo)
    sysforeignkeys (dbo)
    sysfulltextcatalogs (dbo)
    sysindexes (dbo)
    sysindexkeys (dbo)
    sysmembers (dbo)
    sysobjects (dbo)
    syspermissions (dbo)
    sysprotects (dbo)
    sysreferences (dbo)
    systypes (dbo)
    sysusers (dbo)
    I don't understand this weird behaviour of JDeveloper.
    Please advise.
    -Chaitu
    Edited by: chaitu123 on May 13, 2010 9:16 AM

    This is NOT related solely to the SOA Suite. The same behavior appears in the Connections Navigator.
    IMHO, this is related to how JDeveloper is parsing the .getMetaData() response.
    Scott
    Edited by: user8951683 on May 21, 2010 8:08 AM

  • Database Adapter Logical Delete Not Working....

    Hi,
    I have an issue with the DB Adapter under BPEL GA 10.1.3.1. I'm trying to do a logical delete on a table, but the logical delete isn't updating the records to show that they've been processed.
    I've created a simple test case with a 3-column table sitting in an Oracle XE DB, with the third column containing the logical delete flag. I've created a new process consisting of a DB Adapter partner link and a receive. In the logs (below) I can see the Select statement followed by the Update (logical delete) being prepared, but the Update never seems to actually take effect in the DB. If I copy the update statement and run it through SQL as the same DB user, it updates the records.
    Has anyone seen this before?
    Thanks.
    <2006-11-13 15:06:30,901> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT A, B, C FROM F_TABLE WHERE (C = ?)
         bind => [IN]
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX beginTransaction, status=NO_TRANSACTION
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally starting
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has begun internally
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBAdapterConstants isElementFormDefaultQualified> Element is FTABLE namespace is http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.ox.O_XParser parse> Transforming the row(s) [<FTABLE Record A />, <FTABLE Record B />, <FTABLE Record C />] read from the database into xml.
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_TABLE_ptt::receive(FTABLECollection)]Posting inbound JCA message to BPEL Process 'Read_TABLE' receive activity:
    <FTABLECollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE">
    <FTABLE>
    <a>Record</a>
    <b>A</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>B</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>C</b>
    <c>IN</c>
    </FTABLE>
    </FTABLECollection>
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Delivery Thread 'JCA-work-instance:Database Adapter-6 performing unsynchronized post() to localhost
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> Begin batch statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> UPDATE F_TABLE SET C = ? WHERE ((A = ?) AND (B = ?))
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, A]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, B]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, C]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> End Batch Statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX commitTransaction, status=STATUS_ACTIVE
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally committing
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has committed internally
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchBegin: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) starting...
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchComplete: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) has completed - final size = 3
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> executing <receive> at line 58
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> set variable 'Read_TABLE_receive_InputVariable' to be readOnly, payload ref {FTABLECollection=108e2d22815529ac:-3067a9ff:10edf296212:-78da}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> variable 'Read_TABLE_receive_InputVariable' content {FTABLECollection=oracle.xml.parser.v2.XMLElement@1303465}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELInvokeWMP::Read_TABLE> executing <invoke> at line 61

    Hi,
    I haven't used 10.1.3 yet, but we had a number of issues under 10.1.2.0.2 around caching and update/insert/delete.
    A number of things we changed were:
    - set usesBatchWriting to false in the oc4j-ra.xml file (snippet below)
    - set identityMap to NoIdentityMap via the TopLink Workbench
    - set should-always-refresh-cache-on-remote, should-disable-cache-hits and should-disable-cache-hits-on-remote to true in the toplink mappings.xml file (note this last one is only needed if TopLink was not used to insert the source data).
    Ashley
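    For the first of those changes, the corresponding entry in oc4j-ra.xml looks roughly like this (a sketch; the connection-factory location and the other connection properties will differ per environment):
    <connector-factory location="eis/DB/myDataSource" connector-name="Database Adapter">
      <!-- existing connection properties (driver, URL, user, etc.) stay as they are -->
      <!-- disable TopLink batch writing so updates are issued as individual statements -->
      <config-property name="usesBatchWriting" value="false"/>
    </connector-factory>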

  • Insert Multiple records using Database adapter with Stored procedure func

    Hi All,
    I want to insert multiple records into a database using a stored procedure. I want to insert those records using a Database Adapter, and the Database Adapter should be invoked by a Mediator.
    Can somebody suggest whether this can be achieved with out-of-the-box capabilities in SOA Suite?
    Thanks for your help in advance.
    Thanks,
    Shiv

    The use case you want to achieve is a feature supported by the DB Adapter, and it is possible to invoke it from a Mediator.
    Please have a look at the Oracle documentation and you should be able to get the necessary information.
    The links below should help you as well:
    http://download.oracle.com/docs/cd/E15523_01/integration.1111/e10231/adptr_db.htm
    http://blogs.oracle.com/ajaysharma/2011/03/using_file_adapter_database_adapter_and_mediator_component_in_soa_11g.html
    There are some video tutorials as well :)
    http://www.youtube.com/watch?v=dFldS-fDx70 This should also help.
    Thanks,
    Patrick
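    One common way to shape such a procedure is to pass the records as a collection and insert them in one call. A minimal sketch, assuming a schema-level object type and nested table type (all names here are illustrative, not from the original post):
    CREATE OR REPLACE TYPE order_rec AS OBJECT (
      order_id   NUMBER,
      order_desc VARCHAR2(200)
    );
    /
    CREATE OR REPLACE TYPE order_tab AS TABLE OF order_rec;
    /
    CREATE OR REPLACE PROCEDURE insert_orders (p_orders IN order_tab) AS
    BEGIN
      -- insert every element of the collection
      FOR i IN 1 .. p_orders.COUNT LOOP
        INSERT INTO orders (order_id, order_desc)
        VALUES (p_orders(i).order_id, p_orders(i).order_desc);
      END LOOP;
    END insert_orders;
    /
    The DB Adapter's stored procedure option can generate the XSD for a collection parameter like this, and the Mediator then maps its payload onto that structure.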

  • Database adapter to continue polling after hitting exception

    I am using BPEL 10.1.3.5 with the database adapter's "LogicalDeletePollingStrategy".
    On hitting an exception in one of the records, the adapter stops polling/processing the other records in that transaction.
    Ideally it should continue processing the other records.
    I believe in 11g there is a JCA property, Undying, that makes the adapter continue processing after hitting an exception. How can I get the same behaviour in 10g?
    Thanks,

    There is no rejection handler configured, and I am not sure whether configuring a rejection handler to write failed database records to a file is the better approach.
    In 10g, the DB adapter stops after hitting an exception in that transaction.

  • Jdev Database Adapter - polling updated rows

    Hello, I have a question regarding the polling strategy available in the database adapter.
    I set it up and it works great with newly inserted rows in the table.
    However, it doesn't capture updated rows!
    For instance, i have the following table:
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    When I insert a new row, it is captured by comparing against the last captured ID in the sequencing file.
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    3 -Cindy- 20 <--------- New row
    But when I UPDATE an already existing row, it doesn't pick up the changed row!
    ID - NAME - AGE
    1 -John- 26 <----------- Age changed, but polling doesn't recognize it!
    2 -Mary- 25
    3 -Cindy- 20
    Is there a way to get this to work? Should I set a special option? Thank you very much.
    I'm using JDeveloper 11g.

    Hi John, it depends on which type of polling strategy you are using to poll the new/updated records. You (or your DB team) must have the necessary privileges to add the special field and create triggers.
    i. Physical delete polling strategy: cannot capture UPDATE operations on the table. When a record is polled it is deleted afterwards, so every record the adapter encounters looks like a new record, even if it was actually updated before the polling cycle. So physical delete cannot capture UPDATED records.
    ii. Logical delete polling strategy: the logical delete polling strategy updates a special field of the table after processing each row (it rewrites the WHERE clause at runtime to filter out processed rows). The status column is marked as processed after polling, and the read value must be provided while configuring the adapter. The modified WHERE clause and post-read values are handled automatically by the DB adapter.
    Usage:
    <operation name="receive">
    <jca:operation
    ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
    PollingStrategyName="LogicalDeletePollingStrategy"
    MarkReadField="STATUS"
    MarkReadValue="PROCESSED"
    This polling strategy captures updated records only if a trigger is added. When a record is polled, its status is updated to 'PROCESSED'. For the record to be captured again after an update, its status has to go back to 'UNPROCESSED', so a trigger is needed to reset the status field whenever the record is updated (other than by the adapter's own mark-read update). Below is an example.
    Ex:
    CREATE OR REPLACE TRIGGER nameOftrigger_modified
    BEFORE UPDATE ON table_name
    REFERENCING NEW AS modifiedRow
    FOR EACH ROW
    BEGIN
      -- reset the flag only when this update is not the adapter's own mark-read update,
      -- otherwise the adapter's SET STATUS = 'PROCESSED' would be undone immediately
      IF NOT UPDATING('STATUS') THEN
        :modifiedRow.STATUS := 'UNPROCESSED';
      END IF;
    END;
    /
    In this example, STATUS is the special field (of the polled table). When the record is updated by anything other than the adapter itself, the trigger fires and resets the STATUS field to 'UNPROCESSED', so the record is picked up again on the next poll.
    Other polling strategies such as "Sequencing Table Last Updated" and "Sequencing Table Last-Read Id" can also be used to capture updated records. With those strategies you also need triggers like the one above, plus an extra helper table to poll against.
    The logical delete polling strategy is usually good enough to poll updated records.
    Hope this helps.
    Thank you.

  • How to delete the content of a datatable by the "Database Adapter"

    Hello,
    I want to delete all entries in a table via a database adapter, like "DELETE FROM table" in SQL (Oracle SOA Suite 11g SR1, with an Oracle DB over JDBC).
    When I create a new database adapter I can select the standard operations insert, select and delete. The delete operation has the disadvantage that it needs an input message with the key values of the entries to be deleted, which I don't have and don't want to fetch.
    So I tried the "Pure SQL" mode for the adapter with the SQL string "DELETE FROM table", but this statement is somehow not committed to the database.
    My questions are:
    - Is a commit needed for Pure SQL instructions? What is the syntax for writing several SQL instructions for one database adapter?
    - Does anyone have another solution for deleting the entries?
    Thanx in advance
    Tobias

    Hi Shishir,
    it is a pretty simple example. I think the problem is that the database adapter is not committing.
    The BPEL is simply sending an empty message to the adapter, which should trigger the SQL execution (see the code below).
    The Pure SQL string I'm using is "DELETE FROM table".
    Thanks, Tobias
    <?xml version = "1.0" encoding = "UTF-8" ?>
    <!--
    Oracle JDeveloper BPEL Designer
    Created: Wed May 05 09:13:00 CEST 2010
    Author: oracle
    Purpose: Asynchronous BPEL Process
    -->
    <process name="BPELProcess1"
    targetNamespace="http://xmlns.oracle.com/App_DWH_Prototype/TestDELETE/BPELProcess1"
    xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
    xmlns:client="http://xmlns.oracle.com/App_DWH_Prototype/TestDELETE/BPELProcess1"
    xmlns:ora="http://schemas.oracle.com/xpath/extension"
    xmlns:oraext="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"
    xmlns:xpath20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20"
    xmlns:ldap="http://schemas.oracle.com/xpath/extension/ldap"
    xmlns:ui="http://xmlns.oracle.com/soa/designer/"
    xmlns:task="http://xmlns.oracle.com/bpel/workflow/task"
    xmlns:taskservice="http://xmlns.oracle.com/bpel/workflow/taskService"
    xmlns:wfcommon="http://xmlns.oracle.com/bpel/workflow/common"
    xmlns:wf="http://schemas.oracle.com/bpel/extension/workflow"
    xmlns:xdk="http://schemas.oracle.com/bpel/extension/xpath/function/xdk"
    xmlns:dvm="http://www.oracle.com/XSL/Transform/java/oracle.tip.dvm.LookupValue"
    xmlns:xref="http://www.oracle.com/XSL/Transform/java/oracle.tip.xref.xpath.XRefXPathFunctions"
    xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
    xmlns:bpws="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
    xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/App_DWH_Prototype/TestDELETE/TestDelete%2F"
    xmlns:ns2="http://xmlns.oracle.com/pcbpel/adapter/db/top/TestDelete">
    <!--
    PARTNERLINKS
    List of services participating in this BPEL process
    -->
    <partnerLinks>
    <!--
    The 'client' role represents the requester of this service. It is
    used for callback. The location and correlation information associated
    with the client role are automatically set using WS-Addressing.
    -->
    <partnerLink name="bpelprocess1_client" partnerLinkType="client:BPELProcess1" myRole="BPELProcess1Provider" partnerRole="BPELProcess1Requester"/>
    <partnerLink name="TestDelete" partnerRole="TestDelete_role"
    partnerLinkType="ns1:TestDelete_plt"/>
    </partnerLinks>
    <!--
    VARIABLES
    List of messages and XML documents used within this BPEL process
    -->
    <variables>
    <!-- Reference to the message passed as input during initiation -->
    <variable name="inputVariable" messageType="client:BPELProcess1RequestMessage"/>
    <!-- Reference to the message that will be sent back to the requester during callback -->
    <variable name="outputVariable" messageType="client:BPELProcess1ResponseMessage"/>
    <variable name="Invoke_1_delete_InputVariable"
    messageType="ns1:TActIngredientCollection_msg"/>
    </variables>
    <!--
    ORCHESTRATION LOGIC
    Set of activities coordinating the flow of messages across the
    services integrated within this business process
    -->
    <sequence name="main">
    <!-- Receive input from requestor. (Note: This maps to operation defined in BPELProcess1.wsdl) -->
    <receive name="receiveInput" partnerLink="bpelprocess1_client" portType="client:BPELProcess1" operation="process" variable="inputVariable" createInstance="yes"/>
    <!--
    Asynchronous callback to the requester. (Note: the callback location and correlation id is transparently handled using WS-addressing.)
    -->
    <assign name="Assign_1">
    <copy>
    <from expression="''"/>
    <to variable="Invoke_1_delete_InputVariable"
    part="TActIngredientCollection"
    query="/ns2:TActIngredientCollection/ns2:TActIngredient/ns2:plotNo"/>
    </copy>
    <copy>
    <from variable="inputVariable" part="payload"
    query="/client:process/client:input"/>
    <to variable="outputVariable" part="payload"
    query="/client:processResponse/client:result"/>
    </copy>
    </assign>
    <invoke name="Invoke_1" inputVariable="Invoke_1_delete_InputVariable"
    partnerLink="TestDelete" portType="ns1:TestDelete_ptt"
    operation="delete"/>
    <invoke name="callbackClient" partnerLink="bpelprocess1_client" portType="client:BPELProcess1Callback" operation="processResponse" inputVariable="outputVariable"/>
    </sequence>
    </process>

  • Database Adapter Merge with char/varchar primary key

    Hi guys,
    It seems as though merge statements in BPEL database adapters do not work if the primary key of the table contains a char/varchar. This is in Jdeveloper 10.1.3.4.0
    If I create the table below:
    create table test_merge (
      id        number primary key,
      text      varchar2(255)
    );
    Then the merge operation will update and insert as expected. However, if I create the same table but with id as a char/varchar, then the merge statement will never update.
    It seems like it never finds a record with the same id (if it is a char/varchar), and always attempts to insert, which results in unique key constraint errors for the primary key column.
    Has anyone else encountered this issue and found a way to get the merge statement to work correctly? I can obviously perform the select myself, and then conditionally update/insert, but I would prefer the merge to work as expected.
    Thanks

    After investigating further, it seems that even a database adapter select is not working correctly. Consider the following table:
    create table test_merge (
      id        varchar2(255) primary key,
      text      varchar2(255)
    );
    And I have inserted a record with id = "1" and text = "abc".
    If I create a BPEL process and add a database adapter (with only the select checkbox ticked) with the following SQL:
    SELECT ID, TEXT FROM TEST_MERGE WHERE (ID = #id)
    If I invoke this adapter, passing in "1" as the id, then a record is returned with the correct text (i.e. "abc"), BUT the id returned is "-9900000000000000000000000000".
    Can anyone explain why this is happening?

  • How do you know that the DB Adapter has stopped polling

    Hi,
    Is there a smart way to figure out whether the DB Adapter has stopped polling a table? (We are using the logical delete polling strategy.) We are looking for a better alternative than checking the entire laundry list of tables for the markUnreadValue after every polling interval.
    rgds,
    Ram


  • Dynamic where clause in Database Adapter, or workaround?

    Hello friends,
    Is there any way to change the where clause in a Database Adapter (DA) dynamically?
    For example, I have a Database Adapter with the SQL statement: select col1, col2 from my_table. I have many parameters which are not mandatory, so instead of creating a where clause in the DA like:
    select col1, col2 from my_table where
    ((#p_col_1 is not null and #p_col_1 = col1) or #p_col_1 is null)
    and ...
    and ((#p_col_n is not null and #p_col_n = col_n) or #p_col_n is null)
    I would like to build the where clause dynamically depending on which parameters are non-empty, or receive the whole where clause as a parameter. Is that possible? Are there any other workarounds you can suggest?
    I will appreciate any thoughts...

    Thanks Deepa for the links, they were quite helpful. And I must admit this partially answers my question (at least for the example I provided).
    However, what if I'd like to use LIKE clauses instead of =, or IN / NOT IN? e.g.:
    select col1, col2 from my_table where
    ((#p_col_1 is not null and #p_col_1 LIKE col1) or #p_col_1 is null)
    and ...
    and ((#p_col_n is not null and col_n IN #p_col_n) or #p_col_n is null)
    Is it possible?
    thanks in advance

  • How to pass dynamic values to JDBC Database adapter in PureSQL operation?

    I want to fetch records from a table via the JDBC database adapter with criteria, but I should not hardcode the values inside my query. I am using the Execute Pure SQL operation. The values might change in future, which is why I should not hardcode them and instead pass them dynamically.
    This is my query:
    SELECT * FROM Table0 t0, Table1 t1 WHERE t0.Field1 = 'ABC' AND t0.Field2 = 'DEF' AND t0.Field3 = 'CBA' AND t1.FLAG IS NULL AND t1.Field1 = t0.Field4
    The values 'ABC', 'DEF', 'CBA', NULL are not fixed and can change. Hence I need to pass the values for these fields dynamically. What must go in the query, and where do I pass the user-defined values at run time?
    Please can anyone suggest how to resolve this? I am using SOA 11g.
    Thanks in advance.

    Hi,
    Make sure you use the Adapter Configuration Wizard to make the query modification suggested by Arik. That way the schema for the DB Adapter is created automatically. Also revisit this schema to check that the data types of the bind parameters (parameters with #) have been populated correctly in the schema.
    You can get some help from this too, although it doesn't fully match your use case.
    Regards,
    Neeraj Sehgal
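    As an illustration of the bind-parameter style the reply refers to, the original query can be written in Pure SQL with # placeholders instead of literals (a sketch; the parameter names are made up, and the Adapter Configuration Wizard generates the matching input schema):
    SELECT *
      FROM Table0 t0, Table1 t1
     WHERE t0.Field1 = #field1     -- was 'ABC'
       AND t0.Field2 = #field2     -- was 'DEF'
       AND t0.Field3 = #field3     -- was 'CBA'
       AND t1.FLAG IS NULL
       AND t1.Field1 = t0.Field4
    At run time the values for #field1, #field2 and #field3 are supplied in the adapter's input message, so nothing is hardcoded in the SQL. The IS NULL condition is structural rather than a value, so if it also needs to vary, that part of the predicate itself has to change.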
