Logical delete in DB adapter with 2 columns to update

Hi,
I am using Oracle SOA 11G.
I am polling a DB table and after reading the data I have to update 2 columns.
With logical delete we can update 1 field; is it possible to update 2 fields?
If yes, then I have to update the first column BPEL_FLAG = 'R' and the second BPEL_TIME = currentTime().
Please advise,
Thanks
Yatan

Hi Yatan,
I came across a possible solution at this link, though I could not try and validate it.
http://myexperienceswithsoa.blogspot.in/2010/06/db-adapter-polling-tricks.html
Please try and let us know if it works.
Thanks,
Deepak.
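
The blog post linked above generates the adapter with a logical-delete template and then hand-edits the after-read UPDATE in the generated <PartnerLink>-or-mappings.xml. A minimal sketch of what the edited statement could look like for Yatan's case, assuming a polled table ORDERS with primary key ORDER_ID (both names are illustrative, and this is untested, per Deepak's caveat):

-- table and key names are assumed; SYSDATE stands in for currentTime()
UPDATE ORDERS SET BPEL_FLAG = 'R', BPEL_TIME = SYSDATE WHERE (ORDER_ID = #ORDER_ID)

The #ORDER_ID bind marker follows the shape of the adapter's generated after-read SQL shown in the threads below.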

Similar Messages

  • Logical delete in database adapter

    Hello
    I was wondering if someone has a solution to the problem with polling a database. You can specify the logical delete column and you can give values for the READ, UNREAD and RESERVED states. The problem is that when, for example, an ESB project polls some specific table and starts an instance for every new row with the specified logical delete field set to the UNREAD value, and something unexpected happens and something goes wrong, the database adapter still updates the row with the READ value. This is problematic if we have thousands of rows and we would like to separate the errored rows from the successfully read rows. Is there any (easy) way to update the rows that went wrong to some other value than READ?
    I don't know if anyone understood me, but just for clarification here's an example:
    I have an ESB project which polls a specific database table and parses an XML from the data. After this the ESB project sends the data to some Web Service. The database table has a column CONDITION_CODE in which the value 0 means unread and the value 1 means read. Now if everything goes fine there are no problems. But if the Web Service is unavailable or the data is malformed, the database adapter still updates the CONDITION_CODE to 1! We have no way (except to listen to the ESB_ERROR topic and implement some error handling there) to know which rows were successfully delivered and which were not...
    Hope I was able to clarify the problem... And I hope someone is able to provide me with an answer.
    Best Regards Tuomas

    Did you use the RESERVED value property? How about the transaction mechanism? Do you have global transactions? I guess you would have to use them!
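
    To illustrate the reply: the adapter's three states amount to plain UPDATEs on the CONDITION_CODE column. The values 0 (unread) and 1 (read) come from the post; the RESERVED value 2, the error value 9, and the table name EVENTS are assumptions for this sketch:

    -- RESERVED (value 2 assumed): the adapter claims rows before handing them to the instance
    UPDATE EVENTS SET CONDITION_CODE = 2 WHERE CONDITION_CODE = 0;
    -- READ: the adapter marks rows done after delivery - even on downstream failure, per the post
    UPDATE EVENTS SET CONDITION_CODE = 1 WHERE CONDITION_CODE = 2;
    -- an error state (value 9 assumed) is outside the adapter's vocabulary and would have
    -- to be set by your own error handling, e.g. from an ESB_ERROR topic listener
    UPDATE EVENTS SET CONDITION_CODE = 9 WHERE CONDITION_CODE = 2;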

  • !!! Statements of Logic Deleting Files and messing with System are True !!!

    This morning I answered some guy's post about Logic deleting everything at the same level as the project folder under certain circumstances. I tried troubleshooting for him but could not recreate it.
    Then this happened to me today:
    When working in Logic 8 in Leopard, all of a sudden it stopped communicating with my Unitor8 via USB. I restarted, but the Unitor 8, which I reset twice, would NOT communicate with L8 anymore. It would remain in patch mode (red light lit though the CPU is running).
    I've had this erratic behavior many times over the past years on many computers, so it wasn't really new. All one needs to do is reinstall the Unitor Family driver.
    So I decided to do so:
    I inserted my Logic Studio DVD, opened the installer and checked ONLY the Unitor Family Drivers. After the install completed I restarted the Mac. Everything went fine. So I booted L8. While booting I saw my hard drives on the right side of the desktop flash once. I thought that strange and took a closer look.
    Now my boot drive was missing from all the drives on the desktop and in the sidebar window...
    I tried to Apple-click on an application in the Dock. The Finder opened the Applications folder and showed me the app I had clicked on. But I could not navigate to the root of my system drive, even via the "path" symbol in the toolbar. So I hit Shift-Apple-G and entered /Volumes/MySystemVolume - it showed my volume "Grayed Out" in the Finder window...
    I tried repairing the disk and permissions but nothing was wrong with my boot drive apparently besides the fact that the finder (and all other applications when wanting to open or save) could not "see" my system drive... I could launch all the apps normally that were residing on the SystemDrive in the AppFolder... So this kept getting weirder...
    So I opened my favorite application Tinkertool System and went to the Tab Files and chose the underlying tab "Attributes" which shows me Macintosh HFS and Finder Attributes... I pulled my SystemDrive onto the Drop Area and discovered something VERY amazing: The Display in Finder option had mysteriously been altered to "INVISIBLE"
    I changed it to Visible and there it was - My System Drive... Showing up in the finder...
    I made a restart to make sure everything was OK....
    After the restart the following settings had VANISHED:
    1) Pixadex = ALL ICONS (I had 230 MB of sorted icons) stored in my Application Support folder
    2) All my Safari bookmarks were gone - they were still in ~/Library/Safari - but the Bookmarks.plist was UNREADABLE even with a text editor and an XML editor
    3) Numerous other applications had LOST their authorization preferences so I had to re-authorize many of my Audio Unit plug-ins...
    4) My Dock had been reset
    5) My Monitor arrangement had been reset...
    6) My date and time had been reset
    7) My energy settings had been reset.... (I had NOT zapped the P-RAM)
    Since installing Leopard 2 weeks ago - I have CAREFULLY monitored ANY activity after INSTALLING anything in order to be able to troubleshoot - and this problem definitely occurred right after installing the USB Family Drivers and launching L8....
    There was really NO HARM done other than the sweating I did troubleshooting, but if L8 is capable of doing what happened to that other guy, losing his whole folder, and now doing this to me - where are the limits? When is someone going to get REALLY hurt?
    Please Apple - I am sending you this post as a BUG report as well - could you PLEASE look into this as there has to be some kind of VERY dangerous MALFUNCTION within Logic or its Installers.

    Wow, I was just joking with the medication thing. I didn't actually think it was seriously a mental health issue. Sorry.
    In any case, though, if he has full access to the Mac, mental health issues or no, there's not a whole lot to do about it. If it is possible to take away administrative privileges without causing a huge fuss, then you can limit his access to certain things. However, you can't mess with privileges on a Time Machine backup (doing so breaks it), and that means he's going to have access to at least parts of the backup. Which means he can trash at least some of the backup data, and if he deletes files from the TM backup using the Finder, he'll have essentially trashed the whole backup. He might as well have the ability to delete all the files on the machine if he has the power to delete backups.
    Honestly, in this situation, he should either only be allowed to use an account with Parental Controls on, which may not be an option (I don't know how offended he would be at such a suggestion), or he should be using a different computer entirely and have no access at all to his wife's computer. Alternatively, have his wife keep a backup that is hidden - i.e., connect the backup drive periodically and then remove it and hide it so he can't mess with it. That will at least secure the backup.

  • DB Adapter for Logical Delete - selecting multiple Statuses

    Hi,
    I am using SOA 11.1.1.3 and my requirement is to use a DB Adapter to poll both ERROR & NEW Records. Once Polled, I want the status to be changed to "PICKED" (Logical Delete)
    The DB Adapter only allows a single Unread Value to be selected. I tried to edit the SQL to use an "OR" condition but the code always replaces it with an "AND":
    Polling SQL:
    SELECT QUEUE_ID, BATCH_ID, STATUS, DEL_ID, ORDER_NUMBER, HEADER_ID, LINE_ID, ORD_TYPE, OPERATION, ERR_MESG, CREATE_DT, INV_ORG_ID, PROCESS_DATE FROM DEV.DEV_Q WHERE ((STATUS = 'ERROR') AND (STATUS = 'NEW')) ORDER BY QUEUE_ID ASC
    After Read SQL:
    UPDATE DEV.DEV_Q SET STATUS = 'PICKED' WHERE (QUEUE_ID = #QUEUE_ID)
    I tried to edit the SQL in the <PartnerLink>-or-mappings.xml but it isn't working.
    I did read some posts http://blogs.oracle.com/soa_how_to/2009/11/a_db_adapter_sample_for_pure_sql_and_logicaldeletepollingstrategy_scenario.html that mention that SQL edits can be made in the DB Adapter only if a Delete is used.
    Question is - where can I get the SQL generated (it is not there in the TopLink mapping) in order to edit it? Can someone please provide the steps?
    Thx a ton!
    Suhas.
    Edited by: user13616720 on Apr 28, 2011 3:42 AM

    Did you try modifying the SQL to
    WHERE STATUS IN ('ERROR','NEW')
    or
    WHERE REGEXP_LIKE(STATUS,'ERROR|NEW')
    Thanks
    AJ
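
    For clarity, AJ's first suggestion applied to the polling SQL from the post would read:

    -- same table and columns as the post; only the WHERE clause changes
    SELECT QUEUE_ID, BATCH_ID, STATUS, DEL_ID, ORDER_NUMBER, HEADER_ID, LINE_ID,
           ORD_TYPE, OPERATION, ERR_MESG, CREATE_DT, INV_ORG_ID, PROCESS_DATE
      FROM DEV.DEV_Q
     WHERE STATUS IN ('ERROR', 'NEW')
     ORDER BY QUEUE_ID ASC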

  • DB Adapter for Logical Delete - selecting multiple Statuses

    Hi,
    I am using SOA 11.1.1.3 and I am using a DB Adapter to poll ERROR/NEW Records. Once Polled, I want the status to be changed to "PICKED" (Logical Delete)
    The DB Adapter only allows a single Unread Value to be selected. I tried to edit the SQL to use an "OR" condition but the code always replaces it with an "AND":
    Polling SQL:
    SELECT QUEUE_ID, BATCH_ID, STATUS, DEL_ID, ORDER_NUMBER, HEADER_ID, LINE_ID, ORD_TYPE, OPERATION, ERR_MESG, CREATE_DT, INV_ORG_ID, PROCESS_DATE FROM DEV.DEV_Q WHERE ((STATUS = 'ERROR') AND (STATUS = 'NEW')) ORDER BY QUEUE_ID ASC
    After Read SQL:
    UPDATE DEV.DEV_Q SET STATUS = 'PICKED' WHERE (QUEUE_ID = #QUEUE_ID)
    I tried to edit the SQL in the <PartnerLink>-or-mappings.xml but it isn't working.
    I did read some posts http://blogs.oracle.com/soa_how_to/2009/11/a_db_adapter_sample_for_pure_sql_and_logicaldeletepollingstrategy_scenario.html that mention that SQL edits can be made in the DB Adapter only if a Delete is used.
    Question is - where can I get the SQL generated (it is not there in the TopLink mapping) in order to edit it? Can someone please provide the steps?
    Thx a ton!
    Suhas.
    Edited by: user13616720 on Apr 28, 2011 3:37 AM

    Hi,
    wrong forum. Use the SOA forum here: SOA Suite
    Frank

  • Db Adapter Logical Delete not working

    Hi,
    I have an ESB that contains a dbadapter that performs a logical delete once the esb has finished processing. The problem we are seeing is that this logical delete is not always happening. We update a field in the source table from 0 to 1 on successful completion, but as I said, this does not always work, causing unique constraint violations on our destination tables. Disabling and re-enabling the dbadapter service in the ESB Console usually clears the problem up, though at times a bounce of the SOA Suite using ./opmnctl stopall is necessary. We are using SOA Suite 10.1.3.1.
    Any ideas what could be causing this behavior?

    10.1.3.1 had a number of issues, and I would highly recommend upgrading at the earliest opportunity. One common issue with 10.1.3.1 is developers building SOA objects in 10.1.3.3 or 10.1.3.4; you must make sure that your developers use the same version of JDeveloper as the server, e.g. 10.1.3.1.
    Here is a list of patches that I believe you should have in a 10.1.3.1 environment at a minimum; sorry, I don't have the descriptions, but hopefully one will address your issue.
    2617419
    5877231
    5838073
    5841736
    5905744
    5742242
    5729652
    5724766
    5664594
    5965376
    5672007
    6033824
    5758956
    5876231
    5900308
    5915792
    5473225
    5853207
    5990764
    5669155
    5149744
    cheers
    James

  • How to specify custom SQL in polling db adapter with logical delete option

    Hi all,
    I am writing a SOA composite app using JDeveloper SOA Suite 11.1.1.4 connecting to a SQL Server db using a polling DB Adapter with the logical delete option to send data to a BPEL process.
    I have requirements which go beyond what is supported in the JDeveloper UI for DB Adapter polling options, namely:
    * update more than one column to mark each row read, and
    * specify different SQL for the logical delete operation based on whether bpel processing of the data polled was successful or not.
    A complicating factor is that the polling involves two tables. Here is my full use-case:
    1) Polling will select data derived from two tables: e.g. 'headers' and 'details' simplified for this example:
    table: headers
    hid - primary key
    name - data label
    status - 'unprocessed', 'processed', or 'error'
    processedDate - null when data is loaded, set to current datetime when row is processed
    table: details
    hid - foreign key pointed at header.hid
    attr - data attribute name
    value - value of data attribute
    2) There is a many:1 relationship between detail and header rows through the hid columns. The db adapter polling SELECT shall return results from an outer join consisting of one header row and the associated detail rows where header.status = 'unprocessed' and header.hid = details.hid. (This is supported by the Jdeveloper UI)
    3) The polled data will be sent to be processed by a bpel process:
    3.1) If the bpel processing succeeds, the logical delete (UPDATE) operation shall set header.status = 'processed', and header.processedDate = 'getdate()'.
    3.2) If bpel processing fails (e.g. hits a data error while processing the selected data) the logical delete (UPDATE) operation shall set header.status = 'error', header.processedDate = 'getdate()', and header.errorMsg = '{some text returned from bpel}'.
    Several parts of #3 are not supported by the JDeveloper UI: updating multiple columns to mark the row processed, using getdate() to populate a value of one of those column updates, doing different update operations based on the results of the BPEL processing of the data (success or error), and using data obtained from BPEL processing as a value of those column updates (error message).
    I have found examples which describe specifying custom SQL using the polling delete option to create a template and then modifying the toplink file(s) to specify custom select and update SQL to implement a logical delete (e.g. http://dlimiter.wordpress.com/2009/11/05/advanced-logic-in-oracle-bpel-polling-database-adapter/ and http://myexperienceswithsoa.blogspot.com/2010/06/db-adapter-polling-tricks.html). But none of them match what I've got in my project, in the first case maybe because I'm using a higher version of JDeveloper, and in the second I think because in my case two tables are involved.
    Any suggestions would be appreciated. Thanks, John

    Hi John,
    You've raised a good scenario.
    First of all, let me say that the purpose of the DB polling transaction is to give you an option to initiate a process from a DB table/view, not to update multiple fields in a table (or do other complex manipulation on the table).
    So, when you choose to update a field in a record after reading it, you are "telling" the engine not to poll this record again. Sure, I guess you can find a solution/workaround for it, but I don't think this is the way....
    The question now is what to do?
    You can have another DB adapter where you update the data after finishing the process. In that case, after reading the data (in the polling transaction) update header.status = 'processed', for example, and after processing the selected data update the rest of the fields.
    Hope it makes some sense to you.
    Arik
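
    A sketch of Arik's two-adapter suggestion against John's tables (the table and column names and getdate() are from John's post; the exact statements are illustrative):

    -- 1) after-read UPDATE of the polling adapter: only flag the row so it is not polled again
    UPDATE headers SET status = 'processed' WHERE hid = #hid
    -- 2) a second, outbound DB adapter invoked at the end of the BPEL flow writes the
    --    final outcome ('processed' or 'error') plus the remaining fields
    UPDATE headers SET status = ?, processedDate = getdate(), errorMsg = ? WHERE hid = ?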

  • Database Adapter using Logical Delete Polling Strategy not updating field

    I have an ESB database adapter defined against a table. The adapter is set to use the Logical Delete Polling Strategy, and the table has an extra column defined as the "delete" field. When I register the adapter to the ESB and add a record to the table, the adapter reads the contents of the record as expected. However, 15 seconds later (that being the polling interval) it reads it again, and again 15 seconds after that, ad infinitum.
    I have defined the logical delete field as a single-character field and set the Read value to "T" (a record is normally inserted with that field having a null value). Results were as outlined in the previous paragraph. I redefined the field as a number and set the Read value to "1" (with a record normally inserted with that field having a value of "0"). Results were again as outlined in the previous paragraph.
    Apparently the update of the logical delete value to the Read value is not occurring. My question (obviously) is: Why?
    Thanks for your time.

    Here is the adapter WSDL:
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions
        name="Poll_PM_LOG"
        targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
        xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
        xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
        xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
        xmlns:pc="http://xmlns.oracle.com/pcbpel/"
        xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
        xmlns="http://schemas.xmlsoap.org/wsdl/">
      <types>
        <schema xmlns="http://www.w3.org/2001/XMLSchema">
          <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
                  schemaLocation="PollPMLOG_table.xsd"/>
        </schema>
      </types>
      <message name="LogCollection_msg">
        <part name="LogCollection" element="top:LogCollection"/>
      </message>
      <portType name="Poll_PM_LOG_ptt">
        <operation name="receive">
          <input message="tns:LogCollection_msg"/>
        </operation>
      </portType>
      <binding name="Poll_PM_LOG_binding" type="tns:Poll_PM_LOG_ptt">
        <pc:inbound_binding/>
        <operation name="receive">
          <jca:operation
              ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
              DescriptorName="PollPMLOG.Log"
              QueryName="Poll_PM_LOG"
              PollingStrategyName="LogicalDeletePollingStrategy"
              MarkReadFieldName="PROCESSED"
              MarkReadValue="T"
              SequencingFieldName="ID"
              MaxRaiseSize="1"
              MaxTransactionSize="unlimited"
              PollingInterval="15"
              NumberOfThreads="1"
              UseBatchDestroy="false"
              ReturnSingleResultSet="false"
              MappingsMetaDataURL="PollPMLOG_toplink_mappings.xml" />
          <input/>
        </operation>
      </binding>
      <service name="Poll_PM_LOG">
        <port name="Poll_PM_LOG_pt" binding="tns:Poll_PM_LOG_binding">
          <jca:address location="eis/DB/PM"
              UIConnectionName="PM"
              ManagedConnectionFactory="oracle.tip.adapter.db.DBManagedConnectionFactory"/>
        </port>
      </service>
      <plt:partnerLinkType name="Poll_PM_LOG_plt">
        <plt:role name="Poll_PM_LOG_role">
          <plt:portType name="tns:Poll_PM_LOG_ptt"/>
        </plt:role>
      </plt:partnerLinkType>
    </definitions>
    The field PROCESSED is defined as CHAR(1), and is normally null when a record is inserted. The value of the field is to be set to 'T' when the record is read by the polling adapter.
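
    For comparison, the mark-read step of this configuration should show up in the adapter logs as an UPDATE along these lines (the table name LOG is inferred from the descriptor PollPMLOG.Log; the statement itself is adapter-generated, not taken from the post):

    -- expected logical-delete UPDATE after each poll; its absence from the logs is the thing to check
    UPDATE LOG SET PROCESSED = 'T' WHERE (ID = ?)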

  • Database Adapter Logical Delete Not Working....

    Hi,
    I have an issue with the DB Adapter under BPEL GA 10.1.3.1. I'm trying to do a logical delete on a table; however, the logical delete isn't updating the records to show that they've been processed.
    I've created a simple test case with a 3-column table sitting in an Oracle XE DB, with the third column containing the logical delete flag. I've created a new process consisting of a DB Adapter partnerlink and a receive. In the logs (below) I can see the Select statement followed by the Update (logical delete) occurring, but the Update statement isn't actually run against the DB. I can copy the update statement and run it through SQL as the same DB user and it updates the records.
    Has anyone seen this before?
    Thanks.
    <2006-11-13 15:06:30,901> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT A, B, C FROM F_TABLE WHERE (C = ?)
         bind => [IN]
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX beginTransaction, status=NO_TRANSACTION
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally starting
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has begun internally
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBAdapterConstants isElementFormDefaultQualified> Element is FTABLE namespace is http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.ox.O_XParser parse> Transforming the row(s) [<FTABLE Record A />, <FTABLE Record B />, <FTABLE Record C />] read from the database into xml.
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_TABLE_ptt::receive(FTABLECollection)]Posting inbound JCA message to BPEL Process 'Read_TABLE' receive activity:
    <FTABLECollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE">
    <FTABLE>
    <a>Record</a>
    <b>A</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>B</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>C</b>
    <c>IN</c>
    </FTABLE>
    </FTABLECollection>
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Delivery Thread 'JCA-work-instance:Database Adapter-6 performing unsynchronized post() to localhost
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> Begin batch statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> UPDATE F_TABLE SET C = ? WHERE ((A = ?) AND (B = ?))
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, A]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, B]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, C]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> End Batch Statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX commitTransaction, status=STATUS_ACTIVE
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally committing
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has committed internally
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchBegin: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) starting...
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchComplete: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) has completed - final size = 3
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> executing <receive> at line 58
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> set variable 'Read_TABLE_receive_InputVariable' to be readOnly, payload ref {FTABLECollection=108e2d22815529ac:-3067a9ff:10edf296212:-78da}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> variable 'Read_TABLE_receive_InputVariable' content {FTABLECollection=oracle.xml.parser.v2.XMLElement@1303465}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELInvokeWMP::Read_TABLE> executing <invoke> at line 61

    Hi,
    I haven't yet used 10.1.3, but we had a number of issues under 10.1.2.0.2 around caching and upd/ins/del.
    A number of things we changed were
    - set usesBatchWriting to false in oc4j-ra.xml file
    - set identityMap to NoIdentityMap via toplink work bench
    - set should-always-refresh-cache-on-remote,should-disable-cache-hits,should-disable-cache-hits-on-remote to true in toplink mappings.xml file (note this last one is only if toplink was not used to insert the source data).
    Ashley

  • Duplicate processing by DBAdapter when using Distributed Polling with Logical Delete Strategy

    We have DBAdapter-based polling services in OSB running across two Active-Active clusters (a total of 20 managed servers across the 2 clusters), listening to the same database table. (Both clusters read from the same source DB.) We want to ensure distributed polling without duplication,
    hence in the DBAdapter we have selected the Distributed Polling option, meaning we are using "Select For Update Skip Locking".
    But we see that sometimes the same rows are processed by two different nodes and transactions are processed twice.
    How do we ensure that only one managed server processes a particular row using select for update? We do not want to use the markReservedValue option which was preferred in older versions of the DBAdapter.
    We are using the following values in the DB Adapter configuration; the JDev project for the DBAdapter and the OSB proxy using the DBAdapter are attached.
    LogicalDeletePollingStrategy
    MarkReadValue = Processed
    MarkUnreadValue = Initiate
    MarkReservedValue = <empty as we are using Skip Locking>
    PollingFrequency = 1 second
    maxRaiseSize = 1
    MaxTransactionSize = 10
    DistributionPolling = checked   (adds lock-n-wait in properties file and changes the SQL to SELECT FOR UPDATE SKIP LOCKED)
    Thanks and Regards
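
    For reference, with Distributed Polling checked the generated polling query takes roughly this shape (table and column names are illustrative; the real SQL is generated by the adapter):

    -- skip-locked polling: nodes polling concurrently skip rows already locked by another node
    SELECT id, payload, status
      FROM source_table
     WHERE status = 'Initiate'
     ORDER BY id ASC
       FOR UPDATE SKIP LOCKED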

    Hi All,
    Actually I'm also facing the same problem.
    Steps I followed:
    1) Created a job_table in the database:
    create table job_table(id, job_name, job_desc, job_status)
    2) Created a BPEL process to test the inbound distributed polling.
    3) Configured the DBAdapter for polling:
    a) update a field in the job_table with logical delete
    b) select the field name from the drop-down
    c) change the Read value --> InProgress and the UnRead value --> Ready
    d) don't change the Reserved value
    e) select the check box for "distributed polling"
    f) the query will be appended with "For update NoWait."
    g) click Next and then Finish
    4) Then I followed the steps below.
    To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. You see the message, "Set Refresh Identity Map Results?" If a query uses pessimistic locking, it must refresh the identity map results. Click OK when you see the message, "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you see something like this for the query: <lock-mode>1</lock-mode>.
    5) lock-mode is not changed to 1 in toplink_mappings.xml.
    Can we edit the toplink_mappings.xml manually?
    If yes, what are all the values I need to change in the toplink_mappings.xml file so that it will not pick the same record multiple times in a clustered environment?
    Please help me out; this is urgent.
    Thanking you in advance.

  • Rollback in database adapter with delete polling strategy

    Hi All,
    We have designed a database adapter with the "Delete the Rows That Were Read" after-read strategy, with auto-retry attempts set to 5. In the BPEL process where we receive the DB records, we throw a rollback fault in case of any fault.
    Database adapter polling is retried 5 times in case of faults, but the data is deleted from the tables after the 5 retries. Is this the expected behavior of the DB adapter? Doesn't the rollback fault roll back the complete transaction and leave the failed data in the tables?
    Can anyone provide more information on what this polling strategy does after the auto-retries are exhausted?
    Thanks.
    -Pavan

    You need to include your BPEL process in the same transaction as the DB adapter.
    Use the following properties on the BPEL component to do this:
    <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
    <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
    Make sure the connection factory that you are using in the DB adapter is XA-transaction enabled.

  • DBAdapter polling with logical delete x distrib polling x DB rows per trans

    Hi all.
    I'm trying to configure a DBAdapter with "logical delete" polling strategy, distributed polling (cluster environment) and a defined number of "Database Rows per Transaction".
    When I check the box "Distributed Polling", the generated SQL gets appended with "FOR UPDATE NOWAIT".
    However, when I set a value for "Database Rows per Transaction", the "FOR UPDATE NOWAIT" clause disappears.
    Is this a bug, or some limitation related to the "logical delete" strategy???
    Thanks
    Denis


  • Using File Adapter with Logical Name

    I am creating a file adapter with a Logical Name. Apart from giving the Logical Name, is there any other configuration I am supposed to do on the WebLogic side before I start using it?
    Thanks in Advance.

    After configuring the file adapter, all you need to do is give the path as the value for the logical name which you created earlier. You can do that in the file adapter property inspector. There is no need to do anything at the server level. Lemme know.
    Thanks,
    N

  • DB Adapter polling and logical delete: Process succeed but lock on database

    We have the following business flow, which results in BPEL behaviour we did not expect. I hope that some of you have had the same experience.
    The database adapter polls new records from a table based on a value (20) in a status field. When it finds records, it processes them and updates the status field (20 -> 30).
    When we lock a record from sqlplus (select for update on a record with status 20), we see in the logging that the update fails, but the process runs successfully. At the database we see an outstanding lock. If we commit the sqlplus session (after a few minutes), the status field is updated to 30.
    This is strange to me; the BPEL process had already finished successfully.
    Anyone seen this behaviour?
    Regards,
    Marc
    The logging
    <2006-05-02 13:06:37,052> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT STATUS, USERID FROM AQADM.TEST_UPDATE_VANUIT_BPEL WHERE ((STATUS = ?) AND ((STATUS <> ?) OR (STATUS IS NULL)))
    /ORA-
    <2006-05-02 12:03:21,473> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Delivery Thread 'JCA-work-instance:Database Adapter-0 performing unsynchronized post() to localhost
    <2006-05-02 12:03:21,879> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> begin transaction
    <2006-05-02 12:03:21,881> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> UPDATE PROV.PROV_USER_ACCOUNT SET STATUS_CMMAC = ? WHERE (PUAT_ID = ?)
    bind => [30, 81600]
    <2006-05-02 12:03:22,188> <WARN> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.InboundWork runOnce> Exception during polling of the database ORABPEL-11624
    DBActivationSpec Polling Exception.
    Query name: [prov_cmmac], Descriptor name: [PROVprocessmacaddress.ProvUserAccount]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by Exception [TOPLINK-4002] (OracleAS TopLink - 10g (9.0.4.5) (Build 040930)): oracle.toplink.exceptions.DatabaseException
    Exception Description: java.sql.SQLException: ORA-20220: MDB3 ERR: session not in admin mode
    ORA-06512: at "PROV.PUAT_BUR", line 90
    ORA-04088: error during execution of trigger 'PROV.PUAT_BUR'

    The following entry:
    <MESSAGE>
    <HEADER>
    <TSTZ_ORIGINATING>2008-12-11T17:50:20.119+01:00</TSTZ_ORIGINATING>
    <COMPONENT_ID>tip</COMPONENT_ID>
    <MSG_TYPE TYPE="TRACE"></MSG_TYPE>
    <MSG_LEVEL>16</MSG_LEVEL>
    <HOST_ID>u514603.cmcdev.be</HOST_ID>
    <HOST_NWADDR>10.151.70.3</HOST_NWADDR>
    <MODULE_ID>esb.server.service.impl.inadapter</MODULE_ID>
    <THREAD_ID>18</THREAD_ID>
    <USER_ID>orasoad</USER_ID>
    </HEADER>
    <CORRELATION_DATA>
    <EXEC_CONTEXT_ID><UNIQUE_ID>10.151.70.3:11323:1229014220014:8</UNIQUE_ID><SEQ>0</SEQ></EXEC_CONTEXT_ID>
    </CORRELATION_DATA>
    <PAYLOAD>
    <MSG_TEXT>JCA: &lt;oracle.tip.adapter.db.TopLinkLogger log> SELECT UNIQUE_SUPPLIER_ID, SUPPLIER_NAME, SUPPLIER_NUMBER, NUMBER_TYPE, CM_SUPPLIER_TYPE, VAT_REGISTRATION_NUM, M_NUMBER, ADM_ID, OFB_PARTY_ID, CENTURY, LANGUAGE, ATTRIBUTE_CATEGORY, SPARE6, SPARE8, SPARE9, SPARE10, SPARE11, SPARE12, SPARE13, SPARE14, SPARE15, STATUS, ERROR_CODE, FILE_ID FROM XXCM.XXCM_AP_SUPPLIERS WHERE (STATUS = ?) ORDER BY UNIQUE_SUPPLIER_ID ASC FOR UPDATE NOWAIT
         bind => [MINE[undef:instance]]
    </MSG_TEXT>
    </PAYLOAD>
    </MESSAGE>
    can be found in $ORACLE_HOME/j2ee/oc4j_soa/log/oc4j_soa_default_group_1/oc4j/*.xml
    Regards

  • Logical Delete in DB Polling - OSB

    Hello All,
    I have a question on polling. I have a logical delete column with the Read value 'P' and the unread value 'N'. Unlike BPEL, OSB's polling does not set a record to 'P' until the process completes successfully. My DB polling adapter polls the same records while the previous instance is still processing, or when the previous instance ended in error. Is there a way to logically delete the record immediately once the record is read?
    To avoid this scenario, I added an error handler to set the record to 'E' if the instance encountered any error. But by the time the error handler kicks in and updates the record, OSB has polled the record a few times. I increased my polling interval from 5 seconds to 30 seconds, but no luck. Any clue how to handle this scenario?
    Thanks,
    Dwarak

    "Is there a way to logically delete the record immediately once the record is read?"
    Don't do any logic in the DB adapter proxy. Instead, make the proxy just write to a JMS queue and then have your processing logic in the JMS proxy service which reads off the JMS queue.
