Logically delete or Complete manually?

Hi group,
Sometimes we need to finish off a WF.  Examples:
- A travel form has been approved in the backend (PR05), and we need to end the WF in which the trip is still sitting with the approver.
- The leave request application sometimes gets messed up, and a WF needs to be terminated
I am a little unsure whether to use "Logically delete" or "Complete manually" for the step. Could anybody explain the difference? I am fairly new to WF. My hunch is that "Complete manually" just executes the step, whereas "Logically delete" stops the entire WF. But I have not found good literature on when to use these options.
Thanks in advance
Kirsten

Hi Kibo,
"Complete manually" comes into the picture when the approver cannot execute the work item for some reason but you still want the rest of the workflow steps to run, e.g. sending a mail to the initiator after approval. If you complete the work item manually, the workflow continues and completes as normal.
However, if the workflow is no longer relevant and the remaining steps are not required after approval, then you can logically delete the workflow; no further steps are executed.
Regards,
Sangvir Singh

Similar Messages

  • Wait for Event FIPP - Completed Logically Deleted

    Hi WF Experts,
    We have a WF for Release of payments.
    It has 1 Fork with 2 parallel branches (both necessary).
    One branch has the approval process for the amount release, and the other branch calls the account assignment approval subworkflow.
    The approval branch has an UNTIL loop with an increment counter; it picks the agents within the loop until the loop condition is reached, i.e. until no more approvals are required.
    The other branch, before calling the subworkflow, waits on a WAIT FOR EVENT step (FIPP -> COMPLETED) with container element FIPPID.
    Both branches need to complete so that the fork ends and the WF comes out of it.
    The approval branch works perfectly. But in the other branch, the WAIT FOR EVENT FIPP -> COMPLETED step gets logically deleted, so this branch never goes on to start the account assignment approval subworkflow. As a result, the fork with its 2 necessary branches does not end, the WF stops here, and it cannot go further to set the release indicators (a background task) that would confirm the end of the WF process.
    When I look up the WI ID of this wait event in SWIA, it shows the status CANCELLED.
    The WF user log as a whole shows the status COMPLETED, as all the approvers have approved the release.
    Why does the Wait for Event step get logically deleted? Please advise.

    Dear Experts,
    Any update on this? I am unable to work out why the workflow is getting stopped.
    -Anwar

  • How to delete a completed workitem logically or completely

    Hi buddys,
    Can anyone kindly tell me how to delete a completed leave request work item, either completely or logically?
    For example: an employee has applied for leave, it has been approved by his/her manager, and the backend has been updated. The work item status is now completed, and I need to delete this completed work item either completely or logically.
    I went to SWIA, entered the work item number and followed the procedure, but nothing happens.
    Could anyone tell me how to delete this work item?
    Regards
    Siri

    Try through T.code SBWP.

  • Logical Delete in DB Polling - OSB

    Hello All,
    I have a question on polling. I have a logical delete column with the read value 'P' and the unread value 'N'. Unlike BPEL, OSB's polling does not set a record to 'P' until the process completes successfully, so my DB polling adapter polls the same records again while the previous instance is still processing, or after the previous instance ended in error. Is there a way to logically delete the record immediately once it is read?
    To work around this, I added an error handler to set the record to 'E' if the instance encountered an error. But by the time the error handler kicks in and updates the record, OSB has already polled the record a few times. I increased the polling interval from 5 seconds to 30 seconds, but no luck. Any clue how to handle this scenario?
    Thanks,
    Dwarak

    > Is there a way to logically delete the record immediately once the record is read?
    Don't do any processing logic in the DB adapter proxy. Instead, have that proxy just write the record to a JMS queue, and put your processing logic in a separate JMS proxy service that reads off the queue. The poller's transaction then finishes (and the row is marked read) as soon as the enqueue succeeds, regardless of what happens downstream.
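    To make that pattern concrete, here is a minimal sketch in plain Java (outside OSB) of the same idea, assuming a table F_TABLE with a STATUS column and an already-created JMS session and producer — all names here are illustrative, not from the thread. The row is marked 'P' in the same polling transaction that enqueues it, so downstream failures can no longer cause re-polling:

        // Sketch only: claim each unread row, hand it to a JMS queue, and mark it
        // 'P' immediately, so slow or failing downstream logic cannot re-trigger
        // the poller. Table F_TABLE and column STATUS are assumed names.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.jms.JMSException;
        import javax.jms.MessageProducer;
        import javax.jms.Session;

        public class PollAndHandOff {
            public static void pollOnce(Connection db, Session jms, MessageProducer out)
                    throws SQLException, JMSException {
                db.setAutoCommit(false);
                try (PreparedStatement select = db.prepareStatement(
                         "SELECT ID, PAYLOAD FROM F_TABLE WHERE STATUS = 'N' " +
                         "FOR UPDATE SKIP LOCKED");
                     PreparedStatement mark = db.prepareStatement(
                         "UPDATE F_TABLE SET STATUS = 'P' WHERE ID = ?");
                     ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        out.send(jms.createTextMessage(rs.getString("PAYLOAD")));
                        mark.setLong(1, rs.getLong("ID"));   // logical delete now,
                        mark.executeUpdate();                // not after processing
                    }
                }
                db.commit();  // rows are 'P' once the enqueue has succeeded
            }
        }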

  • Database Adapter Logical Delete Not Working....

    Hi,
    I have an issue with the DB Adapter under BPEL GA 10.1.3.1. I'm trying to do a logical delete on a table, but the logical delete isn't updating the records to show that they've been processed.
    I've created a simple test case with a 3-column table sitting in an Oracle XE DB, the third column containing the logical delete flag, and a new process consisting of a DB Adapter partner link and a receive. In the logs (below) I can see the SELECT statement followed by the UPDATE (the logical delete), but the UPDATE never actually takes effect in the DB. If I copy the update statement and run it through SQL as the same DB user, it updates the records.
    Has anyone seen this before?
    Thanks.
    <2006-11-13 15:06:30,901> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT A, B, C FROM F_TABLE WHERE (C = ?)
         bind => [IN]
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX beginTransaction, status=NO_TRANSACTION
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally starting
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has begun internally
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBAdapterConstants isElementFormDefaultQualified> Element is FTABLE namespace is http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.ox.O_XParser parse> Transforming the row(s) [<FTABLE Record A />, <FTABLE Record B />, <FTABLE Record C />] read from the database into xml.
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_TABLE_ptt::receive(FTABLECollection)]Posting inbound JCA message to BPEL Process 'Read_TABLE' receive activity:
    <FTABLECollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE">
    <FTABLE>
    <a>Record</a>
    <b>A</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>B</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>C</b>
    <c>IN</c>
    </FTABLE>
    </FTABLECollection>
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Delivery Thread 'JCA-work-instance:Database Adapter-6 performing unsynchronized post() to localhost
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> Begin batch statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> UPDATE F_TABLE SET C = ? WHERE ((A = ?) AND (B = ?))
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, A]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, B]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, C]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> End Batch Statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX commitTransaction, status=STATUS_ACTIVE
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally committing
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has committed internally
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchBegin: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) starting...
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchComplete: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) has completed - final size = 3
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> executing <receive> at line 58
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> set variable 'Read_TABLE_receive_InputVariable' to be readOnly, payload ref {FTABLECollection=108e2d22815529ac:-3067a9ff:10edf296212:-78da}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> variable 'Read_TABLE_receive_InputVariable' content {FTABLECollection=oracle.xml.parser.v2.XMLElement@1303465}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELInvokeWMP::Read_TABLE> executing <invoke> at line 61

    Hi,
    I haven't used 10.1.3 yet, but we had a number of issues under 10.1.2.0.2 around caching and update/insert/delete.
    A number of things we changed were:
    - set usesBatchWriting to false in the oc4j-ra.xml file
    - set the identityMap to NoIdentityMap via the TopLink Workbench
    - set should-always-refresh-cache-on-remote, should-disable-cache-hits and should-disable-cache-hits-on-remote to true in the toplink_mappings.xml file (note this last one is only needed if TopLink was not used to insert the source data).
    Ashley
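    For what it's worth, those descriptor-level cache settings can also be expressed in code. Below is a hedged sketch using the later EclipseLink names for the TopLink API — in 10.1.3 itself you would go through the Workbench as Ashley describes, so treat this as illustrative only:

        // Sketch: the cache-related descriptor changes listed above, written as
        // an EclipseLink DescriptorCustomizer. Not the 10.1.3 procedure.
        import org.eclipse.persistence.config.DescriptorCustomizer;
        import org.eclipse.persistence.descriptors.ClassDescriptor;

        public class FTableCacheCustomizer implements DescriptorCustomizer {
            @Override
            public void customize(ClassDescriptor descriptor) {
                descriptor.alwaysRefreshCache();          // should-always-refresh-cache
                descriptor.alwaysRefreshCacheOnRemote();  // ...-on-remote variant
                descriptor.disableCacheHits();            // should-disable-cache-hits
                descriptor.disableCacheHitsOnRemote();    // ...-on-remote variant
            }
        }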

  • Duplicate processing by DBAdapter when using Distributed Polling with Logical Delete Strategy

    We have DBAdapter-based polling services in OSB running across two active-active clusters (20 managed servers in total across the 2 clusters), all listening to the same database table (both clusters read from the same source DB). We want distributed polling without duplication, so in the DBAdapter we have selected the Distributed Polling option, meaning we are using SELECT FOR UPDATE SKIP LOCKED.
    But we see that the same rows are sometimes processed by two different nodes, so transactions are processed twice.
    How do we ensure that only one managed server processes a particular row using SELECT FOR UPDATE? We do not want to use the MarkReservedValue option that was preferred in older versions of the DBAdapter.
    We are using the following values in the DBAdapter configuration; the JDev project for the DBAdapter and the OSB proxy using it are attached.
    LogicalDeletePolling Strategy
    MarkReadValue = Processed
    MarkUnreadValue = Initiate
    MarkReservedValue = <empty as we are using Skip Locking>
    PollingFrequency = 1 second
    maxRaiseSize = 1
    MaxTransactionSize = 10
    DistributionPolling = checked   (adds lock-n-wait in properties file and changes the SQL to SELECT FOR UPDATE SKIP LOCKED)
    Thanks and Regards
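    For reference, here is a minimal JDBC sketch of the contract that SELECT FOR UPDATE SKIP LOCKED is supposed to give you (the schema follows the job_table example in the reply below, and is an assumption): rows locked by one node's open transaction are invisible to its peers, so duplicates can only appear when the poll or the status update runs outside that locking transaction.

        // Sketch of the SKIP LOCKED semantics the adapter relies on. Each node
        // runs this in its own transaction; rows it locks are skipped by peers.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SkipLockedPoller {
            public static int pollOnce(Connection db) throws SQLException {
                db.setAutoCommit(false);
                int claimed = 0;
                try (Statement select = db.createStatement();
                     ResultSet rs = select.executeQuery(
                         "SELECT id FROM job_table WHERE job_status = 'Initiate' " +
                         "FOR UPDATE SKIP LOCKED");       // peers skip locked rows
                     PreparedStatement mark = db.prepareStatement(
                         "UPDATE job_table SET job_status = 'Processed' WHERE id = ?")) {
                    while (rs.next() && claimed < 10) {   // MaxTransactionSize = 10
                        // ... process the row here, inside the locking transaction ...
                        mark.setLong(1, rs.getLong("id"));
                        mark.executeUpdate();
                        claimed++;
                    }
                }
                db.commit();  // locks released only now; rows already marked Processed
                return claimed;
            }
        }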

    Hi All,
    Actually I'm also facing the same problem.
    Steps I followed:
    1) Created a job_table in the database:
    create table job_table(id, job_name, job_desc, job_status)
    2) Created a BPEL process to test inbound distributed polling.
    3) Configured the DBAdapter for polling:
    a) update a field in the job_table with logical delete,
    b) select the field name from the drop-down,
    c) change the read value to "InProgress" and the unread value to "Ready",
    d) don't change the reserved value,
    e) select the check box for "distributed polling",
    f) the query gets appended with "FOR UPDATE NOWAIT",
    g) click Next and then Finish.
    4) Then I followed the steps below:
    To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. You see the message, "Set Refresh Identity Map Results?" If a query uses pessimistic locking, it must refresh the identity map results. Click OK when you see the message, "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you see something like this for the query: <lock-mode>1</lock-mode>.
    5) But lock-mode is not changed to 1 in toplink_mappings.xml.
    Can we edit toplink_mappings.xml manually? If yes, what are all the values I need to change in toplink_mappings.xml so that it will not pick the same record multiple times in a clustered environment?
    Please help me out, this is urgent.
    Thanking you in advance.
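    If the wizard will not write the lock mode for you, the same setting can in principle be applied programmatically on the named query. A hedged sketch using the later EclipseLink names for the TopLink API — the query-name lookup and class names are assumptions, so verify against your generated project before relying on it:

        // Sketch: set pessimistic locking on the inbound polling query in code,
        // the programmatic equivalent of <lock-mode>1</lock-mode> in
        // toplink_mappings.xml. Names are placeholders.
        import org.eclipse.persistence.descriptors.ClassDescriptor;
        import org.eclipse.persistence.queries.ObjectLevelReadQuery;

        public class PollingQueryLocking {
            public static void enableLocking(ClassDescriptor descriptor, String queryName) {
                ObjectLevelReadQuery query = (ObjectLevelReadQuery)
                    descriptor.getQueryManager().getQuery(queryName);
                query.acquireLocks();               // pessimistic locking (wait on lock)
                query.refreshIdentityMapResult();   // required with pessimistic locking
            }
        }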

  • Db Adapter Logical Delete not working

    Hi,
    I have an ESB that contains a DBAdapter which performs a logical delete once the ESB has finished processing. The problem we are seeing is that this logical delete does not always happen. We update a field in the source table from 0 to 1 on successful completion, but as I said, this does not always work, causing unique constraint violations on our destination tables. Disabling and re-enabling the DBAdapter service in the ESB Console usually clears the problem up, though at times a bounce of the SOA Suite using ./opmnctl stopall is necessary. We are using SOA Suite 10.1.3.1.
    Any ideas what could be causing this behavior?

    10.1.3.1 had a number of issues, and I would highly recommend upgrading at the earliest opportunity. One common problem in 10.1.3.1 environments is SOA objects developed in 10.1.3.3 or 10.1.3.4; you must make sure that your developers use the matching version of JDeveloper, i.e. 10.1.3.1.
    Here is a list of patches that I believe you should have in a 10.1.3.1 environment at a minimum, sorry I don't have the descriptions, hopefully one will address your issue.
    2617419
    5877231
    5838073
    5841736
    5905744
    5742242
    5729652
    5724766
    5664594
    5965376
    5672007
    6033824
    5758956
    5876231
    5900308
    5915792
    5473225
    5853207
    5990764
    5669155
    5149744
    cheers
    James

  • DBAdapter polling with logical delete x distrib polling x DB rows per trans

    Hi all.
    I'm trying to configure a DBAdapter with the "logical delete" polling strategy, distributed polling (cluster environment) and a defined number of "Database Rows per Transaction".
    When I check the box "Distributed Polling", the generated SQL gets appended with "FOR UPDATE NOWAIT".
    However, when I set a value for "Database Rows per Transaction", the "FOR UPDATE NOWAIT" clause disappears.
    Is this a bug, or some limitation related to the "logical delete" strategy?
    Thanks
    Denis

    Hi All,
    Actually I'm also facing the same problem.
    Steps I followed:
    1) Created a job_table in the database:
    create table job_table(id, job_name, job_desc, job_status)
    2) Created a BPEL process to test inbound distributed polling.
    3) Configured the DBAdapter for polling:
    a) update a field in the job_table with logical delete,
    b) select the field name from the drop-down,
    c) change the read value to "InProgress" and the unread value to "Ready",
    d) don't change the reserved value,
    e) select the check box for "distributed polling",
    f) the query gets appended with "FOR UPDATE NOWAIT",
    g) click Next and then Finish.
    4) Then I followed the steps below:
    To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. You see the message, "Set Refresh Identity Map Results?" If a query uses pessimistic locking, it must refresh the identity map results. Click OK when you see the message, "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you see something like this for the query: <lock-mode>1</lock-mode>.
    5) But lock-mode is not changed to 1 in toplink_mappings.xml.
    Can we edit toplink_mappings.xml manually? If yes, what are all the values I need to change in toplink_mappings.xml so that it will not pick the same record multiple times in a clustered environment?
    Please help me out, this is urgent.
    Thanking you in advance.

  • !!! Statements of Logic Deleting Files and messing with System are True !!!

    This morning I answered some guy's post about Logic deleting everything at the same level as the project folder under certain circumstances. I tried troubleshooting for him but could not recreate it.
    Then this happened to me today:
    While working in Logic 8 on Leopard, it all of a sudden stopped communicating with my Unitor8 via USB. I restarted, but the Unitor8, which I reset twice, would NOT communicate with L8 anymore. It would remain in patch mode (red light lit though the CPU is running).
    I've had this erratic behavior many times since the past years on many computers so it wasn't really new. All one needs to do is reinstall the Unitor Family driver.
    So I decided to do so:
    I inserted my Logic Studio DVD, opened the installer and checked ONLY the Unitor Family Drivers. After the install completed I restarted the Mac. Everything went fine. So I booted L8. While it was booting I saw my hard drives on the right side of the desktop flash once. I thought that was strange and took a closer look.
    Now my boot drive was missing from all the drives on the desktop and in the sidebar window...
    I tried to Apple-click on an application in the Dock. The Finder opened the Applications folder and showed me the app I had clicked on. But I could not navigate to the root of my system drive, even via the "path" symbol in the toolbar. So I hit Shift+Apple+G and entered /Volumes/MySystemVolume - it showed my volume "grayed out" in the Finder window...
    I tried repairing the disk and permissions, but apparently nothing was wrong with my boot drive, besides the fact that the Finder (and all other applications when wanting to open or save) could not "see" my system drive... I could normally launch all the apps residing in the Applications folder on the system drive... So this kept getting weirder...
    So I opened my favorite application, TinkerTool System, went to the Files tab and chose the underlying tab "Attributes", which shows Macintosh HFS and Finder attributes... I pulled my system drive onto the drop area and discovered something VERY amazing: the "Display in Finder" option had mysteriously been altered to "INVISIBLE".
    I changed it to Visible and there it was - my system drive... showing up in the Finder...
    I made a restart to make sure everything was OK....
    After the restart the following settings had VANISHED:
    1) Pixadex = ALL ICONS (I had 230 MB of sorted icons) stored in my Application Support folder
    2) All my Safari bookmarks were gone - they were still in ~/Library/Safari - but Bookmarks.plist was UNREADABLE, even with a text editor and an XML editor
    3) Numerous other applications had LOST their authorization preferences, so I had to re-authorize many of my Audio Unit plug-ins...
    4) My Dock had been reset
    5) My Monitor arrangement had been reset...
    6) My date and time had been reset
    7) My energy settings had been reset.... (I had NOT zapped the P-RAM)
    Since installing Leopard 2 weeks ago - I have CAREFULLY monitored ANY activity after INSTALLING anything in order to be able to troubleshoot - and this problem definitely occurred right after installing the USB Family Drivers and launching L8....
    There was really NO HARM done other than the sweating I did while troubleshooting, but if L8 is capable of doing what happened to that other guy, losing his whole folder, and now doing this to me - where are the limits? When is someone going to get REALLY hurt?
    Please Apple - I am sending you this post as a BUG report as well - could you PLEASE look into this, as there has to be some kind of VERY dangerous MALFUNCTION within Logic or its installers.

    Wow, I was just joking with the medication thing. I didn't actually think it was seriously a mental health issue. Sorry.
    In any case, though, if he has full access to the Mac, mental health issues or no, there's not a whole lot to do about it. If it is possible to take away administrative privileges without causing a huge fuss, then you can limit his access to certain things. However, you can't mess with privileges on a Time Machine backup (doing so breaks it), and that means he's going to have access to at least parts of the backup. Which means he can trash at least some of the backup data, and if he deletes files from the TM backup using the Finder, he'll have essentially trashed the whole backup. He might as well have the ability to delete all the files on the machine if he has the power to delete backups.
    Honestly, in this situation, he should either only be allowed to use an account with Parental Controls on, which may not be an option (I don't know how offended he would be at such a suggestion), or he should be using a different computer entirely and have no access at all to his wife's computer. Alternately, have his wife keep a backup that is hidden - ie, connect the backup drive periodically and then remove it and hide it so he can't mess with it. That will at least secure the backup.

  • Logical Deletes

    I'm working on a project with a database where entries are deleted logically (a column deletets is set to the current time).
    How can I handle this with JDO/Kodo?
    Requirements:
    1. Records with deletets != null must not be read. Is there a generic way to do this?
    2. If the StoreManager decides to remove a record, it should not delete the record but issue an update setting deletets=xx.
    Thanks for any help

    It is dirty, but you could try this:
    - Map your model onto views which have a deleted=false criterion
    - Decorate the Kodo PM class (and register it in kodo.properties) with delete-method overrides in which you:
    - mark the object as logically deleted
    - remove any references to it, whether N-1, 1-N or M-N
    - Decorate the commit method so that after commit you inspect all transactional objects (get them into a list before calling commit) and make all logically deleted objects transient, to get them out of your PM
    Potential issues are:
    - you still have the rows in the database, so any uniqueness constraints will fire and you have no way of pre-verifying them
    - Kodo might clash with this filtering view during database flush
    - you need to track and remove all references to a logically deleted object
    - any other issue I missed :-)
    It might give you the desired transparency, but it might cause some subtle problems down the road.
    <Laurent Goldsztejn> wrote in message news:[email protected]..
    1. Kodo doesn't offer a generic way of updating a boolean-type column instead of deleting the record. This column should be managed and updated like any generic boolean column.
    2. A delete call shouldn't be made there; instead, run an update query on the delete column so that the physical delete does not occur.
    So the best way is IMHO to manage this column manually (set true or false) and not use the JDO-specific method for deletion.
    Laurent
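    To make Laurent's suggestion concrete, here is a minimal plain-JDO sketch — the Customer class and its deletets field are placeholders, not from the thread. Deletion becomes an ordinary field update, and reads filter on the timestamp explicitly:

        // Sketch: manual soft delete with plain JDO, per the advice above.
        import java.sql.Timestamp;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class SoftDeleteDao {
            // Minimal placeholder for a persistent class with a delete timestamp.
            public static class Customer {
                private Timestamp deletets;
                public void setDeletets(Timestamp t) { this.deletets = t; }
                public Timestamp getDeletets() { return deletets; }
            }

            // Instead of pm.deletePersistent(c): mark the record as logically
            // deleted. The change is flushed on commit as a normal UPDATE;
            // no DELETE is ever issued.
            public static void softDelete(Customer c) {
                c.setDeletets(new Timestamp(System.currentTimeMillis()));
            }

            // Requirement 1: logically deleted records must never be read.
            @SuppressWarnings("unchecked")
            public static List<Customer> findLive(PersistenceManager pm) {
                Query q = pm.newQuery(Customer.class, "deletets == null");
                return (List<Customer>) q.execute();
            }
        }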

  • Logic pro 7 complete

    I am about to sell my Apple Logic Pro 7 Complete - disc, books & XSkey -
    and I would like to hear what this forum thinks is a fair value.
    firmware 2.51-0  Ver 7.2.0(954)
    many thanx
    cheers mick

    I have seen them sell (in that condition, with all the manuals and the XSkey) on eBay in the US for $109 - $129.

  • Logical delete in database adapter

    Hello
    I was wondering if someone has a solution to this database polling problem. You can specify the logical delete column and give values for the READ, UNREAD and RESERVED states. The problem is that when, for example, an ESB project polls a specific table and starts an instance for every new row whose logical delete field has the UNREAD value, and then something unexpected goes wrong, the database adapter still updates the row to the READ value. This is problematic if we have thousands of rows and would like to separate the errored rows from the successfully read ones. Is there any (easy) way to update the rows that went wrong to some value other than READ?
    I don't know if anyone understood me, but just for clarification here's an example:
    I have an ESB project which polls a specific database table and parses an XML from the data. After this, the ESB project sends the data to some Web Service. The database table has a column CONDITION_CODE in which the value 0 means unread and the value 1 means read. If everything goes fine, there are no problems. But if the Web Service is unavailable or the data is malformed, the database adapter still updates CONDITION_CODE to 1! We have no way (except listening on the ESB_ERROR topic and implementing some error handling there) to know which rows were successfully delivered and which were not...
    Hope I was able to clarify the problem... And I hope someone could be able to provide me with answer.
    Best Regards Tuomas

    Did you use the RESERVED value property? How about the transaction mechanism - do you have global transactions? I guess you would have to use them!
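    If changing the adapter's behaviour is not an option, one workaround is to take over the status update yourself so that failures get their own value. A minimal JDBC sketch under assumptions — the table and columns follow Tuomas's CONDITION_CODE example, and the error code 2 is invented here:

        // Sketch: keep failed deliveries distinguishable from successful ones.
        // CONDITION_CODE: 0 = unread, 1 = read/delivered, 2 = error (assumed).
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class DeliveryMarker {
            public static void deliver(Connection db, long id, String xml)
                    throws SQLException {
                try {
                    callWebService(xml);            // downstream delivery
                    setConditionCode(db, id, 1);    // delivered: normal "read"
                } catch (Exception e) {
                    setConditionCode(db, id, 2);    // errored: easy to find and replay
                }
            }

            private static void setConditionCode(Connection db, long id, int code)
                    throws SQLException {
                try (PreparedStatement ps = db.prepareStatement(
                        "UPDATE SOURCE_TABLE SET CONDITION_CODE = ? WHERE ID = ?")) {
                    ps.setInt(1, code);
                    ps.setLong(2, id);
                    ps.executeUpdate();
                }
            }

            private static void callWebService(String xml) throws Exception {
                // placeholder for the actual Web Service call
            }
        }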

  • Logical delete in a DBAdapter

    Hi,
    I am using a logical delete to poll a table. A simple scheme of unprocessed records set to 'N', processed ones to 'Y' and locked ones to 'X' doesn't seem to work: the process doesn't pick up any inserted records.
    Are any other settings required to get this logical delete working?
    Thanks,
    Valli.

    When the table definition already has a primary key defined, the database adapter takes that field directly and constructs the WHERE condition from it (UPDATE tbl SET col1 = ..., col2 = ... WHERE primarykey = #input); in this case your input should have the primary key field mapped from the BPEL process.
    If you don't have a primary key defined, you obviously have to choose one, otherwise the DB adapter will not proceed any further; the rest follows the same logic.
    You will also notice that you can't select only a few fields for update, as the generated XML schema contains all the columns of the table. Simply assign/transform input data to whichever fields need updating; the rest will not be treated as null but will keep the values already in the database.
    Hope it helps! (You can still ask: how do I change my primary key if the business requires it? Well, I have no answer for that yet!)

  • I am trying to delete Spotify completely on Yosemite. I think I was doing well so far but there's this file left in the trash that I can't get rid of because 'it's in use'(???)

    Hello! Here's some information that might be familiar, I don't know:
    I got my Mac late 2014
    Yosemite, 13-inch, Retina
    120 GB, 8 GB RAM
    This would be the second time I'm 'uninstalling' Spotify. The first time was a few months back, when it wouldn't play music and said something like it's not available in my country (?! which is another issue altogether, 'cause my sister can totally use it here). So I uninstalled it and thought it was gone. Just last week, however, I installed it again, because I realised it was working perfectly fine for some people, so I wanted to try; it still said not available on the website, so I uninstalled it again. (Which may or may not just be a fault on my part, because I didn't really look too closely.) Anyway, while trying to fix something on my Mac this morning, I found out that I somehow had Spotify running in my login items. And this was weirding me out because I knew it was already deleted! Completely! Here is a screenshot just to show you where it used to be and what I mean - I only thought of taking a screenshot after deleting all the Spotify items, hehe (ALL, or so I thought! --wait for it--). Sorry, but you get the idea.
    This time, I remembered that you don't completely erase a program the usual way, by dragging the icon to the Trash etc. (that's what I've been doing until just last week), and that you should go to the Library and erase even the preference files. So I did this for Spotify, and suddenly there's this tiny file that won't go away, saying it's in use, even though as far as I can tell it is not being used at the moment.
    And it just bugs me that I can't completely empty my Trash because of this one sneaky little file that I wasn't even aware had been running on my computer. I know this isn't the saddest, most tragic Mac dilemma out there and, clearly, many people have had it worse, but please help me.

    That file could be set as a Launch Agent or Daemon in your ~/Library or /Library. You could try to look for it, or Safe Boot your computer and then empty the Trash. Safe Booting disables Launch Agents/Daemons and third-party kernel extensions. Here's the article on Safe Booting OS X: What is Safe Boot, Safe Mode? - Apple Support

  • I have deleted a contact on my phone, and they are no longer in my contacts list. However, when I go to text, their name still appears as (other). Is there a way of deleting them completely off my phone? - I have tried everything!!

    I have deleted a contact on my phone, and they are no longer in my contacts list. However, when I go to text, if I type in the letter their name begins with, their name still appears as (other). Is there a way of deleting them completely off my phone? - I have tried everything!!
    I have restored my phone and used the Spring Clean app - none of these worked. Have I saved this number somewhere else that I just can't get to?

    You got the new iPhone????? I have the same problem. I transferred audiobooks to the device only to find no audiobooks on it (despite iTunes showing them as if they were there). Have you found a solution????? I even tried to change the import settings on the format transfer, but it hasn't worked.
