Issue in workflow - Logically Deleted Scenario

Hi,
We have an issue in the Journal Entry workflow. A user tried to upload a document for Journal Entry posting; the document is in Parked status. When the workflow reached the required approval step and checked the document type, the work item was logically deleted, and thereby the workflow got terminated.
Kindly let us know how to proceed. What could be the reason for this automatic logical delete? Please mail us the solution.
Regards,
Veera

Automatic logical deletion! I think this is not possible. Check the log of the work item to see whose user ID is appearing there. Then check in the code whether you have assigned any FM that logically deletes the work item and is being triggered by mistake. If not, just start another WF and proceed with your work.
Regards
Bikas

Similar Messages

  • Workflow restart after logical delete..??

    Hi All
    Is there any way to restart workflows after a logical delete is performed..?? If it can be done, please describe the procedure.
    Thanks
    Deepak

    Hi Arghadip
    Thanks for the quick reply. I tried to see the tasks using the SWUS transaction; however, when I executed the transaction with the WF number which was logically deleted, I got an error message - "No task available with the specified number".
    How do I deal with this now..?? It'll be great if you can elaborate more on this process.
    Thanks
    Deepak

  • Unable to update Logical Delete field in AS400

    hi all,
    We have an ESB process that polls for data in an AS400 DB and transfers this data into an Oracle DB. This polling is based on Logical Delete. The data transferred is in the range of some lakhs of records, so we are sending around 5000 records per transaction in ESB. When we deploy the process, the data in AS400 is being read and transferred; however, the status field is not changing to 'Read Value'. Due to this we are facing a lot of issues: for example, when we're in the middle of a data transfer and OPMN restarts, data which has already been transferred is read again, and this throws an error because of a primary key constraint violation. Even after the data transfer, if we delete any rows in the Oracle DB, the data transfer resumes because the status field is not updated.
    I have worked with logical delete in an Oracle DB before and it was working fine.
    Please help me with the Logical Delete functionality in the case of an AS400 database, and also please let me know the privileges we need to have on the AS400 DB to successfully run this process.
    Thanks,
    Kamal.

    Hi,
    According to your post, my understanding is that you were unable to update a managed metadata field in a Designer workflow.
    You need to provide the exact string for the MMS value, in the form of <id>;<value>.
    The format of the value you wanted to set is incorrect, so you cannot update the managed metadata field.
    If you want to set the managed metadata field value with the correct format, you'd better create a custom action.
    There is an article for your reference; although it is about SharePoint 2010, it is similar for SharePoint 2013.
    http://patrickboom.wordpress.com/2013/07/23/workflow-activity-set-managed-metadata-column/
    Best Regards,
    Linda Li
    TechNet Community Support

  • Logical Delete in DB Polling - OSB

    Hello All,
    I have a question on polling. I have a logical delete column with the read value as 'P' and the unread value as 'N'. Unlike BPEL, OSB's polling does not set a record to 'P' until the process completes successfully. My DB polling adapter polls the same records while the previous instance is under processing or the previous instance ended in error. Is there a way to logically delete the record immediately once the record is read?
    To avoid this scenario, I added an error handler to set the record to 'E' if the instance encountered any error. By the time the error handler kicks in and updates the record, OSB polls the record a few times. I increased my polling interval from 5 seconds to 30 seconds, but no luck. Any clue how to handle this scenario?
    Thanks,
    Dwarak

    "Is there a way to logically delete the record immediately once the record is read?"
    Don't do any logic in the DB adapter proxy. Instead, make the proxy just write to a JMS queue, and then have your processing logic in the JMS proxy service which reads off the JMS queue.
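    The point of that pattern: the enqueue is fast and transactional, so the logical-delete update to 'P' commits immediately, while the slow or failure-prone processing happens downstream. A minimal sketch of the consuming side, assuming JMS 1.1 and hypothetical JNDI names (an illustration of the idea in code, not OSB configuration):

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class PolledRowConsumer {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // hypothetical
                Queue queue = (Queue) ctx.lookup("jms/PolledRowsQueue");                        // hypothetical
                Connection conn = cf.createConnection();
                try {
                    Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                    MessageConsumer consumer = session.createConsumer(queue);
                    conn.start();
                    while (true) {
                        TextMessage msg = (TextMessage) consumer.receive();
                        process(msg.getText()); // the real work happens here, off the polling path
                        session.commit();       // if process() throws, the uncommitted session rolls
                                                // back and the message is redelivered - the DB row
                                                // itself is never re-polled
                    }
                } finally {
                    conn.close();
                }
            }
            static void process(String rowXml) { /* send to the target system */ }
        }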

  • How to specify custom SQL in polling db adapter with logical delete option

    Hi all,
    I am writing a SOA composite app using JDeveloper SOA Suite 11.1.1.4 connecting to a SQL Server db using a polling DB Adapter with the logical delete option to send data to a BPEL process.
    I have requirements which go beyond what is supported in the JDeveloper UI for DB Adapter polling options, namely:
    * update more than one column to mark each row read, and
    * specify different SQL for the logical delete operation based on whether bpel processing of the data polled was successful or not.
    A complicating factor is that the polling involves two tables. Here is my full use-case:
    1) Polling will select data derived from two tables: e.g. 'headers' and 'details' simplified for this example:
    table: headers
    hid - primary key
    name - data label
    status - 'unprocessed', 'processed', or 'error'
    processedDate - null when data is loaded, set to current datetime when row is processed
    table: details
    hid - foreign key pointed at header.hid
    attr - data attribute name
    value - value of data attribute
    2) There is a many:1 relationship between detail and header rows through the hid columns. The db adapter polling SELECT shall return results from an outer join consisting of one header row and the associated detail rows where header.status = 'unprocessed' and header.hid = details.hid. (This is supported by the JDeveloper UI.)
    3) The polled data will be sent to be processed by a bpel process:
    3.1) If the bpel processing succeeds, the logical delete (UPDATE) operation shall set header.status = 'processed', and header.processedDate = 'getdate()'.
    3.2) If bpel processing fails (e.g. hits a data error while processing the selected data) the logical delete (UPDATE) operation shall set header.status = 'error', header.processedDate = 'getdate()', and header.errorMsg = '{some text returned from bpel}'.
    Several parts of #3 are not supported by the JDeveloper UI: updating multiple columns to mark the row processed, using getdate() to populate a value of one of those column updates, doing different update operations based on the results of the BPEL processing of the data (success or error), and using data obtained from BPEL processing as a value of those column updates (error message).
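    To make the desired behaviour concrete, the two logical-delete updates in 3.1/3.2 amount to roughly the following - a hedged JDBC sketch of the target SQL, not DB adapter configuration (the class and method names are invented; getdate() is SQL Server's current-datetime function):

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class HeaderLogicalDelete {
            // 3.1: BPEL processing succeeded
            static void markProcessed(Connection conn, long hid) throws Exception {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE headers SET status = 'processed', processedDate = getdate() WHERE hid = ?")) {
                    ps.setLong(1, hid);
                    ps.executeUpdate();
                }
            }

            // 3.2: BPEL processing failed; carry the error text back from BPEL
            static void markError(Connection conn, long hid, String errorMsg) throws Exception {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE headers SET status = 'error', processedDate = getdate(), errorMsg = ? WHERE hid = ?")) {
                    ps.setString(1, errorMsg);
                    ps.setLong(2, hid);
                    ps.executeUpdate();
                }
            }
        }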
    I have found examples which describe specifying custom SQL using the polling delete option to create a template, then modifying the toplink file(s) to specify custom select and update SQL to implement a logical delete (e.g. http://dlimiter.wordpress.com/2009/11/05/advanced-logic-in-oracle-bpel-polling-database-adapter/ and http://myexperienceswithsoa.blogspot.com/2010/06/db-adapter-polling-tricks.html). But none of them match what I've got in my project - in the first case maybe because I'm using a higher version of JDeveloper, and in the second I think because in my case two tables are involved.
    Any suggestions would be appreciated. Thanks, John

    Hi John,
    You've raised a good scenario.
    First of all, let me say that the purpose of the DB polling transaction is to have an option to initiate a process from a DB table/view, not to update multiple fields in a table (or perform other complex manipulation on the table).
    So, when you choose to update a field in a record after reading it, you are "telling" the engine not to poll this record again. Sure, I guess you can find a solution/workaround for it, but I don't think this is the way....
    The question now is what to do?
    You can have another DB adapter where you update the data after finishing the process. In that case, after reading the data (in the polling transaction), update header.status = 'processed', for example, and after processing the selected data update the rest of the fields.
    Hope it makes some sense to you.
    Arik

  • Database Adapter Logical Delete Not Working....

    Hi,
    I have an issue with the DB Adapter under BPEL GA 10.1.3.1. I'm trying to do a logical delete on a table; however, the logical delete isn't updating the records to show that they've been processed.
    I've created a simple test case with a 3-column table sitting in an Oracle XE DB, with the third column containing the logical delete flag. I've created a new process consisting of a DB Adapter partnerlink and a receive. In the logs (below) I can see the Select statement followed by the Update (logical delete) occurring, but the Update statement doesn't seem to actually run against the DB. I can copy the update statement and run it through SQL as the same DB user, and it updates the records.
    Has anyone seen this before?
    Thanks.
    <2006-11-13 15:06:30,901> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT A, B, C FROM F_TABLE WHERE (C = ?)
         bind => [IN]
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX beginTransaction, status=NO_TRANSACTION
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally starting
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has begun internally
    <2006-11-13 15:06:30,917> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBAdapterConstants isElementFormDefaultQualified> Element is FTABLE namespace is http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.ox.O_XParser parse> Transforming the row(s) [<FTABLE Record A />, <FTABLE Record B />, <FTABLE Record C />] read from the database into xml.
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_TABLE_ptt::receive(FTABLECollection)]Posting inbound JCA message to BPEL Process 'Read_TABLE' receive activity:
    <FTABLECollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadTABLE">
    <FTABLE>
    <a>Record</a>
    <b>A</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>B</b>
    <c>IN</c>
    </FTABLE>
    <FTABLE>
    <a>Record</a>
    <b>C</b>
    <c>IN</c>
    </FTABLE>
    </FTABLECollection>
    <2006-11-13 15:06:30,933> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Delivery Thread 'JCA-work-instance:Database Adapter-6 performing unsynchronized post() to localhost
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> Begin batch statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> UPDATE F_TABLE SET C = ? WHERE ((A = ?) AND (B = ?))
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, A]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, B]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log>      bind => [OUT, Record, C]
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> End Batch Statements
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX commitTransaction, status=STATUS_ACTIVE
    <2006-11-13 15:06:31,073> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> TX Internally committing
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> external transaction has committed internally
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchBegin: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) starting...
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> onBatchComplete: Batch 'bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323' (bpel___localhost_default_Read_TABLE_1_4__1163389596916.ReadTABLE.FTABLE1163394391323) has completed - final size = 3
    <2006-11-13 15:06:31,323> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <scope> at line [no line]
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELExecution::Read_TABLE> entering <sequence> at line 55
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> executing <receive> at line 58
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> set variable 'Read_TABLE_receive_InputVariable' to be readOnly, payload ref {FTABLECollection=108e2d22815529ac:-3067a9ff:10edf296212:-78da}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELEntryReceiveWMP::Read_TABLE> variable 'Read_TABLE_receive_InputVariable' content {FTABLECollection=oracle.xml.parser.v2.XMLElement@1303465}
    <2006-11-13 15:06:31,339> <DEBUG> <default.collaxa.cube.engine.bpel> <BPELInvokeWMP::Read_TABLE> executing <invoke> at line 61

    Hi,
    I haven't yet used 10.1.3, but we had a number of issues under 10.1.2.0.2 around caching and upd/ins/del.
    A number of things we changed were:
    - set usesBatchWriting to false in the oc4j-ra.xml file
    - set identityMap to NoIdentityMap via the TopLink Workbench
    - set should-always-refresh-cache-on-remote, should-disable-cache-hits, and should-disable-cache-hits-on-remote to true in the toplink mappings.xml file (note this last one applies only if TopLink was not used to insert the source data).
    Ashley

  • Db Adapter Logical Delete not working

    Hi,
    I have an ESB that contains a dbadapter that performs a logical delete once the esb has finished processing. The problem we are seeing is that this logical delete is not always happening. We update a field in the source table from 0 to 1 on successful completion, but as I said, this does not always work, causing unique constraint violations on our destination tables. Disabling and re-enabling the dbadapter service in the ESB Console usually clears the problem up, though at times a bounce of the SOA Suite using ./opmnctl stopall is necessary. We are using SOA Suite 10.1.3.1.
    Any ideas what could be causing this behavior?

    10.1.3.1 had a number of issues and I would highly recommend upgrading at the earliest opportunity. One common issue that people hit with 10.1.3.1 is developers building SOA objects in 10.1.3.3 or 10.1.3.4. You must make sure that your developers use the same version of JDeveloper, i.e. 10.1.3.1.
    Here is a list of patches that I believe you should have in a 10.1.3.1 environment at a minimum, sorry I don't have the descriptions, hopefully one will address your issue.
    2617419
    5877231
    5838073
    5841736
    5905744
    5742242
    5729652
    5724766
    5664594
    5965376
    5672007
    6033824
    5758956
    5876231
    5900308
    5915792
    5473225
    5853207
    5990764
    5669155
    5149744
    cheers
    James

  • !!! Statements of Logic Deleting Files and messing with System are True !!!

    This morning I answered some guy's post about Logic deleting everything at the same level as the project folder under certain circumstances. I tried troubleshooting for him but could not recreate it.
    Then this happened to me today:
    When working in Logic 8 in Leopard, all of a sudden it stopped communicating with my Unitor8 via USB. I restarted, but the Unitor 8, which I reset twice, would NOT communicate with L8 anymore. It would remain in patch mode (red light lit though the CPU is running).
    I've had this erratic behavior many times over the past years on many computers, so it wasn't really new. All one needs to do is reinstall the Unitor Family driver.
    So I decided to do so:
    I inserted my Logic Studio DVD, opened the installer and checked ONLY the Unitor Family drivers. After the install completed I restarted the Mac. Everything went fine. So I booted L8. While booting I saw my hard drives on the right side of the desktop flash once. I thought that to be strange and took a closer look.
    Now my boot drive was missing from all the drives on the desktop and in the sidebar window...
    I tried to Apple+click on an application in the Dock. The Finder opened the Applications folder and showed me the app I had clicked on. But I could not navigate to the root of my system drive, even via the "path" symbol in the toolbar. So I hit Shift+Apple+G and entered /Volumes/MySystemVolume - it showed my volume "Grayed Out" in the Finder window...
    I tried repairing the disk and permissions, but apparently nothing was wrong with my boot drive, besides the fact that the Finder (and all other applications when wanting to open or save) could not "see" my system drive... I could launch normally all the apps residing on the system drive in the Applications folder... So this kept getting weirder...
    So I opened my favorite application, TinkerTool System, and went to the Files tab and chose the underlying tab "Attributes", which shows Macintosh HFS and Finder attributes... I pulled my system drive onto the drop area and discovered something VERY amazing: the Display in Finder option had mysteriously been altered to "INVISIBLE".
    I changed it to Visible and there it was - my system drive... showing up in the Finder...
    I made a restart to make sure everything was OK....
    After the restart the following settings had VANISHED:
    1) Pixadex = ALL ICONS (I had 230 MB) of sorted icons stored in my Application Support Folder
    2) All my Safari bookmarks were gone - They were still in ~/Library/Safari - but the Bookmarks.plist was UNREADABLE even with a text editor and XML editor
    3) Numerous other applications had LOST their authorization preferences so I had to re-authorize many of my Audio Unit plug-ins...
    4) My Dock had been reset
    5) My Monitor arrangement had been reset...
    6) My date and time had been reset
    7) My energy settings had been reset.... (I had NOT zapped the P-RAM)
    Since installing Leopard 2 weeks ago - I have CAREFULLY monitored ANY activity after INSTALLING anything in order to be able to troubleshoot - and this problem definitely occurred right after installing the USB Family Drivers and launching L8....
    There was really NO HARM done other than the sweating that I did troubleshooting, but if L8 is capable of doing what happened to this other guy losing his whole folder, and now doing this to me - where are the limits? When is someone going to get REALLY hurt?
    Please Apple - I am sending you this post as a BUG report as well - could you PLEASE look into this as there has to be some kind of VERY dangerous MALFUNCTION within Logic or its Installers.

    Wow, I was just joking with the medication thing. I didn't actually think it was seriously a mental health issue. Sorry.
    In any case, though, if he has full access to the Mac, mental health issues or no, there's not a whole lot to do about it. If it is possible to take away administrative privileges without causing a huge fuss, then you can limit his access to certain things. However, you can't mess with privileges on a Time Machine backup (doing so breaks it), and that means he's going to have access to at least parts of the backup. Which means he can trash at least some of the backup data, and if he deletes files from the TM backup using the Finder, he'll have essentially trashed the whole backup. He might as well have the ability to delete all the files on the machine if he has the power to delete backups.
    Honestly, in this situation, he should either only be allowed to use an account with Parental Controls on, which may not be an option (I don't know how offended he would be at such a suggestion), or he should be using a different computer entirely and have no access at all to his wife's computer. Alternately, have his wife keep a backup that is hidden - ie, connect the backup drive periodically and then remove it and hide it so he can't mess with it. That will at least secure the backup.

  • Wait for Event FIPP - Completed Logically Deleted

    Hi WF Experts,
    We have a WF for Release of payments.
    It has 1 Fork with 2 parallel branches (both necessary).
    1 Branch has the approval process for Amount release and the other branch calling the Account assignment approval Subworkflow.
    The approval process branch has an until loop with an increment counter; it picks the agents within the loop until the loop condition is reached, i.e. until no more approvals are required.
    The other branch, before calling the subworkflow, checks a WAIT FOR EVENT FIPP->COMPLETED with container element FIPPID.
    Both branches need to be completed so that the fork ends and the WF comes out of the fork.
    The approval process branch is working perfectly. But in the other branch the WAIT FOR EVENT FIPP->COMPLETED gets logically deleted, and thereby this branch does not go further to start the Account assignment approval subworkflow. This way the fork with the 2 necessary branches does not end, and thereby the WF stops here and cannot go further to set the release indicators (which is a background task). That step would confirm the end of the WF process.
    When I pass the WI ID for this wait event to SWIA, it shows the status as CANCELLED.
    The WF user log as a whole shows the status as COMPLETED, as all the approvers have approved the release.
    Why does the Wait for Event get logically deleted? Please advise.

    Dear Experts,
    Any update on this? I am unable to work out why the workflow is getting stopped.
    -Anwar

  • Logical Deletes

    I'm working on a project with a database where entries are deleted logically (a column deletets is set to the current time).
    How can I handle this with JDO/Kodo?
    Requirements:
    1. Records with a deletets != null must not be read. Is there a generic way to do this?
    2. If the Store Manager decides to remove a record it should not delete the record but send an update set deletets=xx
    Thanks for any help

    It is dirty, but you could try this:
    - Map your model onto views which have a deleted=false criterion
    - Decorate the Kodo PM class (and register it in kodo.properties) with delete method overrides where you will:
      - mark the object as logically deleted
      - remove any references to it, whether N-1, 1-N, or M-N
    - Decorate the commit method so that after commit you inspect all transactional objects (get them into a list before calling commit) and make all logically deleted objects transient, to get them out of your PM
    Potential issues are:
    - you still have them in the database, so any uniqueness constraints will fire and you have no way of pre-verifying them
    - Kodo might clash with this filtering view during database flush
    - you need to track and remove all references to a logically deleted object
    - any other issue I missed :-)
    It might give you the desired transparency, but might cause some subtle problems down the road.
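    As a rough illustration of the decoration idea, a minimal sketch against the generic javax.jdo API (the Deletable interface and helper class are invented for illustration; the real Kodo plumbing - view mappings, reference cleanup, the commit-time pass - is omitted):

        import javax.jdo.PersistenceManager;

        // Hypothetical marker for classes that carry the deletets column
        interface Deletable {
            void setDeletets(java.util.Date ts);
        }

        class LogicalDeleteHelper {
            // Route deletes through here instead of calling pm.deletePersistent() directly
            static void delete(PersistenceManager pm, Object obj) {
                if (obj instanceof Deletable) {
                    // logical delete: stamp the column; JDO flushes an UPDATE, not a DELETE
                    ((Deletable) obj).setDeletets(new java.util.Date());
                    // caller must still clean up N-1 / 1-N / M-N references by hand
                } else {
                    pm.deletePersistent(obj); // physical delete for everything else
                }
            }
        }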
    <Laurent Goldsztejn> wrote in message news:[email protected]..
    1. Kodo doesn't offer a generic way of updating the boolean-type column instead of deleting the records. This column should be managed and updated as any generic boolean column.
    2. A delete call shouldn't be made there; instead, run an update query on the delete column so that the physical delete does not occur.
    So the best way is IMHO to manage this column manually (set true or false) and not use the JDO-specific method for deletion.
    Laurent

  • Workflow gets deleted but dependant steps are not deleted

    Hi Experts,
    When the OMR_DELETED event of BOR object ECO is raised, as per design the workflow instance is logically deleted. But the dependent steps of the workflow are not getting logically deleted.
    Why is this happening? I mean, why are the work items generated for child steps not deleted when the parent workflow instance is logically deleted?
    Thanks,
    Sonali.

    Hi,
    "as per design the workflow instance is logically deleted. But all the dependant steps of the workflow are not getting deleted logically."
    Are you using a wait step to delete logically, or a function module? Can you tell us where you are doing this?
    Regards,
    Surjith

  • Logically delete or Complete manually?

    Hi group,
    Sometimes we need to finish off a WF.  Examples:
    - A travel form has been approved in the backend (PR05), and we need to end the WF that has the trip sitting with the approver.
    - The leave request application sometimes gets messed up, and a WF needs to be terminated
    I am a little unsure whether to use "Logically delete" or "Complete manually" for the step. Could anybody explain the difference? I am a little new to WF. My hunch is that "Complete manually" just executes the step, whereas "Logically delete" stops the entire WF. But I have not found good literature on when to use these options.
    Thanks in advance
    Kirsten

    Hi Kibo,
    "Complete manually" comes into the picture if the approver is not able to execute the work item for some reason and you want to complete the rest of the workflow steps, e.g. sending a mail to the initiator after approval. If you complete the work item, the workflow continues as normal.
    However, if the workflow is no longer relevant and the steps after approval are not required, then you can logically delete the workflow.
    Regards,
    Sangvir Singh

  • Logical delete in database adapter

    Hello
    I was wondering if someone has a solution to this problem with database polling. You can specify the logical delete column and you can give values for the READ, UNREAD and RESERVED states. The problem is that when, for example, an ESB project polls some specific table and starts an instance for every new row whose logical delete field has the UNREAD value, and then something unexpected happens and something goes wrong, the database adapter still updates the row with the READ value. This is problematic if we have thousands of rows and we would like to separate the errored rows from the successfully read rows. Is there any (easy) way to update the rows that went wrong to some other value than READ?
    I don't know if anyone understood me, but just for clarification here's a example:
    I have an ESB project which polls a specific database table and parses an XML from the data. After this the ESB project sends the data to some Web Service. The database table has a column CONDITION_CODE in which the value 0 means unread and the value 1 means read. Now if everything goes fine, there are no problems. But if the Web Service is unavailable or the data is malformed, the database adapter still updates the CONDITION_CODE to 1! We have no way (except to listen to the ESB_ERROR topic and implement some error handling there) to know which rows were successfully delivered and which were not...
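    For the ESB_ERROR-topic workaround mentioned above, the compensating update itself would be small - a hypothetical JDBC sketch using the CONDITION_CODE column from the example (the table name, key column and the value 2 for 'errored' are all invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class ErroredRowMarker {
            // Re-mark a row the adapter already flipped to 1 (read) as errored
            static void markErrored(Connection conn, long id) throws Exception {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE polled_table SET condition_code = 2 WHERE id = ?")) {
                    ps.setLong(1, id);
                    ps.executeUpdate();
                }
            }
        }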
    Hope I was able to clarify the problem... And I hope someone will be able to provide an answer.
    Best Regards Tuomas

    Did you use the RESERVED value property? How about the transaction mechanism? Do you have global transactions? I guess you would have to use them!

  • Logical delete in a DBAdapter

    Hi,
    I am using a logical delete to poll a table. A simple scheme of unprocessed records assigned to 'N', processed ones to 'Y' and locked ones to 'X' doesn't seem to work. The process doesn't pick up any inserted records.
    Are any other additional settings required for this logical delete to get working.
    Thanks,
    Valli.

    When the table definition has the primary key defined already, the database adapter directly takes that field and constructs a where condition (update tbl set col1=..., col2=... where primarykey=#input) based on the primary key; in this case your input should have the primary key field mapped from the BPEL process!
    If you don't have a primary key defined, you obviously have to choose one, otherwise the db adapter will not proceed further; the rest is all the same logic.
    You can also observe that you can't select only a few fields for update, as the generated XML schema will have all the columns available in the table. You can simply assign/transform input data to whatever fields require updating; the rest will not be treated as null but will keep the values already existing in the database!!
    Hope it helps!! (You can still ask: how do I change my primary key if required by the business? - well, I have no answer for that yet!)

  • How to hide row from table after logical delete

    Hello.
    I am using Jdeveloper 11.1.1.3.0, ADF BC and ADF Faces.
    I want to implement Logical delete in my application.
    In my Entity object I have a Deleted attribute, and I override the remove() method in my EntityImpl class:

        @Override
        public void remove() {
            setDeleted("Y");
        }

    and I added this condition to my view object:

        WHERE NVL(Deleted,'N') <> 'Y'

    In my page I have a table. This table has a column to delete each row. I dragged and dropped the RemoveRowWithKey action from the data control and set the parameter to #{row.rowKeyStr}.
    What I need is this: when the user clicks the delete button I want to hide the row from the table. I tried to re-execute the query after the delete, but the row is still on the page. Why does executing the query not hide the row from the screen?
    here is the code I used for delete and execute query
        public String deleteLogically() {
            BindingContainer bindings = getBindings();
            OperationBinding operationBinding = bindings.getOperationBinding("removeRowWithKey");
            Object result = operationBinding.execute();
            DCBindingContainer dc = (DCBindingContainer) bindings;
            DCIteratorBinding iter = dc.findIteratorBinding("TakenMaterialsView4Iterator");
            iter.getCurrentRow().setAttribute("Deleted", "Y");
            //iter.getViewObject().executeQuery();
            iter.executeQuery();
            return null;
        }

    As you see, I used the two methods iter.getViewObject().executeQuery(); and iter.executeQuery();, but the result is the same.

    Thank you Jobinesh.
    I used this method.
        @Override
        protected boolean rowQualifies(ViewRowImpl viewRowImpl) {
            Object attrValue = viewRowImpl.getAttribute("Deleted");
            if (attrValue != null) {
                if ("Y".equals(attrValue))
                    return false;
                else
                    return true;
            }
            return super.rowQualifies(viewRowImpl);
        }

    But I have one drawback when using it, and here is the case:
    If the user clicks the delete button (no commit), the row is hidden in the table, but when the user clicks cancel changes the row is not returned, due to rowQualifies(ViewRowImpl viewRowImpl) (even though the Deleted attribute is set to "N" now).
    here is the code for delete and cancel change buttons
        public String deleteLogically() {
            BindingContainer bindings = getBindings();
            OperationBinding operationBinding =
                bindings.getOperationBinding("removeRowWithKey");
            Object result = operationBinding.execute();
            DCBindingContainer dc = (DCBindingContainer) bindings;
            DCIteratorBinding iter =
                dc.findIteratorBinding("TakenMaterialsView4Iterator");
            iter.getCurrentRow().setAttribute("Deleted", "Y");
            iter.executeQuery();
            AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
            adfFacesContext.addPartialTarget(this.getTakenMaterialsTable());
            return null;
        }

        public String cancelChanges(String iteratorName) {
            System.out.println("begin cancel change");
            BindingContainer bindings =
                BindingContext.getCurrent().getCurrentBindingsEntry();
            DCBindingContainer dc = (DCBindingContainer) bindings;
            DCIteratorBinding iter =
                (DCIteratorBinding) dc.findIteratorBinding(iteratorName);
            ViewObject vo = iter.getViewObject();
            // create a secondary RowSetIterator to avoid disturbing row currency
            RowSetIterator rsi = vo.createRowSetIterator(null);
            // move the currency to the slot before the first row
            rsi.reset();
            while (rsi.hasNext()) {
                Row currentRow = rsi.next();
                currentRow.setAttribute("Deleted", "N");
            }
            rsi.closeRowSetIterator();
            iter.executeQuery();
            AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
            adfFacesContext.addPartialTarget(this.getTakenMaterialsTable());
            return null;
        }

    As an example, if the user initially has 8 rows and then deletes 2 rows, in cancelChanges only 6 rows appear, and the deleted rows are not there??
    any suggestion?
