BPM Parallel Process with Exclusive Gateway

Hi,
I am facing an issue with an Exclusive Gateway in a parallel process.
The issue is that the process always stays in the In-Progress state at the Parallel Join. I mean the process stops at the Parallel Join, and moreover there are no errors in the process. If I delete the Exclusive Gateway in the parallel block, the process moves on to the next-level human task through the Parallel Join, i.e. it works fine.
I have designed my process in such a way that the 1st task is a Human Task ---> then a Parallel Split with 2 Human Tasks, where one of the tasks is performed through an Exclusive Gateway and the other is just a simple approval. Finally I merge these two Human Tasks using a Parallel Join, then trigger a final Approval Human Task, and close the process.
Appreciate your quick suggestions to fix this issue.
Thanks in advance,
Dev...

Hi Unni,
Thanks for your reply.
I have checked all the parallel tasks and they are all in the Completed state. No errors.
If I delete the Exclusive Gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
Please let me know if I have missed anything.
Thanks in advance,
Dev

Similar Messages

  • Problem with exclusive gateway in multi-instance subprocess

    Hi,
I've recently developed a BPM process with 11g PS4 FP. The idea is simple:
1. The initiator selects a set of "StakeHolders" from LDAP; each "StakeHolder" has a BOOLEAN attribute "isApprover"
2. There is a multi-instance subprocess; I used Parallel and Collection, and I point the input & output arrays to the list selected in the Initiator step
3. There is an exclusive gateway named "is approver or not" which has one unconditional sequence flow and one conditional one. In the conditional flow I use the expression inputDataItem.isApprover == true, where inputDataItem is a predefined object.
My intention is that when entering the subprocess, the stakeholders whose "isApprover" is true should receive a task, but I found that no matter whether "isApprover" is true or not, the expression never becomes true. It always goes down the unconditional sequence flow. Then I tried some string attribute like inputDataItem.userID == "weblogic" and it works fine.
I've tried different expressions, like "inputDataItem.isApprover" or an XPath expression, but none of them work. I think it might be a bug that a gateway in a subprocess cannot use a boolean-typed attribute?
Has anyone met this problem before? Am I wrong somewhere or is it really a bug? Thank you in advance!

First of all, your configuration is not really correct.
=====================================================
In the 10g LISTENER.ORA you must refer only to the database and the 10g hsodbc entry.
    D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = INTEGRAT)
      (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
      (SID_NAME = INTEGRAT)
    )
    (SID_DESC =
      (SID_NAME = MSSQLTEST)
      (PROGRAM = D:\oracle\product\10.2.0\db_1\bin\hsodbc)
      (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = pegasus.cursor.de)(PORT = 1521))
    )
  )
    ===========================================================
Then you have to create a new listener (with a new name) in the gateway ORACLE_HOME, on a different port than the database instance:
    D:\oracle\product\11.1.0\tg_1\network\admin\listener.ora
SID_LIST_LISTENERGTW =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DG4MSQL)
      (ORACLE_HOME = D:\oracle\product\11.1.0\tg_1)
      (PROGRAM = D:\oracle\product\11.1.0\tg_1\BIN\dg4msql)
    )
  )
LISTENERGTW =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = pegasus.cursor.de)(PORT = 1522))
    )
  )
    To start it, use the command below:
    D:\oracle\product\11.1.0\tg_1\bin\LSNRCTL start listenergtw
    ===========================================================
In D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\tnsnames.ora,
match the DG4MSQL entry with the listener on port 1522, and
match the HSODBC entry (10g binary) with the listener on port 1521:
MSSQLDG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PEGASUS)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = DG4MSQL))
    (HS = OK)
  )
MSSQL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PEGASUS)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MSSQLTEST))
    (HS = OK)
  )
Try this configuration and let me know your feedback.
    regards,
    Mireille

  • BPM - Parallel process

    Hi All,
I have multiple messages in a multiline container in a BPM.
Now I want to send each message individually to a web service and get the response. I want to achieve this requirement with parallel processing so that the total time is reduced.
Please let me know how to implement this requirement in BPM.
    Regards,
    Srinivas.

    Hi Adish,
    >> I want to send message to syn webservice
From which system are you sending data, and in what manner (synchronous or asynchronous)?
    >>Where I don’t know exactly what will be the value for n in transformations step. It can vary depend upon input message that BPM will receive.
For this you can use a container variable of type xsd:integer, store the value in it, and later use it for the next steps.
But please give an arrow diagram and briefly describe your exact requirement (e.g., File->XI->HTTP).
    Thanks,
    Gujjeit

  • Initiating bpm 11g process with an attachment?

    Hi,
I am facing a problem passing the attached file to the next human task.
Basically, I am initiating a BPM process with a file adapter, getting the file as an attachment. After that I need to pass the file to the next human task for approval.
I am stuck on passing the document to the next human task.
Is there any way I can do it?
    Please suggest something.
    thanks in advance
    regards

User/role management is now done through the workspace itself. You must log in as an administrative user (like weblogic) and you will see an Admin link in the upper right of the workspace.
Authentication and user lookup are now handled directly by WebLogic - so you set up authentication providers in the WLS security realm through the admin console.
By default WebLogic is configured to use an internal LDAP that stores local users (this is where the "weblogic" user itself is defined).
    General steps are provided here:
    http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10226/hwf_config.htm#BHCJGBFJ
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/secmanage/atn.html#wp1198953
    Edited by: Mike Rokitka on Jul 2, 2010 11:17 AM

  • Parallel process with reentrant VI have same value in both threads

Has it been so long since I programmed LabVIEW that I have forgotten some basic stuff?
I have a main VI which originally called dynamic processes in parallel.
I then called the sub-VIs directly and still run them in parallel, but they now have separate names.
I use a QSM. Each parallel thread now has its own QSM because, although I was using a queue name for each dynamic queue, the data being extracted was shared or split between the two threads. If I have confused everyone with that statement, I shall explain.
Two parallel processes, calling a (re-entrant) QSM. They have the same number of elements and matching sets.
    EX:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task2
    Task3                 Task3
    Task4                 Task4
    Task5                 Task5
    Task6                 Task6
I was expecting each thread (each process) to extract from the queue the list of tasks as entered (from Task1 to Task6). What the processes were actually getting was the following:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task3
    Task4                 Task5
    Task6                 default
Each process was sending a different queue name to the QSM. Each queue should have its own name.
I need to get this running by tomorrow with no excuses! So I decided on a lame workaround: also having 2 QSMs. That fixed it.
In each parallel process (the two are copies of each other with different names) there is a call to open a telnet session. I probed and placed breakpoints in the code. Although each process has a different name and the call to the function that opens the telnet session is re-entrant, the very same telnet reference number is assigned to both processes.
Why? Why would they get the same reference number? I made all VIs down to (and including) Telnet Open Connection re-entrant (although it was not needed) and it still assigns the same reference number to each telnet session. Why? What am I not seeing? What am I missing?
Unfortunately, I cannot post the code. But it is not complicated code. Just 2 sessions with different IP addresses. I would expect different telnet session references...
As a matter of fact, I need to try something silly...

    I should provide more details with the solution....
    I just have to stop saying "D'Oh!!"
    Okay... here goes...
In the LVOOP code, I am using Notifiers and Semaphores to ensure that a race condition cannot occur. This works well with previously written code.
In this particular implementation, I have the same / similar object being created more than once (twice at this time).
Where I went wrong (D'Oh!!!!) was to use a static name for a given object when creating the Notifier and Semaphore references. Since the same name was given, so was the same reference. Since the references were the same, so was the data, and so on.
    D'Oh!!!!!!
    D'Oh!!!!!!
    Now I know why a particular bird was called D'Oh-D'Oh bird... 
    D'Oh!!!  Such a silly mistake...
    D'Oh!!!
    Hope it makes a few people laugh...  or help another bird....

  • Parallel process with a queue and file?

Hello, first of all sorry for my bad English^^
I have been working for days on my project, where I have to show a parallel process while transferring information in different ways, along with their problems (like timing and so on).
I chose to transmit the information to a parallel process by (1) a queue and by (2) a file (.txt). (Other ways are welcome; do you have 1-2 other ideas?)
To solve this problem I made three while loops. The first one is the original, where the original information (as a signal) is created and sent by queue and by file to the other two while loops, where this information is evaluated to recreate the same signal.
So in the end you can compare all the signals to see if they are the same, so that you can answer the question about the parallelism of the processes.
But in my VI file I have some problems:
the version with the queue works pretty well - it's almost parallel;
but the version with the file doesn't run in parallel, and I have no idea how I can solve it.
I'm a newbie^^
Can someone correct my file so that both (the file and queue versions) run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi ‏23 KB

LapLapLap wrote:
Can someone correct my file so that both (the file and queue versions) run parallel with the original one, or tell me what I can or must do?
A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, Notifiers (and why not web services).
Due to limitations of the disk system you can only read/write one file at a time from one process, so I wouldn't recommend that approach. If you use a RAM disk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • How to do parallel processing with dynamic internal table

    Hi All,
I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (with STARTING NEW TASK and similar methods) but did not succeed: this approach requires RFC-enabled function modules, and RFC-enabled function modules do not allow the generic data type (STANDARD TABLE) that is needed for passing dynamic internal tables. My exact requirement is as follows:
1. I have a large chunk of data in two internal tables, one of which is formed dynamically and hence its structure is not known at the time of coding.
2. This data has to be processed together to generate another internal table, whose structure is pre-defined. But this data processing is taking a very long time, as the number of records is close to a million.
3. I need to divide the dynamic internal table into chunks of (say) 1000 records each, pass each chunk to a function module, and submit it to run in another task. Many such tasks will be executed in parallel.
4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing using dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

try the code below...
  DATA: w_subrc TYPE sy-subrc.
  DATA: w_infty(5) TYPE c.
  FIELD-SYMBOLS: <f1> TYPE table.
  FIELD-SYMBOLS: <f1_wa> TYPE any.
  DATA: ref_tab TYPE REF TO data.

* Build the infotype table type name, e.g. 'P0002' for infotype 0002
  CONCATENATE 'P' infty INTO w_infty.

* Create a dynamic internal table typed after the infotype structure
  CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
  ASSIGN ref_tab->* TO <f1>.

* Create a dynamic work area
  CREATE DATA ref_tab TYPE (w_infty).
  ASSIGN ref_tab->* TO <f1_wa>.

* Default the date range if none was supplied
  IF begda IS INITIAL.
    begda = '18000101'.
  ENDIF.
  IF endda IS INITIAL.
    endda = '99991231'.
  ENDIF.

  CALL FUNCTION 'HR_READ_INFOTYPE'
    EXPORTING
      pernr           = pernr
      infty           = infty
      begda           = begda
      endda           = endda
    IMPORTING
      subrc           = w_subrc
    TABLES
      infty_tab       = <f1>
    EXCEPTIONS
      infty_not_found = 1
      OTHERS          = 2.
  IF sy-subrc <> 0.
    subrc = w_subrc.
  ENDIF.

  • Parallel Processing with version 2.0

Say I wanted to loop through a collection of servers and copy a bunch of files to them. This part is simple, but how would I go about doing it so that it copies the files to each server in parallel rather than serially? I know version 3.0 has Start-Job, but version 2.0 doesn't.
I cannot use Invoke-Command because PSRemoting is not set up and most likely isn't allowed (I know, dumb, but I don't make the rules).
If you find that my post has answered your question, please mark it as the answer. If you find my post to be helpful in any way, please click vote as helpful.
    Don't Retire Technet

Workflows require PowerShell 3.0. Jobs should do the trick, so long as they're used smartly. There's quite a bit of overhead involved in starting up a new job, so I wouldn't use them for really small individual tasks, but for a long file transfer like you're describing, you probably won't notice the difference.
Do keep in mind that each job runs in a new powershell.exe process, which eats up 50 MB or more of RAM until it's finished. You don't want too many of them going at once.
An alternative would be to use runspaces. They're faster, but quite a bit more complicated to use properly. There are a few functions available on the internet which try to wrap this concept up and make it easier to call from a script; try searching for "Invoke-Parallel" and "ForEach-Parallel" if you're interested in that.
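To make the job-based approach concrete, here is a minimal PowerShell 2.0-compatible sketch; the server names, source folder, and destination path are hypothetical placeholders:

$servers = 'server01', 'server02', 'server03'   # hypothetical server list
$source  = 'C:\Deploy\Files'                    # hypothetical source folder

$jobs = foreach ($server in $servers) {
    # Each Start-Job spawns its own powershell.exe, so the copies run in parallel
    Start-Job -ArgumentList $server, $source -ScriptBlock {
        param($server, $source)
        # Hypothetical destination share on each target server
        Copy-Item -Path "$source\*" -Destination "\\$server\c$\Target" -Recurse -Force
    }
}

# Block until every copy finishes, surface any output/errors, then clean up
$jobs | Wait-Job | Receive-Job
$jobs | Remove-Job

Given the per-job memory overhead mentioned above, it is worth checking (Get-Job -State Running).Count and pausing before launching more jobs if the server list is long.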

  • Parallel processing in workflow with fork

    Hello,
I have a case in the production system. The workflow has parallel processing with a fork. This fork has 2 branches as inputs.
It has 2 necessary branches with no other condition.
Does anyone know of any scenario where the workflow proceeds ahead with only one branch executed, even though 2 branches are mandatory?
    Thanks.

    Hi,
Take a look at the following 2 articles. Using the concepts outlined in them you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel processing issue within same server

Hi,
I need to perform parallel processing within the same server, using the work processes available on that server.
Please suggest whether this can be accomplished, and explain the design if possible.

    Hello Venkata,
You can achieve parallel processing by using CALL FUNCTION ... STARTING NEW TASK <task name>.
In this case the function module runs in asynchronous mode without stopping the calling program.
    For more details you can refer following link:
    https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing
    Thanks,
    Augustin.
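To illustrate the pattern Augustin describes, here is a minimal sketch; the RFC-enabled function module ZPAR_PROCESS_CHUNK and its parameter are assumptions for illustration. Note that STARTING NEW TASK without a DESTINATION clause runs each call in a dialog work process on the same application server, which matches the requirement:

DATA: gv_task(8)  TYPE c,
      gv_index(4) TYPE n,
      gv_open     TYPE i,
      gv_done     TYPE i.

* Dispatch several asynchronous tasks; the calling program is not blocked
DO 5 TIMES.
  gv_index = sy-index.
  CONCATENATE 'TASK' gv_index INTO gv_task.
  CALL FUNCTION 'ZPAR_PROCESS_CHUNK'   " hypothetical RFC-enabled FM
    STARTING NEW TASK gv_task
    PERFORMING task_done ON END OF TASK
    EXPORTING
      iv_chunk = sy-index.
  IF sy-subrc = 0.
    gv_open = gv_open + 1.
  ENDIF.
ENDDO.

* Let the calling program wait until all tasks have reported back
WAIT UNTIL gv_done >= gv_open.

FORM task_done USING p_task TYPE clike.
* Callback fired when a task ends; the IMPORTING/TABLES parameters
* of the function module would be received here
  RECEIVE RESULTS FROM FUNCTION 'ZPAR_PROCESS_CHUNK'.
  gv_done = gv_done + 1.
ENDFORM.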

  • Parallel Processing in ABAP

    Hi,
I have an internal table that holds object references. Each item in the table is independent of the others. I want to extract info from each object and convert it into an internal table so that I can pass it to an RFC function.
So how can I do this extraction of the info from the objects in the internal table in parallel?
To use STARTING NEW TASK, I created a function module that is RFC-enabled... but then I can't pass an object reference to this module. So how can I do this?
Also, I read that this function module call will create a main or external session, which has a limit of 6 per user session. Is this correct?
If the above can be done, I also want to restrict the number of parallel processes executing at any point in time to 5 or so.
    thanks in advance
    Murugesh

    Hi Murugesh,
Parallel processing can be implemented in application reports that are to run in the background. You can implement parallel processing in your own background applications by using function modules and ABAP keywords.
    Refer following docs.
    <b>Parallel Processing in ABAP</b>
    /people/naresh.pai/blog/2005/06/16/parallel-processing-in-abap
    <b>Parallel Processing with Asynchronous RFC</b>
    http://help.sap.com/saphelp_webas610/helpdata/en/22/0425c6488911d189490000e829fbbd/frameset.htm
    <b>Parallel-Processing Function Modules</b>
    http://help.sap.com/saphelp_nw04s/helpdata/en/fa/096ff6543b11d1898e0000e8322d00/frameset.htm
Don't forget to reward pts, if it helps ;>)
    Regards,
    Rakesh.

  • Explain Plan - Parallel Processing Degree of 2 and CPU_Cost

When I use a hint to force parallel processing with a degree of 2, the I/O cost seems to be consistently divided by 1.8, but the CPU cost adjustment is inconsistent (between 2.17 and 2.62).
Any ideas on why the CPU cost adjustment varies from table to table?
Is there a formula to adjust the CPU_COST?
    Thanks,
    Summary:
The I/O cost reduction is consistent (divide by 1.8):
Table 1: 763 / 424 = 1.8
Table 2: 18774 / 10430 = 1.8
Table 3 (not shown): 5 / 1.8 ≈ 3
But the CPU cost reduction varies (between 2.17 and 2.62):
Table 1: 275812018 / 122353500 = 2.25
Table 2: 7924072407 / 3640755000 = 2.17
Table 3 (not shown): 791890 / 301467 = 2.62
    Example:
    Oracle Database 10.2.0.4.0
Table 1:
1.) Full table scan on Table 1 without parallel processing.
EXPLAIN PLAN FOR
  SELECT /*+ CPU_COSTING PARALLEL(table_1, 1) */ *
  FROM table_1;
SQL> SELECT cost, io_cost, cpu_cost FROM plan_table;
IO_COST    CPU_COST
    763   275812018
2.) Process Table 1 in parallel with a degree of 2.
EXPLAIN PLAN FOR
  SELECT /*+ CPU_COSTING PARALLEL(table_1, 2) */ *
  FROM table_1;
IO_COST    CPU_COST
    424   122353500
Table 2:
3.) Full table scan on Table 2 without parallel processing.
EXPLAIN PLAN FOR
  SELECT /*+ CPU_COSTING PARALLEL(table_2, 1) */ *
  FROM table_2;
IO_COST    CPU_COST
  18774  7924072407
4.) Process Table 2 in parallel with a degree of 2.
EXPLAIN PLAN FOR
  SELECT /*+ CPU_COSTING PARALLEL(table_2, 2) */ *
  FROM table_2;
IO_COST    CPU_COST
  10430  3640755000

The COST value is for the benefit of the CBO, not for you.
What should be more important to you is the elapsed run time of the SQL.
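To act on that advice, a quick way to compare actual elapsed times in SQL*Plus, sketched against the same table_1 from the examples above (COUNT(*) stands in for the real query):

SET TIMING ON

-- Serial full scan
SELECT /*+ PARALLEL(table_1, 1) */ COUNT(*) FROM table_1;

-- Parallel degree 2; compare the Elapsed line printed after each query
SELECT /*+ PARALLEL(table_1, 2) */ COUNT(*) FROM table_1;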

  • Parallel processing using aRFC

    Hi experts,
Does anyone have a presentation on how to proceed with parallel processing with aRFC?
    Thanks in Advance.

look at this piece of code ....
FORM start_onhand_extract_task .
* Throttle: wait until a work process becomes free
  DO.
    IF g_num_running < g_avail_wps.
      EXIT.
    ENDIF.
    WAIT UP TO 5 SECONDS.
  ENDDO.

* Create the file name with the task number
  ADD 1 TO g_task_num.
  CONCATENATE p_file1 g_task_num INTO g_task_name.
  CONCATENATE g_filename g_task_name INTO task_tab-filename1.
  CONDENSE task_tab-filename1 NO-GAPS.
  task_tab-task_name = g_task_name.
  APPEND task_tab.

  CLEAR g_msg_text.
  CALL FUNCTION 'ZMIO_GET_MARD_DATA'
    STARTING NEW TASK g_task_name
    DESTINATION IN GROUP p_grp
    PERFORMING decrease_wp ON END OF TASK
    EXPORTING
      i_filename            = task_tab-filename1
    TABLES
      i_matnr               = r_matnr
      i_werks               = r_werks
    EXCEPTIONS
      resource_failure      = 1
      communication_failure = 2 MESSAGE g_msg_text
      system_failure        = 3 MESSAGE g_msg_text.

  CASE sy-subrc.
    WHEN 0.
*     Task dispatched successfully
      ADD 1 TO g_num_running.
      g_num_submitted = g_num_submitted + 1.
    WHEN 1.
*     No free work process: log the failure, then wait for a running task to end
      error_rec-task_name = g_task_name.
      error_rec-filename1 = task_tab-filename1.
      APPEND error_rec.
      APPEND it_exp_t001w TO it_err_t001w.
      ADD 1 TO g_num_err.
      g_hold_num = g_num_running.
      WAIT UNTIL g_num_running < g_hold_num OR
                 g_hold_num = 0
                 UP TO 5 SECONDS.
    WHEN OTHERS.
*     Communication or system failure: keep the message text
      error_rec-task_name = g_task_name.
      error_rec-filename1 = task_tab-filename1.
      error_rec-msg_text  = g_msg_text.
      APPEND error_rec.
      APPEND it_exp_t001w TO it_err_t001w.
      ADD 1 TO g_num_err.
  ENDCASE.
ENDFORM.                    " start_onhand_extract_task

  • Parallel processing using a BLOCK step

    Hi,
I have used parallel processing with a BLOCK step. I have put in a multiline container element. In the BLOCK step, I have visibility of another container element generated because of the BLOCK step (multiline container_LINE). The required number of parallel processes are being created, but the problem is that the value in multiline container_LINE is not getting passed to the send-mail step. I have checked the binding; everything is OK. Please help.
    Sukumar.

    Hi
When I am sure that I am doing a binding properly but it doesn't work, then:
1. I activate the workflow template definition (take a joke).
2. I write the magic word
/$sync
in the command line, to refresh the buffers.
3. I delete the strange binding defined with drag & drop and define it one more time using the old method from the former binding editors in R/3 4.6c systems. I take container elements from the lists of possible entries or I write their names directly. I don't use drag & drop.
    Regards
    Mikolaj
    There are no problems, just "issues" and "improvement opportunities".

  • Parallel processing for program RBDAPP01

    Hi All,
I am running program RBDAPP01 daily, every 30 minutes, to clear the error IDocs (status 51, "Application document not posted") that fail with the status message "Object requested is currently locked by user ADMINJOBS". When I run this job it only clears a few IDocs because of that lock message. That is, when one IDoc is being updated and a second one tries to update at the same time for the same order, same customer, same material, and same plant but a different ship-to party, it finds the object locked and cannot be posted.
Can anyone tell me what parallel processing is, and whether it will help in my case?
    Thanks

You didn't specify which release you use, so I can just give some suggestions:
    Note 547253 - ALE: Wait for end of parallel processing with RBDAPP01
    Note 715851 - IDoc: RBDAPP01 with parallel processing
    Markus
