Parallel processing in workflow with fork

Hello,
I have a case in a production system. The workflow uses parallel processing with a fork, and the fork has 2 branches as inputs.
Both branches are defined as necessary, with no other condition.
Does anyone know of any scenario in which the workflow proceeds with only one branch executed even though both branches are mandatory?
Thanks.

Hi,
Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
http://odiexperts.com/interface-parallel-execution-a-new-solution
http://odiexperts.com/processing-multiple-interface-through-single-package
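
For background, a fork like this continues once its configured number of necessary branches has completed, so it is worth double-checking that the step really requires 2 branches and not effectively 1. The sketch below only illustrates that "N of M" join behaviour in plain Python; it is not SAP workflow code, and the branch names and durations are made up.

    # Toy "N of M" join: the fork continues once `necessary` branches finish.
    # With necessary=2 it waits for both; with necessary=1 it moves on after one.
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED, ALL_COMPLETED
    import time

    def branch(name, seconds):
        time.sleep(seconds)
        return name

    def run_fork(necessary):
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(branch, "branch_1", 1),
                       pool.submit(branch, "branch_2", 3)]
            done, _ = wait(futures,
                           return_when=ALL_COMPLETED if necessary == 2 else FIRST_COMPLETED)
            print("necessary=%d -> continue after %s"
                  % (necessary, sorted(f.result() for f in done)))

    run_fork(necessary=2)   # waits for both branches before continuing
    run_fork(necessary=1)   # proceeds after the first branch completes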

Similar Messages

  • Defining index in Dynamic parallel processing of workflow

    Hi all,
    I am using the dynamic parallel processing feature in workflow for a particular multiline element, but I am not able to define the index in that particular task.
    Consider that I have a multiline element "Material". For this element I need to loop the task over n records, so I am using dynamic parallel processing. Now, for each work item generated, I need to show that particular material in the work item. I remember that we need to use an index, but I can't recall how it is defined.
    Could anyone help me in this regard?
    Thanks in advance

    Nikhil,
    When you use dynamic parallel processing, the index is available in _Wf_ParForEach_Index. A reference to the line of the multiline element is automatically generated for each work item created. You can see this in the Binding Editor for the step; in your case this will be "Material()". When you drag this element into the WF-to-step binding window, it is resolved as &Material[&_Wf_ParForEach_Index&]&. Therefore you can get the material for each work item by defining "Material" in your task container (not as multiline) and doing the appropriate binding. If you in fact need the index in your method, you can define a container element in your task with reference to type SWC_INDEX and bind it to _Wf_ParForEach_Index.
    Cheers,
    Ramki Maley.
    Please reward points if the answer is helpful.
    For info on awarding points click on this link: https://www.sdn.sap.com/sdn/index.sdn?page=crp_help.htm

  • Parallel Processing - In conjunction with TWS scheduler.

    We have a .Bat file that uses upshell.exe to execute a custom script we've created.
    This custom script looks for files in the inbox and moves them to the relevant OpenBatches/OpenBatchesML folder. Once the files have been moved, it runs a parallel process up-to-load for all the data. However, what we're finding is that because we're running this in parallel, the .bat script completes even though FDM is still processing in the background. This is evident because the TBATCH table shows the batch as not 100% complete. Unfortunately this causes us a problem, as the scheduler (TWS) thinks all processing has finished and subsequent downstream processing kicks off. As I understand it, this wouldn't be an issue if we'd used serial processing rather than parallel.
    We chose parallel because of the volumes and the limitation of our batch window. Am I correct in assuming that, in principle, parallel will process data faster than serial? I am currently running performance tests on my data to prove this.
    The real issue is that we somehow need to ensure TWS doesn't kick off downstream processes, and I need to have FDM create a log file for batches that complete successfully. I'm assuming it should be done within an event script, but I'm not sure which one, as this is quite new to me.
    Has anyone come across this issue themselves? If so, I'm looking for some guidance/examples of how you've managed to get round the problem.
    Thanks in advance.

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package
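
    One way to keep TWS from moving on too early is to have a small wrapper wait on the batch status before writing the completion log the scheduler looks for. The sketch below is a rough illustration in Python, not FDM event-script code: the connection string, the TBATCH column names (BatchID, BatchStatus) and the "100 = complete" convention are assumptions to adapt to your own FDM schema.

    # Minimal polling sketch: wait until the FDM batch reports complete,
    # then write a marker file for the TWS scheduler to pick up.
    # NOTE: connection string, column names and status value are assumptions.
    import sys
    import time
    import pyodbc

    CONN_STR = "DRIVER={SQL Server};SERVER=fdmhost;DATABASE=FDMAPP;Trusted_Connection=yes"
    TIMEOUT_SECONDS = 4 * 60 * 60       # give up after 4 hours
    POLL_INTERVAL = 60                  # check once a minute

    def batch_complete(cursor, batch_id):
        # Hypothetical query - replace with the real TBATCH status columns.
        cursor.execute("SELECT BatchStatus FROM TBATCH WHERE BatchID = ?", batch_id)
        row = cursor.fetchone()
        return row is not None and row[0] == 100   # assumed: 100 = fully processed

    def wait_for_batch(batch_id):
        deadline = time.time() + TIMEOUT_SECONDS
        with pyodbc.connect(CONN_STR) as conn:
            cursor = conn.cursor()
            while time.time() < deadline:
                if batch_complete(cursor, batch_id):
                    # Only now create the log file TWS waits for.
                    with open("batch_%s.ok" % batch_id, "w") as f:
                        f.write("batch complete\n")
                    return True
                time.sleep(POLL_INTERVAL)
        return False

    if __name__ == "__main__":
        sys.exit(0 if wait_for_batch(sys.argv[1]) else 1)

    Called at the end of the .bat file, after the parallel load has been kicked off, this keeps the job step running until FDM is genuinely done, so TWS only triggers downstream processing once the marker file exists.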

  • Parallel Processing in ODI with Unique Requirement

    Hi All,
    I have a unique requirement: I have a scenario that I have to run multiple times in parallel with different user inputs.
    For example, if the user inputs A, B, C, then I have to run the same scenario 3 times, with the input filter set to A, B, and C respectively, in parallel.
    If the user inputs A, B, C, and D, then I have to run the same scenario 4 times, with inputs A, B, C, and D, in parallel.
    Does anyone know a way to achieve this?
    Thanks in advance for your suggestions...
    Regards
    Edited by: 872116 on Jul 12, 2011 11:32 PM

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package
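
    If the list of inputs is only known at run time, one pragmatic option (sketched below, alongside the package-based techniques in those articles) is to fan the values out to one agent invocation of the scenario per value and run them concurrently. The startscen location, the scenario name/version/context and the GLOBAL.INPUT_FILTER variable are illustrative assumptions; adapt them to your installation.

    # Launch the same ODI scenario once per user input, in parallel.
    # Command path, scenario identifiers and variable name are assumptions.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    STARTSCEN = "/odi/agent/bin/startscen.sh"    # assumed agent location
    SCENARIO = ("LOAD_SALES", "001", "GLOBAL")   # scenario name, version, context

    def run_scenario(input_value):
        # Pass the per-run filter as a scenario variable (assumed name).
        cmd = [STARTSCEN, *SCENARIO, "-GLOBAL.INPUT_FILTER=%s" % input_value]
        return input_value, subprocess.run(cmd).returncode

    if __name__ == "__main__":
        user_inputs = ["A", "B", "C"]            # e.g. parsed from the user's input
        with ThreadPoolExecutor(max_workers=len(user_inputs)) as pool:
            for value, rc in pool.map(run_scenario, user_inputs):
                print("input %s finished with return code %d" % (value, rc))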

  • FORK is not doing parallel processing - it's working sequentially

    Hi,
    We are on PI 7.0, SP 13.
    I am trying to test parallel processing using a fork step (with two branches).
    My problem is that in SXMB_MONI both branches are not executed simultaneously; they execute one after the other.
    Has anybody done parallel processing in XI using BPM? Both calls have to finish at the same time - I mean, if the first call takes 10 minutes, the second call also has to finish within those first 10 minutes, not in another 10 minutes.
    I have heard of this problem on XI 3.0 and PI 7.0. Has anybody tested parallel processing with a fork step on PI 7.1?
    Please help me: will this issue be resolved if I go to PI 7.1?
    Regards,
    Venu.

    Hi Henrique,
    They would not necessarily start at the same time, but they also shouldn't be queued - The customer expects the response within 17 or 20 seconds; a 34-second response is not acceptable to them. Tomorrow we need to add some more targets, and again it should take 17 seconds. They are checking how PI can handle the multithreading. I am not sure whether this problem is fixed in PI 7.1 or not.
    Are there connection restrictions in your system? Check that - Where can I check connection restrictions? If you know, please throw some light on this.
    Also, how's your BPM transactional behavior (did you flag the "create new transaction" steps)?
    - I have not checked the flag for the "create new transaction" step; once my server is up I can check the flag and test.
    Regards,
    Venu.
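
    For reference, the behaviour Venu expects from the fork - both branches running concurrently so the total elapsed time is roughly the longest branch rather than the sum - can be sketched outside BPM like this (the 10-second sleeps stand in for the two calls and are purely illustrative):

    # If both calls really run in parallel, total elapsed time is ~max(t1, t2),
    # not t1 + t2. The sleeps simulate the two target calls.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def call_target(name, seconds):
        time.sleep(seconds)
        return name

    start = time.time()
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(lambda args: call_target(*args),
                                [("target_1", 10), ("target_2", 10)]))
    print(results, "elapsed %.1f s" % (time.time() - start))   # ~10 s, not ~20 s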

  • Parallel workflow in Process controlled workflow

    Dear experts,
    could you please answer -
    is it possible to use a parallel workflow in the process-controlled workflow?
    for example, there is a requirement that all the department managers should approve the RFx document or contract.
    Is it possible to simulate this process using the process controlled workflow in SRM 7.0?
    It seems that in the standard it is possible only via sequential approval - i.e. the first manager approves
    the document, then it goes to the second etc.
    Or all the managers receive the work item, but only one manager in fact approves it.
    Both variants are not suitable.
    Maybe there is another way to simulate a parallel workflow process?
    There is an option called decision sets, but it can be used for shopping carts only.
    Thanks a lot in advance,
    Andrey Averin.

    Hi,
    Yes - this is possible with a process-controlled workflow. I am doing a similar kind of workflow development, with category approvers: the approval is split by category, but you have to build the logic in such a way that it reads all the items and sends them to all the approvers. However, I have noticed that even though all approvers receive their work items in parallel, approver A cannot open his/her work item while approver B is in the process of approving/rejecting it (that is, while approver B has the work item open in detail); approver A then receives an error pop-up saying that approver B is working on this document.
    John.

  • BPM Parallel Process with Exclusive Gateway

    Hi,
    I am facing an issue with an exclusive gateway in a parallel process.
    The issue is that the process always stays in the In-Progress state at the parallel join. I mean the process stops at the parallel join, and moreover there are no errors in the process. If I delete the exclusive gateway in the parallel branch, the process goes on to the next human task through the parallel join, i.e. it works fine.
    I have designed my process in such a way that the first task is a human task, followed by a parallel split with two human tasks: one of them goes through an exclusive gateway and the other is just a simple approval. Finally I merge these two human tasks with a parallel join and then trigger the final approval human task, closing the process.
    I would appreciate your quick suggestions to fix this issue.
    Thanks in advance,
    Dev...

    Hi Unni,
    Thanks for your reply.
    I have checked all the parallel tasks and all are in the completed state. No errors.
    If I delete the exclusive gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
    Please let me know if I have missed anything.
    Thanks in advance,
    Dev
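
    A common cause of this symptom (an assumption worth checking, not a confirmed diagnosis) is that the exclusive gateway routes its branch's token along a path that never reaches the parallel join, so the join keeps waiting for a token that will never arrive. The toy sketch below models that waiting behaviour; it is plain Python, not the BPM runtime, and the 5-second timeout exists only so the demo terminates:

    # Toy model of a parallel join: it completes only when every incoming
    # branch delivers its token. If an exclusive gateway sends one branch
    # down a path that bypasses the join, the join waits forever.
    import threading

    branch_done = [threading.Event(), threading.Event()]

    def simple_approval_branch():
        branch_done[0].set()                 # always reaches the join

    def exclusive_gateway_branch(condition_met):
        if condition_met:
            branch_done[1].set()             # token reaches the join
        # else: token leaves via the other gateway path and never arrives

    threading.Thread(target=simple_approval_branch).start()
    threading.Thread(target=exclusive_gateway_branch, args=(False,)).start()

    joined = all(e.wait(timeout=5) for e in branch_done)
    print("join completed" if joined else "join still waiting - a token never arrived")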

  • Parallel process with reentrant VI have same value in both threads

    Has it been so long since I programmed LabVIEW that I forgot some basic stuff??
    I have a main VI which originally called dynamic processes in parallel.
    I then called the sub-VIs directly and still run them in parallel, but they now have separate names.
    I use a QSM. Each parallel thread now has its own QSM because, although I was using a queue name for each dynamic queue, the data being extracted was shared or split between the two threads. If that statement confused everyone, I shall explain.
    Two parallel processes, calling a QSM (re-entrant).  They have the same number of elements and matching sets.
    EX:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task2
    Task3                 Task3
    Task4                 Task4
    Task5                 Task5
    Task6                 Task6
    I was expecting each thread (each process) to extract from the queue the list of tasks as entered (from Task1 to Task6).  What the processes were getting was the following:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task3
    Task4                 Task5
    Task6                 default
    Each process was sending a different queue name to the QSM. Each queue should have its own name.
    I need to get this running by tomorrow with no excuses! So I decided to do a lame workaround by also having 2 QSMs. That fixed it.
    In each parallel process (which are a copy of each other with different names) there is a call to open a telnet session.  I probed and placed breakpoints in the code.  Although each process has a different name and the call to the function that opens the telnet session is re-entrant, the very same telnet reference number is assigned to both processes.
    Why?  Why would they get the same reference number?  I made all vi's down to (and including) Telnet Open Connection as re-entrant (although it was not needed) and it still assigns the same reference number to each telnet session.  Why?  What I am not seeing?  What am I missing?
    Unfortunately, I cannot post the code..  But it is not complicated code.  Just 2 sessions with different IP addresses.  I would expect different telnet session references... 
    As a matter of fact, I need to try something silly.. 

    I should provide more details with the solution....
    I just have to stop saying "D'Oh!!"
    Okay... here goes...
    In the LVOOP, I am using Notifiers and Semaphores to ensure that a race condition cannot occur.  Works well with previously written code.
    In this particular implementation, I have the same / similar object being created more than once (twice at this time).
    Where I went wrong (D'oh!!!!) was to have a static name for a given object when creating the Notifier and Semaphore references. Since the same name was given, so was the reference. Since the references were the same, so was the data, and so on.
    D'Oh!!!!!!
    D'Oh!!!!!!
    Now I know why a particular bird was called D'Oh-D'Oh bird... 
    D'Oh!!!  Such a silly mistake...
    D'Oh!!!
    Hope it makes a few people laugh...  or help another bird....
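
    For anyone hitting the same thing, the pitfall generalises beyond LabVIEW: any resource obtained by name is shared whenever the name is the same. A small illustrative sketch (Python standing in for the named notifier/semaphore idea, not LabVIEW code):

    # Resources obtained by name are shared whenever the name matches,
    # much like LabVIEW named notifiers, semaphores, or queues.
    import queue

    _registry = {}

    def obtain_notifier(name):
        # Same name -> same underlying object, regardless of which caller asks.
        return _registry.setdefault(name, queue.Queue())

    # Two object instances created with the same static, hard-coded name:
    a = obtain_notifier("MyObjectNotifier")
    b = obtain_notifier("MyObjectNotifier")
    print(a is b)                       # True - they silently share data

    # Fix: derive a unique name per instance.
    c = obtain_notifier("MyObjectNotifier_1")
    d = obtain_notifier("MyObjectNotifier_2")
    print(c is d)                       # False - each instance has its own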

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English. ^^
    I have been working for days on my project, where I have to show a parallel process while transferring information in different ways, and the problems that come with each (like timing and so on).
    I chose to transmit the information to a parallel process by (1) a queue and by (2) a file (.txt). (Other ways are welcome - do you have 1-2 other ideas?)
    To solve this problem I made three while loops: the first one is the original, where the original information (as a signal) is created and sent by queue and by file to the other two while loops, where this information is evaluated to recreate the same signal.
    So in the end you can compare all the signals and check whether they are the same - so that you can answer the question about the parallelism of the processes.
    But in my VI file I have some problems:
    the version with the queue works pretty well - it's almost parallel -
    but the version with the file doesn't work in parallel, and I have no idea how I can solve it.
    I'm a newbie. ^^
    Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi ‏23 KB

    LapLapLap wrote:
    Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, and Notifiers (and why not web services).
    Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend that. If you use a RAM disk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV
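
    For contrast, here is a minimal sketch (plain Python, not LabVIEW) of the two hand-off styles being compared: a producer loop feeding one consumer through an in-memory queue and another through a text file that the consumer polls. The file name and timings are made up; the point is that the queue consumer sees each value as soon as it is produced, while the file consumer is limited by how often it re-reads the file, which is why the file version never looks truly parallel.

    # Two hand-off styles between a producer loop and consumer loops:
    # (1) an in-memory queue, (2) a text file the consumer polls.
    import queue, threading, time

    q = queue.Queue()
    FILE_PATH = "signal.txt"          # illustrative path

    def producer(n=5):
        for i in range(n):
            value = i * i             # the "signal" sample
            q.put(value)              # queue consumer gets it immediately
            with open(FILE_PATH, "w") as f:
                f.write(str(value))   # file consumer only sees it on its next poll
            time.sleep(0.1)
        q.put(None)                   # sentinel: tell the queue consumer to stop

    def queue_consumer():
        while (value := q.get()) is not None:
            print("queue consumer got", value)

    def file_consumer(polls=10):
        for _ in range(polls):
            try:
                with open(FILE_PATH) as f:
                    print("file consumer read", f.read())
            except FileNotFoundError:
                pass
            time.sleep(0.15)          # the polling interval caps how "parallel" this feels

    threads = [threading.Thread(target=t) for t in (producer, queue_consumer, file_consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()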

  • Use of global data in transformation with parallel processing

    Hi,
    In an upgrade I have a global variable in a routine in a 3.5 transfer rule. The global variable keeps count of an ID.
    The global variable is used in every data package, because the corresponding InfoPackage is set to PSA only, where the help says:
    "If you select this processing option and request processing is then done serially during loading, the global data are maintained as long as the process with which the data were processed remains."
    In 7.0 I also use a global variable to do the same thing, but an InfoPackage is no longer available because we use DTPs now.
    How can I store my global variable so that all parallel processing of the data packages uses the same global variable? Or can I set an option so that the DTP does serial processing, similar to the InfoPackage setting in 3.5?
    As a workaround I increased the package size to a very large figure, but I would be more comfortable with a sounder solution.
    thanks

    Hi Max,
      Try to declare your global variable in the start routine, between the lines:
    $$ begin of global - insert your declaration only below this line  -
    $$ end of global - insert your declaration only before this line   -
    Best Regards.
    Javier Gómez

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running an APO DP process chain with parallel processing in our company and are experiencing some issues with the run time of the process chain. I need your help on the points below:
    - What are the ways we can optimize the process chain run time?
    - Are there special points we need to take care of when parallel processing profiles are used in the process chain?
    - Is there a specific sequence to be followed for the different processes in the process chain - is there a best practice?
    - Are there any notes suggesting ways to improve system performance for APO version 7 with enhancement packs 1 and 2?
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize performance of the process chains (background jobs) in APO system.
    First I would recommend that you identify the pain areas (steps) that are finishing with the longest runtimes. Each of those steps then has its own approaches for reducing the runtime.
    For example, you may end up with steps such as InfoPackage executions, DTPs, or DP mass-processing jobs that are running with long runtimes. Target each of them differently and find ways to optimize, while keeping the approach technically feasible from a Basis perspective (system load and utilization) as well.
    As for parallel processing, you can use it for the different jobs and explore it further: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

  • Parallel workflow with final Reviewer in 10.1.3

    Hi,
    I am looking at how to implement the equivalent of the 'parallel workflow with final reviewer' pattern (used in 10.1.2) in our 10.1.3 BPEL version.
    When I used Group Vote + Single Approver, it did not give me what I want. Also, how do I access the subtasks from a given parent task?
    How do I create a parent task with subtasks?
    thanks
    BG.

    Hi Karl!
    I have the same problem! Did you find any solution for it?
    Thanks,
    Nuno Sénica.

  • Strange response time with a query triggering parallel processing

    Hi,
    Is there anyone who can run the SQL below on RAC 11g R2 and give me feedback on how long it took?
    In my environment it takes more than a minute, and I suspect there is a problem with the database and parallel processing.
    If I collect the values from the inner select and turn them into a list of comma-separated values, it goes fast.
    Replace SCHEMA_NAME with an existing schema name in your database before running.
    SELECT sa.*
      FROM gv$sqlarea sa
      WHERE sa.sql_id in (SELECT sql_id from gv$sqlarea where parsing_schema_name like '%SCHEMA_NAME%' and rownum < 100)
    Thanks!

    You are using the hint a bit incorrectly.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements006.htm#SQLRF50805
    <quote>
    The NO_PARALLEL hint overrides a PARALLEL parameter in the DDL that created or altered the table. For example:
    SELECT /*+ NO_PARALLEL(hr_emp) */ last_name
    FROM employees hr_emp;
    </quote>
    Check Metalink note 267330.1 as well.

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
    For example, if I use a profile with 4 parallel processes and run a network heuristic to process 5 location products, I get an incomplete planning result.
    Is this expected behaviour? What are the good practices for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement it must first plan the distribution center and then the supplying plant, and in the case of in-house production it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks and processed at the same time. This can disturb the planning sequence: before the final product is planned in one block, the component may already have been planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually may be a good practice. You can put related location products in one job and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada
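
    A toy sketch of the sequencing problem Ada describes (illustrative only, not APO logic): planned serially in low-level-code order, the finished product's dependent requirement reaches the component before the component is planned; split the same list into blocks that run concurrently and the component can be planned first, leaving the late-arriving requirement uncovered.

    # Toy model: requirements propagate from the finished product to its component.
    # Low-level-code order covers everything; planning the two "blocks" the other
    # way round (as can happen with parallel blocks) leaves the component short.
    demand = {"FINISHED": 100, "COMPONENT": 0}
    plans = {}

    def plan(product):
        plans[product] = demand[product]           # plan whatever demand exists now
        if product == "FINISHED":
            demand["COMPONENT"] += plans[product]  # dependent requirement appears

    # Serial, low-level-code order: FINISHED first, then COMPONENT.
    plan("FINISHED"); plan("COMPONENT")
    print("serial ->", plans)          # component plan covers the full 100

    # Blocks processed out of order: component first, finished product second.
    plans.clear(); demand.update({"FINISHED": 100, "COMPONENT": 0})
    plan("COMPONENT"); plan("FINISHED")
    print("blocks ->", plans)          # component planned 0; 100 stays uncovered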

  • Job fails with "Timeout for parallel process (for SID Gener.): 006000"

    Hello all,
    I'm getting the error below and am not able to find any issue on the Basis side. Can anyone please help with this?
    Job started
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Background process BCTL_4XU7J1JPLOHYI3Y5RYKD420UL terminated due to missing confirmation
    Process 000006 returned with errors
    Data pkgs 000001; Added records 1-; Changed records 0; Deleted records 0
    Log for activation request ODSR_4XUG2LVXX3DH4L1WT3LUFN125 data package 000001...000001
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Activation is running: Data target CRACO20A, from 1,732,955 to 1,732,955
    Overlapping check with archived data areas for InfoProvider CRACO20A
    Data to be activated successfully checked against archiving objects
    Parallel processes (for Activation); 000005
    Timeout for parallel process (for Activation): 006000
    Package size (for Activation): 100000
    Task handling (for Activation): Backgr Process
    Server group (for Activation): No Server Group Configured
    Parallel processes (for SID Gener.); 000002
    Timeout for parallel process (for SID Gener.): 006000
    Package size (for SID Gener.): 100000
    Task handling (for SID Gener.): Backgr Process
    Server group (for SID Gener.): No Server Group Configured
    Activation started (process is running under user *****)
    Not all data fields were updated in mode "overwrite"
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Report RSODSACT1 ended with errors
    Job cancelled after system exception ERROR_MESSAGE

    Thanks for the link, TSharma. I will try that today.
    UPDATE:
    I ran a non-parallel Data Pump export and just let it run overnight. This time it finished after 9 hours. In this run I set the STATUS=300 parameter in the PARFILE, which echoes STATUS updates to standard out every 300 seconds (5 minutes).
    As before, after 2 hours it had finished 99% of the export and then just emitted WAITING statuses for the last 7 hours until it finished. The remaining tables it exported (a few hundred) were all very small or had zero rows. There is clearly something going on that is not normal. I've done this expdp before on clones of this database and it usually takes about 2-2.5 hours to finish.
    The database is about 415 Gigabytes in size.
    I will update with what the trace finds, and I'm also opening a case with MOS.
