Workflow hooks to improve routing process to multiple people

Friends,
I am new to Workflow, so this is a general question: what workflow hooks can be used to improve the process of routing to multiple people?
Points will be awarded for helpful answers.
Thanks.

I think this is the best book available:
http://www.workflowbook.com/
If you have further queries, you can treat the SDN BPM and Workflow forum as a repository where you can find most of the answers.
Thanks
Arghadip

Similar Messages

  • Improving ODM Process Performance

    Hi Everyone,
    I'm running several workflows in the SQL Developer Data Miner tool to create my model. My data is around 3 million rows; to monitor the process I look at Oracle Enterprise Manager.
    From what I've seen in Oracle Enterprise Manager, most of the ODM processes from my modelling do not run in parallel, and sometimes a process does not finish within a day.
    Any tips or suggestions on how we can improve ODM process performance? By enabling parallelism on each process/query, maybe?
    Thanks

    Ensure that any input table used in modeling or scoring has its PARALLEL attribute set properly. Since mining algorithms are usually CPU bound, try to utilize whatever CPU power you have. The following might be a good starting point:
    ALTER TABLE myminingtable PARALLEL <Number of Physical Cores on your Hardware>;
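As a rough illustration, the DDL above can be generated for a whole set of input tables at once, sized to the local core count. This is a small sketch (Python; the table names and helper are hypothetical, not part of ODM):

```python
import os

def parallel_ddl(tables, degree=None):
    """Emit an ALTER TABLE ... PARALLEL statement for each mining table.

    `degree` defaults to the local CPU count, mirroring the advice to
    match the degree of parallelism to the cores on the hardware.
    """
    if degree is None:
        degree = os.cpu_count() or 1
    return ["ALTER TABLE {} PARALLEL {};".format(t, degree) for t in tables]

# Hypothetical build/apply tables used by an ODM workflow.
for stmt in parallel_ddl(["MINING_BUILD_DATA", "MINING_APPLY_DATA"], degree=8):
    print(stmt)
```

Note that `os.cpu_count()` reports logical CPUs, which may be higher than the physical core count the original advice refers to.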

  • Parallel processing on multiple IdM instances -- real enterprise class

    Hi all. We run what we earnestly hoped would be a true enterprise class IdM v6 SP1 installation: 2x v440 Sol9 app servers each with 2 IdM instances (AS virtual servers), each host dual CPU and 4GB RAM available; connected to a 4 node dual CPU Oracle RAC cluster.
    But our performance, to use a technical term, sucks bollocks, and has done for >12 months. The main area where this hurts is when we run a DBTable ActiveSync process to update the user base and any associated resources.
    We suspect a few things, but ultimately the most frustrating and intractable problem is this: IdM processes all the Update User tasks stemming from the ActiveSync poll one by one, sequentially, in series, and all on the originating IdM instance. So if we have, say, 5000 updates to process, we watch 5000 sequential User Update tasks, one after the other. Even if each task takes only a couple seconds, we often notice inexplicable gaps of many seconds between one sequential task completing (start time + execution time) and the next beginning. The end result is a throughput rate of usually less than 300/hr -- more than 10hrs to process those 5000 updates!
    Despite setting the [custom] Update User wf to execMode='async', IdM seems to refuse to run these tasks in parallel. In an installation of this size and resource capacity, this is excruciating. Plus there's the fact that as I write, and as it crawls along, the IdM instance running the tasks is showing up in prstat like this:
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    28946 sentinel 2583M 2379M cpu3 28 0 87:27:53 25% appservd/177
    That's quite a lot of resource-use for not a lot of outcome...
    So, does anyone know: how can we get these tasks to run in parallel rather than sequentially, and how can we get the task load spread across all 4 available IdM instances rather than just the one that executed the ActiveSync poll?
    In long suffering desperation, any help greatly appreciated!
    PS - 2 things regarding parallel:
    1. on the TaskDefinition for the Update User wf, there is also an attribute named 'syncControlAllowed' -- anyone know what this means/does? Would setting it to 'false' perhaps give us true async/parallel mode?
    2. I suspect forcing the task into background could potentially give parallel behaviour, but the problem is that Update User is of course called by any user update task, and when done interactively the administrator may not wish to have the task run in the background.

    Hi,
    I'm afraid I don't know an easy answer to your question, but there are several things to consider:
    1) A 12s average update time seems like a lot; I'm sure that can be optimized. Have you thought about a dedicated sync form (assigned to the proxy admin used for ActiveSync)? If you have that, you might also consider setting viewOptions.Process there to use a simplified workflow optimized for your sync process. Take a look at the dynamic tabbed user form example as well: some of your resources may not be subject to synchronization, and those should be ruled out by setting the sync form's "TargetResources" property to an appropriate value.
    2) I recently found an interesting way of parallelizing tasks. This is potentially dangerous, as you have to build in something that prevents it from becoming a "fork bomb". Still, here is some example code for a simple scenario where you just want to sync global.email, populated by your AS form. If the workflow consists of "start", "nonblocking task launch" and "end", the "nonblocking task launch" would have an action like:
    <Action id='0'>
        <expression>
          <block>
            <defvar name='session'>
              <invoke name='getLighthouseContext'>
                <ref>WF_CONTEXT</ref>
              </invoke>
            </defvar>
            <defvar name='tt'>
              <new class='com.waveset.object.TaskTemplate'>
                <invoke name='getObject'>
                  <ref>session</ref>
                  <s>TaskDefinition</s>
                  <s>Nonblocking Workflow</s>
                  <map/>
                </invoke>
              </new>
            </defvar>
            <invoke name='setSubject'>
              <ref>tt</ref>
              <invoke name='getSubject'>
                <ref>session</ref>
              </invoke>
            </invoke>
            <invoke name='setVariable'>
              <ref>tt</ref>
              <s>accountId</s>
              <ref>user.waveset.accountId</ref>
            </invoke>
            <invoke name='setVariable'>
              <ref>tt</ref>
              <s>email</s>
              <ref>user.global.email</ref>
            </invoke>
            <invoke name='runTask'>
              <ref>session</ref>
              <ref>tt</ref>
            </invoke>
          </block>
        </expression>
    </Action>
    The workflow "Nonblocking Workflow" would then have accountId and email available in its variables, and if it is defined as "async" it really will be launched that way. Build in something that prevents your system from exploding...
    3) Probably safer than what I implied in 2) (I only used that for a totally different task that cannot explode): you could consider having several instances of the database table resource adapter. Let's say your primary key in the DB is "employeeId". If you define four separate resources, each handling only the employeeIds in one residue class of employeeId modulo 4, you can distribute the load among your cluster. I did a similar thing with flatfileactivesync before.
    4) Back to the 12s average again: if you don't have any slow resources, this could mean that the parallel resource limit is kicking in. Take a look in waveset.properties for limits like this.
    Synchronization is not the out-of-the-box strength of IdM, but with some optimization you should be able to get reasonable results.
    Regards,
    Patrick
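    The modulo-based partitioning in point 3 can be sketched generically. This is a hedged illustration in Python (the function and IDs are hypothetical; in IdM the buckets would be four separate resource adapter instances, not in-process lists):

```python
def partition_by_modulo(employee_ids, instances=4):
    """Split the update load into `instances` buckets by employeeId modulo.

    Each bucket would be handled by its own resource adapter instance,
    so the sequential ActiveSync stream is spread across the cluster.
    """
    buckets = {i: [] for i in range(instances)}
    for emp_id in employee_ids:
        buckets[emp_id % instances].append(emp_id)
    return buckets

# Example: 12 hypothetical employee IDs spread over 4 instances.
buckets = partition_by_modulo(range(100, 112), instances=4)
```

Because the residue classes are disjoint and cover every ID, no update is processed twice and none is missed, which is what makes the scheme safe to run on independent instances.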

  • MDG-F 7.0 Processing of Multiple Objects

    Hello all,
    Under "Processing of Multiple Objects", there are a few functions, listed below, that I don't quite understand. What does each function actually do?
    Create Hierarchy Change Request for Cost Center Group
    Process Hierarchy for Cost Center Group
    (I believe these are a pair of functions; I need to do both.)
    The other pair is called:
    Create Mass Change Request for Cost Center Group Hierarchy
    Process Cost Center Group Hierarchy
    Thanks,
    LUO

    Hi Luo,
    Under Processing of Multiple Objects --> Create Hierarchy Change Request for Cost Center Group includes all three entity types, i.e. CCTR, CCTRG and CCTRH. This is the standard change request type "CCGHP1". Here you can specify objects for all the entity types and also perform collective processing: you need only one change request to do all the activities, and you can create the entries and the hierarchy structure within it.
    For the CR type "CCGHP1", assign the workflow steps (submission/activation/revision).
    Three separate change requests: the alternative would be to first create the cost center in one change request, then the cost center group in another, and the hierarchy in a third. Either way, you will need one more change request of type CCGHP1 (Create Hierarchy Change Request for Cost Center Group) if you want to process the entities together to create the hierarchy.
    Regards,
    Reema

  • Processing the multiple sender xml one by one in a time gap to RFC

    Dear Experts,
    I have to process multiple sender XML files one by one from FTP to RFC, with a time gap between them.
    For example:
    I place 10 XML files in an FTP path at once; PI picks up all 10 files and processes them to RFC at the same time.
    Is there a way to process the files one by one through PI, with a time gap, to RFC?
    That is, PI needs to process the 10 files one by one: once the first file has been processed successfully from FTP to RFC, the next file should be processed after a time gap, to avoid errors on the RFC side.
    Kindly suggest your ideas or share some links on how to process these multiple files.
    Best Regards,
    Monikandan.

    Hi Monikandan,
    You can use CE BPM with PI 7.1, but first check Anupam's suggestion in the thread below:
    reading file sequentially from FTP using SAP PI file adapter
    Regards,
    Nabendu.
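    Outside of PI, the requested behaviour can be sketched generically: process files strictly in order, pause between successes, and stop at the first failure. This is a hedged Python sketch (the `send` callback is a hypothetical stand-in for the FTP-to-RFC step, not a PI API):

```python
import time

def process_sequentially(files, send, gap_seconds=1.0):
    """Send each file in order, waiting `gap_seconds` between successful
    sends; stop at the first failure so the RFC side is never flooded."""
    done = []
    for name in files:
        if not send(name):          # send() stands in for the FTP -> RFC step
            break                   # stop on first failure, as requested
        done.append(name)
        time.sleep(gap_seconds)
    return done

# Example: a sender that accepts every file, with no delay for the demo.
processed = process_sequentially(["a.xml", "b.xml"], send=lambda f: True,
                                 gap_seconds=0)
```

Inside PI itself the same effect is usually achieved with EOIO (exactly-once-in-order) queues or a BPM step, as the reply above suggests.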

  • Dump during processing on multiple work process.

    Hi ,
    I have designed a program "report1" which creates variants for another program "report2" and schedules those variants in the background on multiple work processes.
    When "report1" is run, it creates 19 variants for "report2", and those variants are scheduled for background processing on multiple work processes.
    My problem is that when I execute "report1", 7 of the 19 jobs are cancelled due to a dump.
    The dump analysis is as follows:
    Runtime errors         TSV_TNEW_PAGE_ALLOC_FAILED
    Short dump has not been completely stored. It is too big.
    No storage space available for extending the internal table.
    What happened?
    You attempted to extend an internal table, but the required space was
    not available.
    Error analysis
    The internal table (with the internal identifier "IT_77") could not be
    enlarged any further. To enable error handling, the internal table had
    to be deleted before this error log was formatted. Consequently, if you
    navigate back from this error log to the ABAP Debugger, the table will
    be displayed there with 0 lines.
    When the program was terminated, the internal table concerned returned
    the following information:
    Line width: 1240
    Number of lines: 340000
    Allocated lines: 340000
    New no. of requested lines: 10000 (in 1250 blocks).
    Last error logged in SAP kernel
    Component............ "EM"
    Place................ "SAP-Server SR-3110_ISI_30 on host SR-3110 (wp 26)"
    Version.............. 37
    Error code........... 7
    Error text........... "Warning: EM-Memory exhausted: Workprocess gets PRIV "
    Description.......... " "
    System call.......... " "
    Module............... "emxx.c"
    Line................. 1886.
    Source code extract
    FORM get_new_info .

      DATA: wa_res LIKE zrisu_openitems_view,
            it_temp_final TYPE TABLE OF zrisu_openitems_view.

      REFRESH: it_temp_final, it_final.
    * collect the free memory
      CALL METHOD cl_abap_memory_utilities=>do_garbage_collection.

    * First find all the open FI document items on the system satisfying
    *  the selection criteria.
      SELECT opbel opupw opupk opupz gpart vtref vkont abwbl abwtp stakz
             waers studt betrw mahnv blart xblnr
             INTO CORRESPONDING FIELDS OF TABLE it_temp_final
             FROM dfkkop
             PACKAGE SIZE c_blksiz
             FOR ALL ENTRIES IN s_opbel
             WHERE opbel = s_opbel-low
               AND augst = space
               AND gpart IN s_part
               AND abwtp = space
               AND abwbl = space
               AND bldat IN s_bldat
               AND blart IN s_blart.
        APPEND LINES OF it_temp_final TO it_final.  " <-- dump occurs at this statement
        REFRESH it_temp_final.
      ENDSELECT.
      REFRESH s_opbel.

      SORT it_final BY opbel opupw opupk opupz.

      LOOP AT it_final INTO wa_final.
    * Check the document date if relevant
        IF wa_res-opbel EQ wa_final-opbel.
    * Use information from the last record if same document to make the
    *   program faster.
          IF wa_res-txt EQ 'DELETE NEXT RECORD'.
    * The record should be deleted for the same reason as the last item
            DELETE it_final.
            CONTINUE.
          ELSE.
            wa_final-bldat = wa_res-bldat.
          ENDIF.
        ENDIF.
    If I run the jobs for which I got the dump individually, one after another, they execute successfully.
    Please help me in this regard.
    Can I use field-symbols in this case to avoid the dump?
    Thanks in advance.
    Taj

    Hi,
    It seems the memory available to a particular process in your case was not enough to continue.
    Could you try implementing the same thing by converting report2 into a function module (FM) and calling it as:
    CALL FUNCTION func STARTING NEW TASK task
    Now, instead of creating different variants, you could just loop over the FM with different values.
    If this does not seem feasible, restrict your data somehow.
    Revert with findings!
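    The underlying idea, looping over smaller work packages instead of building one huge internal table, can be sketched generically. This is a hedged Python illustration (the `process` callback is a hypothetical stand-in for the function module call, not an SAP API):

```python
def run_in_packages(rows, process, package_size=10000):
    """Process `rows` in fixed-size packages, mirroring the advice to call
    the function module once per chunk rather than on the whole table.

    Peak memory is bounded by `package_size` rows per call instead of the
    full data set, which is what TSV_TNEW_PAGE_ALLOC_FAILED ran out of.
    """
    results = []
    for start in range(0, len(rows), package_size):
        package = rows[start:start + package_size]
        results.append(process(package))  # each call could run as a new task
    return results

# Example: count rows per package for 25 hypothetical rows.
counts = run_in_packages(list(range(25)), process=len, package_size=10)
```

Each call sees only one bounded slice, so whether the calls run sequentially or as parallel tasks, no single work process has to hold all 340,000 lines at once.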

  • How to check which action is completed in a Process having Multiple Actions

    Hi,
    In a GP that we designed, a process contains multiple actions. What we want to achieve is that after the 1st action is completed, i.e. once the 1st approver approves the request, the next action is executed, wherein a notification work item is sent to the initiator's UWL telling them whether the approver rejected or approved the request.
    My question: when an initiator starts a process, he/she cannot start the same process again until the process is finished. The requirement now is that once the 1st-level approval (the 1st action) is completed, we need to allow the initiator to start a new process of the same name even though the 2nd action of the previously started process is not yet completed.
    Please let me know whether I can find out if a particular action in a process has been completed or not.
      Thanks.
    Regards
    Tushar

    Hi,
    Any possible inputs for my query?
    Do let me know your suggestions.
    Thanks
    Best Regards
    Tushar.S

  • Problem initiating process with multiple operations using HTTP/SOAP

    Hi,
    I have defined a process which has multiple operations. My process starts with a pick activity containing onMessage branches for each possible operation. When I initiate this service using the BPEL console, I choose one of the operations and everything works fine.
    However, when I initiate this process via an HTTP/SOAP web service call from JMeter, the first branch is always executed regardless of the message I send.
    My operations are document/literal. Due to some restrictions, I cannot define a SOAPAction for the operations. Could this be the problem? Is there a workaround for it? If this is not the problem, what could be the cause?
    I'm using version 10.1.2.1.
    Any help will be appreciated. Thanks in advance..

    Hi, I am using 10.1.3 and I still cannot initiate a process with multiple operations. Can anybody tell me whether the pick activity works correctly, and whether there are any points to consider?
    If there is a problem, is there any solution for making a process with multiple operations work?

  • The IP Routing Process for VLANs?

    I have the CCNA Study Guide, third edition. Chapter five, page 254, has a description of the IP routing process. It describes the IP routing process using two nodes on different subnets and a router.
    We've recently deployed VLANs, so I'm asking: if I replace the router with a switch in that description, do the principles still apply? Does anyone know of a link that describes the IP routing process when using VLANs? I'm looking for documentation similar to what's in the book, but modernized.
    tia

    I do not have the book, but I will try to help:
    Switches don't do inter-VLAN routing (unless you are using a Layer 3 switch), so if you have different VLANs configured on your switch and want them to communicate with each other without a router, you can't.
    If you have a router, the same principles as normal routing apply. Remember that VLANs are just a way of dividing your clients, so the VLANs configured on your subinterfaces will appear as directly connected in your IP routing table. For example:
    VLAN 10: 192.168.10.0
    VLAN 20: 192.168.20.0
    VLAN 30: 192.168.30.0
    If you have VLANs 10 and 20 locally on a LAN interface (they will appear as directly connected in your IP routing table) but VLAN 30 is reachable through a serial interface (PPP or Frame Relay), you would use:
    ip route 192.168.30.0 255.255.255.0 serialx/x
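    The resulting lookup behaviour can be sketched with a tiny longest-prefix match. This is a hedged Python illustration using the routes above (a real router's FIB is far more involved; the helper is hypothetical):

```python
import ipaddress

# Directly connected VLAN subnets plus the static route from the reply.
routes = {
    ipaddress.ip_network("192.168.10.0/24"): "VLAN 10 (connected)",
    ipaddress.ip_network("192.168.20.0/24"): "VLAN 20 (connected)",
    ipaddress.ip_network("192.168.30.0/24"): "Serial x/x (static)",
}

def lookup(dest):
    """Return the route entry for the longest matching prefix, if any."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    if not matches:
        return None
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(lookup("192.168.30.7"))   # -> Serial x/x (static)
```

Whether a network is learned from a connected subinterface or a static `ip route` statement, the lookup itself is the same: match the destination against the prefixes and take the most specific one.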

  • Workflow tasks related to error processing when "invoices received via EDI"

    Hi all,
    Please explain the workflow tasks related to error processing when "invoices received via EDI".
    Please give me the information in detail.
    Thanks in advance
    Chakri

    Hi Sven,
    Please implement SAP Note: 1321676 in your backend system to solve the current issue.
    But later you will have to implement SAP Note: 1380788 also in your backend system to solve some other issues.
    Regards,
    Binson

  • 2 - way routing process in an ESB

    Hi !
    I want a 2-way routing process in an ESB.
    My requirement: there are two tables, say Emp and Company.
    I will select some data from Emp and, after some transformation, insert that data into the Company table.
    When I use polling it works fine, but when I try to use select and insert queries, the router only accepts the select WSDL, not the insert WSDL.
    Please help me with this.

    Hi,
    I am afraid there is no way to have two routing nodes in one flow, but you can use a service call to get the same job done.
    Regards,
    Emir

  • Improve cube process time

    Hi,
    I am processing a cube with Process Full; it takes around 15 minutes. I planned to improve this by adding indexes to the database. I used SQL Trace while processing and later fed the results to the Database Tuning Advisor to get recommendations. I applied the recommended changes to the database and then processed the cube again, but there was no improvement. I even tried taking the most expensive queries from the trace and running them in SSMS with the query execution plan, but I am not able to figure it out. Please help.
    Thanks
    sush

    Hi susheel1347,
    According to your description, you want to improve the cube processing. Right?
    In Analysis Services, cube processing is performed by executing Analysis Services-generated SQL statements against the underlying relational database. Here is some advice on improving cube processing:
    Use integer keys if at all possible
    Use query binding to optimize processing
    Partition measure groups if you have a lot of data
    Use ProcessData and ProcessIndex instead of ProcessFull
    For more information, please refer to links below:
    SQL Server Best Practices Article
    Improving cube processing time
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support

  • 6513 ACL List ION Routing Process

    Hello everyone, I have a question about the "ION Routing Process" that seems to show up on the 6513 Cisco switch.
    When I do a show access-list Test-ACL,
    it shows me the Test-ACL list; then, right after that, under "ION Routing Process", the output lists all of the ACLs that are on the device.
    We have two 6513s and they both do this, but when I check two 6509s, they don't.
    I also checked a 3750E; it doesn't do this either.
    Can anyone explain this to me?

    Craig
    What you are asking for can be done, but it will be a bit tedious. You describe having 20 entries in the topology table and wanting to control which one gets placed into the routing table. To do that, you can either make the composite metric of that one entry more attractive or make the composite metrics of the other 19 less attractive.
    When you use an offset list, you add to the metric of an entry. It will not make any entry more attractive, but it can make other entries less attractive. So you could configure an offset list identifying the route in question and apply some offset (100 should work, for example). You would then need to apply this offset to the 19 neighbors that are advertising the route.
    HTH
    Rick

  • Routing process

    Hi,
    While debugging ICMP traffic originated from a router (debug ip packet detail), it shows the packets as routed via FIB and the replies as routed via RIB. Does this mean that ICMP packets originating from the router are CEF switched? If so, will CEF-switched packets show up in the debug output? Documents state that only process-switched packets can be debugged.
    One more doubt regarding the routing process:
    documents state that the routing process actually consists of three processes: 1. routing, 2. switching, 3. encapsulation.
    So, regardless of the switching method (process, fast, or CEF), does the router do a routing-table lookup in the first process (routing)?
    thanks
    Arun Mohan

    hi,
    Routing is one of the master data objects used in the PP module.
    For a detailed explanation, please follow the link:
    http://help.sap.com/erp2005_ehp_04/helpdata/en/7e/d42611455911d189400000e8323c4f/frameset.htm
    Madhava

  • How to Improve Delete Process

    How can we improve the delete process for file reloads of the same month of data on cubes? We currently use an SAP-generated program for selective deletion to do this. It would be more efficient to delete by InfoPackage request (or packet) and use delivered processes in process chains to do so. How can we use the more efficient delete processes in process chains to perform this task? Could you explain the steps, as that would make it clearer?

