Modelling Collaborative BPM across the Value Chain: Evaluation of Approaches

Question: Which approach is easiest for modelling collaborative BPM across the value chain: A) centralised, B) decentralised, or C) peer-to-peer? What are the pros and cons? What other important factors are related to each approach?
Explanation of these approaches:
Centralised CBPM: ownership of the collaborative system lies with one central organisation, and the other partners participate in the collaboration through various UIs such as a portal. Example: collaboration between an automobile manufacturer and a dealer, where the manufacturer provides a portal for the dealer to place orders, manage its customers, and handle warranties and recalls.
Decentralised CBPM: in this approach every partner provides communication technology, such as Web Services, for its business partners to collaborate. Ownership is decentralised and more flexible.
Peer-to-Peer CBPM: ideally, in this approach every partner has the same technology, developed specifically for handling collaborative business processes. In this case the same modelling facility is available across the value chain.
I kindly request you to post your views on this topic, as it relates to my dissertation.
I welcome your views and guidance.
Regards,
Ganesh Sawant

Krishna,
--> Send step to send the file message with system ACK
By system acknowledgement, do you mean acknowledgement type application or transport?
--> After checking the status, call the RFC synchronously, i.e. a sync send step.
Can I call the RFC synchronously after that? If yes, please let me know the logic/reason behind it.
--> But where are you planning to have the mapping?
In the first two steps (receive and send) I am just picking up the file and posting it somewhere else, so no mapping is required. After posting the file, I use a graphical mapping program defined in the IR only for mapping the sync source message to the BAPI/RFC which is going to be triggered.
But I am getting an error related to the mapping...
Kindly correct me if I am wrong.

Similar Messages


  • Collaborative BPM approach evaluation

    I am doing a Masters dissertation in Collaborative BPM. My research topic is the evaluation of three approaches for collaborative BPM across the value chain.
    Explanation of these approaches:
    Centralised CBPM: ownership of the collaborative system lies with one central organisation, and the other partners participate in the collaboration through various UIs such as a portal. Example: collaboration between an automobile manufacturer and a dealer, where the manufacturer provides a portal for the dealer to place orders, manage its customers, and handle warranties and recalls.
    Decentralised CBPM: in this approach every partner provides communication technology, such as Web Services, for its business partners to collaborate. Ownership is decentralised and more flexible.
    Peer-to-Peer CBPM: ideally, in this approach every partner has the same technology, developed specifically for handling collaborative business processes. In this case the same modelling facility is available across the value chain.
    I found the following critical aspects for evaluating these three approaches:
    1] Autonomy
    2] Collaborative process modelling
    3] Monitoring, controlling, and analysis
    4] "Plug and Play" platform for collaboration (includes security, Web Services, various adapters for communication, and language support such as Java for easy custom enhancement)
    5] Governance
    I kindly request you all to post your views about the pros and cons of these approaches with respect to the aforementioned critical aspects.
    I welcome your questions and appreciate your valuable guidance.
    Thank you ,
    Regards,
    Ganesh Sawant

    Hi there,
    I am certainly not an expert (I only started last month), but let's hope I can be useful.
    2/ Install the feature pack. It is very easy: you just select "exclude previous participant" and they won't be assigned the task.
    7/ When you generate the task form there is an "ACTIONS" menu. Among the choices there is "Escalate".
    Someone with more experience will have to help you through the rest.
    Regards,
    Yanis

  • Preaggregation across value based hierarchy dimension in 11g

    Hi All,
    I have created a cube with 6 dimensions in OLAP 11g. One of those six dimensions has a single hierarchy, which is value-based. I chose level-based aggregation because I know exactly at which levels users are going to query. When I went through the dimensions to choose levels to preaggregate, I noticed there were no options available for my value-based hierarchy dimension (I could see an "all" option for the same case in 10g). I then tried to look at the definitions of the underlying objects to make sure it would preaggregate data across my value-based hierarchy dimension.
    I found the value set corresponding to my value-based hierarchy dimension in the <CUBE NAME>SOLVEAGGMAP object, which AWM uses to decide which dimension values to preaggregate. But if I do an rpr on that value set (rpr <CUBE NAME>SOLVE<DIMENSION NAME>_PVSET) it shows NA. So my question is: can I preaggregate across a value-based hierarchy dimension in OLAP 11g?
    Olap Version: 11.2.0.1
    AWM version: 11.2.0.1
    Thanks

    Even if you know exactly which levels your users will query, percent based precompute (e.g. 30%) may still be faster in practice because queries are returned using 'sparse looping' instead of 'dense looping'. This was the single biggest performance advantage of 11g over 10g.
    But if you still want to use level based precompute, then you should look at the XML template for the cube (as saved by AWM, for example). In it you should find something called PrecomputeCondition. This defines the set of members that are precomputed. Here is an example I just created using the GLOBAL schema
    <PrecomputeCondition>
    <![CDATA[
      "TIME" LEVELS ("TIME"."MONTH", "TIME".CALENDAR_QUARTER, "TIME".CALENDAR_YEAR),
      CHANNEL LEVELS (CHANNEL.TOTAL_CHANNEL, CHANNEL.CHANNEL),
      CUSTOMER LEVELS (CUSTOMER.MARKET_SEGMENT, CUSTOMER.REGION, CUSTOMER.SHIP_TO),
      PRODUCT LEVELS (PRODUCT.CLASS, PRODUCT.FAMILY, PRODUCT.ITEM)]]>
    </PrecomputeCondition>
    The PrecomputeCondition is also visible through the USER_CUBES view.
    SELECT PRECOMPUTE_CONDITION
    FROM USER_CUBES
    WHERE CUBE_NAME = 'MY_CUBE';
    You can hand-modify this condition in the XML to specify an alternative 'non level based' precompute condition for any dimension. For example, if you define an attribute named 'SHOULD_PRECOMPUTE' on your PRODUCT dimension that is 1 for members to be precomputed and 0 for all others, then you can change the condition as follows.
    <PrecomputeCondition>
    <![CDATA[
      "TIME" LEVELS ("TIME"."MONTH", "TIME".CALENDAR_QUARTER, "TIME".CALENDAR_YEAR),
      CHANNEL LEVELS (CHANNEL.TOTAL_CHANNEL, CHANNEL.CHANNEL),
      CUSTOMER LEVELS (CUSTOMER.MARKET_SEGMENT, CUSTOMER.REGION, CUSTOMER.SHIP_TO),
      PRODUCT WHERE PRODUCT.SHOULD_PRECOMPUTE = 1]]>
    </PrecomputeCondition>
    If you recreate the cube from the XML with this condition, then the PVSET valueset you discovered should contain all dimension members for which the attribute value is 1. This gives you complete control over what is precomputed. Note that AWM doesn't support this form of condition, so it won't show up if you go to the Precompute tab, but it is valid for the server. The PL/SQL below will modify the PrecomputeCondition (for the cube named MY_CUBE) without going through AWM.
    begin
      dbms_cube.import_xml(q'!
    <Metadata
      Version="1.3"
      MinimumDatabaseVersion="11.2.0.2">
      <Cube Name="MY_CUBE">
        <Organization>
          <AWCubeOrganization>
            <PrecomputeCondition>
              <![CDATA[
               "TIME" LEVELS ("TIME"."MONTH","TIME".CALENDAR_QUARTER, "TIME".CALENDAR_YEAR),
               CHANNEL LEVELS (CHANNEL.TOTAL_CHANNEL,CHANNEL.CHANNEL),
               CUSTOMER LEVELS (CUSTOMER.MARKET_SEGMENT,CUSTOMER.REGION,CUSTOMER.SHIP_TO),
               PRODUCT WHERE PRODUCT.SHOULD_PRECOMPUTE = 1]]>
            </PrecomputeCondition>
          </AWCubeOrganization>
        </Organization>
      </Cube>
    </Metadata>!');
    end;
    /

  • Mapping in BPM - set value of collection item

    Hello,
    is it possible to set a value on an exact item of a collection in a mapping step in NetWeaver BPM?
    I need something like set(<collection_variable>, <item_index>, <item_value>) - the exact opposite of the GET generic function, which gets a specific item from a collection.
    Is this possible in NW BPM?

    You don't have to apologize; I didn't mean it as an offence.
    I appreciate the opportunity to discuss this topic with somebody, because the discussion itself sometimes shows another perspective on the problem, which can lead to a solution.
    Of course I wrote an EJB function to solve it - but I can't believe there isn't a standard solution for such a pretty common use case.
    I think the problem lies in the very limited implementation of XPath in NetWeaver BPM. I will bet my left hand that in some future SP of BPM, SAP will introduce something like this:
    myCollection[1]/notificationId = notificationId
    which is the standard XPath way to do it.
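As a minimal sketch of that standard XPath behaviour (illustrative only, using Python's xml.etree.ElementTree rather than any NW BPM API; element names are borrowed from the example above), a positional predicate selects the exact item to update:

```python
import xml.etree.ElementTree as ET

# Illustrative payload; element names borrowed from the post above.
payload = """
<data>
  <myCollection><notificationId>old-1</notificationId></myCollection>
  <myCollection><notificationId>old-2</notificationId></myCollection>
</data>
"""
root = ET.fromstring(payload)

# XPath positional predicate: select the first collection item (1-based)
target = root.find("myCollection[1]/notificationId")
target.text = "new-value"  # the set(collection, 1, value) the poster asks for

print(root.find("myCollection[1]/notificationId").text)  # new-value
print(root.find("myCollection[2]/notificationId").text)  # old-2 (untouched)
```

Only the addressed item changes; the rest of the collection is left alone, which is exactly the semantics the GET counterpart implies.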

  • BPM payload value reading problem

    Hi all,
    In a bpm scenario, I have a switch that checks an element's attribute and another that checks an element's value.  In the former case, the TRUE branch gets executed when the condition is satisfied.  However, in the latter the condition is NEVER satisfied (even when it should be).  Here's an example message.  In my BPM, I have one switch that uses <user action="xxxx"> and another that uses <jobCode>.
    <ns1:AccessRequestReply xmlns:ns1="xxxxx">
       <ns1:AccessReply type="complete">
          <ns1:user action="update">
             <ns1:jobCode>1234</ns1:jobCode>
          </ns1:user>
       </ns1:AccessReply>
    </ns1:AccessRequestReply>
    During runtime, the integration server can read the <user> attribute "action" with no problems.  However, it cannot read the <jobCode> element.  From sxi_cache I went into the corresponding workflow and loaded a message into the XML object in question.  Sure enough, only the attribute "action" got loaded.  The <jobCode> element never gets loaded.  I verified the XPath expressions were correct.
    Any ideas as to what might be causing this?
    Thanks,
    --jtb

    Hi James,
    might this problem be related to the usage of namespaces? Normally, XI explicitly uses namespaces only at the root-node level, but in your case all elements are prefixed with namespaces.
    Could you simply try to send a message to your integration process where there is a namespace prefix only at the root-node level, i.e.:
    <ns1:AccessRequestReply xmlns:ns1="xxxxx">
    <AccessReply type="complete">
      <user action="update">
        <jobCode>1234</jobCode>
      </user>
    </AccessReply>
    </ns1:AccessRequestReply>
    Best regards
    Joachim
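Joachim's point can be reproduced outside XI. In this hedged sketch with Python's xml.etree.ElementTree (an illustration only, not the XI runtime), a lookup that ignores the namespace finds nothing, while a namespace-qualified path reads <jobCode> fine:

```python
import xml.etree.ElementTree as ET

# Message shaped like the poster's, with every element namespace-qualified.
msg = """
<ns1:AccessRequestReply xmlns:ns1="urn:example">
  <ns1:AccessReply type="complete">
    <ns1:user action="update">
      <ns1:jobCode>1234</ns1:jobCode>
    </ns1:user>
  </ns1:AccessReply>
</ns1:AccessRequestReply>
"""
root = ET.fromstring(msg)

# Plain-name path: finds nothing, because the children carry the namespace too
print(root.find("AccessReply/user/jobCode"))  # None

# Namespace-qualified path: resolves the element
ns = {"ns1": "urn:example"}
print(root.find("ns1:AccessReply/ns1:user/ns1:jobCode", ns).text)  # 1234
```

This mirrors the symptom in the thread: an XPath expression that looks correct on paper silently returns nothing when the namespace qualification does not match the message.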

  • Model attribute binding to value attribute.

    Hi all,
    I have a problem. I have used a model in my view. First I execute that model; then I need to take the model's output parameter and assign it to a value attribute present in the same view, making it visible in the field to which that value attribute is bound. Please help me out.

    Hi Selvakumar,
    Is your value attribute, say 'var' (which is bound to the view field),
    the child of a value node (say this value node is 'MyNode')?
    If only one record is needed for 'MyNode', set the cardinality of 'MyNode' to 1..1 and then call:
    wdContext.currentMyNodeElement().setVar(wdContext.<outputModelNode()>.<currentOutputmModelNodeElement>.getOutResult());
    Thanks
    Smitha

  • BPA model to BPM conversion

    Hi,
    I am building a small prototype for an internal demo to showcase the BPA-to-BPM conversion capabilities. I have a BPA model in 11g R1 that I want to execute in a BPM 10g environment; note I am not building any BPEL processes. The exercise is to take what a business analyst modelled in BPA 11g R1, port it to the BPM 10g environment, and write some code to make it work. The audience is interested in understanding the effort required by IT to execute a business model.
    Any help or insight is appreciated.
    Thanks
    - vishu

    Hi.
    It is technically possible but practically useless.
    You can export a model from BPA in XPDL format (do not ask me how).
    Then you can import it to OBPM (right click on Process in Process navigator).
    Please bear in mind that a BPA model is a use case, so you will have to rework it completely if you need a process model.
    Sometimes we use BPA models as a blueprint, but we never convert them to BPM - it is a waste of time.
    Good luck.
    Igor

  • Migrate BPM 7.11 process models to BPM 7.2

    Hi,
    last week we had CE 7.2 installed. Now there is a problem migrating existing BPM 7.11 models to 7.2.
    Does anybody already have experience doing so?
    Regards

    Hello Martin.
    We have such experience. First you have to change all dependencies of your SC from 7.11 to 7.20 (if you use NWDI, import the new SCs in your track and resync it in NWDS). Then build your DC with the process model and afterwards look at the warnings. You will find one advising you to convert the process model from 7.11 to 7.20; apply the quick fix and your model will be converted to the right format. Bear in mind the maximum size of the warning pool in your NWDS - not all warnings may be shown.
    If this information doesn't help you, please provide more info about your project infrastructure and etc.
    Regards, Alexander.

  • Dimensional modeling and year ago values

    Hi,
    I have spent the last six months modelling a warehouse for my client. I have a requirement where my client wants to compare the data from one year to another (which is what a warehouse is generally built for :) ).
    The biggest challenge in my requirement is to compare the previous year-week's articles with the current year-week's articles, as new articles can be added during the current year and a few old articles will no longer exist in the current year-week.
    Source data:
    Art_no   Sold_amt   Tim_id   Year-week
    10001    20.5       700001   201101
    10002    10.3       700001   201101
    10001    30.5       800001   201201
    20001    50.2       800001   201201
    Desired result:
    Art_no   Sold_amt   Tim_id   Year-week   Prev_Sold_Amt
    10001    30.5       800001   201201      20.5
    10002    null       null     201201      10.3
    20001    50.2       800001   201201      null
    I decided to do this in ETL, and it works for me now, but it is not scalable for getting previous-previous years' data and it holds redundant data. I wonder how this is generally achieved in warehouses? I know we can attempt to calculate it on the fly using a full outer join, but that can result in null values in the time-dimension column of the fact table.
    This was also discussed in my previous thread:
    MV as core table
    Thanks,
    Hesh

    >
    I decided to do this in ETL, and it works for me now, but it is not scalable for getting previous-previous years' data and it holds redundant data. I wonder how this is generally achieved in warehouses? I know we can attempt to calculate it on the fly using a full outer join, but that can result in null values in the time-dimension column of the fact table.
    >
    I'm not sure I understand what your question is.
    Are you talking about needing to provide groups for dates or ranges that you do not actually have data for. For example to show 1st quarter data and show 'something' for February even if you have no data for February?
    If so the term for that is Data Densification and there is a discussion of it with examples in the Oracle Data Warehousing Guide
    http://docs.oracle.com/cd/E14072_01/server.112/e10810/analysis.htm#i1014934
    >
    Data Densification for Reporting
    Data is normally stored in sparse form. That is, if no value exists for a given combination of dimension values, no row exists in the fact table. However, you may want to view the data in dense form, with rows for all combination of dimension values displayed even when no fact data exist for them. For example, if a product did not sell during a particular time period, you may still want to see the product for that time period with zero sales value next to it. Moreover, time series calculations can be performed most easily when data is dense along the time dimension.
    This is because dense data will fill a consistent number of rows for each period, which in turn makes it simple to use the analytic windowing functions with physical offsets. Data densification is the process of converting sparse data into dense form.To overcome the problem of sparsity, you can use a partitioned outer join to fill the gaps in a time series or any other dimension. Such a join extends the conventional outer join syntax by applying the outer join to each logical partition defined in a query. Oracle logically partitions the rows in your query based on the expression you specify in the PARTITION BY clause. The result of a partitioned outer join is a UNION of the outer joins of each of the partitions in the logically partitioned table with the table on the other side of the join.
    Note that you can use this type of join to fill the gaps in any dimension, not just the time dimension. Most of the examples here focus on the time dimension because it is the dimension most frequently used as a basis for comparisons.
    >
    Also see my reply in this recent thread.
    Re: Outer join
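To make the full-outer-join idea from the question concrete, here is a hedged sketch in plain Python (illustrative data taken from the thread; no Oracle involved) showing how pairing each article's current and prior year-week sales yields nulls on both sides of the join:

```python
# Sales per article for two year-weeks, as in the poster's example data.
prev = {"10001": 20.5, "10002": 10.3}   # year-week 201101
curr = {"10001": 30.5, "20001": 50.2}   # year-week 201201

rows = []
for art in sorted(set(prev) | set(curr)):   # full outer join over Art_no
    rows.append({
        "Art_no": art,
        "Sold_amt": curr.get(art),          # None if not sold this year
        "Prev_Sold_Amt": prev.get(art),     # None if the article is new
    })

for r in rows:
    print(r)
# {'Art_no': '10001', 'Sold_amt': 30.5, 'Prev_Sold_Amt': 20.5}
# {'Art_no': '10002', 'Sold_amt': None, 'Prev_Sold_Amt': 10.3}
# {'Art_no': '20001', 'Sold_amt': 50.2, 'Prev_Sold_Amt': None}
```

The None entries are precisely the nulls the poster worries about; densification (as described in the quoted documentation) fills such gaps so that time-series offsets stay aligned.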

  • Mac air Model No  and best value hard case

    Just bought a new 13-inch MacBook Air from the online store but can't work out the model number. I need it to buy a hard case. Can anybody help? Also, what is the best-value hard case that fits?

    Click on the Apple logo in the upper-left corner (menu bar), then About This Mac, then More Info.
    "MacBook Air, 13-inch, Late 2012", by any chance? That's a good start.
    Then click System Report and note the Model Identifier, "MacBookAirx,y".

  • How to show BPM Reporting values in Visual Composer

    Hi everybody,
    I added a Reporting Activity to my current BPM process. My goal is to show the report in Visual Composer.
    Unfortunately I don't know how to import the BPM Reporting Activity into VC.
    Could anybody help me with a how-to guide or a detailed description?
    Best regards,
    Sid

    Hi,
    Helpful information on this topic:
    The whole subtree: Performing Process Analytics
    Some quotes from the help.sap.com documentation:
    Real-time analytics enables you to report against an operational system without using a BW system. Real-time analytics allows reporting on a subset of both generic process data and process context data. When performing real-time analytics, data is consumed and reports are displayed directly in the Visual Composer of the local system.
    Reporting data is provided as DataSources to VC and you use the VC BI Kit to display the data.
    Discovering BPM DataSources in Visual Composer:
    3. Choose: View -> Task Panel -> Search to search for BPM DataSources. The Search dialog appears.
    4. From the Select provider dropdown menu, select BI Data Sources.
    5. From the System destination dropdown menu, select BI (Sql) Portal.
    ...but as I understand it, in order to be able to select "BI Data Sources" (all BPM Data Sources can be accessed only from this choice), you need to have BW, and you need to configure the connection between your BPM and BW?
    Also, your own custom BPM Data Sources can be created only with the "Reporting Activity" in NWDS, in the Process Development perspective?
    Similar-helpful thread: Using Visual Composer from NWDS - CE as BI Data Source (BPM tables)
    Regards,
    David

  • Modeling Question: Last Item value and First Value

    Dear all,
    I have the data coming in the following format:
    Data Source:
    HEAD1     1     21.01.2005
         2     21.01.2005
         3     21.01.2005
         4     21.01.2005
         5     23.01.2005
         6     21.01.2005
         7     21.01.2005
         8     21.01.2005
         9     21.01.2005
         10     25.01.2005 * ( I need this last Value)
    HEAD2     1     26.01.2005 * ( I need 1st Value)
         2     28.01.2005
         3     28.01.2005
         4     28.01.2005
         5     28.01.2005 * (I need this last value)
    HEAD2     1     29.01.2005 * (I need this 1st Value in the report)
         2     30.01.2005
         3     30.01.2005 * ( I need this last value in the report)
    And I want in reporting as:
    Reporting:
    0) 00.00.0000  21.01.2005
    1) 25.01.2005  26.01.2005
    2) 28.01.2005  29.01.2005
    3) 30.01.2005  00.00.0000
    What are my options?
    I am not interested in using ABAP logic.
    Could any last-value or first-value aggregations be helpful?
    Thanks for your time..
    Regards,
    Hari
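The per-group first/last pick the question asks for can be sketched in plain Python (illustrative only; dates taken from the post, and the third group is labelled HEAD3 here for clarity, although the post shows HEAD2 twice). A first-value/last-value aggregation would return exactly these endpoints per header:

```python
# Dates per header group, shaped like the data source in the post.
groups = {
    "HEAD1": ["21.01.2005"] * 4 + ["23.01.2005"] + ["21.01.2005"] * 4 + ["25.01.2005"],
    "HEAD2": ["26.01.2005", "28.01.2005", "28.01.2005", "28.01.2005", "28.01.2005"],
    "HEAD3": ["29.01.2005", "30.01.2005", "30.01.2005"],
}

# First and last value per group, in document order (no sorting applied).
for head, dates in groups.items():
    print(f"{head}: first = {dates[0]}, last = {dates[-1]}")
# HEAD1: first = 21.01.2005, last = 25.01.2005
# HEAD2: first = 26.01.2005, last = 28.01.2005
# HEAD3: first = 29.01.2005, last = 30.01.2005
```

The report rows then pair each group's last value with the next group's first value, which is what the desired output interleaves.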

    Hi Aborgeld,
    you can remove headers after writing the csv by using Get-Content $file | Select -skip 1 | Set-Content $file.
    If null values are a problem for your SQL, this may work for you in your Select-Object call:
    # Service Pack filter hash (calculated property for Select-Object)
    $spf = @{
        n = "osServicePack"
        e = {
            if ($_.osServicePack) { $_.osServicePack }
            else { " " }
        }
    }
    # Using it later in Select
    Select-Object Name, lastLogonTimestamp, OSName, ParentContainerDN, osversion, $spf
    Cheers,
    Fred
    There's no place like 127.0.0.1

  • BPM: read value of simple container variable in mapping

    Hi everybody,
    is this possible via UDF?
    Regards Mario

    Hi Mario,
    maybe you can (mis-)use dynamic attributes of a message: Set a dynamic attribute in a UDF for the message, where you want to set the value. In a later mapping, you can read the attribute again.
    Not very nice, I know, but maybe helpful. But you will need a message mapping to set the variable.
    Regards,
    Torsten

  • Process Modelling in BPM 11g

    Hi everyone, I need to design an employee onboarding process. For every department I am planning to design a separate process, plus an overall parent process, and to communicate between parent and child via send/receive tasks. When I have multiple child processes and a parent process, can I see the overall audit trail from a child process? For example, the parent process is employeeProcess and the child process is hrDepartmentProcess. When the employee instance is in hrDepartmentProcess, can I, as a participant in the child process, view the entire audit trail from the parent down to hrDepartmentProcess (interactive activity) from bpmWorkspace? Or do I need to change my design for this? Thanks for your time.

    From what you've described, either the embedded subprocess or call activity would work.  I tend to use the embedded subprocess if the called subprocess is relatively simple or if I have a collection/list of items that I need to loop or burst out simultaneously.
    Don't know your use case, but I tend to favor the call activity over embedded subprocesses.
    Dan Atwood
