Deadline Branch in Correlation Process - Best Practice

Hello,
I have an integration process with a correlation: there is an asynchronous send step which activates a correlation, and afterwards an asynchronous receive step that uses that correlation.
Furthermore, I have a deadline branch to cancel the process after 24 hours.
My question now is:
There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue as no active correlation can be found anymore. Is this correct? How can I avoid this situation? I assume a blocked queue would also block other messages that are sent to the integration process.
What would be the best practice for handling such a scenario? I could leave the process instance open for one month; however, that might have a significant impact on system performance.
Thank you for your advice.

There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue as no active correlation can be found anymore
A "No correlation found" error will occur only when the BPM instance is running and the message tries to enter the relevant receive step (not the first one).
However, since you say the process is cancelled, you need not worry about the message going into the queue and blocking the BPM queue.
Regards,
Abhishek.

Similar Messages

  • Order Process Best Practice Suggestions?

    Hey CF World,
    I have to revamp an online order process. The process is broken into 4 steps.
    The app as it exists today was built by a different developer and for the life of me, I have wasted about 5 hours trying to figure out exactly what the person is doing in the code just so I can make some basic tweaks to the process.
    Could anyone offer what might be considered today's best practice for a step by step order process?
    The thought is: the user completes step 1, and upon clicking Next the data elements of the form are validated and they are taken to step 2, and so on, until the end, where upon final submission the order is written to the database and the next process is triggered internally (a rough sketch of this flow follows at the end of this thread).
    Should I have one page that, upon submit of step 1, cycles back to itself, processes the data and then loads a separate div of info for step 2, or...?
    Any suggestions would be great.  Thank you so much in advance for your help, I sincerely appreciate it.
    Ciao'
    D.

    Hello,
    Thank you so much for that. Let me qualify a few things as I probably should have in the first place. (my apologies)
    Coldfusion 8
    SQL Server  2005
    There is no payment or credit card information being provided.
    The user comes online, goes through a basic order process for some work to be done. As mentioned, it is a multi step process for gathering their information.
    Once the entire order is in and all the fields validated along the way to ensure they were populated where required, the order is to be written into the pending orders table and an email is sent to the branch closest to the customer notifying them of the new order with a link into the details. The branch then calls them directly to confirm the details of the order before activating it.
    So, the code I received is next to impossible to follow; for the life of me I cannot figure out what the former developer has done. I need to make some changes to the process, and if I cannot even follow the flow to figure out where to make my changes, that could pose a problem.
    I have not coded much in ColdFusion for the past two years, but did so quite extensively before that. I totally agree on the CFTransaction suggestion. I guess what I was looking for is: are there any best practices for coding that I should be aware of, especially considering what I want to accomplish? Previously we used the "Fusebox" concept of coding and had most of our code in custom tags in a very reusable and easy-to-follow structure and flow.
    Any thoughts/suggestions would be great! Thank you very much!
    D.
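    Not ColdFusion, but here is a minimal sketch of the wizard flow described above (validate each step on the server, keep the partial order in the session, and write to the pending-orders table only on the final submit). It uses Python/Flask purely to illustrate the pattern; the routes, step list and field names are all made up for this example, and the validation and save logic are reduced to stubs.

        # Minimal multi-step ("wizard") order flow sketch.
        # Assumptions: one generic field per step, print() as a stand-in for persistence.
        from flask import Flask, request, session, redirect

        app = Flask(__name__)
        app.secret_key = "demo-only"          # required for session support

        STEPS = ["contact", "address", "details", "review"]

        FORM = """<form method="post">
          <input name="value" placeholder="{step} info" value="{value}">
          <p style="color:red">{error}</p>
          <button type="submit">Next</button>
        </form>"""

        @app.route("/")
        def start():
            return redirect("/step/contact")

        @app.route("/step/<name>", methods=["GET", "POST"])
        def step(name):
            if name not in STEPS:
                return redirect("/step/contact")
            order = session.setdefault("order", {})
            if request.method == "POST":
                value = request.form.get("value", "").strip()
                if not value:                                  # server-side validation per step
                    return FORM.format(step=name, value="", error="This field is required.")
                order[name] = value
                session.modified = True
                nxt = STEPS.index(name) + 1
                if nxt < len(STEPS):
                    return redirect(f"/step/{STEPS[nxt]}")     # advance to the next step
                save_order(session.pop("order"))               # single DB write at the very end
                return "Order received - the nearest branch would now be notified."
            return FORM.format(step=name, value=order.get(name, ""), error="")

        def save_order(order):
            # stand-in for the INSERT into the pending-orders table (wrapped in one
            # transaction, i.e. the cftransaction equivalent) plus the branch e-mail
            print("saving order:", order)

        if __name__ == "__main__":
            app.run(debug=True)

    The same shape works whether each step is a separate page or one page cycling back to itself; the important parts are that validation happens server-side at every step and that nothing is written to the database until the whole order is complete.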

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We are having some instances of the RBDMANI2 almost every day which get stuck running for a very long period of time.  We frequently have multiple SHPCON idocs coming in containing the same material number, and frequently have idocs fail because the material in the idoc has become locked.  Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed idocs begin processing.  The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound idocs such as this for maximum performance in a very high volume system.  I know that RBDAPP01 processes idocs in status 64 and 66, and RBDMANI2 is used to reprocess idocs in all statuses.  I have been told that setting the messages to trigger immediately in WE20 can result in poor performance.  So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Re-engineering of an existing process / Best Practice (customization)

    Hi all of you,
    We are implementing SAP ECC 6.0 for one of our clients. The client is asking us to compare their existing business processes with the Best Practice / standard processes and, based on the result, to prepare a gap analysis between the existing processes and the best practice for their business.
    As I understand it, SAP itself embodies best practice in the respective domains / business processes, so by implementing SAP ERP the client will already have best practice for their business processes. But the question is: how can I explain to the client that SAP delivers the best practice, so that the client will accept the SAP process as the best practice for their business?
    Please give me a solution
    Regards,
    Ramki

    f l,
    I'm not sure deleting keys from the registry is ever a best practice, however Xcelsius has listings in:
    HKEY_CURRENT_USER > Software > Business Objects > Xcelsius
    HKEY_LOCAL_MACHINE > SOFTWARE > Business Objects > Suite 12.0 > Xcelsius
    The current user folder holds temporary settings, such as how you've modified your interface.
    The local machine folder holds more important information.
    As always, it's recommended that you backup the registry and/or create a restore point before modifying or deleting any keys.
    As for directories, the only directory Xcelsius uses is the one you install to.  It also places some install logs in the temp directory, but they have no effect on the application.
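    If it helps, here is a small hedged sketch (Windows only, Python driving the reg command-line tool) of the "back up before you delete" advice above: export the per-user Xcelsius key to a .reg file first, and only then remove it. The key path is the HKEY_CURRENT_USER one listed above; the backup file location is just an example.

        # Back up a registry key with "reg export", then delete it with "reg delete".
        # Run from the account that owns the key (or an elevated prompt).
        import subprocess

        USER_KEY = r"HKCU\Software\Business Objects\Xcelsius"   # per-user (temporary) settings
        BACKUP = r"C:\temp\xcelsius-hkcu-backup.reg"            # example path; adjust as needed

        def backup_then_delete(key, backup_file):
            # "reg export" writes a .reg file that can be merged back to restore the key
            subprocess.run(["reg", "export", key, backup_file, "/y"], check=True)
            # check=True above means we only delete if the export succeeded
            subprocess.run(["reg", "delete", key, "/f"], check=True)

        if __name__ == "__main__":
            backup_then_delete(USER_KEY, BACKUP)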

  • Filming to film editing process - best practice?

    Hey everyone,
    This is an odd question, and I'm not really sure where to start looking. But basically, I've been doing a lot of film editing for my company. They send out 2 people to do the filming, then hand me the raw footage to edit. I am given a rough storyboard as well.
    Now, my problem is this: because the filming is done by easiest access at the time (so it won't necessarily be in the order of the storyboard) and I'm not involved in the actual filming (so I don't know roughly when things are filmed), it's taking me much longer to go through all the footage and figure out what I need.
    So does anyone have any good links or advice on how best to bridge these two processes? Or any experience in the industry to help me out?
    I get very tight deadlines and don't really have time to trawl through hours of video, which I then forget and get lost in anyway, so I have to search through it again.
    Thank you for your help

    Steven L. Gotz wrote:
    Or, learn to use Adobe Prelude. I don't use it, but those who do might chime in here with their own opinions on the subject.
    It's been a little while since I've worked extensively in Prelude so some of the features and details may have changed but the workflow is the same so I'll give it a shot...
    Prelude is an app specifically designed to log and manage your footage before (and during...) the editing process. There's a lot you can do with it to edit metadata, add markers or selectively ingest footage, but at its most basic level it's good for marking things up into subclips and then ordering them as a sort of rough cut to bring directly into PrPro. But of course that all takes time too, so depending on what your needs and proficiency level are, it still might be better to just use bins and things to organize it all in PrPro and then just edit directly from that.
    Any way you cut it, "logging" is a huge job, big enough that we developed a whole app just to help people do it effectively. That's also why production houses often have dedicated logging pros (but it still often falls to the editor in a big way). The production people in the field could certainly help your plight by trying to organize things a little as they go, but that sort of work comes with its own heavy pressures and priorities, so that probably won't happen.

  • Af:dialog model-restore / cancel-button processing best practice ?

    Using JDev 11.1.1.3, I have an af:dialog running in an af:popup which contains auto-submit components (for cross-component enablement, validation, etc.). My question is: what are the preferred ways of discarding the model changes submitted through popup processing if/when the af:dialog Cancel button is pressed by the user? I figured that using a task flow for the popup content could be an option, using the task flow savepoint-restore feature, but that looks more like a database restore than a model restore. I want to be able to restore the model content to the way it looked before the popup executed, without requiring a submit to the database. How is this most commonly and best achieved?
    Thanks,

    Taskflow savepoints are not database savepoints. A transactional BTF can be configured to issue automatic savepoints at TF entry and, if needed, to "roll back" to them at TF exit. The internal implementation uses the ApplicationModule's passivation/activation mechanism: the AM state is passivated at TF entry and can be activated again at TF exit, back to the state passivated at entry. In this way it is as if you had not made any modifications in ADF BC, so your model layer will be restored to the state before TF entry. (Of course, you must not perform any DB commits during the lifetime of this TF.) I have successfully used this mechanism for the same goal you are asking about.
    Also there are savepoints managed by the ADF Controller, but I could be of little help here because I have never used them. I suspect that this mechanism could be what you need, so you may have a look here for more details:
    Adding Save Points to a Task Flow
    and in this thread:
    {thread:id=2128956}
    Dimitar

  • PL/SQL After submit process - best practice?

    I have an after-submit process which fires a PL/SQL procedure. In this PL/SQL procedure I do some updates and would also like to generate some XML output and send it to the browser so that the user can save it to a file. What I'm asking is: what is the proper way to handle this?
    I realize that starting the procedure from an "after submit process" is too late. If I understand correctly, the page is already rendered at that time, so the htp.p output from the PL/SQL procedure is not shown (but the procedure is executed). So I created a branch to the PL/SQL procedure (after the button is pressed). That way the procedure actually creates a new window and I can use the htp.p functions. Although I now have trouble closing the window, I hope I can manage that.
    Is there some other, better way to do the export? Maybe a JavaScript popup and calling the procedure from there? Any suggestions?
    Thanks!
    Marko

    How should I send this content to the user so that his browser recognizes it as a file (for opening or saving)?
    Put that code in an onLoad process, similar to how Scott shows at http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html
    With this in place, when you issue a show request on that page, your generated content will be offered by the browser using an open/save dialog box.
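    For what it's worth, what makes the browser show that open/save dialog is simply the response's MIME type plus a "Content-Disposition: attachment" header sent before the content (in the PL/SQL Web Toolkit this is typically set via owa_util/htp.p calls before printing the output). Here is the same mechanism sketched outside APEX in Python/Flask, with an illustrative filename, just to show the headers involved:

        from flask import Flask, Response

        app = Flask(__name__)

        @app.route("/export")
        def export():
            xml = "<orders><order id='1'/></orders>"   # stand-in for the generated XML
            return Response(
                xml,
                mimetype="text/xml",
                # this header is what prompts the open/save dialog instead of inline rendering
                headers={"Content-Disposition": 'attachment; filename="export.xml"'},
            )

        if __name__ == "__main__":
            app.run()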

  • BPM - Issue on Deadline branch on a Block

    Hello,
    I have designed a BPM to access an Inbound Proxy. I have put the call to the proxy in a Block, and I have added the deadline and exception branches.
    The exception branch runs OK, but the deadline branch does not. I put an infinite loop in the proxy so that the process would go to the deadline branch, but the process goes to the exception branch after a minute, even though I have set the deadline branch to 2 minutes.
    Is this normal? How can I get the process to go to the deadline branch?
    Thanks and regards,
    Manuel.

    One more way is to send a "stop collection" message from a proxy; check this link:
    /people/siva.maranani/blog/2005/05/22/schedule-your-bpm
    You have to customize what Maranani describes so that the message is a "stop" rather than a "start". Hope it helps.

  • General Discussion - Best practice to manage Process order

    Hi Experts,
    Which is the best practice to manage process orders?
    1. Quantity Change - I can make a quantity adjustment in R3 and in APO.
    2. Source Change - I can make a version change from the order header. I can also make a source change in APO by selecting a different PPM. Which is the best option?
    3. Re-Read Master Data - is the best practice to re-read master data from R3 or from APO?
    I feel that for all the above scenarios process orders should always be managed in R3. But I am still wondering why we have the same flexibility in APO too.
    Can

    Hello,
    we are just migrating from 4.6c to ECC 6.0 and I have a couple of workflows to adopt.
    For background steps I defined, in the corresponding BOR methods, an exception to be fired when no result is available (e.g. no mail address available). Normally, I defined them as temporary errors.
    I activated the line for this exception in the WI outcome section, so the workflow processed this branch when the exception appeared. It worked fine.
    Now, in ECC 6.0, the same workflow gets stuck in the WI. The exception is fired (I can see it in the log as an "Error message"), but the WI is still in status "in process". It doesn't continue with the error outcome branch.
    Is this new logic in ECC 6.0? Do you have any idea what to do? I used this logic some dozen times in different methods and workflows, and it will give me a headache if I have to change everything ...
    Thank you!
    Best regards,
    Thomas

  • Best practice for handling errors in EOIO processing on AEX?

    Hi,
    I'm looking for resources and information describing the best practice for handling processing errors of asynchronous integrations with Exactly-Once-In-Order QoS on an AEX (7.31) Java-only installation. Most of the information I've found so far describes the monitoring and restart jobs on AS ABAP.
    Situation to solve:
    Multiple different SOAP messages are integrated using one queue with an RFC receiver. On error, the message status is set to Holding and all following messages are Waiting. Currently I need to manually watch over the processing, delete the message with status Holding and restart the waiting ones.
    It seems I can set up component-based message alerting to trigger an email or some other alert. I still need to decide how to handle the error and resolve it (i.e. delete the erroneous message, correct the data at the sender and trigger another update). I also still need to manually find the oldest entry with status Waiting and restart it. I've found a restart job under Background Jobs in the Configuration and Monitoring Home, but it can only be scheduled at intervals of 1 hour or more.
    Is there something better?
    Thank you.
    Best regards,
    Nikolaus

    Hi Nikolaus -
    AFAIK, for EOIO you have to cancel the failed message and then process the next message in the sequence manually.
    The restart job only works on messages which are in error state, not in holding state, so you have to manually push the message. There is no other alternative.
    But it should not be that difficult to identify the messages in a sequence.
    How to deal with stuck EOIO messages in the XI ... | SCN
    Though it is for an older version, it should be the same; you should be able to select additional columns such as the sequence ID from the settings.

  • What is the best practice to display info of completed task in process flow

    Hi all,
    I'm starting to study BPM modeling with CE 7.1 EHP1. Thanks to the tutorials and examples on the SDN site, I can easily build my own process in NWDS, deploy it to the server, start it and finish it.
    I like the new runtime, which can show a BPMN diagram to the processors. However, I can't find a way to let the follow-up processor review the result of a task completed in a previous step. I'm more familiar with Guided Procedures, and know there is a "Display Callable Object" which can be used to show some info of a completed task when the processor/owner/admin/overseer clicks on it. Where is that feature in BPM? What is the best practice for showing such task information in the BPM environment?
    For example, in a multi-level approval process the higher-level approver needs to know the comment written by the previous approver. Can he read this information from the process flow?
    I think this is a very important feature for a BPM platform. In Guided Procedures, such a requirement can be met with a Display Callable Object + View Permission, and you just need some coding for the UI. If BPM is superior to GP, I think there must be a way to achieve this; I just do not know how.
    Can anyone shed some light on this?

    Oliver,
    Thanks for your quick reply.
    Yes, Notes and Attachment CAN BE USED for the purpose. But I'm still looking for a more elegant solution.
    With the solution of using Notes/Attachments, the processor needs to give input in two places (the task UI and the Note/Attachment) with similar or the same data. It is really annoying.
    Are there any real-world SAP BPM deployments? Has no customer had this requirement?

  • Best practice to process inbound soapenc:Array

    Hi, I'm working my way through a real-world example where I've sent a keyword search query to Amazon and it returns 10 results. This works great. Now I want to loop through the array and put the returned values into a table in the database. I can't seem to get down to the element level via XQuery to assign the returned values to my local variables. The XML returned is defined in the WSDL as:
    <xsd:complexType name="DetailsArray">
      <xsd:complexContent>
        <xsd:restriction base="soapenc:Array">
          <xsd:attribute ref="soapenc:arrayType" wsdl:arrayType="typens:Details[]"/>
        </xsd:restriction>
      </xsd:complexContent>
    </xsd:complexType>
    <xsd:complexType name="Details">
    What's the best practice for walking thru this incoming XML Array?
    Thanks...Matt
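    Not XQuery, but to make the shape of the payload easier to see, here is a small Python sketch (xml.etree) that walks a SOAP-encoded array like the one above: the wrapper element contains one <item> per Details entry, and each child element of an item is one field you would map to a table column. The sample document and field names are illustrative only; a real response is namespace-qualified and carries xsi:type attributes, so the findall() paths would need namespace prefixes.

        import xml.etree.ElementTree as ET

        # Simplified stand-in for the returned DetailsArray payload.
        SAMPLE = """\
        <return>
          <item>
            <ProductName>Some book</ProductName>
            <OurPrice>12.99</OurPrice>
            <Asin>0123456789</Asin>
          </item>
          <item>
            <ProductName>Another book</ProductName>
            <OurPrice>7.50</OurPrice>
            <Asin>9876543210</Asin>
          </item>
        </return>
        """

        root = ET.fromstring(SAMPLE)
        rows = []
        for item in root.findall("item"):          # one <item> per Details entry in the array
            rows.append({child.tag: (child.text or "").strip() for child in item})

        for row in rows:
            print(row)                             # each dict maps to one row/insert in the table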

    Hello again!
    Thanks for the quick reply!
    The instance state says: closed.completed.
    Perhaps the problem is not the insert into the DB, but the transformation activity?
    This is what the audit trail (raw xml) looks like:
    <?xml version="1.0" encoding="UTF-8" ?>
    - <audit-trail>
    - <event sid="0" cat="2" type="2" n="0" date="2008-06-24T10:16:21.875+02:00">
    - <message>
    - <![CDATA[ New instance of BPEL process "BPELProcess3" initiated (# "150001").
      ]]>
    </message>
    </event>
    - <event sid="BpPrc0.1" cat="1" type="2" label="process" n="1" date="2008-06-24T10:16:21.875+02:00" psid="0">
    - <message>
    - <![CDATA[ _cr_
      ]]>
    </message>
    </event>
    - <event sid="BpTry0.2" cat="1" type="2" n="2" date="2008-06-24T10:16:21.890+02:00" psid="BpPrc0.1">
    - <message>
    - <![CDATA[ _cr_
      ]]>
    </message>
    </event>
    - <event sid="BpSeq0.3" cat="1" type="2" label="sequence" n="3" date="2008-06-24T10:16:21.890+02:00" psid="BpTry0.2">
    - <message>
    - <![CDATA[ _cr_
      ]]>
    </message>
    </event>
    - <event sid="BpSeq0.3" cat="2" type="2" wikey="150001-BpRcv0-BpSeq0.3-1" n="4" date="2008-06-24T10:16:21.921+02:00">
    - <message>
    - <![CDATA[ Received "inputVariable" call from partner "client"
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <inputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="payload"><ns1:BPELProcess3ProcessRequest xmlns:ns1="http://xmlns.oracle.com/BPELProcess3">
                <ns1:stations>18700</ns1:stations>
                <ns1:username/>
            </ns1:BPELProcess3ProcessRequest>
    </part></inputVariable>
      ]]>
    </details>
    </event>
    - <event to="GetStationProperties_getStationsProperties_InputVariable" sid="BpSeq0.3" cat="2" type="1" wikey="150001-BpAss0-BpSeq0.3-2" n="5" date="2008-06-24T10:16:21.953+02:00">
    - <message>
    - <![CDATA[ Updated variable "GetStationProperties_getStationsProperties_InputVariable"
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <GetStationProperties_getStationsProperties_InputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="stations"><stations>18700</stations>
    </part><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="username"><username xmlns="" xmlns:def="http://www.w3.org/2001/XMLSchema" xsi:type="def:string" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
    </part></GetStationProperties_getStationsProperties_InputVariable>
      ]]>
    </details>
    </event>
    - <event to="GetStationProperties_getStationsProperties_InputVariable" sid="BpSeq0.3" cat="2" type="1" wikey="150001-BpAss0-BpSeq0.3-2" n="6" date="2008-06-24T10:16:21.953+02:00">
    - <message>
    - <![CDATA[ Updated variable "GetStationProperties_getStationsProperties_InputVariable"
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <GetStationProperties_getStationsProperties_InputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="stations"><stations>18700</stations>
    </part><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="username"><username/>
    </part></GetStationProperties_getStationsProperties_InputVariable>
      ]]>
    </details>
    </event>
    - <event sid="BpSeq0.3" cat="2" type="2" wikey="150001-BpInv0-BpSeq0.3-3" partnerWSDL="MetDataService2Ref.wsdl" n="7" date="2008-06-24T10:16:22.968+02:00">
    - <message>
    - <![CDATA[ Invoked 2-way operation "getStationsProperties" on partner "MetDataService".
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <messages><GetStationProperties_getStationsProperties_InputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="stations"><stations>18700</stations>
    </part><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="username"><username/>
    </part></GetStationProperties_getStationsProperties_InputVariable><GetStationProperties_getStationsProperties_OutputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="return"><return xmlns:ns2="http://schemas.xmlsoap.org/soap/encoding/" xsi:type="ns2:Array" xmlns:ns3="http://no.met.metdata/IMetDataService.xsd" ns2:arrayType="ns3:no_met_metdata_StationProperties[1]" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <item xsi:type="ns3:no_met_metdata_StationProperties">
    <toDay xsi:type="xsd:int">0</toDay>
    <toMonth xsi:type="xsd:int">0</toMonth>
    <fromYear xsi:type="xsd:int">1937</fromYear>
    <municipalityNo xsi:type="xsd:int">301</municipalityNo>
    <amsl xsi:type="xsd:int">94</amsl>
    <latDec xsi:type="xsd:double">59.9427</latDec>
    <lonDec xsi:type="xsd:double">10.7207</lonDec>
    <toYear xsi:type="xsd:int">0</toYear>
    <department xsi:type="xsd:string">OSLO</department>
    <fromMonth xsi:type="xsd:int">2</fromMonth>
    <stnr xsi:type="xsd:int">18700</stnr>
    <wmoNo xsi:type="xsd:int">492</wmoNo>
    <latLonFmt xsi:type="xsd:string">decimal_degrees</latLonFmt>
    <name xsi:type="xsd:string">OSLO - BLINDERN</name>
    <fromDay xsi:type="xsd:int">25</fromDay>
    </item>
    </return>
    </part></GetStationProperties_getStationsProperties_OutputVariable></messages>
    ]]>
    </details>
    </event>
    - <event to="StoreStationProperties_insert_InputVariable" sid="BpSeq0.3" cat="2" type="1" wikey="150001-BpAss1-BpSeq0.3-4" n="8" date="2008-06-24T10:16:23.000+02:00">
    - <message>
    - <![CDATA[ Updated variable "StoreStationProperties_insert_InputVariable"
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <StoreStationProperties_insert_InputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="EklimaStationsTblCollection"><EklimaStationsTblCollection xmlns:ns0="http://xmlns.oracle.com/pcbpel/adapter/db/top/db" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/db"/>
    </part></StoreStationProperties_insert_InputVariable>
      ]]>
    </details>
    </event>
    - <event sid="BpSeq0.3" cat="2" type="2" wikey="150001-BpInv1-BpSeq0.3-5" partnerWSDL="db.wsdl" n="9" date="2008-06-24T10:16:27.312+02:00">
    - <message>
    - <![CDATA[ Invoked 1-way operation "insert" on partner "db".
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <StoreStationProperties_insert_InputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="EklimaStationsTblCollection"><EklimaStationsTblCollection xmlns:ns0="http://xmlns.oracle.com/pcbpel/adapter/db/top/db" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/db"/>
    </part></StoreStationProperties_insert_InputVariable>
      ]]>
    </details>
    </event>
    - <event sid="BpSeq0.3" cat="2" type="2" wikey="150001-BpRpl0-BpSeq0.3-6" n="10" date="2008-06-24T10:16:27.312+02:00">
    - <message>
    - <![CDATA[ Reply to partner "client".
      ]]>
    </message>
    - <details>
    - <![CDATA[
    <outputVariable><part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="payload"><BPELProcess3ProcessResponse xmlns="http://xmlns.oracle.com/BPELProcess3"/>
    </part></outputVariable>
      ]]>
    </details>
    </event>
    - <event sid="BpSeq0.3" cat="1" type="2" n="11" date="2008-06-24T10:16:27.312+02:00">
    - <message>
    - <![CDATA[ _cl_
      ]]>
    </message>
    </event>
    - <event sid="BpPrc0.1" cat="1" type="2" n="12" date="2008-06-24T10:16:27.312+02:00">
    - <message>
    - <![CDATA[ _cl_
      ]]>
    </message>
    </event>
    - <event sid="BpPrc0.1" cat="2" type="2" n="13" date="2008-06-24T10:16:27.328+02:00">
    - <message>
    - <![CDATA[ BPEL process instance "150001" completed
      ]]>
    </message>
    </event>
    </audit-trail>
    The webservice-method I'm calling in this case is getStationsProperties.
    Kind regards
    Jørn Eirik

  • Best Practice for sugar refinery process

    Hello, my company needs to deploy a new business concerning a raw sugar refinery,
    so we need to analyze the business requirements and propose a process for refinery management.
    step 1: arrival of goods at the docks
    step 2: the raw sugar needs to be charged into our stock (quantity and value) but is not our property
    step 3: the goods need to be delivered to our plant (we pay the transport as a service for our business partner)
    step 4: the goods need to be verified (for quality and quantity) and accepted by operators
    step 5: the goods are processed in the refinery plant; we need to verify timing, costs, quantity and the human resources employed (for cost remittance and transfer)
    step 6: the sugar is delivered to other industrial plants and warehouses and finally sold (but it is not our property); for us it is like a refinery service
    step 7: we need to trace the production lot from the raw sugar's arrival at the docks up to step 6
    step 8: inventory and maintenance costs need to be traced, because our profit is a part of this refinery service, reduced by the costs incurred
    Any suggestions to find the right best practice?
    I'm not a skilled BPS; I was looking at oil refinery, but it is not the same process, so... what can I look for?
    Thanks

    Hi Kumar,
    In order to have consignment in SAP you need master data such as the material master, vendor master and a purchase info record of consignment type. You have to enter item category K when you enter the PO. The goods receipt posting into vendor consignment stock will be non-valuated.
    1. The initial step is raising a purchase order for the consignment item.
    2. The vendor receives the purchase order.
    3. GR happens for the consignment material.
    4. Stocks are received and placed under consignment stock.
    5. Whenever we issue to production, or transfer post (using movement type 411) from consignment to own stock, a liability occurs.
    6. Finally comes the settlement using MRKO. You settle the amount for the goods consumed during a specific period.
    regards
    Anand.C

  • Best Practice for Roaming Profiles over Branch Offices

    Hello Everybody,
    I was hoping I could gain advice from experienced engineers on an issue me and my team are currently experiencing.
    The issue:
    We have a client whose main office is in London, and the company has other remote offices around the world: Paris, Milan, Luxembourg, etc. Each remote office has a local DC & file server, installed either as two separate servers or with both roles on the same server. Everything is centralised in London: all the main file shares that the company uses are based in London, and the terminal servers are based in London too.
    We have DFS-R & N set up on the London file servers to replicate the DFS shares to the remote offices, which works fine; we generally never have any issues with this, and it works well when users in remote offices access file shares from London.
    However (and I didn't set this up), this client also has DFS-R & N set up for roaming profiles! The issue we are having here is only with the terminal servers. For example, I will log in as one of the users from London on the terminal server and the profile will load fine; I will log that user off, log in as the admin and remove the profile through advanced system settings. I will then log back on as that same user and will be given a temporary profile. If I repeat the first step, the profile loads fine, so every other logon gives the user a temporary profile. I know this is the case because, for that user, if I change the profile path in AD from \\xxxx.com\public to \\lon-fs3v\profiles$\user, it then loads every time with no issues. Before anyone asks, I have rebuilt the terminal server(s) to rule out misconfiguration. You may ask why not do that for everyone? We can't do that for people based in Milan, otherwise their profile would take forever to load, and that's where the DFS replication comes into play for the profiles.
    Unfortunately they are a very stubborn client, so some users (important people) have very large profiles which sometimes take a while to load.
    I have already done some reading on the web and have seen the unsupported scenario from Microsoft regarding this (https://support.microsoft.com/en-gb/kb/2533009), so I am unsure of the best way to do it. The link does say you can have issues with profiles loading (which we do) if there are too many connections; we have 10 of them replicating profiles around the remote offices.
    I have also done some reading into the hosted BranchCache method, but I am not sure whether it can be used with roaming profiles.
    We generally want to eliminate the issue of users getting a temporary profile on every other logon attempt when logging on remotely. I must add, though, that this issue never occurs on the workstations, only on the terminal server; our client is in the private equity market, so one user may spend just one day in a remote office and then come back to London to carry on as they normally would.
    So that's the background to the issue; I am generally trying to work out what methods are available, and whether this is possible with the BranchCache method for roaming profiles.
    Thank you to anyone who replies.
    Best,
    Liam

    Hi UC3ngineer,
    Agree with Luca.
    If the branch users need to do a lot of conferencing, the best practice is to deploy a new Front End Pool and an Edge Server in the branch office; otherwise you must have a 100% reliable WAN connection to your central site.
    Best regards,
    Eric
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Vendor Prepayment Process -  SAP best practice

    Hi Friends,
    I am looking for SAP Best Practices for Vendor Prepayment process.
    Could you guide me through a step-by-step approach?
    Is there any configuration required? I have maintained the Prepayment Indicator in the vendor master.
    Please shed some light on this.
    Regards,
    Jackie

    Hi
    check the following links
    [http://help.sap.com/bp_mediav1600/Media_US/HTML/Scenarios/V1B_MED_Scen_EN_US.htm]
    [http://help.sap.com/bp_mediav1600/Media_US/HTML/Scenarios/V1C_MED_Scen_EN_US.htm]
    Regards
    Kailas Ugale
