Best practices to modify a process chain

Hi,
What is the best practice for modifying a process chain - should it be changed directly in production, or changed in development and transported?
Should query performance tuning settings, such as read modes and cache settings, be done directly in production?
Thanks
Nikhil

Hi Nikhil,
The best practice for modifying process chains is to make the change in the Development system and transport it to Quality and Production. For a simple change, such as adjusting the scheduling time (editing the scheduled date and time in the variant), you can work directly in Production. But if you are adding new steps to a process chain, it is better to make the change in Dev and move it to Prod.
Also, once the change reaches Production, you may need to open the chain in edit mode and activate it manually.
It is a common issue for process chains containing a delta DTP that the chain arrives from transport in the modified (M) version rather than the active version.
Query read mode and cache settings can be changed in Development and moved to Production.
But pre-filling the cache using Broadcaster settings can be done directly in Production.
Likewise, creating cube aggregates to improve query performance can be done directly in Production rather than creating them in Development and transporting them to Prod.
Again, it depends on the project: some projects make changes directly in Prod, while in others changes in Prod are not allowed. I think it is always better to start the change in Dev and move it to Prod.
Hope it helps,
Thanks,
Vinod-

Similar Messages

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is the best practice when modifying a SAP Standard Development Component (Java Web Dynpro)? I'm looking for the best method of modifying a SAP Standard DC so that my changes will be kept (or need low maintenance) after a new Support Package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
      'How to Use Business Packages in Enterprise Portal 6.0' is available at the link below.
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check out for the best practices.
    Regards,
    Harini S

  • General Discussion - Best practice to manage Process order

    Hi Experts,
    Which is the best practice to manage process orders?
    1. Quantity change - I can make quantity adjustments in R/3 and in APO.
    2. Source change - I can make a version change from the order header. I can also make a source change in APO by selecting a different PPM. Which is the best option?
    3. Re-read master data - Is the best practice to re-read master data from R/3 or from APO?
    I feel that in all the above scenarios process orders should always be managed in R/3. But I am still wondering why we have the same flexibility in APO too.
    Can

    Hello,
    we are just migrating from 4.6C to ECC 6.0 and I have a couple of workflows to adapt.
    For background steps I defined, in the corresponding BOR methods, an exception to be fired when no result is available (e.g. no mail address available). Normally, I defined them as temporary errors.
    I activated the line for this exception in the WI outcome section, so the workflow processed this branch when the exception appeared. It worked fine.
    Now, in ECC 6.0, the same workflow gets stuck in the WI. The exception is fired (I can see it in the log as "Error message"), but the WI is still in status "in process". It doesn't continue with the error outcome branch.
    Is this new logic in ECC 6.0? Do you have any idea what to do? I have used this logic a few dozen times in different methods and workflows, and it gives me a headache if I have to change everything ...
    Thank you!
    Best regards,
    Thomas

  • Best Practice Re-install Process?  (Xcelsius 2008 Engage)

    The main impetus of this post is now to request a best-practice reinstall process. E.g., what files/directories or registry keys should be deleted after uninstalling under Windows XP? I am running Xcelsius 2008 5.1.1.0 Build 12,1,1,344 (Windows XP SP2). I want to clean it up and reinstall completely, ideally without also reinstalling Office 2007.
    Earlier today, I had loaded up an otherwise healthy project and all of a sudden my edits were not taking.  I could perform the change in the Component Properties window but the change did not translate in the project display area.
    I had closed and reopened this project several times, killed both "Xcelsius" and "EXCEL" processes, rebooted the machine a couple of times, all to no avail.  I was totally stuck, it seems.  Then, it randomly corrected itself after the Xth close/reboot/reload.  I'm past the issue now it seems, but please consider this an official bug report.  These issues are no longer a mere minor nuisance, and I am looking forward to a nice big update in the next Fix Pack release, which I assume is right around the corner (hint, hint).
    Thanks.

    f l,
    I'm not sure deleting keys from the registry is ever a best practice; however, Xcelsius has entries in:
    HKEY_CURRENT_USER > Software > Business Objects > Xcelsius
    HKEY_LOCAL_MACHINE > SOFTWARE > Business Objects > Suite 12.0 > Xcelsius
    The current user folder holds temporary settings, such as how you've modified your interface.
    The local machine folder holds more important information.
    As always, it's recommended that you backup the registry and/or create a restore point before modifying or deleting any keys.
    As for directories, the only directory Xcelsius uses is the one you install to.  It also places some install logs in the temp directory, but they have no effect on the application.

  • Best practice to modify OIM webApp.war in Weblogic (10g - 9102)

    A very generic question for OIM on WebLogic:
    If I am making any change (minor or major) to any of the JSP files in the OIM xlWebApp.war, what is the best practice for moving it to production?
    Thanks,

    Hi,
    Here are the steps you should follow:
    - Unwar the xlWebApp.war file (using the jar command).
    - Modify/add the JSPs.
    - War it again.
    - Copy/replace it at the XEL_HOME/webapp location.
    - If you modified an XML or properties file, copy that to DDTemplate/webapp.
    - Run the patch_weblogic command.
    - Restart the server.
    You can also write a build script that takes the OOTB xlWebApp.war and the custom JSPs and generates a new xlWebApp.war.
    Once it has been tested successfully in the dev/QA environment, you can check the code and the new xlWebApp.war into version control (e.g. Subversion).
    In Production you can then use this xlWebApp.war file directly and run the patch command.
    Also look at this:
    Re: Help: OIM 10G Server With Weblogic
    Cheer$
    A..

  • Swing best practice - private modifier vs. many parameters

    Dear Experts,
    I have a comboBox that has a customized editor and a KeyListener that responds to several keyPressed and keyTyped events. The comboBox is used in two different JFrames, say JFrame frmA and JFrame frmB.
    Since the KeyListener changes the state of 8 other components in frmA, I have two options:
    Option 1:
    - Code the comboBox in a separate class and pass all components as parameters. I will have around 10 parameters, but the components can be kept private to frmA or frmB.
    Option 2:
    - Code the comboBox in a separate class and pass the instance of the caller (frmA or frmB), so that the comboBox can change the state of the other components in frmA or frmB according to its caller. However, the components must not be private and must be accessible to the comboBox class.
    My questions:
    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    2. Which option will be more efficient and require less CPU time? If it is the same, which option is the best practice?
    3. Is there any other option that is better than these two options?
    Thanks for your advice,
    Patrick

    Option 2:
    - Coding the comboBox in a separate class and passing the instance of the caller (frmA or frmB), so that the comboBox can change the state of the other components in frmA or frmB according to its caller. However, the components must not be private and must be accessible to the comboBox class.
    My questions:
    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    It doesn't stand up in the long run. Doing so couples your specific ComboBox class to all widgets that react to the ComboBox changes. If you happen to add a new button in either JFrame that should also be affected by the combo-box selection, you'll have to modify, and re-test, the ComboBox code. Moreover, if a new button were needed in one JFrame but not the other, you'd have to introduce a special case in your ComboBox code.
    Instead of having the ComboBox's listeners invoke methods on each piloted widget, have them invoke one method (selectionChanged(...)) on the widgets' common parent (not necessarily a graphical container, but an object that has (maybe indirect) references to each of the dependent widgets).
    2. Which option will be more efficient and require less CPU time?
    I wouldn't worry about it. In the graphical layer of an application, and unless the graphical representation performs computations on the bitmap, any action is normally much quicker than business-logic computation. Any counter-example is likely to be a bug in the UI implementation (such as not observing Swing's threading rules) or a severe flaw in the design (such as having a hierarchy of several hundred JComponents). Swing widgets are pretty responsive to genuine calls such as setEnabled(), setBackground(), setText(), ...
    If it is the same, which option is the best practice?
    Neither. Hard-coding relationships between widgets may be OK within a single, and single-purpose, form. But if you want to code a reusable component, design it for reuse (that is, the less it knows about which context it is used in, the more contexts it can be used in).
    In general, widgets that know each other involve a quadratic number of references, which accordingly impacts code readability (and bug rate). This is the primary reason for introducing a Mediator pattern (of which my reply to question 1 above is a degenerate form).
    3. Is there any other option that is better than these two options?
    Yes. Look into the Mediator pattern (http://en.wikipedia.org/wiki/Mediator_pattern - the Wikipedia page is not compelling, but you'll easily find lots of resources on the Web).
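    To make the suggestion concrete, here is a minimal sketch of that degenerate mediator in Swing. All names (SelectionMediator, NotifyingCombo, FrmA, okButton, detail) are illustrative, not from the original posts:
    import javax.swing.JButton;
    import javax.swing.JComboBox;
    import javax.swing.JTextField;

    // The reusable combo class depends only on this one-method callback interface.
    interface SelectionMediator {
        void selectionChanged(Object newSelection);
    }

    // Reusable combo box: it pilots nothing directly, it only notifies the mediator.
    class NotifyingCombo extends JComboBox {
        NotifyingCombo(final SelectionMediator mediator) {
            addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent e) {
                    mediator.selectionChanged(getSelectedItem());
                }
            });
        }
    }

    // Each form implements its own reactions to a selection change, in one place.
    class FrmA implements SelectionMediator {
        private final JButton okButton = new JButton("OK");      // stays private to frmA
        private final JTextField detail = new JTextField(20);    // stays private to frmA
        private final NotifyingCombo combo = new NotifyingCombo(this);

        public void selectionChanged(Object newSelection) {
            okButton.setEnabled(newSelection != null);
            detail.setText(String.valueOf(newSelection));
        }
    }
    The combo class compiles against nothing but the one-method interface, so adding a ninth or tenth reacting widget to frmA changes only FrmA.selectionChanged(), never the combo code.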

  • Modify process chain

    Hi,
    I have created a process chain but forgot to add the cube compression step. I have already activated the process chain and am now looking for an edit/modify option, but I do not know how to modify a process chain. Is there any way that I can modify it?
    Thanks in advance.
    Teder.

    Hi,
    You can modify the chain and include the missing process(es).
    In Planning mode, go to Change mode, include the processes, then Save and Activate.
    Note: If you have already scheduled your chain, make sure you release the job after the modifications.
    Regards,
    Jonn

  • Best practice for optimizing processing of big XMLs?

    All,
    What is the best practice when dealing with large XML files (say, a couple of MBs)?
    Instead of having to read the file from the file system every time a static method is run, what would be the best way for the program to read the file once and then keep it in memory, so that the next time it would not have to read and parse it all over again?
    Currently my code just reads the file in the static method, like this:
    public static String doOperation(String path /* , ... */) throws Exception {
        String masterFile = path + "configfile.xml";
        // Re-reads and re-parses the file from disk on every call
        Document theFile = (Document) getDocument(masterFile);
        Element root = theFile.getDocumentElement();
        NodeList nl = root.getChildNodes();
        // ... operations on file
    }
    Optimization tips and tricks most appreciated :-)
    Thanks,
    David

    The best practice for multi-megabyte XML files is not to have them at all.
    However if you must, presumably you don't need all of the information in your XML, repeatedly. Or do you? If you need a little bit of it here, then another little bit of it there, then yet another little bit of it later, then you shouldn't have stored your data in one big XML.
    Sorry if that sounds unhelpful, but I'm having trouble imagining a scenario when you need all the data in an XML document repeatedly. Perhaps you could expand on your design?
    PC²
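    For the simple case described in the question - a config file that is parsed repeatedly but rarely changes - a minimal sketch of the parse-once approach is below, assuming the parsed Document fits in memory and is treated as read-only after parsing (the class name ConfigCache and the map-based cache are illustrative, not from this thread):
    import java.io.File;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    // Parse each file once and cache the resulting DOM, keyed by path.
    public final class ConfigCache {
        private static final Map<String, Document> CACHE = new ConcurrentHashMap<String, Document>();

        private ConfigCache() { }

        public static Document get(String path) throws Exception {
            Document doc = CACHE.get(path);
            if (doc == null) {
                doc = DocumentBuilderFactory.newInstance()
                                            .newDocumentBuilder()
                                            .parse(new File(path));
                CACHE.put(path, doc);   // benign race: worst case the file is parsed twice
            }
            return doc;
        }
    }
    The static method in the question would then call ConfigCache.get(masterFile) instead of parsing the file on every invocation. Note that DOM nodes are not thread-safe to mutate, so this only works if callers never modify the cached Document.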

  • How to modify a Process Chain to run under BWREMOTE

    Greetings,
    Below is the setup:
    - A process chain made up of an ABAP program.
    - The program uses OPEN DATASET to read the data from a file on the server and eventually rewrite it to a different directory.
    - When the process chain is run, it gets the following message:
      "PC could not be scheduled- termination return code 8"
    - When I run the ABAP program in the background, I get the following message:
      "OPEN_DATASET_no_authority"
    - I have full authorization to run any ABAP program and have full access to the server file.
    - Notes 947690 & 511475 contain a solution for changing the user ID for BW up to and including 4.6C. Our system is 7.0.
    How can I change the user ID in the process chain so that it runs under BWREMOTE and not under my user ID?
    Any suggestions are greatly appreciated.  Thank you in advance,
    B.A.

    Thank you everyone for your responses. 
    After the responses, I changed the user to BWREMOTE.
    I went through the following steps:
    RSPC - chose my PC - Change mode - Menu - PC - Attributes - Execution user - selected the option 'BW background user' - saved the PC, checked it, activated it and scheduled it.
    However, in SM37 I do not see the job under BWREMOTE; it is still running under my user ID.
    Any suggestions please? I would greatly appreciate it.
    B.A.

  • What is the best practice dealing with process.getErrorStream()

    I've been playing around creating Process objects with ProcessBuilder. I can use getErrorStream() and getOutputStream() to read the output from the process, but it seems I have to do this on another thread. If I simply call process.waitFor() and then try to read the streams, that doesn't work. So I do something like:
    final InputStream errorStream = process.getErrorStream();
    final StringWriter errWriter = new StringWriter();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(
        new Runnable() {
            public void run() {
                try {
                    IOUtils.copy(errorStream, errWriter, "UTF-8");
                } catch (IOException e) {
                    getLog().error(e.getMessage(), e);
                }
            }
        });
    int exitValue = process.waitFor();
    getLog().info("exitValue = " + exitValue);
    getLog().info("errString =\n" + errWriter);
    This works, but it seems rather inelegant somehow.
    The basic problem is that the Runnable never completes on its own. Through experimentation, I believe that when the process is actually done, errorStream is never closed, or never gets an end-of-file. My current code works because when it goes to read errWriter it just reads what is currently in the buffer. However, if I wanted to clean things up and use executorService.submit() to submit a Callable and get back a Future, then a lot more code is needed because "IOUtils.copy(errorStream, errWriter, "UTF-8");" never terminates.
    Am I misunderstanding something, or is process.getErrorStream() just a crappy API?
    What do other people do when they want to get the error and output results from running a process?

    OK, I found a better solution:
    Future<String> errString = executorService.submit(
        new Callable<String>() {
            public String call() throws Exception {
                StringWriter errWriter = new StringWriter();
                IOUtil.copy(process.getErrorStream(), errWriter, "UTF-8");
                return errWriter.toString();
            }
        });
    int exitValue = process.waitFor();
    getLog().info("exitValue = " + exitValue);
    try {
        getLog().info("errString =\n" + errString.get());
    } catch (ExecutionException e) {
        throw new MojoExecutionException("proxygen: ExecutionException");
    }
    The problem I was having before seemed to be that the call to Apache's IOUtil.copy(errorStream, errWriter, "UTF-8") was not working right; it did not seem to be terminating on EOS. But now it seems to be working fine, so I must have been chasing some other problem (or non-problem).
    So, it does seem the best thing to do is read the error and output streams from the process on their own daemon threads, and then call process.waitFor(). The ExecutorService API makes this easy, and using a Callable to return a future value does the right thing. Also, Callable is a little nicer as the call method can throw an Exception, so my code does not need to worry about that (and the readability is better).
    Thanks for helping to clarify my thoughts and finding a good solution :-)
    Now, it would be really nice if the Process API had a method like process.getFutureErrorString() which does what my code does.
    Cheers, Eric
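    For reference, here is a self-contained sketch of that final pattern - drain stdout and stderr each on their own thread via a Callable, then call waitFor() - using only the JDK. The command being run and the class name are illustrative, not from Eric's build:
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public final class ProcessRunner {
        public static void main(String[] args) throws Exception {
            // Illustrative command; substitute whatever the build actually runs.
            Process process = new ProcessBuilder("ls", "-l").start();
            ExecutorService pool = Executors.newFixedThreadPool(2);
            // Drain stdout and stderr concurrently so the pipe buffers never fill up.
            Future<String> out = pool.submit(drain(process.getInputStream()));
            Future<String> err = pool.submit(drain(process.getErrorStream()));
            int exitValue = process.waitFor();
            System.out.println("exitValue = " + exitValue);
            System.out.println("stdout:\n" + out.get());
            System.out.println("stderr:\n" + err.get());
            pool.shutdown();
        }

        // Reads a stream to EOF and returns its contents as a UTF-8 string.
        private static Callable<String> drain(final InputStream in) {
            return new Callable<String>() {
                public String call() throws IOException {
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    byte[] chunk = new byte[4096];
                    for (int n; (n = in.read(chunk)) != -1; ) {
                        buf.write(chunk, 0, n);
                    }
                    return buf.toString("UTF-8");
                }
            };
        }
    }
    Draining both streams matters: if the subprocess fills its stdout or stderr pipe buffer and nobody reads it, waitFor() can block forever.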

  • Best Practice Reuse - Post Processing

    I have a Query Canvas that I would like to use over and over again in my different form modules. I created an object group and put the group into the object library. All of the code is located in program units.
    I want some "post" processing done which differs depending on which form I am reusing it on. I first created a post_query procedure and called it as a hook, but when I use it on a form, it gets overwritten when I update the reusable object group.
    I then thought I would just call my post processing after calling the query procedure, but the "post" code executes BEFORE the query code executes - and I am completely confused as to why.
    So my basic question is, how do you add "hooks" or pre and post procedures to your reusable code and prevent that code from being overwritten as you update your objects?

    Hi,
    I also want to know what kind of monitoring scripts I can set up as cron jobs to monitor or detect any failures or problems.
    To monitor the Cluster (OS level):
    I suggest you use the powerful tool CHM, which already comes with the Grid Infrastructure product.
    What do you need to do to configure it? Nothing ... just use it.
    Cluster Health Monitor (CHM) FAQ [ID 1328466.1]
    See this example:
    http://levipereira.wordpress.com/2011/07/19/monitoring-the-cluster-in-real-time-with-chm-cluster-health-monitor/
    To monitor Database:
    PERFORMANCE TUNING USING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, and ADDM and Sql Tuning Advisor. [ID 276103.1]
    The purpose of this article is to illustrate how to use the new 10g manageability features to diagnose
    and resolve performance problems in the Oracle Database.
    Oracle10g has powerful tools to help the DBA identify and resolve performance issues
    without the hassle of analyzing complex statistical data and extensive reports.
    Hope this help,
    Levi Pereira

  • Best practices for batch processing without SSIS

    Hi,
    The gist of this question is: in general, how should a C# client insert/update batches of records using stored procedures? The ideas I can think of are:
    1) Create 1 SP with a parameter of type XML, and pass say 100 records at a time, on 1 thread.  The SP reads the XML as a table and does a single INSERT.
    2) Create 1 SP with many parameters that inserts 1 record.  I can either build a big block of EXEC statements for say 100 records at a time, or call the SP one at a time, on 1 thread.  Obviously this seems the slowest.
    3) A parallel-processing version of either of the above: pass 100 records at a time via an XML parameter, a big block of EXEC statements, or 1 at a time, and use PLINQ to make multiple connections to the database.
    The records will be fairly wide, substantial records.
    Which scenario is likely to be fastest and avoid lock contention?
    (We are doing batch processing and there is not a current SSIS infrastructure, so it's manual: fetch data, call web services, update batches.  I need a batch strategy that doesn't involve SSIS - yet).
    Thanks.

    The "streaming" option you mention in your linked thread sounds interesting, is that a way to input millions of rows at once?  Are they not staged into the tempdb?
    The entire TVP is stored in tempdb before the query/proc is executed.  The advantage of the streaming method is that it eliminates the need to load the entire TVP into memory on either the client or the server.  The rowset is streamed to the server, and SQL Server uses the insert bulk method to store it in tempdb.  Below is an example C# console app that streams 10M rows as a TVP.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Collections;
    using System.Collections.Generic;
    using Microsoft.SqlServer.Server;

    namespace ConsoleApplication1
    {
        class Program
        {
            static string connectionString = @"Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI;";

            static void Main(string[] args)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.usp_tvp_test", connection))
                {
                    // The TVP value is an IEnumerable<SqlDataRecord>, so rows are
                    // streamed to the server instead of materialized in client memory.
                    command.Parameters.Add("@tvp", SqlDbType.Structured).Value = new Class1();
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();
                    command.ExecuteNonQuery();
                    connection.Close();
                }
            }
        }

        class Class1 : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData[] metaData = new SqlMetaData[1] { new SqlMetaData("col1", System.Data.SqlDbType.Int) };

            // Yields 10M rows one at a time; nothing is buffered on the client.
            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                for (int i = 0; i < 10000000; ++i)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, i);
                    yield return record;
                }
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                throw new NotImplementedException();
            }
        }
    }
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    Almost every day we have instances of the RBDMANI2 job getting stuck and running for a very long period of time.  We frequently have multiple SHPCON idocs coming in containing the same material number, and idocs frequently fail because the material in the idoc has become locked.  Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed idocs begin processing.  The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound idocs such as this for maximum performance in a very high volume system.  I know that RBDAPP01 processes idocs in status 64 and 66, and RBDMANI2 is used to reprocess idocs in all statuses.  I have been told that setting the messages to trigger immediately in WE20 can result in poor performance.  So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but Note 1333417 (Performance problems when processing IDocs immediately) does state that immediate processing is not a good option for high volumes.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Best practices to design OWB Process Flows

    Hi All,
    I am using OWB to develop historical data warehouse. For this I have created few mappings that load data from source tables to staging tables and then from staging to target tables.
    Now I am designing process flows to execute these mappings. Can someone suggest best practices for designing process flows?
    I have a few questions in my mind that I am looking for answers to:
    1) Should I keep the mappings that load staging and target tables in different process flows or in one?
    2) Do I need to include an email activity for each mapping failure, or in other words, how can I consolidate all mapping errors into one email?
    3) I am using two outgoing transitions from each mapping, one for success and another unconditional; is that the right method, or should I use three transitions: one for success, one for warning and one for error?
    4) I have created some email activity templates that I am using in process flows. For the development environment I have used test from_address and to_address values. But when I actually deploy these process flows in production, these addresses will need to be changed. Will I need to change these addresses individually in each process flow, or in other words, what is the best way to deploy process flows?
    I am using a 10gR2 database and OWB.
    Any help will be appreciated.
    Thanks,
    VKumar


  • Process chains administration

    Hi experts!
    I would like your advice about the best way to administer process chains.
    We have been adding process chains as we deliver new models, and now we want to adopt best practices for monitoring these processes.
    We use the RSPCM transaction to monitor process chains (and occasionally many others: RSMON, CHANGERUNMONI, etc.), but we want to implement automated warnings that can alert via email about: process chains terminated with red status, process chains still running after a certain hour (long-running or "hanged" processes), or even process chains that should have run but did not.
    We have programmed emails from process chains and we know about CCMS alerting... but we still don't know which practice to adopt.
    We are thinking about developing some ABAP code with event handling, but we want to know if there are better ways.
    Thanks in advance

    Hi,
    Please check the following links; these might help you:
    http://help.sap.com/saphelp_nw04s/helpdata/en/cf/9bdd42cadf2878e10000000a155106/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/92/5e073c8e56f658e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/bb/f0033c128f4a7de10000000a114084/frameset.htm
    Regards,
    MADhu
