Scenario runtime parameters best practices

Hi,
I have a scenario which uses dynamic input values. I am trying to understand the best practice for initialising input configuration values, considering the scenario runs automatically.
(something like configuration initialisation parameters)
Options I have in mind:
1. Variables (can be passed at runtime, but would have to be input manually)
2. Flexfields (sounds like a good idea, but if I migrate across environments, will the flexfields be migrated too?)
3. Procedure options (can procedure options be changed at runtime?)
Please let me know your suggestions.

Hi John,
I think it's best if you use a variable and refresh it from a database query at runtime.
You can set up a lookup/config table that holds key/value pairs, the key being the variable name and the value being the config value you need to use.
The Developer's Guide has information on refreshing variables.
You can then change the config in your DB table whenever you need to.
Cheers.
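The key/value lookup described above can be sketched as follows. This is a minimal illustration in Python, with an in-memory SQLite table standing in for the real config table; the table and column names are assumptions, not part of any product API:

```python
import sqlite3

# Stand-in for the scenario's config table: one row per variable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scenario_config (var_name TEXT PRIMARY KEY, var_value TEXT)")
conn.executemany(
    "INSERT INTO scenario_config VALUES (?, ?)",
    [("LOAD_DATE", "2024-01-31"), ("BATCH_SIZE", "500")],
)

def refresh_variable(name: str) -> str:
    """Mimic a refreshing variable: run a query, return the single value."""
    row = conn.execute(
        "SELECT var_value FROM scenario_config WHERE var_name = ?", (name,)
    ).fetchone()
    if row is None:
        raise KeyError(f"no config value for variable {name!r}")
    return row[0]

print(refresh_variable("BATCH_SIZE"))  # -> 500
```

Changing a value in the table changes what the next scenario run picks up, with no manual input at execution time.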

Similar Messages

  • Plan/Forecast Scenario in HFM - best practice

    Hello all,
    We're currently working on adding a Plan and Forecast scenarios to HFM
    to incorporate some budgeting, which had previously been done in
    Excel/Access. For those of you who do planning in HFM, would you care
    to briefly summarize your processes?
    1) How do you load the data into your HFM plan/forecast scenario?
    Excel spreadsheet? FDM Multi-Load? Webform?
    2) If you use Excel, do you just allow the users to submit via the
    template using SmartView or formulas? What about audit/controls?
    3) Do you let the users consolidate whenever they want?
    Any other tips would be greatly appreciated!
    Thanks

    We've been doing budgeting and forecasting in HFM for a while. However, we're leaving that behind and moving to Hyperion Planning.
    Some things you are going to want to consider:
    1) You will probably have more people entering budgets/forecasts than actuals, so there may be a licensing cost.
    2) Security will need to be considered: who gets what, and where.
    3) Budgeting/forecasting is usually done at a more granular level and then rolled up. This could make your HFM application much larger or more detailed than necessary.
    4) If you re-org, which we do, we have to reconsolidate all the scenarios and all the years. Time consuming.
    5) There is one rule file, making divisional or cross-company modeling difficult. For instance, one of our divisions has a very complex allocation model which we could not build in HFM.
    6) It's not uncommon for users to ask for a rolling forecast, which can be done in HFM at the cost of more scenarios. Also, versioning is difficult in HFM.
    7) Reporting will also need to be considered: who does it, how it looks, and who gets what report.
    With Planning, we don't have the above issues.
    To answer your questions before moving to Planning:
    1) How do you load the data into your HFM plan/forecast scenario (Excel spreadsheet, FDM Multi-Load, webform)? 90% use the data grids, 5% FDM (international or not on our GL), 5% SmartView (power users).
    2) If you use Excel, do you just allow the users to submit via the template using SmartView or formulas? What about audit/controls? How do you handle that in actuals? Our controls are handled via security.
    3) Do you let the users consolidate whenever they want? Yes, at the expense of users running consolidations over each other.

  • Information on Best Practice usage of PopulateAttribute

    Using JDeveloper 11.1.1.4.0
    I have a requirement where I use the BC layer as the data service for my UI, but this BC layer is not connected to the database; it relies on a web service instead.
    Everything works well using the no-database-transaction and programmatic VOs implementation from Steve Muench's "Not Yet Documented" examples. To further optimize my implementation, I'm trying to use populateAttribute: in my scenario I insert the attribute programmatically, so I also handle validation beforehand.
    By definition, populateAttribute allows me to set the value without triggering any validation. So far this holds: none of my business rules or LOV model accessor validations are getting fired. My question mainly focuses on the difference between populateAttribute and populateAttributeAsChanged.
    1. Which of the two methods should be used in which scenarios (close to best practice :) )?
    2. The documentation warns that "primary key attributes should not be set using populateAttribute APIs"; in what case will this affect my rows?
    3. In a click-to-edit table, I can prepopulate my ViewObject without triggering my accessor (LOV model) validation, but when I start activating rows (click), it still seems to fire the accessor validation. (The calls are fairly inexpensive, but unnecessary.)
    Thanks.
    -Marvin

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live, the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it gets messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • WCEM Best Practice deployment in a multi CRM Landscape

    Hi SCN
    I'm looking for advice on best-practice deployment of WCEM, specifically in a multi-CRM landscape scenario.
    Do best practices exist?
    JR

    Look into using NWDI as your source code control (DTR) and transport/migration from dev through to production.  This also will handle the deployment to your dev system (check-in/activate).
    For unit testing and debugging you should be running a local version (NWDW).  This way once the code is ready to be shared with the team, you check it in (makes it visible to other team members) and activate it (deploys it to development server).
    We are currently using a separate server for WD applications rather than running them on the portal server. However, this does not allow the WD app to run in the new WD iView, so it depends on what the WD app needs to do and have access to. Of course there is always the Federated Portal Network as an option, but that is a whole other topic.
    For JCo connections, WD uses a connection name and this connection can be set up to point to different locations depending on which server it is on.  So on the development server the JCo connection can point to the dev back-end and in prod point to the prod back-end.  The JCo connections are not migrated, but setup in each system.
    I hope this helps.  There is a lot of documentation available for NWDI to get you started.  See:  http://help.sap.com/saphelp_erp2005/helpdata/en/01/9c4940d1ba6913e10000000a1550b0/frameset.htm
    -Cindy
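The per-environment JCo setup described above (same connection name, different target in each system) can be sketched as a simple lookup. This is a hypothetical Python illustration of the pattern only; the environment names, connection name, and hosts are invented for the example:

```python
# Hypothetical per-environment lookup mirroring how a JCo connection name
# resolves to a different back-end in each system (all names are assumptions).
JCO_DESTINATIONS = {
    "DEV":  {"ERP_BACKEND": "devhost:3300"},
    "PROD": {"ERP_BACKEND": "prodhost:3300"},
}

def resolve_destination(environment: str, connection_name: str) -> str:
    """Same logical connection name, environment-specific target."""
    return JCO_DESTINATIONS[environment][connection_name]

print(resolve_destination("DEV", "ERP_BACKEND"))  # -> devhost:3300
```

The application code only ever refers to the connection name; the mapping is maintained separately in each system rather than migrated, as the reply notes.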

  • Best practices around building reporting relationships in OM

    Hello Experts,
    Would you please let me know the pros and cons of building the following kind of reporting lines in Organizational Management (OM):
    1) Direct line reporting
    2) Travel expense approval manager relationship
    3) Compensation approval reporting lines
    4) Performance and other reporting lines.
    What do you all suggest, and what is best practice in this respect: just build one (the direct reporting) and that's it, or do you recommend building all kinds of reporting relationships? Any thoughts, and what have you all done on this?
    Thank you very much.
    saplover

    I learned that for Time Management, you need to have the Chief in a different org unit from his employees so he won't have access to approve his own time. I wondered if the same would apply for ESS / MSS?
    Here are two different scenarios for maintaining this. I learned that Scenario I would not work for Time Management for the reasons I listed above, and Scenario II would work. Can you provide some input? What is SAP best practice on this? Which scenario would be best practice? Please help!
    Scenario I
    HQ Sales (Org Unit 1)
         General Manager (Chief)
         Sales Rep I
         Sales Rep II
         Sales Rep III
    Scenario II
    HQ Sales (Org Unit 1)
         General Manager (Chief)
         HQ Sales (Org Unit 2)
              Sales Rep I
              Sales Rep II
              Sales Rep III

  • Best Practices Java 1.6.0.24 Runtime parameters

    Hi,
    We're deploying Java 1.6.0.24 for 300 end-users on ERP 11.5.10 CU2 on Linux.
    We would like to know what are the Best Practices for Java 1.6.0.24 Runtime parameters.

    Please also see these docs.
    Recommended Client Java Plug-in (JVM/JRE) For Discoverer Plus 10g (10.1.2) [ID 465234.1]
    Diagnosing Forms Mouse Focus Problems Using JRE [ID 760250.1]
    Login Loop on Internet Explorer after Session Timeout using JRE 1.6.0_18 [ID 1078228.1]
    How Are The Forms JAR Files Stored with Sun JRE [ID 1058882.1]
    Thanks,
    Hussein

  • SAP PI conceptual best practice for synchronous scenarios

    Hi,
    Apologies for the length of this post, but I'm sure this is an area most of you have thought about in your journey with SAP PI.
    We have recently upgraded our SAP PI system from 7.0 to 7.1 and I'd like to document best practice guidelines for our internal development team to follow.
    I'd be grateful for any feedback related to my thoughts below which may help to consolidate my knowledge to date.
    Prior to the upgrade we implemented a number of synchronous and asynchronous scenarios using SAP PI as the hub at runtime, using the Integration Directory configuration.
    No interfaces to date are exposed directly from our backend systems using transaction SOAMANAGER.
    Our asynchronous scenarios operate through the SAP PI hub at runtime, which builds in resilience and harnesses the benefits of the queue-based approach.
    My queries relate to the implementation of synchronous scenarios where there is no mapping or routing requirement. Perhaps it's best that I outline my experience/thoughts on the three options and summarise my queries/concerns afterwards.
    1) Use the SAP PI Integration Directory. I appreciate that going through SAP PI at runtime is not necessary and adds latency to the process, but the monitoring capability in transaction SXMB_MONI provides full access for audit purposes, and we have implemented alerting running hourly so all process errors are raised and handled accordingly. In our SAP PI production system we have a full record of sync messages, while these don't show in the backend system as we don't have propagation turned on. When we first looked at this, the reduction in speed seemed to be outweighed by the quality of the monitoring/alerting, given that none of the processes are particularly intensive or require instant responses. We have some inbound interfaces called by two sender systems, so we have the overhead of maintaining the Integration Repository/Directory design/configuration twice for these systems, but the nice thing is that SXMB_MONI shows which system sent the message. Extra work, but seemingly for improved visibility of the process. I'm not suggesting this is the correct long-term approach; it just states where we are currently.
    2) Use the Advanced Adapter Engine. I've heard mixed reviews about this functionality: there are obvious improvements in speed by avoiding the ABAP stack on the SAP PI server at runtime, but some people have complained about the lack of SXMB_MONI support. I don't know if this is still the case as we're at SAP PI 7.1 EHP1, but I plan to test and evaluate once Basis have set up the prerequisite RFCs etc.
    3) Use the backend system's SOAP runtime and SOAMANAGER. Using this option I can still model inbound interfaces in SAP PI but expose them using transaction SOAMANAGER in the backend ABAP system. [I would have tested the direct P2P connection option, but our backend systems are still on NetWeaver 7.0 and this option is not supported until 7.1, so that's out for now.] The clear benefit of exposing the service directly from the backend system is obviously performance, which in some of our planned processes would be desirable. My understanding is that the logging/tracing options in SOAMANAGER have to be switched on while you investigate, so there is no automatic recording of interface detail for retrospective review.
    Queries:
    I have the feeling that there is no clear-cut answer to which of the options above you select, and that the decision should be based on the requirements.
    I'm curious to understand SAP's intention with these options:
    - For synchronous scenarios, is it assumed that the client should always handle errors, so the lack of monitoring should be less of a concern and option 3 desirable when no mapping/routing is required?
    - Not only does option 3 offer the best performance, but the generated WSDL, once built, is ready for any further system to implement, thereby offering the maximum benefit of SOA. Should we therefore always use option 3 whenever possible?
    - Is it intended that the AAE runtime should be used when available, but only for asynchronous scenarios or those requiring SAP PI functionality like mapping/routing, and that otherwise customers should use option 3? I accept there are some areas of functionality not yet supported by the AAE, so that would be another factor.
    Thanks for any advice, it is much appreciated.
    Alan

    Hi Aaron,
    I was hoping for a better, more concrete answer to my questions.
    I've had discussion with a number of experienced SAP developers and read many articles.
    There is no definitive paper that sets out the best approach here but I have gleaned the following key points:
    - Make interfaces asynchronous whenever possible to reduce system dependencies and improve the user experience (e.g. by eliminating wait times when they are not essential, such as by sending them an email with confirmation details rather than waiting for the server to respond)
    - It is the client's responsibility to handle errors in synchronous scenarios, so the monitoring lost with point-to-point services, compared to the detailed information in transaction SXMB_MONI for PI services, is not such a big issue. You can always turn on monitoring in SOAMANAGER to trace errors if need be.
    - The choice of integration technique varies considerably by release level (for PI and NetWeaver), so the system landscape will be a significant factor. For example, we have some systems on NetWeaver 7.0 and others on 7.1. As you need 7.1 for direct-connection PI services, we'd rather wait until all systems are at the higher level than have mixed usage in our landscape; it is already complex enough.
    - We've not tried the AAE option in a production scenario yet, but this is only really important for high-volume interfaces, something that is not a concern at the moment. Obviously cumulative performance may become an issue in time, so we plan to start looking at AAE soon.
    Hope these comments may be useful.
    Alan

  • What’s the best practice for this scenario?

    Hi,
    My users want the ability to change the WHERE and/or ORDER BY clause at runtime. They may define user preferences on each screen (which is bound to a view object). They want to see the same records based on the WHERE/ORDER BY defined on their last visit. That is why I keep the user's preferences and load the screen based on them, using:
    View.setWhereClause(...);
    View.setOrderByClause(...);
    View.executeQuery();
    This works well when only one user is working with the application, but we see poor performance when more than one user is working with it.
    What are the points to increase the performance and what is the best practice for this scenario?
    Thanks for your help in advance.
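One common way to keep per-user filters cheap and safe is to parameterize the WHERE clause with bind values and restrict ORDER BY to a whitelist of columns, instead of rebuilding a literal SQL string per user (in ADF this would correspond to setWhereClauseParams, though that is only mentioned here as a possibility, not as the poster's setup). A Python/sqlite3 sketch of the general technique, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OPEN"), (2, "CLOSED"), (3, "OPEN")])

ALLOWED_ORDER_COLUMNS = {"id", "status"}  # ORDER BY cannot use bind values

def query_orders(status: str, order_by: str = "id"):
    """Per-user filter via a bind variable; ORDER BY from a fixed whitelist."""
    if order_by not in ALLOWED_ORDER_COLUMNS:
        raise ValueError(f"unsupported sort column: {order_by!r}")
    sql = f"SELECT id, status FROM orders WHERE status = ? ORDER BY {order_by}"
    return conn.execute(sql, (status,)).fetchall()

print(query_orders("OPEN"))  # -> [(1, 'OPEN'), (3, 'OPEN')]
```

Bind values let the database reuse one parsed statement across users, whereas a fresh literal WHERE string per user forces a re-parse every time; that alone can matter under concurrency, separate from any application-level locking.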

    Sung,
    I am talking about only 2 users in my testing. I am sure I missed something but could not identify it.
    This page is my custom query page, including a tag to instantiate the app module in stateful mode at the top (<jbo:ApplicationModule..>), a tag to instantiate a data source (<jbo:Datasource...>), a release tag at the bottom (<jbo:ReleasePageResources..>), and some Java code in the middle (body). The Java code constructs the query statement and then fires the query to set up the view object, using the methods above.
    So I am facing very slow performance when two clients load this page at the same time. It looks like the entire application locks for others while one client loads this page and fires the query. I realized the bottleneck is where executeQuery() is executing.
    What do you think?
    Thanks in advance for your comments.

  • Best Practices for FSCM Multiple systems scenario

    Hi guys,
    We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
    a 4.6c system
    a 4.7 system
    an ECC 5 system
    2 ECC6 systems
    I have documented my design, but would like to double-check and pick the minds of colleagues regarding the following areas/questions.
    Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system? (a) for the initial creation, and (b) for on-going synchronization of new customers and changes to existing customers?
    Credit Management: what is the best practice for update of exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR  (synchronous) or periodic, say once a day? (assuming we can control this in the BADI)
    Is there any particular point to note in dispute management?
    Any other general note regarding this scenario?
    Thanks in advance. Comments appreciated.

    Hi,
    I guess when you have information that SAP can read and act on, the interface has to be asynchronous (from non-SAP to FSCM).
    But when the credit analysis is done by a non-SAP party such as Experian, SAP sends the information on paid and unpaid invoices and this non-SAP party returns a rating for the customer. All banks and big companies in the world do the same. For this, you have the synchronous interface. This interface will update FSCM-CR (Credit), blocking the vendor or not, and decreasing or increasing their limit amount to buy.
    So, for those 1,000 sales orders, you'll have to think with PI about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
    Contact your PI team to help think through this information exchange.
    Have I understood your question correctly?
    JPA

  • Activate Scenarios with Solution Builder CRM Best Practices V1.2007

    Hi,
    I finished all steps in Quickguide for CRM Best Practices V1.2007 until the end.
    All worked fine without any problem.
    Now I want to activate a scenario.
    1. In the Workbench field I get a list of 15 requests/tasks; I'm only able to select one.
    2. In the Customizing field I do not get any values.
    3. How do I maintain these fields?
    4. Do I have to create a customizing request?
    Can anybody tell me how to proceed with this step? I copied the standard solution to my favorite solution and marked seven scenarios.
    Perhaps there is other documentation besides Solution_Builder_Quick_Start_V4?
    Regards
    Andreas

    Hi Andreas,
    In the same popup window, at the bottom, you will find options to create workbench and customizing requests.
    You can assign only one workbench and one customizing request for all the activities of solution builder.
    If you do not have an existing customizing request, choose the option to create one.
    Regards,
    Padma

  • How to set global parameters? (Best Practice?)

    Dear all,
    I have experience with Infor Performance Management tools, QlikView, and SAP BusinessObjects reporting. As of last week I'm new to Microsoft Reporting Services. In our current environment we're setting up a report structure on top of an Analysis Services cube.
    Currently I'm creating various reports (which works fine) on the mentioned Analysis Services cube.
    My question is whether there is a best practice for creating reports with global variables. We'll have a package of ~20 reports which all relate to the same data source. My approach is that I'll define the reports, but the global parameters like "Current year", "Last year" etc. should be defined in one global place. That's the way I would have done it in the other BI tools.
    How is this done best in MS Reporting Services? Should I add an additional flat SQL Server source and put the values in some ref tables? Or script the global parameters?
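The "ref table" idea mentioned above can be sketched as a small shared parameter table that every report reads at run time. This is a hedged Python/sqlite3 illustration of the pattern only (the table name, parameter names, and refresh step are assumptions, not Reporting Services API):

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report_params (param_name TEXT PRIMARY KEY, param_value TEXT)")

def refresh_global_params(today: datetime.date) -> None:
    """Recompute the shared defaults that all ~20 reports would read."""
    values = {
        "CurrentYear": str(today.year),
        "LastYear": str(today.year - 1),
    }
    conn.executemany(
        "INSERT OR REPLACE INTO report_params VALUES (?, ?)",
        values.items(),
    )

refresh_global_params(datetime.date(2014, 6, 1))
print(dict(conn.execute("SELECT * FROM report_params")))
# -> {'CurrentYear': '2014', 'LastYear': '2013'}
```

Each report's parameter default would then be a query against this one table, so a change to "Current year" is made once rather than in twenty report definitions.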

    Hi Praveen G,
    this blog was one of the sources I had already found.
    I was searching for the "official" way to set the default workbook, without manipulating the tables programmatically (and possibly inconsistently).
    Best regards,
      Marco

  • What is the best practice for checking if CR2008 runtime is installed?

    I've created our Crystal Reports functions in a .NET exe, and it is launched from a Delphi application.
    I need to check at startup of the Delphi application whether the .NET runtime for CR 2008 is installed on the machine.
    Right now I am checking the registry at
    HKEY_LOCAL_MACHINE\Software\Sap BusinessObjects\Crystal Reports For .Net Framework 4.0\Crystal Reports
    and checking the value for CRRuntime32Version.
    This works great if the user is an admin on the machine; however, I'm assuming that due to group policies and restrictions this registry key cannot be read for some reason. My prompt continues to show up after installation because it cannot get the value of the registry key.
    So before I get winded and ramble on: what is best practice to test whether the runtime has been installed? Is there a particular section of the registry I can check? My next thought is to check for the runtime directory, but that might not be as reliable as I would hope.

    The registry and folders are about all I can think of. The problem is, you're never guaranteed that something was not installed and then uninstalled, with "stuff" like folders getting left behind (a common occurrence in my experience...). I've also seen registry entries left behind. Perhaps looking for crpe32.dll in the c:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\win32_x86 folder will be best; I've never seen crpe32.dll orphaned after an uninstall.
    Other than that, when you run the app and there is no runtime, you will get an error that you could trap, and possibly launch the installer... naaa - too goofy...
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
    Got Enhancement ideas? Try the [SAP Idea Place|https://ideas.sap.com/community/products_and_solutions/crystalreports]
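The two checks discussed above (registry value first, then crpe32.dll on disk) can be combined into one probe. A hedged Python sketch follows; the registry path and DLL location are taken from the posts above, and since the winreg module exists only on Windows, the sketch degrades to the file check elsewhere:

```python
import os

# Paths as quoted in the thread above; treat them as installation-specific.
REG_PATH = r"Software\Sap BusinessObjects\Crystal Reports For .Net Framework 4.0\Crystal Reports"
DLL_PATH = r"C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\win32_x86\crpe32.dll"

def cr_runtime_installed() -> bool:
    """Check the registry first, then fall back to looking for crpe32.dll."""
    try:
        import winreg  # only present on Windows
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, REG_PATH) as key:
            winreg.QueryValueEx(key, "CRRuntime32Version")
            return True
    except (ImportError, OSError):
        pass  # registry unreadable (e.g. group policy) or not on Windows
    return os.path.isfile(DLL_PATH)

print(cr_runtime_installed())
```

Falling through to the file check means a restricted registry (the group-policy case described in the question) no longer produces a false "not installed" prompt on its own.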

  • Best practice to create users - Hybrid scenario

    I would like to know what is the best practice for creating new users in a Hybrid scenario with all mailboxes hosted and no mailboxes on-premise.
    Currently when creating a new user we go to our local EMC and create a 'New Remote Mailbox'.  This creates the mailbox in Office365 and the local user account in one wizard.
    After the new user is created we have to manually add the user to the correct distribution groups, and security groups.
    We would like a way to create new users using a template which already has the correct distribution groups and security groups. This is how we did it prior to setting up the Hybrid scenario.
    Is this possible?  Can we create a user from a template and have the mailbox created in Office365 at the same time?  We do not wish to create the mailbox locally then migrate it.

    Thanks for the response DJStatik.  I think this tool might be useful for creating users in bulk, however we are looking for a user template in the traditional sense.
    Occasionally we have a new user and (prior to O365) would 'copy' the template user to ensure correct groups etc.
    We have tried making users from the existing template we have, but in order to create the mailbox in O365, you need to 'mail enable' the user.  We have noticed that doing this process causes issues with Autodiscover for that particular user.
    To avoid the autodiscover issue, we have found it best to create the user and the mailbox in the same wizard - hence our new process that we would like a template.

  • Could scenario 161 of SAP best practice  help to generate aut. posting?

    Dear All,
    We need to generate automatic postings for sales costs and admin costs in FI. These sums we have on our cost center groups accordingly.
    Could scenario 161 of SAP Best Practices help us?
    Best regards,
    Kamila

    Be in the monitor window where you got the issue below:
    "when I start the 'schedule process' the status is still yellow: 14:27:31 (194 from 0 records)"
    Go to Environment in the menu options, choose Transact RFC -> In the source system...
    Give the logon details, press Enter, and from there give the correct target destination as your BI server and execute.
    If you find some IDocs pending there, push them manually using F6.
    Then come back to your load and refresh.
    If it still doesn't turn green, you can manually change the status to red in the STATUS tab, go to the Processing tab, expand your processing details, right-click on the data packet which was not yet updated, and select manual update.
    It shows a busy status; when it comes out of that, refresh once again.
    rgds,

  • Request for any Best Practice document for configuring Service Parameters on CUCM 9.1.1

    Hi Team,
    Could you please send if you have any  Best Practice document for configuring Service Parameters on CUCM 9.1.1. That would really help.
    Thanks,
    Guru

    Hi
    There's no 'best practice' as such, but there are a few settings that I think should be default:
    enabling CDR, On-Hook Pickup, CFwdAll Override... but really the settings are specific to the requirements of the deployment.
    Aaron
