Composite Release Roles Best Practice

I have a question regarding best practice for utilizing composite release roles.
We had an issue recently where Purchasing Doc Type (M_BEST_BSA - BSART), Release Code (M_EINK_FRG - FRGCO) and Release Group (M_EINK_FRG - FRGGR), which are maintained at the task-role level, were overwritten with blanks when derived from the template role.  The template role has these three fields maintained as blanks.  All other data is consistent from the template role to the task role, with the exception of the organizational levels (i.e. Plant, Purchasing Org, Purchasing Group).  We then have a variety of task roles that make up the composite.
Would it make sense to maintain these three fields as org-level data in the task role (e.g., by promoting them to organizational levels with report PFCG_ORGFIELD_CREATE)?
What are our other options?
Thanks for your assistance.

We do have DEV, QA, PRD, Training and Sandbox environments.  Our standard practice is to develop in DEV (200), roll out to the other DEV clients, and then transport to QA for UAT.  I have come across occasions where the roles are not consistent across all DEV clients, and if development work was completed on a role in DEV that was not consistent with the production role, then we would be fubar.  This did occur a few weeks back; however, it was caught in time.
The chain of events went as follows:
1. Request submitted to remove a plant value
2. Dev work completed and moved to QA.  Based on screenshots of UAT, we can see that the three fields were yellow (blank values) at this point.
3. The end user did not recognize the caution flags, as they were only looking at the org values to ensure the plant was removed.
4. The developer failed to highlight the unmaintained fields.
5. Roles moved to production, which brought the purchasing teams to a halt.
This whole thing is very confusing.
My only guess was that the development work was completed on an old role in the wrong dev client.  But then this opens up another issue: why was there an old role, when standard practice is to move new roles to all dev clients once completed?

Similar Messages

  • Failover cluster File Server role best practices

    We recently implemented a Hyper-V Server Core 2012 R2 cluster with the sole purpose of running our server environment.  I started with our file servers and decided to create multiple file servers and put them in a cluster for high
    availability.  So now I have a cluster of VMs, which I have since learned is called a guest cluster, and I added the File Server role to this cluster.  It then struck me that I could have just as easily created the File Server role under my Hyper-V
    Server cluster and removed this extra virtual layer.
    I'm reaching out to this community to see if there are any best practices on using the File Server role.  Are there any benefits to having a guest cluster provide file shares? Or am I making things overly complicated for no reason?
    Just to be clear, I'm just trying to make a simple Windows file server with folder shares that have security enabled on them for users to access internally. I'm using Hyper-V Server Core 2012 R2 on my physical servers, and right now I have Windows
    Server Standard 2012 R2 on the VMs in the guest cluster.
    Thanks for any information you can provide.

    Hi,
    Generally, when Hyper-V VMs are available, we install all roles into virtual machines, as that is easier for management purposes.
    In your situation the host system is Server Core, so it seems that managing file shares from a system with a GUI is much better.
    I cannot find an article specifically regarding "best practices for setting up a failover cluster". Here are two articles covering building a guest cluster (which you have already done) and the steps to create a file server cluster.
    Hyper-V Guest Clustering Step-by-Step Guide
    http://blogs.technet.com/b/mghazai/archive/2009/12/12/hyper-v-guest-clustering-step-by-step-guide.aspx
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    https://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Site System Roles - Best Practices

    Hi all -
    I was wondering if there were any best practice recommendations for how to configure Site System Roles. We had a vendor come onsite and set up our environment, and, without going into a lot of detail on why, I wasn't able to work with the vendor. I am trying
    to understand, after the fact, why they did certain things.
    For scoping purposes, we have about 12,000 clients, and this is how our environment was set up:
    SERVERA - Site Server, Management Point
    SERVERB - Management Point, Software Update Point
    SERVERC - Asset Intelligence Synchronization Point, Application Catalog Web Service Point, Application Catalog Website Point, Fallback Status Point, Software Update Point
    SERVERD - Distribution Point (we will add more DPs later)
    SERVERE - Distribution Point (we will add more DPs later)
    SERVERF - Reporting Services Point
    The rest is dedicated to our SQL cluster.
    I was wondering if this seems like a good setup, and had a few specific questions:
    Our Site Server is also a Management Point. We have a second Management Point as well, but I was curious whether that is best practice.
    Should our Fallback Status Point be a Distribution Point?
    I really appreciate any help on this.

    The FSP role has nothing to do with the 'Allow fallback source location for content' on the DP.
    http://technet.microsoft.com/en-us/library/gg681976.aspx
    http://blogs.technet.com/b/cmpfekevin/archive/2013/03/05/what-is-fallback-and-what-does-it-mean.aspx
    Benoit Lecours | Blog: System Center Dudes

  • Modifying SAP standard roles - best practice

    Hi,
    Is there a best-practice how-to guide for configuring SAP Business Package roles for client use?  I know I shouldn't change the content delivered by SAP, but I'm not quite sure what I should delta-link copy into the client namespace.
    I am implementing MSS.  Do I just delta-link copy the Manager role into the client namespace, or should I make a delta-link copy of the My Staff workset, then make changes to the workset and assign it to a completely new ClientManager role?
    I have the TransportEP6Content how-to guide, but it doesn't say explicitly what best practice is.  This doc references 'HowTo Use Business Packages in Enterprise Portal 6.0', but it isn't where it says it is on the Service Marketplace.
    TIA,
    J

    Hi,
      'How to Use Business Packages in Enterprise Portal 6.0' is available at this link:
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check it out for the best practices.
    Regards,
    Harini S

  • Qs on Best Practices

    Hi All,
    For every new SAP version, SAP releases industry Best Practices for that version of SAP.
    I have the following points:
    1. Can a customer take the Best Practices solution and implement it as-is?  If so, how much time will it take?
    2. How are the Best Practices documents upgraded: technically or functionally?
      I have heard that many of the business scenario versions have the same text and no new features. Is that correct, or am I wrong?
    3. Can we expect any support from SAP on Best Practices?  Is it free support? What are the contract agreements?
    4. Are there any reference customers who implemented Best Practices as-is?
    5. What is the success rate?
    Regards,

    BTW, it is not clear WHAT you want to migrate from: another DB, or simply another Oracle version?
    OK, anyway, speaking about data migration strategy, there is at least one valuable article:
    http://www.dulcian.com/papers/The%20Complete%20Data%20Migration%20Methodology.html
    Speaking about technical execution, you can look at my article "Data migration from old to new application: an experience" at http://www.gplivna.eu/papers/legacy_app_migration.htm
    Neither of them focuses on data warehouses, though.
    Gints Plivna
    http://www.gplivna.eu

  • New whitepaper: Oracle9iAS Best Practices

    Check out the newly published whitepaper on Oracle9iAS Release 2 best practices:
    http://otn.oracle.com/products/ias/ohs/collateral/r2/bp-core-v2.PDF
    Ashesh Parekh
    Oracle9iAS Product Management

    Carl,
    There is really no set number or best practice for the number of segments. It is driven by the needs of your organization based upon reporting requirements, collective bargaining agreements, the degree of organizational change occurring within the enterprise, etc. I do believe that fewer segments usually makes more sense from a maintenance and ease-of-use perspective. Jobs are available across the Business Group, unlike positions, which are subordinate and specific to jobs and organizations...so you'll be maintaining fewer of them (hopefully).
    Regards,
    Greg

  • SOA 11g  Composite Deployment across multiple Instances: Best Practice

    Hi,
    We have a requirement where we need to deploy the composite across multiple instances (DEV, TEST, Production) without JDev.
    We are using SOA 11.1.3.3 (cluster) and Linux OS.
    Please suggest the best practice for deploying the SOA composite.
    Thanks,
    AB

    Why are there different ways to deploy the composite in different environments? As an environment's business importance increases, developer access becomes more restricted, and hence there are several ways of deploying. If you are developing an application, you would not want to export a SAR and then log in to the EM console to deploy it. For a developer it is always more convenient to use the IDE itself for deployment, and hence JDev is preferred for deployment to Dev instances.
    Once development finishes, the developer checks the artifacts into a version control system; if you want to deploy to a test instance, you have to check out the stable version, compile it, and package it, and only then can you deploy. Hence, for test instances, ANT is the preferable mode of deployment (a sample invocation is sketched after this reply). Remember that developers may not be able to connect to the Test environment directly, and hence deployment from JDev is not possible.
    Once a configuration has been tested in the Test environment, its SAR should be checked into the version control system, so that recompiling and repackaging is not required. This also ensures that any artifact which has not been tested on the Test instance does not go to higher instances (like UAT/PROD). In Pre-Prod/UAT/Prod, you may simply access the EM console and deploy the SAR which was already tested in the Test instance. Remember that there will be very limited access to such critical environments, and hence the role-based access of EM makes it easy to control who deploys. Moreover, it is a more secure mode of deployment, because only those users with the appropriate privilege will be able to deploy, and it is also easier to track the changes.
    What are the pros and cons if we use only one way to deploy the composite across the instances? As such, there are no major pros and cons. You may use EM for all environments, but it may be a little inconvenient for developers/deployers in the test environment. You may also use ANT for all the environments, but that is not suggested unless you have a very good and secure process in place for deployments.
    Regards,
    Anuj
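
    For reference, the ANT-based deployment mentioned above is typically driven by the ant-sca-deploy.xml script that ships with SOA Suite 11g. A minimal sketch, assuming the default script location and an illustrative host, port, SAR name and user (verify all of these against your own installation):

        ant -f $ORACLE_HOME/bin/ant-sca-deploy.xml deploy \
            -DserverURL=http://soahost:8001 \
            -DsarLocation=sca_MyComposite_rev1.0.jar \
            -Doverwrite=true -Duser=weblogic

    The same tested SAR can then be promoted to higher environments through the EM console, as described in the reply.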

  • Best Practice of using ERM (Role Expert) in Landscape

    Hello,
    Can anyone tell me the best practice (choice) for using ERM in the SAP landscape?
    1. Creating a role in DEV system using ERM and using SAP standard transport process to transport role to QAS and PRD systems.
    OR
    2. Creating a role in all systems in the landscape (DEV, QAS and PRD).
    Please share if you have any best practice implementation scenarios.
    Appreciate for the help.
    Thanks
    Harry.

    Harry,
       The best practice is to follow Option 1. You should never create a role directly in the Prod system. This is what SAP recommends as well.
    Alpesh

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files; this file is the one that is then installed (see the sketch after this message). The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? That is, if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of failure of the installation?
    Any links to useful resources, as well as comments/suggestions, will be appreciated.
    TIA
    Regards,
    Aasif
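
    For reference, the 'pre-generation' step described in the question is a command-line invocation along these lines. A sketch only: the jar names are illustrative, and ejbc operates on an EJB jar rather than the whole .ear (later WebLogic releases, 8.1 and up, replaced ejbc with weblogic.appc, which can process a complete .ear):

        java weblogic.ejbc myapp-ejb.jar Deployable_myapp-ejb.jar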

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is not an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them on both the source and destination instances. This generates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using an INSERT statement or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align to different business requirements.

  • Best Practice For Working on Composite In Team

    Hello,
    I would like to know the best practice for working on a single composite by multiple members of a team.
    We have a core services module wherein a single composite contains many services. So, to complete it in time, we would like many members to work on it simultaneously.
    In such scenarios, if someone adds a new adapter or some other service, composite.xml changes.
    Saving it would override other members' changes. Also, it is not possible to apply a lock simultaneously on the same file through some version control mechanism.
    Please let us know what should be the best practice in such scenarios.
    Thanks-
    Ashish

    You can very well use version control software with JDev. You may refer to -
    http://www.oracle.com/technetwork/articles/soa/jimerson-config-soa-355383.html
    I think that without a version control mechanism (like Subversion) it won't be easy to work in a multi-developer environment. If you really don't have a source and version control mechanism, then manual merging will be required, which can be error-prone and time- and effort-consuming.
    Regards,
    Anuj

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices to reduce downtime for database releases on 10.2.0.3? Which DB changes can be rolling and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A, other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you point all the app servers at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between databases.
    This lets you upgrade with zero downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes made on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well, since you need to practice this sort of thing in lower environments.
    Justin

  • Best Practice for ESS/ MSS role customization

    Hi ,
    I would like to know the best practice for role customization for the ESS/MSS business packages. For example, if my company does not want to use some of the worksets, like Working Time, Travel, etc., what is the best practice for this scenario?
    anEEZ

    Hi Aneez,
    This is the link for complete best practices on NetWeaver
    http://help.sap.com/bp_epv260/EP_EN/index.htm
    Browse the Busines scenarios, you will find what you are looking for.
    Now, these ones are specific to ESS and MSS:
    http://help.sap.com/bp_epv260/EP_EN/html/EP/N26_ESS.htm
    http://help.sap.com/bp_epv260/EP_EN/html/EP/N27_MSS.htm
    Hope this helps,
    Kumar
    P.S Reward Points for useful answers.

  • Best Practice for BEX Query "PUBLISH to ROLE"?

    Hello.
    We are trying to determine the best practice for publishing BEX queries/views/workbooks to ROLEs. 
    To be clear about the process I am referring to: in the BEX Query Designer, there is an option QUERY > PUBLISH > TO ROLE.  This function updates the user menu of the selected security role with, essentially, a shortcut to the BEX query.  It is also possible to save views/workbooks to a role from the BEX Analyzer menu.  We have found role menus to be a good way to organize BEX queries/views/workbooks for our users.
    Our dilemma is whether to publish to the role in our DEV system and transport to PROD, or whether it is OK to publish to the role directly in the PROD system.
    Publishing in DEV is not always possible, as we have objects in PROD that do not exist in DEV. For example, we allow power users to create queries directly in PROD.  We also allow views and workbooks to be created directly in PROD.  It would not be possible to publish these types of objects from DEV.
    Publishing in PROD eliminates the issues above, but causes concerns for our security team, as we would have to maintain these special roles directly in PROD.
    Would appreciate any ideas, suggestions, examples of how others are handling this BEX publish-to-role process.
    Thank you.
    -Joel

    Hi Joel,
    Again, as per the Best Practices, nothing is to be created in PRD; even when we create objects in PRD for power users, they are assumed to be temporary and can be deleted at any time.
    So if there are already deviations, then you can go for deviations in this case as well, but it won't be the Best Practice. Also, in a few cases we had workbooks created in PRD because they could not be created in DEV for various reasons; in such cases we did not consider the Best Practice, and we had raised an OSS message on this as well.
    In our project, we have done everything in DEV and transported to PRD; in cases where there were any very minor changes at the query level, we made them in PRD and immediately replicated the same in DEV so that they are in sync.
    rgds
    SVU

  • Best practices for adding components in a composite custom component

    Hello,
    I am developing a custom, composite JSF component and need to dynamically add child components in my renderer class. I've noticed that people add components to a custom component in many different ways. I'd like to follow JSF best practices when developing this component. Of the following approaches, which would you recommend? Or is there yet another approach I should be using?
    1) in the encodeBegin method of my renderer class, create a new component, add it to the component tree, and let the page life cycle take care of the rendering:
    HtmlDataList dimensionStateGroupDataList = (HtmlDataList) app.createComponent( HtmlDataList.COMPONENT_TYPE );
    //set properties on dimensionStateGroupDataList
    component.getChildren().add(dimensionStateGroupDataList);
    2) in either the encodeBegin or encodeEnd method, create a component and encode it:
    HtmlDataList dimensionStateGroupDataList = (HtmlDataList) app.createComponent( HtmlDataList.COMPONENT_TYPE );
    //set properties on dimensionStateGroupDataList
    dimensionStateGroupDataList.encodeBegin();
    dimensionStateGroupDataList.encodeEnd();
    Both of these methods are functional, and I prefer the first (why encode children if you don't have to?), but I am interested in other people's takes on how this should be done.
    Thanks for your help.
    -Christopher

    My bad, sorry, I wasn't concentrating. I'm afraid I have no experience with portlets, but I would have thought that you can mimic the outputLinkEx in your renderer by encoding your own links?
    If you were to bind a backing bean variable to an outputLinkEx, what would it be? Not understanding portlets, or knowing what an outputLinkEx is, may be hindering me, but you should be able to create an instance of it in code (this example uses HtmlOutputLink; you would need to know which component to use):
    HtmlOutputLink hol = new HtmlOutputLink();
    hol.set...
    Then set any attributes on it, and explicitly call its encodeBegin and encodeEnd functions. Is that way off the mark?
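
    For readers weighing the two approaches from the question, here is a minimal, self-contained sketch of the first one (adding the child and letting the page life cycle render it). The renderer class name and component id are illustrative, and it assumes JSF 1.2-era APIs to match the snippets above:

    import java.io.IOException;
    import javax.faces.application.Application;
    import javax.faces.component.UIComponent;
    import javax.faces.component.html.HtmlOutputText;
    import javax.faces.context.FacesContext;
    import javax.faces.render.Renderer;

    // Hypothetical renderer illustrating approach 1: build the subtree once,
    // then let the normal page life cycle encode the children.
    public class CompositeExampleRenderer extends Renderer {
        @Override
        public void encodeBegin(FacesContext context, UIComponent component)
                throws IOException {
            // Guard so the child is only added on the first rendering,
            // not appended again on every postback.
            if (component.getChildCount() == 0) {
                Application app = context.getApplication();
                HtmlOutputText child = (HtmlOutputText)
                        app.createComponent(HtmlOutputText.COMPONENT_TYPE);
                child.setId("generatedChild");
                child.setValue("generated content");
                component.getChildren().add(child);
            }
        }

        @Override
        public boolean getRendersChildren() {
            // false: the framework iterates and encodes the children itself,
            // so no explicit encodeBegin()/encodeEnd() calls are needed.
            return false;
        }
    }

    The guard matters because renderers run on every request; without it, approach 1 would keep appending duplicate children to the tree.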

  • Best practice of 11G release 2 Grid & RAC installation on Solaris 10

    Hi Experts,
    Please share your 11g Release 2 Grid Infrastructure and RAC installation experience on Sun SPARC.
    I would appreciate it if you could provide documentation which gives complete information from server setup to database setup (other than the Oracle documentation).
    Also, please let me know which is the best storage option (NFS, ASM, ...) and the pros and cons.
    Regards,
    Rasin M

    Hi,
    "Appreciate if you could provide documentation which gives complete information from server setup to database setup (other than the Oracle documentation)"
    Check this in MOS:
    RAC Assurance Support Team: RAC Starter Kit and Best Practices (Solaris)
    https://support.oracle.com/CSP/main/article?cmd=show&id=811280.1&type=NOT
    Regards,
    Levi Pereira
    http://levipereira.wordpress.com
