WCEM Best Practice Deployment in a Multi-CRM Landscape

Hi SCN,
I'm looking for advice on best practice deployment of WCEM, specifically in a multi-CRM landscape scenario.
Do best practices exist?
JR

Similar Messages

  • Best practice for encrypting data in CRM 2013 (other than the fields it already encrypts)

    I know CRM 2013 can encrypt some values by default, but if I want to store data in custom fields and then encrypt that, what's the best practice? I'm working on a project to do this through a JavaScript action that, when triggered from a form, would reference a web service to decrypt values and a plugin to encrypt on Update/Create, but I hoped there might be a simpler or more commonly recommended way to do this.
    Thanks.

    At what level are you encrypting?  CRM 2013 supports encrypted databases if you're worried about the data at rest.
    In transit, you should be using SSL to encrypt the entire exchange, not just individual values.
    You can use field-level security to hide certain fields from certain types of end users if you're worried about that.  It's even more secure than anything you could do with JS, as the data is never passed over the wire.
    Is there something those options don't solve? (A minimal sketch of application-level field encryption follows this reply.)
    The postings on this site are solely my own and do not represent or constitute Hitachi Solutions' positions, views, strategies or opinions.
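    For what it's worth, the encrypt/decrypt step itself is independent of whether you wire it into a plugin or a web service. Below is a minimal sketch of application-level field encryption, shown in Java only because that is what the other code on this page uses (a Dynamics plugin would of course be .NET). The field value and the in-memory key are placeholders; in practice the key would be loaded from a protected key store:
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class FieldCrypto {
      private static final int GCM_TAG_BITS = 128;
      private static final int IV_BYTES = 12;

      // Encrypt one field value with AES-GCM; the random IV is prepended to the ciphertext.
      public static String encrypt(String plain, SecretKey key) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
      }

      // Decrypt a value produced by encrypt(): split off the IV, then decrypt the rest.
      public static String decrypt(String encoded, SecretKey key) throws Exception {
        byte[] in = Base64.getDecoder().decode(encoded);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, in, 0, IV_BYTES));
        byte[] plain = c.doFinal(in, IV_BYTES, in.length - IV_BYTES);
        return new String(plain, StandardCharsets.UTF_8);
      }

      public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey(); // placeholder: normally retrieved from a key store
        String stored = encrypt("example custom field value", key);
        System.out.println(decrypt(stored, key));
      }
    }
    Key management is the hard part: whichever layer decrypts needs access to the key, which is why the field-level security suggestion above often ends up being the simpler answer.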

  • Best Practice: Deploying Group Policy to Users on different OUs

    Greetings, everyone! I need some advice on how to deploy some group policy objects to specific users stored in different OUs.
    Let me set the stage: I work for a large school district, and have recently taken over the district's career center. The idea behind the career center is that students from different high schools around the city come in to take classes based on their chosen career, such as radio broadcasting or auto mechanics. The AD structure is set up so that each school has its own OU. When a user (staff, student, etc.) is assigned to a school OU, they are automatically added to their school's security group (e.g. EASTHIGH-STUDENT), and when any user moves from one school to another, we have to move their AD account to that school's OU, which removes the old school's security group and applies the new school's security group.
    For the career center, since we have students coming from different buildings every day, rather than trying to find a way to move their AD account from their high school OU to the career center OU, the previous techs created generic accounts (such as tv001,
    tv002, etc.) in AD and stored them in the career center OU.  This way, teachers can assign students that particular generic account so that they can access the drives and printers from the career center, as well as access the career center network
    drives while they are at their home high school.
    Since I have moved to the career center, and apparently I have more knowledge about group policy than most of the techs in the district, the district system engineers want me to remove all of the generic accounts from the career center OU, and have students
    use their own AD accounts.  Obviously I also want to do this since the generic accounts are very confusing to me, but I'm trying to figure out the best way to do this.
    For simplicity's sake, I'm just going to start off by figuring out how to set up a group policy for mapping the career center drives. I know that the best way would be to create security groups for each career area, and that we would need to add students to those groups so that only those particular students get the GPO for the career center, but my question is: where would I link the group policies? Do I need to link them at the root of the domain so that every OU is hit?
    Just curious about this.
    Thanks!

    Don't link it to the root. Apply the drive mapping as a policy at the OU, or apply the drive mapping using Group Policy Preferences with security group targeting. I would also strongly recommend you check out my articles:
    Best Practice: Active Directory Structure Guidelines – Part 1
    Best Practice: Group Policy Design Guidelines – Part 2
    Hope it helps...

  • Best Practice for report output of CRM Notes field data

    My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM Request Notes data. Example: the business wants a report returning all Service Requests where the Notes field contains the word "pay", "payee" or "payment". As part of the report output, the business wants to freely select the output fields meant to accompany the notes data. Can anyone please advise on SAP's best practice for meeting a report requirement such as this? Is a custom ABAP application built? Does data get moved to BW for reporting (and how are notes handled)? Is data moved to a separate system?

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best practice: Deployment plan for cluster environment

    Hi All,
    I want to know the best practice for preparing and deploying a new configuration in a WLS cluster environment. How can I plan a simultaneous deployment to ALL nodes, without a single point of failure?
    Regards,
    Moh

    Hi All,
    I got the answer, as follows:
    When you deploy or redeploy an application, the deployment is initiated from the Admin Server and is started on all targets (the managed servers in the cluster) at the same time, based on the configured target (which is expected to be the cluster).
    We recommend that applications be targeted to a cluster instead of individual servers whenever a cluster configuration is available.
    So, as long as you target the application to the cluster, the Admin Server will initiate the deployment on all the servers in the cluster at the same time, so the application is in sync on all servers (see the example command after this reply).
    Hope that answers your queries. If not, please let me know what exactly you mean by synchronization.
    Regards,
    Moh
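    For illustration, a typical cluster-targeted invocation of weblogic.Deployer looks something like the following; the admin URL, credentials, application name and cluster name are placeholders:
    java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
         -deploy -name myapp -source /builds/myapp.ear -targets myCluster
    Because the target is the cluster rather than the individual managed servers, the Admin Server coordinates the rollout to every member, which is what keeps the application in sync across the nodes.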

  • Best practice deploying additional updates

    Hello, what is the best practice concerning monthly Windows updates? We are currently adding additional Windows updates to the existing single package and updating the content on the DPs. However, this seems to work with inconsistent results; DPs are not finalising content.
    Other places I have worked, we would create a separate package each month for additional updates and never had an issue. Any thoughts?
    SCCM Deployment Technician

    The documented best practices are all related to the maximum number of patches that are part of one deployment. That number should not pass 1,000.
    Remember this is a hard limit of 1000 updates per Software Update Group (not deployment package). It's quite legitimate to use a single deployment package.
    I usually create static historical Software Update Groups at a point in time (e.g. November 2014). In this case it is not possible to have a single SUG for all products (Windows 7 has over 600 updates, for example). You have to split them. I deploy these
    updates (to pilot and production) and leave the deployments in place. Then I create an ADR which creates a new SUG each month and deploy (to pilot and production).
    You can use a single deployment package for all the above.
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Best practices TopLink Mapping Workbench multi-user + CVS?

    This might be a very important issue in our decision whether or not to choose TopLink --
    How well is multi-user development and CVS supported when using the TopLink Mapping Workbench? Are there best practices regarding this use case?
    Thanks.

    We have no problem with the workbench and CVS. Only a couple of our developers are responsible for the mappings, so we haven't really run into concurrent edits. It's pure XML, so a decent merge tool with XML support should let you resolve conflicts pretty easily.

  • Rapid Deployment Solution for SAP CRM - Landscape

    Hi
    For the first time, we have applied the Rapid Deployment Solution for SAP CRM ITSDO to our new DEV system. The activation of this RDS creates 2 transports (one WORKBENCH, one CUSTOMIZING).
    I now need to apply the RDS to our QA and PRD systems - is it enough to just import the 2 transports, or do I need to do the entire activation again in both QA and PRD systems?
    I would really appreciate any assistance here.
    Thanks

    Hi Leigh,
    Just wondering how you got on with those transports. Was it sufficient to release the Solution Builder transports to the target system, or did you need to carry out manual activities in the target system?
    Also, what did you do with regard to activating the reporting client in the target system? Did the transport take care of everything?
    Any assistance would be greatly appreciated.

  • Best practice in JSTL with multi-dimensional arrays

    Hi,
    I'm working on a project and I'm trying to convert some code to the JSTL way. I have a first.jsp that calls addField(String, String) from fieldControl.jsp:
    (first.jsp)
    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <%@ include file="fieldControl.jsp" %>
    <html>
    <head><title>JSP Page</title></head>
    <body>
    <%! String[][] list1;
    %>
    <%
    list1 = new String[2][2];
    list1[0][0]="first_1";
    list1[0][1]="first_2";
    list1[1][0]="second_1";
    list1[1][1]="second_2";
    for (int i = 0; i < list1.length; i++) {
      String html = addField(list1[i][0], list1[i][1]);
      out.println(html);
    }
    %>
    </body>
    </html>
    (fieldControl.jsp)
    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <%@ page language="java" %>
    <%! public String addField(String name, String label) {
    return ...
    Now for JSTL, I have this example from "JSTL: Practical Guide for JSP Programmers" by Sue Spielman:
    <%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>
    <html>
    <head>
    <title>
    Display Results
    </title>
    </head>
    <body>
    <%-- Create a HashMap so we can add some values to it --%>
    <jsp:useBean id="hash" class="java.util.HashMap" />
    <%-- Add some values, so that we can loop over them --%>
    <%
         hash.put("apples","pie");
         hash.put("oranges","juice");
         hash.put("plums","pudding");
         hash.put("peaches","jam");
    %>
    <br>
    <c:forEach var="item" items="${hash}">
    I like to use <c:out value="${item.key}" /> to make <c:out value="${hash[item.key]}" />
    <br>
    <br>
    </c:forEach>
    </body>
    </html>
    and my problem is:
    1st - how do I use the multi-dimensional array in this way (<jsp:useBean id="hash" class="java.util.HashMap" />)? Because if I use it like this:
    <% String[][] list1;
    list1 = new String[2][2];
    list1[0][0]="first_1";
    list1[0][1]="first_1";
    list1[1][0]="second_1";
    list1[1][1]="second_2";%>
    <c:out value="${list1}" />
    <c:out value="${list1.lenght}" />
    I get nothing.
    I also tried <c:set var="test" value="<%=list1.length%>" /> and got "According to TLD or attribute directive in tag file, attribute value does not accept any expressions".
    2nd - how do I make the call to the method addField?
    Thanks for any help. I really want to do this project using JSTL. PV

    When you are using JSTL, it is best not to put data inside the JSP. Put it inside JavaBeans, then make calls to the methods in those JavaBeans.
    So you should get used to JavaBeans and their requirements.
    Access JavaBeans only through 2 types of methods: getters and setters.
    Getters are used to "get" values (properties) from the bean, and always have the form
    public Type getPropertyName()
    That is, they are public, return some value, start with the exact string "get", end with the name of the property (first letter of the property name capitalized), and have an empty argument list.
    For example, this is a good signature to get HTML from a bean:
    public String getHtml()
    Setters are used to assign values to a bean. They always have the form
    public void setPropertyName(Type value)
    That is, they are public, do not return anything, start with the string "set", end with the name of the property (first letter capitalized), and take a SINGLE parameter.
    public void setHtml(String value)
    Also, JavaBeans must have a no-argument constructor and need to be Serializable.
    The way I would approach this would be to create a JavaBean that holds the two-dimensional array for you, and has a getter that returns a list of all the formatted HTML.
    package mypack;
    import java.util.List;
    import java.util.ArrayList;
    public class DataFormatterBean implements java.io.Serializable {
      private String[][] list;
      public DataFormatterBean() {
        list = new String[2][2];
        list[0][0]="first_1";
        list[0][1]="first_2";
        list[1][0]="second_1";
        list[1][1]="second_2";
      }
      // Getter exposed to JSTL/EL as ${dataFormatter.html}
      public List getHtml() {
        List outputHtml = new ArrayList(list.length);
        for (int i = 0; i < list.length; i++) {
          String html = addField(list[i][0], list[i][1]);
          outputHtml.add(html);
        }
        return outputHtml;
      }
      private String addField(String a, String b) { /* .. do work .. */ return null; }
    }
    Then, the JSP would look like this:
    <jsp:useBean id="dataFormatter" class="mypack.DataFormatterBean"/>
    <c:forEach var="htmlString" items="${dataFormatter.html}">
      <c:out value="${htmlString}"/>
    </c:forEach>Once you start thinking in terms of having JavaBeans do your work for you JSTL is so much easier... and it gets even easier when you start to delve into custom tags.

  • Best practice deployment - Solaris source and target

    Hi,
    What is the recommended deployment guide for an ODI instance under Solaris? I have a Sybase source and an Oracle target, both of which are on Solaris. I plan to put my ODI master and work repositories on another Oracle DB on the Solaris target machine. Now, where does my agent sit, since my source and target are Solaris? I plan to administer ODI from my Windows clients, but:
    Where and how do I configure my agent so that I can schedule scenarios? It would make most sense to be able to run the agent on my target Solaris machine; is this possible? If not, do I have to have a separate Windows server that is used to run the agent and schedule the jobs?
    Thanks for any assistance,
    Brandon

    Thanks for the reply. I can't find anything in the installation guide about Solaris specifically, but it mentions following the instructions for "Installing the Java Agent on iSeries and AS/400" where the download OS is not supported.
    So it seems I just need to make some directories on the Solaris host and manually copy files into these directories, and as long as a Java SDK/runtime is there I can use the shell scripts (e.g. agentscheduler.sh) to start and stop the agent.
    So my question, I guess, is: since the OS-supported downloads are only Windows and Linux, where do I copy the files from - the Linux ones? Is it right to say that since these are Java programs I should be able to copy the Linux ones and use them under Solaris?
    I don't have the Solaris environment at hand to test this just yet... hence the questions.
    Thanks again

  • Best practice: Webdynpro in a large system landscape

    Dear Sirs,
    I have a few questions about using Web Dynpro (WD) in a large system landscape. After doing some research I understand there are a few alternatives, and I would like to get your opinions on the issue and links to any relevant documentation. I know most of my questions do not have a single answer, but I hope we can get a discussion going which will highlight the pros and cons.
    My landscape consists of a full set of ECC and portal servers (DEV, QA, P), where using WD to call BAPIs in the backend and present the results in the portal is a likely scenario.
    <b><i>Deploy the WD components on portal servers or on separate servers?</i></b>
    Would you deploy the WD components on the portal WAS, or would you advise having one (or a number of) servers dedicated to running WD?
    The way I see it, when you have a large number of developers, giving away the SDM password for the portal server (DEV) so that they can test WD applications is not advisable (or, perhaps more accurately, not wanted by Basis). So perhaps a separate WAS for WD development is advisable, and then let Basis deploy into the portal QA and PROD servers. I do not think that each developer having their own local J2EE engine for testing is likely.
    How about performance? Will either solution be preferable over the other? Will it be faster or slower to run WD on a separate WAS?
    <b><i>Transporting the WD components</i></b>
    How should one transport the components and keep them pointing to the right JCo connections (as you have different JCo connections for DEV, QA and P)? I have seen examples in threads where you opt for dynamic setting of the JCo connections through parameters. Is this the option to prefer?
    Any documentation on this issue would be highly appreciated. (Already read: System Landscape Directory, SAP System Landscape Directory on SAP Web AS Java 6.40)

    Look into using NWDI as your source code control (DTR) and transport/migration from dev through to production.  This also will handle the deployment to your dev system (check-in/activate).
    For unit testing and debugging you should be running a local version (NWDW).  This way once the code is ready to be shared with the team, you check it in (makes it visible to other team members) and activate it (deploys it to development server).
    We are currently using a separate server for WD applications rather than running them on the portal server. However, this does not allow the WD app to run in the new WD iView. So it depends on what the WD app needs to do and have access to. Of course there is always the Federated Portal Network as an option, but that is a whole other topic.
    For JCo connections, WD uses a connection name, and this connection can be set up to point to different locations depending on which server it is on. So on the development server the JCo connection can point to the dev back-end, and in prod it can point to the prod back-end. The JCo connections are not migrated, but set up in each system (see the sketch after this reply).
    I hope this helps.  There is a lot of documentation available for NWDI to get you started.  See:  http://help.sap.com/saphelp_erp2005/helpdata/en/01/9c4940d1ba6913e10000000a1550b0/frameset.htm
    -Cindy
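    To make the "same logical name, different physical destination per system" idea concrete, here is a deliberately simplified sketch. It is not the Web Dynpro or JCo API; it only illustrates the pattern of resolving a fixed logical destination name (e.g. WD_MODELDATA_DEST) against per-system configuration, which is essentially what maintaining the JCo destinations in each system achieves. The file names and property keys are hypothetical:
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Hypothetical illustration: one properties file per system (dev/qa/prod),
    // each mapping the SAME logical destination name to a DIFFERENT backend host.
    public class JcoDestinationResolver {
      private final Properties props = new Properties();

      public JcoDestinationResolver(String systemId) throws IOException {
        // e.g. jco-dev.properties, jco-qa.properties, jco-prod.properties
        try (FileInputStream in = new FileInputStream("jco-" + systemId + ".properties")) {
          props.load(in);
        }
      }

      // Application code always asks for the logical name; only the file differs per system.
      public String hostFor(String logicalDestination) {
        return props.getProperty(logicalDestination + ".ashost");
      }

      public static void main(String[] args) throws IOException {
        // On the development server this would be constructed with "dev",
        // in production with "prod"; the calling code never changes.
        JcoDestinationResolver resolver = new JcoDestinationResolver("dev");
        System.out.println(resolver.hostFor("WD_MODELDATA_DEST"));
      }
    }
    The point, as in the reply above, is that the destination definitions are maintained per system rather than transported; only the logical names travel with the code.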

  • CRM 5.2 Best Practices

    Hi All;
    I'm basically looking for best practices related to the new CRM 5.2. I need almost everything, ranging from Internet Sales best practices to activity and opportunity management.
    Does anybody have documentation I could be interested in?
    Thank you in advance.

    Thank you Mike.
    If possible, I would like to have some ready-made scenarios tailored to the new CRM 5.2... I'm preparing some demos for my company where we want to show the potential of the new UI.
    I would like something more specific, or a way to get that...
    Thank you again

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best practice deployment in a productive environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) on the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development, but also for the productive system.
    However, I found some hints in some books whose authors prefer static deployment for the production system.
    My questions:
    Could anybody provide me with links to whitepapers regarding best practice for deployment into a production system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    Thanks in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be specific in your question, we could provide an appropriate response.
    From my Basis experience, here are some of the best practices:
    1) Productive landscape should have high availability to business. For this you may setup DR or HA or both.
    2) It should have backups configured, for which restores have already been tested
    3) It should have all the monitoring setup viz application, OS and DB
    4) Productive client should not be modifiable
    5) Users in Production landscape should have appropriate authorization based on SOD. There should not be any SOD conflicts
    6) Transport to Production should be highly controlled. Any transport to Production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested and enabled before go-live
    8) Pre-go-live and post-go-live checks should have been performed on the Production system
    9) EWA should be configured at least for the Production system
    10) Production system availability using DR should have been tested
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice for configuring a DHCP server with NAC

    Hi all,
    Any idea what the best practice is for deploying DHCP on the CAS? I tried to follow the user guide to configure DHCP on the CAS, but it is still not running smoothly; users only get an IP for authentication.
    - The CCA agent is very slow to appear when the user gets a DHCP IP during authentication. Any idea?
    - How do I integrate Profiler with the NAC appliance?

    Hi Ahmed,
    You have configured your CAS to be your DHCP server. That's well and good, because you are using Real-IP mode, which supports the CAS acting as a DHCP server.
    Remember: this setting applies only to your authentication VLAN, so your client gets an IP while authenticating.
    When your client switches to the access VLAN, client traffic no longer flows through the CAS, so the CAS is no longer responsible for DHCP.
    You'll have to configure another DHCP server on the trusted side which can lease IPs to the access VLAN members.
    As you have configured OOB, your client ends up in the access VLAN and does not come into contact with the CAS, so you need the trusted-side DHCP server to give the client an IP address.
    In your scenario your access VLANs are 2022 and 2044.
    Hope this helps. Do reply after testing.
    Thank You
    Regards
    Edward

  • Best practice for extracting data to feed external DW

    We are having a healthy debate with our EDW team about extracting data from SAP.  They want to go directly against ECC tables using Informatica and my SAP team is saying this is not a best practice and could potentially be a performance drain.  We are recommending going against BW at the ODS level.  Does anyone have any recommendations or thoughts on this?

    Hi,
    As you asked for best practice, here it is in an SAP landscape:
    1. Full load or delta load data from SAP ECC to SAP BI (BW): SAP BI understands the data element structure of SAP ECC, and the delta mechanism is the continuous process of loading data from SAP ECC (the transaction system) into BI (the analytic system).
    2. You can store transaction data in DSOs (at a granular level) and in InfoCubes (at a summarized level) within SAP BI. Master data from SAP ECC can come into SAP BI separately.
    3. Within SAP BI, you SHOULD use the Open Hub service to provide SAP BI data to external systems. You must not connect an external extractor directly to DSOs and InfoCubes to fetch data into a target system. The Open Hub service is the tool that facilitates feeding data to external systems. You can have Informatica take data from the Open Hub destinations of SAP BI.
    Hope I have explained this to your satisfaction.
    Thanks,
    S
