Best practice for migrating between environments and versions?

Hi to all,
we have a full suite of custom-developed solutions in SAP BPC 7.0, SP7. We'd like to understand whether
- there is a best practice for copying these applications from one environment to another (another client), and
- there is a best practice when the client runs a newer version of SAP BPC (they plan to install 7.5, while we're still on 7.0).
Thank you very much
Daniele

Hi Daniele
I am not entirely sure what you are asking. Could you please provide additional information?
Are you looking for best practice recommendations for governance, for example change transports between DEV, QA and PRD in BPC 7.0, and the best method for that (Server Manager backup and restore, etc.)?
And
Best practice recommendations on how to upgrade to a different version of BPC, for example upgrading from BPC 7.0 to 7.5 or 10.0?
Kind Regards
Daniel

Similar Messages

  • Best Practices for SSO between NWBC and BOBJ CMC

    What are the best practices in this scenario:
    - NWBC client (using SAP ECC logon credentials)
    - BOBJ client (configured using Windows AD credentials)
I would like my users to log into NWBC but be automatically logged into the CMC for running Crystal Reports inside the NWBC GUI.
    Thanks
    Shane Kelly

Yes, we're not using the portal - only SAPGUI up till now.
But we've recently configured our DEV server to run NWBC.
Normally my users log into CMC/InfoView in a browser, but with NWBC I can bring InfoView directly into the UI.
However, it asks for a sign-on every time.
I'd like to configure SSO from NWBC to BOBJ InfoView somehow.

  • Cisco Network 2009 - Best practices for migrating previous versions of cisco unified communications manager to cucm 7.1

Does anybody have a copy of the above-referenced presentation that you could send me?
The presentation can be purchased at the following site:
http://www.scribd.com/doc/33211957/BRKVVT-2011-Best-Practices-for-Migrating-Previous-Versions-of-Cisco-Unified-Communications#archive
but I felt I'd ask one of my peeps first.
Thanks in advance.
    Dennis

    Hi Dennis,
    Well..let's give this a try
    Cheers!
    Rob

  • Best practice for migrating IDOCs?

    Hi,
    I need to migrate some IDOC's to another system for 'historical reference'.
    However, I don't want to move them using the regular setup as I don't want the inbound processing to be triggered.
    The data that was created in the original system by the processed IDOC's will be migrated to the new system using migration workbench. I only need to migrate the IDOC's as-is due to legal requirements.
    What is the best way to do this? I can see three solutions:
    A) Download IDOC table contents to a local file and upload them in the new system. Quick and dirty approach, but it might also be a bit risky.
    B) Use LSMW. However, I'm not sure whether this is feasible for IDOC's.
C) Using ALE and setting up a custom partner profile where inbound processing only writes the IDOC's to the database. Send the IDOC's from legacy to the new system. Using standard functionality in this way seems to me to be the best solution, but I need to make sure that the IDOC's, once migrated, get the same status as they had in the old system.
    Any help/input will be appreciated
    Regards
    Karl Johan
    PS. For anyone interested in the business case: Within EU the utility market was deregulated a few years ago, so that any customer can buy electricity from any supplier. When a customer switches supplier this is handled via EDI, in SAP using ALE and IDOC's. I'm working on a merger between two utility companies and for legal reasons we need to move the IDOC's. Any other data is migrated using migration workbench for IS-U.


  • Best practice for migration to new hardware

    Hi,
We are commissioning new hardware for our web server. Our current web server is version 6.1 SP4, and for the new server we've decided to stay with 6.1 but install SP7.
    Is there a best practice for migrating content from one physical server to another?
    What configuration files should I watch out for?
    Hopefully the jump from SP4 to SP7 won't cause too many problems.
    Thanks,
    John

Unfortunately, there is no quick solution for migrating from one server to another. You will need to carefully reconstruct (see the sketch after this list for one way to compare the two config directories):
- ACL rules
- server hostname configurations
- any certificates that were created on the old machine
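As mentioned above, a minimal sketch of comparing the old and new configuration directories so nothing gets missed; the paths are placeholders and the real layout depends on your install (e.g. .../https-<hostname>/config):

```python
import filecmp

# Hypothetical config directories for the old and new web server instances.
OLD_CONF = "/opt/old-server/https-www/config"
NEW_CONF = "/opt/new-server/https-www/config"

def report(dcmp: filecmp.dircmp) -> None:
    """Recursively list files that exist only on the old server or that differ."""
    for name in dcmp.left_only:
        print(f"only on old server: {dcmp.left}/{name}")
    for name in dcmp.diff_files:
        print(f"differs:            {dcmp.left}/{name}")
    for sub in dcmp.subdirs.values():
        report(sub)

if __name__ == "__main__":
    report(filecmp.dircmp(OLD_CONF, NEW_CONF))
```

Anything the report flags (ACL files, hostnames, certificate references) is something you still have to recreate by hand on the new box.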

  • Best Practices for migrating .rpd

    Hi,
Can someone please share the industry best practices for migrating the .rpd file from a source (QA) to a destination (Prod) environment? Our Oracle BI server runs on a Unix box, and currently what we do to migrate is save copies of the QA and Prod .rpds on our local machine and do a merge. Once merged, we FTP the resulting .rpd to the destination (Prod) machine. This approach does not seem reliable, and we are looking for a better-established approach for .rpd migration. Kindly point me to where I can find the documentation. Any help in this regard would be highly appreciated.

    I don't think there is an industry best practice as such. What applies for one customer doesn't necessarily apply at another site.
    Have a read of this from Mark Rittman, it should give you some good ideas:
    http://www.rittmanmead.com/2009/11/obiee-software-configuration-management-part-2-subsequent-deployments-from-dev-to-prod/
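For what it's worth, even the "copy the merged .rpd to Prod" step can be scripted so it is repeatable and leaves a backup behind. A minimal sketch, assuming the merged repository sits locally and Prod is reachable over SSH; the host, paths and repository name are made up for illustration:

```python
import datetime
import subprocess

# Hypothetical locations - replace with your real paths/hosts.
MERGED_RPD = "/home/obiee/merged/paint.rpd"
PROD_HOST = "prod-bi.example.com"
PROD_RPD_DIR = "/u01/obiee/server/Repository"

def deploy_rpd() -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    # Keep a timestamped backup of the current Prod repository before overwriting it.
    subprocess.run(
        ["ssh", PROD_HOST,
         f"cp {PROD_RPD_DIR}/paint.rpd {PROD_RPD_DIR}/paint_{stamp}.rpd"],
        check=True,
    )
    # Push the merged repository. The BI Server still has to be restarted
    # afterwards by whatever mechanism your OBIEE version uses.
    subprocess.run(["scp", MERGED_RPD, f"{PROD_HOST}:{PROD_RPD_DIR}/paint.rpd"], check=True)

if __name__ == "__main__":
    deploy_rpd()
```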
    Paul

  • Need Best Practice for Migrating from Solaris to Linux

    Hi Team,
We are migrating our data center from Solaris to Linux. Our EBS 11i database, 10g (10.2.0.5), is 6 TB. Please let us know the best practice for migrating our EBS 11.5.10.2 from Solaris to Linux RHEL 5.
EBS version: 11.5.10.2
DB version: 10.2.0.5
We have checked the certifications on Oracle Support:
Oracle EBS 11.5.10.2 is not certified on Linux x86-64 RHEL 5.
Oracle EBS 11.5.10.2 is certified on Linux x86 RHEL 5.
So we require Database 10g (10.2.0.5) on Linux x86-64 RHEL 5 and the EBS application tier on Linux x86 RHEL 5. Please let us know if you need any details.
    Thank You.

You can use transportable tablespaces for the database tier node; see these posts (and the rough sketch after the links):
    https://blogs.oracle.com/stevenChan/entry/10gr2_xtts_ebs11i
    https://blogs.oracle.com/stevenChan/entry/call_for_xtts_eap_participants
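Very roughly, the XTTS flow those posts describe is: make the application tablespaces READ ONLY, run a Data Pump metadata export with TRANSPORT_TABLESPACES, copy the datafiles across (if the source is Solaris SPARC, which is big-endian, they also need an RMAN CONVERT for the endian change to Linux x86-64), then plug them in on the target with impdp and TRANSPORT_DATAFILES. A minimal sketch that just drives the export step; the connect string, directory object and tablespace list are assumptions, and the supported procedure is the one documented in the posts above:

```python
import subprocess

# Assumed names - adjust to your environment.
TABLESPACES = "APPS_TS_TX_DATA,APPS_TS_TX_IDX"   # tablespace set to transport
CONNECT = "system@PRODDB"                         # expdp will prompt for the password
DUMPFILE = "xtts_apps.dmp"

def export_tablespace_metadata() -> None:
    """Run the Data Pump transportable-tablespace metadata export.

    The tablespaces must already be READ ONLY, e.g.
    ALTER TABLESPACE apps_ts_tx_data READ ONLY;
    """
    subprocess.run(
        [
            "expdp", CONNECT,
            "directory=DATA_PUMP_DIR",
            f"dumpfile={DUMPFILE}",
            f"transport_tablespaces={TABLESPACES}",
            "transport_full_check=y",
            "logfile=xtts_exp.log",
        ],
        check=True,
    )

if __name__ == "__main__":
    export_tablespace_metadata()
```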
    For the application tier node, please see:
    https://blogs.oracle.com/stevenChan/entry/migrate_ebs_apptiers_linux
    https://blogs.oracle.com/stevenChan/entry/migrating_oracle_applications_to_new_platforms
    Thanks,
    Hussein

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
We want to be able to develop in the test environment, but since it is an RMAN-duplicated database, every night the runtime-only APEX will overlay the development install, and the production versions of the apps will overlay the development versions. However, we still want up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else), that will leave the existing APEX environment intact? If such is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 5 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state if you have to move the application all at once? The point is that you absolutely are going to need to version control at the page level, not at the application level.
Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why that is; maybe one of the developers would care to comment. Obviously, if you want to automate, you will have to accept this caveat.
As to which backend version control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular, but if it doesn't support this directly you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository and import into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, QA, integration, staging, production.
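For illustration, a minimal sketch of the nightly "commit the exports into a development repository" idea, assuming the application export files (f100.sql and friends) already land in a directory; how you produce them (GUI, PL/SQL, or utility hacks) is a separate question, and the path shown is a placeholder:

```python
import datetime
import subprocess
from pathlib import Path

# Hypothetical directory where the nightly application export files land.
EXPORT_DIR = Path("/u01/apex_exports/dev")

def commit_nightly_exports() -> None:
    """Commit whatever export files are present into the development Git repository."""
    stamp = datetime.date.today().isoformat()
    subprocess.run(["git", "-C", str(EXPORT_DIR), "add", "-A"], check=True)
    # If nothing changed, git commit exits non-zero; with check=False we simply
    # skip recording an empty snapshot rather than failing the job.
    subprocess.run(
        ["git", "-C", str(EXPORT_DIR), "commit", "-m", f"Nightly APEX export {stamp}"],
        check=False,
    )

if __name__ == "__main__":
    commit_nightly_exports()
```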
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • Best Practice for Migration of BO  from one server to another

    Hi All,
I would like to know the best practice for migrating BO from one server to another.
I have installed BO XI R2 on my server.
    Thanks,
    Anendu Bothra
    Edited by: Anendu Bothra on Mar 5, 2009 10:24 AM

You need to copy your input and output file stores from the old server to the new server. By default these are located in the <Business Objects install path>\FileStore directory.
Then migrate the CMS database:
1.) Stop the CMS. Right-click the CMS, click the Configuration tab, and then click Specify.
2.) Choose Copy, then click OK.
3.) Choose the version information for the source CMS database.
4.) Select the database type for the source CMS database, and then specify its database information (including host name, user name, and password).
5.) Select the database type for the destination CMS database, and then specify its database information (including host name, user name, and password).
6.) When the CMS database has finished copying, click OK.
7.) Once this process has completed, start the CMS and click Update Objects, located at the top of the CCM.
    I'd advise taking full backups beforehand.
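If you want to script that first file store copy, a minimal sketch, assuming both FileStore directories are reachable from the machine running it; the UNC paths are placeholders:

```python
import shutil
from pathlib import Path

# Placeholder paths - point these at the real FileStore locations on each server.
OLD_FILESTORE = Path(r"\\oldserver\BusinessObjects\FileStore")
NEW_FILESTORE = Path(r"\\newserver\BusinessObjects\FileStore")

def copy_filestore() -> None:
    """Copy the Input and Output file stores from the old server to the new one."""
    for store in ("Input", "Output"):
        src = OLD_FILESTORE / store
        dst = NEW_FILESTORE / store
        # dirs_exist_ok lets the copy merge into a directory the installer already created.
        shutil.copytree(src, dst, dirs_exist_ok=True)
        print(f"copied {src} -> {dst}")

if __name__ == "__main__":
    copy_filestore()
```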

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure in an enterprise environment. I have questions around the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Doing an NCS stop does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!

Amihan_Zerrudo wrote:
1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc. and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
You should look to the functional requirements instead of the cost. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it only needs to be request scoped (e.g. single page form data), then keep it request scoped.
2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it be heavy on my server resources? Right now I am using the Sun Glassfish Server.
It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
3.) Can you suggest best practices for memory management given the architecture I described above?
Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should let the hardware depend on the application requirements, not the application depend on the hardware specs.
4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that or will I have to purchase a powerful server?
Glassfish is just application server software; it is not server hardware. Your concerns are rather hardware related.

  • What is the best practice for creating master pages and styles with translated text?

    I format translated text all the time for my company. I want to create a set of master pages and styles for each language and then import those styles into future translated documents. That way, the formatting can be done quickly and easily.
    What are the best practices for doing this? As a company this has been tried in the past, but without success. I'd like to know what other people are doing in this regard.
    Thank you!

I create a master template that is usually void of content, except that I define as many of the paragraph styles as I believe can or will be used, with examples of their use in the body of the document: a style guide for that client. When beginning a new document for that client, I import those styles from the Paragraph Styles panel.
The exception is when, in a rush, I begin documentation before the template exists. Even then, in the new work I still pull in the defined paragraph and/or object styles via their panels.
There are times I need new styles. If they have broader applicability than a one-off instance or publication, I open the style template for that client, import the style(s) from the publication containing them, and create example paragraphs and usage instructions.
    Take care, Mike

  • Tips n Tricks/Best Practices for integrating iPhone, iPad and MacBook Pro

My wife just purchased an iPhone, iPad and MacBook Pro for her non-profit consulting business, and I was wondering whether a tips-and-tricks or best-practices guide for efficiently and productively integrating these devices exists?

    http://www.apple.com/icloud/

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum, at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
As for data - I agree with what others have said. Further, ALL data in a dev environment is assumed to be dev data and not production data. In every environment I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for restoring the system to a valid working state.
The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.
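To make the "scripts from ground zero" requirement concrete: one low-tech way to produce the insert portion is to generate it from the seed data itself, so the same reviewed script can be replayed in TEST, QA and PROD. A minimal sketch, assuming the seed data is exported to CSV; the table and file names are placeholders, and it treats every column as a string (real types need proper handling):

```python
import csv

# Placeholder table and seed file - substitute your real names.
TABLE = "LOOKUP_STATUS_CODES"
SEED_CSV = "lookup_status_codes.csv"   # header row = column names

def generate_insert_script(out_path: str = "insert_lookup_status_codes.sql") -> None:
    """Turn a CSV of seed rows into an ordered, reviewable SQL insert script."""
    with open(SEED_CSV, newline="") as src, open(out_path, "w") as out:
        reader = csv.DictReader(src)
        columns = reader.fieldnames or []
        for row in reader:
            values = ", ".join("'" + row[c].replace("'", "''") + "'" for c in columns)
            out.write(f"INSERT INTO {TABLE} ({', '.join(columns)}) VALUES ({values});\n")
        out.write("COMMIT;\n")

if __name__ == "__main__":
    generate_insert_script()
```

The generated file, together with the CREATE TABLE DDL from version control, becomes the deployment artifact the DBAs review and run.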

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
For best practice, I need to know the recommendation or guideline concerning these two database parameters.
I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 seconds (30 min).
Maybe someone can guide me on these two parameters...
    Cheers

    Dear unsolaris,
First of all, if you want to track full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and completion times in the alert log.
A full checkpoint is triggered when a log switch happens, and the checkpoint position in the control file is written to the datafile headers. For just a tiny amount of time the database can be consistent even though it is open and in read/write mode.
The ARCH_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition of that parameter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
If you want to set this parameter, Oracle recommends 1800 seconds, as you have said. This can vary from database to database, and it is better to verify it by experimenting in your own environment. For example:
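A minimal sketch of setting both parameters with the python-oracledb driver; the connection details are placeholders, the account needs the ALTER SYSTEM privilege, and note that the actual initialization parameter is spelled ARCHIVE_LAG_TARGET:

```python
import oracledb

# Placeholder connection details - use an account with the ALTER SYSTEM privilege.
conn = oracledb.connect(user="system", password="change_me", dsn="dbhost/ORCLPDB1")

with conn.cursor() as cur:
    # Write checkpoint begin/end information to the alert log.
    cur.execute("ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH")
    # Force a log switch (and hence an archive) at least every 1800 seconds.
    cur.execute("ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH")

conn.close()
```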
    Regards.
    Ogan

  • Best Practice - Outer Join between Fact and Dim table

    Hi Gurus,
    Need some advice on the below scenario
I have an OOTB subject area and we have around 50-60 reports based on it. In the related subject area, the Fact and Dim1 tables are joined with an inner join.
Now I have a scenario for one report where an outer join has to be implemented between Fact and Dim1. I am against changing the OOTB subject area join, as the outer join would impact the performance of the other 50-60 reports.
Can anyone provide input on the best way to handle this scenario?
    Thanks

Ok. I tried this:
Driving table: Fact, left outer join - didn't work.
Driving table: Dimension D, left outer join - didn't work either.
In either case, I see the physical query as D left outer join Fact F, and it omits the rows.
And then I tried this:
Driving table: Fact, right outer join.
Now, this is giving me an error:
[Sybase][ODBC Driver] Internal Error. [nQSError: 16001] ODBC error state: 00000 code: 30128 message: [Sybase][ODBC Driver] Data overflow. Increase specified column size or buffer size. [nQSError: 16011] ODBC error occurred while executing SQLExtendedFetch to retrieve the results of a SQL statement. (HY000)
    I checked all columns, everything matched with database table type and size.
    I am pulling Fact.account number, Dimension.account name, Fact.Measures. I am seeing this error each time I pull Fact.Account number.
