Best practice on test region setup...

We want to point our dev environment at a test database and our prod environment at our production database, the reason being that we transform our data and build our star schema ourselves. So for any changes we make when bringing in new data, we want to be able to test them using our test repository. Any best practices on how to do this? We have noticed issues when we change the physical layer tables of the RPD from one database to the other; basically we have crashed the system doing this during testing. If we have one dedicated repository for test pointing to the test DB, and one for prod pointing to our prod DB, what is the easiest and most foolproof way to copy RPD changes from one environment to the other? If any of you have done this, please drop in a line on how you accomplished it.
Thanks much!
Arch

Right now we are doing entire RPD copies, since both RPDs point to the production database...
..but the problem is that our dev environment's physical layer uses schema A while the prod environment uses schema B. We want to make changes to the underlying table data in schema A, test them using the RPD pointing to that schema, and then, once everything is OK, move the changes to the production database and make the changes in the prod repository. So I really just want to merge the business model and presentation layers. We will try the Oracle-suggested approach, but from what I have been reading, merging is error-prone, and we did not have much luck the one time we tried it.
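For what it's worth, the usual way to avoid full-RPD copies on OBIEE 11g is a patch-based (three-way) merge: compare the modified test RPD against the baseline it was branched from, then apply the resulting patch to the production RPD so production keeps its own connection pools. Below is a rough sketch of scripting that with the comparerpd/patchrpd command-line utilities; the utilities are real, but the exact flags shown are assumptions from memory and should be verified against your OBIEE version's documentation (the same merge can also be done interactively in the Administration Tool).

# Sketch: generate a patch from the test RPD and apply it to the prod RPD.
# Assumes OBIEE 11g with comparerpd/patchrpd on the PATH; the flags below are
# assumptions to verify against your version's docs before use.
import subprocess

ORIGINAL_RPD = "baseline.rpd"   # the RPD both environments were branched from
MODIFIED_RPD = "test.rpd"       # RPD with the changes validated against schema A
PROD_RPD     = "prod.rpd"       # current production RPD (schema B connection pools)
PATCH_FILE   = "changes.xml"
OUTPUT_RPD   = "prod_patched.rpd"
RPD_PWD      = "***"            # repository password placeholder

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Diff test against the baseline to capture only the BMM/presentation changes.
run(["comparerpd", "-P", RPD_PWD, "-C", MODIFIED_RPD,
     "-W", RPD_PWD, "-G", ORIGINAL_RPD, "-D", PATCH_FILE])

# 2. Apply that patch to the production RPD; prod keeps its own physical
#    connection pools, so test database details never reach production.
run(["patchrpd", "-P", RPD_PWD, "-C", PROD_RPD,
     "-Q", RPD_PWD, "-G", ORIGINAL_RPD,
     "-I", PATCH_FILE, "-O", OUTPUT_RPD])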

Similar Messages

  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is what is the recommended or best practice setup for SQL and Analysis Services? Should they be on the same server or each on a dedicated server?
    The hardware we're looking at would have 4 dual-core processors and 32 GB of RAM on an x64 base. Would this adequately support both services?
    Our primary application cube is just under 2GB and appset database is about 12 GB. We have over 1400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing information to be able to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL and SSAS can be on the same server. But it is important to also look beyond CPU and RAM and make sure there are no other bottlenecks, such as storage (best practice is to split the database files across several disks and, of course, to keep the logs on disks used only for the logs). The memory allocation for SQL and OLAP should also be adjusted so that each has enough memory at all times.
    Another point to consider is high availability. Clustering is quite common on that tier. And you could consider having the active node for SQL on one server and the active node for OLAP (SSAS) on the other server. It costs more in SQL licensing but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
    Bruno
    Edited by: Bruno Ranchy on Jul 3, 2010 9:13 AM
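    To make that memory split concrete: the SQL Server half of the cap can be set with sp_configure, sketched below via pyodbc. The server name, driver and the 20 GB figure are placeholders, and the Analysis Services side (the memory limit properties in msmdsrv.ini or the SSAS server properties dialog) still has to be adjusted separately.

    # Sketch: cap SQL Server's memory so SSAS on the same box keeps headroom.
    # Connection string and the 20 GB figure are assumptions -- adjust to your box.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=BPCSQL01;Trusted_Connection=yes",
        autocommit=True,  # sp_configure/RECONFIGURE should not run inside a transaction
    )
    cur = conn.cursor()
    cur.execute("EXEC sp_configure 'show advanced options', 1")
    cur.execute("RECONFIGURE")
    cur.execute("EXEC sp_configure 'max server memory (MB)', 20480")  # ~20 GB for SQL
    cur.execute("RECONFIGURE")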

  • Best practice for test reports location -multiple installers

    Hi,
    What is the recommended best practice for saving test reports with multiple installers of different applications?
    For example, if I have 3 different TestStand installers: Installer1, Installer2 and Installer3, and I want to save the test reports of each installer at:
    1. C:\Reports\Installer1\TestReportfilename
    2. C:\Reports\Installer2\TestReportfilename
    3. C:\Reports\Installer3\TestReportfilename
    How could I do this programmatically so that all reports end up in the proper folder when the TestStand installers are deployed to a test PC?
    Thanks,
    Frank

    There's no recommended best practice for what you're suggesting. The example here shows how to programmatically modify a report path. And, this Knowledge Base describes how you can change a report's filepath based on test results.
    -Mike 
    Applications Engineer
    National Instruments
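    If it helps as a starting point: the place this is normally done is the ReportOptions callback of the TestStand process model, where the report directory can be overridden per station or per installer. The sketch below only illustrates the folder convention from the question (one sub-folder per installer) in Python; the installer names and base folder are placeholders, and the actual assignment happens in the TestStand callback rather than here.

    # Sketch of the per-installer report folder convention from the question.
    # Installer names and the base folder are assumptions for illustration.
    import os

    def report_dir(installer_name, base=r"C:\Reports"):
        # Return <base>\<installer>, creating the folder if needed.
        path = os.path.join(base, installer_name)
        os.makedirs(path, exist_ok=True)
        return path

    for name in ("Installer1", "Installer2", "Installer3"):
        print(report_dir(name))   # e.g. C:\Reports\Installer1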

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC Dev, QA and Prod, or
    2. Connect IDM prod to ECC Prod only, and connect IDM dev to ECC Dev and QA?
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (prod and non-prod on separate instances).
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In dev/test/uat we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needed to be separate from production. It was also ideal to use this second environment for dev/test/uat, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, which we prefer to keep out of a production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the dev/test/uat instance because we had become reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • Best practice Forms 10g configuration setup and tuning

    Hi,
    We are currently deploying Forms 10g, coming from the 6i client/server version. Users are experiencing form hang-ups and hourglasses. This does not happen that often, but it can happen at any time, anywhere in the app (users do inserts, updates, deletes and queries).
    Is there a baseline best-practice configuration setup anywhere, either on the Forms side or on the AppServer side of things?
    Here is our setup:
    Forms 10g (9.0.4)
    Reports 10g (9.0.4)
    Oracle AppServer 10g (9.0.4)
    OS = RedHat Linux
    Client Workstations run on Windows 2000 and XP w/ Internet Explorer 6 or higher
    Average No. of users = 250
    Thanks for all your help

    Shutdown applications within the guest.
    Either power off from Oracle VM Manager or run 'xm shutdown xxx' from the command line.
    It is possible one or more files could be open when the shutdown is initiated.
    We have found at least one case of a misconfigured IP which would have resulted in disk access going via the 'Front End' interface rather than the Back End.
    Thanks

  • Best practices for transport of setup of archivelink/contentserver

    Hi
    I'm using the ArchiveLink setup to store all kinds of documents/files and to archive outgoing documents and print lists (print and archive).
    But I don't know how we should transport the settings.
    We need different setups in the dev/qa/prd systems because we don't want documents from our development system stored on the same server as the documents from our productive system.
    We have 2 setups used in different scenarios:
    1) We link the ObjectType/Doc.type to different content repositories in OAC3 (D1 for dev, Q1 for qa and P1 for prd)
    2) We point the content repository to different HTTP servers in OAC0/CSADMIN
    In both scenarios I see 2 options.
    1) Open the qa and prd systems for customizing and maintain the different setups directly in each system.
    2) Transport the prd content repositories all the way, but delete the transport with the qa content repositories after import into the qa system, and don't transport the dev content repositories at all.
    Both options are bad practices, but what are Best practices?
    Best regards
    Thomas Madsen Nielsen

    Hi David,
    The best mechanism is probably transporting the objects in the same order as creating/changing them. The order would be Application Components, Info Area, Info-objects, Transfer Structure, Transfer Rules, Communication Structure, InfoCube, Update rules, Infopackages and the frontend components.
    There are many topics on BW transports in SDN forum. You can search for them.
    You can refer to this link for more details on transports in BW System:
    http://help.sap.com/saphelp_nw04/helpdata/en/b5/1d733b73a8f706e10000000a11402f/frameset.htm
    Bye
    Dinesh

  • Imaging solution for EBS: Best practice guide for server setup

    Hi,
    We have to implement Imaging solution for EBS using AXF adapter. For this, customer is going to procure and implement SOA and WebCenter Content from scratch.
    We are now faced with the challenge whether to recommend SOA and WCC on the same Weblogic server or on separate Weblogic servers. Is there any best practice guide available for setting up Application Adapters for WCC?
    Thanks
    Arijit

    Hi ,
    I think this documentation would at least help you get started with planning: http://docs.oracle.com/cd/E23943_01/doc.1111/e15483/toc.htm
    Thanks,
    Srinath

  • Best practice for test to production

    I actually only have one server for test and production, but the dev processes all point to development databases and the production processes will point to production databases.
    The only real change is to make the JMS queue point to prod instead of test. There doesn't seem to be an easy way to copy a complete process and change the name; that would work best for me.
    Any ideas?
    Edited by: ss396s on Nov 19, 2009 9:21 AM

    Yes, you can. With SOA 11g, you can create deployment profiles to change properties during deployment. You can also build your own deployment mechanism, as I did.
    http://orasoa.blogspot.com/2009/04/new-oracle-soa-build-server-osbs.html
    Marc
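    For the "build your own deployment mechanism" route, the core of it is usually just substituting environment-specific values (JMS queue names, JNDI names, endpoint URLs) in the composite configuration before deploying. A minimal sketch is below; the file names and the replacement mapping are assumptions for illustration, and in 11g the supported equivalent is attaching a configuration plan at deploy time.

    # Sketch: rewrite test endpoints to production values before deployment.
    # File names and the replacement mapping are placeholders.
    from pathlib import Path

    REPLACEMENTS = {
        # test value                         -> production value
        "jms/TestOrderQueue":                  "jms/ProdOrderQueue",
        "http://testhost:8001/services/Order": "http://prodhost:8001/services/Order",
    }

    def retarget(src, dst):
        # Copy a composite/config file, swapping environment-specific values.
        text = Path(src).read_text(encoding="utf-8")
        for old, new in REPLACEMENTS.items():
            text = text.replace(old, new)
        Path(dst).write_text(text, encoding="utf-8")

    retarget("composite.xml", "composite.prod.xml")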

  • Best practice for testing a web form after converting to a connection file

    I was using InfoPath to test a web form using a web service for the connection. I decided to deploy the form to SharePoint to make sure it worked outside of InfoPath preview mode. This required me to convert the connection to a connection file and publish that in SharePoint. After getting all that set up, it appears to work from SharePoint.
    Now I want to go back and work on the form some more. But in InfoPath it appears preview is no longer an option because the form is using a connection file. While trying to preview the form I am given a message that basically says the form will have to be published in order for it to make the connection. This would mean I would have to overwrite the form on SharePoint just to test.
    I suppose I could publish to a different form library and switch back when I'm done, but it would seem that InfoPath preview should still be an option for testing. Is there something I'm missing in InfoPath that would allow me to still test with this form?

    Not sure what you're doing, but previewing IP forms with data connections should be quite possible.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs
    To put it simply, here are the basic steps:
    Create IP form based on web service
    Fill out form and test via Preview
    Convert data connection to a connection file
    Publish IP form and test on SharePoint
    Go back into IP and try to test again via Preview <--- no longer works - now stuck with testing on SharePoint which is "live"
    This could be a factor of how our SharePoint environment is configured. I have no other experience to base it on.

  • Best practice for conditional region display

    I feel like I'm missing something here...
    What's the easiest way to control display of a region only if the user has submitted that same page via a button?
    For example, I only want to display a search results region if the user has entered search criteria and pressed a "Search" button.
    Currently, I'm setting a hidden item value on page submission, and only displaying the region if the hidden item value = "SEARCH".
    Is there a better way?

    After clicking the Search button (Item Name = SEARCH), debug mode shows the following at the start of the SHOW:
    When you submit the page by clicking the button, the REQUEST will be set to the button name during accept processing, not during the subsequent show processing (the request value for the latter can be passed in via the same-page branch you have defined).
    To verify this, create a dummy after-submit PL/SQL process containing just NULL; with a Success Message of "Fired", make it conditional upon REQUEST = SEARCH, and make sure your branch has the 'Show process success message' checkbox checked.
    Now when you click the Search button, the page should re-display with the Fired success message.

  • Best practices for Test - Production promotions

    Good Day to all
    Environment:
    OWB Client: 10.1.0.2.0 on Windows XP Professional
    OWB Server Side: 10.1.0.2.0 on UNIX (AIX 5.2)
    Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    Runtime Schema: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    I am trying to put together an idiot-proof process to have our Infrastructure Team (Operators) facilitate the promotion of OWB objects from our Test environment to our Production environment.
    The Production environment is a separate OWB Repository and separate target schema.
    Thanks to SOX, we can't have one person able to update both environments.
    I am thinking of using UNIX scripts to run MDL Exports and MDL Imports and OMBPLUS to do the deployments.
    Has anyone implemented such a process?
    My other thought was to use the 'Deploy to File' method and OMBPLUS to do the actual deployment.
    Any pros and cons to either method?
    Any and all opinions are most welcomed.
    Many thanks.
    Gary

    Upon further research it appears I'll need a combination of exp/imp and OMBPlus to do what I need. Please correct any of the below assumptions:
    1 - I need to use exp/imp to get the object definitions from the Test repository to Production repository.
    2 - I need to do a 'Deploy to File' through the OWB Client on the Test repository objects to generate the deployment files.
    3 - I then need to use OMBPlus to deploy the objects on the Production side.
    Questions:
    1 - Is there a way to get the 'Deploy to File' output in a command line environment so it can be scripted in UNIX?
    2 - Is there a way to set up a 'Deploy Only' type user so the Operators can ONLY deploy objects and NOT change anything?
    Thanks very much for the help. I'm very interested in how other companies are handling the separation of duties with SOX and these moves from Test to Production.
    Till then...
    Gary
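    In case it helps, the operator-facing piece can be reduced to a wrapper that runs a fixed, version-controlled OMB Plus script against the production repository, so the operators never open the OWB client and never touch Test. The sketch below only shows that wrapper; the executable path, script name and logging are assumptions, and the actual OMB commands (connect, MDL import, deploy) belong in the Tcl script and need to be written against your release's OMB Plus reference.

    # Sketch: operator wrapper that runs a pre-approved OMB Plus deployment script.
    # Paths and file names are placeholders; the OMB/MDL commands live in the .tcl file.
    import subprocess
    import sys
    from datetime import datetime

    OMBPLUS = "/u01/owb/owb/bin/unix/OMBPlus.sh"   # placeholder path to OMB Plus
    SCRIPT  = "deploy_prod.tcl"                    # reviewed, version-controlled script
    LOG     = "deploy_%s.log" % datetime.now().strftime("%Y%m%d_%H%M%S")

    with open(LOG, "w") as log:
        result = subprocess.call([OMBPLUS, SCRIPT], stdout=log, stderr=subprocess.STDOUT)
    print("OMB Plus finished with exit code %d, log in %s" % (result, LOG))
    sys.exit(result)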

  • Best practice for EBS Testing

    Hello friends - we are doing EBS (Electronic Bank Statement) configuration, and it includes search strings as well. These changes are applicable to around 40-50 bank accounts.
    Should I ask the bank to send us test bank statements for all these accounts?
    Can you please share best practices for testing the EBS setup?
    Thanks

    Hello!
    You don't have to test EBS for each bank account. The best approach to testing is to identify the typical bank statement cases and test those. For instance, if you have bank statements from five banks with several bank accounts in each bank, you need to test one bank statement for one bank account from each bank. Similarly, if you have different types of bank accounts (e.g. current account, deposit account, transfer account, etc.), you will have different operation types in the bank statements for these accounts, so you also have to test bank statements for the different account types.
    To sum up, test a typical bank statement from each bank and, if applicable, different bank statements for each account type.
    Hope this will help you!
    Best regards!
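    Put differently, the test set works out to roughly one statement per bank plus one per account type, not one per account. A small sketch of picking that subset is below; the banks, account types and IDs are placeholders for your own master data.

    # Sketch: select one representative account per bank and per account type,
    # instead of requesting test statements for all 40-50 accounts. Sample data is made up.
    accounts = [
        # (bank, account type, account id)
        ("Bank A", "current",  "A-001"),
        ("Bank A", "deposit",  "A-002"),
        ("Bank B", "current",  "B-001"),
        ("Bank C", "transfer", "C-001"),
    ]

    selected, seen_banks, seen_types = [], set(), set()
    for bank, acct_type, acct_id in accounts:
        if bank not in seen_banks or acct_type not in seen_types:
            selected.append(acct_id)
            seen_banks.add(bank)
            seen_types.add(acct_type)

    print(selected)   # statements to request, e.g. ['A-001', 'A-002', 'B-001', 'C-001']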

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhotos and iTunes content and is being left untouched until I get all the TM/backup set up and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of system and iLife DVD's to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install?, ISO image?, straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    I know its a lot of question but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Best practice for lazy-loading collection once but making sure it's there?

    I'm confused on the best practice to handle the 'setup' of a form, where I need a remote call to take place just once for the form, but I also need to make use of this collection for a combobox that will change when different rows in the datagrid or clicked. Easier if I just explain...
    You click on a row in a datagrid to edit an object (for this example let's say it's an "Employee")
    The form you go to needs to have a collection of "Department" objects loaded by a remote call. This loading of departments should only happen once, since it's not common for them to change. The collection of departments is used to populate a form combobox.
    You need to figure out which department of the comboBox is the selectedIndex by iterating over the departments and finding the one that matches the employee.department.id
    Individually, I know how I can do each of the above, but due to the asynch nature of Flex, I'm having trouble setting up things. Here are some issues...
    My initial thought was to just put the loading of the departments in an init() method on the employeeForm, which would run on the form's creationComplete() event. Then, when the event handler for clicking on a row fires on the grid component page, I call a setup() method on my employeeForm, which figures out which selectedIndex to set on the combobox by looking at the departments.
    The problem is that the resultHandler for the departments load might not have returned yet (so the departments might not be there when setUp is called), yet I can't put my business logic for determining the correct combobox selection in the departmentResultHandler, since that would mean firing the call to the remote server object every time, which I don't want.
    I have to be missing a simple best practice here. Suggestions welcome.

    Hi there rickcr
    This is pretty rough and you'll need to do some tidying up but have a look below.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.collections.ArrayCollection;

                private var comboData:ArrayCollection;

                private function setUp():void {
                    if (comboData) {
                        Alert.show('Data Is Present');
                        populateForm();
                    } else {
                        Alert.show('Data Not');
                        getData();
                    }
                }

                private function getData():void {
                    comboData = new ArrayCollection();
                    // On the result of the remote call, call setUp() again
                }

                private function populateForm():void {
                    // populate your form
                }
            ]]>
        </mx:Script>
        <mx:TabNavigator left="50" right="638" top="50" bottom="413" minWidth="500" minHeight="500">
            <mx:Canvas label="Tab 1" width="100%" height="100%">
            </mx:Canvas>
            <mx:Canvas label="Tab 2" width="100%" height="100%" show="setUp()">
            </mx:Canvas>
        </mx:TabNavigator>
    </mx:Application>
    I think this example is kind of showing what you want. When you first click tab 2 there is no data. When you click tab 2 again there is. The data for your combo is going to be stored in comboData. When the component first gets created, comboData is not instantiated, just declared. This allows you to say
    if (comboData)
    This means if the variable has your data in it you can populate the form.  At first it doesn't so on the else condition you can call your data, and then on the result of your data coming back you can say
    comboData = new ArrayCollection(), put the data in it and call the setUp procedure again. This time comboData is populated and exists, so it will run the populateForm method and you can decide which selected item to set.
    If this is on a bigger scale you'll want to look into creating a proper manager class to handle this, but this demo simply shows that you can test whether the data is there.
    Hope it helps and gives you some ideas.
    Andrew

  • CMS Tracks - best practice?

    Hi,
    we are developing our product with CVS right now and want to move over to DTR. The basic concepts are clear and I have already done a test migration, which was successful.
    But I am unclear on the change management piece:
    Let's say we develop a version 1.0
    Now this version has service packs 1.0 SP1, SP2, SP3 and so on. These service packs also have to be maintained, they might contain bugs, so you could have something like
    1.0 SP1 Patch 1, 1.0 SP1 Patch 2 and so on.
    How do I handle this with CMS tracks? What's the best practice? Do I set up a track for every major version and for every support package in that version, i.e. tracks 10SP0, 10SP1, 10SP2, 10SP3 and so on? Will this work?
    Right now we have a lot of CVS tags and branches to make this work... but how do you do that in DTR? I need to be able to jump back to a specific version and SP and fix bugs in there if a customer needs it.
    In CVS the concept is that I develop in HEAD and bugfix in branches (which is all in the same repository / "workspace"). But how do I do that in DTR? Is there something analogous to this? Or do I always just use the track with the highest version number as the "HEAD"?
    Any input is appreciated.
    Thanks
    Bruno

    Hello Bruno,
    For each state of your product that you wish to maintain, you must create a track. So in your case, you will have a track structure as follows:
    Track1.0
    Track1.0_SP1
    Track1.0_SP2
    DTR does not support tags (yet), so the state that you wish to retain for possible future fixes must be isolated in a workspace of a given track. That is, "Track1.0_SP1" will contain the workspaces that represent the SP1 state, and a fix for SP1 must be done in this track.
    And you must develop on the Main Release track ("Track1.0") and do the bugfixes in the track for the appropriate SP. You should set up a transport connection of type "Repair" from each SP track to the Main Release track, so the fixes you make in the SP track are automatically back-transported to the Main Release track. (This connection can be set up in the "Track Connections" tab in the CMS Landscape Configurator.)
    Also note that the DTR version graph represents a global version history, so for any file you will be able to view the changes made in the different tracks (workspaces) from the Version Graph view (in the DTR Perspective of the SAP NetWeaver Developer Studio).
    Regards,
    Manohar
