OWB Change Management/Version Control Best Practice

Hi
I am about to start developing a data warehouse using OWB 10g R2, and I've been doing quite a lot of research into the various deployment/change management/version control techniques that can be used, but am still unsure which is the best to use.
We will have 2-3 developers working on the project, and will be deploying from Development to Test to Production (each will have a separate repository). We want to be able to easily identify changes made between one release and the next, to have a greater degree of control and awareness of what goes into each release. We also wish to use a source control system to track changes (we'll probably use SVN, but I don't think the actual SCS tool makes a big difference to our decision at this point).
The options available (that I'm aware of), are:
1. Full MDL export/import.
2. Snapshot MDL export/import.
3. Manual coding of everything using OMB Plus.
I am loath to use the full MDL export/import functionality, since it will be difficult, if not impossible, to easily identify the changes made between one release and the next.
The snapshot MDL export/import functionality is a little better at comparing releases, but it's still difficult to see exactly what has changed between one version and the next, particularly when a change has been made to a transformation. It also doesn't cope well with tracking changes made individually to different components of the model.
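For what it's worth, when the MDL is exported in text rather than zipped format, even a plain line diff between two releases can surface what changed in a transformation. A minimal sketch, assuming text-format exports (the file names are placeholders):

```python
import difflib

def mdl_release_diff(old_export, new_export):
    """Unified diff between two text-format MDL exports.

    Both exports should be produced with the same export options so
    that ordering noise does not drown out real changes. The file
    names below are placeholders.
    """
    return list(difflib.unified_diff(
        old_export.splitlines(),
        new_export.splitlines(),
        fromfile="release_1.mdl",
        tofile="release_2.mdl",
        lineterm="",
    ))
```

This understands nothing about OWB metadata, of course; it only narrows down which definitions to inspect in the client.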
The manual coding using OMB Plus seems like the best option at the moment, though I keep thinking "What's the point of using a GUI tool, if I'm just going to code everything in scripts anyway?".
I know that you can create OMB Plus code generation scripts to produce your 'creation' scripts, but generating the alteration scripts seems like it would be more complicated than just writing the alteration scripts manually.
Any thoughts anyone out there has would be much appreciated.
Thanks
Liffey

Well, you can also do per-object MDL exports and then manage those in your version control system. With a proper directory structure it would be fairly simple to code an OMB+ Script that scans a release directory tree and imports the objects one by one. I have done this before, although if you are using OWB as the primary metadata location for database objects then you have to come up with some way to manage object dependency order issues.
The nice thing about this sort of system is that a patch can be easily shipped with only those objects that need to be updated.
And if you force developers to put object-level MDL into your version control system then your system should also have pretty reporting on what objects were changed for a release and why.
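The scan-and-import idea above can be sketched outside OMB+ as well. Here the release tree is assumed to group per-object MDL files into type-named subdirectories (a made-up convention, not anything OWB mandates), and a coarse type-level ordering stands in for real dependency resolution:

```python
from pathlib import Path

# Coarse import order: physical objects before the mappings that
# reference them. The directory names are a hypothetical layout,
# not an OWB convention.
IMPORT_ORDER = ["sequences", "tables", "views", "mappings", "process_flows"]

def ordered_mdl_files(release_dir):
    """Return per-object .mdl files under release_dir, sorted so that
    each object type is imported before the types that depend on it."""
    rank = {name: i for i, name in enumerate(IMPORT_ORDER)}
    files = Path(release_dir).glob("*/*.mdl")
    return sorted(files,
                  key=lambda p: (rank.get(p.parent.name, len(IMPORT_ORDER)),
                                 p.name))
```

An OMB+ driver script would then loop over this list and import each file in turn; true object-level dependencies (e.g. a view on a table in another subtree) still need manual attention, as noted above.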
At my current job we do full exports of the project MDL and have a deployment script that drops the pre-existing deployed version of the project before importing and deploying the new version, which also works quite well - although as you note the tracking of what has changed in a release then needs to be carefully managed elsewhere. But we don't deploy any of our physical database objects through OWB. Those are deployed from Designer, and our patch script applies all physical changes first before we replace the mappings from the OWB project. We don't even bother synching the project metadata for tables / views / etc. at deployment. If the OWB project's metadata for database objects is not in sync with Designer, then we wind up with deployment errors. But on the whole it works pretty well.

Similar Messages

  • Right tool for Configuration Management (version control) for Jdeveloper

    All
    Please share your ideas and experience about the right tool for Configuration Management (version control) for JDeveloper development. I used CVS in the past. Now, in my new company, we are planning to use Oracle SCM. Has anybody used it before for JDeveloper development (BC4J/ADF and Struts projects)? Is SCM also integrated with JDeveloper just like CVS?
    Jdev Team please guide us.
    Thanks

    Before you go with SCM you should read these two papers:
    http://otn.oracle.com/products/designer/Schedule_2004.htm
    http://otn.oracle.com/products/designer/FAQ_Schedule_2004.htm

  • Changing a Cube - SAP Best Practice

    I have a situation where a Consultant we have is speaking of a SAP Best Practice but cannot provide any documentation support the claim.
    The situation is that a change has been made in BW Dev to a KF (the datatype was changed). Of course, the transport fails in the BW QA system. OSS note 125499 suggests activating the object manually.
    To do this, I will need to open up the system for changes and deactivate the KF in question; then a core SAP BW table (RSDKYF) is to be modified to change the datatype. Upon activation of the KF, the data in the cube will be converted.
    If I delete the data in the cube, apply the transport, and then reload from PSA, would this work as well? I would rather not have to open up the systems and have core BW tables modified. That just doesn't seem like a best practice to me.
    Is this practice a SAP Best Practice?
    Regards,
    Kevin

    Hello Kevin,
    opening the system for manual changes is not best practice. There are only a few exceptional cases where this is necessary (usually documented in SAP notes).
    The "easy" practice would be to add a new key figure instead of changing the data type. Obviously this causes some rework in dependent objects, but the transport will work and no table conversions will be required.
    The "safe" practice is to drop and reload the data. You can do it from PSA if the data is still available, or create a backup InfoCube and use the data mart interface to transfer data between the original and the backup.
    Regards
    Marc
    SAP NetWeaver RIG

  • Source Configuration Management / Version Control

    I was wondering what the Forte raving masses out there are doing about Source
    Configuration Management and Version Control issues?
    Have you been able to implement or "skunk work" a packaged product with your
    Forte development environment?
    Our shop consists of WindowsNT Forte developers coding for predominantly
    Windows95 clients and a HP UNIX Central Server. At this time we currently use a
    home grown "system" to handle Source Configuration Management and Version
    Control issues. We are now looking to see if there is a better way to do this.
    We've identified several Industry Standard packages (SCCS, CVS, Microsoft
    SourceSafe etc.) and still haven't found anything very useful.
    What I am seeing is that all of the packages so far have direct hooks in C++,
    Visual Basic etc.
    I have yet to see something with Forte hooks.
    Kelsey Petrychyn
    SaskTel Forte Technical Analyst
    ITM - Technology Solutions - Distributed Computing (OTC)
    Tel (306) 777 - 4906, Fax (306) 359 - 0857
    Internet:[email protected]
    Quality is not job 1. It is the only job!


  • OWB 10g/11g version control

    I am using OWB 10g Release 2, but there is no feature for version control. The snapshot feature and export/import can help, but it's very difficult with a big project.
    Is there any new feature in OWB 11g Release 1 or 2 for version control? Do you have any suggestions for version control in 10g?
    Robin

    Hi Robin,
    In the repository there can be only one version at the moment. If you have to work on an older release, you have to load it into another repository.
    We proceed as follows:
    1. Build a collection containing all objects that belong to the release
    2. Export that collection
    3. Check the file into CVS
    4. Import the file into production repository (in production db)
    5. Deploy from the production repository to the production target schema
    That way the release that is currently deployed on production is also in the production repository. You may do hotfixes directly here or just have a look at what is currently deployed.
    In our development repository we can work on new features.
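    Steps 1-4 of this cycle lend themselves to scripting. A rough sketch of the command sequence, driving a hypothetical OMB+ export script and the CVS client from Python (the Tcl script name and file paths are assumptions, not part of any OWB distribution):

```python
import subprocess

def release_commands(collection, mdl_file, message):
    """Commands for one release cycle: export the collection through an
    OMB+ script, then check the MDL file into CVS. The script name
    export_collection.tcl is hypothetical."""
    return [
        ["OMBPlus.sh", "export_collection.tcl", collection, mdl_file],
        ["cvs", "add", "-kb", mdl_file],   # -kb: MDL is a binary file
        ["cvs", "commit", "-m", message, mdl_file],
    ]

def run_release(collection, mdl_file, message):
    for cmd in release_commands(collection, mdl_file, message):
        subprocess.run(cmd, check=True)  # stop on the first failure
```

    Importing into the production repository and deploying (steps 4-5) would follow the same pattern with a second OMB+ script.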
    I gave a talk about automating this process at DOAG 2008. You may request the presentation here: http://www.metafinanz.de/leistungen/leistungsbereiche/bi-reporting/data-warehousing/kontaktformular/
    Though the slides are in German, they show the architecture and should give you some idea.
    Regards,
    Carsten.

  • Adobe LiveCycle Process Management Overview and Best Practices

    To get familiar with the best practices of process management watch this recording of a webinar hosted by Avoka Technologies.


  • Management IP Address : best practices ?

    Hi,
    What are the advantages of assigning the management IP address to the service profile rather than to the blade server?
    Can we do both? And for what purpose?
    What are the best practices for specific use cases?
    Many thanks in advance for your feedback.
    Nicolas.

    The ability to assign the IP address to the SP was added at the request of users. This allows the KVM IP address to follow the SP (and the OS associated with that SP). Customers wanted the KVM IP to be associated with their OS.
    The IP associated with the blade can be used at any time for a KVM session. An IP address associated with the SP can only be used while the SP is associated with a blade.
    Both can be used. I don't believe there is a best practice for their assignment.
    Thank You,
    Dan Laden
    Cisco PDI Data Center
    Want to know more about how PDI can assist you?
    http://www.youtube.com/watch?v=4BebSCuxcQU&list=PL88EB353557455BD7
    http://www.cisco.com/go/pdihelpdesk

  • OWB Project Management, Versioning

    Hi,
    We're using version 9.2.0.3.0 version of OWB. Most of our work has been in the development stage, but we're ready to start moving to Integration, Acceptance and eventually Production.
    I've read a lot of OTN and Metalink forum messages on how people are doing this. I'd like to know the best way to move these changes through the different levels. Some people mentioned creating different repositories, others have mentioned different projects (all within the same repository), and there is also mention of snapshots.
    The snapshot concept is nice, but would incur too much workaround if a restore is needed. From a developer's point of view, if a restore is needed, then afterward who knows what's been restored, what's current, etc., especially if we have about 5-6 developers working in OWB.
    I'm kinda leaning towards having separate repositories, so the developers can see the mappings in each level at any time. Having different projects would also be feasible.
    As a DBA, once we get to production, then I agree that if the code is deployed in a file that I would only have to run, then this would make our lives easier.
    I'd like to hear what people (and Oracle) have to say on the subject.
    Thanks,
    Guy LaBelle
    DBA
    Atlantic BlueCross Care

    There can be several possible solutions of this problem. One would be the snapshots (project versions stored in the database). Today you can create snapshots at different project phases and the snapshot utility will let you see the differences between the various versions in a graphical interface (version reporting)
    The problem with snapshots (as you correctly state) might be that you have to restore the versions to work with them, which might be cumbersome if the developers need to have instantaneous access to multiple versions.
    Therefore, in your case I would suggest the following:
    - create two separate repositories (with two separate run-time environments), one for development, one for production.
    - do all the design, development and testing in the development environment. You can have different projects containing different versions of the project in the development repository. This will allow the users almost simultaneous access to multiple versions. (What you don't have here, but is present in snapshots, is version reporting - project diffs; the users will have to compare the versions manually. If project diffs are more important than fast access to multiple versions, then use snapshots instead of multiple projects.)
    - Once the design is consolidated and you are ready to go into production, export the project into an mdl file and import it into the production repository. Deploy and run the production environment.
    - Now you have two almost identical systems running side by side. The development system also has all the earlier project versions, while production only has the latest version. If a problem is detected in production, test and fix it in development and then move the fix into production (MDL export/import + deploy).
    Hope this helps.
    Regards:
    Igor

  • Remote Control Best Practices

    Hello. I am new to the world of Mac. I have a bunch of Windows XP machines at home but just bought a MacBook White for my daughter for college. I would like to be able to provide tech support while she is away so I have to find a remote control solution that will do the trick.
    I will not have a Mac at the house (at least not right away) so I am hoping to find something that will allow me to access her Mac over the Internet using a Windows-based client.
    So I have two basic questions:
    1) What is my best solution for this? I read that VNC might work, but is that the best solution? If so, is there a VNC host already embedded in Mac OS, or do I have to find one (and where)?
    2) If I end up getting a MacBook Air, what Apple-based solution is best? ARD? What would I have to install/purchase on both the MacBook and the Air?
    Thanks for your suggestions and patience with a Windows guy who appreciates Mac.
    -Rick

    I have been very successful using TeamViewer from Mac to Mac, Mac to Windows and some Windows to Mac. It's free to use within limits: you have to buy it if you're using it professionally, but for what you're talking about, it's free. http://teamviewer.com
    If you've tried it before, download it again, because they just released the 4.0 version the other day. I've gotten through many firewalls with it, and even some dual NAT situations.
    If you get a Mac, you can also use the screen sharing function of iChat, but I find it to be a lot less reliable than TeamViewer, especially through corporate firewalls.

  • Managed bean inheritance best practice

    Hi!
    I'm new to JSF so I would like to apologize if my question is trivial, but I haven't been able to find proper solution.
    I'm using JSF 1.2 on WAS CE. I have a page where the user can search for entities, both persons and companies, which share some data but have some different properties. Both inherit from an entity class, so search results are displayed in a dataTable as base entity objects. Each dataTable row has a commandLink which should navigate the user to the specific (person or company) page for editing data. For example, if the user clicks on the commandLink in a person data row, I would like to show the personEdit page with all data set to components. I understand that I can use an action to set up navigation to the proper page, and I have done so (the following action() method). I have implemented EntityBean, PersonBean and CompanyBean:
    public class EntityBean {
        protected Entity entity;
        public int getId() {
            return entity.getId();
        }
        public void setId(int id) {
            entity.setId(id);
        }
        public String action() {
            // Some code which defines the return value for action invoking.
            return null;
        }
        public void edit(ActionEvent event) {
            // Get entity data for edit.
        }
    }
    public class PersonBean extends EntityBean {
        public String getFirstName() {
            return ((Person)entity).getFirstName();
        }
    }
    public class CompanyBean extends EntityBean {
        public String getName() {
            return ((Company)entity).getName();
        }
    }
    I tried combining the action with an actionListener method (the edit(ActionEvent event) method) invoked from the commandLink, but if it is invoked on the mapped entityBean, only the entity properties are set on whichever page is shown after the action. If the same (not overridden) method is invoked on e.g. personBean, I get a NullPointerException when trying to access a property of the base entityBean.
    How should I invoke proper data initialization for bean shown on another page?
    Thanks in advance.

    Thank you for your quick reply. Beside the code for managed bean classes, here's the rest of the code for this specific problem.
    In faces-config.xml I have the following configuration:
         <managed-bean>
              <managed-bean-name>entitySearchBean</managed-bean-name>
              <managed-bean-class>entities.EntitySearchBean</managed-bean-class>
              <managed-bean-scope>request</managed-bean-scope>
         </managed-bean>
         <managed-bean>
              <managed-bean-name>personBean</managed-bean-name>
              <managed-bean-class>entities.PersonBean</managed-bean-class>
              <managed-bean-scope>request</managed-bean-scope>
         </managed-bean>
         <managed-bean>
              <managed-bean-name>entityBean</managed-bean-name>
              <managed-bean-class>legalentities.EntityBean</managed-bean-class>
              <managed-bean-scope>request</managed-bean-scope>
         </managed-bean>On my JSP, I have a dataTable loaded with entityBean objects, and the following code for showing editPage:
         <h:column>
              <h:commandLink id="editLink" action="#{entitySearchBean.editEntity}" actionListener="#{entityBean.edit}">
                   <h:outputText value="#{bundle.edit}" />
                   <f:param id="editId" name="editId" value="#{entity.id}" />
              </h:commandLink>
         </h:column>
     </h:dataTable>
    So, what I'm trying to do is call entityBean.edit, which should initialize EntityBean with the entity data (it sets e.g. person data to the entity field of the EntityBean class). I also call the editEntity method on the EntitySearchBean class (a bean used just for searching entities with some criteria), and pass the id property of the entity selected in the dataTable.
    What I'm expecting is to get personEdit.jsp with loaded personBean data initialized from actionListener on searchPage.jsp. Is it possible or is there some other way to do this?

  • Grid Control : Change Management Across Non-Prod and Prod

    All,
    I had some questions with regards to Grid Control Implementation Architechture on Prod and Non-Prod Env's
    The best practice is to configure a production Grid Control (GC) environment to monitor only production targets. This prevents non-production targets from adversely impacting a production environment. Another issue could be a compliance policy wherein production systems are to be isolated from non-production environments.
    If this is the case, then we have to have two GC environments, one for Prod and one for Non-Prod. In such a scenario, how well can we do change management (version control) across the Non-Prod and Prod environments?
    To be more specific:
    How is version control managed for Oracle builds if we have separate GC environments for Prod and Non-Prod? (The Change Management and Configuration Management packs are there, but what is their real depth?)
    How is change management handled for the Oracle environments from DEV -> Test -> Pre-Production -> Production? We aim to have consistent builds across the environments. Can we achieve this if GC is managed by different OMS/OMR instances? And if we have two separate environments, how do we sync the GC environments managing Prod/Non-Prod after a change?

    This feature is not there in 10.2.
    This situation can be avoided by changing the 9i port to 1522,
    or by installing GC on a different box.

  • What is the best practice to migrate code from dev  to prod

    I have few questions related OWB version control/migration.
    1. What is the best way to Version the design repository
    We want to keep a separate copy of production and test version all the time.
    Do we have to create a design repository in each environment?
    2. Is it possible to have multiple copies of the same mapping? If yes, how is this done in OWB?
    3. How to migrate the code to different environments
    2.1 Is there a way to provide DBA with script for deploying PL/SQL mapping without using OWB ?
    2.2 How important is the setting before actual code in the XML file is required to execute the mapping.
    2.3 We are planning to create tables , materialized view etc., outside OWB will this cause any problems ?
    Note: Our DBA prefers compiling the code through SQL*Plus.
    3. Which user ( target owner or runtime access ) should be used to execute the mapping through sqlplus.
    We are looking for a solution were we do not have to use OWB tools for version control , migration and deployment. Currently we are using PVCS for version control and migration.
    we are using OWB 9.2 , Oracle 9.2.
    Any suggestion/ comments are appreciated.
    Thanks,
    Sekar,K

    Some answers below:
    1. Yes, you should use snapshots (right-click on the object and select Create Snapshot, or on the Project menu select Change Manager for centralized snapshot management).
    2. See above.
    3. Usually you will export the (test/development) design repository to an mdl file, and import it into the other (production) design directory. Then you will deploy the code from the other design directory into the production target (runtime).
    2.1 From 9.2 on, you will have to deploy to file and then use OWB scripting (OMBPlus) to deploy the generated file outside of the deployment manager.
    2.2 Not sure what is asked in this question - can you rephrase?
    2.3 No, but the creation and use of these will not be managed and audited by OWB (Runtime Audit Browser).
    3.(bis) The target owner.
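    On 3.(bis), the SQL*Plus invocation the DBA would run as the target owner can be as simple as the following sketch; the connection details and script name are placeholders:

```python
def sqlplus_command(target_owner, password, tns_alias, script):
    """Command line for running a deployment or execution script via
    SQL*Plus as the target owner (not the runtime access user).
    All argument values here are placeholders."""
    return ["sqlplus", "-s",
            "{0}/{1}@{2}".format(target_owner, password, tns_alias),
            "@{0}".format(script)]
```

    This keeps OWB itself out of the production deployment path, which matches the original requirement.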
    Regards:
    Igor

  • Migrating from TFVC to Git best practices

    Hi, I've been tasked with migrating a large number of team projects from TFVC to Git in TFS 2013. Can you please suggest any tools or documents to get me started? It seems the only suggestion so far is to create a new Git project, but you'd lose the history.

    Hi Beaglehound24,
    You cannot change the version control system after a team project is created. As you mentioned, the way is to create a new Git team project and then move the source code. This will not keep your history.
    A very detailed instruction, together with a PowerShell script which migrates source code, work items and test plans, can be found here:
    Migrating a TFS TFVC based team project to a Git team project - a practical example
    Best regards,

  • SAP Best Practice on IDES

    Dear All;
    I know that IDES does not have SAP Best Practices. Do you know if it is possible to upload SAP Best Practices into IDES? Are there restrictions on this? If it is possible, how can I do this?
    Best Regards
    ~Amal Aloun

    You need Solution Manager to download Best Practices.
    Before you upload, make sure the version matches.

  • RMAN/TSM Configuration - Best Practices?

    Wondering how best to configure RMAN with TSM from a network perspective. Dual HBA's? Suggestions?
    Thanks.

    * The OWB client software will reside on each user's windows desktop.
    Yes.
    * Should there be one repository that will deploy to the runtime environments of TEST, USER ACCEPTANCE and PRODUCTION environments? or should there be three repositories, one for each environment?
    One.
    * If there is only a single repository that will deploy to all three environments, should it reside on TEST, USER ACCEPTANCE or PRODUCTION server?
    Test, but you need a repository on the other environments too.
    * Does OWB have sufficient version control capabilities to drive three environments simultaneously?
    No, no version control at all. You can buy a third-party tool like http://www.scm4all.com
    * Is it possible to have an additional development repository on a laptop to do remote development of source/target definition, ETL etc and then deploy it to a centralized development repository? Perhaps it is possible to generate OMB+ scripts from the laptop repository and then run them against the centralized repository on the shared server.
    Use export/import from OWB via MDL files.
    Regards
    Detlef
