Version control practices

I'm looking for information on how others employ version control on all the various portal resources. Versioning custom development around remote services and the like is easily handled through the IDE or normal version control practices. What I am wondering about is how people version admin objects, KD items, and other resources managed in the portal.
Any info would be appreciated.
Thanks.

In our current J2EE environment an Ant script builds a WAR and copies it to a staging file server; a Java app then deploys the WAR to WebLogic using the WebLogic scripting extensions. We are thinking of modifying this app to also deploy the PTE files that any portlets in the WAR depend on. Day one, a JSP portlet requires a single session preference; the next day it needs a login token to use the PRC. How do you coordinate this? For us it will probably be done with custom scripting.
A developer modifies the JSP and checks it in along with a revised remote service PTE file that has the new settings. The Ant script will then build the WAR, package it into a tar with the PTE file, and finally drop it on the staging server. From there our existing Java deployment web app can extract the tar, deploy the WAR, and run a migration import on ALUI. Hopefully all of this can happen in a transactional context, so that if the import fails, the new portlet is not deployed.
I don't want to run into the case where a new portlet is out on the QA box and the portlet service is misconfigured and isn't supplying the correct headers to the portlet tier.
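To make the build-and-package step concrete, here is a minimal Ant sketch of the WAR-plus-PTE bundling described above. All file names, paths, and the staging location are hypothetical; the deployment to WebLogic and the migration import would still be driven by the Java deployment app.

<project name="portlet-bundle" default="stage" basedir=".">
  <!-- Hypothetical locations; adjust to your layout -->
  <property name="build.dir" value="build"/>
  <property name="staging.dir" value="/mnt/staging"/>

  <!-- Build the portlet WAR -->
  <target name="war">
    <war destfile="${build.dir}/myportlet.war" webxml="web/WEB-INF/web.xml">
      <fileset dir="web" excludes="WEB-INF/web.xml"/>
      <classes dir="${build.dir}/classes"/>
    </war>
  </target>

  <!-- Bundle the WAR with the PTE file it depends on and drop it on staging -->
  <target name="stage" depends="war">
    <tar destfile="${build.dir}/myportlet-bundle.tar">
      <tarfileset dir="${build.dir}" includes="myportlet.war"/>
      <tarfileset dir="migration" includes="myportlet.pte"/>
    </tar>
    <copy file="${build.dir}/myportlet-bundle.tar" todir="${staging.dir}"/>
  </target>
</project>

Keeping the WAR and the PTE file in one tar means the deployment app can treat the pair as a single unit and refuse to deploy the WAR if the PTE import fails.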

Similar Messages

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, so if they are away from the office or make a big "oops" mistake, it never hits the server; 2) you can lock a file to avoid conflicts; and 3) if you don't lock the file and a conflict (two simultaneous edits) occurs, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up keeping critical files that should have been shared on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync, Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and others shouldn't be modifying it? Is there a way to use Unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party checkbox and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • OWB Change Management/Version Control Best Practice

    Hi
    I am about to start developing a data warehouse using OWB 10g R2, and I've been doing quite a lot of research into the various deployment/change management/version control techniques that can be used, but am still unsure which is the best to use.
    We will have 2-3 developers working on the project, and will be deploying from Development, to Test, to Production (each will have a separate repository). We want to be able to easily identify changes made between 1 release and the next to have a greater degree of control and awareness of what goes into each release. We also wish to use a source control system to track changes (we'll probably use SVN, but I don't think that the actual SCS tool makes a big difference to our decision at this point).
    The options available (that I'm aware of), are:
    1. Full MDL export/import.
    2. Snapshot MDL export/import.
    3. Manual coding of everything using OMB Plus.
    I am loath to use the full MDL export/import functionality since it will be difficult, if not impossible, to easily identify the changes made between one release and the next.
    The snapshot MDL export/import functionality is a little better at comparing releases, but it's still difficult to see exactly what has changed between one version and the next - particularly when a change to a transformation has been made. It also doesn't cope that well with tracking individually made changes to different components of the model.
    The manual coding using OMB Plus seems like the best option at the moment, though I keep thinking "What's the point of using a GUI tool, if I'm just going to code everything in scripts anyway?".
    I know that you can create OMB Plus code-generation scripts to produce your 'creation' scripts, but generating the alteration scripts seems like it would be more complicated than just writing them manually.
    Any thoughts anyone out there has would be much appreciated.
    Thanks
    Liffey

    Well, you can also do per-object MDL exports and then manage those in your version control system. With a proper directory structure it would be fairly simple to code an OMB+ Script that scans a release directory tree and imports the objects one by one. I have done this before, although if you are using OWB as the primary metadata location for database objects then you have to come up with some way to manage object dependency order issues.
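    A rough sketch of that scanning idea, driven from Ant for illustration (the release-tree layout and the import_object.tcl script are assumptions; the actual OMBIMPORT call would live inside that OMB+ script):

    <project name="owb-import" default="import-release" basedir=".">
      <!-- Hypothetical layout: release/<module>/<object>.mdl -->
      <property name="release.dir" value="release"/>

      <target name="import-release">
        <!-- Run the OMB+ import script once per object-level MDL file -->
        <apply executable="OMBPlus.sh" failonerror="true">
          <arg value="import_object.tcl"/>
          <srcfile/>
          <fileset dir="${release.dir}" includes="**/*.mdl"/>
        </apply>
      </target>
    </project>

    Note that <apply> imposes no ordering of its own, so the object-dependency issue mentioned above still needs a naming or manifest convention.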
    The nice thing about this sort of system is that a patch can be easily shipped with only those objects that need to be updated.
    And if you force developers to put object-level MDL into your version control system, then your system should also give you pretty good reporting on what objects were changed for a release and why.
    At my current job we do full exports of the project MDL and have a deployment script that drops the pre-existing deployed version of the project before importing and deploying the new version, which also works quite well - although as you note the tracking of what has changed in a release then needs to be carefully managed elsewhere. But we don't deploy any of our physical database objects through OWB. Those are deployed from Designer, and our patch script applies all physical changes first before we replace the mappings from the OWB project. We don't even bother synching the project metadata for tables / views / etc. at deployment. If the OWB project's metadata for database objects is not in sync with Designer, then we wind up with deployment errors. But on the whole it works pretty well.

  • How do I fix project after "Remove From Version Control" corrupted it?

    I am using RoboHelp 9.0.1 and installed both TortoiseSVN 1.6.9 and the latest PushOK SVNSCC, then added my large RoboHelp project to SVN. I was able to check files in and out of SVN but had several issues with it:
    1) Super, super slow. Working with folders or any renames would take 10 seconds per file, and up to 1 hour if the root folder needed to be refreshed.
    2) I could not perform some actions at all, such as deleting, renaming, or moving folders. I kept getting COM errors.
    I therefore decided that working with SVN and RoboHelp is not practical, at least not over my VPN, so I decided to disconnect the project from source control and just work locally. The only option I saw that sounded like it would do that was "Remove from Version Control". This started a process that lasted for several hours. At the end of it, I now have several significant issues:
    1) The order of the files and folders in my Project Manager is completely wrong now. I have almost 1000 topics and reordering all of them is not possible.
    2) The Table of Contents, Glossary, and Index files appear empty. They had content before.
    3) A couple of the Single Source Layouts I had created are completely missing.
    4) Many, but not all, of the folders have tons of files with the extension ending in "_temp_removed_by_svn"
    5) Many, but not all, of the files are actually gone from SVN, so I can't recover a clean image. There was no warning that this command would actually delete the files from SVN (I thought it would just remove the version control connection).
    6) Who knows what other issues exist that I haven't seen.
    Any idea how I can fix this?
    Thanks in advance,
    Dan

    Are the "_temp_removed_by_svn" files in your local folder or SVN? Let us know how you get on with the new project. It sounds like something is wrong with SVN. Can you use the SVN Log command to see whether there is a different version you can restore. This might also give you an indication of what might have caused the problem. You could try deleting your CPD file. It gets rebuilt it is isn't there anyway. This file can become bloated and it is good practice to delete it when it gets close to 2mb in size. Your project is fairly large and has a lot of folders and may affect performance. Have you considered splitting them and merging the output? I know you probably don't want to consider this right now, but I think it may be a better long term solution.
      The RoboColum(n)
      @robocolumn
      Colum McAndrew

  • Oracle Service Bus Configurations version control and deployment automation

    Hi,
    Currently we have OSB 10gR3 installed and we use the web-based sbconsole to create projects and proxy services. It's all working well and good!!
    We are at the stage where we need to think about source control and migration of artifacts from dev to test and to prod.
    I'm looking for pointers on version controlling the artifacts of OSB projects: what can be version controlled (no binaries), and how do we extract those artifacts?
    How do we customize those artifacts while migrating to different environments in an automated fashion?
    Please point me to best practices and gotchas that we should be aware of when deploying OSB projects from Test to Prod.
    Thanks in Advance!!

    After reading the threads mentioned by Deba, I was able to get this all worked out with sbconsole itself. Experts, please review my approach below and let me know if I have overlooked anything.
    A simple advantage I see in using sbconsole is that it requires less maintenance, i.e. it avoids rolling out another IDE (the Eclipse Workshop plug-in) to IT developers while still providing the functionality we are looking for. Currently, JDeveloper is our primary IDE, so we thought it best to wait until OSB development gets integrated into JDev.
    This is the deployment workflow that worked for us.
    Developer:
    1) Develops proxy services using sbconsole in the Dev environment.
    2) Creates sbconfig.jar using the export functionality available under the System Administration link in sbconsole.
    3) Checks in ALL the files present in the above jar into version control under the proxy service project name.
    4) Creates a customization file using the customization file link in System Administration and modifies the values for each environment, i.e. creates two files: test_customfile and prod_customfile.
    5) Checks the customization files into version control under the same proxy service project.
    Promotion to Test and Prod
    1) From source control, the proxy service is built (actually a jar of all the files, including the customization files, is created).
    2) SCP the proxy_sbconfig.jar file to the Test or Prod box.
    3) Follow the steps mentioned in Auto deploy of ALSB/OSB artifacts - Proxy, WSDL and webservices...
    4) Depending on the server, Test or Prod, pick the right customization file and deploy using Ant; a sketch follows below.
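    A bare-bones Ant sketch of step 4 (the env property, the import_sbconfig.py script, and the jar name are assumptions; the actual import is typically driven through WLST, as described in the thread linked in step 3):

    <project name="osb-deploy" default="deploy" basedir=".">
      <!-- Pass -Denv=test or -Denv=prod on the command line -->
      <property name="env" value="test"/>

      <target name="deploy">
        <!-- Pick the matching customization file checked in earlier -->
        <property name="custom.file" value="${env}_customfile"/>
        <!-- Hypothetical WLST wrapper that imports the sbconfig jar and applies the customization file -->
        <exec executable="java" failonerror="true">
          <arg value="weblogic.WLST"/>
          <arg value="import_sbconfig.py"/>
          <arg value="proxy_sbconfig.jar"/>
          <arg value="${custom.file}"/>
        </exec>
      </target>
    </project>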
    Thanks!!

  • OWB Process Flow - which is the best version control tool?

    Hi all,
    I have just started working with OWB and I have a question about the best way to do something.
    Imagine the scenario below:
    I have two or more requests, for example:
    Request 1: Create a Dimension City.
    Request 2: Create a Dimension Products.
    I have ONE process flow and I need to put my changes inside it. This is my problem.
    In my scenario I don't know which request goes to Prod first.
    If I put Request 1 and Request 2 in my PROCESS FLOW, maybe I need to change it if someone decides to change my request priority.
    Is there something in OWB to "control the version or changes"? For a mapping I export the MDL and commit it to SVN, but I don't know how I can do this for a process flow.
    Is there something that lets multiple people work on different mappings and the SAME PROCESS FLOW?
    What is the best way to work with process flows and version control?
    What are the best practices when it comes to version control?
    Thanks.

    Amit,
    Are you really doing this in 10.1.3.x and not 11g?
    At any rate, I don't see how #2 and #3 relate whatsoever to your choice of a version control system. OK, maybe in #2 if there is some "maintenance" activity to be done against the version control server. Subversion is the open source alternative that you listed there and is pretty commonly used. If your company is already using one of the mentioned tools, why change? About the only thing I'd mention is to advise you NOT to use CVS for well documented reasons (JDev does support it) - if you would have picked CVS otherwise, choose Subversion. As far as question #1 - I've only used Subversion (well, I did use CVS for a while) with JDeveloper, so I can say it was "effective enough for me." In 10.1.3.x, I also used the external svn tools for doing lots of things like merging and so forth; in 11g, the support is much much better.
    Best,
    John

  • Development backup and version control questions

    Two questions I want to ask as an Oracle XE beginner:
    1. How do I back up my work (database objects and application pages) each day?
    2. What is the best practice for version control for Oracle XE development projects? Does anyone use CVS?
    Thank you.

    Two questions I want to ask as an Oracle XE beginner:
    1. How do I back up my work (database objects and application pages) each day?
    Take a look at the backup script in the product directory.
    C.

  • Cp5 Project version control

    Hi all
    Can anyone recommend best practice for version control in Cp5? In other documentation I'm in the habit of saving the version number in the file name, but with Captivate, if there are links to projects, changing file names with every edit will cause problems.
    Any advice would be appreciated.
    Regards
    Amanda

    I don't know of a way to do this so that you can see the version information without opening the Captivate file. My way to include version information within the .cptx file is to put the version number into a caption on the title slide of the presentation. If the version number is only relevant to the developer(s), make the caption invisible, but if you want users to be able to see it in the published presentation, leave it visible and format it appropriately (e.g., a transparent caption with say 8 point font, at the very bottom of the slide).

  • Version control and deployment strategies

    Hi,
    I was looking for input from the community on general strategies for using version control and managing deployments to test/stg/production.
    Currently, I am using Subversion to track my source code, and using the standard Flex Builder build routine to produce my binary output and test. My output is stored locally on a shared VMware drive, so that it can be served up with a Linux VM running Apache (this is not dissimilar to just local testing).
    Now I'm getting ready to deploy to a remote testing server though, so I'm trying to think of the best way to go about it. I would like to tag my code in SVN with a release tag, as is my practice on other platforms. Should I also store the bin folder in SVN? Should I check in the resulting binary code independently in a separate repository/directory and then tag it there? Should I create a new build target to deploy directly to my testing server?
    The issue with the tagging approach seems to be that if I want to rebuild the code or redeploy it for any reason, I would have to check out the tagged code in a separate directory, import it as a new project, rebuild, and then redeploy.
    If I checked the tested binaries into a separate repository/folder, I could always just do an svn export for deployment, but I'm not sure if that would cause any weird issues, and it seems a bit wasteful. I suppose I could build from the tag and zip up the resulting release and just make it available via normal download, but it seems that I would likely then have lots of working-directory checkouts as Flex Builder projects for each tag or release, just so that I could rebuild from them... doesn't seem very elegant.
    I'm very interested in hearing any feedback on this. How do you do it?
    thanks,
    Cliff

    Flex with Ant is still not a very popular combination, since FB does it all for you anyway, but I have to say Ant is a lot more flexible, especially if you combine it with FB; Ant can do pretty much anything...
    Here is a link about Flex's Ant tasks:
    http://livedocs.adobe.com/flex/3/html/help.html?content=anttasks_1.html
    Most of the projects I have done in Flex were with Ant. Here is the general approach:
    For internal testing I let Flex Builder build and deploy within the integrated Tomcat; this is also where I do debugging. FB is pretty good about that.
    Then I have the following targets:
    build target - builds an optimized version of the Flex app, using only the library classes that are needed by the project, and also uses that to feed my module-building tasks so that they exclude all class references (like Button, TabNavigator etc...) from their compiled units,
    something like this (this is my old FB2 example; I don't have an FB3 example handy right now):
    quote:
    <mxmlc file="${basedir}/main.mxml"
           debug="false"
           optimize="true"
           output="${dir.web}/main.swf"
           show-binding-warnings="false"
           show-actionscript-warnings="false"
           link-report="${basedir}/docs/my-links.xml"
           use-network="true">
      <load-config filename="/configdir/modified/flex-config.xml"/>
      <source-path path-element="${FLEX_HOME}/frameworks"/>
      <compiler.library-path dir="." append="true">
        <include name="lib"/>
      </compiler.library-path>
      <compiler.source-path path-element=""/>
      <compiler.source-path path-element="src"/>
      <metadata description="some app">
        <contributor name="John Doe"/>
        <contributor name="Apple Orangino"/>
      </metadata>
    </mxmlc>
    <!-- compile module mymodule -->
    <mxmlc file="${basedir}/mymodule.mxml"
           debug="false"
           optimize="true"
           output="${dir.web}/mymodule.swf"
           show-binding-warnings="false"
           show-actionscript-warnings="false"
           load-externs="${basedir}/docs/my-links.xml"
           use-network="true">
      <load-config filename="/configdir/modified/flex-config.xml"/>
      <source-path path-element="${FLEX_HOME}/frameworks"/>
      <compiler.source-path path-element=""/>
      <compiler.source-path path-element="src"/>
      <metadata description="some app">
        <contributor name="John Doe"/>
        <contributor name="Apple Orangino"/>
      </metadata>
    </mxmlc>
    tag target - tags a release based on a parameter of the latest tag plus a number increment I configure in a properties file.
    Utility targets:
    classpath target - builds the classpath string for the compc task.
    commit target - commits source code before building.
    resources target - copies all resource files to the build directory.
    deploy-local target - deploys to the local integration server.
    deploy-remote target - deploys to the remote UAT server.
    test target - runs test cases over classes and generates a report.
    And of course the famous asDoc target :)
    The good thing is that you can create an "Ant builder" under project properties and chain your targets with Flex Builder's build commands.
    You can also easily integrate with build servers (I use Hudson). Here is an example:
    http://hudson.amostudio.com/
    Mr. Hudson checks out the code for you, builds it using the Ant targets you tell it to use, and reports to you. It's pretty cool and very handy to always have an active build process over the codebase; of course in some cases it's overkill, but most of the time Mr. Hudson is good to have.
    Unfortunately all my Ant files are for external clients and I can't disclose them, but I could write a blog post about some general (apples and oranges) example... hmm, that's actually a good idea :) I can shake off some stress as well :)
    Thanks for the idea :)
    hth
    regards
    levan

  • "version control" for Oracle database?

    Hi,
    My work involves loading data from CSV files into database tables. The data structure is different in the CSV files than in the tables, so the loading is not straightforward and I often make mistakes along the way. I would like to know the best practice for undoing mistakes and rolling back to a meaningful point in time. To make this more concrete, consider the following scenario.
    10:00AM I start loading some data into the database. I create two external tables for my csv files.
    10:30AM I create a PL/SQL script to insert the data from the external tables to the target tables.
    10:35AM I run the PL/SQL script and commit the change.
    11:00AM I notice a bug in my script: some of the data is loaded incorrectly, and some are not loaded.
    11:15AM I fix the bug, try to run it again but it fails this time because of unique constraints.
    At this point, I want my database to go back in time to 10:00AM, so I can start over. How can I do this?
    12:00PM Suppose I manage to start over and successfully loaded the two csv files. I still have more files to load. Before I proceed, I want to somehow "tag" the database so that I can go back to this state later (say two weeks from now, and the rollback segment isn't large enough to go back two weeks).
    Currently I use data pump export/import to undo mistakes on my development server. Due to the size of the database, it is not as efficient as I would like. I am from a Java developer background. The scenario sounds a lot like source version control to me. Is there such a thing in database land? What's the best practice for doing rapid try-error-rollback cycles?

    Is the data in the external tables sorted by some attribute? Consider keeping a small metadata table indicating the last successful key of the attribute that was committed. Then, after the commit, set a savepoint (use the attribute key value for the savepoint name) and continue execution. If you find an error before your next commit, you can roll back to that savepoint and not lose all of the updates prior to it, but remember that a subsequent commit erases all savepoints you have set.
    Flashback of the table(s) is also a good idea. You can get the current SCN by executing 'Select current_scn from v$database' (you may need privileges from the DBA to read this view), and then executing 'Flashback table <table_name> to scn <scn_no>'. You can also use a timestamp in place of the SCN with the Flashback command.
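    For what it's worth, a minimal sketch of how those two statements could be scripted from Ant (the connection details, credentials, and table name are invented; the Oracle JDBC driver must be on Ant's classpath, and FLASHBACK TABLE requires row movement to be enabled on the table):

    <project name="flashback-demo" default="mark-scn" basedir=".">
      <property name="db.url" value="jdbc:oracle:thin:@devhost:1521:XE"/>

      <!-- Note the current SCN before starting a load -->
      <target name="mark-scn">
        <sql driver="oracle.jdbc.OracleDriver" url="${db.url}"
             userid="loader" password="secret" print="true">
          SELECT current_scn FROM v$database;
        </sql>
      </target>

      <!-- Undo a bad load: ant undo-load -Dscn=1234567 -->
      <target name="undo-load">
        <sql driver="oracle.jdbc.OracleDriver" url="${db.url}"
             userid="loader" password="secret" autocommit="true">
          FLASHBACK TABLE target_table TO SCN ${scn};
        </sql>
      </target>
    </project>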

  • OWB 11gR2 - Version Control ?

    I am using OWB 11gR2 (11.2.0.1) on Win XP 32-bit. Our OWB repository is on a Unix server.
    I am thinking of implementing version control of OWB artifacts.
    I searched past postings on this forum and Google, and found some hits, but they were posted a while ago.
    What is your solution for OWB 11gR2 version control? I need to find that kind of solution too.
    Any best practice suggestions are welcome.
    Thanks for keeping this thread alive.

    We are using Subversion, with a folder structure that copies the OWB tree structure. Every object is exported to a separate MDL file. We also store all DML scripts in a separate folder. After developers have finished their changes, they manually gather the affected objects into a collection. We have written a build/export/import tool (OMB+, ActiveTcl, ANT, SQL*Plus) that they then use to export OWB objects automatically from the OWB structure to the Subversion structure. A changelist with links to the exported objects is maintained automatically. Based on that, a release package containing the MDL files from Subversion is created at build time. The release package is also stored in Subversion. When installing a release to another environment (TEST, PROD), the package is automatically downloaded from Subversion, the MDL files are imported to OWB and deployed, and the DML scripts are executed in the database; the script-execution half of that install step is sketched below.
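    For illustration, the script-execution part of such an install step might look like this in Ant (the connection properties and the release/dml directory are invented; the MDL import itself would go through OMB+, as in the earlier OWB post in this thread):

    <target name="install-release">
      <!-- Execute every DML script shipped in the release package -->
      <sql driver="oracle.jdbc.OracleDriver" url="${db.url}"
           userid="${db.user}" password="${db.pass}" onerror="abort">
        <fileset dir="release/dml" includes="**/*.sql"/>
      </sql>
    </target>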

  • Versioning best practices.

    Howdy, community.
    Are there any best practices for version control of FIM configuration?
    I'm curious whether anyone is using something different from TFS.

    On 21.05.2011 00:27, Emily wrote:
    > I would like to use API comparison outside Eclipse in a build environment. Is this possible? If so, what are the prereq jars and APIs I can use? Any help is very much appreciated.
    > The task I wish to accomplish is to compare two jars under different versions and make sure the package versions comply with the OSGi versioning best practices. For example, the package major version has changed if there are API-breaking changes.
    Maybe one of the API Tools Ant tasks can do this. Take a look at the 'API Tools Ant Tasks' reference documentation in the 'Plug-in Development Environment Guide'.
    Dani
    > Thanks in advance!
    > Emily

  • Version Control on BI7?

    Hi Guys,
    During my current project I encountered a problem with version control.
    For various reasons we only have two environments (dev & prod), without a QA system. So here comes the problem.
    All development is performed in the dev environment and then transported to the prod system. We can't control all the development versions in BI. So a crisis happened: we didn't know the current version of our prod system. Very bad news :(
    Can any expert give some advice on how to control versions (InfoObjects, process chains, cubes, queries, etc.)?
    Does the BI system have a version control component? Can the objects (created in BI and needing to be transported) be exported and marked with some version number?
    My purpose is as follows:
    1. Use some method to control versions on the dev and prod systems; I need to see clearly the differences in data models between the dev and prod systems.
    2. Can any tools be used to control versions?
    3. If versions conflict, can we roll back to a previous version quickly and effectively?
    Also, can any experts explain the best practice of version control used in their own projects?
    Thanks ahead,
    Jinwei Zhang, Beijing, China

    Unfortunately there is no version control in BI, so once you have made changes to any objects, the previous versions are overwritten.
    Also, for comparison you would need to do a side-by-side comparison between the Dev and Prod boxes in your case. There is no easy method.

  • Can I have a library of PDF docs with version control? Can it cope with version nos. embedded in the file name?

    I manage a set of documents which are edited in Word but "published" as PDFs (using Word 2010's save to PDF capability).
    I want to create a library for them on SharePoint (my company has SharePoint Online via its Office 365 subscription).
    I'm pretty much a SharePoint novice but even I can see it's easy to upload the documents to a simple library. The things that are giving me a headache are:
    Can I tell SharePoint what the version number of the uploaded PDF document is? The version number as understood by the library needs to match the version number written into the document (where it is called a revision number and increments in whole numbers starting from zero).
    How do I handle replacing the uploaded PDF documents with new versions? If they were Word documents I could edit them by opening them from the SharePoint library, checking them out if necessary, and SharePoint would handle version control. But since the PDFs are generated from editable masters (Word documents) which are NOT on SharePoint, I would need to edit the local Word document on my PC, then generate a PDF version, then upload it to replace the existing PDF document in the library. Is it easy to upload a new document over the top of an old one in a SharePoint library?
    Hoping someone can give me some answers.
    Regards,
    Bruce Officer

    hi Bruce,
    1. It sounds like what you need is to set the starting version number; since your revision number increments in whole numbers, it would match up with SharePoint once the starting version number is set. You can potentially create a new custom field in the library to manually track the version of the uploaded PDF document, but this might not match up with SharePoint's own version number and could get confusing. Another possibility is to upload dummy versions of the PDF document until the SharePoint version matches the revision number, and then delete those dummy versions.
    2. When you upload the PDF document again into the library, it should prompt you to ask whether you want to replace the existing one. If you proceed with the upload, it will replace the document and increment the SharePoint version number.
    Please Mark Answered if my reply solves your problem. Thanks!
    Jeff Thai
    Technical Solutions Architect, AvePoint
    http://www.AvePoint.com
