Schema changes, minimal downtime

We are a software development company using Oracle 10g (10.2.0.2.0). We need to implement schema changes to our application in a high-traffic environment with minimal downtime. The schema changes will probably mean that we have to migrate data from the old schema to new or modified tables.
Does anyone have any experience with this, or a pointer to a 'best practices' document?

It really depends on what "minimal" entails and how much you're willing to invest in terms of development time, testing, hardware, and complexity in order to meet that downtime requirement.
At the high end, you could create a second database, either as a clone of the current system that you would then run your migration scripts against, or as an empty database using the new schema layout, then use Streams, Change Data Capture, or one of Oracle's ETL tools like Warehouse Builder (which uses those technologies under the covers) to migrate changes from the current production system to the new system. Once the new system is basically running in sync with the old system (or within a couple of seconds), you can shut down the old system and switch over to the new system. If the application front end can move seamlessly to the new system, and you can script everything else, you can probably get downtime into the 5-10 second range, less if both versions of the application can run simultaneously (i.e. a farm of middle-tier application servers that can be upgraded one by one to use the new system).
Of course, at this high end, you're talking about highly non-trivial investments of time/ money/ testing and a significant increase in complexity. If your definition of 'minimal' gets broader, the solutions get a lot easier to manage.
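One middle-ground option in 10g, when the changes can be expressed table by table within the same database, is the DBMS_REDEFINITION package, which rebuilds a table to a new layout online and only needs a brief lock for the final swap. A minimal sketch, with hypothetical names (APP.ORDERS being migrated into a pre-created APP.ORDERS_INT that has the new layout):

BEGIN
  -- Raises an error if the table cannot be redefined online by primary key.
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS',
                                    DBMS_REDEFINITION.CONS_USE_PK);
  -- Start the redefinition; col_mapping maps old columns to new ones
  -- (the AMOUNT -> NET_AMOUNT rename here is purely illustrative).
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'APP',
    orig_table   => 'ORDERS',
    int_table    => 'ORDERS_INT',
    col_mapping  => 'ORDER_ID ORDER_ID, AMOUNT NET_AMOUNT',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- Clone indexes, triggers, constraints and grants onto the interim table.
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    'APP', 'ORDERS', 'ORDERS_INT',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    TRUE, TRUE, TRUE, TRUE, num_errors);
  -- Resync as often as you like while the application keeps running...
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'ORDERS', 'ORDERS_INT');
  -- ...then finish; only this step takes a short exclusive lock.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INT');
END;
/

That keeps everything inside one database, at the cost of roughly double the space for each table while it is being redefined.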
Justin

Similar Messages

  • Migrating Hyper-V 2008 R2 HA Clustered to Hyper-V 2012R HA Clustered with minimal downtime to new hardware and storage

    Folks:
    Alright, let's hear it.
    I am tasked with migrating an existing Hyper-V HA Clustered environment from v2008R2 to new server hardware and storage running v2012R2.
    Web research is not panning out; it seems that we are looking at a lot of downtime. I am a VMware guy, and in that world I would likely do a V2V migration at this point with minimal downtime.
    What are my options in the Hyper-V world?  Help a brother out.

    Merging does require some extra disk space, but not much. 
    In most cases the data in the differencing disk is changed data, not additional files.
    The absolute worst case is that the amount of disk space necessary would be the total of the root plus the snapshot.
    Quite honestly, I have seen merges succeed with folks being down to 10 GB free.
    But, low disk free space will cause the merge to take longer. So you always want to free up storage space to speed up the process.
    Merge is designed not to lose data, and that is really what takes the time in the background: ensuring that a partial merge will still allow a machine to run, and that a full merge has everything.
    Folks have problems when their free space hits that critical level of 10GB, and if they have some type of disk failure during the process.
    It is always best to let the merge process happen and do its work.  You can't push it, and you cannot stop it once it starts (you can only cause it to pause).  That said, you can break it by trying to second guess or manipulate it.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • What is the best methodology to handle database schema changes after an application has been deployed?

    Hi,
    VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
    What is a reliable method to follow when there is a schema change for an end user database used by a deployed application?  In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc. 
    I list here the steps it seems I must consider.  If I've missed any, please point them out:
    (1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder in <user>\App Data.
    (2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
    (3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
    (4) Then the application can operate using the new schema.
    This may seem basic, but for those of us who haven't done it, it seems pretty complicated.  Item (3) seems to be the operative issue for database schema changes.  Existing user data needs to be preserved, but using the new schema.  I'd like to understand the various ways it can be done, whether there are specific tools created to handle this process, and which method is considered best practice.
    (1) Should we handle the transfer in a 'one-time use' application method, i.e. do it in application code?
    (2) Should we handle the transfer using some type of 'one-time use' SQL query?  If this is the best way, can you provide some guidance on the different alternatives for how to perform this in SQL, and where to learn/see examples?
    (3) Some other method?
    Thanks.
    Best Regards,
    Alan

    Hi Uri,
    Thank you kindly for your response.  Also thanks to Kalman Toth for pointing out the right forum for such questions.
    To clarify the scenario, I did not mean to imply the end user 'owns' the schema.  I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. 
    If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
    Although I listed step 3 as transferring the data, I should have made clearer that I was trying to express my limited understanding of how this process "might work", since at the present time I am not an expert in this.  I suspected my thinking was limited and that someone would correct me.
    This is basically the reason for my post; I am hoping an expert can point me to what I need to learn about to handle database schema changes when application upgrades are deployed.  For example, if an SQL script needs to be created and deployed, then I need to learn how to do that.  What's the best practice, or the most reliable/efficient way to make sure the end user's database is changed to the new schema after the upgraded application is deployed?  Correct me if I'm wrong on this, but updating the end user database will have to be handled entirely within the deployment tool or by the upgraded application when it first starts up.
    If it makes a difference, I'll be deploying application upgrades initially using ClickOnce from Visual Studio, and eventually I may also use Windows Installer or WiX.
    Again, thanks for your help.
    Best Regards,
    Alan
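    Since EF 6 is mentioned, Code First Migrations is one built-in way to version an end-user database. Expressed as plain SQL run at application startup, the same idea looks like this sketch (all object names are hypothetical): a version table records the schema level, and each upgrade step is applied only if the database is still below it.

    -- Hypothetical startup script: create the version table on first run.
    IF OBJECT_ID(N'dbo.SchemaVersion', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);
        INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
    END;

    DECLARE @v int = (SELECT Version FROM dbo.SchemaVersion);

    -- Step 1 -> 2: add a nullable column so existing user data is untouched.
    IF @v < 2
    BEGIN
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
        UPDATE dbo.SchemaVersion SET Version = 2;
    END;

    Run at every startup, the script is idempotent, so an upgrade that adds steps 3 and 4 later can bring any older database forward in one pass.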

  • Best Practice for Replicating Schema Changes

    Hi,
    We manage several merge replication topologies (each topology has a single publisher/distributor with several pull subscriptions; all servers/subscribers are SQL Server 2008 R2).  When we have a need to perform schema changes in support of pending software upgrades, we do the following:
    a) Have all subscribers synchronize to ensure there are no unsynchronized changes present in the topology at the time of schema update,
    b) Make full copy-only backup of distribution and publication databases,
    c) Execute snapshot agent,
    d) Execute schema change script(s) on the publisher (* when c and d are reversed, this has caused issues with changes to view definitions, which has resulted in us having to reinitialize subscriptions),
    e) Have subscribers synchronize again to receive schema updates.
    Each topology has its own quirks in terms of subscriber availability and, consequently, the best time to perform such updates.
    The above process would seem necessary when making schema changes to remove tables, columns and/or views from the database, but when schema changes are focused on adding and/or updating objects, and/or adding/updating data, is the entire process above necessary?
    In this instance, if it's possible to remove the step of coordinating the entire topology to synchronize prior to performing these changes, I would like to do that.
    The process as we currently perform it works without issue, but I'd like to streamline it if and where possible, while maintaining integrity and avoiding potential for non-convergence.
    Any assistance or insight you can provide is greatly appreciated.
    Best Regards
    Brad

    If you need to make schema changes, then you will need to use ALTER syntax at the publisher.  By default the schema change will be propagated to subscribers automatically; the publication property @replicate_ddl must be set to true.  This is covered in
    Make Schema Changes on Publication Databases.
    This can be done at any time, without the need to synchronize unsynchronized changes, make a backup, or execute the snapshot agent.
    Adding a new article involves adding the article to the publication, creating a new snapshot, and synchronizing the subscription to apply the schema and data for the newly added article. Reinitialization is not required, but a new snapshot is.
    Dropping an article from a publication involves dropping the articles, creating a new snapshot, and synchronizing subscriptions. Special considerations must be made for Merge publications with parameterized filters and compatibility level lower than 90RTM.
    This is covered in
    Add Articles to and Drop Articles from Existing Publications.
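    For example, a sketch with a hypothetical publication and table (sp_changemergepublication is the documented way to toggle the property):

    -- Make sure DDL replication is on for the publication.
    EXEC sp_changemergepublication
        @publication = N'MyMergePub',
        @property    = N'replicate_ddl',
        @value       = 'true';

    -- Run the DDL at the publisher; subscribers pick it up on their
    -- next synchronization, with no snapshot or backup required.
    ALTER TABLE dbo.Customer ADD LoyaltyTier tinyint NULL;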
    Brandon Williams

  • OVD 11.1.1.5 Schema Change Not Appearing in Adapter

    I updated the schema in 11.1.1.5 of OVD. I need custom attributes/object classes available for database adapter mappings. The schema screen shows the attributes and object classes as does the schema.user.xml file. The schema.user.xml file is included in the schemas under Server Settings.
    The problem is that when I go to select the object class for the attribute mappings in the database adapter, the new object classes don't appear. I rebooted OVD using opmnctl and the schema changes are still not an option to select in the dropdown menu.
    Any suggestions for fixing this would be great.

    Hi Hamm,
    The fact that you are saying that "hfm text appears using hfm url" can mean that you have not configured the workspace web server after you configured the HFM. Have you done that?
    As another test, have you tried to connect to HFM via Smart View to an existing app?
    Regards,
    Thanos

  • Using Change Data Capture in SSIS - how to handle schema changes

    I was asked to consider change data capture for a database recently.  I can see that from the database perspective it's quite nice.  When I considered how I'd do this in SSIS, it seemed pretty obvious that I might have a problem, but I wanted to confirm here.
    The database in question changes its schema about once per month in production.  We have a lot of controls in our environment, so every time a table's schema is changed, I have to file a formal change request to deal with a change to my code base, in this case my SSIS package; it can be a lot of work.  If I wanted to track the data changes for inserts, updates and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package with every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?
    Thanks,
    Keith

    Hi Keith,
    What is your exact requirement?
    If you want to capture object_created, object_deleted or object_altered information, you can try using Extended Events.
    As mentioned in your OP:
    "If I wanted to track the data changes for inserts, updates and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package with every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?"
    If you want the databases in two different environments to be in sync, then take periodic backups and apply (restore) them on the destination DB,
    or
    you can also try SQL Server replication if it is really needed.
    As I understand from your description, if you want the table data & schema to be in sync in two different databases, then create a job (a script that drops the destination DB table and recreates it as a copy of the source DB table) as per your requirement:
    --CREATE DATABASE db1
    --CREATE DATABASE db2
    -- Source table lives in db1.
    USE db1
    GO
    CREATE TABLE tbl(Id INT)
    GO
    -- Rebuild the copy in db2: drop it if present, then copy schema
    -- and data from the source in one SELECT INTO statement.
    USE db2
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE name = 'tbl' AND type = 'U')
        DROP TABLE dbo.tbl
    SELECT * INTO db2.dbo.tbl FROM db1.dbo.tbl
    SELECT * FROM dbo.tbl
    GO
    --DROP DATABASE db1,db2
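    Coming back to the CDC part of the question: enabling capture is a one-time setup per database and table. A sketch (names hypothetical); when a table's schema later changes, the usual pattern is to create a second capture instance with the new column list and repoint consumers, though an SSIS data flow with hard-coded column metadata will still need to be re-opened and refreshed:

    USE db1
    GO
    -- Enable CDC at the database level, then per table.
    EXEC sys.sp_cdc_enable_db;
    GO
    EXEC sys.sp_cdc_enable_table
        @source_schema    = N'dbo',
        @source_name      = N'tbl',
        @role_name        = NULL,            -- no gating role
        @capture_instance = N'dbo_tbl_v1';
    GO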
    sathya - www.allaboutmssql.com

  • Removing schema changes in AD made by software

    Hi,
    I've got an application in my forest, which has extended the schema. This is not Exchange, but if I uninstall this application for any reason, how can I remove all the associated schema changes from AD?
    Thanks

    Hello,
    It is not possible to roll back the schema to the previous state. As other experts mentioned in their posts, the only way is to do a full forest recovery. Have a look at this link for more information:
    Best Practices for Implementing Schema Updates
    Regards.
    Mahdi Tehrani | www.mahditehrani.ir

  • XML schema changes

    Hi,
    Can someone tell me what's the best way to handle XML schema changes if there are already XMLType tables created using the schema?
    I checked the Oracle XML DB developer's guide but didn't find any information on updating schemas.
    Thanks,
    Hiren

    Fundamentally, the process behind copyEvolve is, for each table or column based on the schema:
    1. Create a table with a column of XMLType which is not based on the XML schema, e.g. XMLType stored as CLOB.
    2. Copy the content of the existing table into the CLOB-based table.
    3. Delete the existing schema.
    4. Register the new schema.
    5. Recreate the tables and columns.
    6. Copy the data into the new tables, using the supplied XSL transformation where necessary.
    7. Delete the temporary tables.
    The manual schema evolution method outlined in the 10g XML DB documentation will work in 9.2.x. The copyEvolve() method is only available in 10g. It should be noted that we do cheat a little under the covers with the copyEvolve() process, so in general, in 10g, using copyEvolve() will be a little faster than the above process.
    The other thing relates to using the repository. We carefully preserve any OIDs related to the XMLType tables affected by this process so that all the repository references remain valid.
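    For reference, a minimal copyEvolve() call might look like the following sketch; the schema URL, directory object, and file name are all hypothetical, and XSL transforms can be supplied when documents need rewriting to fit the new schema:

    BEGIN
      DBMS_XMLSCHEMA.copyEvolve(
        schemaURLs => xdb$string_list_t('http://example.com/po.xsd'),
        newSchemas => XMLSequenceType(
                        XMLType(bfilename('XSD_DIR', 'po_v2.xsd'),
                                nls_charset_id('AL32UTF8'))),
        transforms => NULL);  -- one XSL document per schema URL, if needed
    END;
    /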

  • Order Management 11i Schema Changes

    Hello,
    Does anyone have all the schema changes from Order Entry to Order Management listed out? Oracle has it all over the place, and I'd appreciate it if someone who has figured this out posts it.
    TIA,
    Sudha

    I wanted more of a mapping of old --> new tables/columns, etc. For other modules we have Product notes where all this is listed, but OM is considered a new product and is not included in the product notes :-(
    I am doing my own mapping, but wanted to know if someone else has it so I don't miss anything (we have a lot of custom views/reports/procedures which need to be upgraded, and I am having to rewrite most of them).
    I'd appreciate it if someone who has done this mapping can share it.
    Can you please send it to [email protected]

  • Generic query on documenting and maintaining schema changes -Oracle 10g

    Hi All,
    This is a generic query and I was not able to find a particular forum to put it in for suggestions and help; I have put this in the documentation forum also but haven't got any inputs.
    Could you all please advise a good and easy way I can store all DB-related stuff, including every detail and the minutest change done?
    I work as an Oracle DB developer and it has been particularly difficult to keep track of the changes. We have a Development pipeline (a code base for development), a Testing pipeline (another code base, for testing), and then Production Deployment (another one, for clients).
    Presently we have all our DDL, SQL, etc. scripts stored in a sort of shared location, but keeping them updated with every change done in any of the above pipelines is very difficult, and I end up finding discrepancies between the stored versions and what is in Testing and Production.
    I typically spend a good deal of time comparing entire scripts and schema changes to try to figure out what the right and latest version of a script should be.
    So I need your inputs on any particular free tool, best practices, and a good process for easily tracking and maintaining DB changes.
    --Thanks
    Dev

    The problem with most configuration management systems is that they were made for files, not database objects. Files such as DLLs get replaced by the new version upon deployment, but database tables do not: tables get upgraded with ALTER scripts. So if you want to maintain CREATE TABLE scripts, you need to upgrade them with every change, as well as create ALTER TABLE scripts.
    For a free configuration management system, see http://subversion.tigris.org/. Such software will also do file comparisons including version comparison.
    For comparing installation scripts to existing schemas, I would create a new schema with the installation script and then compare that new schema to an existing schema using the schema comparison feature in Toad or PLSQL Developer.
    Alternatively, you can forget about maintaining installation scripts and just take an export without data from your production or test schema whenever you need to create a new installation. You can also extract DDL from prod using the dbms_metadata package. Embarcadero Rapid SQL was also good for extracting DDL.
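    For the dbms_metadata route, extracting the DDL for a whole schema is a short query per object type. A sketch, run from SQL*Plus with SCOTT as a placeholder schema:

    SET LONG 100000 PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, owner)
    FROM   all_tables
    WHERE  owner = 'SCOTT';

    Spooling that output into version control after each release gives you a baseline to diff against the scripts in your shared location.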

  • Migration Verification with schemas changed

    Hi
    Does the migration verification tool support this scenario?
    Source database -
    Target database - with the schema changed in some tables.
    We have changed the schemas in some tables. Do we have any mechanism by which we can check the migration's correctness (maybe by giving the mapping for fields between source and target tables)?
    Can this tool do this kind of verification?
    We have both databases as Oracle only.
    Alternatively, I have been working to develop a tool to do the same,
    but the basic problem is:
    how do we ensure that the rows we are checking (from the source and target tables) are related ones, so that we can check all the fields? We don't want to do m*n comparisons. And even after doing m*n comparisons, how do we ensure that we have checked the right row, given that the same value can appear in a column for many rows, which will match?
    I hope that my problem is understood.
    Thanks
    Atul

    There is a commercially available/generalized tool available to verify migrated data that permits the user/developer to specify the desired source to destination mappings between two databases with dissimilar schemas. The tool provides the ability to map source to destination fields, define the mappings between the fields, compare source to target records after the migration has been completed and to report on the results. The tool has been successfully used to test/validate results for FDA compliance as well as other mission critical requirements.
    The product is TRUcompare and additional information is available from the product website www.trucompare.com, or you could call for additional information (800) 880-4540.
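    If a hand-rolled check is acceptable and each table pair shares a business key, you can avoid m*n comparisons entirely by comparing keyed projections in both directions. A sketch with hypothetical tables, columns, and mapping (OLD_AMOUNT in the source maps to NET_AMOUNT in the target):

    -- Source rows with no exact counterpart in the target.
    SELECT order_id, customer_id, old_amount AS net_amount
    FROM   src_schema.orders
    MINUS
    SELECT order_id, customer_id, net_amount
    FROM   tgt_schema.orders
    /
    -- Target rows with no exact counterpart in the source.
    SELECT order_id, customer_id, net_amount
    FROM   tgt_schema.orders
    MINUS
    SELECT order_id, customer_id, old_amount AS net_amount
    FROM   src_schema.orders
    /

    Both queries returning zero rows means every row matched on its key and every mapped field agreed, with no need to pair rows by hand.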

  • Schema Changes from JSIM 5.5 to 6.0

    Hello,
    In reviewing the documentation for JSIM 6.0, it appears to note that there are schema changes from v5.5 in 6.0. However, these changes are not detailed in the doc. Can anyone detail for me exactly what these schema changes are, whether any scripts are required and available, and/or whether the schema is automatically updated if an upgrade in place occurs?

    There has been a doc bug logged about the fact that this was missing; it is fixed in the SP1 release notes.
    There are scripts provided for you to upgrade from 5.x to 6.0 in the db_scripts directory of the install. There are a number of changes, which are different for each database type. Each database type comes with a script called upgradeto2005Q4M3.*
    You can only upgrade from a version 5.x repo to version 6, not from before 5.x. You run the upgrade scripts before you import the update.xml file.
    Major changes:
    Oracle can now use BLOBs (optional)
    MySQL 4.1 is the only version supported (4.0 cannot be used anymore)
    Index changes for performance enhancements
    The scripts contain comments explaining what they do and why they do it.
    WilfredS

  • Major version upgrade of WebLogic with zero/minimal downtime

    From what I can tell, the recommended approach for supporting minimal downtime during major version upgrades (e.g. WL 9 -> WL 10) is to have 2 domains available in the production environment.
    Leave one running to support existing users, upgrade the other domain, then swap to perform the upgrade on the first domain.
    We are planning on starting out with WL 9.1, but moving forward we require very high availability...(99.99%).
    Is this my only option?
    According to BEA marketing literature, service pack upgrades can be applied with "zero" downtime...but if this isn't reality, I'd like to hear more...
    Thanks...
    Chuck

    Have gotten as far as upgrading all of the software, deleting /var/db/.AppleSetupDone, and rebooting.  It brought me back in to Setup Assistant and let me choose "migrate from another mac os x server" and is now sitting waiting for me to take the old server down and boot it into target disk mode.  Which we can probably do Sunday at about 2am or so...
    You know, Setup Assistant should really let you run Software Update BEFORE migrating from another machine.  We have servers that can't be down for SoftwareUpdates in the middle of the day...

  • Any tools to rollback schema change and map new data to old schema?

    What is the best way to rollback a schema change, but preserve the data that is in the new schema?
    We are updating an application for a customer; however, the customer would like to roll back to the old application if, after a few days, they run into a critical problem.
    The application schema is changing - not a lot, but a number of tables will have extra columns.
    To roll back, one would have to re-create the original schema and load data into it from the updated schema.
    I thought that the Migration toolkit/workbench might have been able to do this for Oracle --> Oracle migrations, but it appears to only support non-Oracle Database migrations to Oracle.
    Are there any Oracle tools, in say OEM or elsewhere, that can handle data transformations?

    Kirk,
    You are correct; the focus of the Oracle Migration Workbench is on non-Oracle databases.
    I am not aware of a tool that can automatically do what you want, but it should be possible. Depending on the version of Oracle they are running, it is now possible to drop columns, for example. Your approach would be based on the nature of the changes. Have you looked into the OEM Change Management pack?
    http://www.oracle.com/technology/products/oem/pdf/ds_change_pack.pdf
    You might also post your message to the Database General forum.
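    Since the upgrade only adds columns, one low-tech rollback is simply to drop them again once any data in them has been captured. A sketch with hypothetical names (SET UNUSED defers the physical work so the rollback itself is quick):

    -- Capture the new columns' data first if it must survive the rollback.
    CREATE TABLE orders_new_cols AS
      SELECT order_id, loyalty_tier FROM orders;

    -- Then return the table to its old shape.
    ALTER TABLE orders SET UNUSED (loyalty_tier);
    ALTER TABLE orders DROP UNUSED COLUMNS;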
    Donal

  • Schema change questions

    Hello,
    A few questions, about schema changes:
    1) How can we add a field to all documents in a collection and set a default value for it? For example, let's say that we have a collection with documents that have two fields, Field1 and Field2. We want to add Field3 to all the documents in the collection and set its default value. How can we do that?
    2) If it's possible to do #1, can we add Field3 to only certain documents in a collection and not to others? How would we do that?
    3) Can we rename fields, or delete fields, for a) all documents in a collection? b) only certain documents in a collection? And how can we do that?
    Thanks,
    Mike

    Absolutely!
    I think you'd be interested in DocumentDB's server-side programming model. It allows you to run batching and sequencing operations inside the database, avoiding the need to do extra round trips between your application and the database.
    For this scenario, I'd consider writing a bulk update stored procedure. You can implement a continuation model inside the script for updates on large batches. The steps would be to query for the specific documents you are looking for, then for each document apply your updates (e.g. adding Field3, renaming fields, deleting fields) and write it back to the database.
    I have a few related stored procedure code samples available:
    Bulk deleting specific documents based on a user-specified query:
    https://github.com/aliuy/azure-node-samples/blob/master/documentdb-server-side-js/bulkDelete.js
    Update a document (with support for set, unset, and rename):
    https://github.com/aliuy/azure-node-samples/blob/master/documentdb-server-side-js/update.js
    I will work on getting a bulk update sample for you :)
    I'd also recommend checking out this 5 minute video for a brief introduction on server-side scripting:
    http://channel9.msdn.com/Blogs/Windows-Azure/Azure-Demo-A-Quick-Intro-to-Azure-DocumentDBs-Server-Side-Javascript
    And for more complete documentation, check out:
    http://azure.microsoft.com/en-us/documentation/articles/documentdb-programming/
