Minimizing downtime

I'm looking at ways to minimize the downtime for a Weblogic 5.1 server
          when we do a release. Right now, we are:
          a) deploying the new code to a "new" directory
          b) Running the shutdown command
          c) Watching "ps" until the java processes go away
          d) Renaming "server" directory to "old"
          e) Renaming "new" directory to "server"
          f) Restarting Weblogic
          It occurs to me that the only resource that needs protecting in this
          process is the port, so I think I should be able to replace item "c"
          above with:
          c) Run "nmap" until port 7001 is no longer listed as open.
          Is this safe to do in a clustered environment? Are there any issues
          with starting the new server before the shutting down server has
          exited? The only one I can think of is getting mixed messages in a log
          file, but that doesn't worry me. If it becomes an issue I can just
          rotate the log files.
          Is there anything else that might cause a problem? We'll be going to
          Weblogic 7 in a few months, but it would be nice if we had an interim
          solution now.
          Thanks for any help.
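          For concreteness, the replacement step "c" I have in mind is a
          small shell loop like the one below (host name and poll interval
          are just placeholders):

while nmap -p 7001 localhost | grep -q '7001/tcp.*open'; do
    sleep 5   # the old server still owns the port; keep waiting
done
# nothing is listening on 7001 any more; safe to start the new instance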
          


Similar Messages

  • Major version upgrade of WebLogic with zero/minimal downtime

    From what I can tell, the recommended approach for supporting minimal downtime during major version upgrades (e.g. WL 9 -> WL 10) is to have 2 domains available in the production environment.
    Leave one running to support existing users, upgrade the other domain, then swap to perform the upgrade on the first domain.
    We are planning on starting out with WL 9.1, but moving forward we require very high availability...(99.99%).
    Is this my only option?
    According to BEA marketing literature, service pack upgrades can be applied with "zero" downtime...but if this isn't reality, I'd like to hear more...
    Thanks...
    Chuck

    Have gotten as far as upgrading all of the software, deleting /var/db/.AppleSetupDone, and rebooting.  It brought me back in to Setup Assistant and let me choose "migrate from another mac os x server" and is now sitting waiting for me to take the old server down and boot it into target disk mode.  Which we can probably do Sunday at about 2am or so...
    You know, Setup Assistant should really let you run Software Update BEFORE migrating from another machine.  We have servers that can't be down for SoftwareUpdates in the middle of the day...

  • Migrating Hyper-V 2008 R2 HA Clustered to Hyper-V 2012R HA Clustered with minimal downtime to new hardware and storage

    Folks:
    Alright, let's hear it.
    I am tasked with migrating an existing Hyper-V HA Clustered environment from v2008R2 to new server hardware and storage running v2012R2.
    Web research is not panning out; it seems that we are looking at a lot of downtime. I am a VMware guy, and in that world I would likely do a V2V migration at this point with minimal downtime.
    What are my options in the Hyper-V world?  Help a brother out.

    Merging does require some extra disk space, but not much. 
    In most cases the data in the differencing disk is changed data, not additional files.
    The absolute worst case is that the amount of disk space necessary would be the total of the root plus the snapshot.
    Quite honestly, I have seen merges succeed with folks being down to 10 GB free.
    But, low disk free space will cause the merge to take longer. So you always want to free up storage space to speed up the process.
    Merge is designed not to lose data, and that is really what takes the time in the background: ensuring that a partial merge will still allow the machine to run, and that a full merge has everything.
    Folks have problems when their free space hits that critical level of 10GB, and if they have some type of disk failure during the process.
    It is always best to let the merge process happen and do its work.  You can't push it, and you cannot stop it once it starts (you can only cause it to pause).  That said, you can break it by trying to second guess or manipulate it.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Schema changes, minimal downtime

    We are a software development company, using Oracle 10g (10.2.0.2.0). We need to implement schema changes to our application, in a high traffic environment, with minimal downtime. The schema changes will probably mean that we have to migrate data from the old schema to new or modified tables.
    Does anyone have any experience with this, or a pointer to a 'best practices' document?

    It really depends on what "minimal" entails and how much you're willing to invest in terms of development time, testing, hardware, and complexity in order to meet that downtime requirement.
    At the high end, you could create a second database either as a clone of the current system that you would then run your migration scripts against or as an empty database using the new schema layout, then use Streams, Change Data Capture, or one of Oracle's ETL tools like Warehouse Builder (which is using those technologies under the covers) to migrate changes from the current production system to the new system. Once the new system is basically running in sync with the old system (or within a couple of seconds), you can shut down the old system and switch over to the new system. If the application front end can move seamlessly to the new system, and you can script everything else, you can probably get downtime to the 5-10 second range, less if both versions of the application can run simultaneously (i.e. a farm of middle-tier application servers that can be upgraded 1 by 1 to use the new system).
    Of course, at this high end, you're talking about highly non-trivial investments of time/money/testing and a significant increase in complexity. If your definition of 'minimal' gets broader, the solutions get a lot easier to manage.
    Justin
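    For a rough sense of what the Streams piece of Justin's suggestion involves, here is a minimal sketch (schema, queue, and process names are placeholders, and a real setup also needs a propagation and an apply side on the destination database):

sqlplus -s / as sysdba <<'EOF'
BEGIN
  -- Queue that will carry the captured changes
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strm_qt',
    queue_name  => 'strm_q');
  -- Capture DML for one application schema into that queue
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'APP',
    streams_type => 'capture',
    streams_name => 'capture_app',
    queue_name   => 'strm_q',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/
EOF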

  • Migrating SQL Server Published databases with minimal downtime

    All,
    I have a requirement in which the databases on Server A have to be migrated to Server B. Server A has two databases in Merge replication and one with Transactional replication. Server A acts as the distributor itself.
    My question is: how can I migrate all these production databases, which are also publishers, to the new Server B with minimal downtime and without breaking replication?
    I read that if Server B is renamed as Server A then replication will continue without any errors. Please suggest how to migrate the publisher to Server B with minimal downtime.
    Also, I would like to know: if the replication is Server A -> Server C -> Server D, and I migrate Server A to Server B without dropping replication, will it affect the replication setup from Server C to Server D? Does it cause any replication errors?
    Thanks in advance!

    Just to throw some thoughts out there.
    It sounds like the "blocker" in the overall migration strategy is the Distribution database and the fact that it is local to a Publisher.
    Once you have migrated the Distributor, the remainder of the problem
    is more straightforward, using solutions such as Database Mirroring to failover to a new server with minimal downtime for example.
    If I were performing this migration project, I would agree with the business that there was to be a
    short replication service interruption whilst we migrated to a new Distribution database/server. I would also apply Hillary's recommendation to use the opportunity to split-out the Distribution database to a dedicated server.
    Using this approach, you could build and configure the new Distribution database before the migration "cut over" phase. The cut-over would require that you drop the replication topology and re-create it using the new Distributor DB/Server. You can script
    ALL the steps required for this ahead of time, resulting in the actual cut-over to the new Distributor being completed in a couple of minutes max (the databases involved are already in sync).
    Once done, you can proceed ahead with the remainder of the migration.
    Overall this is not a trivial project you have on your hands and in my opinion you absolutely must perform the process in a test environment before attempting to do so in Production.
    Good luck.
    John Sansom | SQL Server MCM
    Blog |
    Twitter | LinkedIn |
    SQL Consulting
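    To give a flavour of the scripting John describes, registering the new dedicated Distributor might start out like this (server names and the password are placeholders; the full script would also re-create the publications and subscriptions):

# 1. Create the Distributor and distribution database on the new server
sqlcmd -S NEWDIST -Q "EXEC sp_adddistributor @distributor = N'NEWDIST', @password = N'StrongPass1'; EXEC sp_adddistributiondb @database = N'distribution';"
# 2. On the publisher, register the remote Distributor
sqlcmd -S SERVERA -Q "EXEC sp_adddistributor @distributor = N'NEWDIST', @password = N'StrongPass1';"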

  • Migrate entire database to ASM on another server via RMAN minimal downtime

    Hi
    I was looking for a procedure to migrate non ASM production databases to ASM via RMAN on a separate server with minimal downtime (backup as copy/switch database to copy technique). We have TDPO for tape backup and I normally rman clone test databases between servers but this involves too much downtime for production. The procedure in the ASM Admin Guide (Chapter 8) assumes the databases are on the same server.
    Thanks
    Tom Cullen

    tcullen wrote:
    Hi
    I was looking for a procedure to migrate non ASM production databases to ASM via RMAN on a separate server with minimal downtime (backup as copy/switch database to copy technique). We have TDPO for tape backup and I normally rman clone test databases between servers but this involves too much downtime for production. The procedure in the ASM Admin Guide (Chapter 8) assumes the databases are on the same server.
    Thanks
    Tom Cullen
    Dear Tom, why do you think you'll have downtime on the production database? The database will be running while the clone is processing. Check the following link:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#BGBDCDGJ
    RMAN DUPLICATE DATABASE From ASM to ASM: Example
    Kamran Agayev A.
    Oracle ACE
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/
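    As a rough illustration of Kamran's suggestion, the duplication itself boils down to something like this (connect strings and passwords are placeholders; the auxiliary instance must already be started NOMOUNT on the new server, with the backups accessible there):

rman target sys/***@PROD auxiliary sys/***@NEWDB <<'EOF'
RUN {
  ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO NEWDB;
}
EOF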

  • How do I move databases using RMAN with minimal downtime ?

    How can I do the following using RMAN ?
    DB version 10.2.0.4
    Redhat 5.2
    I am not using an RMAN catalog.
    In the past I have moved large databases from one server to another with 5 minutes of downtime using backups done the old way, by putting tablespaces in backup mode and making copies of the datafiles.
    I used the following method :
    ========> Part 1
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE set DATABASE "VPMY" RESETLOGS ARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 3
         ....etc
    LOGFILE
    ......... log file names
    DATAFILE
    ... list of datafiles
    CHARACTER SET US7ASCII;
    ========> Part 2
    Up until the scheduled downtime, I would copy the archive logs from the production server to the new server and run the following to apply the latest archive logs:
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    ->AUTO<-
    ========> Part 3
    After applying the last archive log while in restrict mode -
    ALTER DATABASE OPEN resetlogs;
    My question is, how do I do Part 2 in RMAN? I have managed to duplicate databases and restore databases using RMAN to a different server, but this obviously only covers the data up to the backup point. How can I do parts 1-3 above with downtime of about 5 minutes, as I have done using the old methods?
    Any help is much appreciated.

    You should be able to recover as you go with RMAN as well.
    Copy the archived logs from A to B and apply them as they come in.
    run
    set until sequence x thread 1;
    recover database;
    if you're not opening the database after recovery you can just increment the set until sequence as the logs come in and do a new recovery.
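    Wrapped up as a runnable command on server B (database mounted, not open), that looks roughly like this; the sequence number is a placeholder that you bump as each new log arrives:

echo "RUN { SET UNTIL SEQUENCE 1234 THREAD 1; RECOVER DATABASE; }" | rman target /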

  • Migrate Data from one SAN to other in AlwaysON configuration with no/minimal downtime

    Team,
    There is a requirement to move the current DBs from the existing SAN to new IBM SAN disks. I have outlined the steps below. Request you to please add your inputs/challenges/risks.
    This is in Always On and we have 3 nodes.
    Server A , Server B and Server C
    A and B are Synchronous
    C is hosting Async replica.
    Note: The SQL binaries are installed on the E: drive, which is a SAN drive. Is this going to be impacted by my steps below?
    1. Present the new SAN of the same size on all the nodes in the Availability Cluster.
    2. Break the secondary replica. On the secondary replica, migrate the data by physically moving the DB files from the old SAN to the new SAN.
    3. Rename the new SAN volumes back to the original filenames as they were present earlier. Bring up the services and join to the Availability Group. Check whether the DBs are synchronized with the Primary.
    4. Now fail over the Primary to the Secondary replica; this will be a short outage.
    5. Follow the same steps on the Primary as mentioned in Steps 1-3.
    6. Check the synchronization of the DBs and perform all the sanity checks.
    Thanks!
    Sharath
    Best Regards, Sharath

    Hi Sharath,
    This can easily be achieved by a storage-level migration done by the storage vendor. In your case IBM should be able to do it, since they have sophisticated tools for SAN migration
    (one of these is Storage Virtualization for Data Migration and Consolidation); they don't need downtime during the storage sync process. Your downtime will only be the cut-over time if the SAN vendor does the data migration for you.
    Otherwise,
    1. Take the read-only replica out of the Always On group.
    2. Since SQL is also installed on a SAN drive, I strongly suggest you rebuild the SQL Server as new (no copy of data from SAN to SAN).
            Note: Drive allocation and drive letters should be identical.
    3. Once you rebuild the SQL Server, one challenge is to migrate the SQL logins (you must migrate users using the same SIDs, to avoid permission issues).
    4. Then add it to the Always On group, and wait for the databases to sync.
    5. Then manually move the Always On group read/write replica to the new node.
    6. Follow steps 1, 2, 3 and 4 in sequence.
    Hope this will help you.
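    For reference, the remove/failover steps above map to T-SQL along these lines (group and server names are placeholders; note the failover is run on the secondary being promoted):

# Step 1 - on the primary, drop the read-only replica from the group
sqlcmd -S SERVERA -Q "ALTER AVAILABILITY GROUP [AG1] REMOVE REPLICA ON N'SERVERC';"
# Step 5 - after the rebuilt node has re-joined and synced, promote it
sqlcmd -S SERVERC -Q "ALTER AVAILABILITY GROUP [AG1] FAILOVER;"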

  • Minimizing Downtime during Upgrade

    Hi All,
    We are in the process of upgrading from R/3 4.7 EE 2.00 to ECC 6.0 SR3.
    The details of the legacy and new landscapes are as follows:
    Legacy Landscape
    OS       Solaris 9
    DB       Oracle 9.2.04
    SAP     R/3 4.7 EE 2.00
    New Landscape
    OS       Solaris 10
    DB       Oracle 10g
    SAP     ECC 6.0 SR3
    We have taken an offline backup of the live PRD and restored it onto a standby server. We have also upgraded Oracle 9i to 10g, which took around 20 hours.
    My question is: we want to minimize the downtime by directly installing Oracle 10g with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
    Is this scenario possible? Please give your valuable suggestions from your past experiences.
    Thanks,
    Kshitiz Goyal

    > My question is: we want to minimize the downtime by directly installing Oracle 10g with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
    Yes - this is possible.
    I suggest you get the newest installation CD set for 4.7 which supports Oracle 10g (using sapinst, not R3SETUP):
    See note "969519 - 6.20/6.40 Patch Collection Installation : Oracle/UNIX", Section 3:
    3.) Retroactive Release of Oracle 10.2
    Markus

  • Reinstall Xserve OS X Server 10.4, minimal downtime

    Hi,
    Ever since a power cut 2 weeks ago we have had constant minor problems with our Xserve, mainly with file sharing, and a couple of kernel crashes.
    I am going to reinstall the server but I want to keep the downtime to an absolute minimum. Our Xserve is a G4. Is it possible to set up and install OS X Server 10.4 onto an external FireWire drive on an iMac G5, then copy it across onto the Xserve when complete? Will there be any issues with it being installed on a G5 processor and then copied onto a G4?
    Any issues with setting up OS X Server on an iMac?
    Regards
    Tim Pearson
    Grafika Ltd

    I think I would be tempted to do it the other way around...
    Clone the existing system to the external, test it, and then run this whilst doing the new install onto the 'real' server. More downtime for the 'real' server but no later concerns about compatibilities.
    -david
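    If it helps, the clone itself is usually done with asr on that era of OS X; a sketch, assuming Tiger's verb-style syntax and placeholder volume names (boot from another volume or use target disk mode so the source is quiescent):

sudo asr restore --source /Volumes/ServerHD --target /Volumes/FireWireClone --erase --noprompt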

  • Cluster Switch Version - Minimizing Downtime

    Hi guys,
    We are due to do a switch version for a customer with a PUB/SUB configuration. Both have had the update installed, and it is sitting in the inactive partition.
    In the past, what I would have done is to disconnect the SUB from the network, switch version on the PUB, allow it time to come up and for the phones to upgrade and re-register. After this, I would switch version on the SUB, allow it to come up, and then re-connect it to the network.
    In this case, there is no time when the business is completely closed, so the customer is worried about minimizing the amount of downtime. Reading the documentation, I have found:
    All servers in a cluster must run the same release of Cisco Unified Communications Manager. The only exception is during a cluster software upgrade, during which a temporary mismatch is allowed.
    Does this mean I can switch version on the PUB whilst leaving the SUB connected and have the phones fail over to it, then wait for the PUB to come back up again before performing the switch version on the SUB? To me this seems a bit dangerous, as the database would be running at two different versions?
    Thanks
    Sean

    Hi Sean,
    Your process is spot on! Make sure you leave lots of time for the Pub
    to fully come back online after doing the switch version before starting the Sub.
    Other than that you look good to go;
    http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/cucos/7_1_2/cucos/iptpch7.html#wp1179637
    Cheers!
    Rob
    PS: +5 for Will, Java & Aaron for their great work here
    "Clocks go slow in a place of work
    Minutes drag and the hours jerk" 
    -The Clash
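    For what it's worth, on the appliance platform the switch itself is started from each node's OS CLI, with the same command on both PUB and SUB (shown here at the CLI prompt):

admin: utils system switch-version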

  • Minimizing downtime of prod. db on account of encryption

    Hi,
    We need to encrypt an Oracle 8i database with 170 million records using the 3DES algorithm, but we cannot afford downtime on the production database. How can we minimize the downtime?
    regards
    Dilawar.

    1) There is no Oracle 8i server for Apple.
    2) Do you need to encrypt all data or just the network traffic?
    3) If all data, I would add encrypted_data columns to the tables and do the encryption there. Keep them up to date using triggers. When all the data is converted, drop the plain_data columns and switch over to the new application. It looks like a hell of a job to me.
    Ronald.
    http://ronr.nl/unix-dba
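    A very rough sketch of Ronald's point 3 using the 8i-era DBMS_OBFUSCATION_TOOLKIT (table, column, and key are hypothetical; 3DES input must be padded to a multiple of 8 bytes, and real key management is a project of its own):

sqlplus -s / as sysdba <<'EOF'
-- Hypothetical table and column names
ALTER TABLE customers ADD (ssn_enc RAW(64));
CREATE OR REPLACE TRIGGER customers_enc_trg
BEFORE INSERT OR UPDATE OF ssn ON customers
FOR EACH ROW
BEGIN
  -- pad to an 8-byte multiple; a 16-byte key selects two-key 3DES mode
  :NEW.ssn_enc := DBMS_OBFUSCATION_TOOLKIT.DES3Encrypt(
      input => UTL_RAW.CAST_TO_RAW(RPAD(:NEW.ssn, 16)),
      key   => UTL_RAW.CAST_TO_RAW('abcdefgh12345678'));
END;
/
EOF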

  • Minimizing downtime after house move.

    I'm moving house in October and wish to have Infinity from the moment I move in. How can I achieve this?
    The previous owner has Infinity and so I am assuming that an engineer visit is not required - is that a fair assumption?
    I am not moving my existing BT connection so this will be a new one.
    Any advice?

    You need to contact BT sales with this one: 0800800150

  • Minimizing Downtime for OS patching with Data Guard

    Hi there,
    I'm trying to come up with a solution for a situation that will occur often, and I'm not entirely sure how to go about it. We don't have DG installed, but we will be implementing it very soon.
    We will have a primary and standby DB (both 11gR2 Enterprise Edition) setup with an observer for automatic server-side failover.
    Our team needs to perform operating system patches on both primary & standby systems every few months, but the 2 systems will not be patched at the same time. So they will patch one system on one day, then patch the other system on another day.
    When it comes time to patch the systems we would like to minimize the outage... In fact, I would like to have a situation where clients would be connected to the primary DB, while the standby is being patched, and then perform a switchover and then do it again on the other system - all without too much of an outage for the client (i.e. only the delay during the switchover)
    Would the following approach be valid for this type of scenario?
    1. stop application of redo to standby
    2. shut the standby database down
    3. team performs Operating System patching, etc.
    4. bring the standby system back up
    5. restart redo apply to standby - let it catch up to primary
    6. perform a switchover from primary to standby
    7. then we repeat steps 1-6, but on the new standby (i.e. the old primary DB)
    Is this approach valid? Or is it incorrect/impossible?
    If not, how do any of you handle scheduled outages for OS-level patching?
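    For concreteness, steps 1 and 2 would come down to something like this (the log_archive_dest_2 destination number is an assumption about the configuration):

# Step 1 - on the standby: stop redo apply
echo "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;" | sqlplus -s / as sysdba
# Step 2 - shut the standby down for OS patching
echo "SHUTDOWN IMMEDIATE" | sqlplus -s / as sysdba
# Optionally, on the primary, defer shipping while the standby is down
echo "ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='DEFER';" | sqlplus -s / as sysdba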

    Hello;
    This looks valid. Just a few months ago our primary server room needed work and I did the same thing you came up with. One small change: check that they are in sync before step 1.
    One thing to watch: after you do a switchover, make sure you can send a few logs and have them apply before you do the shutdown.
    Will check my notes but your plan looks the same as mine. Completely valid. I would do it again and again. I like to think of it as one database, but in different roles.
    In fact, I moved a standby database to a different server using similar steps:
    1. On the primary database defer the standby destination.
    2. On the current standby cancel recovery and shutdown the database.
    3. Create database directories on the new server.
    4. Edit the tnsnames.ora file on the current primary and fix dest location
    5. Add tnsnames.ora and listener.ora to the new server.
    6. Move password and spfile files to the new server.
    7. Tnsping both servers
    8. Use scp to move the database to the new server.
    9. Restart recovery and check
    A little more complex but it worked the first time.
    Also had a (planned) power outage in the standby server room just this last weekend; did "DEFER" on all the primaries and then an "ENABLE" after the power came back. DG caught up in no time.
    Best Regards
    mseberg
    Later
    I checked my "post-mortem" on this and I had two issues:
    1. One of the standbys did not have ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE; set, and the Oracle Forms application barked. (It was set on the Primary.)
    2. Two of the databases gave "LOG SWITCH GAP" when I ran the following (when I was about to switch back):
    select switchover_status from v$database;
    The logs returned this error (when I moved them and tried to register them):
    ORA-16089: archive log has already been registered
    Data Guard corrected itself after I forced a few log switches:
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    Two or three and then it all worked.
    The alert log had this :
    Clearing online redo logfile 1 complete
    Conclusion
    Issuing the ALTER SYSTEM SWITCH LOGFILE was needed for the "clearing" process on two of the three databases.
    This problem might be avoided by waiting after a switchover until several logs can be forced to the new standby. Otherwise before starting the switchover perform several log switches on the current primary.

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices to reduce downtime for database releases on 10.2.0.3? Which DB changes can be made in a rolling fashion and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers against one or the other database. When you want to upgrade, you point all the middle tier servers against database A other than one that lives at a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes made on A while you're smoke testing B. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin
