Zero downtime backups

I was shocked to read the following notice on Sun's Training web page:
"Please note that the Sun Web Learning Center will be down once a week for backup on Saturdays from 1:00 to 3:00 A.M. MDT (Friday 19:00 to 21:00 GMT)."
I have been researching various backup methods, including fssnap, ufsdump, flar, and others, only to find that there seems to be no "best way" to back up without downtime. It appears it is still best, and most reliable, to bring the system into single-user mode to do a proper backup; and the fact that Sun's own sysadmins find it best to bring down the Learning Center system "teaches" me it must be so.
Does anyone have any insight into why this might be, and how a system backup might be possible with zero downtime?

If you have enough disk space you can go a long way with fssnap. But like you said, your mileage may vary. I usually run ufsdump directly on the slices I want backed up because I know there won't be many writes going on. So far this approach has never failed me (and yes, it has been crash-tested a few times).
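For reference, here is a rough sketch of the fssnap + ufsdump combination on a live filesystem (the mount point, backing-store path, and tape device are placeholders; check fssnap(1M) on your release before relying on it):

   # create a read-only snapshot of /export, using /var/tmp as backing store
   fssnap -F ufs -o bs=/var/tmp /export
   # fssnap prints the snapshot device it created, e.g. /dev/fssnap/0
   # dump the snapshot's raw device rather than the live slice
   ufsdump 0f /dev/rmt/0 /dev/rfssnap/0
   # delete the snapshot once the dump completes
   fssnap -d /export

The snapshot gives ufsdump a frozen, consistent view of the filesystem while writes continue on the live slice, which is what makes a backup without dropping to single-user mode reasonably safe.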

Similar Messages

  • ASA 5520 Upgrade 8.0(4) --> 8.4.2 -- Zero Downtime

    Hello Everyone,
    We are currently on 8.0(4) and planning to upgrade our failover pair to 8.4.2. I read some documents saying that we can perform a zero-downtime upgrade.
    According to the documents below, version 8.2 supports mismatched-memory failover:
    http://www.cisco.com/en/US/docs/security/asa/asa82/configuration/guide/ha_overview.html#wp1077536
    https://supportforums.cisco.com/message/3549760#3549760//
    Upgrade Path:
       Active Firewall:                    Standby Firewall:
       8.0(4)                              8.0(4) --> 8.2.2
       8.0(4)                              Upgrade RAM to 2 GB, reload
       failover to standby                 8.2.2
       8.0(4) --> 8.2.2                    8.2.2
       Upgrade RAM to 2 GB, reload         8.2.2 (failover)
       8.2.2 (active)                      8.2.2 (standby)
       8.2.2                               8.3.1
       8.2.2                               8.4.2
       failover to standby                 8.4.2
       8.2.2 (standby)                     8.4.2 (active)
    Can I perform a zero-downtime upgrade with the above upgrade path? Will both firewalls act as a failover pair if one is on 8.2.2 and the other is on 8.4.2?
    "Performing Zero Downtime Upgrades for Failover Pairs
    The two units in a failover configuration should have the same major  (first number) and minor (second number) software version. However, you  do not need to maintain version parity on the units during the upgrade  process; you can have different versions on the software running on each  unit and still maintain failover support."  (http://www.cisco.com/en/US/docs/security/asa/asa83/configuration/guide/admin_swconfig.html)

    You can do it in a lot fewer steps.
    1. Upgrade RAM on standby, reload and make it active.
    2. Repeat the process for the unit that is now standby.
    Now you have 2 units still on 8.0(4) with the requisite RAM for 8.3+. TAC will recommend you go up in "baby steps", but the software will work upgrading directly from 8.0 to 8.4. 8.4(3) is the current version for the 5520 platform. At my most conservative, I might upgrade to 8.2(4) as an interim step, but it's not strictly necessary. So my next step would be:
    3. Upgrade standby unit from 8.0(4) to 8.4(3). At this point take stock of the script syntax changes. Examine the upgrade log (on disk0:) and address any discrepancies.
    Note that active/standby failover will work here, but the pair should not be run this way for any extended time, as syntax changes would affect the ability to synchronize if changes are introduced on the active member.
    Finally:
    4. Flip upgraded standby unit to active and upgrade remaining standby unit to 8.4(3).
    If you follow these steps and check your work after each step, this would all be zero downtime.
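    To make the mechanics concrete, here is a rough sketch of the per-unit image swap (the FTP server address and image name are placeholders; confirm the exact procedure against the release notes for your target version):
       ! on each unit in turn: stage the new image and repoint the boot variable
       copy ftp://192.0.2.10/asa843-k8.bin disk0:/asa843-k8.bin
       configure terminal
       boot system disk0:/asa843-k8.bin
       write memory
       ! from the active unit: reload the standby, wait for it to reach
       ! Standby Ready, then hand over the active role and repeat on the peer
       failover reload-standby
       no failover active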

  • Zero Downtime Migration from Oracle to Sybase

    Is there any way or tool to migrate from Oracle to Sybase with zero downtime?
    Thanks

    Better answered on a Sybase forum I suppose...

  • Zero downtime Upgrade ASA 8.0(4) TO 8.4(7)

    Hi All,
    I checked a few blogs and am upgrading an ASA 5520 from 8.0(4) to 8.4(7) following the path below. I will be upgrading RAM to 2 GB at version 8.2.5. The reason for the 8.4.6 step is that we may get the error "No Cfg structure found in downloaded image file" if we upgrade directly to 8.4.7.
    Please advise whether we can perform a zero-downtime upgrade if I follow the path below, and whether the units will still be in HA (active/standby).
    8.0.4-->8.2.5 (Active on 8.0.4 and standby 8.2.5)--> Will they be in HA?
    8.2.5--->8.4.6(Active on 8.2.5 and standby 8.4.6)--> Will they be in HA?
    I believe the step below should not be a problem.
    8.4.6-->8.4.7(Active on 8.4.6 and standby 8.4.7)--> Will they be in HA?
    Thanks in advance.
    Regards

    8.0.4-->8.2.5 (Active on 8.0.4 and standby 8.2.5)--> Will they be in HA?
    HA will work... as in, the units will fail over. But due to changes in configuration syntax you could run into problems with config synchronisation, and it could also cause issues in traffic flow if a failover occurs. So it is best to upgrade the second ASA to the new version ASAP. This is also why Cisco recommends using the same major and minor software versions.
    8.2.5--->8.4.6(Active on 8.2.5 and standby 8.4.6)--> Will they be in HA?
    Same as above.
    8.4.6-->8.4.7(Active on 8.4.6 and standby 8.4.7)--> Will they be in HA?
    This should be fine
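    As a quick sanity check between steps, running show failover on the active unit confirms the pair is still syncing (a sketch; the lines worth watching are noted as comments):
       show failover
       ! expect "Failover On", the mate in "Standby Ready" state, and the
       ! stateful failover statistics incrementing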

  • Cisco ASA non zero downtime upgrade

    Hello,
    with a NON zero-downtime upgrade procedure, are all connections lost, even the NAT and ARP tables? Here, http://www.cisco.com/c/en/us/td/docs/security/asa/asa84/configuration/guide/asa_84_cli_config/ha_overview.html#wp1078922, Table 61-2 "State Information" seems to cover only plain failover, not an upgrade performed with a non-zero-downtime procedure.

    Assuming you have a working HA pair with stateful failover, the Cisco-supported answer is that you cannot skip minor releases (e.g., going from 9.1 directly to 9.3).
    You CAN upgrade directly from 9.1(2) to 9.1(5) as that third ordinal (the number in parentheses) is known as the maintenance release level.
    See table 1-6 in the Release notes for confirmation, excerpted here:
    "You can upgrade from any maintenance release to any other maintenance release within a minor release.
    For example, you can upgrade from 8.4(1) to 8.4(6) without first installing the maintenance releases in between."
    Note that 9.1(3) and later have some restrictions unique to those more recent code levels, as some file system changes were put in place that require certain prerequisites for a successful upgrade. Given that you are on 9.1(2) already, that doesn't affect you in this case, but it may be a consideration for other readers. Those requirements are noted just above Table 1-6 in those release notes.

  • Zero-downtime Database Migration tool ?

    We are exploring/evaluating tools provided by Oracle (or its partners) that ensure zero-downtime database migration. Migration should include:
    - Migration of data from one version of the application to another, with or without changes to the database schema.
    - Migration of data from staging to production, where staging was used for beta testing to host customers who created live data which needs to be migrated to production (Oracle to Oracle, SQL Server to Oracle, MySQL to Oracle, etc.).
    - If a data type changes (say int to varchar) in the staging database for a particular column in a table, the change should migrate to the production database as well.
    - If a column is added/deleted in a table of the staging database, the same table alteration should migrate to the production database.
    - Records in the production database should not be deleted/truncated during data/schema migration.
    - Maintain zero downtime.
    By zero downtime we mean: both the source and the target should be up and accept updates in real time during the migration process. These updates should again be synced across, and hence help to eliminate downtime during migration between various vendor databases.
    We are not looking for an ETL product, but for out-of-the-box products like GoldenGate and Celona that ensure zero-downtime database migration.

    Hi,
    I don't think there is any easy answer. It looks like a huge project, so it should be done part by part.
    If I understand correctly:
    1) you have created a staging database with all changes
    2) production is in the old structure
    3) now you want to merge these two databases into one, or apply all changes from staging to prod?
    I see one solution there: clone your staging and create a new prod. When it's done, switch connections to your new prod database.
    Regards,
    Tom

  • Zero downtime deployment

    Hi, was just wondering about "best practice" in terms of supporting zero-downtime deployment.
    We have a cluster with N nodes that are not storage nodes, and M nodes that are storage nodes. We use Java all around, and POF-serialize the objects that we store in Coherence.
    We want to deploy a new codebase, which requires a restart of all the processes, and it might also include changes to the objects that are being stored (ie, their serialization might be different).
    The typical approach is to do a "rolling restart" of the various processes, but I fear all the syncing-up Coherence does might not work right with some members running an older version of the code while the restarted ones are running a newer version.
    Anybody have any experience with this?
    Thanks.

    Hi,
    If you change the serialisation format then a rolling restart will not work. You need to make sure that all your POF classes are evolvable, that is, that they implement EvolvablePortableObject. Doing this can take a lot of effort. We do it, not for rolling restarts, but to make sure clients of our system do not constantly need to upgrade their client libraries. It can be quite complicated, as it is not just serialisation you need to be aware of: even changing how methods that get called on the server side work can break evolvability, for example changing how a filter works, how an aggregator works, a method return type, etc.
    We have looked at zero-downtime rolling restarts in the past and decided they were more trouble than they were worth. In our case we have large clusters (300+ nodes), so a rolling restart would take a very long time, as you need to wait for the cluster to re-balance the partitions each time you stop and restart a node. We found it could take quite a few hours to do a rolling restart, whereas a full deployment and reload of the data takes about 90 minutes.
    You would also need to be careful if your cluster sits on top of a database. If you need to make any DB changes then this will not work unless they can somehow be made evolvable too; otherwise either the new code or the old code will be incompatible with the state of the DB.
    Other people may have different experiences of zero-downtime, but as I said, we just found it too much effort and just go for the normal small-amount-of-planned-downtime approach.
    JK

  • Yosemite and zero power Backup Battery

    Hi to everybody.
    I have a 350 MHz Yosemite and there is no way to boot OS 9 or Mac OS X from anything: not from the HD, CD, DVD, or an external FireWire HD... just the Mac OS 9-style blinking folder on the screen.
    Could this be caused by the backup battery which is currently dead (zero volts)?
    TIA
    Majortom

    Information that makes your Mac a pleasure to use on a daily basis is held in Parameter RAM.
    This includes:
    Preferred OS Type
    Preferred OS Version
    Preferred Boot Device
    Normally, Parameter RAM is maintained by an "always-on" (when AC power is available) roughly 5-Volt power supply.
    When the "always-on" power drops, even for a moment, the battery is pressed into service to maintain the parameters. If the voltage is not there, you get corruption in the parameters, and then the things that usually make your Mac a pleasure to start and use become the reverse.
    Your Mac may be looking for a goofy version of a goofy Operating System, on a non-existent boot device, with the screen resolution set to something truly amazing.
    For bench-testing, you can remove the battery and remove all power and let it sit for a while. Then restore power and press and hold the tiny Reset button nearest the battery for a quarter minute, then wait at least five seconds and attempt to start again. If successful, you will be good until the power fluctuates.
    The 3.6-Volt non-rechargeable Lithium batteries are available from many online sellers for under US$10. Although a 3.0-Volt camera battery will fit, it is not adequate to start and run your Mac; in general, it needs at least 3.2 Volts to start and run successfully. A new battery should test at 3.6 Volts in hand.

  • Snapshot Backups on HP EVA SAN

    Hi everyone,
    We are implementing a new HP EVA SAN for our SAP MaxDB Wintel environment. As part of the SAN setup we will be utilising the EVA's snapshot technology to perform a nightly backup.
    Currently HP Data Protector does not support MaxDB for its "Zero Downtime Backup" (ZDB) concept, thus we need to perform LUN snapshots using the EVA's native commands. ZDB would have been nice, as it integrates with SAP and lets the DB/SAP know when a snapshot backup has occurred. However, as I mentioned, this feature is not available for MaxDB (only for SAP on Oracle).
    We are aware that SAP supports snapshots on external storage devices as stated in OSS notes 371247 and 616814.
    To perform the snapshot we would do something similar to (if not exactly like) what note 616814 describes, as below:
    To create the split mirror or snapshot, proceed as follows:
                 dbmcli -d <database_name> -u <dbm_user>,<password>
                      util_connect <dbm_user>,<password>
                      util_execute suspend logwriter
                   ==> Create the snapshot on the EVA
                      util_execute resume logwriter
                      util_release
                      exit
    Obviously MaxDB and SAP are unaware that a "backup" has been performed. This poses a couple of issues that I would like to see if anyone has a solution to.
    a.  To enable automatic log backup, MaxDB must know that it has first completed a "full" backup. Is it possible to make MaxDB aware that a snapshot backup has been taken of the database, thus allowing us to enable automatic log backup?
    b.  SAP also likes to know it has been backed up. EarlyWatch Alert reports start to get a little upset when you don't perform a backup on the system for a while.
    Also, DB12 will mention that the system isn't in a recoverable state, when in fact it is. Are any workarounds available here?
    Cheers
    Shaun

    Hi Shaun,
    interesting thread so far...
    > It would be nice to see HP and SAP (MaxDB) take the snapshot technology one or two steps further, to provide a guaranteed consistent backup that can be block-level verified. I think HP's ZDB (zero downtime backup, e.g. snapshots) technology for SAP on Oracle using Data Protector does this now?!?
    Hmm... I guess the keyword here is 'market'. If there is enough market potential visible, I tend to believe that both SAP and HP would happily try to deliver such tight integration.
    I don't know how this ZDB stuff works with Oracle, but how could the HP software possibly know what an Oracle block should look like?
    No, there are just these options to actually check for block consistency in Oracle: use RMAN, use DBV, or use SQL to actually read your data (via EXP, EXPDP, ANALYZE, custom SQL).
    Even worse, you might come across block corruptions that are not really covered by these checks.
    > Data corruption can mean so many things. If you're talking structure corruption or block corruption, then you do hope that your consistency checks and database backup block checks will bring this to the attention of the DBA. Hopefully recovery of the DB from tape and rolling forward would resolve this.
    Yes, I was talking about data block corruption. Why? Because there is no reliable way to actually perform a semantic check of your data. None.
    We (SAP) simply rely on the fact that whatever the Updater writes to the database is consistent from the application point of view.
    Having handled far too many remote consulting messages concerning data rescue due to block corruptions, I can say: getting all readable data out of the corrupt database objects is really the easy part of it.
    The problems begin to get big once the application developers need to think of reports to check and repair consistency at the application level.
    > However, if you're talking data corruption as in "crap data" has been loaded into the database, or a rogue ABAP has corrupted several million rows of data, then this becomes a little more tricky. If the issue is identified immediately, restoring from backup is a feasible option for us.
    > If the issue happened over 48 hrs ago, then restoring from a backup is not an option. We are a 24x7x365 manufacturing operation, shipping goods all around the world. We produce and ship too much product in a 24 hr window for it to be rekeyed (or so the business says) if the data is lost.
    Well, in that case you're doomed, plain and simple. Don't put any effort into getting "tricky"; just never, ever run any piece of code that hasn't passed the whole test factory. That's really the only chance.
    > We would have to get tricky and do things such as restore a copy of the production database to another server, and extract the original "good" documents from the copy back into the original, or hopefully the rogue ABAP can correct whatever mistake they originally made to the data.
    That's not a recovery plan - that is praying for mercy.
    I know quite a few customer systems that went with this "solution" and had inconsistencies in their system for a long, long time afterwards.
    > Look...there are hundreds of corruption scenarios we could talk about, but each issue will have to be evaluated, and the decision to restore or not would be decided based on the issue at hand.
    I totally agree.
    The only thing that must not happen is: opening a call conference and talking about what a corruption is in the first place, why it happened, how it could happen at all... I have spent hours of precious lifetime in such nonsense call confs, only to see that there is no plan for this on the customer side.
    > I would love to think that this is something we could do daily to a sandpit system, but with a 1.7TB production database, our backups take 6hrs, a restore would take about 10hrs, and the consistency check ... well a while.
    We have customers saving multi-TB databases in far less time - it is possible.
    > And what a luxury to be able to do this ... do you actually know of ANY sites that do this?
    Quick Backups? Yes, quite a few. Complete Backup, Restore, Consistency Check cycle? None.
    So why is that? I believe it's because there is no single button for it.
    It's not integrated into the CCMS and/or the database management software.
    It might also be (hopefully) that I simply never hear of these customers. You see, as a DB Support Consultant I don't get in touch with "success stories"; I see failures and bugs all day.
    To me the correct behaviour would be to actually stop the database once the last verified backup is too old, just like everybody is used to it when hitting a LOGFULL / ARCHIVER STUCK situation.
    Until then - I guess I will have a lot more data rescue to do...
    > Had a read  ...  being from New Zealand I could easily relate to the sheep =)
    > That's not what I meant. Like I said, we are a 24x7x365 system. We get a maximum of 2 hrs downtime for maintenance a month, not that we need it these days as the systems practically run themselves. What I meant was that 7am to 7pm are our busiest peak hours, but we have dispatch personnel, warehouse operations, shift supervisors, etc., as well as a huge amount of batch running through the "night" (and day). We try to maintain good dialog response during the core hours, and then try to perform all the "other" stuff around those hours, including backups, opt stats, business batch, large BI extractions, etc.
    > Are we busy all day and night ... yes ... very.
    Ah ok - got it!
    Especially in such situations I would not try to implement consistency checks on your prod. database.
    Basically, running a CHECK DATA there does not mean anything: right after a table finishes its check, it can get corrupted while the check is still running on other tables. So you never really have a guaranteed consistent state in a running database.
    On the other hand, what you really want to know is not "Are there any corruptions in the database?" but "If there were any corruptions in the database, could I get my data back?".
    This latter question can only be answered by checking the backups.
    > Noted and agreed.  Will do daily backups via MaxDB kernel, and a full verification each week.
    One more customer on the bright side
    > One last question. If we "restored" from an EVA snapshot, and had the DB logs up to the current point in time, can you tell MaxDB just to roll forward using these logs even though a restore wasn't initiated via MaxDB?
    I don't see a reason why not - if you restore the data and log area and bring the DB into admin mode, it uses the last successful savepoint for startup.
    If you then use recover_start to supply more logs, that should work.
    But as always, this is something that needs to be checked on your system.
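    For illustration, that flow in dbmcli might look roughly like this (database, user, and medium names are placeholders, and it assumes a log backup medium is already defined; rehearse on a copy first):
       dbmcli -d <database_name> -u <dbm_user>,<password>
            db_admin                         ==> bring the restored instance to admin mode
            recover_start <log_medium> LOG   ==> roll forward from the last savepoint using the log backups
            db_online                        ==> restart the database once the logs are applied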
    That has been a really nice discussion - I hope you don't take my comments as offensive; they really aren't meant that way.
    KR Lars

  • Major version upgrade of WebLogic with zero/minimal downtime

    From what I can tell, the recommended approach for supporting minimal downtime during major version upgrades (e.g. WL 9 -> WL 10) is to have 2 domains available in the production environment.
    Leave one running to support existing users, upgrade the other domain, then swap to perform the upgrade on the first domain.
    We are planning on starting out with WL 9.1, but moving forward we require very high availability...(99.99%).
    Is this my only option?
    According to BEA marketing literature, service pack upgrades can be applied with "zero" downtime...but if this isn't reality, I'd like to hear more...
    Thanks...
    Chuck

    Have gotten as far as upgrading all of the software, deleting /var/db/.AppleSetupDone, and rebooting.  It brought me back in to Setup Assistant and let me choose "migrate from another mac os x server" and is now sitting waiting for me to take the old server down and boot it into target disk mode.  Which we can probably do Sunday at about 2am or so...
    You know, Setup Assistant should really let you run Software Update BEFORE migrating from another machine.  We have servers that can't be down for SoftwareUpdates in the middle of the day...

  • Options for fast recovery - Reducing Downtime

    OS: OEL 5.7
    Database : 11.2.0.3-EE (non-RAC)
    I'm looking for options using ONLY Oracle features to reduce downtime during scheduled outages due to application changes and upgrades.
    In this particular case I have only one application installed on this database (ERP).
    A default full backup and restore are operations we already know; I'm looking for other options that reduce downtime.
    I need a rollback plan that can be executed in a short time.
    Any help is welcome.

    Hi,
    Dataguard is the best option in case of short downtime, but you will need double the storage space.
    Two things you must consider:
    * What is the database size?
    * What is the amount of data that will be updated/deleted/added during this application change?
    Choose one of these options:
    * Dataguard
    The best option, and it is really fast: near zero downtime. (As mentioned by mseberg, with a nice example.)
    * Flashback Database with RESTORE POINT
    Oracle Flashback Database and restore points are related data protection features that enable you to rewind data back in time to correct any problems caused by logical data corruption or user errors within a designated time window. These features provide a more efficient alternative to point-in-time recovery and do not require a backup of the database to be restored first.
    Restore points provide capabilities related to Flashback Database and other media recovery operations. In particular, a guaranteed restore point created at a system change number (SCN) ensures that you can use Flashback Database to rewind the database to this SCN. You can use restore points and Flashback Database independently or together.
    You will need to open the database with RESETLOGS after FLASHBACK Database.
    * Guarantee Restore Point (with flashback database disabled)
    Like a normal restore point, a guaranteed restore point serves as an alias for an SCN in recovery operations. A principal difference is that guaranteed restore points never age out of the control file and must be explicitly dropped. In general, you can use a guaranteed restore point as an alias for an SCN with any command that works with a normal restore point.
    A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled.
    Note that you still need to open the database with RESETLOGS after flashing back, even to a guaranteed restore point.
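    A minimal sketch of that rollback path (the restore point name is a placeholder; assumes ARCHIVELOG mode, a configured fast recovery area, and SYSDBA access):
       sqlplus / as sysdba
       CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;
       -- ... perform the application change; if it must be rolled back:
       SHUTDOWN IMMEDIATE
       STARTUP MOUNT
       FLASHBACK DATABASE TO RESTORE POINT before_change;
       ALTER DATABASE OPEN RESETLOGS;
       -- guaranteed restore points never age out, so drop it when done:
       DROP RESTORE POINT before_change;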
    * Edition-Based Redefinition for Online Application Maintenance and Upgrades
    Edition-based redefinition enables you to upgrade a database component of an application while it is in use, thereby minimizing or eliminating down time. This is accomplished by changing (redefining) database objects in a private environment known as an edition.
    To upgrade an application while it is in use, you copy the database objects that comprise the application and redefine the copied objects in isolation. Your changes do not affect users of the application—they continue to run the unchanged application. When you are sure that your changes are correct, you make the upgraded application available to all users.
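    A minimal sketch of the edition-based approach (edition and user names are placeholders; assumes the application owner is editions-enabled, e.g. via ALTER USER app_user ENABLE EDITIONS, and remember that editions version code objects such as PL/SQL, views, and synonyms, not table data):
       sqlplus / as sysdba
       CREATE EDITION v2 AS CHILD OF ora$base;
       GRANT USE ON EDITION v2 TO app_user;
       -- redefine PL/SQL, views, and synonyms while connected with:
       ALTER SESSION SET EDITION = v2;
       -- once tested, make v2 the default for new sessions:
       ALTER DATABASE DEFAULT EDITION = v2;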

  • What are the network restrictions for a Tuxedo MP configuration between the Master and Backup?  Can they be on separate networks?

    We are migrating from one data center to another and want to keep 100% uptime during the process.  The two data centers are on separate networks but in the same city.  I want to migrate the Master and the Backup to the new data center with zero downtime. 

    Hi Harvey,
    Tuxedo machines just need TCP/IP connectivity, so machines can be in different subnets, different Ethernets, etc. I know of one customer that has a 13-machine MP configuration distributed across the entire country. So as long as there is a route between the machines, you should be fine. Here's an old but probably still useful white paper on bridges, multiple networks, etc., from Aurora Information Systems: "Exploring High Availability Issues with BEA Tuxedo and Third Party High Availability Software". While it's a pretty old white paper, from what I can see most of it is still applicable.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
    PS  You should write some articles about what you've been up to and how you've accomplished it!
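    One practical detail for the cutover: once the new machines are configured as the MASTER/BACKUP pair in the *RESOURCES section of the UBBCONFIG, mastership can be moved without an outage from tmadmin. A rough sketch (assumes a healthy DBBL; verify against the Tuxedo documentation for your release):
       tmadmin
       > master        (run on the acting backup node; it takes over as master)
       > quit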

  • Downtime for EBCDIC to ASCII conversion

    Hello,
    we have successfully performed an EBCDIC to ASCII conversion for a client's development system.
    The total downtime was about 24 hours.
    The customer, though, refuses to have downtime on their production system, since it severely affects their operations.
    Is there a possibility not to have any downtime for an EBCDIC to ASCII conversion ?
    We were thinking, as an alternative, of building a second production system, doing the conversion there, and after finishing, applying journal receivers from the real production system. This would reduce their downtime to the backup time of the production system plus the time needed to apply the journal receivers?!
    Has anyone performed such a task?
    Would this be feasible (we would have to apply journals from an EBCDIC system to an ASCII system)?
    Thank you very much
    Katerina Psalida

    Hi Katerina,
    applying journal changes from the EBCDIC system to the ASCII system will not work for several reasons, primarily because the journal keeps track of the journaled tables through an internal journal ID, which will not be the same after the EBCDIC to ASCII conversion. Technically it would not work because the data in the journal entries is kept very low-level, so a conversion from EBCDIC to ASCII during apply is not implemented. Also, the journal entries are based on the relative record numbers in the table, and after the conversion, the relative record numbers will not necessarily be the same.
    I am not aware of a zero-downtime conversion option. You can speed up the conversion if you use the "Inplace" conversion option. Did you use that when you measured the 24-hour downtime on the test system? If not, you should give the Inplace option a try; depending on your data, it could reduce the downtime significantly.
    Kind regards,
    Christian Bartels.

  • ASA 8.2 --> 8.4 --> 9.1 possible with no downtime as we run active/standby?

    Hello,
    We have 2 x ASA 5520s (with 2GB mem) in active/standby mode, they also include the IPS modules.
    The current firmware is 8.2 and I was wondering if it is possible to upgrade these firewalls with no downtime. In the past I have upgraded the standby ASA, rebooted it, made it the active ASA, and then upgraded the new standby ASA.
    I have quite a lot of NAT exempts (no-NATs) and a few static NATs; how did you approach these during your upgrades?
    I guess I can roll back, as the 8.2 firmware will still be on the flash and I will have the config?
    Thanks

    Yeah, it's supported:
    Release Notes for the Cisco ASA Series, 9.1(x)
    http://www.cisco.com/en/US/docs/security/asa/asa91/release/notes/asarn91.html#wp732442
    That document has the information you need; it talks about the requirements and the zero-downtime procedure.
    But there are a lot of considerations to take into account, which you can find in this document:
    https://supportforums.cisco.com/docs/DOC-12690
    If you don't mind me asking, why are you upgrading?
    Because of a fix or feature?
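    On the NAT-exempt question from the original post: 8.3 replaced the old NAT model, and the upgrade converts existing rules automatically, but it helps to know roughly what a NAT exemption becomes. A before/after sketch with placeholder names and subnets (verify against the 8.3 upgrade guide):
       ! 8.2-style NAT exemption
       access-list NONAT extended permit ip 10.1.1.0 255.255.255.0 10.2.2.0 255.255.255.0
       nat (inside) 0 access-list NONAT
       ! rough 8.3+ equivalent: identity "twice NAT" using network objects
       object network LOCAL-NET
        subnet 10.1.1.0 255.255.255.0
       object network REMOTE-NET
        subnet 10.2.2.0 255.255.255.0
       nat (inside,outside) source static LOCAL-NET LOCAL-NET destination static REMOTE-NET REMOTE-NET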

  • What are the criteria/scenarios for zero down time to be true?

    As stated in the subject title: could someone explain to me the situations where zero downtime can be achieved and the situations where it cannot?

    Zero-downtime in what context? Initial load, failover, upgrade?
    During initial load, with a zero-downtime requirement, changes will be taking place on the source. How do those get replicated to the target while the load is taking place? You start Extract before the initial load, so that any DML changes made during the load process can be captured.
    The recently published Apress book on GoldenGate discusses zero-downtime migration.
    • Keep the old database you're migrating from synchronized with the new database. After the migration cutover, keep the old database synchronized for some period of time until you're certain there will be no fallback to the old database.
    • The old database will actively process SQL changes until the migration cutover. Typically the cutover occurs during a scheduled weekend maintenance window or during a slow period for the website or application.
    • During the interim period until the migration cutover, the new database can be used for read-only queries as needed. Although supported by Oracle GoldenGate's bidirectional replication feature, typically the data in the new database isn't being updated until after the cutover. If updating data in the new database before the cutover is a requirement for your project, you need to configure bidirectional replication to keep both databases synchronized.
    • Database data and structures in the old database can be different than in the new database. This depends on the type and complexity of the migration. For example, specific data types or structures may need to change if you're migrating from a SQL Server database to an Oracle database. If you're simply migrating from an Oracle 10g to an Oracle 11g database, you may have the exact same application data and data structures. You may also be using Oracle GoldenGate to migrate other types of applications from old to new releases, and the data structures in those cases may be different.
    • Data in the old database must be kept current with the new database, and there should be no replication lag at the time of cutover or fallback. Before the cutover or fallback some lag can be tolerated, but the lag must eventually be eliminated prior to the application cutover or fallback. Any replication lag present at the time of cutover could cause a delay.
    • Cutover to the new database and fallback from the new database to the old database must happen quickly to minimize any downtime. Keep in mind that Oracle GoldenGate is only one piece of the greater migration project, and you need tested procedures in place for all parts of your migration project, such as switching your application connections.
    • After cutover to the new database, the database roles are reversed: the new database becomes the replication source and the old database becomes the target. This allows for migration fallback to the old database if needed.
    • The time period permitted for the cutover and fallback procedures should be well understood. For example, a two-hour window for cutover and fallback has much different requirements than if an entire weekend is allowed for the cutover.
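    For a concrete flavor of how GoldenGate is wired up for such a migration, here is a minimal pair of parameter files, one for the capture (Extract) process on the source and one for the apply (Replicat) process on the target (process names, the ggadmin user, the trail path, and the app schema are all placeholders):
       EXTRACT ext1
       USERID ggadmin, PASSWORD *****
       EXTTRAIL ./dirdat/aa
       TABLE app.*;

       REPLICAT rep1
       USERID ggadmin, PASSWORD *****
       ASSUMETARGETDEFS
       MAP app.*, TARGET app.*;
    For the fallback direction described above, an equivalent pair is configured in reverse once the new database becomes the source.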
