Hot backup getting delayed every day

Hi All,
Every day our hot backup job is delayed by a significant amount of time. Before the tablespaces are placed into begin backup mode, we see the alert log flooded with messages like the following:
Mon May 21 00:23:46 2012
Incremental checkpoint up to RBA [0x396b.3365.0], current log tail at RBA [0x396b.3386.0]
Mon May 21 00:54:03 2012
Incremental checkpoint up to RBA [0x396b.3b81.0], current log tail at RBA [0x396b.3b98.0]
Mon May 21 01:24:21 2012
Incremental checkpoint up to RBA [0x396b.495a.0], current log tail at RBA [0x396b.497a.0]
Mon May 21 01:54:47 2012
Incremental checkpoint up to RBA [0x396b.5109.0], current log tail at RBA [0x396b.5137.0]
Mon May 21 02:25:13 2012
Incremental checkpoint up to RBA [0x396b.6771.0], current log tail at RBA [0x396b.67b6.0]
Also, after the tablespaces are placed into begin backup mode and before the end backup occurs, the alert log is flooded with the below messages:
Incremental checkpoint up to RBA [0x396b.24107.0], current log tail at RBA [0x396b.24afe.0]
Mon May 21 05:58:39 2012
Incremental checkpoint up to RBA [0x396b.2569e.0], current log tail at RBA [0x396b.257f2.0]
Mon May 21 06:28:56 2012
Incremental checkpoint up to RBA [0x396b.2815a.0], current log tail at RBA [0x396b.28925.0]
Mon May 21 06:59:14 2012
Incremental checkpoint up to .
Only then are the tablespaces placed into end backup mode, resulting in a significant delay.
Please help us reduce the backup time, and please also throw some light on these checkpoint-related messages.

Hi Jonathon,
Thanks for your valuable suggestion.
In our environment, a script places all the tablespaces into hot backup mode at once instead of one by one.
Previously, when the hot backup worked fine, the number of incremental checkpoints between begin backup and end backup was significantly lower. This is evident from the following excerpt of the alert log file:
alter tablespace PROAIMIS01 begin backup
Completed: alter tablespace PROAIMIS01 begin backup
Mon Apr 19 01:02:47 2010
alter tablespace SYSTEM begin backup
Completed: alter tablespace SYSTEM begin backup
Mon Apr 19 01:02:47 2010
alter tablespace MAPDS01 begin backup
Completed: alter tablespace MAPDS01 begin backup
Mon Apr 19 01:02:47 2010
alter tablespace ADMN01 begin backup
Completed: alter tablespace ADMN01 begin backup
Mon Apr 19 01:02:47 2010
alter tablespace PACESIS01 begin backup
Completed: alter tablespace PACESIS01 begin backup
Mon Apr 19 01:02:47 2010
alter tablespace CTPDS01 begin backup
Completed: alter tablespace CTPDS01 begin backup
Mon Apr 19 01:07:47 2010
Completed checkpoint up to RBA [0x1ced.2.10], SCN: 10912956469916
Mon Apr 19 01:14:13 2010
Incremental checkpoint up to RBA [0x1ced.fd.0], current log tail at RBA [0x1ced.10a.0]
Mon Apr 19 01:44:32 2010
Incremental checkpoint up to RBA [0x1ced.73f.0], current log tail at RBA [0x1ced.89e.0]
Mon Apr 19 01:51:58 2010
alter tablespace CTPIS01 end backup
Mon Apr 19 01:51:58 2010
Completed: alter tablespace CTPIS01 end backup
Mon Apr 19 01:51:58 2010
alter tablespace MAPDS01 end backup
Completed: alter tablespace MAPDS01 end backup
Mon Apr 19 01:51:58 2010
alter tablespace SYSAUX end backup
Completed: alter tablespace SYSAUX end backup
Mon Apr 19 01:51:58 2010
alter tablespace AUDT01 end backup
Completed: alter tablespace AUDT01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace PROAIMIS01 end backup
Completed: alter tablespace PROAIMIS01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace MAPIS01 end backup
Completed: alter tablespace MAPIS01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace UNDO01 end backup
Completed: alter tablespace UNDO01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace USER01 end backup
Completed: alter tablespace USER01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace PROAIMDS01 end backup
Completed: alter tablespace PROAIMDS01 end backup
Mon Apr 19 01:51:59 2010
alter tablespace PACESIS01 end backup
Completed: alter tablespace PACESIS01 end backup
Mon Apr 19 01:52:00 2010
alter tablespace XDB01 end backup
Completed: alter tablespace XDB01 end backup
Mon Apr 19 01:52:00 2010
alter tablespace ADMN01 end backup
Completed: alter tablespace ADMN01 end backup
Mon Apr 19 01:52:00 2010
alter tablespace SYMANTEC_I3_ORCL end backup
Completed: alter tablespace SYMANTEC_I3_ORCL end backup
Mon Apr 19 01:52:00 2010
alter tablespace CTPDM01 end backup
Completed: alter tablespace CTPDM01 end backup
Mon Apr 19 01:52:00 2010
alter tablespace PACESDS01 end backup
Completed: alter tablespace PACESDS01 end backup
Mon Apr 19 01:52:00 2010
alter tablespace SYSTEM end backup
Completed: alter tablespace SYSTEM end backup
Mon Apr 19 01:52:01 2010
alter tablespace CTPDS01 end backup
Completed: alter tablespace CTPDS01 end backup
Mon Apr 19 01:52:01 2010
alter tablespace PERF01 end backup
Completed: alter tablespace PERF01 end backup
Mon Apr 19 01:52:01 2010
alter database backup controlfile to trace
Completed: alter database backup controlfile to trace
Mon Apr 19 01:52:01 2010
alter database backup controlfile to '$CTLBKPDIR/$ORACLE_SID-$dt-$tm.ctl'
Completed: alter database backup controlfile to '$CTLBKPDIR/$ORACLE_SID-$dt-$tm.ctl'
Mon Apr 19 01:52:03 2010
Beginning log switch checkpoint up to RBA [0x1cee.2.10], SCN: 10912957698129
Thread 1 advanced to log sequence 7406
But for the last couple of days, the number of incremental checkpoints between begin backup and end backup has been much higher, like below:
Incremental checkpoint up to RBA [0x396d.3129.0], current log tail at RBA [0x396d.366c.0]
Tue May 22 03:14:23 2012
Incremental checkpoint up to RBA [0x396d.147a8.0], current log tail at RBA [0x396d.15ac8.0]
Tue May 22 03:44:45 2012
Incremental checkpoint up to RBA [0x396d.1c679.0], current log tail at RBA [0x396d.1d177.0]
Tue May 22 04:15:08 2012
Incremental checkpoint up to RBA [0x396d.20387.0], current log tail at RBA [0x396d.205dd.0]
Tue May 22 04:45:30 2012
Incremental checkpoint up to RBA [0x396d.20dcf.0], current log tail at RBA [0x396d.2109e.0]
Tue May 22 05:15:48 2012
Incremental checkpoint up to RBA [0x396d.23af5.0], current log tail at RBA [0x396d.23d6a.0]
Tue May 22 05:46:05 2012
Incremental checkpoint up to RBA [0x396d.245e3.0], current log tail at RBA [0x396d.245ef.0]
Tue May 22 05:47:08 2012
Tue May 22 05:47:08 2012
Also, the parameter log_checkpoint_interval is set to zero here:
log_checkpoint_interval integer 0
And the parameter fast_start_mttr_target is set to 300:
fast_start_mttr_target integer 300
Please suggest.
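A useful diagnostic here (not from the thread; a minimal sketch assuming SELECT access to the v$ views) is V$INSTANCE_RECOVERY, which breaks down what is driving the incremental checkpoint writes:

SELECT target_mttr, estimated_mttr,
       ckpt_block_writes, writes_mttr, writes_logfile_size
FROM   v$instance_recovery;

If writes_mttr dominates, it is fast_start_mttr_target = 300 (and not log_checkpoint_interval, which is disabled at 0) that keeps DBWR checkpointing between begin backup and end backup.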

Similar Messages

  • Does hot backup trigger archive log?

    Our archive log starts every day around 5pm.
    Our hot backup also starts every day around 5pm.
    Do they start together just by coincidence?
    Or does a hot backup trigger an archive log?
    Our db version is 9i. It writes one single archive log daily.
    The transaction volume is minimal; sometimes no user logs in for a whole day.
    Any idea?

    How are you executing the backup? Via a script? Backup software tool?
    You can manually force a log switch (for example with ALTER SYSTEM ARCHIVE LOG CURRENT), which writes the current redo out as an archived log, but I'm not familiar with any backup software that includes this as its standard operating procedure. This would account for the activity you are seeing.
    If you are using some kind of shell script or such, you might check to see if the script includes such a command before it calls up RMAN to perform the backup.
    The logic really isn't bad - by forcing a checkpoint and log switch just prior to a backup you would have almost no additional transactions outside of the archived log files, making the backup non-reliant on the current online redo files. If you are maintaining a good inventory of backed-up archive logs, it should be fairly quick and easy to restore even in the event of a catastrophic disk failure that wiped out the database and online REDO logs.
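    For reference, a minimal sketch of that pre-backup step (plain SQL*Plus; note the difference between a log switch and a checkpoint):
    -- Forces a log switch and waits until the current log has been archived:
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- A plain checkpoint flushes dirty buffers but does not switch logs:
    ALTER SYSTEM CHECKPOINT;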

  • When i start my hot backup my database getting very slow

    Hi,
    I am using the following commands for enabling hot backup:
    SQL>ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL>ARCHIVE LOG LIST;
    SQL> ALTER DATABASE BEGIN BACKUP;
    Database altered.
    SQL>SELECT FILE#,STATUS FROM V$BACKUP;
    FILE# STATUS
    1 ACTIVE
    2 ACTIVE
    3 ACTIVE
    4 ACTIVE
    and I am using the cp -rp command to copy the files (backup copy speed is good), but database performance is very slow.
    How can I improve performance?
    Regards
    Vignesh C

    Uwe Hesse wrote:
    It is very likely that you experience slow performance with ALTER DATABASE BEGIN BACKUP, because until you do ALTER DATABASE END BACKUP every modified block is additionally written into the online logfiles.
    Doesn't that happen only the first time the block is modified?
    >
    The command was introduced for split mirror backups, where this period is very short. Otherwise, ALTER TABLESPACE ... BEGIN/END BACKUP for every tablespace, one at a time, reduces the amount of additional redo during a non-RMAN hot backup. There appear to be only 4 files; we don't know how big or sparse they are.
    >
    RMAN doesn't need backup mode at all - much less redo (and archive) generation then.
    Furthermore, you can use BACKUP AS COMPRESSED BACKUPSET DATABASE to decrease the size of the backup even more - if space is an issue.
    In short: Use RMAN :-)
    Agree with that! Unless the copy is actually going to an NFS mount or something, where I would be concerned whether it is the type of NFS that Oracle likes. I'd also advise a current patch set, as the OP didn't tell us the exact version, and I have this nagging unfocused memory of some compression problems of the "oh, I can't recover" variety.
    I'd like to see some evidence on I/O and cpu usage before giving advice. When I used to copy files like this, it would choke out everyone else. RMAN was a savior, but had to wait for local SAN upgrade.
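    A sketch of the tablespace-at-a-time approach mentioned above (the statements are generated from the dictionary; the OS-level copy step is environment-specific):
    -- Generate one BEGIN BACKUP statement per tablespace (TEMP excluded):
    SELECT 'ALTER TABLESPACE ' || tablespace_name || ' BEGIN BACKUP;' AS cmd
    FROM   dba_tablespaces
    WHERE  contents <> 'TEMPORARY';
    -- For each tablespace: BEGIN BACKUP, copy its datafiles at the OS level,
    -- then ALTER TABLESPACE <name> END BACKUP; before moving to the next one.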

  • Archive logs are missing in hot backup

    Hi All,
    We are using the following commands to take a hot backup of our database. The hot backup is fired by the "backup" user on a Linux system.
    =======================
    rman target / nocatalog <<EOF
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
    run {
    allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    backup as COMPRESSED BACKUPSET database include current controlfile;
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    #--Backup archive logs and delete what we've backed up
    backup as COMPRESSED BACKUPSET archivelog all not backed up delete all input;
    release channel oem_backup_disk1;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    exit
    EOF
    =======================
    After the command (used 2 times) "sql 'alter system archive log current';" I see the following lines in the alert log 2 times. Because of this, not all of the online logs are getting archived (we are missing 2 logs per day), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?
    =======================
    Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
    ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
    ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
    ORA-15055: unable to connect to ASM instance
    ORA-01031: insufficient privileges
    =======================
    Regards,
    Kunal.

    Thanks all for your help; please find additional information. I got the following error, as a log sequence was missing. Every day during the hot backup there are 2 missing archive logs, which makes our backup inconsistent and useless.
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32531_632680691.dbf thread=1 sequence=32531
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28768_632680691.dbf thread=2 sequence=28768
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28769_632680691.dbf thread=2 sequence=28769
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28770_632680691.dbf thread=2 sequence=28770
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32532_632680691.dbf thread=1 sequence=32532
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28771_632680691.dbf thread=2 sequence=28771
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28772
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28773
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 12/13/2012 04:22:56
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    ORA-00310: archived log contains sequence 28772; sequence 28773 required
    ORA-00334: archived log: '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    Let me try the suggestions provided above.
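    A sketch of queries that can confirm the gap on the source database (standard v$ views; query privileges assumed):
    -- Online logs not yet archived, per thread:
    SELECT thread#, sequence#, status, archived FROM v$log;
    -- Highest sequence actually archived per thread, to spot holes such as thread 2 sequence 28773:
    SELECT thread#, MAX(sequence#) AS last_archived
    FROM   v$archived_log
    GROUP  BY thread#;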

  • Rman hot backup

    hello
    I am using an RMAN hot backup script to back up the database every day. The problem is that it works, but it is not deleting backups older than 2 days.
    I also have a question: my database is in archive log mode, and every day about 6-7 .arch files are generated in my archive directory.
    It is not deleting the old files but keeps generating new files every day, so they keep adding up and consuming space.
    SQL> show parameter archive
    NAME TYPE VALUE
    archive_lag_target integer 0
    log_archive_config string
    log_archive_dest string /u03/archive_logs/DEVL
    log_archive_dest_1 string
    Also, should I set log_archive_dest_1 as the archive location, or just log_archive_dest?
    What is the difference?
    My RMAN hot backup script, rman_backup.sh:
    #!/bin/bash
    # Declare your environment variables
    export ORACLE_SID=DEVL
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    export PATH=$PATH:${ORACLE_HOME}/bin
    # Start the rman commands
    rman target=/ << EOF
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/backup/autobackup_control_file%F';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;
    run {
    allocate channel d1 type disk;
    allocate channel d2 type disk;
    allocate channel d3 type disk;
    allocate channel d4 type disk;
    ALLOCATE CHANNEL RMAN_BACK_CH01 TYPE DISK;
    CROSSCHECK BACKUP;
    BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/u03/backup/databasefiles_%d_%u_%s_%T';
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL FORMAT '/u03/backup/archivelogs_%d_%u_%s_%T' DELETE INPUT;
    BACKUP AS COMPRESSED BACKUPSET CURRENT CONTROLFILE FORMAT '/u03/backup/controlfile_%d_%u_%s_%T';
    CROSSCHECK BACKUP;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT EXPIRED BACKUP;
    RELEASE CHANNEL RMAN_BACK_CH01;
    }
    EXIT;
    EOF
    thanks

    Ahmer Mansoor wrote:
    RMAN never deletes the Backups unless there is space pressure in the Recovery Area. Instead it marks the Backups as OBSOLETE based on the Retention Policy (in your case it is 2 Days). To confirm it, SET DB_RECOVERY_FILE_DEST_SIZE to some smaller value; RMAN will remove all the Obsolete Backups automatically to reclaim space.
    Be very careful with this. If you generate a LOT of archivelog files and you exceed this size, on the next archivelog switch your database will hang with "cannot continue until archiver freed". RMAN will not automatically remove anything. RMAN only removes stuff when you program it in your script.
    See:
    http://docs.oracle.com/cd/E14072_01/backup.112/e10642/rcmconfb.htm#insertedID4 Retention Policy (recovery window or redundancy)
    things like:
    set retention window and number of copies
    crosscheck backup
    delete obsolete <-- delete old, redundant, no longer necessary backups/archivelogs
    delete expired <-- NOTE: If you manually delete files and do not execute delete expired (missing file), the DB_RECOVERY_FILE_DEST_SIZE remains the same. So, you can clean out the space and oracle will still say the location is "full".
    Understand that if you also set this parameter too small and your backup recovery window/redundancy are incorrectly set, you can also exhaust the "logical" space of this location again, putting your database at risk. Your parameter could be set to 100G on a 400G file system and even though you have 300G available, Oracle will see the limit of this parameter.
    My suggestion, get in a DEV/TEST environment and test to see how to best configure your environment for RMAN database backups/control file, archivelog backups also taking into consideration OS tape backup solutions. I always configure DISK for RMAN backups, then have some other tape backup utility sweep those locations to tape ensuring that I have sufficient backups to reconstitute my database - I also include a copy of the init.ora file, password file as well as the spfile backup in this location.
    >
    In the case of archivelogs, it is better to create and execute a purge job to remove archivelogs after backing them up to tape.
    I almost agree. I try to keep all archivelogs necessary for recovery from the last full backup online, and I try to keep a full backup online as well. It is much faster to restore that way than trying to locate everything on tape.
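    A sketch of how to check the logical fill level being discussed (a standard view; a privileged account is assumed):
    SELECT space_limit, space_used, space_reclaimable, number_of_files
    FROM   v$recovery_file_dest;
    If space_used is near space_limit while the file system is mostly empty, it is the parameter, not the disk, that is "full".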

  • File Sharing drops regularly, Time Machine Backups are delayed on OSX 10.7.3. Solution?

    Hi there folks,
    I'm having an issue with the new 10.7 server. We're running 10.7.3 Server on a Late 2006 XServe with 10GB RAM, connected to a Drobo Pro for Time Machine backups of our 50+ users via our internal network.
    Recently, Time Machine backups have been getting delayed, stating that they can't connect to the backup drive (which is our Drobo Pro handling 16TB of space). The Drobo Pro has not unmounted, and all the IP addresses coming from and going to the server handling the backups still ping.
    We've also been having trouble with server-resident drives and shared folders that people can mount using the ⌘+K shortcut or "Connect to Server" option in Finder. People will choose the IP address, and Finder will think for a few minutes, then give an error stating that it cannot connect to the specified IP. The only way I can reestablish connection to the file sharing from a remote computer is to remotely or physically log in to the server, then go into System Preferences and turn "File Sharing" off, then on again. After that, ⌘+K on a remote machine can find the drives again.
    It's driving me crazy why this is occurring. I don't know why, and people in the office are beginning to complain that they can't access certain files on the servers, or are worried because their backups are not completing.
    It should also be known that a majority of the machines we have in the office are still running 10.6.8, with only a handful on 10.7, as we haven't had the time to make the major switchover yet.
    I'm not the best when it comes to server software or Unix CLI management (the office outsourced our Network Admin to the east coast), so I'm a bit stumped.
    Does anyone have any suggestions to try? Is there a link between the number of Time Machine users and the connection failures? Should I just tell the NetAdmin to create a script that will restart the File Sharing protocol on the server every couple of hours?
    Hopefully someone will be able to help.
    Thanks,
    JYHASH

    I started having connectivity issues to my 10.6.8 server from my 10.7 client immediately after updating the client to 10.7.3, FWIW. It takes forever to connect, when it does; before, it took just milliseconds.

  • Unique Error while doing hot backup Cloning!

    Dear All,
    I am using EBS 11.5.10.2 with DB 9.2.0.6.
    I am doing hot backup cloning by referring to "Cloning Oracle Application 11i/R12 with Rapid Clone - Database (9i/10g/11g) Using Hot Backup on Open Database" (https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=760772.1).
    I am facing a unique issue: when I run adcfgclone.pl dbconfig <my_context_file>, the application on my Production server starts misbehaving and gives me the error:
    Node Id does not exist for the current application server id.
    When I checked the FND_NODE table, I found that the server id in it was empty.
    I then have to run the following steps as the resolution:
    1. Shutdown all the services.
    2. EXEC FND_CONC_CLONE.SETUP_CLEAN;
    3. COMMIT;
    4. Run AutoConfig on all tiers, firstly on the DB tier and then the APPS tiers,to repopulate the required system tables.
    5. Start all the services and verify the issue.
    Then the application behaves normally.
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    Please suggest,
    Anchorage :)

    I am facing a unique issue, when I run adcfgclone.pl dbconfig <my_context_file>, application of my Production Server start misbehaving and it gives me an error: Node Id does not exist for the current application server id.
    Where did you get this error? This is very strange; how come this happens in Production when you are doing the cloning on a TEST/DEV instance?
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    I feel there are no other additional steps, as I recently did a successful hot backup clone using the same document.

  • How to find out which archived logs needed to recover a hot backup?

    I'm using Oracle 11gR2 (11.2.0.1.0).
    I have backed up a database when it is online using the following backup script through RMAN
    connect target /
    run {
    allocate channel d1 type disk;
    backup
    incremental level=0 cumulative
    filesperset 4
    format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
    database
    }
    The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
    I have not deleted any archived log.
    How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
    Thanks for any help.

    A few ways :
    1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
    1b. Review the alert.log of the database that was backed up.
    1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
    1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll forward".
    2a. Do a RESTORE DATABASE alone.
    2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
    2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
    2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
    (Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first Archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
    3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
    Hemant K Chitale
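    A sketch of the queries behind approach 2 (2b runs on the restored database, 2c on the source; :low_scn is the value found in 2b):
    -- 2b: checkpoint SCN range of the restored datafiles
    SELECT MIN(checkpoint_change#) AS low_scn,
           MAX(checkpoint_change#) AS high_scn
    FROM   v$datafile;
    -- 2c: archived logs needed to roll forward from low_scn
    SELECT thread#, sequence#, first_change#, next_change#
    FROM   v$archived_log
    WHERE  next_change# > :low_scn
    ORDER  BY thread#, sequence#;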

  • Question on recovery from Hot backup

    Whenever I try to recover from my hot backup using recover database until cancel (or any other until option),
    I get messages similar to the following:
    SQL> recover database using backup controlfile until cancel ;
    ORA-00279: change 212733060 generated at 11/18/2008 23:50:58 needed for thread 1
    ORA-00289: suggestion : /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00280: change 212733060 for thread 1 is in sequence #282
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 2
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/2_257_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/3_258_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 4
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    I have all my archive files in the archive dest location.
    Is there any way to prevent these warning messages and let Oracle find all the archive files?
    As you can see from the messages, Oracle is suggesting the correct file, so why is there an error message, and why do we have to provide the file names one by one?
    Please help !!!

    Oracle will look for the needed archive logfiles in your log archive destination.
    If you are sure all these files are under /d01/oradata/devl/arc/, you can input AUTO and Oracle will work down the list until done.
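    A sketch of the same idea set up front, so no prompting occurs at all (SET AUTORECOVERY is a standard SQL*Plus setting):
    SET AUTORECOVERY ON
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- SQL*Plus now applies each suggested archived log automatically,
    -- exactly as if AUTO had been entered at the first prompt.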

  • When to use REUSE/SET, NO-ARCHIVELOGS in create controlfile in HOT BACKUP?

    I am a trainee Oracle DBA and have the following queries. Kindly reply with a detailed explanation, as I want to get my concepts cleared!
    Q1>> While doing a user-managed hot backup, when we are creating a control file (CREATE CONTROLFILE) from the trace for recovery, when do we use the CREATE CONTROLFILE statement with the following options:
    *1. REUSE / SET*
    *2. ARCHIVELOGS / NOARCHIVELOGS*
    Q2>> In what scenarios do we re-create the control file while recovering datafiles from a hot backup?
    Thanks a tonne!
    Regards,
    Bhavi

    Hemant K Chitale wrote:
    1.1 It is not "REUSE/SET". These are two very different clauses.
    REUSE is when you want the CREATE to overwrite the existing controlfile(s). If the controlfile(s) {as named in the instance parameter file, initSID.ora or spfileSID.ora} is/are already present, the CREATE fails unless REUSE is specified.
    SET is when you want to change the database name. Oracle then creates the controlfile(s) with the specified database name and updates the headers of all the datafiles. If you run a CREATE with a database name that is different from that in the datafile headers, the CREATE fails unless you include a SET to specify that the name must be changed. Note that this also means that the name in the instance parameter file must already have been updated.
    1.2 ARCHIVELOG/NOARCHIVELOG is to set the database state. The same is achieved by issuing an "ALTER DATABASE ARCHIVELOG/NOARCHIVELOG" when the database is MOUNTed but not OPEN.
    2. You'd run the CREATE CONTROLFILE if you do not have a binary backup of the controlfile.
    Optionally, you can also use CREATE CONTROLFILE to rename all the datafiles by specifying the new locations of the datafiles -- the datafiles must already be present in the new locations, else the CREATE fails if it doesn't find a datafile that is included in the list of datafiles included in the CREATE statement.
    RMAN is the correct way to run Backups. User Managed Backup scripts are used in cases like Storage-based Snapshots / SnapClones / BCV.
    Hemant K Chitale
    Thanks, that was really helpful. One last question: when do we use the RESETLOGS/NORESETLOGS clause in the CREATE CONTROLFILE statement? I have noticed that at times it accepts RESETLOGS while at other times it accepts NORESETLOGS.
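    For reference, a skeleton of the statement under discussion (the database name is illustrative and the file lists are elided). One rule that bears on the follow-up question: a CREATE CONTROLFILE with SET DATABASE must be opened with RESETLOGS, while NORESETLOGS is only accepted when the online log files are intact and listed. That is why the trace script contains both a NORESETLOGS and a RESETLOGS variant.
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
        -- MAXLOGFILES / MAXDATAFILES limits here,
        -- then the LOGFILE and DATAFILE clauses from the trace
    ;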

  • Hot backup on NOARCHIVELOG mode?

    DB version:10gR2, 11G
    Why is it not possible to do a hot backup in NOARCHIVELOG mode? What role do archived redo log files play in a hot backup?

    Because it takes more than zero seconds to back up a database.
    Say your database consists of 1 single datafile of 10MB. This datafile, at the OS filesystem level, consists of 2,560 blocks of 4KB each.
    If you start a backup of the datafile, the OS utility (tar, cp, cpio, whatever command) reads the first 4KB block and copies it out. It then, after a certain time, reads the next block. And so on till it gets to the last block of the file.
    However, since the database is "open", transactions may have updated blocks in the datafile.
    Therefore, at time t0, block 1 may have been copied out. At time t5, block 128 may have been copied out. At time t32, block 400 may have been copied out. Unfortunately, some user may have updated block 1 at time t3 and block 128 at time t8.
    What would happen if these blocks, having been copied out at different times, were restored? They would be inconsistent!
    It is the ArchiveLog mechanism that allows Oracle to know that a datafile was "active" while it was being backed up. Oracle has to "re-play" all changes that occurred on the datafile from the time the datafile backup began (at t0) till it ended (at the 2,560th block).

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBA's !!
    I am working on a recovery scenario where I have taken one full hot backup of my Portal database (EPR) and restored it on a new test server. I also restored the archive logs from the last full hot backup for the next 6 days, and I restored the latest control file (binary) to its original location. Now, I started the recovery scenario as follows:
    1) Installed Oracle 10.2.0.2 compatible with restored version of oracle.
    2) Configured tnsnames.ora, listener.ora, sqlnet.ora with hostname of Test server.
    3) Restored all Hot backup files from Tape to Test Server.
    4) Restored all archive logs from tape to Test server.
    5) Restored Latest Binary Control file from Tape to Test Server.
    6) Now, Started recovery using following command from SQL prompt.
    SQL> recover database until cancel using backup controlfile;
    7) Open database after Recovery Completion using RESETLOGS option.
    In the above scenario I completed the steps up to 5) successfully. But when I execute step 6), the recovery completes with a warning: recovery completed, but OPEN RESETLOGS may throw the error "system file needs more recovery to be consistent". Please find the following snapshot:
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread 1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know what could be the reason behind the recovery failure!
    Note: I tried to open the database using the last full hot backup only, without applying any archives, and the database opens successfully. It means my database installation & configuration are OK!
    Please let me know why my incomplete recovery using archive logs fails.
    Atul Patil.

    oh you made up a new thread so here again:
    there is nothing wrong.
    You restored your backup, archives etc.
    you started your recovery and Oracle applied all archives, but the archive
    '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    does not exist because it represents your current online redo log file and that is not present.
    the recovery process cancels by itself.
    the solution is:
    restart your recovery process with:
    recover database until cancel using backup controlfile
    and when oracle suggests you '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    type cancel!
    now you should be able to open your database with open resetlogs.
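    Put together, the sequence described above looks like this sketch:
    SQL> recover database until cancel using backup controlfile;
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    SQL> alter database open resetlogs;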

  • 10.2.0.4/10.2.0.5 Hot Backups (11r2 considered too) - ORACLE_HOME

    Scenario:
    We have a script that runs a hot backup of each database on a VM server. Because there are multiple databases on the VM, we have a mixture of database versions. For example, we might have 8 databases on Server A running 10.2.0.5 and 2 databases on the same server running 10.2.0.4 (different homes, of course). Our hot backup script sets one single ORACLE_HOME env variable when running the hot backup for all databases.
    Question:
    Can I safely export the script's ORACLE_HOME as 10.2.0.5 for all the databases on the VM and still get successful hot backups of the 10.2.0.4 and the 10.2.0.5 databases?
    Followup Question:
    When we eventually start upgrading the 10.2.0.5 databases to 11r2, will a single ORACLE_HOME set by the same hot backup script work knowing that some databases may be 11r2 and some may be 10.2.0.5 on the same VM?
    Thanks for your input.

    The cron job runs a script that determines the SIDs on that server and then builds and runs a sqlplus script (detailed below). The sqlplus script is a file that contains the following commands for each SID on the machine:
    set feedback off
    set pagesize 0
    set termout off
    spool /xxx/xxx/b_backup.sql
    select 'set termout on' from dual;
    select 'set echo on' from dual;
    select distinct 'alter tablespace '||tablespace_name||' begin backup;'
    from dba_data_files;
    select 'select * from v$backup;' from dual;
    select 'exit' from dual;
    spool off
    @/xxx/xxx/b_backup.sql
    exit
    So what we end up with is a sqlplus file that will execute the above commands for each SID on the box. The databases may be different versions though (a mix of 10.2.0.4 and 10.2.0.5).
    My basic question is: Will using ORACLE_HOME pointing to 10.2.0.5 sqlplus have any negative effect on backing up a 10.2.0.4 database?
    My secondary/follow-up question is concerning future upgrades to 11r2: Will using ORACLE_HOME pointing to 11.2.0.2 sqlplus have any negative effect on backing up the remaining (not yet upgraded) 10.2.0.5 databases?
    Does the sqlplus version matter when running sqlplus from a different ORACLE_HOME against a database of a different version?

  • Hot backup - best practice

    I have written a simple TimerTask subclass to handle backup for my application. I am doing my best to follow the guidelines in the Getting Started guide. I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup. I have included a fragment of the code that we are using and I am hoping that perhaps someone could validate the approach.
    I should add that we are running a replicated group in which the replicas forward commands to the master using RMI and the Command pattern.
    Kind regards
    James Brook
    // Start backup, find out what needs to be copied.
    backupHelper.startBackup();
    log.info("running backup...");
    // Stop all writes by quiescing the database
    RepositoryNode.getInstance().bdb().quiesce();
    // Ensure all data is committed to disk by running a checkpoint with the default configuration - expensive
    environment.checkpoint(null);
    try {
        String[] filesForBackup = backupHelper.getLogFilesInBackupSet();
        // Copy the files to archival storage.
        for (int i = 0; i < filesForBackup.length; i++) {
            String filePathForBackup = filesForBackup[i];
            try {
                File source = new File(environment.getHome(), filePathForBackup);
                File destination = new File(backupDirectoryPath, new File(filePathForBackup).getName());
                ERXFileUtilities.copyFileToFile(source, destination, false, true);
                if (log.isDebugEnabled()) {
                    log.debug("backed up: " + source.getPath() + " to destination: " + destination);
                }
            } catch (FileNotFoundException e) {
                log.error("backup failed for file: " + filePathForBackup +
                        " the number stored in the holder did not exist.", e);
                ERXFileUtilities.deleteFile(new File(backupDirectory, LATEST_BACKUP_NUMBER_FILE_NAME));
            } catch (IOException e) {
                log.fatal("backup failed for file: " + filePathForBackup, e);
            }
        }
        saveLatestBackupFileNumber(backupDirectory, backupHelper.getLastFileInBackupSet());
    } finally {
        // Remember to exit backup mode, or all log files won't be cleaned
        // and disk usage will bloat.
        backupHelper.endBackup();
        // Allow writes again
        RepositoryNode.getInstance().bdb().quiesce();
        log.info("finished backup");
    }
    The quiesce() method acquires the writeLock on the ReentrantReadWriteLock.
    One thing I am not sure about is when we should checkpoint the database. Is it safe to do this after acquiring the lock, or should we do it just before?
    The plan is to backup files to a filesystem which is periodically copied to offline backup storage.
    Any help much appreciated.

    James,
    I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup.
    One thing to make clear is that you can choose to do a hot backup or an offline backup. The Getting Started Guide, Performing Backups chapter defines these terms. When doing a hot backup, it's not necessary to stop any database operations or do a checkpoint, and you should use DbBackup. On the other hand, when doing an offline backup, you would want to quiesce the database, close or sync your environment, and copy your .jdb files. In an offline backup, there is no need to use DbBackup.
    So while it doesn't hurt to do all of the steps for both hot and offline backup, as you are doing, it's more than you need to do.
    You may find the section on http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/logfilesrevealed.html informative in how it explains how JE's storage is append only. This may help you understand why a hot backup does not require stopping database operations. If you want to do a hot backup, the code you show looks fine, though you don't need to quiesce the database or run a checkpoint.
    If you are running replication, you should read the chapter http://www.oracle.com/technology/documentation/berkeley-db/je/ReplicationGuide/dbbackup.html. Backing up a node that is a member of a replicated group is very much like a standalone backup, except you will want to catch that extra possible exception. Also note that backup files belong to a given node, and the files from one node's backup shouldn't be mixed with the files from another node.
    To be clear, suppose you have node A and node B. The backup of nodeA contains files 00000001.jdb, 00000003.jdb. The backup of node B contains files 00000002.jdb, 00000003.jdb. Although A's file 0000003.jdb and B's 00000003.jdb have the same name, they are not the same file and are not interchangeable.
    In a replication system, if you are doing a full restoration, you can use B's full set of files to restore A, and vice versa.
    Linda

  • DB creation with Hot backup of OTher DB ISSUES

    hi DBAs,
    I am using Oracle 11g R2. I want to make a fresh database with another DB's hot backup of the datafiles, control file & spfile.
    so far, I have run the below commands to create a db.
    C:\Documents and Settings\Administrator>E:
    E:\>set ORACLE_HOME=E:\app\Administrator\product\11.1.0\db_1
    E:\>set PATH=E:\app\Administrator\product\11.1.0\db_1\bin
    E:\>set ORACLE_SID=ora11g
    E:\>oradim -NEW -SID ora11g -STARTMODE auto
    E:\>sqlplus /nolog
    SQL*Plus: Release 11.1.0.6.0 - Production on Fri Dec 9 13:03:45 2011
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    SQL> conn / as sysdba;
    Connected to an idle instance.
    SQL> create spfile from pfile='C:\Documents and Settings\Administrator\Desktop\init.Text'; (pfile from the other DB's backup here)
    File created.
    SQL> startup nomount;
    ORACLE instance started.
    Total System Global Area 648663040 bytes
    Fixed Size 1335108 bytes
    Variable Size 197132476 bytes
    Database Buffers 444596224 bytes
    Redo Buffers 5599232 bytes
    Then I did a shutdown immediate; it showed the database was not mounted and the instance shut down. Then again:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 648663040 bytes
    Fixed Size 1335108 bytes
    Variable Size 197132476 bytes
    Database Buffers 444596224 bytes
    Redo Buffers 5599232 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    Here my question is: I am getting the error ORA-27048: skgfifi: file header information is invalid in the alert log of my new DB.
    What should I do now? How can I recreate a controlfile to support my new DB's datafiles? Can anyone give me an idea how I should configure and bring up the new DB?
    Thanks in advance,
    By Baskar

    900322 wrote:
    hi,
    I am getting only this error:
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: 'E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORA11G\CONTROLFILE\CONTROL_FILE1_.CTL'
    ORA-27048: skgfifi: file header information is invalid
    OSD-04001: invalid logical block size (OS 1768843629)
    But I have mentioned the control file location clearly in the pfile. I don't know why it reports the error below:
    ORA-00205: error in identifying control file, check alert log for more info
    Can anyone help me?
    By
    baski
    It's unable to read the file. Do you have any other mirrored copy of the controlfile?
    Error:  ORA 210 
    Text:   cannot open control file <name>
    Cause:  The system was unable to open a control file.
    Action: Check that the control file exists, that the storage device is online,
            and that the file is not locked by some other program and try again.
            Also, check to see that the operating system limit on the number of
            open files per process has not been exceeded.
            When using multiplexed control files, that is, more than one control
            file is referenced in the initialization parameter file, remove the
            parameter from the initialization parameter file referencing the
            control filename indicated in the message and restart the instance.
            If the message does not recur, remove the problem control file from
            the initialization parameter file and create another copy of the
            control file using a new filename in the initialization parameter file.
