Hot Backup Mechanism

Hi all,
I know that Oracle will not update the datafile headers with the SCN while they are in backup mode. But I have a question: are committed transactions written to the datafiles at all during the backup? I thought that Oracle held all committed data in the database buffer cache (entries are still made in the redo logs, though) until the tablespaces are taken out of backup mode.
Please correct me if I am wrong.
Thanks,
Aswin.

Aswin,
You are ABSOLUTELY wrong. I am not sure how you picked up this concept, but it is plain wrong. Have a look here:
E:\Documents and Settings\aristadba>sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jun 8 14:36:44 2009
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area  167772160 bytes
Fixed Size                  1247900 bytes
Variable Size              75498852 bytes
Database Buffers           88080384 bytes
Redo Buffers                2945024 bytes
Database mounted.
SQL> alter database archivelog;
alter database archivelog
ERROR at line 1:
ORA-00265: instance recovery required, cannot set ARCHIVELOG mode
SQL> alter database open;
Database altered.
SQL> shut imm
SP2-0717: illegal SHUTDOWN option
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  167772160 bytes
Fixed Size                  1247900 bytes
Variable Size              75498852 bytes
Database Buffers           88080384 bytes
Redo Buffers                2945024 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> select checkpoint_change# from V$datafile;
CHECKPOINT_CHANGE#
           1762853
           1762853
           1762853
           1762853
           1762853
           1762853
6 rows selected.
SQL> select checkpoint_change#, last_change# from V$datafile;
CHECKPOINT_CHANGE# LAST_CHANGE#
           1762853
           1762853
           1762853
           1762853
           1762853
           1762853
6 rows selected.
SQL> select * from V$backup;
     FILE# STATUS                CHANGE# TIME
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 NOT ACTIVE                  0
         5 NOT ACTIVE                  0
         6 NOT ACTIVE                  0
6 rows selected.
SQL> select name from V$tablespace';
ERROR:
ORA-01756: quoted string not properly terminated
SQL> select name from V$tablespace;
NAME
SYSTEM
UNDOTBS1
SYSAUX
USERS
TEMP
EXAMPLE
SONY
7 rows selected.
SQL> alter tablespace sony begin backup;
Tablespace altered.
SQL> select checkpoint_change#, last_change# from V$datafile;
CHECKPOINT_CHANGE# LAST_CHANGE#
           1762853
           1762853
           1762853
           1762853
           1762853
           1762999
6 rows selected.
SQL> select * from V$backup;
     FILE# STATUS                CHANGE# TIME
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 NOT ACTIVE                  0
         5 NOT ACTIVE                  0
         6 ACTIVE                1762999 08-JUN-09
6 rows selected.
SQL> create table sony_table ( a number) tablespace sony;
Table created.
SQL> begin
  2  for i in 1..100 loop
  3  insert into sony_table values(100);
  4  end loop;
  5  end;
  6  /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> alter system flush buffer_cache;
System altered.
SQL> alter tablespace sony end backup;
Tablespace altered.
SQL> select * from sony_table;
         A
       100
       100
       100
       100
       100
       100
       100
       100
       100
....
You can see here that a table was created in the tablespace. Into that table we inserted some data, which was surely smaller than the cache itself. So we flushed the buffer cache, making sure the data in the buffer cache went into the datafile. After that we took the tablespace out of backup mode, and we still got our data.
The only thing that does get frozen in begin backup mode is the datafile header: its checkpoint SCN is frozen at the point when the backup started. That makes sense, because this point is remembered by the control file to identify, when the tablespace undergoes recovery, from what point the redo vectors need to be applied. The data does NOT go into the file header; it goes into the object blocks, while the System Change Number recorded in the header is what gets frozen. So even when your tablespace is in backup mode, the buffers still move just as they did when the tablespace was in normal mode. With that, there is no point whatsoever in keeping the committed data in the buffer cache. Even if your logic were correct, there would still be a situation that proves it wrong: the buffer cache will never be equal in size to the amount of data changed by the users. So if we kept the committed data in a cache of, say, 10 MB and you changed 1000 MB, what about that data? Where would we keep it if we didn't let it be flushed to the datafile?
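If you want to see the frozen header SCN for yourself, a query along these lines (a sketch; run it while a tablespace is in backup mode, then again after ALTER SYSTEM CHECKPOINT) shows the backed-up file's header checkpoint standing still while the other files advance:
SQL> select h.file#, b.status, h.checkpoint_change# header_scn
     from v$datafile_header h, v$backup b
     where b.file# = h.file#;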
HTH
Aman....

Similar Messages

  • Hot backup on NOARCHIVELOG mode?

    DB version:10gR2, 11G
    Why is it not possible to do a Hot Backup in NOARCHIVELOG mode? What role does archived redo log files have in a Hot backup?

    Because it takes more than zero seconds to backup a database.
    Say your database consists of 1 single datafile of 10MB. This datafile, at the OS filesystem level, consists of 2,560 blocks of 4KB each.
    If you start a backup of the datafile, the OS utility (tar, cp, cpio, whatever command) reads the first 4KB block and copies it out. It then, after a certain time, reads the next block. And so on till it gets to the last block of the file.
    However, since the database is "open" transactions may have updated blocks in the datafile.
    Therefore, at time t0, block 1 may have been copied out. At time t5, block 128 may have been copied out. At time t32, block 400 may have been copied out. Unfortunately, some user may have updated block 1 at time t3 and block 128 at time t8.
    What would happen if these blocks, having been copied out at different times were restored ? They would be inconsistent !!
    It is the ArchiveLog mechanism that allows Oracle to know that a datafile was "active" when it was being backed up. Oracle has to "re-play" all changes that occurred on the datafile from the time the datafile backup began (at t0) till it ended (at the 2,560th block).
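    You can see this guard rail directly: on a NOARCHIVELOG database, Oracle refuses to start a hot backup (a sketch; ORA-01123 is the error I would expect here):
    SQL> select log_mode from v$database;
    LOG_MODE
    ------------
    NOARCHIVELOG
    SQL> alter tablespace users begin backup;
    ORA-01123: cannot start online backup; media recovery not enabled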

  • How to take a Hot backup of Oracle database

    1: put the db in archive log mode
    2: set the db_sid to correct one
    3: login to sqlplus
    4: verify the name of the db that you are connected to
    select name from v$database;
    5: check if the db is in archive log made
    select log_mode from v$database;
    if not in archive log mode
    another command to check
    archive log list;
    6: find where on disk oracle writes archive log when it is in archive log mode
    sql> show parameter log_archive_dest_1;
    if the value is found to be 0, that means no values will be recorded, so we need to change it
    sql> alter system set log_archive_dest_1='LOCATION=c:\database\oradata\finance\archived_logs\'
    scope=spfile;
    7: shutdown immediate; < this is done just to prepare the db for hot backups >
    8: startup the db in mount mode
    startup mount;
    ( 3 startup types : nomount - just starts the instance, mount - locates the control files and open up according to the values, open - finds the datafiles from the control files and opens up the db )
    9: put the db in archive log mode
    alter database archivelog;
    10: open the database
    alter database open;
    11: check the status of the db
    select log_mode from v$database;
    SQL> archive log list;
    12: create a directory for archived log
    check if its empty, if empty we need to switch
    sql> alter system archive log current;
    run it 5 times (press / and Enter to repeat), then check the archive log dir; we will find files
    13: make a table in the database and insert data in it
    create table employees (fname varchar2(20));
    check the table
    desc employees;
    insert values
    insert into employees values ('Mica');
    14: tablespace must be in hot backup mode
    check the status
    select * from v$backup;
    if found not active, then we need to change
    we cannot put the db in hot backup mode, unless it is archive log mode
    change to hot backup mode
    alter database begin backup;
    check the status
    select * from v$backup;
    15: now we can only COPY DBF FILES
    copy *.dbf <destination location>
    16: need to take the db out to hot backup mode
    alter database end backup;
    17: need to make another archive log switch
    alter system archive log current;
    18: need to copy the control files now, need to do a binary backup
    alter database backup controlfile to '<location>\controlbackup';
    19: insert more values to the table
    insert into employees values ('NASH');
    COMMIT;
    make another archive log switch : alter system archive log current;
    do the same process for more values
    20 : backup all the archive logs to a new location
    21: shutdown the db and simulate a hw error, delete all the files from the database folder
    22: try to start the sqlplus and db ::: error
    23: copy all the backups to the db dir
    need to copy the control files, rename the binary backup of the control file and make the copies as needed
    24: try to mount the db, error < must use reset logs or noreset logs >
    25: need to do a recovering of the database
    shutdown
    restore the archive logs
    startup mount;
    recover database until cancel using backup controlfile;
    it will ask for a log file :
    yes for recovery
    cancel for cancelling recovery
    26: check status: open the database in readonly
    alter database open read only;
    check the tables to see the data
    shutdown immediate
    startup mount;
    recover again : recover database until cancel using backup controlfile;
    if oracle is asking for a log that does not exist, all we have to do is type cancel
    27: open the database
    alter database open;
    need to do reset logs
    alter database open resetlogs;
    28: check the db that you are connected, check the tables
    thanks and regards
    VKN
    site admin
    http://www.nitrofuture.com

    A very long list ... let me make it shorter.
    SQL> archive log list;
    If I see this:
    Database log mode              No Archive Mode
    I put the database into archivelog mode and leave it there forever.
    If it is in archivelog mode:
    RMAN> CONNECT TARGET SYS/<password>@<service_name> NOCATALOG
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    Though there are a lot of things one could do better, such as incrementals with block change tracking, creating an RMAN catalog, etc.
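    As an aside, block change tracking is a one-line change on 10g and later (a sketch; the file path is just an example), after which level 1 incrementals read only the changed blocks instead of scanning every datafile:
    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.f';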

  • Unique Error while doing hot backup Cloning!

    Dear All,
    I am using EBS 11.5.10.2 with DB 9.2.0.6.
    I am doing hot backup cloning by referring *[Cloning Oracle Application 11i /R12 with Rapid Clone - Database (9i/10g/11g) Using Hot Backup on Open Database|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=760772.1]*
    I am facing a unique issue: when I run adcfgclone.pl dbconfig <my_context_file>, the application on my Production Server starts misbehaving and gives me the error:
    Node Id does not exist for the current application server id.
    When I checked the FND_NODE table, I found that the server id was empty in it.
    I then have to run the following steps as the resolution:
    1. Shutdown all the services.
    2. EXEC FND_CONC_CLONE.SETUP_CLEAN;
    3. COMMIT;
    4. Run AutoConfig on all tiers, firstly on the DB tier and then the APPS tiers,to repopulate the required system tables.
    5.start all the services and verify the issue.
    Then the application behaves normally.
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    Please suggest,
    Anchorage :)

    I am facing a unique issue, when I run adcfgclone.pl dbconfig <my_context_file>, application of my Production Server start misbehaving and it gives me an error: Node Id does not exist for the current application server id.
    Where did you get this error?
    This is very strange; how come this happens in Production when you are doing the cloning on a TEST/DEV instance?
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    I feel there are no other additional steps, as I recently did a successful hot backup cloning using the same document.

  • How to find out which archived logs needed to recover a hot backup?

    I'm using Oracle 11gR2 (11.2.0.1.0).
    I have backed up a database when it is online using the following backup script through RMAN
    connect target /
    run {
    allocate channel d1 type disk;
    backup
    incremental level=0 cumulative
    filesperset 4
    format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
    database
    }
    The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
    I have not deleted any archived log.
    How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
    Thanks for any help.

    A few ways :
    1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
    1b. Review the alert.log of the database that was backed up.
    1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
    1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll-forward".
    2a. Do a RESTORE DATABASE alone.
    2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
    2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
    2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
    (Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force a log switch after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
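    For illustration, the queries in 2b and 2c might look like this (a sketch; plug the SCNs from the first query into the second, and repeat per thread# on RAC):
    -- on the restored database, after RESTORE DATABASE
    select min(checkpoint_change#), max(checkpoint_change#) from v$datafile;
    -- on the source database
    select min(sequence#) from v$archived_log where first_change# > &scn;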
    3. RESTORE DATABASE PREVIEW in RMAN, if you have the archivelogs and subsequent controlfile in the backup itself!
    Hemant K Chitale

  • Question on recovery from Hot backup

    Whenever I try to recover from my hot backup using recover database until cancel (or any other UNTIL option),
    I get messages similar to the following:
    SQL> recover database using backup controlfile until cancel ;
    ORA-00279: change 212733060 generated at 11/18/2008 23:50:58 needed for thread
    1
    ORA-00289: suggestion : /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00280: change 212733060 for thread 1 is in sequence #282
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 2
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/2_257_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/3_258_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 4
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    I have all my archive files in the archive dest location.
    Is there any way to prevent these warning messages and let Oracle find all the archive files?
    As you can see from the messages, Oracle is finding the correct file, so why is there an error message, and why do we have to provide the file names one by one?
    Please help !!!

    Oracle will look for the needed archived logfiles in your log archive destination.
    If you are sure all these files are under /d01/oradata/devl/arc/, you can input AUTO, and Oracle will work down the list until done.
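    You can also switch the prompting off up front (a sketch; SET AUTORECOVERY is a standard SQL*Plus setting):
    SQL> SET AUTORECOVERY ON
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    Oracle then applies each suggested archived log without pausing, stopping only when a suggested file cannot be found.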

  • Complete Recovery from HOT BACKUP

    One of our servers had a media failure. We are rebuilding a new server and reinstalling Oracle on it. I have a hot backup of the instance which I would like to recover.
    Can I have the steps please? I will appreciate your help!

    Hi,
    What is the OS and version of Oracle?
    Do you have all the archivelogs after the hot backup? Is the backup a valid backup? Is it a RMAN backup or normal file system backup of the datafiles?
    Regards,
    Badri.

  • Recovery from hot backup

    I have a problem recovering a hot backup from the production database to a test database and opening it for use.
    I have a 'hot backup' taken without the temporary tablespace and rollback tablespace. I can't shut down the production database (24x7). Please give me a procedure for recovering the database from a hot backup, without the online redo logs, temp tablespace, or rollback tablespace, using only the archived logs, and, after this, renaming the database SID.
    Every time I recover the database, it still wants to recover more from the archived logs, and I can't open it; it's still inconsistent. Please send me an answer at [email protected]
    thanks!

    Hi,
    What is the OS and version of Oracle?
    Do you have all the archivelogs after the hot backup? Is the backup a valid backup? Is it a RMAN backup or normal file system backup of the datafiles?
    Regards,
    Badri.
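    In outline, restoring a user-managed hot backup to a new home usually runs like this (a sketch; the paths are illustrative, and the temp tablespace is simply re-created afterwards rather than restored):
    SQL> STARTUP MOUNT
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- apply the archived logs as prompted, then type CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;
    SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/test/temp01.dbf' SIZE 500M;
    To rename the database SID afterwards, the usual routes are CREATE CONTROLFILE ... SET DATABASE (see the REUSE/SET discussion further down) or the DBNEWID (nid) utility.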

  • Consistent hot backup possible

    Is a consistent hot backup possible?
    I would like to perform hot backups while the database is in basically a read only state. I am currently using Oracle recommended backups via OEM, for example.
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup recovery area;
    }
    Would executing the sql command "alter database begin backup;" before running the above RMAN script accomplish this task? Then of course, when completed, execute "alter database end backup;".
    My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from a tape backup.
    I am open to any other ideas.
    Thanks for your help in advance.
    Ed - Wasilla, Alaska
    Edited by: evankrevelen on Sep 11, 2008 10:18 PM

    Thanks everyone who replied to this thread.
    Just to clarify my complete backup strategy, there are two RMAN scripts run on daily and weekly basis. The daily does pickup the archivelogs. I had shown the weekly when first opening this thread. Here is the daily.
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup archivelog all not backed up;
    backup backupset all not backed up since time 'SYSDATE-1';
    }
    My question now is what RMAN does in the increments. It appears to be updating the original level 0 copies of the datafiles with the changed blocks only. Is the new copy of the datafile now a level 0 copy?
    Here is a transcript from one of the daily backups.
    Starting recover at 11-SEP-08
    channel oem_disk_backup: starting incremental datafile backupset restore
    channel oem_disk_backup: specifying datafile copies to recover
    recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
    recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
    recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
    recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
    recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
    channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
    channel oem_disk_backup: restored backup piece 1
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
    channel oem_disk_backup: restore complete, elapsed time: 00:05:16
    Finished recover at 11-SEP-08
    Starting backup at 11-SEP-08
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
    input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
    input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
    input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
    input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
    Finished backup at 11-SEP-08
    It appears to be updating the previous copy with updated blocks thus rolling forward the datafile copy to a new level 0 copy.
    Then to restore from the backup RMAN would first use this new copy of the datafile and then apply any archivelogs to them to bring the database to the point in time the incremental backup was taken.
    Are these assumptions true?
    Thanks for your help,
    ED

  • When to use REUSE/SET, NO-ARCHIVELOGS in create controlfile in HOT BACKUP?

    I am a trainee Oracle DBA and have the following queries. Kindly reply with detailed explanation as I want to get my concepts cleared!
    Q1>> While doing a user-managed hot backup, when we are creating a control file (CREATE CONTROLFILE) from trace for recovery, when do we use CREATE CONTROLFILE with the following options:
    *1. REUSE / SET*
    *2. ARCHIVELOGS / NOARCHIVELOGS*
    Q2>> In what scenarios do we re-create the control file while recovering datafiles from a hot backup??
    Thanks a tonne!
    Regards,
    Bhavi

    Hemant K Chitale wrote:
    1.1 It is not "REUSE/SET". These are two very different clauses.
    REUSE is when you want the CREATE to overwrite the existing controlfile(s). If the controlfile(s) {as named in the instance parameter file, initSID.ora or spfileSID.ora} is/are already present, the CREATE fails unless REUSE is specified.
    SET is when you want to change the database name. Oracle then creates the controlfile(s) with the specified database name and updates the headers of all the datafiles. If you run a CREATE with a database name that is different from that in the datafile headers, the CREATE fails unless you include a SET to specify that the name must be changed. Note that this also means that the name in the instance parameter file must already have been updated.
    1.2 ARCHIVELOG/NOARCHIVELOG is to set the database state. The same is achieved by issuing an "ALTER DATABASE ARCHIVELOG/NOARCHIVELOG" when the database is MOUNTed but not OPEN.
    2. You'd run the CREATE CONTROLFILE if you do not have a binary backup of the controlfile.
    Optionally, you can also use CREATE CONTROLFILE to rename all the datafiles by specifying the new locations of the datafiles -- the datafiles must already be present in the new locations, else the CREATE fails if it doesn't find a datafile that is included in the list of datafiles included in the CREATE statement.
    RMAN is the correct way to run Backups. User Managed Backup scripts are used in cases like Storage-based Snapshots / SnapClones / BCV.
    Hemant K Chitale
    Thanks, that was really helpful. One last question: when do we use the RESETLOGS/NORESETLOGS clause in the CREATE CONTROLFILE statement? I have noticed that at times it accepts RESETLOGS while at other times it accepts NORESETLOGS.
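    For reference, a skeletal CREATE CONTROLFILE showing where the clauses sit (a sketch; names, sizes and paths are made up, and NORESETLOGS is only an option when the current online logs are intact):
    CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS ARCHIVELOG
        MAXLOGFILES 16
        MAXDATAFILES 100
      LOGFILE
        GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
        GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
      DATAFILE
        '/u01/oradata/orcl/system01.dbf',
        '/u01/oradata/orcl/sysaux01.dbf'
      CHARACTER SET AL32UTF8;
    To rename the database, you would write SET DATABASE "NEWNAME" in place of DATABASE "ORCL".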

  • Archive logs are missing in hot backup

    Hi All,
    We are using the following commands to take a hot backup of our database. The hot backup is fired by the "backup" user on a Linux system.
    =======================
    rman target / nocatalog <<EOF
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
    run {
    allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    backup as COMPRESSED BACKUPSET database include current controlfile;
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    #--Backup archive logs and delete what we've backed up
    backup as COMPRESSED BACKUPSET archivelog all not backed up delete all input;
    release channel oem_backup_disk1;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    exit
    EOF
    =======================
    After each of the two "sql 'alter system archive log current';" commands, I see the following lines in the alert log. Because of this, not all of the online logs are getting archived (we are missing 2 logs per day), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?
    =======================
    Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
    ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
    ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
    ORA-15055: unable to connect to ASM instance
    ORA-01031: insufficient privileges
    =======================
    Regards,
    Kunal.

    Thanks all for your help; please find additional information below. I got the following error because a log sequence was missing. Every day during the hot backup there are 2 missing archive logs, which makes our backup inconsistent and useless.
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32531_632680691.dbf thread=1 sequence=32531
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28768_632680691.dbf thread=2 sequence=28768
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28769_632680691.dbf thread=2 sequence=28769
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28770_632680691.dbf thread=2 sequence=28770
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32532_632680691.dbf thread=1 sequence=32532
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28771_632680691.dbf thread=2 sequence=28771
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28772
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28773
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 12/13/2012 04:22:56
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    ORA-00310: archived log contains sequence 28772; sequence 28773 required
    ORA-00334: archived log: '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    Let me try the suggestions provided above.

  • Using Data Guard and hot backups - 9.2.0.6.0

    Hi all,
    I have an existing 9.2.0.6.0 database that is setup in a DataGuard environment - one primary database with a physical standby in a separate datacenter. It is all setup and it works beautifully. On our primary database, we currently have 2 different types of backups we are doing - we do an export of the main schema (all of the application data is all in this one schema) 4 times a day, and we do a full database hot backup once a night.
    My question is in regards to the hot backup - I don't know that it is even worth doing a hot backup of this database? I am trying to think of a situation where we would actually want to restore a hot backup of the primary database... If we ran into some kind of a data issue, it would probably be quickest and easiest to restore data from one of the exports, and when we did that restore (import), I assume that data change would be replicated through DataGuard to the standby site. But if there was some kind of situation where we wanted to restore a recent hot backup of the primary database, that would essentially break the Data Guard configuration, and I assume that after the hot backup was restored, we would have to somehow re-instantiate Data Guard on the standby site.
    Does anyone have any input on this? If you are running with DataGuard, is it even worth it to be doing hot backups? What kind of situation would call for restoring a hot backup, instead of just failing over to the standby?
    Thanks!
    --Brad

    If we ran into some kind of a data issue, it would probably be quickest and easiest to restore data from one of the exports
    It would be quicker to fail over to the standby database than to restore from a dump file. After all, you maintain the standby db for that reason.
    How can you restore the database up to the latest changes using export/import? You would have to restore using RMAN and apply the logs.
    You can back up from the standby database. You do not need to back up the primary.

  • Which is better for User Managed Hot backup - running in cron or oracle scheduler

    We are taking hot backups in our environment using a SQL script that just does begin backup, a host copy, and end backup.
    Should we put this script in cron to run twice weekly, or use the Oracle scheduler?
    Which is the better option, and why?

    The answer to your question depends on your situation. Do you have a more experienced Unix admin or DBA?
    From my point of view cron is better because all you need is the SQL*Plus utility and a shell script:
    export ORACLE_SID=...
    sqlplus "<username>/<password> as sysdba" <<EOF
    ALTER TABLESPACE .. BEGIN BACKUP;
    HOST cp ..
    ALTER TABLESPACE .. END BACKUP;
    EXIT
    EOF
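    For completeness, the two scheduling routes side by side (a sketch; the schedule, job name and script path are made up):
    # crontab entry: 02:00 on Sunday and Wednesday
    0 2 * * 0,3 /home/oracle/scripts/hot_backup.sh >> /tmp/hot_backup.log 2>&1
    -- or DBMS_SCHEDULER, which keeps the schedule (and its run history) inside the database:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'HOT_BACKUP_JOB',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/hot_backup.sh',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN,WED; BYHOUR=2',
        enabled         => TRUE);
    END;
    /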

  • Steps to create the standby with Hot Backup........?????

    Hi All
    I am planning to create the Physical standby with Hot backup ....
    Can anyboby give me the link/steps which explains how to do this?
    Thanks
    Gagan

    To copy the files from the primary, you can use RMAN, or a manual hot backup copy using begin backup / end backup mode.
    You can read the doc about creation of standby database :
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i70835
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1032302
    Nicolas.
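    The RMAN route, in miniature (a sketch; it assumes the standby init.ora, password file and network aliases are already in place, and that the backup pieces are readable from the standby host):
    RMAN> CONNECT TARGET sys/<password>@primary
    RMAN> CONNECT AUXILIARY sys/<password>@standby
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;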

  • When I start my hot backup, my database gets very slow

    Hi,
    I am using the following commands to put the database in hot backup mode:
    SQL>ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL>ARCHIVE LOG LIST;
    SQL>ALTER DATABASE BEGIN BACKUP;
    Database altered.
    SQL>SELECT FILE#,STATUS FROM V$BACKUP;
    FILE# STATUS
    1 ACTIVE
    2 ACTIVE
    3 ACTIVE
    4 ACTIVE
    and I am using the cp -rp command to copy the files (the backup copying speed is good), but database performance is very slow.
    How can I improve performance?
    Regards
    Vignesh C

    Uwe Hesse wrote:
    It is very likely that you experience slow performance with ALTER DATABASE BEGIN BACKUP, because until you do ALTER DATABASE END BACKUP, every modified block is additionally written into the online logfiles.
    Doesn't that happen only the first time the block is modified?
    >
    The command was introduced for split mirror backups, when this period is very short. Else ALTER TABLESPACE ... BEGIN/END BACKUP for every tablespace one at a time reduces the amount of additional redo during non-RMAN Hot Backup. There appear to be only 4 files. We don't know how big or sparse they are.
    >
    RMAN doesn't need that at all - much less redo (and archive) generation then.
    Furthermore, you can use BACKUP AS COMPRESSED BACKUPSET DATABASE to decrease the size of the backup even more - if space is an issue.
    In short: Use RMAN :-)
    Agree with that! Unless the copy is actually going to an NFS mount or something, where I would be concerned whether it is the type of NFS that Oracle likes. I'd also advise a current patch set, as the OP didn't tell us the exact version, and I have this nagging unfocused memory of some compression problems of the "oh, I can't recover" variety.
    I'd like to see some evidence on I/O and cpu usage before giving advice. When I used to copy files like this, it would choke out everyone else. RMAN was a savior, but had to wait for local SAN upgrade.
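    For example, something like this, run from another session while the copy is in flight, gives a first look at the I/O rates the database itself sees (a sketch; 10g and later, and the metric names are as I recall them):
    select metric_name, value, metric_unit
    from v$sysmetric
    where metric_name like 'Physical%Total Bytes Per Sec';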
