Hot backup related question

Hi all,
I have a question related to hot backup.
If we take a hot backup, i.e. ALTER TABLESPACE tbs BEGIN BACKUP, does it freeze the datafile while all the changes go to the redo log files? If we have two log groups and the hot backup runs from BOD (beginning of day) to EOD, there might be a situation in which a log switch happens, which means the logs get archived. So how will the datafile be recovered?
Also, does it freeze the whole datafile or only the datafile header?

user00726 wrote:
> does it freeze the whole datafile or only the datafile header?

The crux of a hot backup done via a manual cp command is that the datafile stays completely operational. All the objects that can undergo changes, like tables, keep on working exactly as they do when no backup is running. All the change vectors are logged in the redo log files, and the same remains true when the datafile goes into backup mode. Oracle just freezes the header of the datafile, fixing the checkpoint SCN at that point so it knows where to start recovery from in the event of a crash. When the file is put into backup mode, a checkpoint is done before the header is frozen. That makes sense: with the checkpointing, the dirty buffers for that file are written out, ensuring that the file is now self-consistent.
There is no checkpoint with END BACKUP, though. The file is brought in sync with the rest of the database files when the next full, system-level checkpoint happens. All the changes required to bring the file in sync with the rest of the database are already logged in the redo and archived logs, so in the case of a crash Oracle just applies those to the file and the recovery is complete. So there won't be any impact on recovery. The one side effect is extra redo generation, because the first change to each data block during the backup logs the whole block into the redo stream to avoid the fractured-block issue.
HTH
Aman....
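
To see the header freeze for yourself, here is a minimal SQL*Plus sketch (the tablespace name USERS is only an example, and the database must be in ARCHIVELOG mode):

ALTER TABLESPACE users BEGIN BACKUP;
-- the file is now flagged ACTIVE and its header checkpoint SCN stops advancing
SELECT file#, status, change# FROM v$backup;
SELECT file#, checkpoint_change# FROM v$datafile_header;
-- copy the datafiles at the OS level here (e.g. with cp), then:
ALTER TABLESPACE users END BACKUP;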

Similar Messages

  • RPD backup related question.

    Hi
    Have a question related to RPD backup: we are taking a backup of the RPD from the path oraclebi\server\repository\.rpd. Do we need to shut down the services before taking the backup, or can we take the backup directly while that particular RPD is online?
    Thanks

    Hi,
    The best option is to stop the BI services and then take the backup.
    Note: The backup can also be taken without stopping the services.
    Hope this helps.
    Thanks,
    Satya

  • Backup VHD size is much more than expected and other backup related questions.

    Hello, I have a few Windows 2008 servers and I have scheduled the weekly backup (Windows Backup) which runs on Saturday.
    1. I recently noticed that the actual size of the data drive is only 30 GB, but the weekly backup creates a VHD of 65 GB. This is not happening on all servers, but it is on most of them. Why is that, and how can I get the correct VHD size? A 65 GB VHD doesn't make sense for 30 GB of data.
    2. If at some point I have to restore the entire Active Directory on Windows 2008 R2, is the process the same as on Windows 2003 (going into DSRM mode, restoring the backup, authoritative restore), or is there any difference?
    3. I also noticed that if I have a backup VHD of one server (let's say A) and I copy that backup to another server (let's say B), then Windows 2008 only gives me the option to attach the VHD to server B. Is there any way to restore the data from the VHD through the backup utility? Currently I am copying and pasting from the VHD to server B's data drive, but that is not the correct way of doing it. Is it a limitation of Windows 2008?
    Senior System Engineer.

    Hi,
    If a large number of files have been deleted on the data drive, the backup image can contain large holes. You can compact the VHD image to the correct size by using the diskpart command.
    For more detailed information, please refer to the thread below:
    My Exchange backup is bigger than the used space on the Exchange drive.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/3ccdcb6c-e26a-4577-ad4b-f31f9cef5dd7/my-exchange-backup-is-bigger-than-the-used-space-on-the-exchange-drive?forum=windowsbackup
    For the second question, the answer is yes. If you want to restore the entire Active Directory on Windows 2008 R2, you need to start the domain controller in Directory Services Restore Mode to perform a nonauthoritative restore from backup.
    Performing Nonauthoritative Restore of Active Directory Domain Services
    http://technet.microsoft.com/en-us/library/cc816627(v=ws.10).aspx
    If you want to restore a backup on server B that was created on server A, you need to copy the WindowsImageBackup folder to server B.
    For more detailed information, please see:
    Restore After Copying the WindowsImageBackup Folder
    http://standalonelabs.wordpress.com/2011/05/30/restore-after-copying-the-windowsimagebackup-folder/
    Best Regards,
    Mandy

  • Incremental Hot backup question

    This question originated from a different post of mine, but is somewhat different so I'm starting it up as a separate topic.
    Each night we do an incremental hot backup around 4am. Yesterday I dropped a tablespace successfully and removed the original datafile along with it. Autobackup backed up the control file before and after the drop. Then when the incremental backups ran last night at 4am everything went just fine.
    So up to this point we're all good.
    However, in the flash recovery area where it writes the backups (backupsets, datafiles, etc.) there is still a file for that tablespace, except that it did not appear to be updated like all the rest were. That file is taking up space on the box, and the Unix backup script that copies everything to a separate box still picks it up (that script copies everything, and the author of the code doesn't want to hard-code any names).
    Can I just remove this file without upsetting the backup? Or do I need to do something else to accomplish this? If so, what (I'm still a newbie, so be specific)?
    Thanks in advance.

    The first thing you need to do is provide a complimentary crystal ball so we can guess nasty little details, like the contents of that script.
    Even 'incremental hot backup' is an ambiguous term. One would expect an RMAN backup, but you didn't use the various RMAN commands to establish that all of your files have been backed up, you talk about 'backupsets', and you seem to discuss a standalone datafile. The only thing I see is lots of fog, and the need to tear the information out of you.
    No business for volunteers.
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who did read documentation.
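
    Assuming the backups really are RMAN-managed, a safer way to retire that leftover datafile copy is to let RMAN do the bookkeeping instead of removing it at the OS level; a minimal sketch:

    RMAN> CROSSCHECK COPY;
    RMAN> CROSSCHECK BACKUP;
    RMAN> DELETE NOPROMPT OBSOLETE;      -- removes copies/backupsets outside the retention policy
    RMAN> DELETE NOPROMPT EXPIRED COPY;  -- cleans up catalog entries for files already gone from disk

    If RMAN still considers the copy current, DELETE OBSOLETE will age it out once the retention policy no longer needs it.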

  • Question on recovery from Hot backup

    Whenever I try to recover from my hot backup using RECOVER DATABASE UNTIL CANCEL (or any other UNTIL option), I get messages similar to the following:
    SQL> recover database using backup controlfile until cancel ;
    ORA-00279: change 212733060 generated at 11/18/2008 23:50:58 needed for thread
    1
    ORA-00289: suggestion : /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00280: change 212733060 for thread 1 is in sequence #282
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 2
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/2_257_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/3_258_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 4
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    I have all my archive files in the archive dest location.
    Is there any way to suppress these prompts and let Oracle find all the archive files itself?
    As you can see from the messages, Oracle is already suggesting the correct file, so why is there an error message, and why do we have to provide the file names one by one?
    Please help !!!

    Oracle will look for the needed archived log files in your log archive destination.
    If you are sure all these files are under /d01/oradata/devl/arc/, you can enter AUTO and Oracle will work down the list until done.
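
    If all the archived logs really are in the destination, you can also let SQL*Plus feed them in automatically instead of answering each prompt; a minimal sketch:

    SQL> SET AUTORECOVERY ON
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- each suggested archived log is now applied without prompting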

  • Hot backup question

    Our DBA duties are currently outsourced and I'm just curious about something.
    We asked them to copy a schema from our Production database (during the day) down to its equivalents in the Acceptance and Test environments. We found a problem: the maximum value of a primary key on a table was 100121, while the sequence used to generate the value for that primary key was only at 100081.
    Is this just one of the risks of taking a hot backup, meaning the copy should be scheduled after hours? Or did he possibly do something wrong?
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit

    If the request was to copy only one schema, then it is unlikely that a "hot backup" was used. Quite possibly exp/imp or expdp/impdp was used to export the schema from Production and import it into the other two databases.
    Sequences are always "a problem" when export/import is used while the database is active. The export of the sequence and the export of the table (or tables) that use the sequence do not happen at the same time/SCN, so the values may diverge as they get updated in the source database.
    The plan is to reset the sequence in the target database by manually incrementing it until it reaches the desired target (100122).
    Hemant K Chitale
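
    The manual increment Hemant describes can be done without dropping the sequence; a minimal sketch with a made-up sequence name (the gap here is 100122 - 100081 = 41):

    ALTER SEQUENCE app_seq INCREMENT BY 41;
    SELECT app_seq.NEXTVAL FROM dual;      -- consumes the whole gap in one call
    ALTER SEQUENCE app_seq INCREMENT BY 1; -- restore the normal increment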

  • How to Restore data in Oracle 10g using hot backup?

    I did a hot backup (online backup) in Oracle 10g (Windows environment).
    The backup files are now stored in a physical location.
    How do I restore the data to its original location?
    Can anyone give the URL or commands for recovery from a hot backup in Oracle 10g?
    Thanks

    Please do not dump all of your work in this forum, and expect volunteers to do it for free!!!
    Dear Great Senior DBA Sybrand Bakker,
    What are Oracle Discussion Forums?
    Oracle Discussion Forums is an interactive community for sharing information, questions, and comments about Oracle products and related technologies. Most forums are moderated by product managers, and all of them are frequently visited by knowledgeable users from the community.
    The Oracle forums are not your place for "bossing"; they are a place where everyone from juniors to seniors shares their knowledge and advice. That means helping and sharing. Knowledge is for giving; only by giving does it increase. Don't discourage anyone like this.
    There are my great seniors (not like you), most notably the respected Mr. Jonathan Lewis, Mr. Don Burleson, and a lot of Oracle-nominated ACE members, who all patiently reply to juniors and seniors alike.
    Obviously, Oracle doubts are mostly related to practical issues.
    Please try to learn the meaning of the word "volunteers".
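
    A minimal user-managed restore sketch, assuming the hot backup copies and every archived log since BEGIN BACKUP are available (paths and options depend on your environment):

    -- copy the backed-up datafiles back to their original locations at the OS level, then:
    SQL> STARTUP MOUNT
    SQL> RECOVER DATABASE
    SQL> ALTER DATABASE OPEN;
    -- after an incomplete recovery you would instead need:
    -- RECOVER DATABASE UNTIL CANCEL ... then ALTER DATABASE OPEN RESETLOGS;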

  • Environment configuration for Hot Backups

    Hi all,
    1. I am trying to create a hot backup tool based on the read-only Environment strategy (discussed in a previous thread: http://forums.oracle.com/forums/message.jspa?messageID=3674008#3674008).
    Now, leaving aside the EnvironmentConfig.setReadOnly(true), I have found quite a few possible other configuration options in the EnvironmentParams class and I'm wondering if there are some that I should be using.
    Here are a couple of examples:
    - ENV_RECOVERY
    - ENV_RUN_INCOMPRESSOR
    - ENV_RUN_CHECKPOINTER
    - ENV_RUN_CLEANER
    Would it make sense to configure any of these?
    2. After creating a hot backup I have tried to test its state. Basically, the approach was quite simple:
    - open a read-only env on the backup
    - try to access the databases in the env
    My idea is that if the above 2 ops are succeeding then there is a very good chance that the backup is correct.
    Now, while playing with the above configuration options I have noticed that if I'm setting ENV_RECOVERY to false in this test environment, then any attempt to access the databases within results in a DatabaseNotFoundException.
    Can someone help me understand what is happening? (basically, I cannot make a connection between recovery and access to the DBs in the environment)
    Many thanks in advance,
    ./alex
    PS: I forgot to mention that I'm running quite an old version: 2.1.30
    Edited by: Alex Popescu on Aug 13, 2009 5:50 AM

    ENV_RECOVERY - suppresses running recovery at Environment creation. Internal parameter.
    ENV_RUN_INCOMPRESSOR, ENV_RUN_CHECKPOINTER, ENV_RUN_CLEANER - disable the INCompressor, Checkpointer, and Cleaner Daemon threads.
    You should not need to adjust any of these parameters for your DbBackup utility. In fact, ENV_RECOVERY is an "internal use only" parameter.
    PS: I forgot to mention that I'm running quite an old version: 2.1.30
    I'm sorry to be the bearer of bad news, but as my colleague Mark Hayes stressed in a previous post, you really need to upgrade from 2.1.30 to 3.3.latest. It is highly probable that you will eventually run into bugs with 2.x and we are unlikely to (1) be willing to diagnose them, and (2) fix them. As Mark pointed out, 2.1 is 3.5 years old and the product has had a lot of improvements in that time. We are happy to answer questions on this forum relating to the latest major release, but dealing with old and crusty code is certainly going to be well below our allowable priority level.
    Charles Lamb

  • Oracle Hot Backup Confusion

    Hi everybody,
    This is regarding a confusion about Oracle hot backup.
    Suppose I have 3 online redo logs and I have put a tablespace in backup mode.
    As a result, for whatever changes are made, Oracle will start generating redo for the entire block so as to avoid fractured blocks.
    My question is: what if, while in this mode, all my online redo log files/members get filled up? How does Oracle then manage to keep recording the transactions/changes made to a certain block in the tablespace?
    1. Does Oracle perform a recovery from the archived redo logs once the tablespace is taken out of backup mode?
    2. Or is there a reclamation/re-sizing of online redo log file space from other segments?
    Thanks & Regards,
    Prosenjit Mukherjee.

    I am not even sure what #2 means! Anyways!
    "Suppose i have 3 Online Redo Logs and i have put a tablespace in Backup Mode."
    It doesn't matter how many redo log groups you have, and AFAIK this has no relation to a tablespace being in backup mode whatsoever.
    "So as a result whatever changes are made, Oracle will start generating redo for the entire block so as to avoid Fractured Blocks."
    Partially correct! The whole block gets copied into the redo only the first time it is changed. For subsequent changes, only the change vectors go into the log buffer and from there to the redo log files. This is exactly equivalent to what happens when the tablespace is not in backup mode.
    "What if, while in the process of doing this, all my Online Redo Log files/members get filled up? How does Oracle manage to keep recording the changes?"
    If you fill up the current redo log, a log switch follows, making the LGWR move from the current redo log group to the next and triggering a checkpoint, which makes the DBWR write the buffers to the datafiles.
    Edit: I apologize, I read this point a little too fast! If all the log files fill up and none can be reused, the database will hang. No more transactions are allowed, as there is no place to write their change vectors. Sorry, I understood at first that you were talking about standard log switching.
    "1. Does Oracle perform a recovery from the archived redo logs once the datafile is taken out of backup mode?"
    What recovery? Before the files go into backup mode, a "global checkpoint" is performed for them, pushing all of their buffers from the cache into the files; then the checkpoint SCN is frozen for them while they continue to handle all read/write operations. Once they come out of backup mode, the next time a checkpoint is performed their SCN gets synced with the rest of the database. Where does recovery come into the picture here?
    "2. Or is there a reclamation/re-sizing of online redo log file space from other segments?"
    As I said before, I don't understand what this point even means.
    HTH
    Aman....
    Edited by: Aman.... on Oct 24, 2009 9:19 PM added Edit
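
    The extra first-touch redo is easy to observe; a minimal sketch comparing the session statistic 'redo size' around an identical UPDATE, once normally and once with the tablespace in backup mode:

    SELECT n.name, s.value
    FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE n.name = 'redo size';
    -- run the UPDATE, re-run the query, and note the delta;
    -- then repeat with ALTER TABLESPACE ... BEGIN BACKUP in effect and compare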

  • Consistent hot backup possible

    Is a consistent hot backup possible?
    I would like to perform hot backups while the database is basically in a read-only state. I am currently using the Oracle-recommended backups via OEM, for example:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup recovery area;
    }
    Would executing the SQL command "alter database begin backup;" before running the above RMAN script accomplish this task? Then of course, when completed, execute "alter database end backup;".
    My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from a tape backup.
    I am open to any other ideas.
    Thanks for your help in advance.
    Ed - Wasilla, Alaska
    Edited by: evankrevelen on Sep 11, 2008 10:18 PM

    Thanks everyone who replied to this thread.
    Just to clarify my complete backup strategy, there are two RMAN scripts run on daily and weekly basis. The daily does pickup the archivelogs. I had shown the weekly when first opening this thread. Here is the daily.
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup archivelog all not backed up;
    backup backupset all not backed up since time 'SYSDATE-1';
    }
    My question now is what RMAN does in the incrementals. It appears to be updating the original level 0 copies of the datafiles with the changed blocks only. Is the new copy of the datafile now a level 0 copy?
    Here is a transcript from one of the daily backups.
    Starting recover at 11-SEP-08
    channel oem_disk_backup: starting incremental datafile backupset restore
    channel oem_disk_backup: specifying datafile copies to recover
    recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
    recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
    recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
    recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
    recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
    channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
    channel oem_disk_backup: restored backup piece 1
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
    channel oem_disk_backup: restore complete, elapsed time: 00:05:16
    Finished recover at 11-SEP-08
    Starting backup at 11-SEP-08
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
    input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
    input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
    input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
    input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
    Finished backup at 11-SEP-08
    It appears to be updating the previous copy with the changed blocks, thus rolling the datafile copy forward to a new level 0 copy.
    Then, to restore from the backup, RMAN would first use this new copy of the datafile and then apply any archivelogs to it to bring the database to the point in time the incremental backup was taken.
    Are these assumptions true?
    Thanks for your help,
    ED
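
    On the restore side, a minimal sketch of how RMAN uses those rolled-forward copies (this is generic incrementally-updated-backup behaviour, not specific to this environment):

    RMAN> RESTORE DATABASE;    -- restores from the image copies, already rolled forward
    RMAN> RECOVER DATABASE;    -- applies any newer incrementals plus archived logs
    -- or, faster, run the database directly off the copies in the recovery area:
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> RECOVER DATABASE;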

  • When to use REUSE/SET, NO-ARCHIVELOGS in create controlfile in HOT BACKUP?

    I am a trainee Oracle DBA and have the following queries. Kindly reply with a detailed explanation, as I want to get my concepts cleared!
    Q1>> While doing a user-managed hot backup, when we are creating a control file (CREATE CONTROLFILE) from the trace for recovery, when do we use the following options:
    1. REUSE / SET
    2. ARCHIVELOG / NOARCHIVELOG
    Q2>> In what scenarios do we re-create the control file while recovering datafiles from a hot backup?
    Thanks a tonne!
    Regards,
    Bhavi

    Hemant K Chitale wrote:
    1.1 It is not "REUSE/SET". These are two very different clauses.
    REUSE is when you want the CREATE to overwrite the existing controlfile(s). If the controlfile(s) {as named in the instance parameter file, initSID.ora or spfileSID.ora} is/are already present, the CREATE fails unless REUSE is specified.
    SET is when you want to change the database name. Oracle then creates the controlfile(s) with the specified database name and updates the headers of all the datafiles. If you run a CREATE with a database name that is different from that in the datafile headers, the CREATE fails unless you include a SET to specify that the name must be changed. Note that this also means that the name in the instance parameter file must already have been updated.
    1.2 ARCHIVELOG/NOARCHIVELOG is to set the database state. The same is achieved by issuing an "ALTER DATABASE ARCHIVELOG/NOARCHIVELOG" when the database is MOUNTed but not OPEN.
    2. You'd run the CREATE CONTROLFILE if you do not have a binary backup of the controlfile.
    Optionally, you can also use CREATE CONTROLFILE to rename all the datafiles by specifying the new locations of the datafiles -- the datafiles must already be present in the new locations, else the CREATE fails if it doesn't find a datafile that is included in the list of datafiles included in the CREATE statement.
    RMAN is the correct way to run Backups. User Managed Backup scripts are used in cases like Storage-based Snapshots / SnapClones / BCV.
    Hemant K Chitale
    Thanks, that was really helpful. One last question: when do we use the RESETLOGS/NORESETLOGS clause in the CREATE CONTROLFILE statement? I have noticed that at times it accepts RESETLOGS, while at other times it accepts NORESETLOGS.
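
    As a concrete illustration (all names made up), a CREATE CONTROLFILE skeleton showing where the clauses sit. NORESETLOGS is only accepted when the current online redo logs are intact and usable; after restoring from a backup or any incomplete recovery you must use RESETLOGS, and SET DATABASE always requires RESETLOGS:

    CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
        MAXLOGFILES 16
        MAXDATAFILES 100
    LOGFILE
        GROUP 1 '/u01/oradata/prod/redo01.log' SIZE 50M,
        GROUP 2 '/u01/oradata/prod/redo02.log' SIZE 50M
    DATAFILE
        '/u01/oradata/prod/system01.dbf',
        '/u01/oradata/prod/users01.dbf'
    CHARACTER SET WE8ISO8859P1;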

  • Using Data Guard and hot backups - 9.2.0.6.0

    Hi all,
    I have an existing 9.2.0.6.0 database that is setup in a DataGuard environment - one primary database with a physical standby in a separate datacenter. It is all setup and it works beautifully. On our primary database, we currently have 2 different types of backups we are doing - we do an export of the main schema (all of the application data is all in this one schema) 4 times a day, and we do a full database hot backup once a night.
    My question is in regard to the hot backup: I don't know that it is even worth doing a hot backup of this database. I am trying to think of a situation where we would actually want to restore a hot backup of the primary database. If we ran into some kind of data issue, it would probably be quickest and easiest to restore data from one of the exports, and when we did that restore (import), I assume the data change would be replicated through Data Guard to the standby site. But if there were some situation where we wanted to restore a recent hot backup of the primary database, that would essentially break the Data Guard configuration, and I assume that after the hot backup was restored, we would have to somehow re-instantiate Data Guard at the standby site.
    Does anyone have any input on this? If you are running with DataGuard, is it even worth it to be doing hot backups? What kind of situation would call for restoring a hot backup, instead of just failing over to the standby?
    Thanks!
    --Brad

    "If we ran into some kind of a data issue, it would probably be quickest and easiest to restore data from one of the exports."
    It would be quicker to fail over to the standby database than to restore from a dump file. After all, you maintain the standby database for exactly that reason.
    Also, how would you restore the database up to the latest changes using export/import? You would have to restore using RMAN and apply the logs.
    You can take backups from the standby database. You do not need to back up the primary.
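
    If you do keep taking backups, a minimal sketch of offloading them to the standby site instead (RMAN connects to the mounted standby instance; a recovery catalog is the usual way to make these backups visible to the primary; connect strings are made up):

    $ rman target sys/***@standby catalog rman/***@rcat
    RMAN> BACKUP DATABASE;
    RMAN> BACKUP ARCHIVELOG ALL;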

  • Which is better for User Managed Hot backup - running in cron or oracle scheduler

    We are taking hot backups in our environment using a SQL script that just does begin backup, host copy, and end backup.
    Should this script be put in cron to run twice weekly, or should we use the Oracle scheduler?
    Which is the better option, and why?

    The answer to your question depends on your situation. Do you have a more experienced Unix admin or DBA?
    From my point of view cron is better, because all you need is the SQL*Plus utility and a shell script:
    export ORACLE_SID=...
    sqlplus <username>/<password> as sysdba <<EOF
    ALTER TABLESPACE .. BEGIN BACKUP;
    host cp ..
    ALTER TABLESPACE .. END BACKUP;
    EXIT
    EOF
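
    If you prefer the database-side scheduler over cron, a minimal DBMS_SCHEDULER sketch (job name and script path are made up):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'HOT_BACKUP_JOB',
        job_type        => 'EXECUTABLE',
        job_action      => '/u01/scripts/hot_backup.sh',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=TUE,FRI; BYHOUR=2',
        enabled         => TRUE);
    END;
    /

    One practical difference: a scheduler job only runs while the instance is up, so a backup that must run even when the database is down still belongs in cron.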

  • 10.2.0.4/10.2.0.5 Hot Backups (11r2 considered too) - ORACLE_HOME

    Scenario:
    We have a script that runs a hot backup of each database on a VM server. Because there are multiple databases on the VM we have a mixture of database versions. For example, we might have 8 databases on Server A running 10.2.0.5 and we might have 2 databases on the same server (Server A) running 10.2.0.4 (different homes of course). Our hot backup script sets one single ORACLE_HOME env variable for running a hot backup for all databases.
    Question:
    Can I safely export the script's ORACLE_HOME as 10.2.0.5 for all the databases on the VM and still get successful hot backups of the 10.2.0.4 and the 10.2.0.5 databases?
    Followup Question:
    When we eventually start upgrading the 10.2.0.5 databases to 11r2, will a single ORACLE_HOME set by the same hot backup script work knowing that some databases may be 11r2 and some may be 10.2.0.5 on the same VM?
    Thanks for your input.

    The cron job runs a script that builds a list of the SIDs on that server, and then runs another script that builds a sqlplus script (detailed below). The sqlplus script is a file that contains the following commands for each SID on the machine:
    set feedback off
    set pagesize 0
    set termout off
    spool /xxx/xxx/b_backup.sql
    select 'set termout on' from dual;
    select 'set echo on' from dual;
    select distinct 'alter tablespace '||tablespace_name||' begin backup;'
    from dba_data_files;
    select 'select * from v$backup;' from dual;
    select 'exit' from dual;
    spool off
    @/xxx/xxx/b_backup.sql
    exit
    So what we end up with is a sqlplus file that will execute the above commands for each SID on the box. The databases may be different versions though (a mix of 10.2.0.4 and 10.2.0.5).
    My basic question is: Will using ORACLE_HOME pointing to 10.2.0.5 sqlplus have any negative effect on backing up a 10.2.0.4 database?
    My secondary/follow-up question is concerning future upgrades to 11r2: Will using ORACLE_HOME pointing to 11.2.0.2 sqlplus have any negative effect on backing up the remaining (not yet upgraded) 10.2.0.5 databases?
    Does sqlplus version matter when running sqlplus from a different ORACLE_HOME on a database with a different version?

  • Rman hot backup

    Hello,
    I am using an RMAN hot backup script to back up the database every day. It is working, but it is not deleting backups older than 2 days.
    I also have a question: my database is in archivelog mode, and every day about 6-7 .arch files are generated in my archive directory.
    The old files are not being deleted while new files are generated every day, so they keep adding up and consuming space.
    SQL> show parameter archive
    NAME TYPE VALUE
    archive_lag_target integer 0
    log_archive_config string
    log_archive_dest string /u03/archive_logs/DEVL
    log_archive_dest_1 string
    Also, should I set log_archive_dest_1 as the archive location, or just log_archive_dest? What is the difference?
    My RMAN hot backup script, rman_backup.sh, is:
    #!/bin/bash
    # Declare your environment variables
    export ORACLE_SID=DEVL
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    export PATH=$PATH:${ORACLE_HOME}/bin
    # Start the rman commands
    rman target=/ << EOF
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/backup/autobackup_control_file%F';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;
    run {
    allocate channel d1 type disk;
    allocate channel d2 type disk;
    allocate channel d3 type disk;
    allocate channel d4 type disk;
    ALLOCATE CHANNEL RMAN_BACK_CH01 TYPE DISK;
    CROSSCHECK BACKUP;
    BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/u03/backup/databasefiles_%d_%u_%s_%T';
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL FORMAT '/u03/backup/archivelogs_%d_%u_%s_%T' DELETE INPUT;
    BACKUP AS COMPRESSED BACKUPSET CURRENT CONTROLFILE FORMAT '/u03/backup/controlfile_%d_%u_%s_%T';
    CROSSCHECK BACKUP;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT EXPIRED BACKUP;
    RELEASE CHANNEL RMAN_BACK_CH01;
    }
    EXIT;
    EOF
    thanks

    Ahmer Mansoor wrote:
    "RMAN never deletes the backups unless there is space pressure in the Recovery Area. Instead it marks the backups as OBSOLETE based on the retention policy (in your case 2 days). To confirm it, set DB_RECOVERY_FILE_DEST_SIZE to some smaller value; RMAN will remove all the obsolete backups automatically to reclaim space."
    Be very careful with this. If you generate a LOT of archivelog files and you exceed this size, on the next log switch your database will hang with "cannot continue until archiver freed". RMAN will not automatically remove anything; RMAN only removes stuff when you program it in your script.
    See:
    http://docs.oracle.com/cd/E14072_01/backup.112/e10642/rcmconfb.htm#insertedID4 Retention Policy (recovery window or redundancy)
    Things like:
    set the retention window and number of copies
    crosscheck backup
    delete obsolete <-- deletes old, redundant, no-longer-necessary backups/archivelogs
    delete expired <-- NOTE: if you manually delete files and do not execute DELETE EXPIRED, the DB_RECOVERY_FILE_DEST_SIZE usage remains the same. So you can clean out the space and Oracle will still say the location is "full".
    Understand that if you set this parameter too small and your backup recovery window/redundancy are set incorrectly, you can exhaust the "logical" space of this location again, putting your database at risk. Your parameter could be set to 100G on a 400G file system, and even though you have 300G available, Oracle will see the limit of this parameter.
    My suggestion: get into a DEV/TEST environment and test how best to configure RMAN database, control file, and archivelog backups for your environment, also taking into consideration OS tape backup solutions. I always configure DISK for RMAN backups, then have some other tape backup utility sweep those locations to tape, ensuring that I have sufficient backups to reconstitute my database. I also include a copy of the init.ora file and the password file, as well as the spfile backup, in this location.
    "In case of archivelogs, it is better to create and execute a purge job to remove archivelogs after backing them up on tape."
    I almost agree. I try to keep all archivelogs necessary for recovery from the last full backup online, and I try to keep a full backup online as well. It is much faster to restore from disk than to try to locate things on tape.
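
    For the archivelog build-up specifically, a deletion policy lets RMAN refuse to delete anything that has not been backed up yet, so the cleanup stays safe; a minimal sketch (11g syntax, disk channel assumed):

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;
    RMAN> DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;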
