Centralized hot backup of a distributed database

I would like to know if it is possible to take a hot backup of a remote database.
Help!

You might want to elaborate on what you plan to do. What's the OS?
You can connect remotely to the server to start the backup job, but the actual job will run locally on the server.

Similar Messages

  • How to take a Hot backup of Oracle database

    1: put the db in archive log mode
    2: set ORACLE_SID to the correct database
    3: login to sqlplus
    4: verify the name of the db that you are connected to
    select name from v$database;
    5: check if the db is in archive log mode
    select log_mode from v$database;
    if it is not in archive log mode, we will switch it (steps 7-10 below)
    another command to check:
    archive log list;
    6: find where on disk oracle writes archive log when it is in archive log mode
    sql> show parameter log_archive_dest_1;
    if the value is empty, no archive destination is set, so we need to change it
    sql> alter system set log_archive_dest_1='LOCATION=c:\database\oradata\finance\archived_logs\'
    scope=spfile;
    7: shutdown immediate; < this is done just to prepare the db for hot backups >
    8: startup the db in mount mode
    startup mount;
    ( 3 startup types : nomount - just starts the instance, mount - locates the control files and open up according to the values, open - finds the datafiles from the control files and opens up the db )
    9: put the db in archive log mode
    alter database archivelog;
    10: open the database
    alter database open;
    11: check the status of the db
    select log_mode from v$database;
    SQL> archive log list;
    12: create a directory for the archived logs
    check if it is empty; if it is, we need to force a log switch
    sql> alter system archive log current;
    run it 5 times (type / and press Enter to repeat), then check the archive log directory; we will find files there
    13: make a table in the database and insert data in it
    create table employees (fname varchar2(20));
    check the table
    desc employees;
    insert values
    insert into employees values ('Mica');
    14: tablespace must be in hot backup mode
    check the status
    select * from v$backup;
    if the status shows NOT ACTIVE, we need to change it
    we cannot put the db in hot backup mode unless it is in archive log mode
    change to hot backup mode
    alter database begin backup;
    check the status
    select * from v$backup;
    15: now we only need to COPY the DBF FILES
    copy *.dbf <destination location>
    16: need to take the db out of hot backup mode
    alter database end backup;
    17: need to make another archive log switch
    alter system archive log current;
    18: need to copy the control file now; this must be a binary backup
    alter database backup controlfile to '<location>\controlbackup';
    19: insert more values into the table
    insert into employees values ('NASH');
    COMMIT;
    make another archive log switch : alter system archive log current;
    do the same process for more values
    20 : backup all the archive logs to a new location
    21: shut down the db and simulate a hardware error by deleting all the files from the database folder
    22: try to start sqlplus and the db ::: error
    23: copy all the backups back to the db directory
    we need to copy the control files: rename the binary backup of the control file and make as many copies as needed
    24: try to mount the db: error < must use RESETLOGS or NORESETLOGS >
    25: need to do a recovery of the database
    shutdown
    restore the archive logs
    startup mount;
    recover database until cancel using backup controlfile;
    it will ask for a log file:
    accept the suggested log to continue the recovery
    type CANCEL to cancel the recovery
    26: check status: open the database in readonly
    alter database open read only;
    check the tables to see the data
    shutdown immediate
    startup mount;
    recover again : recover database until cancel using backup controlfile;
    if oracle asks for a log that does not exist, all we have to do is type CANCEL
    27: open the database
    alter database open;
    need to do reset logs
    alter database open resetlogs;
    28: check which db you are connected to, and check the tables
    thanks and regards
    VKN
    site admin
    http://www.nitrofuture.com
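    Condensed, the hot-backup window itself (steps 14 to 18 above) comes down to a few commands. A sketch using the same Windows-style source path as the walkthrough; the destination folder is a placeholder:
    SQL> alter database begin backup;
    SQL> host copy c:\database\oradata\finance\*.dbf c:\backup\finance\
    SQL> alter database end backup;
    SQL> alter system archive log current;
    SQL> alter database backup controlfile to 'c:\backup\finance\controlbackup';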

    A very long list ... let me make it shorter.
    SQL> archive log list;
    If I see this:
    Database log mode              No Archive Mode
    I put the database into archivelog mode and leave it there forever.
    If it is in archivelog mode:
    $ rman TARGET SYS/<password>@<service_name> NOCATALOG
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    Though there are a lot of things one could do better, such as incrementals with block change tracking, creating an RMAN catalog, etc.
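    To illustrate the "things one could do better" mentioned above, a minimal sketch (not from the original reply; the tracking-file path is a placeholder and the syntax assumes 10g or later):
    SQL> alter database enable block change tracking using file '/u01/app/oracle/bct/change_tracking.f';
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
    With block change tracking enabled, the level 1 backup reads only the blocks changed since the level 0, which keeps the routine backups small and fast.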

  • HOT Backups for smaller databases

    Should we schedule hot backups for small databases? The database size is around 1.5 GB. We have already scheduled a daily FULL DB export.
    The database is Oracle 8i.

    RMAN> run {
    2> allocate channel ch1 type disk format 'e:\rman_backup\backup%d_DB_%u_%p';
    3> backup database;
    4> backup archivelog all;
    5> release channel ch1;
    6> }
    RMAN-03022: compiling command: allocate
    RMAN-03026: error recovery releasing channel resources
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure during compilation of command
    RMAN-03013: command type: allocate
    RMAN-06172: not connected to recovery catalog database
    RMAN>
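    The RMAN-06172 at the end suggests RMAN expected a recovery catalog connection. On 8i, RMAN does not default to NOCATALOG the way later releases do, so one likely fix (an assumption based on the error shown, not something confirmed in the thread) is to start RMAN explicitly without a catalog and re-run the same block:
    $ rman target / nocatalog
    run {
    allocate channel ch1 type disk format 'e:\rman_backup\backup%d_DB_%u_%p';
    backup database;
    backup archivelog all;
    release channel ch1;
    }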

  • Hot Backup for oracle database?

    Dear all,
    I want to change from Cold Backup to Hot Backup. Does anyone know how to do a Hot Backup, and is there a simple document I can follow? If the database is running in ARCHIVELOG mode, does its size grow very fast, or are there other side effects?
    Please advise,
    Amy

    I want to change Cold Backup to Hot Backup. Does anyone know how to do a Hot Backup, and is there some simple document I can follow?
    An online/hot backup does not need a database shutdown: we can put the database in backup mode and then take the backup even while users' read/write activity continues. This strategy is useful if our database runs 24x7.
    For online/hot backup the database should be in archive mode.
    I hope you know how to turn on archiving. Before turning it on, check whether archiving is already enabled:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     1
    Next log sequence to archive   2
    Current log sequence           2
    SQL>
    You may also check it by connecting to the database as SYS:
    SQL> select log_mode from v$database;
    LOG_MODE
    ARCHIVELOG
    SQL>
    If your database is not in archive mode, first enable archiving. Please take a cold backup before turning archiving on:
    SQL>shutdown immediate
    SQL>startup mount
    SQL>alter database archivelog;
    SQL>alter database open;
    SQL> archive log list
    Now you are able to take a hot/online backup. It is up to you whether you use user-managed backup and recovery or RMAN, Oracle's own free tool for backup and recovery; however, my recommendation would be RMAN.
    http://download-uk.oracle.com/docs/cd/A97630_01/server.920/a96572/toc.htm
    http://www.oracle.com/technology/deploy/availability/htdocs/rman_overview.htm
    ARCHIVELOG mode, is the size grow very fast or other effect will overcome?
    Whether the archived logs grow fast or slowly each day depends entirely on the activity of your database; it varies from database to database. You will have to observe it yourself after turning archive logging on.
    Khurram
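    To put a number on the archive-log growth, a query along these lines can be used to watch the daily archived-log volume (an illustrative sketch, not part of the original reply):
    SQL> select trunc(completion_time) log_day,
                count(*) logs,
                round(sum(blocks * block_size)/1024/1024) mb
         from v$archived_log
         group by trunc(completion_time)
         order by 1;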

  • How to restore RMAN hot backup to another database on another server?

    I want to know how to restore RMAN hot backup from production server to another database on a testing server.
    The hot backup is from a database named PROD on the production server
    The database to be restored with the hot backup is TEST on the testing server. There is already a PROD database on the testing server and this PROD database must be kept.
    I have read some threads about changing initTEST.ora to PROD to restore such a backup, but (I think) that will not work in my case since I already have a PROD database on the testing server.
    The version is 11gR2 on Linux but the compatible parameter is set to 10.2.0.1.0.
    Thanks for any help.

    Hi,
    Since you are on 11g, hope this helps you http://shivanandarao.wordpress.com/2012/04/28/duplicating-database-without-connecting-to-target-database-or-catalog-database-in-oracle-11g/
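    In outline, the backup-based (targetless) duplication described at that link looks something like this (a rough sketch; the TEST instance name, init file, and backup path are placeholders, not details from this thread):
    -- on the test server, start the auxiliary TEST instance in NOMOUNT with an initTEST.ora
    -- that sets db_name=TEST plus db_file_name_convert / log_file_name_convert
    SQL> startup nomount pfile='initTEST.ora';
    $ rman auxiliary /
    RMAN> duplicate database to TEST backup location '/u01/backups/PROD' nofilenamecheck;
    Because no connection is made to the PROD target, the existing PROD database on the testing server is never touched.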
    Looks like forum is of no help to you. To get better responses, consider closing your threads by providing appropriate points if you feel that they have been answered. Keep the forum clean !!

  • Is it possible to recover a hot backup of a NOARCHIVELOG database?

    I have read in several books that hot backups must be done on a database in ARCHIVELOG mode, but is it physically possible to recover a hot backup of a database that was in NOARCHIVELOG mode? Obviously some transactions would be lost, but can it be done?
    I'm trying to do it, unsuccessfully (when I try to start up the database it asks for media recovery and then always asks me for an archived log file...).
    Thanks

    What you have read is correct.
    You can't do a hot backup of a NOARCHIVELOG database.

  • Creating RAC database thru RMAN hot backup

    Hi Guys,
    We have an ERP PROD database on a 2-node RAC (servers 116 and 117), with a scheduled RMAN hot backup of this database.
    We also have ASM implemented.
    We have another 2 servers, 36 and 37 (RAC and ASM), which host the ERP TEST database, and we want to refresh this
    test database using the ERP PROD RMAN backup.
    Can someone please post the proper steps to restore the RMAN backup to the ERP TEST database, given that ASM is also in use?
    It is a little urgent for us, so help will be appreciated.
    Regards,
    Milan Rathod

    1. Take the backup of the production database:
    $ rman target /
    backup as compressed backupset database plus archivelog format '/u01/db/backup/%d_%I_%s_%T';
    2. Move the backup to the test server (same location, /u01/db/backup/, because RMAN looks for the backup sets in that location).
    3. Add the two lines below to the test database's initTEST.ora file and comment out the cluster_ parameters, e.g.:
    DB_FILE_NAME_CONVERT = (+EBAOUATDATADG1/prod/,/u01/db/app/admin/test/)
    LOG_FILE_NAME_CONVERT = (+EBAOUATDATADG1/prod/,/u01/db/app/admin/test/)
    4. Check the connectivity between the prod and test databases:
    from the test machine: $ tnsping PROD
    from the prod machine: $ tnsping TEST
    5. On the test server:
    startup nomount
    rman target sys@proddb auxiliary /
    duplicate target database to test;
    6. After the duplicate/clone step completes, convert the single-instance TEST to a RAC instance by uncommenting the cluster_ parameters in initTEST.ora.
    7. Startup mount the second node in TEST:
    ALTER DATABASE ENABLE PUBLIC THREAD 2;
    alter database open;
    Hope this helps you.
    Good Luck

  • Unique Error while doing hot backup Cloning!

    Dear All,
    I am using EBS 11.5.10.2 with DB 9.2.0.6.
    I am doing hot backup cloning by referring to "Cloning Oracle Application 11i/R12 with Rapid Clone - Database (9i/10g/11g) Using Hot Backup on Open Database" (https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=760772.1)
    I am facing a unique issue: when I run adcfgclone.pl dbconfig <my_context_file>, the application on my Production server starts misbehaving and gives me the error:
    Node Id does not exist for the current application server id.
    When I check the FND_NODE table, I find that the server id column is empty.
    I then have to run the following steps as the resolution:
    1. Shutdown all the services.
    2. EXEC FND_CONC_CLONE.SETUP_CLEAN;
    3. COMMIT;
    4. Run AutoConfig on all tiers, first on the DB tier and then on the APPS tiers, to repopulate the required system tables.
    5. Start all the services and verify the issue.
    Then the application behaves normally.
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    Please suggest,
    Anchorage :)

    I am facing a unique issue, when I run adcfgclone.pl dbconfig <my_context_file>, application of my Production Server start misbehaving and it gives me an error: Node Id does not exist for the current application server id.
    Where did you get this error?
    This is very strange; how can this happen in Production when you are doing the cloning on a TEST/DEV instance?
    Is there any additional step to be followed while Hot Backup Cloning other than mentioned in the above mentioned ML Doc?
    I feel there are no additional steps, as I recently did a successful hot backup cloning using the same document.

  • Question on recovery from Hot backup

    Whenever I try to recover from my hot backup using recover database until cancel (or any other until option),
    I get messages similar to the following:
    SQL> recover database using backup controlfile until cancel ;
    ORA-00279: change 212733060 generated at 11/18/2008 23:50:58 needed for thread
    1
    ORA-00289: suggestion : /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00280: change 212733060 for thread 1 is in sequence #282
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/1_282_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 2
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/2_257_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /d01/oradata/devl/arc/3_258_667362770.dbf
    ORA-00279: change 212733060 generated at needed for thread 4
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    I have all my archive files in the archive dest location.
    Is there any way to suppress these prompts and let Oracle find all the archive files itself?
    As you can see from the messages, Oracle is suggesting the correct file, so why is there a prompt, and why do we have to provide the file names one by one?
    Please help !!!

    Oracle will look for the needed archived log files in your log archive destination.
    If you are sure all these files are under /d01/oradata/devl/arc/, you can enter AUTO and Oracle will work down the list until done.
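    As an alternative to typing AUTO at each prompt, SQL*Plus can be told up front to apply the suggested logs automatically (a small sketch of the standard SET AUTORECOVERY option):
    SQL> set autorecovery on
    SQL> recover database using backup controlfile until cancel;
    With AUTORECOVERY ON, each suggested archived log is applied from the archive destination without prompting.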

  • Recovery from hot backup

    I have a problem recovering a hot backup from a production database to a test database and opening it for use.
    I have a hot backup taken without the temporary tablespace and rollback tablespace, and I can't shut down the production database (24x7). Please give me a procedure for recovering the database from the hot backup without the online redo logs, temp tablespace, or rollback tablespace, using only the archived logs, and after that for renaming the database SID.
    Every time I recover the database, it still wants to recover more from the archived logs, and I can't open it; it is still inconsistent. Please send an answer to [email protected]
    thanks!

    Hi,
    What is the OS and version of Oracle?
    Do you have all the archivelogs after the hot backup? Is the backup a valid backup? Is it a RMAN backup or normal file system backup of the datafiles?
    Regards,
    Badri.

  • Consistent hot backup possible

    Is a consistent hot backup possible?
    I would like to perform hot backups while the database is basically in a read-only state. I am currently using the Oracle-recommended backups via OEM; for example:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup recovery area;
    }
    Would executing the SQL command "alter database begin backup;" before running the above RMAN script accomplish this task? Then of course when it completed I would execute "alter database end backup;".
    My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from the tape backup.
    I am open to any other ideas.
    Thanks for your help in advance.
    Ed - Wasilla, Alaska
    Edited by: evankrevelen on Sep 11, 2008 10:18 PM

    Thanks everyone who replied to this thread.
    Just to clarify my complete backup strategy: there are two RMAN scripts, run on a daily and a weekly basis. The daily one does pick up the archivelogs; I showed the weekly one when first opening this thread. Here is the daily:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup archivelog all not backed up;
    backup backupset all not backed up since time 'SYSDATE-1';
    }
    My question now is what RMAN does in the increments. It appears to be updating the original level 0 copies of the datafiles with changed blocks only. Is the new copy of the datafile now effectively a level 0 copy?
    Here is a transcript from one of the daily backups.
    Starting recover at 11-SEP-08
    channel oem_disk_backup: starting incremental datafile backupset restore
    channel oem_disk_backup: specifying datafile copies to recover
    recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
    recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
    recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
    recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
    recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
    channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
    channel oem_disk_backup: restored backup piece 1
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
    channel oem_disk_backup: restore complete, elapsed time: 00:05:16
    Finished recover at 11-SEP-08
    Starting backup at 11-SEP-08
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
    input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
    input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
    input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
    input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
    Finished backup at 11-SEP-08
    It appears to be updating the previous copy with the changed blocks, thus rolling the datafile copy forward to a new level 0 copy.
    Then, to restore from the backup, RMAN would first use this new copy of the datafiles and then apply any archivelogs to bring the database to the point in time the incremental backup was taken.
    Are these assumptions true?
    Thanks for your help,
    ED
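    For reference, the restore path implied by those assumptions would look roughly like this (a sketch, assuming the rolled-forward image copies and the archived logs are still reachable by the instance):
    RMAN> startup mount;
    RMAN> switch database to copy;   # point the control file at the rolled-forward image copies
    RMAN> recover database;          # apply any newer incrementals and archived logs on top
    RMAN> alter database open;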

  • Archive logs are missing in hot backup

    Hi All,
    We are using the following commands to take a hot backup of our database. The hot backup is fired by the "backup" user on a Linux system.
    =======================
    rman target / nocatalog <<EOF
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
    run {
    allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    backup as COMPRESSED BACKUPSET database include current controlfile;
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    #--Backup archive logs and delete what we've backed up
    backup as COMPRESSED BACKUPSET archivelog all not backed up delete all input;
    release channel oem_backup_disk1;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    exit
    EOF
    =======================
    After the command "sql 'alter system archive log current';" (used 2 times), I see the following lines in the alert log 2 times. Because of this, not all of the online logs are getting archived (2 logs are missing per day), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?
    =======================
    Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
    ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
    ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
    ORA-15055: unable to connect to ASM instance
    ORA-01031: insufficient privileges
    =======================
    Regards,
    Kunal.

    Thank you all for the help; please find additional information below. I got the following error because a log sequence was missing. Every day during the hot backup there are 2 missing archive logs, which makes our backup inconsistent and useless.
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32531_632680691.dbf thread=1 sequence=32531
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28768_632680691.dbf thread=2 sequence=28768
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28769_632680691.dbf thread=2 sequence=28769
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28770_632680691.dbf thread=2 sequence=28770
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32532_632680691.dbf thread=1 sequence=32532
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28771_632680691.dbf thread=2 sequence=28771
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28772
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28773
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 12/13/2012 04:22:56
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    ORA-00310: archived log contains sequence 28772; sequence 28773 required
    ORA-00334: archived log: '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    Let me try the suggestions provided above.

  • Using Data Guard and hot backups - 9.2.0.6.0

    Hi all,
    I have an existing 9.2.0.6.0 database that is set up in a Data Guard environment - one primary database with a physical standby in a separate datacenter. It is all set up and it works beautifully. On our primary database we currently do 2 different types of backups - an export of the main schema (all of the application data is in this one schema) 4 times a day, and a full database hot backup once a night.
    My question is in regards to the hot backup - I don't know that it is even worth doing a hot backup of this database? I am trying to think of a situation where we would actually want to restore a hot backup of the primary database... If we ran into some kind of a data issue, it would probably be quickest and easiest to restore data from one of the exports, and when we did that restore (import), I assume that data change would be replicated through DataGuard to the standby site. But if there was some kind of situation where we wanted to restore a recent hot backup of the primary database, that would essentially break the Data Guard configuration, and I assume that after the hot backup was restored, we would have to somehow re-instantiate Data Guard on the standby site.
    Does anyone have any input on this? If you are running with DataGuard, is it even worth it to be doing hot backups? What kind of situation would call for restoring a hot backup, instead of just failing over to the standby?
    Thanks!
    --Brad

    If we ran into some kind of a data issue, it would probably be quickest and easiest to restore data from one of the exports
    It would be quicker to fail over to the standby database than to restore from a dump file. After all, you maintain the standby db for that reason.
    How can you restore the database up to the latest changes using export/import? You would have to restore using RMAN and apply the logs.
    You can back up from the standby database. You do not need to back up the primary.

  • 10.2.0.4/10.2.0.5 Hot Backups (11r2 considered too) - ORACLE_HOME

    Scenario:
    We have a script that runs a hot backup of each database on a VM server. Because there are multiple databases on the VM we have a mixture of database versions. For example, we might have 8 databases on Server A running 10.2.0.5 and we might have 2 databases on the same server (Server A) running 10.2.0.4 (different homes of course). Our hot backup script sets one single ORACLE_HOME env variable for running a hot backup for all databases.
    Question:
    Can I safely export the script's ORACLE_HOME as 10.2.0.5 for all the databases on the VM and still get successful hot backups of the 10.2.0.4 and the 10.2.0.5 databases?
    Followup Question:
    When we eventually start upgrading the 10.2.0.5 databases to 11r2, will a single ORACLE_HOME set by the same hot backup script work knowing that some databases may be 11r2 and some may be 10.2.0.5 on the same VM?
    Thanks for your input.

    The cron job runs a script that builds a script that selects the SIDs on that server and then it runs another script that builds a sqlplus script (script detailed below). The sqlplus script is a file that contains the following commands for each SID on the machine:
    set feedback off
    set pagesize 0
    set termout off
    spool /xxx/xxx/b_backup.sql
    select 'set termout on' from dual;
    select 'set echo on' from dual;
    select distinct 'alter tablespace '||tablespace_name||' begin backup;'
    from dba_data_files;
    select 'select * from v$backup;' from dual;
    select 'exit' from dual;
    spool off
    @/xxx/xxx/b_backup.sql
    exit
    So what we end up with is a sqlplus file that will execute the above commands for each SID on the box. The databases may be different versions though (a mix of 10.2.0.4 and 10.2.0.5).
    My basic question is: Will using ORACLE_HOME pointing to 10.2.0.5 sqlplus have any negative effect on backing up a 10.2.0.4 database?
    My secondary/follow-up question is concerning future upgrades to 11r2: Will using ORACLE_HOME pointing to 11.2.0.2 sqlplus have any negative effect on backing up the remaining (not yet upgraded) 10.2.0.5 databases?
    Does sqlplus version matter when running sqlplus from a different ORACLE_HOME on a database with a different version?

  • Hot backup slow

    Hi,
    I am working on Oracle 8i. We are using hot backups for our database. Normally it takes 2 hrs to complete the backup process,
    but today it took 5 hrs. Kindly let me know the possible ways to identify why the backup is taking more time.
    Rgds.

    Hi,
    There could be any number of reasons for the backup taking longer - but here are a few likely candidates:
    The server was busier than usual - run top to see if anything unusual is hogging CPU.
    The database was busier than usual - did someone run a large report/update
    Tape drive contention - do you share a tape device with another app?
    The database has grown significantly.
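    A quick check for the last candidate (an illustrative query, not from the original reply):
    SQL> select round(sum(bytes)/1024/1024/1024, 1) db_size_gb from dba_data_files;
    Compare the result with what it was when the backups still took 2 hours.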
    Cheers,
    Andy Barry
    http://www.shutdownabort.com
