Consistent hot backup possible

Is a consistent hot backup possible?
I would like to perform hot backups while the database is in basically a read-only state. I am currently using the Oracle-recommended backups via OEM; for example:
run {
allocate channel oem_disk_backup device type disk;
recover copy of database with tag 'ORA$OEM_LEVEL_0';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
release channel oem_disk_backup;
allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
backup recovery area;
}
Would executing the SQL command "alter database begin backup;" before running the above RMAN script accomplish this task? Then, of course, when it completes, execute "alter database end backup;".
My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from a tape backup.
I am open to any other ideas.
Thanks for your help in advance.
Ed - Wasilla, Alaska
Edited by: evankrevelen on Sep 11, 2008 10:18 PM
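[Editor's note] For context on the BEGIN/END BACKUP question above: bracketing with those commands is the user-managed hot backup technique, and RMAN backups do not require it, since RMAN reads datafiles consistently on its own. A minimal sketch of the user-managed form, assuming a database already in ARCHIVELOG mode:

```sql
-- User-managed hot backup sketch (NOT required around RMAN scripts)
ALTER DATABASE BEGIN BACKUP;        -- freezes datafile header SCNs; extra whole-block redo is generated
-- ... copy the datafiles at the OS level here (cp, tar, etc.) ...
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- ensure redo covering the backup window is archived
```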

Thanks everyone who replied to this thread.
Just to clarify my complete backup strategy: there are two RMAN scripts, run on a daily and a weekly basis. The daily one does pick up the archivelogs. I showed the weekly one when first opening this thread. Here is the daily:
run {
allocate channel oem_disk_backup device type disk;
recover copy of database with tag 'ORA$OEM_LEVEL_0';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
release channel oem_disk_backup;
allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
backup archivelog all not backed up;
backup backupset all not backed up since time 'SYSDATE-1';
}
My question now is what RMAN does with the increments. It appears to update the original level 0 copies of the datafiles with changed blocks only. Is the new copy of the datafile now a level 0 copy?
Here is a transcript from one of the daily backups.
Starting recover at 11-SEP-08
channel oem_disk_backup: starting incremental datafile backupset restore
channel oem_disk_backup: specifying datafile copies to recover
recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
channel oem_disk_backup: restored backup piece 1
piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
channel oem_disk_backup: restore complete, elapsed time: 00:05:16
Finished recover at 11-SEP-08
Starting backup at 11-SEP-08
channel oem_disk_backup: starting incremental level 1 datafile backupset
channel oem_disk_backup: specifying datafile(s) in backupset
input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
channel oem_disk_backup: starting piece 1 at 11-SEP-08
channel oem_disk_backup: finished piece 1 at 11-SEP-08
piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
channel oem_disk_backup: starting incremental level 1 datafile backupset
channel oem_disk_backup: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel oem_disk_backup: starting piece 1 at 11-SEP-08
channel oem_disk_backup: finished piece 1 at 11-SEP-08
piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
Finished backup at 11-SEP-08
It appears to be updating the previous copy with updated blocks thus rolling forward the datafile copy to a new level 0 copy.
Then to restore from the backup RMAN would first use this new copy of the datafile and then apply any archivelogs to them to bring the database to the point in time the incremental backup was taken.
Are these assumptions true?
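[Editor's note] If those assumptions hold (they match the documented behavior of incrementally updated backups), a restore from the rolled-forward image copies could be sketched roughly like this in RMAN; the exact syntax should be checked against your version's documentation:

```sql
RMAN> run {
  switch database to copy;              -- point the controlfile at the updated image copies
  recover database;                     -- apply any remaining increment plus archived redo
  sql 'alter database open resetlogs';
}
```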
Thanks for your help,
ED

Similar Messages

  • Is possible to recover hot.backup of a nonarchive database?

I have read in several books that hot backups must be done on a database set to ARCHIVELOG mode, but is it physically possible to recover a hot backup of a database that was in NOARCHIVELOG mode? Obviously some transactions would be lost, but is it possible to do it?
I'm trying to do it unsuccessfully (when I try to start up the database, it asks for recovery of the database and then always asks me for an archived log file...).
    Thanks

What you have read is correct.
You can't do a hot backup of a NOARCHIVELOG database.

  • Is it possible to apply hot backup to do data refresh for another env?

There are two environments (UAT and DEV), and I am running a HOT BACKUP on the UAT environment. Is it possible to use the hot backup datafiles and apply them to the DEV environment, so that I can do the data refresh in DEV?
Please advise,
    Amy

You have two options: either go with Duplicate Database, or go with Restore and Recover Database to Another Host.
    Khurram

  • Hot backup on NOARCHIVELOG mode?

    DB version:10gR2, 11G
    Why is it not possible to do a Hot Backup in NOARCHIVELOG mode? What role does archived redo log files have in a Hot backup?

Because it takes more than zero seconds to back up a database.
    Say your database consists of 1 single datafile of 10MB. This datafile, at the OS filesystem level, consists of 2,560 blocks of 4KB each.
    If you start a backup of the datafile, the OS utility (tar, cp, cpio, whatever command) reads the first 4KB block and copies it out. It then, after a certain time, reads the next block. And so on till it gets to the last block of the file.
    However, since the database is "open" transactions may have updated blocks in the datafile.
    Therefore, at time t0, block 1 may have been copied out. At time t5, block 128 may have been copied out. At time t32, block 400 may have been copied out. Unfortunately, some user may have updated block 1 at time t3 and block 128 at time t8.
What would happen if these blocks, having been copied out at different times, were restored? They would be inconsistent!
    It is the ArchiveLog mechanism that allows Oracle to know that a datafile was "active" when it was being backed up. Oracle has to "re-play" all changes that occurred on the datafile from the time the datafile backup began (at t0) till it ended (at the 2,560th block).

  • Why we cannot take hot backup if database is in noarchive log mode

    Hi,
    I am aware that if database is in noarchive log mode, we cannot take hot backups and only cold backup is possible.
    I would like to know the technical reason behind this restriction?
    Thank You
    Sarayu

    Hot backups are fuzzy backups, inconsistent, in other words, since something is always happening in the database.  When you recover, you restore data files and then apply redo to make the transactions consistent.  You can do a complete recovery or recover to a point in time.  So where does the redo come from?  That's what we call archiving redo logs.  When the online redo gets full, it gets archived.
    In the case of an instance crash, the redo is there in the online redo logs, so Oracle can recover automatically.  Anything beyond that, having to do with storage media, is a media recovery, and requires those archived logs.  So unless you have some other way to get your data back, always run in archivelog mode.
    It is really important to understand the concepts.  Please read the docs.
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/cncptdba.htm#CNCPT031
    http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmintro.htm#i1005488
    It may be worth your while to get a third party backup and recovery book too.
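[Editor's note] As the reply says, hot backups require ARCHIVELOG mode. For reference, enabling it is a mounted-database operation; a minimal sketch:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST    -- verify it now reports automatic archival enabled
```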

  • Status of datafiles during Hot backup

    Hi All Gurus,
    I have a problem. I have read different contradictory statements about what happens to data files when they are in backup mode (during hot backup). At some places it is given that after we execute the statement "ALTER TABLESPACE xxxx BEGIN BACKUP", oracle stops writing any data changes to data files so that a consistent copy can be made and for any changes during copying, It writes whole blocks of changed data to redo log fiiles. But at some other places it is given that in backup mode oracle continues to write to the datafiles just as normal, even during copying, it only freezes the datafile header's SCN.
    Now my question is that if we consider the later case, how it is possible to make a consistent copy of a file which is being changed during the copy? Which version of file will be copied? (Because file was different at the time copying started than at time the copying ends), Also if oracle keeps writing to datafiles in the backup mode then what is the benefit of putting them in the backup mode? I mean what purpose does the "ALTER TABLESPACE xxxx BEGIN BACKUP" statement serve if file is still getting changed during backup?
    Thanks,
    Amir Siddiqui.

Ok, you are right, but if the datafile gets changed during the backup, how can we get a consistent copy? The datafile's data is different at the start and end of copying, so the backup copy would contain some blocks with older data and some with newer; that is, we get an unreliable and inconsistent copy. Any idea how to handle this?
You're right, the copy will not be consistent. But when you go into backup mode, any time a block is modified for the first time since backup mode began, Oracle will log that entire block to redo, rather than just the change vector. When backup mode ends, Oracle writes an end-of-backup-mode marker to the redo log stream. Now, your backup contains datafiles which are not consistent copies. But that's ok. If you ever need to use those datafiles in a recovery, you'll copy them from backup, and then you'll apply archived redo logs. At a minimum, the archived redo from the point in time where backup mode began (which is recorded in the frozen datafile header) until the tablespace comes out of backup mode (applied from the redo log stream where the end-backup marker was recorded) is required to have a valid, consistent datafile. So the idea is that, by applying all the appropriate archived redo, you'll "repair" any inconsistencies in the datafiles.
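[Editor's note] A side note on observing this state: while a tablespace is in backup mode, its datafiles are reported as ACTIVE in V$BACKUP, along with the SCN frozen in the file header. A quick check, using only the standard dynamic performance views:

```sql
SELECT d.name, b.status, b.change#, b.time
FROM   v$backup b JOIN v$datafile d ON b.file# = d.file#
WHERE  b.status = 'ACTIVE';   -- files currently in BEGIN BACKUP mode
```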
    Hope that helps,
    -Mark

  • How to find out which archived logs needed to recover a hot backup?

    I'm using Oracle 11gR2 (11.2.0.1.0).
    I have backed up a database when it is online using the following backup script through RMAN
    connect target /
    run {
    allocate channel d1 type disk;
    backup
    incremental level=0 cumulative
    filesperset 4
    format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
database;
}
The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
    I have not deleted any archived log.
    How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
    Thanks for any help.

    A few ways :
    1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
    1b. Review the alert.log of the database that was backed up.
    1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
1d. These (from 1c) are the minimal archivelogs that you need to RECOVER with. You can choose to apply additional archivelogs that were generated at the source database to continue to "roll forward".
    2a. Do a RESTORE DATABASE alone.
    2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
    2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
    2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
(Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this, i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
    3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
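[Editor's note] Hemant's steps 2b and 2c could be sketched as queries like the following; the :lowest_scn placeholder stands for the value found in 2b, and exact column usage should be verified against your version:

```sql
-- 2b: on the restored database, after RESTORE DATABASE
SELECT MIN(checkpoint_change#) AS lowest_scn,
       MAX(checkpoint_change#) AS highest_scn
FROM   v$datafile;

-- 2c: on the source database, find the archivelogs spanning those SCNs
SELECT sequence#, first_change#, next_change#, name
FROM   v$archived_log
WHERE  next_change# > :lowest_scn
ORDER  BY sequence#;
```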
    Hemant K Chitale

  • What are the consequences of not backing up controlfile in hot backup.

    Hi All,
We have 8i, 9i, 10g and 11g databases. The unfortunate thing is that we are using user-managed hot backups for backing up our databases, and for some reason we are not backing up the controlfile. I would like you to comment on this: they said they will recreate a text-based controlfile and do recover database until cancel using backup controlfile. I know that if we have the paths of all the datafiles we can use this, but is this a sound strategy? Of course, no one would agree with it. I would really appreciate it if someone could express their opinion in detail.
    Regards

    Hi,
    The database consists of the following files:
    Spfile/Pfile
    Controlfile
    Datafile
    Redo Log
    archivelog
When we talk about backing up the database, no matter how, we must take a backup of all of these files.
If these files are not in your backup, you have an incomplete backup.
Why not take a backup of the controlfile to a binary file? It is simpler and faster to restore the controlfile using the binary copy:
ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
TO TRACE should be used to produce SQL statements that can later be used to re-create your controlfile:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Why not use RMAN? A resource with endless options and features. With user-managed backups you have almost no options/features compared with RMAN, and it can be used with all Oracle databases without additional license.
    Regards,
    Levi Pereira

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBA's !!
    I am doing on Recovery scenario where I have taken One full hot backup of my Portal Database (EPR) and Restored it on New Test Server. Also I restored Archive logs from last full hot backup for next 6 days. Also I restored the latest Control file (binary) to their original locations. Now, I started the recovery scenario as follows....
    1) Installed Oracle 10.2.0.2 compatible with restored version of oracle.
    2) Configured tnsnames.ora, listener.ora, sqlnet.ora with hostname of Test server.
    3) Restored all Hot backup files from Tape to Test Server.
    4) Restored all archive logs from tape to Test server.
    5) Restored Latest Binary Control file from Tape to Test Server.
    6) Now, Started recovery using following command from SQL prompt.
    SQL> recover database until cancel using backup controlfile;
    7) Open database after Recovery Completion using RESETLOGS option.
In the above scenario I completed steps up to 5) successfully. But when I execute step 6), the recovery completes with the warning: Recovery completed but OPEN RESETLOGS may throw error "system file needs more recovery to be consistent". Please find the following snapshot:
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> SQL> SQL> SQL> SQL> SQL> SQL>
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know What should be the reason behind recovery failure !
Note: I tried to open the database using the last full hot backup only, without applying any archives. The database then opens successfully. It means my database installation and configuration are OK!
Please let me know why my incomplete recovery using archivelogs fails.
    Atul Patil.

    oh you made up a new thread so here again:
There is nothing wrong.
You restored your backup, archives, etc.
You started your recovery and Oracle applied all the archives, but the archive
'/oracle/EPR/oraarch/1_9624_601570270.dbf'
does not exist because it represents your current online redo log, which is not present.
The recovery process cancels by itself.
    the solution is:
    restart your recovery process with:
    recover database until cancel using backup controlfile
    and when oracle suggests you '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    type cancel!
    now you should be able to open your database with open resetlogs.
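[Editor's note] Putting that answer together, the fixed recovery session would look roughly like this (file names taken from the error output above):

```sql
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread 1
ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
SQL> ALTER DATABASE OPEN RESETLOGS;
```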

  • Hot backup - best practice

    I have written a simple TimerTask subclass to handle backup for my application. I am doing my best to follow the guidelines in the Getting Started guide. I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup. I have included a fragment of the code that we are using and I am hoping that perhaps someone could validate the approach.
    I should add that we are running a replicated group in which the replicas forward commands to the master using RMI and the Command pattern.
    Kind regards
    James Brook
// Start backup, find out what needs to be copied.
backupHelper.startBackup();
log.info("running backup...");
// Stop all writes by quiescing the database
RepositoryNode.getInstance().bdb().quiesce();
// Ensure all data is committed to disk by running a checkpoint with the default configuration - expensive
environment.checkpoint(null);
try {
     String[] filesForBackup = backupHelper.getLogFilesInBackupSet();
     // Copy the files to archival storage.
     for (int i = 0; i < filesForBackup.length; i++) {
          String filePathForBackup = filesForBackup[i];
          try {
               File source = new File(environment.getHome(), filePathForBackup);
               File destination = new File(backupDirectoryPath, new File(filePathForBackup).getName());
               ERXFileUtilities.copyFileToFile(source, destination, false, true);
               if (log.isDebugEnabled()) {
                    log.debug("backed up: " + source.getPath() + " to destination: " + destination);
               }
          } catch (FileNotFoundException e) {
               log.error("backup failed for file: " + filePathForBackup +
                    " the number stored in the holder did not exist.", e);
               ERXFileUtilities.deleteFile(new File(backupDirectory, LATEST_BACKUP_NUMBER_FILE_NAME));
          } catch (IOException e) {
               log.fatal("backup failed for file: " + filePathForBackup, e);
          }
     }
     saveLatestBackupFileNumber(backupDirectory, backupHelper.getLastFileInBackupSet());
} finally {
     // Remember to exit backup mode, or all log files won't be cleaned
     // and disk usage will bloat.
     backupHelper.endBackup();
     // Allow writes again
     RepositoryNode.getInstance().bdb().quiesce();
     log.info("finished backup");
}
    The quiesce() method acquires the writeLock on the ReentrantReadWriteLock.
    One thing I am not sure about is when we should checkpoint the database. Is it safe to do this after acquiring the lock, or should we do it just before?
    The plan is to backup files to a filesystem which is periodically copied to offline backup storage.
    Any help much appreciated.

    James,
You wrote: "I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup." One thing to make clear is that you can choose to do a hot backup or an offline backup. The Getting Started Guide, Performing Backups chapter defines these terms. When doing a hot backup, it's not necessary to stop any database operations or do a checkpoint, and you should use DbBackup. On the other hand, when doing an offline backup, you would want to quiesce the database, close or sync your environment, and copy your .jdb files. In an offline backup, there is no need to use DbBackup.
    So while it doesn't hurt to do all of the steps for both hot and offline backup, as you are doing, it's more than you need to do.
    You may find the section on http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/logfilesrevealed.html informative in how it explains how JE's storage is append only. This may help you understand why a hot backup does not require stopping database operations. If you want to do a hot backup, the code you show looks fine, though you don't need to quiesce the database or run a checkpoint.
    If you are running replication, you should read the chapter http://www.oracle.com/technology/documentation/berkeley-db/je/ReplicationGuide/dbbackup.html. Backing up a node that is a member of a replicated group is very much like a standalone backup, except you will want to catch that extra possible exception. Also note that backup files belong to a given node, and the files from one node's backup shouldn't be mixed with the files from another node.
    To be clear, suppose you have node A and node B. The backup of nodeA contains files 00000001.jdb, 00000003.jdb. The backup of node B contains files 00000002.jdb, 00000003.jdb. Although A's file 0000003.jdb and B's 00000003.jdb have the same name, they are not the same file and are not interchangeable.
    In a replication system, if you are doing a full restoration, you can use B's full set of files to restore A, and vice versa.
    Linda

  • Hot backup related question

    Hi all,
    i have a question related to hot backup
If we take a hot backup, i.e. alter tablespace tbs begin backup, then it freezes the datafile header and all the changes go to the redo log files. If we have two log groups and the hot backup continues from BOD (beginning of day) to EOD, there might be a situation in which a log switch happens, which means the logs will be archived; so how will the datafile be recovered?
Does it freeze the whole datafile or only the datafile header?

    user00726 wrote:
    Hi all,
    i have a question related to hot backup
If we take a hot backup, i.e. alter tablespace tbs begin backup, then it freezes the datafile header and all the changes go to the redo log files. If we have two log groups and the hot backup continues from BOD (beginning of day) to EOD, there might be a situation in which a log switch happens, which means the logs will be archived; so how will the datafile be recovered? Does it freeze the whole datafile or only the datafile header?
The crux of a hot backup done via a manual cp command is that the datafile remains completely operational. All objects that can undergo changes, like tables, keep working just as they do when no backup is running. As all the change vectors are logged in the redo log files, the same remains true when the datafile goes into backup mode. Oracle just freezes the header of the datafile, freezing the SCN at that point so it knows where to start recovery from in the event of a crash. When the file is put into backup mode, the header is frozen and a checkpoint is done first. That makes sense, as checkpointing flushes the related buffers for that file to disk, ensuring the file is self-consistent at that point.
There is no checkpoint with the end backup, though. The file is brought in sync with the rest of the database files when the next full, system-level checkpoint happens. All the changes required to bring the file in sync with the rest of the database are already logged in the redo and archived logs. So in the case of a crash, Oracle just applies those to the respective file and the recovery is complete; there is no impact on recoverability. The one issue is extra redo generation, due to the first-time logging of whole data blocks into the redo stream to avoid the fractured-block problem.
    HTH
    Aman....

  • Hot backup slow

    Hi,
I am working in Oracle 8i. We are using hot backups on our database. Normally it takes 2 hours to complete the backup process,
but today it took 5 hours. Kindly let me know possible ways to identify why it is taking so much more time.
    Rgds.

    Hi,
    There could be any number of reasons for the backup taking longer - but here are a few likely candidates:
    The server was busier than usual - run top to see if anything unusual is hogging CPU.
The database was busier than usual - did someone run a large report/update?
    Tape drive contention - do you share a tape device with another app?
    The database has grown significantly.
    Cheers,
    Andy Barry
    http://www.shutdownabort.com

  • Hot backup centralized of database distributed

I would like to know if it is possible to make a hot backup of a remote database.
    help!

You might want to elaborate on what you plan to do. What's the OS?
You can connect remotely to the server to start the backup job, but the actual job will run locally on the server.

  • Hot backup question

    Our DBA duties are currently outsourced and I'm just curious about something.
We asked them to copy a schema from our Production database (during the day) down to its equivalents in the Acceptance and Test environments. We found a problem: the maximum value of a primary key on a table was 100121, while the sequence used to generate the value for the primary key was only at 100081.
Is this just one of the risks of taking a hot backup, so that the copy should be scheduled after hours? Or did he possibly do something wrong?
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit

    If the request was to copy only one schema, then it is unlikely that a "hot backup" would have been used. Quite possibly exp/imp or expdp/impdp would have been used to export the schema from Production and import it into the other two databases.
    Sequences are always "a problem" when export - import is used while the database is active. The export of the sequence and the export of the table (or tables) that use the sequence are not at the same time / SCN so the values may diverge as they get updated in the source database.
    The plan is to reset the sequence in the target database by manually incrementing it till it reaches the desired target (100122).
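[Editor's note] The manual increment Hemant describes is usually done by temporarily widening the sequence's increment; the names here are hypothetical:

```sql
-- gap = 100121 (max key in table) - 100081 (current sequence value) = 40
ALTER SEQUENCE my_seq INCREMENT BY 41;
SELECT my_seq.NEXTVAL FROM dual;      -- jumps past the highest used key
ALTER SEQUENCE my_seq INCREMENT BY 1; -- restore the normal increment
```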
    Hemant K Chitale

  • Making atabase clone (but without data=rows) from hot-backup

    Hi,
I am a newbie in Oracle administration. I am using an Oracle 10g database. I would like to clone the database, i.e. make an exact copy of tablespaces, tables, indexes, triggers, functions etc., but without data.
What is the most efficient way to do it? I have a full hot backup of another database. Is it possible to restore that backup using RMAN, but without data (=rows), onto a freshly installed Oracle database?
    Thanks for any advice
    Groxy

    I don't think RMAN has the feature you are looking for.
    For this purpose, it could be easier to use:
    - either old export with FULL=Y ROWS=N
    - or new data pump export with CONTENT=METADATA_ONLY.
    However in both cases you would need to recreate the empty database before importing the export file.
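[Editor's note] The two options Pierre mentions would look roughly like this from the command line; connection strings and file names are placeholders:

```sh
# classic export, structure only (no rows)
exp userid=system@srcdb FULL=Y ROWS=N FILE=full_norows.dmp

# Data Pump equivalent, metadata only
expdp system@srcdb FULL=Y CONTENT=METADATA_ONLY DUMPFILE=meta.dmp DIRECTORY=DATA_PUMP_DIR
```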
    Message was edited by:
    Pierre Forstmann
