Applying the arch logs after the full backup
DB version: 10gR2
In NOCATALOG mode, I am going to take a full (hot) backup of DBx (800 GB) using the command
run {
allocate channel c1 TYPE DISK connect 'sys/dempo' FORMAT '/u04/bkp/stprod_%U.rbk';
backup as compressed backupset database tag 'full' plus archivelog;
}
and restore it into DBy.
But how can I apply the archive logs that were generated in DBx during the full backup, after the full backup?
The restored control file doesn't know about these archived logs. Right?
With controlfile autobackup off, a controlfile backup is still included in a whole-database backup, because backing up a whole database means backing up its SYSTEM tablespace (datafile 1, to be exact, which is part of SYSTEM). However, the SYSTEM datafile (or the multiple datafiles in SYSTEM) may be the first, the second, or the nth datafile to be backed up. The backupset, consisting of multiple backup pieces, may back up other tablespaces after the SYSTEM datafile. Those other tablespaces, and any redo log switches, have timestamps that succeed the SYSTEM datafile backup. Therefore, those other tablespaces and redo log switches are not captured in the controlfile backup that was included with the SYSTEM datafile.
(Yes, I know we can use the controlfile backup --- but I am just pointing out that this doesn't meet the OP's requirement that the controlfile be aware of all redo logs that have been switched during or after the database backup).
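A practical way to handle this (a sketch; the staging directory below is a placeholder, not from the original post): after restoring DBy, copy across the archived logs that DBx generated during and after the backup, catalog them so the restored controlfile learns about them, and then recover:

```
RMAN> CATALOG START WITH '/u04/arch_from_dbx/';   -- placeholder staging directory
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

CATALOG START WITH records the copied archived logs in the restored controlfile, which is exactly the piece of information it was missing.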
Hemant K Chitale
http://hemantoracledba.blogspot.com
Similar Messages
-
Does restoring the full backup require the PC to disjoin and rejoin the domain?
We have a full backup of one client, and we want to restore that full backup. After restoring it, is it required to disjoin and rejoin the domain?
some info on computer object password expiration:
https://blogs.technet.com/b/askds/archive/2009/02/15/test2.aspx
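(For reference, a sketch with placeholder server and credential names: if only the machine account password is out of sync, it can often be reset in place with netdom instead of disjoining and rejoining.)

```
netdom resetpwd /s:DC01 /ud:CONTOSO\admin /pd:*
```

Run on the restored machine; /s: names a domain controller, and /pd:* prompts for the password.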
So if the backup is older and the PC was used in the meantime, chances are it changed its machine account password, so you would have to disjoin and rejoin the restored computer, as the restored password wouldn't match the one stored in AD. -
Only import constraints from the full backup dump using Data Pump!
Dear Friends,
I am using Oracle 10g on my production server. I first exported a full database backup using expdp. Now I want to import just the constraints of my production server into a test server. Is it possible to import only the constraints into a test server from a full dump backup?
Please inform.
Yet another 'I refuse to read the documentation, unless some volunteer spoon-feeds me' question.
What happened to the DBA community? Is there some special virus spreading all over the globe?
Or is the virus called 'OTN', and is OTN making DBAs permanently lazy, constantly trying to do as little as possible for their money?
You can easily construct the answer yourself from this overview.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref389
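From that overview, a sketch (the directory object, dump file name, and credentials below are made up): Data Pump import can filter on object type with INCLUDE, and can write the DDL to a script instead of executing it with SQLFILE:

```
# Write the constraint DDL from the full dump into a script for review:
impdp system/password DIRECTORY=dpump_dir DUMPFILE=full.dmp INCLUDE=CONSTRAINT SQLFILE=constraints.sql
# Or apply only the constraint metadata to the test server:
impdp system/password DIRECTORY=dpump_dir DUMPFILE=full.dmp CONTENT=METADATA_ONLY INCLUDE=CONSTRAINT
```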
And please re-read the Forums Etiquette post, or read it for the first time; it states that you need to peruse the documentation prior to asking a question.
Your assistance in diminishing the boat load of RTFM questions is appreciated.
Sybrand Bakker
Senior Oracle DBA -
Changing the Full Backup filename
Hi All:
This actually seems like a ridiculous simple question...
If I name my full backup medium, the name is static. Is there any way to create a filename that will change (auto-increment, date append, etc.)?
(Same question for the incremental file.)
The reason I need this is I don't want the files to overwrite.
Thanks
Tony
Hi Tony,
sorry, it's not possible to create a backup medium (full backup or incremental) such that the filename will change (auto-increment, date append, etc.). Your only option is to change the name of the file at the OS level or, more simply, to move the backup file to another location. You can then continue to use the same medium for subsequent backups.
Since you don't want the files to be accidentally overwritten, you should also set 'overwrite = no' in the medium properties.
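At the OS level that can be scripted (a sketch; the path below is illustrative, not your real medium file): after each backup completes, move the file aside with a timestamp suffix so the next run cannot overwrite it:

```shell
# Move a completed backup file aside with a timestamp suffix.
# /tmp/FULL_BACKUP stands in for the real backup medium file path.
BACKUP=/tmp/FULL_BACKUP
touch "$BACKUP"                                   # stand-in for the finished backup
mv "$BACKUP" "${BACKUP}_$(date +%Y%m%d_%H%M%S)"   # e.g. FULL_BACKUP_20150323_120000
```

The same line works for the incremental file; only the path changes.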
Thanks,
Ashwath -
How to apply missing archive logs in a logical standby database
hi,
I have created a logical standby database for our production. It was working fine. We had not set a value for standby_archive_dest, so archive files were sent to $ORACLE_HOME/dbs; due to high transaction volume many files were generated, the $ORACLE_HOME mount filled up, logical standby apply stopped working, and the DB went down as well.
I tried to apply the files once I brought the instance back up, but after applying one archive file it stopped applying further,
and the logical standby is not working properly.
Please let me know: is there a mechanism to apply the missing logs?
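(One mechanism, sketched here with a placeholder file name: register each missing archived log with the logical standby manually, then restart apply.)

```sql
-- Placeholder path/sequence; repeat for each missing log, then restart apply:
ALTER DATABASE REGISTER LOGICAL LOGFILE '/pcegyk/backup/archPCEGYK_1_138753_679263487.arc';
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```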
DB version : 10.2.0.5
OS :OEL 5
regards
Manoj
Hi,
Since the issue happened, I have noticed archives are not shipping.
Following are the outputs
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN STATUS FORMAT A60
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIME, COMMIT_SCN;
EVENT_TIME STATUS
EVENT
18-MAR-12 11:11:35 ORA-16111: log mining and apply setting up
18-MAR-12 22:34:26 ORA-16226: DDL skipped due to lack of support
alter database begin backup
18-MAR-12 22:34:26 ORA-16226: DDL skipped due to lack of support
alter database end backup
EVENT_TIME STATUS
EVENT
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
alter database backup controlfile to '/tmp/PCEGYK_control.ctl'
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
alter database backup controlfile to trace
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
create pfile='/pcegyk/backup/hot_backups/18032012_2234/initPCEGYK.ora_from_spfil
EVENT_TIME STATUS
EVENT
19-MAR-12 00:04:40 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218894 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218895 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218896 to "SYSADM"
EVENT_TIME STATUS
EVENT
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218897 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218898 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218899 to "SYSADM"
EVENT_TIME STATUS
EVENT
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218900 to "SYSADM"
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218901 to "SYSADM"
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218902 to "SYSADM"
EVENT_TIME STATUS
EVENT
20-MAR-12 03:28:09 ORA-16111: log mining and apply setting up
20-MAR-12 03:31:54 ORA-16128: User initiated stop apply successfully completed
20-MAR-12 03:55:13 ORA-16111: log mining and apply setting up
EVENT_TIME STATUS
EVENT
20-MAR-12 04:17:38 ORA-16128: User initiated stop apply successfully completed
20-MAR-12 04:17:54 ORA-16111: log mining and apply setting up
20-MAR-12 21:20:20 ORA-16111: log mining and apply setting up
21 rows selected.
SQL>
===========================
SQL> SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
FILE_NAME
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP DIC DIC THREAD#
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138743_679263487.arc
138743 7.4580E+12 7.4580E+12 19-MAR-12 06:33:16 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138744_679263487.arc
138744 7.4580E+12 7.4580E+12 19-MAR-12 06:36:22 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138745_679263487.arc
138745 7.4580E+12 7.4580E+12 19-MAR-12 06:39:21 NO NO 1
FILE_NAME
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP DIC DIC THREAD#
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138746_679263487.arc
138746 7.4580E+12 7.4580E+12 19-MAR-12 06:41:25 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138747_679263487.arc
138747 7.4580E+12 7.4580E+12 19-MAR-12 06:43:24 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138748_679263487.arc
138748 7.4580E+12 7.4580E+12 19-MAR-12 06:45:21 NO NO 1
FILE_NAME
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP DIC DIC THREAD#
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138749_679263487.arc
138749 7.4580E+12 7.4580E+12 19-MAR-12 06:48:07 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138750_679263487.arc
138750 7.4580E+12 7.4580E+12 19-MAR-12 06:50:19 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138751_679263487.arc
138751 7.4580E+12 7.4580E+12 19-MAR-12 06:52:52 NO NO 1
FILE_NAME
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP DIC DIC THREAD#
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138752_679263487.arc
138752 7.4580E+12 7.4580E+12 19-MAR-12 06:55:32 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138805_679263487.arc
138805 7.4580E+12 7.4580E+12 19-MAR-12 15:33:26 NO NO 1
=================
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
7.4580E+12 7.4580E+12 -
Time Machine - Shows 0 items backed up after finishing full backup
I just completed a full backup onto my Time Machine drive (after completely erasing it). If I enter Time Machine, it shows '0' items in the backup, i.e. no list of files.
If I use the finder and click on 'Time Machine Backups'...I am able to see that in fact 300GB of data was backed up.
I'm confused! I've used Time Machine before and was expecting to see a complete inventory of all the files after I enter Time Machine.
The default is wherever you are when you +Enter Time Machine.+
If you're on your Desktop (or a Finder window of your Desktop), that's what you'll see in the +Star Wars+ display.
If you start on a Finder window of, say, your home folder, or your Document folder, or your Computer name, that's what you'll see in the +Star Wars+ display. -
To clear DB2 archive logs after DB2 database backup
After a DB2 backup, the archive logs remain as they are. Is there a setting to remove archive logs after a database backup? DAS is not installed.
DB2 9.1 FP 6
OS: HP-UX
Hello Anand,
If the archived logs are not required, i.e. they are not needed for a restore,
you can remove them.
You can check this by running:
db2 list history all for db <sid>
The above command will give you a detailed overview of the backup
history: what logs were archived, and which logs were included or are needed for a restore of the backups.
If they are no longer needed for a restore, then it is safe to remove them.
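A sketch of the cleanup itself (the timestamp below is a placeholder): once the history confirms the logs are not needed, prune the history and let DB2 delete the associated recovery objects, including archived logs:

```
db2 connect to <sid>
db2 "prune history 20100101 and delete"
```

The AND DELETE clause physically deletes the log files associated with the pruned history entries.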
Kind regards,
Paul -
How do I recover playlists and subscriptions which are missing? I found my missing music files on the hard drive and the podcasts saved to the hard drive - but is there a way to recover the playlists and the podcast subscriptions in my itunes library?
All iTunes data (playlists, play counts, etc.) is contained in the iTunesLibrary.itl file.
This file is contained in the iTunes folder.
Restore the ENTIRE iTunes folder from the old computer or the backup of the old computer and iTunes will be exactly as it was. -
Differential backup files are almost the same size as full backups.
Hello All,
I have done a little research on this topic and feel like we are not doing anything to cause this issue. Any assistance is greatly appreciated.
The details: Microsoft SQL Server 2008 R2 (SP2) - 10.50.4297.0 (X64) Nov 22 2013 17:24:14 Copyright (c) Microsoft Corporation Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor). The
database I am working with is 23 GB. The full backup files are 23 GB, differentials are 16 GB (and growing), and transaction logs bounce between 700 KB and 20 MB. The backup schedules with T-SQL follow:
T-Log: daily every four hours
BACKUP LOG [my_dabase] TO DISK = N'F:\Backup\TLog\my_dabase_backup_2015_03_23_163444_2725556.trn' WITH NOFORMAT, NOINIT, NAME = N'my_dabase_backup_2015_03_23_163444_2725556', SKIP, REWIND, NOUNLOAD, STATS = 10
GO
Diff: once daily
BACKUP DATABASE [my_database] TO DISK = N'F:\Backup\Diff\my_database_backup_2015_03_23_163657_1825556.dif' WITH DIFFERENTIAL , NOFORMAT, NOINIT, NAME = N'my_database_backup_2015_03_23_163657_1825556', SKIP, REWIND, NOUNLOAD, STATS =
10
GO
declare @backupSetId as int
select @backupSetId = position from msdb..backupset where database_name=N'my_database' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'my_database' )
if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''my_database'' not found.', 16, 1) end
RESTORE VERIFYONLY FROM DISK = N'F:\Backup\Diff\my_database_backup_2015_03_23_163657_1825556.dif' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
GO
Full: once weekly
BACKUP DATABASE [my_database] TO DISK = N'F:\Backup\Full\my_database_backup_2015_03_23_164248_7765556.bak' WITH NOFORMAT, NOINIT, NAME = N'my_database_backup_2015_03_23_164248_7765556', SKIP, REWIND, NOUNLOAD, STATS = 10
GO
declare @backupSetId as int
select @backupSetId = position from msdb..backupset where database_name=N'my_database' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'my_database' )
if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''my_database'' not found.', 16, 1) end
RESTORE VERIFYONLY FROM DISK = N'F:\Backup\Full\my_database_backup_2015_03_23_164248_7765556.bak' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
GO
As you can probably tell we are not doing anything special in the backups, they are simply built out in MSSQL Management Studio. All databases are set to full recovery mode. We do not rebuild indexes but do reorganize indexes once weekly and also update
statistics weekly.
Reorganize Indexes T-SQL (there are 255 indexes on this database)
USE [my_database]
GO
ALTER INDEX [IDX_index_name_0] ON [dbo].[table_name] REORGANIZE WITH ( LOB_COMPACTION = ON )
GO
Update Statistics T-SQL (there are 80 tables updated)
use [my_database]
GO
UPDATE STATISTICS [dbo].[table_name]
WITH FULLSCAN
GO
In a different post I saw a request to run the following query:
use msdb
go
select top 10 bf.physical_device_name, bs.database_creation_date,bs.type
from dbo.backupset bs
inner join dbo.backupmediafamily bf on bf.media_set_id=bs.media_set_id
where bs.database_name='my_database'
order by bs.database_creation_date
Results of query:
physical_device_name database_creation_date type
F:\Backup\Full\my_database_backup_2015_03_07_000006_2780149.bak 2014-02-08 21:14:36.000 D
F:\Backup\Diff\Pre_Upgrade_OE.dif 2014-02-08 21:14:36.000 I
F:\Backup\Diff\my_database_backup_2015_03_11_160430_7481022.dif 2015-03-07 02:58:26.000 I
F:\Backup\Full\my_database_backup_2015_03_11_160923_9651022.bak 2015-03-07 02:58:26.000 D
F:\Backup\Diff\my_database_backup_2015_03_11_162343_7071022.dif 2015-03-07 02:58:26.000 I
F:\Backup\TLog\my_database_backup_2015_03_11_162707_4781022.trn 2015-03-07 02:58:26.000 L
F:\Backup\TLog\my_database_backup_2015_03_11_164411_5825904.trn 2015-03-07 02:58:26.000 L
F:\Backup\TLog\my_database_backup_2015_03_11_200004_1011022.trn 2015-03-07 02:58:26.000 L
F:\Backup\TLog\my_database_backup_2015_03_12_000005_4201022.trn 2015-03-07 02:58:26.000 L
F:\Backup\Diff\my_database_backup_2015_03_12_000005_4441022.dif 2015-03-07 02:58:26.000 I
INIT basically initializes the backup file; in other words, it will overwrite the contents of the existing backup file with the new backup information.
Basically, what you have now is that you are appending all your backup files (differentials) one after the other (like a chain).
You do not necessarily have to do it that way; these differential backups can exist as separate files.
In fact, I would prefer them separate, as it gives quick insight into each file, instead of having to run a RESTORE FILELISTONLY command to read the contents of the backup file.
The point Shanky was making is this: he wants to make sure you are not confusing the actual differential backup size with the physical file size (since you are appending the backups). For example: if your differential backup is 2 GB, and over the next five days you take a differential backup each day and append it to a single file, as you are doing now, each differential backup is 2 GB but the physical file size is 10 GB. He is trying to make sure you don't confuse these two.
Anyway, did you get a chance to run the query below, and did you refer to the link I posted above? It discusses a case where differential backups can be bigger than full backups, and how 'index reorganize' or 'dbcc shrink' operations can cause it.
--backup size in GB
select database_name,backup_size/1024/1024/1024,Case WHEN type='D' then 'FULL'
WHEN type='L' then 'Log'
When type='I' then 'Differential' End as [BackupType],backup_start_date,backup_finish_date, datediff(minute,backup_start_date,backup_finish_date) as [BackupTime]
from msdb.dbo.backupset where database_name='mydatabase' and type in ('D','I')
order by backup_set_id desc
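To keep each differential in its own file rather than appending to one ever-growing file, a sketch (the file name is illustrative): give each run a unique name, or overwrite in place with INIT:

```sql
-- One file per run (unique name); INIT overwrites any previous content of that file:
BACKUP DATABASE [my_database]
TO DISK = N'F:\Backup\Diff\my_database_20150323.dif'
WITH DIFFERENTIAL, INIT, NAME = N'my_database_diff', STATS = 10;
```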
Hope it Helps!! -
Require backup script which deletes the daily backups after weekly backup
Hi,
I am new to Oracle administration. I require a backup script that will delete the daily (incremental) backups after the weekly full database backup is taken.
Currently I use Oracle 10g for one of my production server.
My backup strategy is given below.
I would like to take full database backup weekly(online). To be precise, on Saturday.
I need daily incremental backups, and they should get deleted on Saturday after the weekly full database backup is taken.
Also, I would like to keep 4 weeks of full backups; after that, we need to delete them one by one.
For example, if I have 4 weeks of full backups from February, then after taking the full backup for the first week of March, the backup from the first week of February should get deleted.
Kindly help with your suggestions.
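(A sketch of the usual approach, with example retention values rather than a recommendation: let RMAN's retention policy decide what to delete, instead of scripting deletions by date.)

```
-- Keep enough to recover to any point in the last 4 weeks:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 28 DAYS;
-- Weekly (e.g. Saturday):
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
-- Daily:
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
-- After each backup, remove whatever the policy no longer needs:
DELETE NOPROMPT OBSOLETE;
```

With this, old full backups (and the incrementals that depended on them) age out automatically once they fall outside the window.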
Thanks & Regards,
Rafnas
Hi,
Thank you for pointing me to the blog. It was very useful.
I was testing the backup strategy using your script. It worked fine for the first set of daily backups.
After the weekly backup is taken, the daily backup script fails from the next day onwards.
It gives the following error stack.
Starting backup at 05-MAR-09
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 03/05/2009 01:47:55
ORA-01455: converting column overflows integer datatype
Could you please let me know why an integer datatype comes into the picture while taking a backup?
Please advise.
Thanks & Regards,
Rafnas
Edited by: user10698483 on Mar 5, 2009 3:38 AM -
Problems with DUPLICATE DATABASE when datafile was added after full backup
Hi,
I'm facing a problem when performing database duplication with the RMAN DUPLICATE DATABASE command on a 10g database. If I perform the duplication from a full backup that is missing a datafile (one that was added to the database after the full backup), I get the following error message:
Starting restore at 10-10-2009 18:00:38
released channel: t1
released channel: t2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/10/2009 18:00:39
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 43
The redo log which was CURRENT at the time of datafile 43's creation is also available in the backups. It seems like RMAN can't use the information from the archived redo logs to reconstruct the contents of datafile 43. I suppose that is because the failure is reported already in the RESTORE phase and not in the RECOVER phase, so the archived redo logs aren't even accessed yet. I get the same message even if I make a separate backup of datafile 43 (i.e. a backup that is not in the same backupset as the backup of all the other datafiles).
From the script the duplicate command produces, I guess that RMAN reads the contents of the source database's controlfile and tries to get a backup which contains all the datafiles to restore them on the auxiliary database - if such a backup is not found, it fails.
Of course if I try to perform a restore/recover of the source database it works without problems:
RMAN> restore database;
Starting restore at 13.10.09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
creating datafile fno=43 name=F:\ORA10\ORADATA\SOVDEV\SOMEDATAFILE01.DBF
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to F:\ORA10\ORADATA\SOVDEV\SYSTEM01.DBF
.....
Datafile 43 is recreated and then redo is applied over it.
So, does anyone know whether DUPLICATE DATABASE can't use archived redo logs to recreate the contents of a datafile the way a normal restore/recover does? If so, it means it's necessary to perform a full database backup before every run of DUPLICATE DATABASE whenever a datafile was added after the last backup.
Thanks in advance for any answers.
Regards,
Jure
Hi Jure,
I have hit exactly the same problem during duplication.
Because we back up the archive logs every 6 hours with RMAN, I added an extra run block to this script.
run {
backup incremental level 0
format 'bk_%d_%s_%p_%t'
filesperset 4
database not backed up;
}
(I also then hit a bug in the catalog, which was solved by patching the catalog database from 11.1.0.6 to 11.1.0.7.)
This narrows the window during which a datafile is not part of any RMAN backup down to 6 hours, while skipping datafiles for which a backup already exists.
Regards,
Tycho -
Archive Log vs Full Backup Concept
Hi,
I just need some clarification on how backups and archive logs work. Let's say that starting at 1PM I have archive logs 1,2,3,4,5, and then I perform a full backup at 6PM.
Then I resume generating archive logs at 6PM to get logs 6,7,8,9,10. I then stop at 11PM.
If my understanding is correct, the archive logs should allow me to restore oracle to a point in time anywhere between 1PM and 11PM. But if I only have the full backup then I can only restore to a single point, which is 6PM. Is my understanding correct?
Do the archive logs only get applied to the datafiles when the backup occurs or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
Thanks in advance.
thelok wrote:
Thanks for the great explanation! So I can do a point-in-time restore from any time since the datafiles were last written (or from when I have the last set of backed-up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by forcing a checkpoint with "alter system archive log current" or "backup database plus archivelog"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN of the datafiles. Is this true? This would be for the purposes of preserving disk space.
Hi,
See this example. I hope this explain your doubt.
# My current date is 06-11-2011 17:15
# I not have backup of this database
# My retention policy is to have 1 backup
# I start listing archive logs.
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key Thrd Seq S Low Time Name
29 1 8 A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
30 1 9 A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
31 1 10 A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
32 1 11 A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
33 1 12 A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
## See I have archive logs from time "29-10-2011 12:01:58" until "05-11-2011 23:28:49" but I dont have any backup of database.
# So I perfom backup of database including archive logs.
RMAN> backup database plus archivelog delete input;
Starting backup at 06-11-2011 17:15:21
## Note above: RMAN forces an archive of the current log; the archivelog generated here would be usable only with a previous backup.
## That is not my case... I don't have a backup of the database yet.
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=159 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=29 stamp=766018840
input archive log thread=1 sequence=9 recid=30 stamp=766278027
input archive log thread=1 sequence=10 recid=31 stamp=766366111
input archive log thread=1 sequence=11 recid=32 stamp=766516067
input archive log thread=1 sequence=12 recid=33 stamp=766516350
input archive log thread=1 sequence=13 recid=34 stamp=766516521
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
Finished backup at 06-11-2011 17:15:38
## RMAN finishes the backup of the archivelogs and starts the backup of the database.
## My backup starts at "06-11-2011 17:15:38"
Starting backup at 06-11-2011 17:15:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
Finished backup at 06-11-2011 17:16:03
## And it finishes at "06-11-2011 17:16:03", so I can recover my database from this time.
## I will need the archivelogs (transactions) that were generated during the backup of the database.
## Note: during the backup some blocks are copied before others, so the datafile SCNs are in an inconsistent state.
## To make the database consistent I need to apply the archivelogs, which have all the transactions recorded.
## Starting another backup of archived log generated during backup.
Starting backup at 06-11-2011 17:16:04
## So RMAN automatically forces another "checkpoint" (log switch) after the backup finishes,
## archiving the current log, because this archivelog has all the transactions needed to bring the database to a consistent state.
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=35 stamp=766516564
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
Finished backup at 06-11-2011 17:16:06
## Note: I can recover my database from time "06-11-2011 17:16:03" (end of the full backup)
## until "06-11-2011 17:16:04" (last archivelog generated); that is my recovery window in this scenario.
## Listing Backup I have:
## Archive Logs in backupset before backup full start - *BP Key: 40*
## Backup Full database in backupset - *BP Key: 41*
## Archive Logs in backupset after backup full stop - *BP Key: 42*
RMAN> list backup;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
40 196.73M DISK 00:00:15 06-11-2011 17:15:37
*BP Key: 40* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171521
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
List of Archived Logs in backup set 40
Thrd Seq Low SCN Low Time Next SCN Next Time
1 8 766216 29-10-2011 12:01:58 855033 31-10-2011 23:00:30
1 9 855033 31-10-2011 23:00:30 896458 03-11-2011 23:00:23
1 10 896458 03-11-2011 23:00:23 937172 04-11-2011 23:28:23
1 11 937172 04-11-2011 23:28:23 976938 05-11-2011 23:28:49
1 12 976938 05-11-2011 23:28:49 1023057 06-11-2011 17:12:28
1 13 1023057 06-11-2011 17:12:28 1023411 06-11-2011 17:15:21
BS Key Type LV Size Device Type Elapsed Time Completion Time
41 Full 565.66M DISK 00:00:18 06-11-2011 17:15:57
*BP Key: 41* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171539
Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
List of Datafiles in backup set 41
File LV Type Ckp SCN Ckp Time Name
1 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
2 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
3 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
4 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
5 Full 1023422 06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
BS Key Size Device Type Elapsed Time Completion Time
42 3.00K DISK 00:00:02 06-11-2011 17:16:06
*BP Key: 42* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171604
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
List of Archived Logs in backup set 42
Thrd Seq Low SCN Low Time Next SCN Next Time
1 14 1023411 06-11-2011 17:15:21 1023433 06-11-2011 17:16:04
## Here is what I am trying to explain:
## As I don't have any backup of the database older than my last backup, all archivelogs generated before my full backup are useless.
## Deleting what is obsolete in my environment, RMAN chooses backupset 40 (i.e. all archived logs generated before my full backup).
RMAN> delete obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
*Backup Set 40* 06-11-2011 17:15:37
Backup Piece 40 06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
Deleted 1 objects
In the above example, I could have run "delete archivelog all" before starting the backup, because those logs would not be needed; but to show the example I followed this roundabout way (back up the archivelogs, then delete them).
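That cleanup can also be expressed directly in RMAN (a sketch):

```
-- Delete archived logs that have already been backed up at least once to disk:
DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;
```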
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 7, 2011 1:02 AM -
How to restore using increment backup after full backup restore in RMAN?
Hi All,
We have a files of full backup of database after turning on the Archive log.
And after that, daily incremental backup is taken.
Now, I want to restore the database onto a new machine using both the full and the incremental backup files. Can anybody give me a script to restore the full backup and then apply the incremental backups?
Thanks,
Praveen.

Praveen,
>>
In my case, I have two sets of backups: one full backup and the other incremental backups. In order to bring the restored database up to date, I need to restore the full backup and then apply the incremental backups. I now have an idea of how to restore using the full backup; my doubt is how to update the restored database with the incremental backups as well.
>>
A restore always looks for the level 0 backup, not the incremental backups.
Incremental backups come into the picture during recovery.
During recovery, Oracle looks for incremental backups; if they exist, it performs the recovery with them, and if no incremental backups are available, it falls back to the archived logs.
Therefore, incremental backups are never used during restore; they are used only during recovery.
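A minimal RMAN sketch of this restore-then-recover sequence on a new machine (the DBID, file paths, and channel name below are placeholders you must replace, and I am assuming a controlfile autobackup is available):

```
rman target / nocatalog

SET DBID 1234567890;   -- hypothetical: use the DBID of the source database
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM '/u04/bkp/ctl_autobackup.rbk';   -- path is illustrative
ALTER DATABASE MOUNT;
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  RESTORE DATABASE;   -- reads only the level 0 / full backup
  RECOVER DATABASE;   -- applies incrementals first, then archived logs
}
ALTER DATABASE OPEN RESETLOGS;
```

Note that you never "restore" the incrementals yourself: the single RECOVER DATABASE command finds and applies them before switching to archived redo.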
Jaffar -
MaxDB 7.5 does not clear logarea after full backup
Hi all,
We are running MaxDB 7.5, have turned on auto log backup, and run full data backups every night.
The problem is that the log area is not being cleared as I expected, so it is filling up at about 300 MB per day.
Was I wrong to expect the log area to be cleared after a full backup?
Is it a parameter-setting?
Any ideas how to solve this?
Thanks in advance,
Martin

Hi,
check Note 869267 - FAQ: MaxDB LOG area
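For reference, the state of automatic log backup can be checked from dbmcli; the database name and DBM user below are placeholders, and the exact commands may vary between MaxDB versions:

```
dbmcli -d MAXDB1 -u dbm,dbm autolog_show   # is automatic log backup active?
dbmcli -d MAXDB1 -u dbm,dbm autolog_on     # enable automatic log backup
```

If automatic log backup is active, each completed log backup is what releases log segments; a full data backup by itself does not.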
regards,
kaushal -
What happens after TM deletes full backup & only incrementals are left?
If I understand this correctly, Time Machine does 1 complete backup & then incremental backups after that. If that's correct, what happens when the full backup is about to be erased because the backup drive is full? Does TM alert you to this & ask if you want to create a new full backup instead of another incremental one? Also, is there any way to get TM to create a log so you can see at a glance what's in the incremental backups without having to click through all the folders in each one?
TIA,
Elllen

Answering this dives into how files are represented on a disk. Basically, a disk is a huge expanse of storage space plus an index/catalogue/"table of contents". When you copy data to a disk, you physically copy the data into the storage space and then make an entry in the index recording where it resides on the disk. The operating system sees the reference and doesn't go scouring the disk for files whenever you open a folder. This makes finding files and tracking their locations in the directory tree much easier. A single index link to a chunk of storage is called a "hard link". When the operating system "deletes" a file, it doesn't erase it from storage, but instead just removes the "hard link" index entry.
Since files are referenced through an index instead of reading the actual storage space of the disk itself, you can have multiple references in the index to the same chunk of storage space. Hence, you can have multiple hard links which are essentially the same file.
Keeping that in mind, read on...
Every "incremental" backup is a representation of the full backup, including the first "full" backup. It's no different from any other backup (the only difference is that it was the first one, so it took longer to copy everything over). Once a file is copied over to the backup disk, it is referred to by a "hard link" in the disk's index. This is essentially how a disk points to a file on any drive: the data is copied over, and a hard link is created to the file's physical location on the disk, which is how the operating system knows the file exists on the disk. You can have multiple hard links to a file, where the same allocation of disk space is pointed to by two different hard links in the index. As such, you can have two representations of the same file (and therefore two links to it) on the same disk. If you delete one of the hard links, it will behave as if you've deleted the file, but in fact the file will still remain on disk and will be accessible through the second hard link.
Note that this is different from an Alias (which is also called a "soft link"), which is a file in itself that points to a single hard link. An alias can be removed without the original file being touched.
When TM does the initial backup it copies all the contents over to the backup disk, and then creates hard links to those files which it represents in the first backup folder. When the next backup occurs, any changed files are copied over with new hard links created. The unchanged files, however, are not copied again but instead are just referred to again with new hard links in the new backup folder. As such, new files have one hard link to them (until the next backup), and the old files have two (one in the current backup folder and one in the previous folder). This creates a situation where if the first backup folder is deleted, the original files that were copied are not lost, as they are still represented by hard links in the newer backup folder, and can be accessed that way.
So, why is this different than creating aliases to the original files in the initial backup? Since aliases are just files that point to hard links, if Aliases to the original backups were used instead of hard links, when the original hard links in the initial backup folder are removed the aliases would no longer point to anything anymore and the files would be gone. This defeats the purpose of a backup.
When a file changes from the initial backup, on the next backup the newer version will be copied over and a new hard link to it will be created. If you delete all hard links to the initial backup (by deleting the initial backup folder) the newer file will still be there in the new folder but all references to the old file will be gone and you'll be left with only the new file (along with everything else that was unchanged).
Now to answer your question: basically, when the old backups are deleted by Time Machine, it's not the full/initial backup that's deleted, but only the old versions of files that have changed. All the original files that were not changed are still there, represented in every backup (new and old) by hard links.
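The hard-link behaviour described above is easy to demonstrate directly; this small sketch (file names are invented for illustration) creates two directory entries for one chunk of data and shows the data surviving deletion of the first entry, exactly as an old Time Machine backup folder can be removed without losing unchanged files:

```python
import os
import tempfile

# Two hard links to the same data, as two backup folders would reference
# the same unchanged file.
d = tempfile.mkdtemp()
original = os.path.join(d, "backup1_file")   # entry in the "first backup"
with open(original, "w") as f:
    f.write("unchanged file contents")

link = os.path.join(d, "backup2_file")       # entry in the "later backup"
os.link(original, link)

# Both names point at the same inode; the link count is now 2.
assert os.stat(original).st_ino == os.stat(link).st_ino
assert os.stat(link).st_nlink == 2

# Deleting the "first backup" entry removes only that index entry;
# the data stays reachable through the second hard link.
os.remove(original)
with open(link) as f:
    print(f.read())  # prints "unchanged file contents"
```

An alias (soft link) would not survive this: removing the original would leave the alias pointing at nothing.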