Why back up archive logs? (and more...)

Hi all,
If I schedule a weekly backup every Saturday:
BACKUP DATABASE PLUS ARCHIVELOG;
1- Does the command back up all the archive logs generated during the week (and what are they for, given that I have the full backup), or only the archive logs generated while the backup is running?
2- If I use a daily incremental backup with a weekly merge strategy, is the whole week's worth of archive logs needed in case of recovery, or only one day's?
3- Does the incremental merge operation use only the incremental backup set and the previous full (image copy) backup to merge into the datafile copies, with no archive logs involved?
4- Is the following strategy meaningful with regard to the archive logs (incremental and merge backup every day)?
RUN {
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'INCR' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'INCR';
DELETE NOPROMPT OBSOLETE;
BACKUP ARCHIVELOG ALL DELETE INPUT;
}
Thanks,
Vince

BACKUP DATABASE PLUS ARCHIVELOG;
1- Does the command back up all the archive logs generated during the week (and what are they for, given that I have the full backup), or only the archive logs generated while the backup is running?
The above command will first force a log switch, then back up the archived logs currently on disk (by default all of them, which is why it covers the whole week and not just the backup window), then take the full database backup, then switch the log file again and back up the remaining archived logs generated while the backup was running.
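Roughly, the command expands to a sequence like this (a sketch of the default behaviour, not the literal internal implementation):
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG ALL;
BACKUP DATABASE;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;  # the logs generated while the database backup ran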
2- If I use a daily incremental backup with a weekly merge strategy, is the whole week's worth of archive logs needed in case of recovery, or only one day's?
RMAN always prefers an incremental backup over archive logs, so if a suitable incremental backup is available the archive logs are not required. BUT say all the incrementals are taken at 2 AM and you want to recover the database up to 4 PM: the next day's incremental doesn't exist yet, so RMAN will apply all incrementals up to the day of failure and then use archive logs for the remaining recovery. In short, you should decide the recovery window of your database, meaning how far back you want to be able to go for an incomplete recovery, and you MUST keep all the archive logs for that period of time.
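For example, an incomplete (point-in-time) recovery to 4 PM would look roughly like this (a sketch; the timestamp is illustrative):
RUN {
SET UNTIL TIME "TO_DATE('2012-03-13 16:00:00','YYYY-MM-DD HH24:MI:SS')";
RESTORE DATABASE;
RECOVER DATABASE;  # applies incrementals first, then archived redo
SQL 'ALTER DATABASE OPEN RESETLOGS';
}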
3- Does the incremental merge operation use only the incremental backup set and the previous full (image copy) backup to merge into the datafile copies, with no archive logs involved?
I hope answer number 2 helps here too: the merge applies the level 1 incremental directly to the existing datafile copies, so no archive logs are needed for the merge operation itself.
4- Is the following strategy meaningful with regard to the archive logs (incremental and merge backup every day)?
RUN {
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'INCR' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'INCR';
DELETE NOPROMPT OBSOLETE;
BACKUP ARCHIVELOG ALL DELETE INPUT;
}
If these are the only backups you are taking, your recovery window is effectively 1 day, because your datafile copies are a day older than your current database and you can't use them to restore to a point more than a day back. BUT you still want to back up the archive logs for the last day so that, in case of failure, you can restore up to a specific time. You don't need archive logs more than a day old; those are useless because there is no older backup to which you could apply them.
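If you go with this strategy, it helps to make that one-day window explicit so DELETE OBSOLETE keeps exactly what the window needs (a sketch; adjust the number of days to your own requirement):
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
REPORT OBSOLETE;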
Daljit Singh

Similar Messages

  • Delete old and backed-up archive logs

    Hi all,
    I am trying to modify our RMAN backup script to delete old, already backed-up archive logs from disk.
    We run daily incremental L0 or L1 backups, with 3 daily archive log backups. Currently we delete archive logs as they are backed up (backup archivelog all delete input). However, we are considering leaving x days' worth of archive logs on disk, just in case a quick restore is needed (e.g. tablespace media recovery).
    For example,
    We can delete archive logs older than x (where x = number of days, depending on the database) with:
    delete archivelog until time 'sysdate-15';
    We can also delete archive logs that have been backed up to disk by,
    delete archivelog all backed up 1 times to disk;
    Questions,
    1) Why does 'backed up 1 times to disk' require the 'all' in the statement?
    2) Is there any way to combine the 2 statements (backed up 1 times and older than x days)? Just being a bit extra cautious to ensure that we have everything backed up. In the real world it shouldn't be a problem, as we will be alerted to any backup failures, but I'm trying to cater for the very worst situation.
    Thanks for your help and insight.

    Hi,
    Scratch the questions. Found the answer in another thread. It seems that when I tried to combine the statements initially, I had a typo.
    It is now working as expected.
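    For reference, a combined form along these lines should do it (a sketch; verify the DELETE ARCHIVELOG clause order against your RMAN release):
    delete archivelog all completed before 'sysdate-15' backed up 1 times to device type disk;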
    Thanks and regards,

  • Why is the archive log size increasing during a MERGE?

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log volume is increasing.
    My question is: why is the archive log volume increasing along with the redo logs?
    I thought archive log files should only be generated after the commit (maybe that is wrong).
    Please suggest...
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log volume is increasing.
    My question is: why is the archive log volume increasing along with the redo logs?
    I thought archive log files should only be generated after the commit (maybe that is wrong). No, it is not correct that archive logs are generated only after the commit. Redo is written (and archived) continuously while the statement runs, not at commit time. A MERGE statement performs inserts (if the data is not already present) or updates (if it is), and obviously these operations will generate a lot of redo if the amount of data being processed is high.
    If you feel that this operation is causing excessive redo, then a root-cause analysis should be done.
    For that, use LogMiner (an excellent tool for a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with each redo change.
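    As a minimal sketch of that approach in SQL*Plus (the archived log file name is illustrative, and the online catalog is assumed as the LogMiner dictionary):
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_1_100.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- redo records per segment, busiest first
    SELECT seg_owner, seg_name, COUNT(*) AS redo_records
    FROM v$logmnr_contents
    GROUP BY seg_owner, seg_name
    ORDER BY redo_records DESC;
    EXECUTE DBMS_LOGMNR.END_LOGMNR;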
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If so, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but be aware of its implications).
    Hope this helps

  • Why won't my footage log and capture?

    Why won't my footage log and capture?
    I have my Canon XH A1 connected to my Mac OS X computer. I am using Final Cut Pro to download the footage onto my Mac. The capture settings are set to HDV capture, but it won't download properly.
    The computer downloads the first 20 seconds of footage then stops. The tape in the camera continues to roll to the end and then gives me the 'End of tape' window, but nothing has been downloading past the initial 20 seconds.
    Usually when HD footage is downloaded Final Cut creates many little clips. (Unlike normal DV which creates one big file). For some reason this isn't happening after the first 20 seconds.
    Then, even though I have already created a named folder for the footage to be saved into, the computer asks me to save it as an untitled project somewhere else.
    What is going on?
    My download settings for HDV log and capture are:
    Sequence Preset: Apple ProRes 422 1440x1080 50i 48 kHz
    Capture Preset: HDV
    Device Control Preset: HDV FireWire Basic
    Playback Output Video: Digital Cinema Desktop Preview-Main
    Playback Output Audio: Default
    Edit to Tape/PTV Output Video: HDV (1440x1080) 50i
    Edit to Tape/PTV Output Audio: Default
    Help!
    Message was edited by: Caroline2

    Do you have an external FireWire drive connected? Canon equipment is unreliable capturing via FireWire to a FireWire drive, or even when capturing to a non-FireWire drive when there are other devices present on the FireWire bus.

  • Multiplexing Online redo logs, archive logs, and control files.

    Currently I am only multiplexing my control files and online redo logs; my archive logs are only going to the FRA and then being backed up to tape.
    We have to replace the disks that hold the FRA data. HP says there is a chance we will have to rebuild the FRA.
    As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs and archive logs are multiplexed to another disk group, will the database remain open and online when ASM dismounts the FRA disk group due to an insufficient number of disks?
    If so, then I will just need to rebuild the ASM volumes and the FRA disk group and bring it to the mount state, correct?
    Thanks!

    You can put your online redo logs and archive logs anywhere you want by making use of the init parameters db_create_online_log_dest_n and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier because you will need to restore your control file to a non-FRA destination, then shut down your instance, edit the control_files parameter to reflect the change, and restart.
    I think you will be happier if you move everything off the FRA disk group before dismounting it, rather than expecting the db to automagically recover from the loss of files on the FRA.
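    As a sketch, assuming a second ASM disk group named +DATA2 (the name and sizes are illustrative):
    ALTER SYSTEM SET log_archive_dest_1='LOCATION=+DATA2' SCOPE=BOTH;
    ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA2') SIZE 512M;
    -- drop a FRA-resident group only once it is INACTIVE and archived
    ALTER DATABASE DROP LOGFILE GROUP 1;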

  • Backing up Archive logs

    Hi!
    What is the best approach for backing up the Archive Logs?
    Because we must do this regularly, a scheduled job should be considered.
    Thank you!

    How are you backing up the database and archive logs currently? Are the backups scheduled or run manually?
    You can use the Windows scheduler or cron on Unix (you did not provide the OS or database version) to schedule the backup, or you can schedule the backup of the archive logs from within the database using DBMS_JOB or DBMS_SCHEDULER.
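    For example, a minimal sketch on Unix with cron (the paths, the schedule and the NOT BACKED UP clause are assumptions to adapt):
    # crontab entry: back up archived logs every 6 hours
    0 */6 * * * rman target / cmdfile=/home/oracle/scripts/arch_backup.rman log=/home/oracle/logs/arch_backup.log
    # contents of /home/oracle/scripts/arch_backup.rman
    BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;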

  • Issue with backing up Archive logs

    Hi All,
    Please help me with the issues/confusions I am facing :
    1. Currently, the  "First active log file  = S0008351.LOG"  from "db2 get db cfg for SMR"
        In the log_dir, there should be logs >=S0008351.LOG
        But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG  etc...
        How can I clear all these 'not-really-wanted' logs from the log_dir ?
    2. There is some issue with the archive backup, and as a result the archive log backups are not running properly.
        Since this is a very low activity system, not many logs are generated.
        But the issue is:
        There are so many archive logs in the "log_archive" directory; I want to clean up the directory now.
        The latest online backup is @ 26.07.2011 04:01:04
        First Log File      : S0008344.LOG
        Last Log File       : S0008346.LOG
        Inside log_archive there are archive logs from  S0008121.LOG   to   S0008304.LOG
        I won't really require these logs, correct?
    Please clear up my confusion...

    Hi,
    >
    > 1. Currently, the  "First active log file  = S0008351.LOG"  from "db2 get db cfg for SMR"
    >     In the log_dir, there should be logs >=S0008351.LOG
    >     But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG  etc...
    >     How can I clear all these 'not-really-wanted' logs from the log_dir ?
    >
    You should not delete logs from log_dir, because those are the active (online) logs, and if you delete them there will be problems starting the database.
    > 2. There is some issue with the archive backup, and as a result the archive log backups are not running properly.
    >     Since this is a very low activity system, not many logs are generated.
    >     But the issue is:
    >     There are so many archive logs in the "log_archive" directory; I want to clean up the directory now.
    >     The latest online backup is @ 26.07.2011 04:01:04
    >     First Log File      : S0008344.LOG
    >     Last Log File       : S0008346.LOG
    >   
    If your archive logs have been backed up from the log_archive directory, then you can delete the old ones.
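    As a sketch (assuming DB2 9.5 or later, and adjusting the timestamp to your own retention point), you can let DB2 prune old history entries and physically delete the associated archived log files in one step:
    db2 update db cfg for SMR using AUTO_DEL_REC_OBJ ON
    db2 "prune history 20110726 and delete"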
    Thanks
    Sunny

  • Time Machine prepares backups of 10GB and more with very little activity on my computer

    I have an early 2008 iMac with a Samsung 750GB HD, and I have been using Time Machine for 4 years; it has been working pretty well.
    At the beginning of 2014 it started to show many errors, but at the end of the day there were 1 or 2 backups done. During this August I realized that backups were 10GB and more in size nearly every day, while the activity on the computer was very light: internet and mail. I turned off an antivirus that was running in the background, with no success. There is no virus activity; the computer is clean.
    What could be the reason for this abnormal activity?

    The only way to find out is to analyse what is actually causing the large backups.
    Have you installed the widget yet?
    See A1 here:
    http://pondini.org/TM/Troubleshooting.html
    Or you can get a software app that tells you what TM is doing, since it is hardly forthcoming.
    See A2.
    For why backups are overly large in general, see D4.

  • Archive Logs using more Space

    Maybe you can help me out.
    I have resized my redo logs, and when I checked my file system the log switches were less frequent, as expected. However, compared to the same period when the redo logs were smaller (1 hour earlier), the archive logs (the DB is in archive log mode) took up double the usual space on the file system.
    Now I'm sure I'm being stupid here.
    I had expected the file system usage to be the same, just with bigger but fewer archive logs. I am sure the throughput is the same. Is my theory correct, even though the facts are not matching up?

    Thanks for confirming my thoughts. I was very surprised that the file system usage had changed when the last change was the redo log size adjustment.
    It is one of those times when you were the last to touch it, so if something is different you get the blame.
    I suspect some process was run that generally does not start at this time of day; just an unfortunate coincidence.
    Thanks, I thought I was going mad.
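    One way to verify the throughput theory is to compare the archived volume per hour before and after the resize (a sketch; v$archived_log reports sizes in redo blocks, hence the multiplication by block_size):
    SELECT TRUNC(completion_time, 'HH24') AS hour,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
    FROM v$archived_log
    GROUP BY TRUNC(completion_time, 'HH24')
    ORDER BY 1;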

  • Archive logs and recovering database with no RMAN

    Hello,
    I inherited a database with no RMAN backup configured, but archived redo logs are enabled and are being generated all the time.
    As there is no RMAN backup, I always thought those archived redo logs were useless, and because of limited disk space I regularly delete all of them.
    Then I read that archived redo logs can be useful even if there is no RMAN. Archived redo logs can be used in a recovery situation like this:
    set all tablespaces in backup mode
    alter tablespace tsp1..tsp100 begin backup;
    copy all datafiles
    alter tablespace tsp1..tsp100 end backup;
    alter database create standby controlfile as 'c:\temp\sbytemp.ctl';
    copy all archivelogs from the BEGIN BACKUP time till now
    As I delete my archive logs, I don't have all of them from the beginning. My question is:
    How many archived redo log files would I need to have? Would I need all archive logs generated since the beginning, or would it be enough to have the last three or four?
    Thanks
    Edited by: user521219 on 21-may-2012 7:50

    How many archived redo log files would I need to have?
    Would I need all archive logs generated since the beginning?
    Would it be enough to have the last three or four? You need all archived redo logs from the moment you started taking the full backup.
    If you take a full backup every night at 1:00 AM, you will need all archived redo logs generated from 1:00 AM onward, up to the point you want to recover to.
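    For completeness, once the copied datafiles have been restored, the user-managed (non-RMAN) recovery itself runs roughly like this in SQL*Plus (a sketch; it assumes the archived logs from the BEGIN BACKUP time onward are available):
    STARTUP MOUNT
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- supply the requested archived logs, type CANCEL when done, then:
    ALTER DATABASE OPEN RESETLOGS;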

  • Difference between archive logs and flash recovery area archivelogs?

    Can someone explain to me the difference between the archive logs in the E:\oracle\product\11.1.0\db_1\RDBMS\ and the zipped archive logs in the E:\oracle\flash_recovery_area\sid\ARCHIVELOG\<date>?
    They appear to be written at the same time and to be exact copies of each other. Couldn't you use the flash recovery area archivelog as a backup if the archivelog under the ORACLE_HOME was corrupted?
    Thanks for taking this question!
    Kathie

    Obviously you have multiplexed archivelog destinations (check 'show parameter log_archive_dest' in SQL*Plus). By default one destination is defined as log_archive_dest_10='LOCATION=USE_DB_RECOVERY_FILE_DEST'; that's the flash recovery area. 'Someone' defined a second destination.
    The purpose of multiple destinations is exactly that: to be protected against the failure of one destination, so yes, either copy can be used.
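    A quick way to see where the copies are going, from SQL*Plus (a sketch; the column list is trimmed for readability):
    show parameter log_archive_dest
    SELECT dest_name, destination, status FROM v$archive_dest WHERE status = 'VALID';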
    Werner

  • Backing up/archiving projects and events in FCP X

    I have been using FCP Studio since 2003 and have a great workflow and back up procedure but with FCP X I have to find new ways to do things.
    Situation: My client will have me shoot many people asking the same questions. I love FCP X because metadata and keywords are much more powerful than logging was in FCP 7. So in FCP X I end up with one event and 12-14 projects. The client will often come back with changes many months later, so I need to be prepared to modify an existing project or make a new one from the footage in the event that was never used in a previous project.
    Back Up: So I duplicated the event to an external drive, because I need everything ready to go but need to free up space. Very easy.
    Now the projects need to be duplicated to the external drive, and this is where I am struggling. I need to back up each individual project because I cannot select multiple projects, which is so different from FCP 7, where you could just send the project (which contained multiple timelines) out as a ****.fcp file and put it with your original footage. Duplicating projects in FCP X works just great, but not only do I have to duplicate each project individually, they are also not in nice folders; they are in separate folders by events and projects.
    Not complaining, just reaching out to see if anyone has more knowledge than I do or has worked out a better solution.

    Awesome. Thanks dredcomm. That will save tons of time.
    I have already been doing that inside of FCP X but it should also be noted that FCP X does not let you duplicate a project folder. 
    Now I will just use the finder to copy the appropriate folder to an external drive.

  • BLOBS and archive logs and standby

    Hi,
    1- Is information about BLOBs recorded in the archive logs?
    2- Are there any issues for a Data Guard standby if there are BLOBs in the primary DB?
    Many thanks.

    Hi,
    To my understanding, BLOB changes are recorded in the archive logs, and there won't be any impact on the standby database.
    Regards
    MMu

  • Recover a DB which doesn't have a backup and is not in archive log mode

    Hi
    Today one of our DBs, Oracle 10gR2 on AIX, crashed. It is not in archive log mode and does not have any backup. It was maintained by the developers and they need it to be recovered. When trying to open it, we get a message saying that system01.dbf needs more recovery. Can anybody help me? Very urgent... Is there any way to recover the DB??
    Thanks
    Ram

    Hello Mr. Sybrand Bakkar,
    Thanks for your suggestion. Being a DBA, I know myself that without any backup I won't be able to do much. I am just trying to get some suggestions on whether there is any way to recover it, or whether some experts have come across this situation and found answers. When we are not able to think beyond our limit, we come to the forums and post questions. I know that this is not a place where I can be rude, hence please change your perception. When I say urgent, it means that unfortunately they have messed up their DB and came to the DB team for assistance to get their data back, which has to be given to their process partners, and as a support team we are trying our best to recover it. Anyhow, thanks for your reply and suggestion. Have a great day.
    Thanks
    Ram

  • The file structure of online redo logs, archived redo logs and standby redo logs

    I have read some Oracle documentation about file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or setting in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but are not strictly required on a physical standby, because a physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can it operate after a failover switches the standby to the primary role? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these. The primary uses them to archive its redo and ship it to the standby; the standby receives the archived redo and applies it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts: what is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need 3 type of redo logs on both databases. You answer my question.
    But I have another question. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that, at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
    Edited by: 853153 on Jun 22, 2011 9:42 AM
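    If you do decide to match the sizes, recreating a standby redo log group at 512M is a pair of statements roughly like this (the group number is illustrative, OMF is assumed to place the file, and a group should only be dropped while it is not in use):
    ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 SIZE 512M;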
