RMAN Backup Log File

I have 3 control files in the datafile directory, and they end up in a single backup piece for all 3 control files. How can I tell which backupset includes these 3 control files?
Here is the hot backup log:
Starting Control File and SPFILE Autobackup at 29-JUN-09
piece handle=/backup/db/backup/RMAN/c-1357907388-20090629-00.bck comment=NONE
Finished Control File and SPFILE Autobackup at 29-JUN-09
Starting backup at 29-JUN-09
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ch1: starting piece 1 at 29-JUN-09
channel ch1: finished piece 1 at 29-JUN-09
piece handle=/backup/db/backup/RMAN/backup_PROD_690781724_4213_1_3lkiovgs_1_1.bck comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:01
Finished backup at 29-JUN-09
Starting backup at 29-JUN-09
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ch1: starting piece 1 at 29-JUN-09
channel ch1: finished piece 1 at 29-JUN-09
piece handle=/backup/db/backup/RMAN/backup_PROD_690781725_4214_1_3mkiovgt_1_1.bck comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:02
Finished backup at 29-JUN-09
Starting Control File and SPFILE Autobackup at 29-JUN-09
piece handle=/backup/db/backup/RMAN/c-1357907388-20090629-01.bck comment=NONE
Finished Control File and SPFILE Autobackup at 29-JUN-09
FAN

Hi FAN,
According to your output, the following three pieces contain controlfile backups:
c-1357907388-20090629-00.bck
backup_PROD_690781724_4213_1_3lkiovgs_1_1.bck
c-1357907388-20090629-01.bck
A backupset does not contain three separate copies of the control file; on restore, RMAN uses the control_files parameter to write it back to all three locations.
Regards,
Tycho

Similar Messages

  • Create RMAN backup log file issue in Linux

    Hi Experts,
    I run a 10.2.0.4 database on Red Hat 5.1.
    I created a test shell script for a cron job, but the RMAN backup log file is not created and the email arrives blank.
    My code is as follows:
    #!/bin/bash
    EXPORT DTE='date +%m%d%C5y%H%M'
    export $ORACLE_HOME/bin
    export $ORACLE_SID=sale
    rman target='backuptest/backuptest@sale4' nocatalog log /urs/tmp/orarman/jimout.log << EOF
    RUN {
    show all;
    EXIT;
    EOF
    mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/jimout.log
    Please advise me on debugging this.
    Thanks for your help!
    JIm

    Thanks very much!
    I made the changes below:
    #!/bin/bash
    EXPORT DTE='date +%m%d%C5y%H%M'
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export ORACLE_SID=sale
    export PATH=$ORACLE_HOME/bin:$PATH
    rman target='backuptest/[email protected]' nocatalog log='/urs/tmp/orarman/log/jimout.log' << EOF
    RUN {
    show all;
    EXIT;
    EOF
    mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/log/jimout.log
    I am still not able to see a file under the /urs/tmp/orarman/log/ directory.
    The RMAN log='/urs/tmp/orarman/log/jimout.log' option does not work.
    What is wrong in my code?
    I am looking for your help!
    JIm
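    For what it's worth, a corrected version of the wrapper might look like the sketch below. This is a hedged sketch, keeping the poster's paths and redacted address: the original's %C5y date format looks garbled, so a plain two-digit year is assumed, and the RMAN call is guarded so the script is harmless on hosts where RMAN is not installed.

    ```shell
    #!/bin/bash
    # Sketch of a corrected cron wrapper (assumptions: the poster's paths and
    # a two-digit-year date format; adjust both for your environment).
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export ORACLE_SID=sale
    export PATH=$ORACLE_HOME/bin:$PATH

    # Use $(...) command substitution, not plain quotes, to capture the date.
    DTE=$(date +%m%d%y%H%M)
    LOGFILE=/urs/tmp/orarman/log/jimout.log

    # RMAN will not create the log directory; make sure it exists first.
    mkdir -p "$(dirname "$LOGFILE")" 2>/dev/null || true

    # Guarded so the sketch does nothing on hosts without RMAN installed.
    if command -v rman >/dev/null 2>&1; then
      # SHOW ALL needs no RUN block; the original heredoc never closed its RUN {.
      rman target=backuptest/backuptest@sale4 nocatalog log="$LOGFILE" <<EOF
    show all;
    exit;
    EOF
      mail -s "backup ${DTE}" [email protected] < "$LOGFILE"
    fi
    ```

    The two most likely culprits in the original are the unclosed RUN { block, which leaves RMAN waiting for more input, and the EXPORT DTE='date ...' line, which assigns a literal string rather than running date.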

  • RMAN causing "log file sync"

    Hi,
    Maybe someone can help me on this.
    We have a RAC database in production where (for some) applications need a response within 0.5 seconds. In general that works.
    Outside of production hours we make a weekly full backup and a daily incremental backup, so those do not bother us. However, as soon as we make an archivelog backup or a backup of the control file during production hours we have a problem, as the applications have to wait more than 0.5 seconds for a response, caused by the event "log file sync" with wait class "Commit".
    I already adjusted the RMAN script so that we have only 1 file per set and use only one channel, but that didn't help.
    Increasing the log buffer was also not a success.
    Increasing the large pool is in our case not an option.
    We have 8 redo log groups with 2 members each (250 MB each) and an average of 12 log switches per hour during the day, which is not very alarming. Even during the backup the I/O doesn't show very high activity. The increase in I/O at that moment is minor, but apparently enough to cause the "log file sync" waits.
    Oracle has no documentation that gives me more possible causes.
    The strange thing is that before the first of October we didn't have this problem, and no changes were made.
    Has anyone an idea where to look further or did anyone experience a thing like this and was able to solve it?
    Kind regards

    The only possible contention I can see is between the log writer and the archiver. 'Backup archivelog' in RMAN means implicitly 'ALTER SYSTEM ARCHIVE LOG CURRENT' (log switch and archiving the online log).
    You should alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Werner
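    One knob worth trying in this situation (an assumption on my part, not something Werner suggested) is RMAN's channel RATE parameter, which caps backup read throughput so the backup competes less with the log writer for disk bandwidth. A hedged sketch, guarded so it does nothing on hosts without RMAN:

    ```shell
    #!/bin/bash
    # Cap RMAN disk-channel throughput; 16M/s is an illustrative value only.
    RATE_LIMIT=16M

    # Guarded so the sketch is harmless on hosts without RMAN installed.
    if command -v rman >/dev/null 2>&1; then
    rman target / <<EOF
    # Persistently limit each disk channel's read rate.
    CONFIGURE CHANNEL DEVICE TYPE DISK RATE ${RATE_LIMIT};
    EOF
    fi
    ```

    The trade-off is a longer backup window, which may be acceptable for the archivelog and controlfile backups taken during production hours.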

  • Rman backup control file

    Hi, I am working on Oracle 10g (10.2.0.4.0) on Solaris 10 with an ASM and RAC setup (2-node RAC).
    I have only one control file: +DATA_DG1/ftssdb/controlfile/current.270.664476369
    I am backing up this control file with RMAN:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/rman_node1/%F';
    c-31850833-20100909-00 is a backed-up piece of the control file.
    Now a system admin suddenly deleted that control file. How can I recover my database using the RMAN backup?

    You can find entries like the following (from a 'backup controlfile to trace' script):
    CREATE CONTROLFILE REUSE DATABASE "DBNAME" NORESETLOGS NOARCHIVELOG -- depends on your DB log mode
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 8
    MAXLOGHISTORY 800
    LOGFILE
    GROUP 1 '/u01/oracle/7.1.6/dbs/log1p716.dbf' SIZE 500K,
    GROUP 2 '/u01/oracle/7.1.6/dbs/log2p716.dbf' SIZE 500K,
    GROUP 3 '/u01/oracle/7.1.6/dbs/log3p716.dbf' SIZE 500K
    DATAFILE
    '/u01/oracle/7.1.6/dbs/systp716.dbf' SIZE 40M,
    '/u01/oracle/7.1.6/dbs/tempp716.dbf' SIZE 550K,
    '/u01/oracle/7.1.6/dbs/toolp716.dbf' SIZE 15M
    # Recovery is required if any of the datafiles are restored backups,
    # or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    # Database can now be opened normally.
    ALTER DATABASE OPEN;
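    Since the poster already has CONTROLFILE AUTOBACKUP configured, a more direct route than CREATE CONTROLFILE is to restore the autobackup piece with RMAN. A hedged sketch: the DBID 31850833 is read from the piece name c-31850833-20100909-00 in the question, and the RMAN call is guarded so the sketch does nothing where RMAN is absent.

    ```shell
    #!/bin/bash
    # Restore the lost controlfile from the RMAN autobackup piece.
    DBID=31850833   # taken from the autobackup piece name c-31850833-20100909-00

    # Guarded so the sketch is harmless on hosts without RMAN installed.
    if command -v rman >/dev/null 2>&1; then
    rman target / <<EOF
    SET DBID ${DBID};
    STARTUP NOMOUNT;
    RUN {
      # The non-default autobackup location must be repeated at restore time.
      SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/rman_node1/%F';
      RESTORE CONTROLFILE FROM AUTOBACKUP;
    }
    ALTER DATABASE MOUNT;
    RECOVER DATABASE;
    ALTER DATABASE OPEN RESETLOGS;
    EOF
    fi
    ```

    Opening with RESETLOGS is required after recovering through a restored (backup) controlfile.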

  • ASM RMAN backup to File System

    Hi all,
    I have an RMAN backup (datafile and controlfile) which was taken on an ASM instance (not RAC), Oracle 11.2.0.2, on a Linux server. Now I want to restore the backup into a new database on Windows/Linux using regular file system storage (single-instance RDBMS) instead of ASM.
    Is this possible?
    Can I restore an ASM RMAN backup onto file system storage on a new server?
    Kindly clarify my question.
    Thanks in Advance..
    Nonuday

    Nonuday wrote:
    Hi Levi,
    Thanks for your invaluable script and blog.
    Can you clarify this for me:
    I have an RMAN backup taken from ASM; it is a database and controlfile backup which contains datafiles and controlfiles.
    Now I need to restore this on my system, and here I don't use ASM or archivelog mode; I use a single-instance database in noarchivelog mode.
    I have restored the control file from the RMAN controlfile backup.
    Before restoring the control file I checked the original pfile of the backup database, which had parameters like
    'db_create_file_dest',
    'db_create_online_log_dest',
    'db_recovery_file_dest_size',
    'db_recovery_dest',
    'log_archive_dest'.
    Since I am creating the DB in noarchivelog mode, I didn't use any of the above parameters when creating the database.
    Now my question is:
    If I restore the database, the datafiles will get restored, and after renaming all the logfiles the database can be opened.
    I want to know whether this method is correct or wrong, and whether the database will work as it did previously. Or do I need to create the db_recovery_file_dest and other parameters for this database as well?
    About Parameters:
    All these parameters should reflect your current environment; any reference to the old environment must be modified.
    About Filesystem used:
    It does not matter what filesystem you are using: the files (datafile/redolog/controlfile/archivelog/backup piece) are created in a binary format which depends on the platform only. The same binary file (e.g. a datafile) has the same format and content on a raw device, ASM, ext3, ext2, and so on. To the database it is only a location where files are stored; the files themselves are the same. ASM has a different architecture from a regular filesystem and needs to be managed in a different manner (i.e. using RMAN).
    About Database:
    Since your database files are the same even on a different filesystem, what you need to do is rename your datafiles/redo files in the controlfile during the restore; the redo files will be recreated.
    So it does not matter whether your database is in noarchivelog or archivelog mode: the way you would do a restore on ASM is the same way you restore on a regular filesystem (it is only a matter of renaming the database files in the controlfile during the restore).
    The blog post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving files from one filesystem to another, but you can modify the script for restore purposes:
    ## set newname tells RMAN where the file will be restored and keeps the file location in a memory buffer
    RMAN> set newname for datafile 1 to <location>;
    ## switch takes the list of files from the memory buffer and renames the already-restored files in the controlfile
    RMAN> switch datafile/tempfile all;
    With the database mounted, use the script below.
    I just commented three lines that are unnecessary in your case.
    SET serveroutput ON;
    DECLARE
      vcount  NUMBER:=0;
      vfname VARCHAR2(1024);
      CURSOR df
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$datafile;
      CURSOR tp
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$tempfile;
    BEGIN
    --  dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
      --  dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format  "'||dfrec.name ||vcount||'.dbf";');  ### commented
      END LOOP;
      dbms_output.put_line('run');
      dbms_output.put_line('{');
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
        dbms_output.put_line('set newname for datafile ' || dfrec.file# ||'  to  '''||dfrec.name ||vcount||'.dbf'' ;');
      END LOOP;
      FOR tprec IN tp
      LOOP
        IF tprec.name  !=  vfname THEN
          vcount      :=1;
          vfname     := tprec.name;
        ELSE
          vcount := vcount+1;
          vfname:= tprec.name;
        END IF;
        dbms_output.put_line('set newname for tempfile ' || tprec.file# ||'  to  '''||tprec.name ||vcount||'.dbf'' ;');
      END LOOP;
      dbms_output.put_line('restore database;');
      dbms_output.put_line('switch tempfile all;');
      dbms_output.put_line('switch datafile all;');
      dbms_output.put_line('recover database;');
      dbms_output.put_line('}');
    --  dbms_output.put_line('alter database open;');  -- commented because you need to rename your redologs in the controlfile before opening the database
      dbms_output.put_line('exit');
    END;
    /
    After the restore you must rename your redo logs in the controlfile from the old location to the new location, e.g.:
    ##  use this query to get current location of redolog
    SQL>  select group#,member from v$logfile order by 1;
    ## and change from <old_location> to <new_location>
    SQL> ALTER DATABASE
      RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
               TO '/u01/app/oracle/oradata/logs/log3a.rdo';
    When you have changed all the redo logs in the controlfile, issue the command below:
    SQL> alter database open resetlogs;
    PS: Always track the database in real time using its alert log file.
    HTH,
    Levi Pereira

  • How do I read the Windows 7 backup log file (.etl).

    I tried Windows 7 backup yesterday and it took over 5 hours to complete. It used about 135GB of my backup drive, which is okay, but when I looked at the backup drive I found a couple of directories that backup created. One had over 2000 .zip files in it. Is this how Custom backup works?
    I also had another folder with the Image backup data.
    I decided I didn't like it, so I deleted the backups, but now I would like to look at the log file. How do I do this? When I navigate to C:\Windows\Logs\WindowsBackup I see a WindowsBackup.2.etl file. When I double-click on it I get a popup saying Windows 7 does not know what to do with this file. There is no way to look at the log from the Windows Backup window that I can see either.
    A Google search did not turn up anything useful.
    Rich

    Hi Ztruker,
    Windows Backup creates .zip files to store file backups and a VHD to store image backups. Is there a reason this format bothers you? To browse the contents of the file backups, you can launch the restore-files dialog and browse/search. To navigate the image backup, you can mount the VHD through Disk Management (if you still have a backup, of course). Unfortunately, backup does not create a user-readable log; the .etl files are meant for product tracing in case of failures and are for Microsoft use. Hope this helps.
    Thanks,
    Sneha
    [MSFT]

  • RMAN standby log files

    Since setting up RMAN control file autobackup on a Data Guard setup, I get the following message in the alert log:
    Starting control autobackup
    Sat May 07 01:04:27 2005
    Control autobackup written to DISK device
         handle 'CF_C-00'
    Clearing standby activation ID 2115378951 (0x7e161f07)
    The primary database controlfile was created using the
    'MAXLOGFILES 9' clause.
    There is space for up to 6 standby redo logfiles
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 104857600;
    Can anyone tell me a bit more about what this message is saying and explain the pros/cons of adding these standby log files to the standby database?

    This message is not related to RMAN.
    Data Guard has a feature of adding standby logfiles for maximum protection.
    In an 8i standby database this feature is not available.
    So this is a normal message about adding standby logfiles on Data Guard.
    Thanks and Regards
    Kuljeet Pal Singh

  • AlwaysOn Availability groups and unable to backup LOG files

    Hi,
    I have an issue where we are using AlwaysOn availability groups; my transaction log backup jobs are not working for any databases that are part of the groups, while standalone databases back up fine. I've tried using the built-in maintenance plans as well as the maintenance scripts from Ola Hallengren, which are my preference.
    All the jobs report success even though nothing is being written to disk; the jobs complete within a few seconds.
    If I back up the log files individually myself they work fine. After doing a bit of Googling this appears to have been a known issue that was fixed in CU7; unfortunately I'm running CU9 and still have the issue.
    I've tried changing the backup preference to any, as well as primary or secondary only to no avail.
    I'm using SQL Server SP1 CU9, 2 nodes in the group.  Anyone have any suggestions?
    Marcus

    Hi,
    I understand that you have tried every option in Backup Preferences. I still suggest you enable the "For availability databases, ignore replica priority for backup and backup on primary settings" option when you define the maintenance plan and check how it works.
    This setting is off by default, and the maintenance plan will detect the availability group's AUTOMATED_BACKUP_PREFERENCE setting when deciding whether to back up a database or log defined in the group.
    Given that an availability group's default AUTOMATED_BACKUP_PREFERENCE setting is SECONDARY, a maintenance plan defined and running on a SQL Server instance hosting the primary replica will NOT back up databases defined in the availability group.
    Reference:
    http://blogs.msdn.com/b/alwaysonpro/archive/2014/01/02/maintenance-plan-does-not-backup-database-or-log-of-database-that-belongs-to-availability-group.aspx
    Hope it helps.
    Tracy Cai
    TechNet Community Support
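    A quick way to see why a job skips a database is to evaluate the function the backup tooling consults when honoring the backup preference, sys.fn_hadr_backup_is_preferred_replica, on each replica. A hedged sketch: the database name MyAgDb is a placeholder, and the sqlcmd invocation is guarded so it does nothing where the client tools are absent.

    ```shell
    #!/bin/bash
    # Returns 1 on the replica where backups should run per the AG's
    # AUTOMATED_BACKUP_PREFERENCE, 0 elsewhere. MyAgDb is a placeholder name.
    AG_DB=MyAgDb

    # Guarded so the sketch is harmless on hosts without sqlcmd installed.
    if command -v sqlcmd >/dev/null 2>&1; then
      sqlcmd -E -Q "SELECT sys.fn_hadr_backup_is_preferred_replica(N'${AG_DB}') AS run_backup_here;"
    fi
    ```

    If this returns 0 on the node where the job runs, the job "succeeding" in seconds without writing anything is the expected behaviour, not a failure.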

  • Restore backup log files back<SID>.log and be*****.ant

    Dear all,
    I have to restore an SAP Oracle online backup from a source server to a target server. We took the backup of the source server on tape, and now we have to restore that tape backup on the target server. We don't have back<SID>.log and be*****.ant with us.
    How can we first restore back<SID>.log and be****.ant from tape so that we can start the restore of the online backup on the target server? Or can we restore the backup to another location without back<SID>.log and be****.ant? (I tried this, but it is not working; it asks for the back<SID>.log file.)
    Thanks
    Ward

    Hi Ward,
    As long as you have complete Oracle datafiles, a restore without logfiles can be done.
    1. As a precaution, copy all of /oracle/<SID> to a safe place during the offline state (SAP down, Oracle down).
    2. Replace each datafile from tape into the current datafile location, e.g. /tape/sr3.data1 -> /oracle/<SID>/sapdata1/sr3_1/sr3.data1
    3. Make sure the archive logs are complete from the point where you started the online backup.
    4. sqlplus /nolog
    5. startup mount
    6. You will likely see errors about the controlfile. Replace all Oracle controlfiles from the tape backup, then repeat the startup.
    7. recover database until cancel using backup controlfile
    8. Choose the archive log where you want the point of restore.
    Good luck
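    Steps 4 through 8 above can be sketched as a single sqlplus session. This is a hedged sketch: ORACLE_SID=SID is a placeholder for your actual SID, and the sqlplus call is guarded so the script does nothing on hosts without Oracle installed.

    ```shell
    #!/bin/bash
    # Steps 4-8 as one session. ORACLE_SID=SID is a placeholder; substitute yours.
    export ORACLE_SID=SID

    # Guarded so the sketch is harmless on hosts without Oracle installed.
    if command -v sqlplus >/dev/null 2>&1; then
    sqlplus /nolog <<EOF
    CONNECT / AS SYSDBA
    STARTUP MOUNT
    -- Supply archive logs when prompted; type CANCEL at the chosen stop point.
    RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
    -- Recovery through a backup controlfile must be opened with RESETLOGS.
    ALTER DATABASE OPEN RESETLOGS;
    EXIT
    EOF
    fi
    ```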

  • Email RMAN backup logs in OEM 12c

    I am scheduling my RMAN daily backups via OEM 12c. My goal is that the email notification includes the full log, rather than just the status of results. I was able to do this in 11g fairly easily. I have created a ticket with Oracle, to which they have responded that this may not be possible with 12c. I find that hard to believe. I am probably missing something silly. Any help is greatly appreciated.

    Here is the response to your question. If it's useful, please mark it so.
    https://oracletechnologistblog.wordpress.com/2012/08/27/tip-of-the-week-resolving-12c-cloud-control-database-backup-notification-issues/
    In 12c CC, after scheduling a database backup job and configuring email notification, the system doesn't successfully email the status of the backup jobs. Email notification within 12c works otherwise; interestingly, the notification fails only for database backup jobs.
    Steps to diagnose the issue:
    1.Please ensure your database target ‘yourdb’ in issue has been listed in the ‘Job Events For Targets’ by navigating to EM 12c -> Setup -> Incidents -> Job Events
    2.If the above doesn’t help, then please do:
    2.1 Log on to the repository DB of EM 12c as the SYSMAN user, run the following queries, then upload the output.html file for checking:
    SQL> set markup html on spool on
    SQL> spool output.html
    SQL> select * from mgmt_targets where target_name='PRD1';
    SQL> select * from mgmt_notification_log;
    SQL> spool off
    2.2 Set the EM 12c OMS to DEBUG level:
    cd <OMS_HOME>/bin
    emctl set property -name log4j.rootCategory -value 'DEBUG, emlogAppender, emtrcAppender' -module logging
    2.3 Then reproduce this issue.
    If you find below errors in <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.trc, then perform the below mentioned Resolution to resolve the issue.
    2012-08-23 17:05:51,519 [DeliveryThread-EMAIL6] WARN notification.pbs logp.251 – Delivery.run: java.util.MissingResourceException: Can’t find bundle for base name oracle.sysman.db.rsc.rec.BackupJobMsg, locale en_US
    java.util.MissingResourceException: Can’t find bundle for base name oracle.sysman.db.rsc.rec.BackupJobMsg, locale en_US
    Resolution:
    The uploaded emoms_pbs.trc file shows exactly the same error messages described in bug 13334194.
    Apply the EM 12c BP1 patch (per Doc ID 1430518.1) to resolve this issue, as it includes the fix for bug 13334194.
    Reference:
    Mandatory Enterprise Manager Cloud Control 12c Release 12.1.0.1 Bundle Patch 1 (BP1) for all available platforms (Doc ID 1430518.1)
    Document 1395505.1 – Announcing Enterprise Manager Cloud Control 12c Release 12.1.0.1 Bundle Patch 1(BP1) and 12.1.0.2 Plug-ins.
    EM 12c How to Configure Notifications for Job Executions? (Doc ID 1386816.1)

  • Email notification for rman backup log results

    Hi,
    Good Day!
    Is it possible to send an email notification about the RMAN database backup results (successful or failed) without using local O/S utilities/services like cron jobs and/or sendmail?
    Please note: instead of the local O/S sendmail service, we would like to use the organizational email server's address and an email account that is already used for other correspondence.
    Plateform details
    +++++++++++++
    Database: 10.2.0.3
    O/S: HP-UX 11.31
    An urgent response will be highly obliged.
    Thanks
    Regards,
    X

    You can use dbms_scheduler to check the configured job and its status, and then send an email using the utl_mail package. This requires your SMTP server address.
    Please refrain from using the word urgent in your subsequent posts. All the threads and members of this and any other forum are alike, and so are their issues/questions. For an urgent request, please raise a Sev 1 SR with Support.
    HTH
    Aman....
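    Aman's suggestion can be sketched as below. This is a hedged sketch, not a drop-in solution: the relay address mail.example.com and both mail addresses are placeholders, UTL_MAIL must first be installed (utlmail.sql/prvtmail.plb as SYS), smtp_out_server must point at the organizational relay, and the sqlplus call is guarded so the script does nothing where Oracle is absent.

    ```shell
    #!/bin/bash
    # Placeholder for the organizational SMTP relay; substitute your own.
    SMTP_SERVER=mail.example.com:25

    # Guarded so the sketch is harmless on hosts without Oracle installed.
    if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -s / as sysdba <<EOF
    -- Point the database at the organizational SMTP relay.
    ALTER SYSTEM SET smtp_out_server='${SMTP_SERVER}' SCOPE=BOTH;
    -- Send a notification via UTL_MAIL (addresses are placeholders).
    BEGIN
      UTL_MAIL.SEND(sender     => 'backup@example.com',
                    recipients => 'dba@example.com',
                    subject    => 'RMAN backup result',
                    message    => 'Backup completed; see the RMAN log for detail.');
    END;
    /
    EOF
    fi
    ```

    A dbms_scheduler job could query the backup job's status first and vary the subject/message accordingly.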

  • Rman backup failure, and is generating a large number of files.

    I would appreciate some pointers on this if possible, as I'm a bit of an rman novice.
    Our RMAN backup logs indicated a failure, and in the directory where it puts its files a large number of files appeared for the 18th, the date of the failure. Previous days' backups generated 5 files of moderate size. When it failed it generated between 30 and 40 GB of files (it looks like one for each database file).
    The full backup runs early Monday morning, and the rest are incrementals.
    I have placed the rman log, the script and a the full directory file listing here : http://www.tinshed.plus.com/rman/
    Thanks in advance - George
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055071_s244_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055096_s245_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734573008_s281_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055045_s243_s1
    -rw-r----- 1 oracle dba 524296192 Jan 18 00:03 database_f734055121_s246_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055020_s242_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054454_s233_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054519_s234_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054595_s235_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054660_s236_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054725_s237_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054790_s238_s1
    -rw-r----- 1 oracle dba 209723392 Jan 18 00:02 database_f734055136_s247_s1
    -rw-r----- 1 oracle dba 73408512 Jan 18 00:02 database_f734055143_s248_s1
    -rw-r----- 1 oracle dba 67117056 Jan 18 00:02 database_f734055146_s249_s1
    -rw-r----- 1 oracle dba 4194312192 Jan 18 00:02 database_f734054855_s239_s1
    -rw-r----- 1 oracle dba 2147491840 Jan 18 00:02 database_f734054975_s241_s1
    -rw-r----- 1 oracle dba 3221233664 Jan 18 00:02 database_f734054920_s240_s1
    drwxr-xr-x 2 oracle dba 4096 Jan 18 00:00 logs
    -rw-r----- 1 oracle dba 18710528 Jan 17 00:15 controlfile_c-1911789030-20110117-00
    -rw-r----- 1 oracle dba 1343488 Jan 17 00:15 database_f740621746_s624_s1
    -rw-r----- 1 oracle dba 2958848 Jan 17 00:15 database_f740621745_s623_s1
    -rw-r----- 1 oracle dba 6415990784 Jan 17 00:15 database_f740620829_s622_s1
    -rw-r----- 1 oracle dba 172391424 Jan 17 00:00 database_f740620814_s621_s1

    george3 wrote:
    Ok, perhaps it's my understanding of RMAN that is at fault. From the logs:
    Starting recover at 18-JAN-11
    channel m1: starting incremental datafile backup set restore
    channel m1: specifying datafile copies to recover
    recovering datafile copy file number=00001
    name=/exlibris1/rmanbackup/database_f734055020_s242_s1
    recovering datafile copy file number=00002
    name=/exlibris1/rmanbackup/database_f734055045_s243_s1
    it seems to make backup copies of the datafiles every night, so is the creation of these large files normal?
    The results above indicate that you have a full (incremental level 0) backup (copies of all datafiles) and that an update/recover step (applying an incremental level 1 backup) is happening. So the incremental backup /exlibris1/rmanbackup/database_f734055045_s243_s1 was applied to the full (level 0) copy, and the size should be normal.
    Why is it making copies of the datafiles even on days of incrementals?
    Because with this strategy the level 1 incremental is merged into the existing datafile copies, so every day one incremental backup is applied to them.
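    The behaviour described above matches Oracle's incrementally updated backup strategy (image copies plus a nightly level 1 merge). The canonical command pair looks like the sketch below; the tag name incr_merge and the guarded invocation are assumptions, not taken from the poster's script.

    ```shell
    #!/bin/bash
    # Canonical incrementally-updated-backup pair; the tag name is illustrative.
    TAG=incr_merge

    # Guarded so the sketch is harmless on hosts without RMAN installed.
    if command -v rman >/dev/null 2>&1; then
    rman target / <<EOF
    RUN {
      # Roll yesterday's level 1 changes into the datafile copies...
      RECOVER COPY OF DATABASE WITH TAG '${TAG}';
      # ...then take tonight's level 1 (or an initial level 0 copy, first night).
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG '${TAG}' DATABASE;
    }
    EOF
    fi
    ```

    With this strategy the one-per-datafile image copies are the recovery baseline, so their presence is expected rather than a sign of failure.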

  • Can i restore with out backup log ?

    Hello,
    We have an old 4.6c system which we are not using currently but keep for old information; it has data up to 2005. Recently this system crashed and multiple HDDs failed, so we need to restore the system from backup, but the tape holding the backup log files of the folder /oracle/PRD/sapbackup is corrupted, so we don't have these files. How can we restore in this case?
    The full backup on the tape is fine; we can read the header info.
    The Oracle version is 8.1.7.
    Please suggest.
    Thanks In Advance.

    Hi,
    If you have lost the backup logs then I don't think you can restore the database with ordinary methods. You have to perform disaster recovery if possible.
    Check this [link|http://help.sap.com/saphelp_nw70/helpdata/en/65/cade3bd0c8545ee10000000a114084/content.htm]
    Thanks
    Sunny

  • System Log Files - which can be removed

    I have two system log folders that are about 10GB in size each. I would like to recover some of this space, but am unsure which files I can safely delete.
    DiskName>Users>UserID>Library>Application Support>MobileSyn>Backup
    This folder has nine folders of varying size from a few MB to 3.1GB. Four are over 700MB
    Which can I safely delete, and what am I "losing" by deleting the older ones?
    DiskName>private>var>log
    contains a number of files consistently named "systemlog.#" with all but one of them named "systemlog.#.bz2"
    The .bz2 files were created by Archive Utility.
    the file without the .bz2 extension does not show a creator application in the SysInfo box.
    Can any of these files be deleted?
    The current one is over 3Gb in size, and one of the .bz2 files is over 2Gb
    Is there a Utility that will automatically manage these backup/log files to remove them after a defined period of time?
    Thanks in advance for any input
    PeterP
    Sydney, Australia

    Well, I solved the MobileSync files question.
    And yes ... I apologise ... I misspelled the directory name. It should have been:
    DiskName>Users>UserID>Library>Application Support>MobileSync>Backup
    These files are created by iTunes each time you backup your iPhone.
    Under iTunes>Preferences>Devices, there is a list of backups of the iPhone being maintained by iTunes (Device Backups).
    I had had a number of changes of serial number of my iPhone as I moved my earlier generation iPhone around my family, and as I had iPhones replaced after repair/warranty. Each time you do that, iTunes opens a new folder in the Backups directory. It also seems to close the current folder and open a new folder from time to time, as the past three folders are all with the same iPhone serial number.
    Whatever, by deleting the old unwanted backups via iTunes Preferences>Devices>Device Backups, iTunes also politely deleted the folders in the ...
    DiskName>Users>UserID>Library>Application Support>MobileSync>Backup
    ... folder, thus solving this problem.
    Thank you to Thomas A Reed, and to Francine Schwieder for your kind assistance in pointing me in the right direction on this matter.
    Re the systemlog file that is obviously "old" but doesn't have a .bz2 extension: as this file is only 5 days old, I have decided not to mess with it and to let Mac OS X clean it up over time. I suspect either the weekly or monthly Unix script run by the Macaroni utility (which I installed years ago) will take care of this.
    I took your advice Thomas and opened that log to see what was filling it up.
    I got lots of messages like this:
    21/02/11 14:18:54 com.vvi.peervisualserver[57] SCS Client Error: Error connecting to SCS type server process for channels 1 through 1 inclusive.
    21/02/11 14:19:04 com.vvi.peervisualserver[57] 2011-02-21 14:19:04.195 PeerVisualServer[27813:903] The application with bundle ID com.vvi.PeerVisualServer is running setugid(), which is not allowed.
    21/02/11 14:19:04 com.vvi.peervisualserver[57] SCS Client Error: Error connecting to SCS type server process for channels 0 through 0 inclusive.
    This runs continuously through the log.
    and like this:
    21/02/11 14:19:00 com.apple.launchd.peruser.502[172] (at.obdev.LittleSnitchUIAgent) Throttling respawn: Will start in 10 seconds
    21/02/11 14:19:02 com.apple.launchd[1] (at.obdev.littlesnitchd[27811]) Exited with exit code: 1
    I am running Little Snitch so this is perhaps not surprising.
    Thanks for your inputs on this. I will post in three weeks' time if Macaroni has deleted the systemlog file.
    Sincerely
    PeterP
    Sydney, Australia

  • Log file sync  during RMAN archive backup

    Hi,
    I have a small question. I hope someone can answer it.
    Our database (cluster) needs to respond within 0.5 seconds. Most of the time it does, except when the RMAN backup is running.
    During the week we run a full backup once, an incremental backup every weekday, a controlfile backup every hour, and an archivelog backup every 15 minutes.
    During a backup, response time can be much longer than this 0.5 seconds.
    Below is a typical example of the response time:
    EVENT: log file sync
    WAIT_CLASS: Commit
    TIME_WAITED: 10,774
    It obviously takes very long to get a commit; this value is in seconds, so as you can see it is long. It is clearly related to the RMAN backup, since this kind of response time shows up when the backup is running.
    I would like to ask why response times are so high even if I only back up the archivelog files. We didn't have this problem before, but it suddenly appeared two weeks ago and I can't find the cause.
    - We use a 11.2G RAC database on ASM. Redo logs and database files are on the same disks.
    - Autobackup of controlfile is off.
    - Dataguard: LogXptMode = 'arch'
    Greetings,
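    Since the slow commits coincide with the archivelog backup, one thing worth ruling out is whether the backup channel is saturating the disks that also hold the redo logs. Purely as an illustration — the channel name and the 20M rate are assumptions, not values from this system — RMAN can throttle a channel's read rate:

    ```sql
    -- Illustrative sketch: cap the channel's read rate so the archivelog
    -- backup competes less with LGWR for the shared disks. The 20M figure
    -- is an assumption to tune against your own I/O capacity, not a
    -- recommendation.
    RUN {
      ALLOCATE CHANNEL ch1 DEVICE TYPE DISK RATE 20M;
      BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
      RELEASE CHANNEL ch1;
    }
    ```

    The trade-off is a longer backup window in exchange for steadier foreground response times.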

    Hi,
    Thank you. I am new here, so I was wondering how to put things into the right category. It is obvious I am in the wrong one, so I thank the people who are still responding.
    - Actually, the example I gave is one of many hundreds a day. The response time during the archivelog backup is mostly between 2 and 11 seconds, and when we back up the controlfile along with it, response times like these are guaranteed.
    - Autobackup of the controlfile is turned off because we already back up the controlfile every hour. Since we back up the archivelogs every 15 minutes, there is no need to also back up the controlfile every 15 minutes, especially when that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archivelogs, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off because it is severely hurting performance at the moment.
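    For reference, the scheme described above — autobackup off plus explicit scheduled backups — can be sketched in RMAN like this (the tag name is illustrative, not from the original post):

    ```sql
    -- Sketch of the scheme described above; the tag is an illustrative name.
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;

    -- scheduled hourly:
    BACKUP CURRENT CONTROLFILE TAG 'cf_hourly';

    -- scheduled every 15 minutes:
    BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
    ```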
    As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When it doesn't, an entry is written to a table used by that application, so I can compare the time of a failure with whatever was happening at that moment. The times of the archivelog backup and of the failures match in 95% of the cases, and they show that log file sync is part of this performance issue. I built a script that I use to determine, from the application side, what is causing the problem:
    SELECT ASH.INST_ID     INST,
           ASH.EVENT       EVENT,
           ASH.P2TEXT,
           ASH.WAIT_CLASS,
           DE.OWNER        OWNER,
           DE.OBJECT_NAME  OBJECT_NAME,
           DE.OBJECT_TYPE  OBJECT_TYPE,
           ASH.TIJD,
           ASH.TIME_WAITED TIME_WAITED
      FROM (SELECT INST_ID,
                   EVENT,
                   CURRENT_OBJ#,
                   ROUND(TIME_WAITED / 1000000, 3) TIME_WAITED,
                   TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
                   WAIT_CLASS,
                   P2TEXT
              FROM gv$active_session_history
             WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
           (SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
     WHERE DE.OBJECT_ID = ASH.CURRENT_OBJ#
       AND ASH.TIME_WAITED > 2
     ORDER BY 8, 6
    - Our logfiles are 250M and we have 8 groups of 2 members.
    - The large pool is not set, since we use memory_max_target and memory_target. I know Oracle may not allocate memory well with these parameters, so that is truly something I should look into.
    - I looked at the size of the log buffer. It is 28M, which in my opinion is very large, so maybe I should make it even smaller. It is very possible the log buffer is causing this problem. Thank you for the tip.
    - I will also definitely look into the I/O. Even though we work with ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So, you are right, I have to investigate.
    Thank you all very much for still responding, even though I put this in the totally wrong category.
    Greetings,
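    Before resizing the log buffer or moving files, two quick sanity checks against the standard v$ views can confirm where the pressure is — a sketch, with nothing application-specific assumed:

    ```sql
    -- Redo buffer pressure: non-trivial values here suggest the log
    -- buffer or LGWR throughput, not just the backup, is a factor.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('redo buffer allocation retries', 'redo log space requests');

    -- Cumulative wait picture: compare 'log file sync' (foreground
    -- commits) against 'log file parallel write' (LGWR's own I/O).
    SELECT event, total_waits, time_waited_micro / 1e6 AS seconds_waited
      FROM v$system_event
     WHERE event IN ('log file sync', 'log file parallel write');
    ```

    If 'log file parallel write' tracks 'log file sync' closely, the bottleneck is redo I/O on the shared disks rather than the log buffer size.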

Maybe you are looking for

  • IPod 5.5G Video, 30GB, No Sound Audio, Play/Pause Won't Shut Down, etc.

    Even though I have marked this topic as a question, it is more of a heads up for those having problems with Sound and Audio on the 5th Gen (Video) iPods. I was having audio problems like many others are describing and took it upon myself to look into

  • SOAP adapter and serializing

    Hello, all. My scenario is R/3(IDOC) -> XI -> IIS(SOAP message). I need idocs and soap messages to be processed in order on both inbound and outbound sides. I set up inbound qRFC queue for serializing IDocs using the IDoc Adapter. But when SOAP messa

  • Vertical display of fields using SQLPlus

    Hi all, I have a table Personnel (example) ID First_Name Last_Name, Age 1 Billy, Morgan, 56 2 Mary, Lyons, 35 3 Jimmy, Murphy, 55 and what I would like to be able to scroll through on my SQLPlus window is First_Name Billy Last_Name Morgan Age 56 i.e.

  • I need help with this recursion method

         public boolean findTheExit(int row, int col) {           char[][] array = this.getArray();           boolean escaped = false;           System.out.println("row" + " " + row + " " + "col" + " " + col);           System.out.println(array[row][col]

  • Loading fonts with stylemanager

    Hi all, I am have embedded fonts with css and compiled css to swf. Each font I am loading with StlyeManager.load methods. When each loads swf it renders whole application. any suggestion what to do? Should i use module loader ? Please help me. Thanks