RMAN Log file

Hi Folks,
I am using the script below, run through Windows Task Scheduler, to back up the database with RMAN.
RUN
{
  backup database format '\\200.200.200.132\backup\oracledb%s%t' include current controlfile;
  SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
}
The problem is:
every time the backup fails, and I am unable to find out the cause.
Please let me know if it's possible to create a log file.
Thanks in advance.

You can append >> rman.log to your RMAN command to capture its output, for example:
rman target / @rman_script.scr >> rman.log
First try it from the command prompt; once you are satisfied that it works, schedule it.
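Since the original poster is scheduling this on Windows, a minimal sketch of a batch file that Task Scheduler could call might look like the following. The script name, the ORCL SID, the D:\ paths and rman_backup.rcv are all placeholders, not taken from the post:

@echo off
rem run_rman_backup.bat -- hypothetical Task Scheduler wrapper
rem Appends all RMAN output to a log so failures can be diagnosed afterwards.
set ORACLE_SID=ORCL
set LOGFILE=D:\rman_logs\rman_backup.log
echo ==== backup started %date% %time% ==== >> %LOGFILE%
rman target / cmdfile=D:\scripts\rman_backup.rcv >> %LOGFILE% 2>&1
echo ==== backup finished %date% %time%, errorlevel %errorlevel% ==== >> %LOGFILE%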

Similar Messages

  • Generating rman log file

hi
How can I generate an RMAN log file? Oracle XE on Windows.
My RMAN backup script looks like this:
run {
  backup device type disk tag '%TAG' database;
  SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
  backup device type disk tag '%TAG' archivelog all not backed up delete all input;
  delete noprompt obsolete device type disk;
}

    In addition to what Paul mentioned, you can also consider the LOG option from the command line.
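For example, the command-line LOG option would look roughly like this (the script and log file names here are placeholders only):

rman target / cmdfile=backup_script.rman log=rman_backup.log append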

  • Will RMAN delete archive log files on a Standby server?

    Environment:
    Oracle 11.2.0.3 EE on Solaris 10.5
    I am currently NOT using an RMAN repository (coming soon).
    I have a Primary database sending log files to a Standby.
    My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
    Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
    Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
    Couldn't find the answer in the docs.
    Thanks very much!!
    -gary

    Hello again Gary;
    Sorry for the delay.
    Why is what you suggested better?
No, it's not better, but I prefer to manage the archives myself. This method works, period.
    Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
    No. The policy is important.
Having the Primary set to:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
but set to NONE on the Standby, means the worst thing that can happen is RMAN will bark when you try to delete something. (This is a good thing.)
    How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
    Should be a non-issue, the archive does not move, the REDO is transported and applied. There's SQL to monitor both ( Transport and Apply )
    For Data Guard I would consider getting a copy of
    "Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
    Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
    Also Data Guard forum here :
    Data Guard
    Best Regards
    mseberg
    Edited by: mseberg on Apr 10, 2012 4:39 PM
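For reference, the split configuration described above would look roughly like this. A sketch only: the CONFIGURE commands are run in RMAN against the respective database, and the apply check is a simple SQL*Plus query on the standby:

# RMAN, connected to the primary:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

# RMAN, connected to the standby:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

-- SQL*Plus on the standby: has everything shipped been applied?
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;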

Show RMAN messages in a log file and on screen at the same time

Is there any way I can save all RMAN messages in a log file and also show them on standard out (the screen)?
    Thanks

    Hi,
    You can try a shell script like this if you are using linux:
    #!/bin/sh
    # Name: test_backup
    # Author: Tad_cs
    # Description: Executes backup using the RMAN
    export ORACLE_HOME=$1
    export ORACLE_SID=$2
    export LOG_DIR=$3
    # Variables:
    SCRIPT="test_backup"
    data_log=`date '+%y-%m-%d_%H:%M:%S'`
    logfile=${LOG_DIR}/${SCRIPT}-${data_log}.log
    # Execution of script backup of rman:
    $ORACLE_HOME/bin/rman <<EOF > $logfile
    connect target rman/rman
    connect catalog rman/rman
    run { execute script test_backup; }
    EOF
exit
So, while the script is running, you can open another screen, find the log that is being generated, and watch its contents with:
tail -f log_file_name.log
Is this what you need?
    []´s
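As an alternative to tailing the log from a second screen, the output can be duplicated to the terminal and the log file in one step with tee. A minimal sketch, reusing the hypothetical rman/rman credentials and test_backup script from the example above:

#!/bin/sh
# Runs an RMAN stored script, shows output on screen and writes it to a log at the same time.
logfile=/tmp/test_backup-$(date '+%y-%m-%d_%H:%M:%S').log
$ORACLE_HOME/bin/rman target rman/rman catalog rman/rman <<EOF 2>&1 | tee "$logfile"
run { execute script test_backup; }
EOF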

  • RMAN success, but errors in alert.log file

My RMAN backup script runs fine, but generates errors in the alert.log file.
Here are the contents of the trace file:
    /usr/lib/oracle/xe/app/oracle/admin/XE/udump/xe_ora_3990.trc
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
    System name: Linux
    Node name: plockton
    Release: 2.6.18-128.2.1.el5
    Version: #1 SMP Wed Jul 8 11:54:54 EDT 2009
    Machine: i686
    Instance name: XE
    Redo thread mounted by this instance: 1
    Oracle process number: 26
    Unix process pid: 3990, image: oracle@plockton (TNS V1-V3)
    *** 2009-07-23 23:05:01.835
    *** ACTION NAME:(0000025 STARTED111) 2009-07-23 23:05:01.823
    *** MODULE NAME:(backup full datafile) 2009-07-23 23:05:01.823
    *** SERVICE NAME:(SYS$USERS) 2009-07-23 23:05:01.823
    *** SESSION ID:(33.154) 2009-07-23 23:05:01.823
    *** 2009-07-23 23:05:18.689
    *** ACTION NAME:(0000045 STARTED111) 2009-07-23 23:05:18.689
    *** MODULE NAME:(backup archivelog) 2009-07-23 23:05:18.689
    Does anyone know why? Thanks.
    Richard

    I'm not sure if this will answer your question or not, but I believe these messages can likely be ignored.
I'm currently running 10.2.0.1.0 Enterprise Edition in pre-production (yes, I know I should apply the latest patchset, and I plan to do so as soon as I get a development box allocated to me and can test its impact). I see the same types of messages that you've reported with each of my regularly scheduled backups:
    a) The alert_<$SID>.log reports that there are errors in trace files:
    Mon Aug 10 04:33:49 2009
    Starting control autobackup
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Control autobackup written to DISK device
    handle '/backup/physical/BLAH/RMAN/cf_c-2740124895-20090810-00'
b) The .trc files, when you look at them, contain no errors - only these "informational" messages:
    *** 2009-08-10 04:33:50.781
    *** ACTION NAME:(0000105 STARTED111) 2009-08-10 04:33:50.754
    *** MODULE NAME:(backup archivelog) 2009-08-10 04:33:50.754
    *** SERVICE NAME:(SYS$USERS) 2009-08-10 04:33:50.754
    *** SESSION ID:(126.28030) 2009-08-10 04:33:50.754
    c) I've verified that LOG_ARCHIVE_TRACE is set to 0:
SQL*Plus> show parameter log_archive_trace

NAME                  TYPE        VALUE
--------------------- ----------- -----
log_archive_trace     integer     0
As best I can discern from my own experience, these messages should just be ignored, and I trust (read: "hope") they will simply go away once the latest patchset is applied. Since you are running Oracle XE, a patchset is unfortunately not an option.
    V/R
    -Eric

Where can I find the log file generated by RMAN using EM 10g?

    Hi
I am trying to find the log file that is generated when RMAN is invoked from EM.
    I can only see the file using Internet Explorer with the URL:
    em/console/database/rec/bkpMgmt?skey=257&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDe
But I need to find where the log files are located in the filesystem, because on another server I will not have EM with OC4J.
    Thanks.
    Juan.

When I use OEM 10g and choose the option Maintenance / Backup Reports,
I can see information about all my backups, including:
Backup Name - Start Time - Time Taken - Status - Type - Output Devices - Input Size .....
When I click on the Status field I can see the log file of that backup.
(When I click on Status, a URL is invoked, something like the one below.)
http://10.5.0.86:1158/em/console/database/rec/bkpMgmt?skey=259&type=oracle_database&target=isatprod.dla_dns.com&event=showJobDet&objType=jobDtl
So the log file must exist somewhere for every backup made; the problem is that I cannot find it.
The log has approximately 500 lines; if you want, I can send it to you by email.
Currently I don't have a repository catalog; I use the control file as the repository.
I don't think 500 lines of log output would be included in any dynamic performance view.
    Thanks
    Juan
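If it is only the text of the job output that is needed, it is usually also queryable from the database itself, which avoids hunting for a file on disk. A rough sketch, assuming 10g's V$RMAN_STATUS and V$RMAN_OUTPUT views (note that V$RMAN_OUTPUT is memory-based, so it only covers jobs since the last instance restart):

-- List recent RMAN jobs and note the session_recid of the one you want
SELECT session_recid, operation, status, start_time
  FROM v$rman_status
 ORDER BY start_time DESC;

-- Pull the output lines for that job
SELECT output
  FROM v$rman_output
 WHERE session_recid = &session_recid
 ORDER BY recid;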

  • RMAN causing "log file sync"

    Hi,
    Maybe someone can help me on this.
We have a RAC database in production where (for some) applications need a response within 0.5 seconds. In general that works.
Outside of production hours we make a weekly full backup and daily incremental backups, so those do not bother us. However, as soon as we make an archive backup or a backup of the control file during production hours, we have a problem: the applications have to wait more than 0.5 seconds for a response, caused by the event "log file sync" with wait class "Commit".
I already adjusted the RMAN script so that we use only 1 file per set and also only one channel. However, that didn't help.
Increasing the log buffer was also not a success.
Increasing the large pool is in our case not an option.
We have 8 redo log groups with 2 members each (250 MB each) and an average of 12 log switches per hour during the day, which is not very alarming. Even during the backup the I/O doesn't show very high activity. The increase in I/O at that moment is minor, but apparently enough to cause the "log file sync".
Oracle has no documentation that gives me more possible causes.
The strange thing is that before the first of October we didn't have this problem, and no changes were made.
Does anyone have an idea where to look further, or has anyone experienced something like this and been able to solve it?
    Kind regards

    The only possible contention I can see is between the log writer and the archiver. 'Backup archivelog' in RMAN means implicitly 'ALTER SYSTEM ARCHIVE LOG CURRENT' (log switch and archiving the online log).
    You should alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Werner
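To see where the redo members currently sit before deciding how to spread them across disks, a quick check could be the following (standard dictionary views, nothing specific to this system):

-- Redo log groups, their sizes and member locations
SELECT l.group#, l.bytes/1024/1024 AS size_mb, f.member
  FROM v$log l JOIN v$logfile f ON f.group# = l.group#
 ORDER BY l.group#, f.member;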

Log file sync during RMAN archive backup

    Hi,
    I have a small question. I hope someone can answer it.
Our database (cluster) needs to respond within 0.5 seconds. Most of the time it does, except when an RMAN backup is running.
Once a week we run a full backup, every weekday an incremental backup, every hour a controlfile backup, and every 15 minutes an archivelog backup.
During a backup, response time can be much longer than 0.5 seconds.
Below is a typical example of the response time.
    EVENT: log file sync
    WAIT_CLASS: Commit
    TIME_WAITED: 10,774
It obviously takes very long to get a commit; the value is in seconds. As you can see, that is long. It is clearly related to the RMAN backup, since this kind of response time only shows up while the backup is running.
I would like to ask why response times are so high even if I only back up the archivelog files. We didn't have this problem before, but it suddenly appeared two weeks ago and I can't find the cause.
- We use an 11.2 RAC database on ASM. Redo logs and database files are on the same disks.
    - Autobackup of controlfile is off.
    - Dataguard: LogXptMode = 'arch'
    Greetings,

    Hi,
Thank you. I am new here, so I was wondering how to put things into the right category. It is obvious I am in the wrong one, so I thank the people who are still responding.
- Actually, the example I gave is one of many hundreds a day. The response times during the archivelog backup are most of the time between 2 and 11 seconds. When we back up the controlfile along with it, these response times are guaranteed.
- The autobackup of the controlfile is turned off since we already have a backup of the controlfile every hour. As we back up the archived logs every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archived logs, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off since it severely hurts performance at the moment.
As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When that doesn't happen, an entry is written to a table used by that application, so I can compare the time of the failure with the time of whatever was happening. The times of the archivelog backup and the failures match in 95% of the cases. It also shows that log file sync at that moment is part of this performance issue. I actually built a script for myself to determine, from the application side, what the cause of the problem is:
select ASH.INST_ID INST,
       ASH.EVENT EVENT,
       ASH.P2TEXT,
       ASH.WAIT_CLASS,
       DE.OWNER OWNER,
       DE.OBJECT_NAME OBJECT_NAME,
       DE.OBJECT_TYPE OBJECT_TYPE,
       ASH.TIJD,
       ASH.TIME_WAITED TIME_WAITED
  from (SELECT INST_ID,
               EVENT,
               CURRENT_OBJ#,
               ROUND(TIME_WAITED / 1000000, 3) TIME_WAITED,
               TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
               WAIT_CLASS,
               P2TEXT
          FROM gv$active_session_history
         WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
       (SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
 WHERE DE.OBJECT_ID = ASH.CURRENT_OBJ#
   AND ASH.TIME_WAITED > 2
 ORDER BY 8, 6
    - Our logfiles are 250M and we have 8 groups of 2 members.
- The large pool is not set explicitly since we use memory_max_target and memory_target. I know that Oracle may not distribute memory well with these parameters, so it is definitely something I should look into.
- I looked at the size of the log buffer. Our log buffer is actually 28M, which in my opinion is very large, so maybe I should make it even smaller. It is quite possible that the log buffer is causing this problem. Thank you for the tip.
- I will also definitely look into the I/O. Even though we work with ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So you are right, I have to investigate.
Thank you all very much for still responding even though I put this in the totally wrong category.
    Greetings,
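Two quick checks that match the points above (the current log buffer size and how much time the instance really spends on log file sync), shown only as a sketch:

-- Current log buffer size in bytes
SELECT name, value FROM v$parameter WHERE name = 'log_buffer';

-- Cumulative log file sync waits since instance startup
SELECT event, total_waits, time_waited_micro/1e6 AS seconds_waited
  FROM v$system_event
 WHERE event = 'log file sync';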

How can I access the data in the RMAN output file "daily.log"?

    Hi All,
I used the commands below for the RMAN output.
The output file "daily.log" has been created, but there is no data in it.
How can I get the RMAN output into the file "daily.log"?
    bash-3.00$ sh rmant.sh
    Recovery Manager: Release 10.2.0.3.0 - Production on Thu Jun 30 16:54:16 2011
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database: DEV (DBID=45558086)
    RMAN> Spool log to daily.log;
    2> run
    3> {
4> backup database;
5> }
    6> spool log off;
    7>
    Spooling started in log file: daily.log
    Recovery Manager10.2.0.3.0
    Starting backup at 30-JUN-11
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=634 devtype=DISK
    channel ORA_DISK_1: starting compressed incremental level 0 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00014 name=/u02/proddata/a_int01.dbf
    input datafile fno=00022 name=/u02/proddata/a_txn_data01.dbf
    input datafile fno=00023 name=/u02/proddata/a_txn_data02.dbf
    input datafile fno=00024 name=/u02/proddata/a_txn_data03.dbf
    input datafile fno=00039 name=/u02/proddata/a_txn_data04.dbf
    input datafile fno=00038 name=/u02/proddata/undo02.dbf
    input datafile fno=00027 name=/u02/proddata/a_txn_ind03.dbf
    input datafile fno=00028 name=/u02/proddata/a_txn_ind04.dbf
    input datafile fno=00012 name=/u02/proddata/undo01.dbf
    input datafile fno=00015 name=/u02/proddata/a_media01.dbf
    input datafile fno=00025 name=/u02/proddata/a_txn_ind01.dbf
    input datafile fno=00026 name=/u02/proddata/a_txn_ind02.dbf
    input datafile fno=00029 name=/u02/proddata/a_txn_ind05.dbf
    input datafile fno=00021 name=/u02/proddata/a_summ01.dbf
    input datafile fno=00019 name=/u02/proddata/a_ref01.dbf
    input datafile fno=00020 name=/u02/proddata/a_ref02.dbf
    input datafile fno=00001 name=/u02/proddata/system01.dbf
    input datafile fno=00002 name=/u02/proddata/system02.dbf
    input datafile fno=00003 name=/u02/proddata/system03.dbf
    input datafile fno=00004 name=/u02/proddata/system04.dbf
    input datafile fno=00005 name=/u02/proddata/system05.dbf
    input datafile fno=00006 name=/u02/proddata/system06.dbf
    input datafile fno=00007 name=/u02/proddata/system07.dbf
    input datafile fno=00008 name=/u02/proddata/system08.dbf
    input datafile fno=00009 name=/u02/proddata/system09.dbf
    input datafile fno=00010 name=/u02/proddata/system10.dbf
    input datafile fno=00011 name=/u02/proddata/system11.dbf
    input datafile fno=00013 name=/u02/proddata/a_archive01.dbf
    input datafile fno=00036 name=/u02/proddata/sysaux01.dbf
    input datafile fno=00016 name=/u02/proddata/a_nolog01.dbf
    input datafile fno=00037 name=/u02/proddata/izu01.dbf
    input datafile fno=00017 name=/u02/proddata/a_queue01.dbf
    input datafile fno=00018 name=/u02/proddata/a_queue02.dbf
    input datafile fno=00031 name=/u02/proddata/odm.dbf
    input datafile fno=00034 name=/u02/proddata/portal01.dbf
    input datafile fno=00035 name=/u02/proddata/sfx01.dbf
    input datafile fno=00030 name=/u02/proddata/ctxd01.dbf
    input datafile fno=00032 name=/u02/proddata/olap.dbf
    input datafile fno=00033 name=/u02/proddata/owad01.dbf
    channel ORA_DISK_1: starting piece 1 at 30-JUN-11
    channel ORA_DISK_1: finished piece 1 at 30-JUN-11
piece handle=/sw/weekly_cum_database_DEV_t755196858_c1_s24_p1 tag=WEEKLY_CUM_DATABASE comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:32:55
    channel ORA_DISK_1: starting compressed incremental level 0 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel ORA_DISK_1: starting piece 1 at 30-JUN-11
    channel ORA_DISK_1: finished piece 1 at 30-JUN-11
piece handle=/sw/weekly_cum_database_DEV_t755198834_c1_s25_p1 tag=WEEKLY_CUM_DATABASE comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    Finished backup at 30-JUN-11
    Spooling for log turned off
    Recovery Manager10.2.0.3.0
    Recovery Manager complete.
    bash-3.00$

I tried this but it hangs:
    bash-3.00$ sh rmant.sh
    Recovery Manager: Release 10.2.0.3.0 - Production on Fri Jul 1 10:52:01 2011
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database: DEV (DBID=45558086)
    RMAN> Spool log to 'daily' append;
    2> run
    3> {
    4> backup as backupset
    5> incremental level=0 cumulative
    6> device type disk
    7> tag "weekly_cum_database"
    8> format '/sw/weekly_cum_database_%d_t%t_c%c_s%s_p%p'
    9> database;
    10> }
    11>
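One thing worth checking is where the spool file actually lands: with a bare file name, RMAN writes it to the current working directory of the shell that launched it. A sketch with an explicit path and an explicit exit (the path and the rest of this wrapper are placeholders, not taken from rmant.sh):

#!/bin/sh
# Hypothetical wrapper: spool RMAN output to an absolute path
rman target / <<EOF
spool log to '/u01/app/oracle/logs/daily.log' append;
run {
  backup database;
}
spool log off;
exit;
EOF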

RMAN cannot generate a log file in a Linux shell script

    Friends,
In the shell script we use the command below to save the processing information.
    $ORACLE_HOME/bin/rman "target=/" > ${LOG} <<- EOF
In recent days it has not worked and returns messages like:
    ./test.sh: line 78: LOG: command not found
    ./test.sh: line 83: /u01/test/oracle/scripts/crons/logs/testdb_013120141148.log: No such file or directory
    find: /u01/test/oracle/scripts/crons/logs: No such file or directory
It seems that RMAN cannot create a log file when we pass the LOG variable value.
Thanks for the help.

    Hi,
It seems someone removed the log directory. Check the log directory specified in the script and create it if it does not exist (see the sketch below).
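A small guard at the top of the script avoids this failure mode by recreating the directory if it has been removed; a sketch using the directory and file pattern from the error messages above (the date format is an assumption):

#!/bin/sh
# Recreate the log directory if missing, then build the log file name and run RMAN.
LOG_DIR=/u01/test/oracle/scripts/crons/logs
mkdir -p "${LOG_DIR}"
LOG=${LOG_DIR}/testdb_$(date '+%m%d%Y%H%M').log
$ORACLE_HOME/bin/rman target / > "${LOG}" <<EOF
show all;
exit;
EOF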

RMAN standby log files

Since setting up RMAN controlfile autobackup on a Data Guard setup, I get the following message in the alert log:
    Starting control autobackup
    Sat May 07 01:04:27 2005
    Control autobackup written to DISK device
         handle 'CF_C-00'
    Clearing standby activation ID 2115378951 (0x7e161f07)
    The primary database controlfile was created using the
    'MAXLOGFILES 9' clause.
    There is space for up to 6 standby redo logfiles
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 104857600;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 104857600;
Can anyone tell me a bit more about what this message is saying and explain the pros/cons of adding these standby log files to the standby database?

This message is not related to RMAN.
In Data Guard there is a feature of adding standby logfiles for maximum protection.
In an 8i standby database this feature is not available.
So this is a normal message about adding standby logfiles in Data Guard.
    Thanks and Regards
    Kuljeet Pal Singh

How do I set up RMAN not to delete archive log files on the source database so GoldenGate can process DDL/DML changes?

I want to set up RMAN not to delete any archive log files that will still be used by GoldenGate. Once GoldenGate is done with an archive log file, it can be backed up and deleted by RMAN. It's my understanding that I can issue the command "REGISTER EXTRACT <ext_name>, LOGRETENTION" to enable this functionality. Is this the only thing I need to do to enable it?

    Hello,
Yes, this is the right way when using classic capture.
Use the command: REGISTER EXTRACT extract_name, LOGRETENTION
This creates an (artificial) Oracle Streams capture group that prevents RMAN from deleting archives that are still pending processing by the GoldenGate capture process.
You can see this integration by doing SELECT * FROM DBA_CAPTURE; after executing the register command.
Then, when RMAN tries to delete an archive file still pending processing by GG, this warning appears in the RMAN logs:
Error:     RMAN 8317 (RMAN-08317 RMAN-8317)
Text:     WARNING: archived log not deleted, needed for standby or upstream capture process.
So this is a good manageability feature. I think it is new in GG 11.1.
Tip: to avoid RMAN backing up a pending archive multiple times, there is an option, BACKUP ARCHIVELOG ... NOT BACKED UP 1 TIMES (see the sketch below).
If you remove an Extract that is registered with the database, you need to use this command to remove the Streams capture group:
UNREGISTER EXTRACT extract_name, LOGRETENTION
Then if you query DBA_CAPTURE, the artificial Streams group is gone.
I hope this helps.
    Regards
    Arturo
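The backup and delete options mentioned in the tip would look roughly like this in an RMAN script (standard syntax, shown as a sketch only):

# Back up only archived logs that do not yet have a backup on disk,
# then delete archived logs that already have at least one disk backup.
# Logs still needed by the registered capture process stay protected (RMAN-08317).
BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DISK;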

Are there any possible reasons RMAN would generate corrupt archive log files?

    Dear all,
Are there any possible reasons why RMAN would generate corrupt archive log files?
    Best Regards,
    Amy

Because I tried to perform the daily backup at lunch time and found that after more than 1 hour it had made no progress. Normally it takes around 40 minutes. The following is the log file:
    RMAN> Run
    2> {
    3> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    4> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    5> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    6> allocate channel ch1 type disk format '/u03/db/backup/RMAN/backup_%d_%t_%s_%p_%U.bck';
    7> backup incremental level 1 cumulative database plus archivelog delete all input;
    8> backup current controlfile;
    9> backup spfile;
    10> release channel ch1;
    11> }
    12> allocate channel for maintenance type disk;
    13> delete noprompt obsolete;
    14> delete noprompt archivelog all backed up 2 times to disk;
    15>
    16>
    using target database controlfile instead of recovery catalog
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters are successfully stored
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    new RMAN configuration parameters are successfully stored
    old RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    new RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    new RMAN configuration parameters are successfully stored
    allocated channel: ch1
    channel ch1: sid=99 devtype=DISK
    Starting backup at 31-MAR-09
    current log archived
After that I went to the archive log directory "/u02/oracle/uat/uatdb/9.2.0/dbs" and used the ls -lt command to see how many archive logs there were, and my screen just hung. We found out that we cannot use ls -lt to read the arch1_171.dbf archive log; the rest of the archive logs can be listed with ls -lt.
We cannot delete this file either. We shut the database down with abort, performed a disk check, fixed the disk error, and then opened the database again. Everything seems back to normal and we can use ls -lt to read arch1_171.dbf.
The strange thing is that we have the same problem in both Development and Production: one or more archive logs seem to be corrupted under the same directory, /u02/oracle/uat/uatdb/9.2.0/dbs.
    Does anyone encounter the same problem?
    Amy
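When archive log corruption is suspected, RMAN itself can be asked to read the files and report problems before the real backup runs; a sketch (CROSSCHECK is certainly available on 9.2, and BACKUP VALIDATE should be as well, but treat that as an assumption):

# Read every archived log and check for corruption without writing backup pieces
BACKUP VALIDATE ARCHIVELOG ALL;

# Mark archived logs whose files are missing or unreadable as EXPIRED, then list them
CROSSCHECK ARCHIVELOG ALL;
LIST EXPIRED ARCHIVELOG ALL;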

Creating an RMAN backup log file in Linux

    Hi Experts,
I run a 10.2.0.4 database on Red Hat 5.1.
I created a test shell script for a cron job, but I could not get an RMAN backup log file created, and the email arrives blank.
My code is as follows:
    #!/bin/bash
    EXPORT DTE='date +%m%d%C5y%H%M'
    export $ORACLE_HOME/bin
    export $ORACLE_SID=sale
    rman target='backuptest/backuptest@sale4' nocatalog log /urs/tmp/orarman/jimout.log << EOF
    RUN {
    show all;
    EXIT;
    EOF
    mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/jimout.log
Please advise me on how to debug this.
    Thanks for your help!
    JIm

    Thanks very much!
I made the changes below:
    #!/bin/bash
    EXPORT DTE='date +%m%d%C5y%H%M'
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export ORACLE_SID=sale
    export PATH=$ORACLE_HOME/bin:$PATH
    rman target='backuptest/[email protected]' nocatalog log='/urs/tmp/orarman/log/jimout.log' << EOF
    RUN {
    show all;
    EXIT;
    EOF
    mail -s " backup"${DTE} [email protected] < urs/tmp/orarman/log/jimout.log
I am still not able to see the file under the /urs/tmp/orarman/log/ directory.
The RMAN option log='/urs/tmp/orarman/log/jimout.log' does not work.
What is wrong with my code?
    I am looking for your help!
    JIm
    Edited by: user589812 on Jul 23, 2010 8:36 AM
    Edited by: user589812 on Jul 23, 2010 8:38 AM
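For comparison, a cleaned-up sketch of the same idea, not the original poster's script: the date assignment needs command substitution, SHOW ALL can be issued on its own followed by EXIT, and the log directory must exist before RMAN can write into it. The redacted [email protected] placeholders and the /urs/... path are kept exactly as posted (the path itself may need checking), and the date format is an assumption:

#!/bin/bash
# Hypothetical cleaned-up version of the backup wrapper above.
DTE=$(date '+%m%d%Y%H%M')
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=sale
export PATH=$ORACLE_HOME/bin:$PATH
LOGDIR=/urs/tmp/orarman/log          # as given in the post; verify this directory exists
mkdir -p "$LOGDIR"
rman target='backuptest/[email protected]' nocatalog log="$LOGDIR/jimout.log" <<EOF
show all;
exit;
EOF
mail -s "backup ${DTE}" [email protected] < "$LOGDIR/jimout.log"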

  • RMAN recovery from lost online redo log files

    Hi,
Can anyone tell me how we can recover the database when all online redo log files (current, active and inactive) are lost?
Note: RMAN with a catalog.

    Hi,
Is the database running or down?
If the database is down and your datafiles and controlfile are synchronised, then
perform a fake incomplete recovery: recover the database until cancel and open the database using RESETLOGS (see the sketch below).
This will create new log files. Then take a full database backup.
    Regards,
    Navneet
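In SQL*Plus terms, the "fake" incomplete recovery described above is roughly the following (a sketch; take a fresh full backup immediately afterwards, as noted):

-- Database down, datafiles and controlfile consistent:
STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL;
-- type CANCEL at the prompt, then:
ALTER DATABASE OPEN RESETLOGS;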
