Archived log missed in standby database

Hi,
OS; Windows 2003 server
Oracle: 10.2.0.4
Data Guard: Max Performance
Data Guard missed some of the archived log files, but the latest log files are applying. The standby database is not in sync with the primary.
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND THREAD# = LOCAL.THREAD#);
I ran the query above and found that some files are missing on the standby.
select status, type, database_mode, recovery_mode,protection_mode, srl, synchronization_status,synchronized from V$ARCHIVE_DEST_STATUS where dest_id=2;
STATUS TYPE DATABASE_MODE RECOVERY_MODE PROTECTION_MODE SRL SYNCHRONIZATION_STATUS SYN
VALID PHYSICAL MOUNTED-STANDBY MANAGED MAXIMUM PERFORMANCE NO CHECK CONFIGURATION NO
Can anyone tell me how to apply those missing archived log files?
Thanks in advance

Deccan Charger wrote:
I got the error below.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
ERROR at line 1:
ORA-01153: an incompatible media recovery is active
You essentially need to do the following.
1) Stop managed recovery on the standby.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
2) Resolve the archive log gap - if you have configured FAL_SERVER and FAL_CLIENT, Oracle should do this when you follow step 3 below; as you've manually copied the missed logs, you should be OK.
3) Restart managed recovery using the command shown above.
You can monitor archive log catchup using the alert.log or your original query.
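Putting the steps together, a minimal sketch on the standby might look like this (the archive log path below is a placeholder for one of the files you copied over manually; registering it is only needed if FAL does not fetch the gap on its own):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> -- placeholder file name; repeat for each manually copied archive log
SQL> ALTER DATABASE REGISTER LOGFILE 'D:\ARCH\ARC00123_0123456789.001';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';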
Niall Litchfield
http://www.orawin.info/

Similar Messages

  • Archive log missing on standby: FAL[client]: Failed to request gap sequence

My current environment is Oracle 10.2.0.4 with ASM 10.2.0.4 on a 2-node RAC in production and a standby that is the same setup. I'm also running on Oracle Linux 5. Almost daily now an archive log doesn't make it to the standby and Oracle doesn't seem to resolve the gap sequence from the primary. If I stop and restart recovery it gets the logfile and continues recovery just fine. I have checked my fal_client and fal_server settings and they look good. The logs after this error do continue to get written to the standby, but the standby won't continue recovery until I stop and restart recovery and it fetches the missing log.
The only thing I know that's happening is that the firewall people are disconnecting any connections that are inactive for 60 minutes, and they recently did an upgrade that they claim didn't change anything :) I don't know if this is causing the problem or not. Any thoughts on what might be happening?
    Error in standby alert.log:
    Tue Jun 29 23:15:35 2010
    RFS[258]: Possible network disconnect with primary database
    Tue Jun 29 23:15:36 2010
    Fetching gap sequence in thread 2, gap sequence 9206-9206
    Tue Jun 29 23:16:46 2010
    FAL[client]: Failed to request gap sequence
    GAP - thread 2 sequence 9206-9206
    DBID 661398854 branch 714087609
    FAL[client]: All defined FAL servers have been attempted.
    Error on primary alert.log:
    Tue Jun 29 23:00:07 2010
    ARC0: Creating remote archive destination LOG_ARCHIVE_DEST_2: 'WSSPRDB' (thread 1 sequence 9265)
    (WSSPRD1)
    ARC0: Transmitting activation ID 0x29c37469
    Tue Jun 29 23:00:07 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
    ORA-03135: connection lost contact
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Tue Jun 29 23:00:07 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Tue Jun 29 23:00:07 2010
    ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
    Tue Jun 29 23:00:41 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    Tue Jun 29 23:00:41 2010
    FAL[server, ARC2]: Begin FAL archive (dbid 0 branch 714087609 thread 2 sequence 9206 dest WSSPRDB)
    FAL[server, ARC2]: FAL archive failed, see trace file.
    Tue Jun 29 23:00:43 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc2_14028.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Tue Jun 29 23:00:43 2010
    ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
    Tue Jun 29 23:01:16 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    Tue Jun 29 23:15:01 2010
    Thread 1 advanced to log sequence 9267 (LGWR switch)
I have checked the trace files that get spit out, but they aren't anything meaningful to me as to what's really happening. Snippet of the trace file:
    tkcrrwkx: Starting to process work request
    tkcrfgli: SRL header: 0
    tkcrfgli: SRL tail: 0
    tkcrfgli: ORL to arch: 4
    tkcrfgli: le# seq thr for bck tba flags
    tkcrfgli: 1 359 1 2 0 3 0x0008 ORL active cur
    tkcrfgli: 2 358 1 0 1 1 0x0000 ORL active
    tkcrfgli: 3 361 2 4 0 0 0x0008 ORL active cur
    tkcrfgli: 4 360 2 0 3 2 0x0000 ORL active
    tkcrfgli: 5 -- entry deleted --
    tkcrfgli: 6 -- entry deleted --
    tkcrfgli: 7 -- entry deleted --
    tkcrfgli: 8 -- entry deleted --
    tkcrfgli: 9 -- entry deleted --
    tkcrfgli: 191 -- entry deleted --
    tkcrfgli: 192 -- entry deleted --
    *** 2010-03-27 01:30:32.603 20998 kcrr.c
    tkcrrwkx: Request from LGWR to perform: <startup>
    tkcrrcrlc: Starting CRL ARCH check
    *** 2010-03-27 01:30:32.603 66085 kcrr.c
    Beginning controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14011)]
    *** 2010-03-27 01:30:32.645 66173 kcrr.c
    Acquired controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14024)]
    *** 2010-03-27 01:30:32.649 66394 kcrr.c
    Ending controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20397]
    tkcrrasgn: Checking for 'no FAL', 'no SRL', and 'HB' ARCH process
    # HB NoF NoS CRL Name
    29 NO NO NO NO ARC0
    28 NO YES YES NO ARC1
    27 NO NO NO NO ARC2
    26 NO NO NO NO ARC3
    25 YES NO NO NO ARC4
    24 NO NO NO NO ARC5
    23 NO NO NO NO ARC6
    22 NO NO NO NO ARC7
    21 NO NO NO NO ARC8
    20 NO NO NO NO ARC9
    Thanks.
    Kristi

It's the network that's messing up; it is unlikely to be the firewall timeout, as that waits for 60 minutes and you are switching every 15 minutes. There may be some other network glitch that needs to be rectified.
In any case - whether the archive file is missing, corrupt, or only partially transferred - the FAL setup should have refetched the problematic archive log automatically.
As many have suggested already, I believe the best way to resolve RFS issues is to use real-time apply by configuring standby redo logs. It is very easy to configure, and you can opt for real-time apply even in the maximum performance mode you are using right now.
Even though you are maintaining (I guess) a 1-1 mapping between primary and standby instances, you can provide both primary instances in fal_server (like fal_server=string1,string2). See if that helps.
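For reference, a rough sketch of both suggestions (group numbers, sizes and TNS names below are placeholders, not taken from your configuration; it assumes OMF/ASM handles log file placement, standby redo logs sized to match your online redo logs, with one extra group per thread):
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 12 SIZE 512M;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
SQL> -- placeholder TNS aliases for the two primary instances
SQL> ALTER SYSTEM SET FAL_SERVER='PRIM1_TNS,PRIM2_TNS' SCOPE=BOTH;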
Lastly, check whether you are having a similar issue at other times as well that might be getting rectified automatically as expected:
    col message for a80
    col time for a20
    select message, to_char(timestamp,'dd-mon-rr hh24:mi:ss') time
    from v$dataguard_status
    where severity in ('Error','Fatal')
    order by timestamp;
    Cheers.

  • How to delete archive logs on the standby database....in 9i

    Hello,
    We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy. I am just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed, or are two days old, to be deleted automatically?
    thanks,
    C.

From 10g there is an option to purge archive logs via a deletion policy once they have been applied on the standby. Check this note:
    *Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]*
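For reference, on 10g+ that note boils down to something like this in RMAN on the site holding the archives (sketch only):
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
With that policy set, archive logs in the flash recovery area become eligible for deletion once they have been applied on the standby.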
Still, since you are on 9i, you need to schedule an RMAN job or a shell script to delete the archives.
Before deleting archives:
1) you need to check whether all the archives have been applied or not;
2) then you can remove all the archives completed before 'sysdate-2':
RMAN> delete archivelog all completed before 'sysdate-2';
    As per your requirement.
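A minimal sketch of those two steps (the applied check in SQL*Plus on the standby, then the delete in RMAN):
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
RMAN> delete noprompt archivelog all completed before 'sysdate-2';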

  • Redo Archive Logs Missing

    Hi Gurus
While configuring Data Guard for Oracle 10g (10.2.0.4) 64-bit on Windows 2007 Server 64-bit, I have a few questions.
    1. What is the Default mode of Standby Database?
    2. Should we Always Start Physical Standby Database to Recover Missing Redo Archive Log?
    SQL> startup mount;
    ORACLE instance started.
    Total System Global Area 591396864 bytes
    Fixed Size 2067496 bytes
    Variable Size 163578840 bytes
    Database Buffers 419430400 bytes
    Redo Buffers 6320128 bytes
    Database mounted.
SQL> alter database recover managed standby database disconnect from session;
                   Database altered.
    3. When there are missing Redo Log Archives e.g.
    ----On Standby Database--------
    SQL> SELECT RESETLOGS_ID,SEQUENCE#,STATUS,ARCHIVED FROM V$ARCHIVED_LOG
    2 ORDER BY RESETLOGS_ID,SEQUENCE#;
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 15 A YES
    812980008 16 A YES
    812980008 17 A YES
    812980008 18 A YES
    812980008 19 A YES
    812980008 20 A YES
    812980008 21 A YES
    812980008 22 A YES
    812980008 23 A YES
    812980008 24 A YES
    812980008 25 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 26 A YES
    812980008 27 A YES
    812980008 28 A YES
    812980008 29 A YES
    812980008 30 A YES
    812980008 31 A YES
    812980008 32 A YES
    812980008 33 A YES
    812980008 34 A YES
    812980008 35 A YES
    812980008 36 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 37 A YES
    812980008 38 A YES
    812980008 39 A YES
    812980008 40 A YES
    812980008 41 A YES
    812980008 42 A YES
    812980008 43 A YES
    29 rows selected.
    ---------------On Primary Database---------------------
    SQL> SELECT RESETLOGS_ID,SEQUENCE#,STATUS,ARCHIVED FROM V$ARCHIVED_LOG
    2 ORDER BY RESETLOGS_ID,SEQUENCE# ;
    RESETLOGS_ID SEQUENCE# S ARC
    *812980008 8 A YES*
    *812980008 9 A YES*
    *812980008 10 A YES*
    *812980008 11 A YES*
    *812980008 12 A YES*
    *812980008 13 A YES*
    *812980008 14 A YES*
    812980008 15 A YES
    812980008 15 A YES
    812980008 16 A YES
    812980008 16 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 17 A YES
    812980008 17 A YES
    812980008 18 A YES
    812980008 18 A YES
    812980008 19 A YES
    812980008 19 A YES
    812980008 20 A YES
    812980008 20 A YES
    812980008 21 A YES
    812980008 21 A YES
    812980008 22 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 22 A YES
    812980008 23 A YES
    812980008 23 A YES
    812980008 24 A YES
    812980008 24 A YES
    812980008 25 A YES
    812980008 25 A YES
    812980008 26 A YES
    812980008 26 A YES
    812980008 27 A YES
    812980008 27 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 28 A YES
    812980008 28 A YES
    812980008 29 A YES
    812980008 29 A YES
    812980008 30 A YES
    812980008 30 A YES
    812980008 31 A YES
    812980008 31 A YES
    812980008 32 A YES
    812980008 32 A YES
    812980008 33 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 33 A YES
    812980008 34 A YES
    812980008 34 A YES
    812980008 35 A YES
    812980008 35 A YES
    812980008 36 A YES
    812980008 36 A YES
    812980008 37 A YES
    812980008 37 A YES
    812980008 38 A YES
    812980008 38 A YES
    RESETLOGS_ID SEQUENCE# S ARC
    812980008 39 A YES
    812980008 39 A YES
    812980008 40 A YES
    812980008 40 A YES
    812980008 41 A YES
    812980008 41 A YES
    812980008 42 A YES
    812980008 42 A YES
    812980008 43 A YES
    812980008 43 A YES
    65 rows selected.
    Log 8, 9, 10, 11, 12, 13, 14, 15 are missing.
How do I apply / recover these logs on the standby database?
    Regards
    Thunder2777

    Hi
    Thunder2777 wrote:
    Hi Gurus
    While Configuring Data Guard for ORacle 10g (10.2.0.4) 64 bits on Windows 2007 Server 64 bits.
    I got few questions
    1. What is the Default mode of Standby Database?
What is the default mode? I think you are asking in what mode the standby database applies redo logs.
A standby database can apply received redo only in MOUNT mode (your version is 10g; from 11g it can also apply in open mode with READ ONLY WITH APPLY).
2. Should we Always Start Physical Standby Database to Recover Missing Redo Archive Log?
If the standby database is opened in mount mode, then it can receive redo.
If you start Redo Apply, then MRP can request the missing redo logs from the primary.
    SQL> startup mount;
    ORACLE instance started.
    Total System Global Area 591396864 bytes
    Fixed Size 2067496 bytes
    Variable Size 163578840 bytes
    Database Buffers 419430400 bytes
    Redo Buffers 6320128 bytes
    Database mounted.
SQL> alter database recover managed standby database disconnect from session;
                   Database altered.
This starts recovery, in other words Redo Apply (the MRP0 process).
    3. When there are missing Redo Log Archives e.g.
    Log 8, 9, 10, 11, 12, 13, 14, 15 are missing.It is no missing, you are created standby database, after sequence 15.
    As you know , if a sequence redo is not applied, then after is sequenced redo log is cannot apply to standby database.
    It means GAP.
    There have 43 archived redo log, and your last sequenced archive log received by standby database
    and applied.
    You can check with following scripts, too
    select max(Sequence#) from v$archived_log; -- on primary
    select max(Sequence#) from v$archived_log where applied = 'YES';  - on standby  Regards
    Mahir M. Quluzade

  • Apply missing log on physical standby database

How do I apply a missing log on a physical standby database?
I already registered the missing log and started the recovery process.
I still have a log on the standby whose status is 'not applied'.
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# APP
    16018 YES
    16019 YES
    16020 NO ---------------------> Not applied.
    16021 YES
    Thanks

    Not much experience doing this, but according to the 9i doc (http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96653/log_apply.htm#1017352), all you need is properly-configured FAL_CLIENT and FAL_SERVER initialization parameters, and things should take care of themselves automatically. Let us know if that doesn't work for you, we might be able to think of something else.
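For reference, a minimal sketch of those two parameters on the standby (assuming an spfile; the TNS aliases are placeholders for your own net service names):
SQL> ALTER SYSTEM SET FAL_SERVER='PRIMARY_TNS' SCOPE=BOTH;  -- how the standby reaches the primary
SQL> ALTER SYSTEM SET FAL_CLIENT='STANDBY_TNS' SCOPE=BOTH;  -- how the primary reaches this standby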
    Daniel

  • *HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*

    HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY
    I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
    My archivelog files are not deleted on standby.
    I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
I've checked the archive logs in the FRA and they are not being deleted on the STANDBY. Do I have to do something for the configuration to take effect, like run an RMAN backup?
I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
    J

Setting the policy will not delete the archive logs on the standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want".) However, I still
like to clean them off with RMAN.
I would use RMAN to delete them so that it can use that policy and you are protected in case of a gap, transport issue, etc.
    There are many ways to do this. You can simply run RMAN and have it clean out the Archive.
    Example :
    #!/bin/bash
    # Name: db_rman_arch_standby.sh
    # Purpose: Database rman backup
    # Usage : db_rman_arch_standby <DBNAME>
    if [ "$1" ]
    then DBNAME=$1
    else
    echo "basename $0 : Syntax error : use . db_rman_full <DBNAME> "
    exit 1
    fi
    . /u01/app/oracle/dba_tool/env/${DBNAME}.env
    echo ${DBNAME}
    MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
    echo "Starting RMAN..."
    $ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> << EOF
    delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
    exit
    EOF
    echo `date`
    echo
    echo 'End of archive cleanup on STANDBY'
    mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
# End of Script
This uses (calls an ENV file) so the crontab has an environment.
    Example ( STANDBY.env )
    ORACLE_BASE=/u01/app/oracle
    ULIMIT=unlimited
    ORACLE_SID=STANDBY
    ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
    LIBPATH=$LD_LIBRARY_PATH:/usr/lib
    TNS_ADMIN=$ORACLE_HOME/network/admin
    PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
    #export TERM=linux=80x25 wrong wrong wrong wrong wrong
    export TERM=vt100
    export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
    export ORACLE_HOME
    export LIBPATH LD_LIBRARY_PATH ORA_NLS33
    export TNS_ADMIN
    export PATH
export MAILTO=??   # your email here
Note: use the env command in Unix to get your settings.
    There are probably ten other/better ways to do this, but this works.
    other options ( you decide )
    Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
    http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
    Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject. They say it's shouting. My take is that if somebody is shouting at me, I'm probably just going to move away.
    Best Regards
    mseberg

  • How: Script archive log transfer to standby db

    Hi,
I'm implementing disaster recovery right now. For some special reasons, the only option for me is to implement a non-managed standby (manual recovery) database.
The following is what I'm trying to do using a shell script:
1. Compress archive logs and copy them from the primary site to the standby site every hour (I have very low network bandwidth).
2. Decompress the archive logs at the standby site.
3. Check whether there are missing archive logs. If not, then do the manual recovery.
Did I miss something above? Also, I'm not skilled at building shell scripts; are there any sample scripts I can follow? Thanks.
    Nabil

    Hi,
    Take a look at data guard packages. There is a package just for this purpose: Bipul Kumar notes:
    http://www.dba-oracle.com/t_oracledataguard_174_unskip_table_.htm
    "the time lag between the log transfer and the log apply service can be built using the DELAY attribute of the log_archive_dest_n initialization parameter on the primary database. This delay timer starts when the archived log is completely transferred to the standby site. The default value of the DELAY attribute is 30 minutes, but this value can be overridden as shown in the following example:
LOG_ARCHIVE_DEST_3='SERVICE=logdbstdby DELAY=60';"
1. Compress archive logs and copy them from Primary site to Standby site every hour.
Me, I use tar (or compress) and rcp, but I don't know the details of your environment. Jon Emmons has some good notes:
http://www.lifeaftercoffee.com/2006/12/05/archiving-directories-and-files-with-tar/
2. Decompress archive logs at standby site.
See the man pages for uncompress. I do it through a named pipe to simplify the process:
http://www.dba-oracle.com/linux/conditional_statements.htm
3. Check if there are missed archive logs.
I keep my standby database in recovery mode, and as soon as the incoming logs are uncompressed, they are applied automatically.
    Again, if you don't feel comfortable writing your own, consider using the data guard packages.
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author

  • BRARCHIVE backup for high volume offline redo log files on Standby Database

    Hi All,
We are through with all of the standby database setup activity and have also started applying the offline redo log files on the standby site.
The throughput is not utilizing the actual available bandwidth.
    So we are not able to copy the offline redo files on time, as the offline redo files are piling up on the Production side.
My query is how we can copy the offline redo log files to the DR site in parallel (i.e. 4-5 redo files at a time).
    Kindly guide for the same.
    Regards,
    Shaibaz

    hi,
    I have one doubt.
On the other server (r3qas) the umask settings are as follows:
    User     UMASK value
    <sid>adm          077              
    ora<SID>           077
    root                   077
    Running SAP System :   SAP R3 4.6C
    Running DBMS          :  Oracle 9.0
    Operating System      :- HP_UX
On this system the new offline redo log files are created with 600 permissions. There is no problem here while taking the backup. I checked the last "r3qas-archive" backups. There, I have not found a single error related to permissions, or any others (something like: Cannot open /oracle/RQ1/../.........dbf).
If everything is working fine with this umask setting on this server, then what's going wrong with the BW Quality server, which has the same umask settings (and others) for all the concerned users, as mentioned above?
    Regards,
    Bhavik Shroff

  • Archive log miss : How to restart capture

    Hi Gurus,
    I configured hotlog CDC distributed on 10.2.0.4 Databases.
I made a mistake: in my source DB I deleted an archive log,
and now the state of the capture process in V_$STREAM_CAPTURE is "WAITING FOR REDO: LAST SCN MINED 930696".
Now I'd like to restart the capture process from the next archive (just after the missing one).
How is that possible?
    tnk Fabio

I'm sorry to tell you this, but it's not possible (just as it's not possible to recover a database with missing logs...).
You will have to recreate the capture process and re-instantiate the replicated tables.
    Regards,

  • Archive apply issue for standby database in Standard Edition.

We have set up a standby database in Oracle Standard Edition. The archive logs cannot be sent by Oracle automatically, so we manually send the archives over, but we do not see the database apply the archive log files. What could be wrong and where should we check?
We also used the following when finishing the standby DB, after running "recover standby database;" for a few archive log files:
    alter database recover managed standby database disconnect from session;
    ======================================================================================
    This is the status:
    SQL> select OPEN_MODE, SWITCHOVER# ,REMOTE_ARCHIVE ,ARCHIVELOG_CHANGE#,SWITCHOVER_STATUS, DATABASE_ROLE from v$database;
    OPEN_MODE SWITCHOVER# REMOTE_A ARCHIVELOG_CHANGE# SWITCHOVER_STATUS DATABASE_ROLE
    MOUNTED 495550636 ENABLED 1.2201E+13 SESSIONS ACTIVE PHYSICAL STANDBY

    The DB version is 10.2.0.5
I can do this to apply logs at the standby DB:
    SQL> set autorecovery on
    SQL> recover standby database;
    Then I tried to run this to leave the session:
    alter database recover managed standby database disconnect from session;
I do not see any apply.
    =============================
    Here is the last lines of alert log:
    Errors in file /oracle/admin/cntus/bdump/cntus_mrp0_12389.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u03/cntus/redolog/redo01b.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u02/cntus/redolog/redo01a.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Thu Sep 06 15:08:34 EDT 2012
    Errors in file /oracle/admin/cntus/bdump/cntus_mrp0_12389.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u03/cntus/redolog/redo01b.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u02/cntus/redolog/redo01a.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Clearing online redo logfile 1 /u02/cntus/redolog/redo01a.log
    Clearing online log 1 of thread 1 sequence number 10158
    Thu Sep 06 15:08:34 EDT 2012
    Errors in file /oracle/admin/cntus/bdump/cntus_mrp0_12389.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u03/cntus/redolog/redo01b.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u02/cntus/redolog/redo01a.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Thu Sep 06 15:08:34 EDT 2012
    Errors in file /oracle/admin/cntus/bdump/cntus_mrp0_12389.trc:
    ORA-19527: physical standby redo log must be renamed
    ORA-00312: online log 1 thread 1: '/u02/cntus/redolog/redo01a.log'
    Clearing online redo logfile 1 complete
    Media Recovery Waiting for thread 1 sequence 10166
    Thu Sep 06 15:08:35 EDT 2012
    Completed: alter database recover managed standby database disconnect from session
    ======
We have the directories /u02/cntus/redolog and /u03/cntus/redolog, but nothing is in there. Should we create redo logs for them? Is this a separate issue?
    Thanks!

  • Archive log generation in standby

    Dear all,
    DB: 11.1.0.7
We are configuring a physical standby for our production system. We have the same file
system and configuration on both servers: the primary archive
destination is d:/arch and the standby server also has d:/arch. Archive
logs are properly shipped to the standby and the data is intact.
The problem is that archive log generation is fine in the
primary archive destination, but no archive logs are getting
generated in the standby archive location, even though archive logs are being
applied to the standby database.
Is this normal? Will archive logs not be generated on the standby?
    Please guide
    Kai

No archive logs should be generated on the standby side. Why do you think they should be? If you are talking about the parameter standby_archive_dest: if you set this parameter, Oracle will copy each applied log to this directory, not create a new one.
In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
    ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)'
    ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)'
    /

  • Check actual apply log time on standby database after synchronization

    Hi All,
I want to check the date and time stamp of the applied archived logs on the standby database. How should I check that?
My Data Guard link was broken for some time, and meanwhile a lot of transactions happened on the primary database. When the link came up, synchronization happened within a few hours and ultimately the transport and apply lag became 0. Now I want to check the actual time taken to transport the logs and apply them on the standby database. Is there any way I can do that easily?
    Thanks

This script, written by Yousef Rifai, which I found here http://www.dba-village.com/village/dvp_forum.OpenThread?ThreadIdA=34772&DestinationA=RSS, might be just what you need (run it on the standby database):
set ver off
alter session set nls_date_format='dd-mon-rr hh24:mi:ss';
select app_thread, seq_app, tm_applied,
nvl(seq_rcvd,seq_app) seq_rcvd, nvl(tm_rcvd,tm_applied) tm_rcvd
from
(select sequence# seq_app, FIRST_TIME tm_applied, thread# app_thread
from v$archived_log where applied = 'YES'
and (first_time, thread#) in (
select max(FIRST_TIME), thread#
from v$archived_log where applied = 'YES'
group by thread# )),
(select sequence# seq_rcvd, FIRST_TIME tm_rcvd, thread# rcvd_thread
from v$archived_log where applied = 'NO'
and (first_time, thread#) in (
select max(FIRST_TIME), thread#
from v$archived_log where applied = 'NO'
group by thread# ))
where rcvd_thread(+) = app_thread;
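On 10g and later there is also a simpler check you can run on the standby; a quick sketch using v$dataguard_stats and the archive arrival times:
SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');
SQL> select max(completion_time) from v$archived_log where applied = 'YES';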
    Best regards,
    Robert
    http://robertvsoracle.blogspot.com

  • Manually Apply Archived log to stand by database

    Hi all,
I'm using Oracle 10g. We need to apply the archived logs of the primary database to the standby database, but this process must be manual.
A "robot" copies the archived logs to the destination. I need to get these archived logs from the destination and manually apply them to the standby database.
    How can I do this?
    thank you very much!!!!

How can I do this?
First stop the current recovery process:
alter database recover managed standby database cancel;
Then transfer all the needed archive logs from the primary to the standby side and apply them manually using the command below.
recover standby database until cancel
/* this command will ask for the full path of each archive log;
   supply these and finally enter the word "CANCEL" */
Finally, you can return to the normal/automatic state using:
alter database recover managed standby database;

  • Dropping log file in standby database

    Please,
    I need a help for the following issue:
I'm writing technical documentation on various events that occur in a Data Guard configuration. Right now I have dropped a redo log group on the primary database, and when I try to drop the equivalent log group on the standby database I get the following error:
    SQL> alter database drop logfile group 3;
    alter database drop logfile group 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
This is the current state of the redo log files on the standby database.
    SQL> select group#,members,status from v$log;
    GROUP# MEMBERS STATUS
    1 3 CLEARING_CURRENT
    3 3 CLEARING
    2 3 CLEARING
Even when I run the following command on the standby I also get an error.
    SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
    ALTER DATABASE CLEAR LOGFILE GROUP 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
Can someone tell me how, in a Data Guard configuration, to drop a redo log file on the primary database and its counterpart on the standby database?
I'm working on 10g Release 2, on Windows.
    Thanks you

Oracle Data Guard Concepts and Administration Release 2 (ref B14239) is my source, but it doesn't work when trying to drop a standby group or logfile member.
    For example, if the primary database has 10 online redo log files and the standby
    database has 2, and then you switch over to the standby database so that it functions
    as the new primary database, the new primary database is forced to archive more
    frequently than the original primary database.
    Consequently, when you add or drop an online redo log file at the primary site, it is
    important that you synchronize the changes in the standby database by following
    these steps:
    1. If Redo Apply is running, you must cancel Redo Apply before you can change the
    log files.
    2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO,
    change the value to MANUAL.
    3. Add or drop an online redo log file:
    ■ To add an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log'
    SIZE 100M;
    ■ To drop an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
    4. Repeat the statement you used in Step 3 on each standby database.
    5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
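Applied to this case (group 3 on the standby), a minimal sketch of those steps might look like the following; the CLEAR may or may not be needed depending on the group's status, and ORA-01156 is exactly why step 1 (cancelling Redo Apply) has to come first:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;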
Thanks

  • Capture process issue...archive log missing!!!!!

    Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and not proceeding past this state to capture updates made on the table.
We are accidentally missing archive logs and have no backups of them.
Now I am going to recreate the capture process.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that an SCN is used by the capture processes?
    Thanks,
    Faziarain

When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see in v$archived_log, in the column dictionary_begin, a 'YES', which means the first_change# of that archive log is the first suitable SCN for starting capture.
rman is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
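If you do have to check it by hand, a rough sketch of that check (run on the database where the capture process lives) could be:
SQL> select a.name, a.first_change#, a.next_change#
     from v$archived_log a,
          (select min(required_checkpoint_scn) scn from dba_capture) c
     where c.scn between a.first_change# and a.next_change#;
Archives whose next_change# is below that capture checkpoint SCN should no longer be needed by the capture process.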
Since 10g I recommend using rman, but nevertheless, here is the script I made in 9i, back in the old days when rman ate the archives needed by Streams with great appetite.
    #!/usr/bin/ksh
    # program : watch_arc.sh
    # purpose : check your archive directory and if actual percentage is > MAX_PERC
    #           then undertake the action coded by -a param
    # Author : Bernard Polarski
    # Date   :  01-08-2000
    #           12-09-2005      : added option -s MAX_SIZE
    #           20-11-2005      : added option -f to check if an archive is applied on data guard site before deleting it
    #           20-12-2005      : added option -z to check if an archive is still needed by logminer in a streams operation
    # set -xv
    #--------------------------- default values if not defined --------------
    # put here default values if you don't want to code then at run time
    MAX_PERC=85
    ARC_DIR=
    ACTION=
    LOG=/tmp/watch_arch.log
    EXT_ARC=
    PART=2
    #------------------------- Function section -----------------------------
get_perc_occup()
{
      cd $ARC_DIR
      if [ $MAX_SIZE -gt 0 ];then
           # size is given in mb, we calculate all in K
           TOTAL_DISK=`expr $MAX_SIZE \* 1024`
           USED=`du -ks . | tail -1| awk '{print $1}'`    # in Kb!
      else
        USED=`df -k . | tail -1| awk '{print $3}'`    # in Kb!
        if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
               TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
        elif [ `uname -s` = AIX ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        elif [ `uname -s` = ReliantUNIX-N ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        else
                 # works on Sun
                 TOTAL_DISK=`df -b . | sed  '/avail/d' | awk '{print $2}'`
        fi
      fi
      USED100=`expr $USED \* 100`
      USG_PERC=`expr $USED100 / $TOTAL_DISK`
  echo $USG_PERC
}
    #------------------------ Main process ------------------------------------------
usage()
{
        cat <<EOF
                  Usage : watch_arc.sh -h
                          watch_arc.sh  -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
                                        -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                                        -s <MAX_SIZE (meg)> -i <SID> -g -f
                  Note :
                           -c compress file after move using either compress or gzip (if available)
                              if -c is given without -m then file will be compressed in ARCHIVE DIR
                           -d Delete selected files
                           -e Extention of files to be processed
                           -f Check if log has been applied, required -i <sid> and -g if v8
                           -g Version 8 (use svrmgrl instead of sqlplus /
                           -i Oracle SID
                           -l List file that will be processing using -d or -m
                           -h help
                           -m move file to TARGET_DIR
                           -p Max percentage above wich action is triggered.
                              Actions are of type -l, -d  or -m
                           -t ARCHIVE_DIR
                           -s Perform action if size of target dir is bigger than MAX_SIZE (meg)
                           -v report action performed in LOGFILE
                           -r Part of files that will be affected by action :
                               2=half, 3=a third, 4=a quater .... [ default=2 ]
                           -z Check if log is still needed by logminer (used in streams),
                                    it requires -i <sid> and also -g for Oracle 8i
                  This program list, delete or move half of all file whose extention is given [ or default 'arc']
                  It check the size of the archive directory and if the percentage occupancy is above the given limit
                  then it performs the action on the half older files.
            How to use this prg :
                    run this file from the crontab, say, each hour.
         example
         1) Delete archive that is sharing common arch disk, when you are at 85% of 2500 mega perform delete half of the files
         whose extention is 'arc' using default affected file (default is -r 2)
         0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
         2) Delete archive that is sharing common disk with oother DB in /archive, act when 90% of 140G, affect by deleting
         a quater of all files (-r 4) whose extention is 'dbf' but connect before as sysdba in POLDEV db (-i) if they are
         applied (-f is a dataguard option)
         watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
         3) Delete archive of DB POLDEV when it reaches 75% affect 1/3 third of files, but connect in DB to check if
         logminer do not need this archive (-z). this is usefull in 9iR2 when using Rman as rman do not support delete input
         in connection to Logminer.
         watch_arc.sh -e arc -t /archive/standby/CITSPRD  -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
    #------------------------- Function section -----------------------------
    if [ "x-$1" = "x-" ];then
          usage
          exit
    fi
    MAX_SIZE=-1  # disable this feature if it is not specificaly selected
    while getopts  c:e:p:m:r:s:i:t:v:dhlfgz ARG
      do
        case $ARG in
           e ) EXT_ARC=$OPTARG ;;
           f ) CHECK_APPLIED=YES ;;
           g ) VERSION8=TRUE;;
           i ) ORACLE_SID=$OPTARG;;
           h ) usage
               exit ;;
           c ) COMPRESS_PRG=$OPTARG ;;
           p ) MAX_PERC=$OPTARG ;;
           d ) ACTION=delete ;;
           l ) ACTION=list ;;
           m ) ACTION=move
               TARGET_DIR=$OPTARG
               if [ ! -d $TARGET_DIR ] ;then
                   echo "Dir $TARGET_DIR does not exits"
                   exit
               fi;;
           r)  PART=$OPTARG ;;
           s)  MAX_SIZE=$OPTARG ;;
           t)  ARC_DIR=$OPTARG ;;
           v)  VERBOSE=TRUE
               LOG=$OPTARG
               if [ ! -f $LOG ];then
                   > $LOG
               fi ;;
           z)  LOGMINER=TRUE;;
        esac
    done
    if [ "x-$ARC_DIR" = "x-" ];then
         echo "NO ARC_DIR : aborting"
         exit
    fi
    if [ "x-$EXT_ARC" = "x-" ];then
         echo "NO EXT_ARC : aborting"
         exit
    fi
    if [ "x-$ACTION" = "x-" ];then
         echo "NO ACTION : aborting"
         exit
    fi
    if [ ! "x-$COMPRESS_PRG" = "x-" ];then
       if [ ! "x-$ACTION" =  "x-move" ];then
             ACTION=compress
       fi
    fi
    if [ "$CHECK_APPLIED" = "YES" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
       if [ "$VERSION8" = "TRUE" ];then
          ret=`svrmgrl <<EOF
    connect internal
    select max(sequence#) from v\\$log_history ;
    EOF`
    LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
       else
        ret=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off
    select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
    EOF`
       LAST_APPLIED=`echo $ret | awk '{print $1}'`
       fi
    elif [ "$LOGMINER" = "TRUE" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
        var=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt\\$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log\\$ l
                       where a.ckpt_scn between l.first_change# and l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
      dbms_output.put_line(alog);
    end;
    EOF`
      # if there are no mandatory keep archive, instead of a number we just get the "PLS/SQL successfull"
      ret=`echo $var | awk '{print $1}'`
      if [ ! "$ret" = "PL/SQL" ];then
         LAST_APPLIED=$ret
      else
         unset LOGMINER
      fi
    fi
    PERC_NOW=`get_perc_occup`
    if [ $PERC_NOW -gt $MAX_PERC ];then
         cd $ARC_DIR
         cpt=`ls -tr *.$EXT_ARC | wc -w`
         if [ ! "x-$cpt" = "x-" ];then
              MID=`expr $cpt / $PART`
              cpt=0
              ls -tr *.$EXT_ARC |while read ARC
                  do
                     cpt=`expr $cpt + 1`
                     if [ $cpt -gt $MID ];then
                          break
                     fi
                     if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
                        VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
                        if [ $VAR -gt $LAST_APPLIED ];then
                             continue
                        fi
                     fi
                     case $ACTION in
                          'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
                                     fi ;;
                          'delete' ) rm $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
                                     fi ;;
                          'list'   )   ls -l $ARC_DIR/$ARC ;;
                          'move'   ) mv  $ARC_DIR/$ARC $TARGET_DIR
                                     if [ ! "x-$COMPRESS_PRG" = "x-" ];then
                                           $COMPRESS_PRG $TARGET_DIR/$ARC
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
                                           fi
                                     else
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
                                           fi
                                     fi ;;
                      esac
              done
          else
              echo "Warning : The filesystem is not full due to archive logs !"
              exit
          fi
    elif [ "x-$VERBOSE" = "x-TRUE" ];then
         echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
    fi
