How to apply missing archive logs in a logical standby database

hi,
I have created a logical standby database for our production. It was working fine. We had not set a value for standby_archive_dest, so the archive files were sent to $ORACLE_HOME/dbs. Due to high transaction volume many files were generated, the $ORACLE_HOME mount filled up, the logical standby apply stopped working, and the DB went down as well.
I tried to apply the files once I brought the instance back up, but after applying one archive file it stopped applying further, and the logical standby is not working properly.
Please let me know: is there a mechanism to apply the missing logs?
DB version: 10.2.0.5
OS: OEL 5
regards
Manoj

Hi,
Since the issue happened, I have noticed the archives are not shipping.
The following are the outputs:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN STATUS FORMAT A60
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIME, COMMIT_SCN;
EVENT_TIME STATUS
EVENT
18-MAR-12 11:11:35 ORA-16111: log mining and apply setting up
18-MAR-12 22:34:26 ORA-16226: DDL skipped due to lack of support
alter database begin backup
18-MAR-12 22:34:26 ORA-16226: DDL skipped due to lack of support
alter database end backup
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
alter database backup controlfile to '/tmp/PCEGYK_control.ctl'
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
alter database backup controlfile to trace
18-MAR-12 22:49:25 ORA-16226: DDL skipped due to lack of support
create pfile='/pcegyk/backup/hot_backups/18032012_2234/initPCEGYK.ora_from_spfil
19-MAR-12 00:04:40 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218894 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218895 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218896 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218897 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218898 to "SYSADM"
19-MAR-12 00:04:41 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218899 to "SYSADM"
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218900 to "SYSADM"
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218901 to "SYSADM"
19-MAR-12 00:19:26 ORA-16227: DDL skipped due to missing object
grant select,insert on sys.ora_temp_1_ds_218902 to "SYSADM"
20-MAR-12 03:28:09 ORA-16111: log mining and apply setting up
20-MAR-12 03:31:54 ORA-16128: User initiated stop apply successfully completed
20-MAR-12 03:55:13 ORA-16111: log mining and apply setting up
20-MAR-12 04:17:38 ORA-16128: User initiated stop apply successfully completed
20-MAR-12 04:17:54 ORA-16111: log mining and apply setting up
20-MAR-12 21:20:20 ORA-16111: log mining and apply setting up
21 rows selected.
SQL>
===========================
SQL> SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
FILE_NAME
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIMESTAMP DIC DIC THREAD#
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138743_679263487.arc
138743 7.4580E+12 7.4580E+12 19-MAR-12 06:33:16 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138744_679263487.arc
138744 7.4580E+12 7.4580E+12 19-MAR-12 06:36:22 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138745_679263487.arc
138745 7.4580E+12 7.4580E+12 19-MAR-12 06:39:21 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138746_679263487.arc
138746 7.4580E+12 7.4580E+12 19-MAR-12 06:41:25 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138747_679263487.arc
138747 7.4580E+12 7.4580E+12 19-MAR-12 06:43:24 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138748_679263487.arc
138748 7.4580E+12 7.4580E+12 19-MAR-12 06:45:21 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138749_679263487.arc
138749 7.4580E+12 7.4580E+12 19-MAR-12 06:48:07 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138750_679263487.arc
138750 7.4580E+12 7.4580E+12 19-MAR-12 06:50:19 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138751_679263487.arc
138751 7.4580E+12 7.4580E+12 19-MAR-12 06:52:52 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138752_679263487.arc
138752 7.4580E+12 7.4580E+12 19-MAR-12 06:55:32 NO NO 1
/pcegyk/oracle/product/102/dbs/archPCEGYK_1_138805_679263487.arc
138805 7.4580E+12 7.4580E+12 19-MAR-12 15:33:26 NO NO 1
=================
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
7.4580E+12 7.4580E+12
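For reference, the standard way to hand a missed archive file back to SQL Apply on a logical standby is to register it manually (a hedged sketch; the file name below is a placeholder pattern, not one of the actual sequences listed above):

```sql
-- Stop SQL Apply before registering (sketch; adjust to your environment)
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Register a missing archived log file with the logical standby;
-- the path is a placeholder for a file under $ORACLE_HOME/dbs
ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/pcegyk/oracle/product/102/dbs/archPCEGYK_1_<seq>_679263487.arc';

-- Restart apply and monitor DBA_LOGSTDBY_LOG / DBA_LOGSTDBY_PROGRESS
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```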

Similar Messages

  • Applying the arch logs after the full backup

DB version: 10gR2
In NOCATALOG mode, I am going to take a full (hot) backup of DBx (800 GB) using the command
run {
allocate channel c1 TYPE DISK connect 'sys/dempo' FORMAT '/u04/bkp/stprod_%U.rbk';
backup as compressed backupset database tag 'full' plus archivelog;
}
and restore it into DBy.
But how can I apply the archive logs generated in DBx during the full backup, after the full backup?
The restored controlfile doesn't know about these archived logs. Right?

With controlfile autobackup off, the backup will be 'complete', since backing up a whole database means backing up its SYSTEM tablespace (the first datafile, to be exact, which is part of the SYSTEM tablespace). The SYSTEM datafile (or multiple datafiles in SYSTEM) may be the first, the second, or the nth datafile to be backed up. The backup set, consisting of multiple backup pieces, may have other tablespaces backed up after the SYSTEM datafile. Those other tablespaces and any redo log switches have timestamps that succeed the SYSTEM datafile backup. Therefore, those other tablespaces and redo log switches are not captured in the controlfile backup that was included with the SYSTEM datafile.
(Yes, I know we can use the controlfile backup --- but I am just pointing out that this doesn't meet the OP's requirement that the controlfile be aware of all redo logs that have been switched during or after the database backup.)
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
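Building on the point above, one common way to make a restored controlfile aware of archived logs it does not know about is to catalog them with RMAN once they are on disk at DBy (a sketch; '/u04/arch/' is a hypothetical staging directory, not from the original post):

```sql
-- RMAN sketch: register on-disk archived logs (copied from DBx)
-- with the restored controlfile on DBy
CATALOG START WITH '/u04/arch/';

-- Recovery can then roll forward through the newly cataloged logs
RECOVER DATABASE;
```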

  • Missing arch logs

    Hi
Oracle DB Version: 11.1.6
OS Version: Windows Server 2003
Whenever I start the database, it asks for recovery.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 644468736 bytes
    Fixed Size 1335108 bytes
    Variable Size 176160956 bytes
    Database Buffers 461373440 bytes
    Redo Buffers 5599232 bytes
    Database mounted.
    ORA-01113: file 5 needs media recovery
    ORA-01110: data file 5: 'D:\SIEBELOWB\OWBDB\OWBDATA01.DBF'
    SQL> recover datafile 5
    ORA-00279: change 37455422 generated at 03/25/2011 06:56:41 needed for thread 1
    ORA-00289: suggestion : C:\ORADB11G_HOME\RDBMS\ARC00210_0711838953.001
    ORA-00280: change 37455422 for thread 1 is in sequence #210
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    auto
    ORA-00310: archived log contains sequence 218; sequence 210 required
    ORA-00334: archived log: 'C:\ORADB11G_HOME\RDBMS\ARC00210_0711838953.001'
The fact is that the database was never in archivelog mode.
Any idea or suggestion?
    Thanks.

1) Because applying (archived) redo logs is the only recovery mechanism available; that is the very purpose of the redo log.
2) Because there is a possibility the online redo logs are sufficient. Can Oracle know? Of course not.
3) Because no one in their sane mind runs a production database in noarchivelog mode.
As Tom Kyte once put it: 'I don't believe a database which is running in noarchivelog, is a production database.'
Rest assured, in the next major release, noarchivelog will disappear.
Apparently Oracle knows there are too many careless 'DBAs' out there who never think about recovery and run their database in noarchivelog mode.
There is an old Latin proverb, 'Vitula demersa, puteum completur'. Just in case you can't Google it, it means most preventive measures are taken after the accident has happened.
You have lost your database; I cannot help it. Your only hope is that you have an export somewhere.
    Sybrand Bakker
    Senior Oracle DBA
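Before writing the database off, it may be worth confirming the mode and checking whether the required change is still covered by an online redo log (a sketch using standard dictionary views only):

```sql
-- Confirm the archiving mode the database is actually in
SELECT log_mode FROM v$database;

-- Check whether an online log group still contains change 37455422
-- needed by datafile 5; if one does, supplying that group's member
-- file name at the "Specify log:" prompt may complete the recovery
SELECT group#, sequence#, first_change#, status FROM v$log;
SELECT group#, member FROM v$logfile;
```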

  • How the Payload Message and Logs are stored in the B1i Database Table: BZSTDOC

I would appreciate it if someone could provide any documentation regarding further maintenance of the B1i database.
For example:
I want to know how the payload message and logs are stored in the table BZSTDOC, and how we can retrieve the payload message directly from the column DOCDATA.
As described in B1iSNGuide05 3.2 LogGarbageCollection:
To avoid overloading the B1i database, I set the Backup Buffer to 90 days, so Message Logs from the last 90 days will always be available. But is there some way we can save the older messages to disk so that I can retrieve the payload message at any time?
In addition, let's assume the worst: the B1iSN server or the B1i database is damaged. Can we simply restore the B1i database from the latest backup, and will it work automatically once the B1iSN server is up and running again?
    BR/Jim

    Dear SAP,
Two weeks have passed and I still haven't received any feedback from you.
Could you please have a look at my question?
How is this question going? Is it untouched, being solved, or reassigned?

  • Apply missing log on physical standby database

How do I apply a missing log on a physical standby database?
I have already registered the missing log and started the recovery process.
Still, the standby has a log whose status is "not applied".
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# APP
    16018 YES
    16019 YES
    16020 NO ---------------------> Not applied.
    16021 YES
    Thanks

    Not much experience doing this, but according to the 9i doc (http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96653/log_apply.htm#1017352), all you need is properly-configured FAL_CLIENT and FAL_SERVER initialization parameters, and things should take care of themselves automatically. Let us know if that doesn't work for you, we might be able to think of something else.
    Daniel
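The FAL mechanism Daniel refers to comes down to two standby-side parameters, plus a manual register if a file was copied by hand (a sketch; the TNS aliases and path are placeholders, not from the original post):

```sql
-- On the standby: where to fetch missing archived logs from (placeholders)
ALTER SYSTEM SET fal_server = 'PRIMARY_TNS' SCOPE=BOTH;
ALTER SYSTEM SET fal_client = 'STANDBY_TNS' SCOPE=BOTH;

-- If the missing file was copied over manually, register it and
-- restart managed recovery (the path is a placeholder)
ALTER DATABASE REGISTER LOGFILE '/arch/standby/arcr_1_16020.arc';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```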

  • Standby database is not applying redo logs due to missing archive log

    We use 9.2.0.7 Oracle Database. My goal is to create a physical standby database.
    I have followed all the steps necessary to fulfill this in Oracle Data Guard Concepts and Administration manual. Archived redo logs are transmitted from primary to standby database regularly. But the logs are not applied due to archive log gap.
    SQL> select process, status from v$managed_standby;
    PROCESS STATUS
    ARCH CONNECTED
    ARCH CONNECTED
    MRP0 WAIT_FOR_GAP
    RFS RECEIVING
    RFS ATTACHED
    SQL> select * from v$archive_gap;
    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
    1 503 677
    I have tried to find the missing archives on the primary database, but was unable to. They have been deleted (somehow) regularly by the existing backup policy on the primary database. I have looked up the backups, but these archive logs are too old to be in the backup. Backup retention policy is 1 redundant backup of each file. I didn't save older backups as I didn't really need them from up to this point.
    I have cross checked (using rman crosscheck) the archive log copies on the primary database and deleted the "obsolete" copies of archive logs. But, v$archived_log view on the primary database only marked those entries as "deleted". Unfortunately, the standby database is still waiting for those logs to "close the gap" and doesn't apply the redo logs at all. I am reluctant to recreate the control file on the primary database as I'm afraid this occurred through the regular database backup operations, due to current backup retention policy and it probably might happen again.
    The standby creation procedure was done by using the data files from 3 days ago. The archive logs which are "producing the gap" are older than a month, and are probably unneeded for standby recovery.
    What shall I do?
    Kind regards and thanks in advance,
    Milivoj

    On a physical standby database
    To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
    SQL> SELECT * FROM V$ARCHIVE_GAP;
    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
    1 7 10
    The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1.
    After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary
    database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
  2  AND SEQUENCE# BETWEEN 7 AND 10;
    NAME
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
    Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
    SQL> ALTER DATABASE REGISTER LOGFILE
    '/physical_standby1/thread1_dest/arcr_1_7.arc';
    SQL> ALTER DATABASE REGISTER LOGFILE
    '/physical_standby1/thread1_dest/arcr_1_8.arc';
    After you register these log files on the physical standby database, you can restart Redo Apply.
    Note:
    The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
    Restoring the archived logs from the backup set
    If the archived logs are not available in the archive destination then at that time we need to restore the required archived logs from the backup step. This task is accomplished in the following way.
To restore a specified range of archived logs:
RUN {
  SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
  RESTORE ARCHIVELOG FROM LOGSEQ=<xxxxx> UNTIL LOGSEQ=<xxxxxxx>;
}
To restore all the archived logs:
RUN {
  SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
  RESTORE ARCHIVELOG ALL;
}
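Tying the steps above together on the standby side (a sketch):

```sql
-- After copying and registering the gap files, restart Redo Apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Re-query for the next gap; repeat register/restart until no rows return
SELECT * FROM v$archive_gap;
```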

  • How to apply business rules with out breaking to patch missing data

    Hello Experts,
I am very new to SQL and I have no idea how to apply the rules below.
Data is polled every 5 minutes, so there are 12 time intervals per hour:
6:00 --> 1
6:05 --> 2
6:50 --> 11
6:55 --> 12
The patching rules (missing-value condition, patching action, rule number):

Rule 0   - No values missing: no patching.
Rule 1.1 - Time interval 1 is null: patch it with the value from time interval 2, subject to this not being null.
Rule 1.2 - Time interval 12 is null: patch it with the value of time interval 11, subject to this not being null.
Rule 1.3 - Time intervals 1 and 2 are null: patch both with the value of time interval 3, subject to this not being null.
Rule 1.4 - Two consecutive time intervals (excluding both 1 & 2 or both 11 & 12) are null, e.g. time intervals 3 and 4: average the preceding and succeeding time intervals of the 2 missing values. If time intervals 3 and 4 are null, they become the average of time intervals 2 and 5.
Rule 1.5 - Time intervals 11 and 12 are missing: patch both with the value of time interval 10, subject to this not being null.
Rule 1.6 - Some time intervals between 2 and 11 are null, with 6 or more non-null time intervals: patch each with the average of interval - 1 and interval + 1, subject to these not being null. For example, if time interval 5 was null it would be patched with the average of time intervals 4 and 6. N.B. this rule can apply up to a maximum of 5 times.
Rule 2.1 - Three consecutive time intervals are missing: set all time intervals for the period to null.
Rule 2.2 - More than 6 time intervals are null: set all time intervals for the period to null.
Here is the table structure, for more info:
CREATE TABLE DATA_MIN (
  DAYASNUMBER    INTEGER,
  TIMEID         INTEGER,
  COSIT          INTEGER,
  LANEDIRECTION  INTEGER,
  VOLUME         INTEGER,
  AVGSPEED       INTEGER,
  PMLHGV         INTEGER,
  CLASS1VOLUME   INTEGER,
  CLASS2VOLUME   INTEGER,
  CLASS3VOLUME   INTEGER,
  LINK_ID        INTEGER
);
Sample data:
DAYASNUMBER  TIMEID        COSIT  LANEDIRECTION  VOLUME  AVGSPEED  PMLHGV  CLASS1VOL  LINK_ID
20140110     201401102315  5      1              47      12109     0       45         5001
20140110     201401102325  5      1              33      12912     0       29         5001
20140110     201401102330  5      1              39      14237     0       37         5001
20140110     201401102345  5      1              45      12172     0       42         5001
20140110     201401102350  5      1              30      12611     0       29         5001
20140111     201401100000  5      1              30      12611     0       29         5001
The output should be something like the following for the sample data above (last column = rule applied):
DAYASNUMBER  TIMEID        COSIT  LANEDIRECTION  VOLUME  AVGSPEED  PMLHGV  CLASS1  LINK_ID  Rule
20140110     201401102315  5      1              47      12109     0       45      5001     0
20140110     201401102320  5      1              40      12109     0       45      5001     1.4 (patched row)
20140110     201401102325  5      1              33      12912     0       29      5001     0
20140110     201401102330  5      1              39      14237     0       37      5001     0
20140110     201401102335  5      1              42      14237     0       37      5001     1.4 (patched row)
20140110     201401102345  5      1              45      12172     0       42      5001     0
20140110     201401102350  5      1              30      12611     0       29      5001     0
20140110     201401102355  5      1              30      12611     0       29      5001     1.2 (patched row)
20140111     201401100000  5      1              30      12611     0       29      5001
Any help and suggestions to extend the code to achieve this would be greatly appreciated.
Note: the key value to be patched for missed time intervals is VOLUME.
    Thanks in advance

    row_number() OVER (PARTITION BY LANEDIRECTION ORDER BY TIMEID) AS RN,DAYASNUMBER,(*to_date*(timeid,'yyyymmdd hh24miss')) as cte, COSIT,
    Are you in the right place? to_date is an Oracle function, and this forum is for Microsoft SQL Server which is a different product.
    Erland Sommarskog, SQL Server MVP, [email protected]
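Whichever engine this lands on, the neighbour-averaging at the heart of rules 1.4 and 1.6 can be sketched with standard window functions (a partial sketch only, assuming the missing slots have first been expanded into rows with NULL VOLUME; table and column names are taken from the post):

```sql
-- Sketch: patch a single missing slot by averaging its neighbours
-- (rules 1.4/1.6 style); the full rule set needs more case analysis.
SELECT dayasnumber,
       timeid,
       COALESCE(volume,
                (LAG(volume)  OVER (PARTITION BY cosit, lanedirection
                                    ORDER BY timeid)
               + LEAD(volume) OVER (PARTITION BY cosit, lanedirection
                                    ORDER BY timeid)) / 2) AS volume_patched
FROM   data_min;
```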

  • Data guard - missed archive log

    Hi:
I am on 10.2.0.3, using a physical standby. One of my check-up SQLs shows a missed (not applied) archive log:
    SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG
    where applied='NO'
    THREAD# SEQUENCE# APP
    1 16595 NO
It's a relatively old one; my max is 18211 and log application is going fine.
How can I apply this missed one? Do I have to stop managed recovery and do a manual recovery for it?
    TIA.

    GregX,
Please check whether this file actually needs to be applied.
You can find out when the archived log file was created from the FIRST_TIME/NEXT_TIME columns in V$ARCHIVED_LOG.
If the archived log file was created before you created the standby DB, you do not need to (nor can you) apply that file.
Please note that the physical standby would stop applying archive logs if you failed to provide the next required archive log file. Since this has not happened (per your statement), there is a good chance that the archived redo in question is not needed.
    Hope that helps,
    Edited by: IordanIotzov on Jan 30, 2009 1:34 PM

  • (R11) HOW TO FIX MISSING CM DATA IN AR_PAYMENT_SCHEDULES_ALL TABLE

Product: FIN_AR
Date written: 2004-11-09
(R11) HOW TO FIX MISSING CM DATA IN AR_PAYMENT_SCHEDULES_ALL TABLE
==================================================================
PURPOSE
Provide a datafix script for cases where Credit Memo data has not been created in the AR_PAYMENT_SCHEDULES_ALL and AR_RECEIVABLE_APPLICATIONS_ALL tables.
Problem Description
This problem occurs mostly with CM data brought over from other systems through AutoInvoice: even though the credit memo shows as 100% applied in the Credit Transactions screen, it has no effect at all on the balance of the applied transaction.
Running cm_info_11.sql shows that the corresponding data does not exist in the ar_payment_schedules_all and ar_receivable_applications_all tables.
Workaround
Solution Description
1. Download trx.sql and pay.sql from the AR Support Script Page (http://eastapps.us.oracle.com/appsar/SQL/SQL.htm). The "90 Days", "150 Days", etc. on the page refer to the transaction date scope: for transactions up to 90 days before today, download trx.sql; for transactions up to 150 days back, download trx150.sql.
2. Run trx.sql.
3. Rename the trx_<trx_id>.log generated by running trx.sql to trx.sql and run it again.
4. After confirming it ran successfully, run pay.sql.
5. Rename pay_<ps_id>.log to pay.sql and run it.
    Reference Documents
    BUG 1401610

    'ORA-29278: SMTP transient error: 421 Service not available' --> "Oracle had a problem communicating with the SMTP server"
    Did you check:
    How I can resolve the error " 421 Service not available"
    Also look at document id 604763.1
    Edited by: MccM on Feb 23, 2010 11:38 AM

  • *HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*

    HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY
    I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
    My archivelog files are not deleted on standby.
    I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
I've checked the archive logs in the FRA and they are not being deleted on the STANDBY. Do I have to do something for the configuration to take effect, like run an RMAN backup?
I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
    J

Setting the policy will not by itself delete the archive logs on the standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want.") However, I still like to clean them off with RMAN.
I would use RMAN to delete them, so that it can use that policy and you are protected in case of a gap, transport issue, etc.
    There are many ways to do this. You can simply run RMAN and have it clean out the Archive.
    Example :
    #!/bin/bash
    # Name: db_rman_arch_standby.sh
    # Purpose: Database rman backup
    # Usage : db_rman_arch_standby <DBNAME>
    if [ "$1" ]
    then DBNAME=$1
    else
echo "$(basename $0) : Syntax error : use db_rman_arch_standby <DBNAME>"
    exit 1
    fi
    . /u01/app/oracle/dba_tool/env/${DBNAME}.env
    echo ${DBNAME}
    MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
    echo "Starting RMAN..."
    $ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> << EOF
    delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
    exit
    EOF
    echo `date`
    echo
    echo 'End of archive cleanup on STANDBY'
    mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
# End of Script

This uses (calls) an ENV file so the crontab has an environment.
    Example ( STANDBY.env )
    ORACLE_BASE=/u01/app/oracle
    ULIMIT=unlimited
    ORACLE_SID=STANDBY
    ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
    LIBPATH=$LD_LIBRARY_PATH:/usr/lib
    TNS_ADMIN=$ORACLE_HOME/network/admin
    PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
#export TERM=linux=80x25    # wrong - do not use
    export TERM=vt100
    export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
    export ORACLE_HOME
    export LIBPATH LD_LIBRARY_PATH ORA_NLS33
    export TNS_ADMIN
    export PATH
export MAILTO=??   # your email here

Note: use the env command in Unix to get your settings.
    There are probably ten other/better ways to do this, but this works.
    other options ( you decide )
    Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
    http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
    Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject; they say it's shouting. My take is that if somebody is shouting at me, I'm probably going to just move away.
    Best Regards
    mseberg
    Edited by: mseberg on May 8, 2012 11:53 AM
    Edited by: mseberg on May 8, 2012 11:56 AM

  • How to apply the Patchset 7 to Forms 6i

    Hi all !!!
I need to install PatchSet 7 on the iAS Server 1.0.2.2.2 (specifically, on the Forms Server service).
I stopped all Oracle services and applied the patch, but the Forms and Reports services are missing afterwards.
Any idea?
How do I apply the patch?
    Thanks and best regards.
    Carlos Hernandez
    Barcelona (Spain)

    848478 wrote:
How to apply the latest patch set for all the installed modules in 11.5.10.2, along with the roll-ups if any.
https://forums.oracle.com/forums/search.jspa?threadID=&q=Latest+AND+Patchsets&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Latest+AND+Patches&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • How to apply license for SSM 7.5 & Netweaver CE 7.1?

    How to apply license for SSM 7.5 & Netweaver CE 7.1?
My Java application server in SAPMMC shows the message: no valid license found.
Actually, we have already bought and downloaded the license.
So, how do we solve this license problem?

    Chamnap,
    When your NetWeaver license expires, it can be restarted, but will only run for 1/2 hour, to allow installing a new license.
    Step 1- Start NetWeaver CE from the control console
Step 2 - Log on to NetWeaver Administrator at http://[server]:50000/nwa and go to the Configuration tab, then the Infrastructure sub-tab, then Licenses
    Step 3 -  Make note of your System Number and Active Hardware Key in the System Parameters section
    Step 4 - Navigate to the Service Marketplace http://service.sap.com/licensekey
    Step 5 - You are looking for SAP Business Suite - SAP Products with ABAP Stack, Without ABAP stack (WEB AS Java-J2EE Engine) SAP Enterprise Portal link
    Step 6 - Fill in your Hardware Key and System # -  Make sure you choose this License Type: J2EE Engine - Web Application Server Java (it is not the default selection)
    Step 7 - A message at the bottom of the page will note a new key has been generated and you can Download the txt file needed (you also receive an email with the info as well)
Step 8 - (Restart NetWeaver CE if it has timed out.) On the Licenses page of NWA, click Install from File, browse to the txt file you downloaded, and ADD
You now have the new license installed. Make note of the date and schedule any future updates, so that service is not interrupted.
    If you cannot access the license key on the Service Marketplace you will need to contact your SAP account representative.
    Regards,
    Bob

  • How to apply recommendations given by addm report

    Hi Gurus
Actually, the ADDM report for my test database is giving some recommendations and I do not understand how to apply them to my database.
So I am posting some of the data here; if anybody could give me a hint it would be a great help, as I am new to being a DBA.
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
RECOMMENDATION 1: Application Analysis, 9.9% benefit (147 seconds)
ACTION: Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.
RATIONALE: The application was performing 112 transactions per minute with an average redo size of 3655 bytes per transaction.
RECOMMENDATION 2: Host Configuration, 9.9% benefit (147 seconds)
ACTION: Investigate the possibility of improving the performance of I/O to the online redo log files.
RATIONALE: The average size of writes to the online redo log files was 3 K and the average time per write was 4 milliseconds.
SYMPTOMS THAT LED TO THE FINDING: Wait class "Commit" was consuming significant database time. (9.9% impact [147 seconds])

FINDING 6: 8% impact (119 seconds)
Wait event "process startup" in wait class "Other" was consuming significant database time.
RECOMMENDATION 1: Application Analysis, 8% benefit (119 seconds)
ACTION: Investigate the cause for high "process startup" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation.
RATIONALE: The SQL statement with SQL_ID "NULL-SQLID" was found waiting for "process startup" wait event.
RELEVANT OBJECT: SQL statement with SQL_ID NULL-SQLID
RECOMMENDATION 2: Application Analysis, 8% benefit (119 seconds)
ACTION: Investigate the cause for high "process startup" waits in Service "SYS$BACKGROUND".

FINDING 7: 6.3% impact (93 seconds)
NO RECOMMENDATIONS AVAILABLE
ADDITIONAL INFORMATION: Hard parses due to cursor environment mismatch were not consuming significant database time. Hard parsing SQL statements that encountered parse errors was not consuming significant database time. Parse errors due to inadequately sized shared pool were not consuming significant database time. Hard parsing due to cursors getting aged out of shared pool was not consuming significant database time. Hard parses due to literal usage and cursor invalidation were not consuming significant database time.

FINDING 8: 4.3% impact (63 seconds)
The throughput of the I/O subsystem was significantly lower than expected.
RECOMMENDATION 1: Host Configuration, 4.3% benefit (63 seconds)
ACTION: Consider increasing the throughput of the I/O subsystem. Oracle's recommended solution is to stripe all data files using the SAME methodology. You might also need to increase the number of disks for better performance. Alternatively, consider using Oracle's Automatic Storage Management solution.
SYMPTOMS THAT LED TO THE FINDING: Wait class "User I/O" was consuming significant database time. (13% impact [191 seconds])

FINDING 9: 4.1% impact (60 seconds)
Buffer cache writes due to small log files were consuming significant database time.
NO RECOMMENDATIONS AVAILABLE
SYMPTOMS THAT LED TO THE FINDING: The throughput of the I/O subsystem was significantly lower than expected. (4.3% impact [63 seconds]) Wait class "User I/O" was consuming significant database time. (13% impact [191 seconds])

FINDING 10: 3.5% impact (51 seconds)
Wait event "class slave wait" in wait class "Other" was consuming significant database time.
RECOMMENDATION 1: Application Analysis, 3.5% benefit (51 seconds)
ACTION: Investigate the cause for high "class slave wait" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation.

ADDITIONAL INFORMATION
Wait classes "Administrative", "Application", "Cluster", "Concurrency", "Configuration", "Network", and "Scheduler" were not consuming significant database time.
The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds.
An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
    regards
    Richa

    I'm not sure what it is about the recommendations that you don't understand, as what you posted seems quite clear. Take #1 for example:
    "Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions."
    This is telling you that it appears you are doing incremental commits. Are you? Can you change that if you are?
    When you respond, please include full version information.

  • How to apply 9044638 on Solaris 10

    Hello,
    Please tell me how to apply patch 9044638 on R12.1.3 with an 11.1.0.7 database on Solaris 10. To which home should this patch be applied?

    When given a file source like l3:/usr/ptraq/man with one colon (:), rsync attempts to connect using ssh.
    If you want to connect to the rsync daemon, you need to give two colons: l3::/usr/ptraq/man.
    Your other alternative is to set up ssh keys so you can log in without a password; then you won't have to run rsync in daemon mode at all.
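    The colon rule above can be sketched as a small shell function that mimics how rsync picks its transport from the source spec (the host l3 and path are the ones from this thread; the function itself is just an illustration, not rsync's actual parser):

    ```shell
    #!/bin/sh
    # classify_rsync_source: report which transport rsync would pick for a
    # given source spec, based purely on the colon syntax described above.
    classify_rsync_source() {
      case "$1" in
        rsync://*) echo "daemon" ;;        # rsync:// URL always means the daemon
        *::*)      echo "daemon" ;;        # host::module - rsync daemon (port 873)
        *:*)       echo "remote-shell" ;;  # host:path - goes over ssh
        *)         echo "local" ;;         # plain path - local copy
      esac
    }

    classify_rsync_source "l3:/usr/ptraq/man"    # prints "remote-shell"
    classify_rsync_source "l3::/usr/ptraq/man"   # prints "daemon"
    ```

    So with ssh keys in place, `rsync -av l3:/usr/ptraq/man dest/` needs no daemon configuration on l3 at all.
    
    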

  • Data guard real time apply vs archived log apply on physical standby

    Dear DBAs,
    Last week I configured DR; the physical standby database is now in archived log apply mode.
    I want to confirm: is it better to apply archived logs, or should I change it to real-time apply?
    Please give me your suggestions.
    Thanks and Regards
    Raja...

    One question: are you using ARCH transport to move the redo, or have you configured standby redo logs and LGWR transport (either asynchronous or synchronous)? If you are using the archiver to transport the logs, then you cannot use real-time apply.
    If you are using the log writer to transport the redo, then real-time apply reduces the recovery time required if you need to fail over, as there should be less redo to apply to bring the standby up to date. Which mode you use to transport redo will depend on what is acceptable in terms of data loss and the impact on performance.
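    As a minimal sketch of the switch-over to real-time apply (assuming LGWR transport is already configured on the primary; the file path, group number, and log size below are illustrative, not from this thread):

    ```sql
    -- On the standby: add standby redo logs if they do not already exist
    -- (rule of thumb: one more group than the online redo logs, same size).
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('/u01/oradata/stby/srl04.log') SIZE 50M;

    -- Stop archived-log-only recovery, then restart with real-time apply:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      USING CURRENT LOGFILE DISCONNECT FROM SESSION;

    -- Verify: the MRP0 process should be applying the current standby redo log.
    SELECT process, status, sequence# FROM v$managed_standby;
    ```

    The USING CURRENT LOGFILE clause is what makes the apply "real-time": MRP reads redo from the standby redo logs as it arrives instead of waiting for each archived log to be completed.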
