FAST_START_MTTR_TARGET initialization parameter in Oracle 10g

Hi ,
Can you please clarify the following?
"FAST_START_MTTR_TARGET enables the definition of the number of seconds the database takes to perform crash recovery of a single instance."
Does the above mean that if this parameter is set to 0, the database will be operational again after 0 seconds of recovery, i.e. instantly, and that if it is set to 3600 (the maximum value), the database will be operational again after an hour?
Many thanks,
Sim

Setting this to 0 disables fast-start checkpointing and the MTTR advisory. Note that the parameter is a target upper bound on crash recovery time, not the actual time: the instance adjusts checkpointing so that recovery should take at most that many seconds.
http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#445433
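    A minimal sketch of how the target behaves in practice (assuming you have SELECT privilege on V$INSTANCE_RECOVERY; columns per the Database Reference):

    ```sql
    -- Set a 10-minute crash-recovery target (value is in seconds, max 3600).
    ALTER SYSTEM SET fast_start_mttr_target = 600;

    -- Compare the target with what the instance currently estimates:
    -- TARGET_MTTR is the effective target (Oracle may adjust it),
    -- ESTIMATED_MTTR is the predicted recovery time right now.
    SELECT target_mttr, estimated_mttr
    FROM   v$instance_recovery;

    -- Setting it to 0 disables fast-start checkpointing and the advisory:
    ALTER SYSTEM SET fast_start_mttr_target = 0;
    ```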

Similar Messages

  • ORA-02095: specified initialization parameter cannot be modified

    Oracle : 10.2.0.4
    I am getting the below error while setting this parameter on one of the RAC instances.
    ORA-02095: specified initialization parameter cannot be modified
    Is there any solution for that ?
    Thx.

    Have a look here:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/initparams262.htm#REFRN10230
    If you query V$SYSTEM_PARAMETER you can get quick help, because the columns are self-documenting.
    I don't use RAC, but on a single-instance server you must stop the database, modify utl_file_dir, and restart.
    I think it's the same for RAC...
    Bye,
    Antonio

  • Where is the database initialization parameter file (init.ora?) located?

    I have E-Business Suite R12 installed on Windows XP.
    I want to change utl_file_dir path.
    I know that I have to change the database initialization parameter file (init.ora?).
    Does anybody know where the file is located?
    When I search, I find two files (init.ora and init.ora.txt).
    Which file do I need to make the changes in?
    Thanks in advance

    The initialization file for the database is located in the $ORACLE_HOME/dbs directory, and is called init<SID>.ora
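    If the instance was started from an spfile rather than a pfile, you can also ask the database itself where its parameter file lives (a sketch; an empty VALUE means a pfile was used):

    ```sql
    -- Shows the full path of the server parameter file, if one is in use.
    SHOW PARAMETER spfile

    -- Equivalent query; VALUE is NULL when the instance started from a pfile.
    SELECT value FROM v$parameter WHERE name = 'spfile';
    ```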

  • Initialize parameter in server side

    hi
    How can I set an initialization parameter on the server side, such as nls_language, and make it the default for all sessions?
    thanks.

    Majid,
    use the default.env file in the forms90/server directory
    Frank

  • How come I can not change the initialization parameter

    show parameters audit_trail
    NAME TYPE VALUE
    audit_trail string NONE
    alter system set audit_trail = true
    ERROR at line 1:
    ORA-02095: specified initialization parameter cannot be modified
    I am using windows XP professional, Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    sqlplus sheet
    How to change the parameter like this?
    Edited by: user8117130 on May 4, 2009 9:23 AM

    user8117130 wrote:
    How to change the parameter like this?
    Hi,
    This is a static parameter, which means you can't change it while the database is running. If you are using an spfile, you need to change the command to:
    alter system set audit_trail=true scope=spfile
    and then bounce your database. If you are using a parameter file, change it there and restart.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96536/ch112.htm#REFRN10006
    HTH
    Aman....
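    One way to check up front whether a parameter is static or dynamic is the ISSYS_MODIFIABLE column of V$PARAMETER (a sketch):

    ```sql
    -- IMMEDIATE = can be changed on the fly with ALTER SYSTEM
    -- DEFERRED  = change applies to new sessions only
    -- FALSE     = static; needs SCOPE=SPFILE and an instance restart
    SELECT name, issys_modifiable
    FROM   v$parameter
    WHERE  name = 'audit_trail';
    ```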

  • Database Initialization Parameter Setting - extrapolation on Load Testing

    We have a new application going live, so a new database instance needs to be set up. The Oracle version is 10.2.0.5 (an existing environment constraint).
    Load testing was conducted for an approx. 2000-user load. Per the test results, the number of open database sessions during peak load was 450. The expected user load in production is 5500, so we expect approx. 1100 database sessions at that time.
    Due to constraints in the load testing environment, we cannot test at the production user load. Hence, we have to extrapolate the database parameter settings for production.
    The SGA sizing & some other parameters in Load Testing is as below (with the following setting in load, the performance is acceptable)
    sga_max_size 7.5 GB
    sga_target 6 GB
    db_cache_size 3 GB
    shared_pool_size 1.5 GB
    shared_pool_reserved_size 150 MB
    java_pool_size 0.2 GB
    large_pool_size 0.5 GB
    sort_area_size 0.5 MB
    streams_pool_size 48 MB
    pga_aggregate_target 4 GB
    processes 1200
    db_block_size 8K
    db_file_multiblock_read_count 16
    db_keep_cache_size 134217728
    fast_start_mttr_target 600
    open_links 25
    Please let me know how to size the database for production by extrapolation. Apart from processes and sessions, which parameters should I focus on?

    user8211187 wrote:
    sga_max_size 7.5 GB
    Upon which metrics was 7.5 GB derived?

  • Doubt on initialization parameter in RAC

    Hi Gurus,
    Can somebody explain the intention behind this kind of setting in a cluster database?
    I know that when sga_target or memory_target is set, the db_cache_size configured at the instance level acts as the lower limit.
    Then what is meant by the remaining settings, *.__db_cache_size= and PDOSB.__db_cache_size?
    *.__db_cache_size=608M
    PDOSB.__db_cache_size=620756992
    PDOSB2.__db_cache_size=10200547328
    PDOSB3.__db_cache_size=9999220736
    PDOSB1.__db_cache_size=10401873920
    Thanks in advance,
    Mahi

    Hi,
    >>Then what is  meant by  remaining setting ?  *.__db_cache_size=  and  PDOSB.__db_cache_size
    It means the parameter values are managed by the database automatically. The size in *.__db_cache_size=608M, i.e. 608M, is the last size allocated to that component.
    * - means it applies across all instances.
    PDOSB.__db_cache_size=620756992 - means it specifies the size for the PDOSB instance.
    HTH,
    Pradeep
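    You can see the per-instance entries directly from the spfile via V$SPPARAMETER; the SID column shows '*' for the cluster-wide row (a sketch, assuming the double-underscore rows are present in your spfile):

    ```sql
    -- SID = '*' is the cluster-wide default; a row with a specific SID
    -- overrides it for that instance only.
    SELECT sid, name, value
    FROM   v$spparameter
    WHERE  name = '__db_cache_size';
    ```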

  • Problem in Initialization Parameter file...Oracle10g

    Hi all,
    I am working in Oracle 10g. I created a pfile from an existing spfile, changed the PROCESSES parameter, and then created an spfile again from the changed pfile.
    I didn't delete the previous spfile and bounced the database.
    At startup I encountered the error ORA-01092.
    Could anyone suggest a solution and the reason for this?
    Thanks and regards,
    Nupur

    You can always specify the location where you want your spfile to be created when creating it from the SQL prompt.
    If you don't specify the path, it will be created in
    "$ORACLE_HOME/dbs".
    When the Oracle instance is started, it looks for the parameter file in the following order:
    1) spfileSID.ora
    2) default spfile
    3) initSID.ora
    4) default pfile
    So even if you don't mention a pfile at startup, if you have an spfile the database will definitely start from the spfile.
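    The usual round trip, sketched in SQL*Plus (the explicit path in the comment is illustrative only):

    ```sql
    -- Create an editable pfile from the current spfile in $ORACLE_HOME/dbs ...
    CREATE PFILE FROM SPFILE;

    -- ... edit the pfile, then regenerate the spfile from it:
    CREATE SPFILE FROM PFILE;

    -- On the next startup the instance picks up spfileSID.ora first.
    -- To force startup from a specific pfile instead:
    --   STARTUP PFILE='/u01/app/oracle/dbs/initORCL.ora'
    ```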

  • Conflicting initialization parameter description

    There's confusion in the description of this parameter,
    "MAX_DISPATCHERS",
    between the 10.2 Database Reference and the 10.2 Database Administrator's Guide.
    1. The 10.2 Database Reference says:
    MAX_DISPATCHERS specifies the maximum number of dispatcher processes allowed
    to be running simultaneously. It can be overridden by the DISPATCHERS parameter and is maintained for backward compatibility with older releases.
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams115.htm#sthref485
    2. The 10.2 Database Administrator's Guide says:
    MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that can run simultaneously. This parameter can be ignored for now. It will only be useful in a future release when the number of dispatchers is auto-tuned according to the number of concurrent connections.
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm#sthref618
    Which one is right? They are totally opposite.

    I'm searching for a pattern from my GUI application. It displays results when I give something specific like jac*, but when I search j*, it displays an error.
    I'm told by the app support team that there is an Oracle setting called max_wildcard which, when set, makes Oracle search more records before throwing this error.

  • Initialization parameter won't stay set

    Running into a strange issue for the first time. Anyone have ideas on what might be causing this?
    DB - 10.2.0.4, Windows Server 2008
    The current setting for _optimizer_cost_based_transformation is OFF. We want to set it to ON.
    The following command is issued:
    alter system set "_optimizer_cost_based_transformation"=on scope=both;
    The database is bounced and the following is the result:
    SQL> conn sys@uimdev as sysdba
    Enter password: ***********
    Connected.
    SQL> alter system set "_optimizer_cost_based_transformation"=on scope=spfile;
    System altered.
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 5251268608 bytes
    Fixed Size 2073080 bytes
    Variable Size 1577061896 bytes
    Database Buffers 3657433088 bytes
    Redo Buffers 14700544 bytes
    Database mounted.
    Database opened.
    SQL> show parameter _optimizer
    NAME TYPE VALUE
    _optimizer_cost_based_transformation string ON
    HERES THE PROBLEM:
    When any other session is opened, the parameter values show a value of OFF again. For example:
    SQL> select name,value from v$parameter where name like '_opt%';
    NAME                    VALUE
    _optimizer_cost_based_transformation     OFF
    Anyone seen this type of behavior or know how to fix?
    Thanks in advance for assistance....

    Tony,
    Have you reviewed the on-logon DB triggers?
    The fact that you can see the hidden parameter in v$parameter means that someone/something sets it (as opposed to using the default).
    Since the value is not the one you have in the spfile, there might be some other place where it gets reverted to "OFF".
    Hope that helps,
    Iordan Iotzov

  • FAST_START_MTTR_TARGET parameter advice

    Hello
    I need some advice in setting the FAST_START_MTTR_TARGET initialization parameter. There is a lot of valuable documentation and blog posts out there but I am unable to find concrete guidance on this.
    In reading the Checkpoint Tuning and Troubleshooting Guide (147468.1), there is mention of 4 total parameters (including FAST_START_MTTR_TARGET) that need to be visited in order to set this:
    - FAST_START_MTTR_TARGET
    - LOG_CHECKPOINT_INTERVAL
    - LOG_CHECKPOINT_TIMEOUT
    - LOG_CHECKPOINTS_TO_ALERT
    My current db server environment settings are as follows:
    W2K8-R2-Std, 11g-11.2.0.1.0-Std
    OS-block-size = 4096 bytes
    The current initialization parameter settings are:
    FAST_START_MTTR_TARGET = 0
    LOG_CHECKPOINT_INTERVAL = 0
    LOG_CHECKPOINT_TIMEOUT = 1800
    LOG_CHECKPOINTS_TO_ALERT = FALSE
    The current redo log files are set at 51,200KB.
    The condition warranting this is the following entries in the alert log:
    Thread 1 cannot allocate new log, sequence 117424
    Private strand flush not complete
    Current log# 1 seq# 117423 mem# 0: K:\ORACLE_SID\REDOG1M1.LOG
    Current log# 1 seq# 117423 mem# 1: L:\ORACLE_SID\REDOG1M2.LOG
    Thread 1 advanced to log sequence 117424 (LGWR switch)
    Current log# 3 seq# 117424 mem# 0: D:\APP\ORADATA\ORACLE_SID\REDOG3M1.LOG
    Current log# 3 seq# 117424 mem# 1: L:\ORACLE_SID\REDOG3M2.LOG
    Kindly advise. Thank you in advance.

    Look in your alert log to see how long it is between redo log switches during times of heavy usage. Multiply the current redo log size by the factor needed to switch no more than every 30 minutes. For example, if you switch every 5 minutes, multiply by 6. Then make the logs much larger and rely on the 1800-second timeout (or better, do the figuring during maximal loading or batch update windows). Let it run a few days, then check the advisor as in http://docs.oracle.com/cd/E25178_01/server.1111/e16638/build_db.htm#autoId3
    Here's the proper way to figure FSMT http://docs.oracle.com/cd/E25178_01/server.1111/e16638/instance_tune.htm#PFGRF13015
    Don't bother changing DBWR settings yet. Wait until you see some evidence that you need to.
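    Once FAST_START_MTTR_TARGET is set to a nonzero value, you can see what the advisor thinks with a query like this (a sketch; V$INSTANCE_RECOVERY columns per the 11.2 reference):

    ```sql
    -- WRITES_MTTR / WRITES_LOGFILE_SIZE show how many extra data-block
    -- writes were driven by the MTTR target vs. by undersized log files.
    SELECT target_mttr,
           estimated_mttr,
           optimal_logfile_size,   -- suggested redo log size, in MB
           writes_mttr,
           writes_logfile_size
    FROM   v$instance_recovery;
    ```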

  • Data Loss when a database crashes

    Hi Experts,
    I was asked the question "how much data is lost when you pull the plug on an Oracle database all of a sudden", and my answer was "all the data in the buffers that has not been committed". We know that you can have committed data sitting in the redo logs (not yet written to the datafiles) that the instance will use for recovery once it is restarted; however, this got me thinking about how much uncommitted data is actually sitting in memory that could be lost if the instance goes down all of a sudden.
    With the use of sga_target and sga_max_size, the memory allocation for the buffer cache will vary from time to time. So, is it possible to quantify the amount of lost data at all (in bytes, KB, MB, etc.)?
    For example, say the SGA is set to 1 GB (sga_max_size=1000mb), with a checkpoint every 15 minutes (as we can't predict/know how often the app commits); assume a basic transaction size for any small-to-medium database. Redo logs are set to 50 MB (even though this doesn't come into play at this stage).
    I would be really interested in your thoughts and ideas please.
    Thanks

    All Oracle Data Manipulation Language (DML) and Data Definition Language (DDL) statements must record an entry in the redo log buffer before they are executed.
    The Redo log buffer is:
    •     Part of the System Global Area (SGA)
    •     Operating in a circular fashion.
    •     Size in bytes determined by the LOG_BUFFER init parameter.
    Each Oracle instance has only one log writer process (LGWR). The log writer operates in the background and writes all records from the Redo log buffer to the Redo log files.
    Well, just to clarify, the log writer writes committed and uncommitted transactions from the redo log buffer to the log files more or less continuously, not just on commit (when the log buffer holds 1 MB of redo, when it is 1/3 full, every 3 seconds, or at every commit, whichever comes first; all of these trigger redo writes).
    The LGWR process writes:
    •     Every 3 seconds.
    •     Immediately when a transaction is committed.
    •     When the Redo log buffer is 1/3 full.
    •     When the database writer process (DBWR) signals.
    Crash and instance recovery involves the following:
    •     Roll-Forward
    The database reapplies the committed and uncommitted changes recorded in the current online redo log files.
    •     Roll-Backward
    The database removes the uncommitted transactions applied during a Roll-Forward.
    What also comes into play in the event of a crash is MTTR, for which there is an advisory utility as of Oracle 10g. Oracle recommends using the fast_start_mttr_target initialization parameter to control the duration of startup after instance failure.
    From what I understand, uncommitted transactions will be lost, or more precisely undone, after an instance crash. That's why it is good practice to commit transactions explicitly, unless you plan to use SQL rollback. Btw, every DDL statement, and a normal exit from SQL*Plus, implies an automatic commit.
    Edited by: Markus Waldorf on Sep 4, 2010 10:56 AM
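    To put a rough upper bound on the redo that could still be sitting in memory at crash time, you can look at the log buffer size and at how often sessions have waited on LGWR (a sketch; remember that a commit is only acknowledged after its redo reaches disk):

    ```sql
    -- The log buffer bounds the redo that has not yet reached disk.
    SELECT value AS log_buffer_bytes
    FROM   v$parameter
    WHERE  name = 'log_buffer';

    -- 'redo synch writes' counts commit-time waits on LGWR;
    -- 'redo size' is the total redo generated since startup.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo size', 'redo synch writes');
    ```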

  • Managing ARCHIVE Logs in Oracle 10.2.0.3

    I am working with a customer who seems to think there is a way of controlling the database, other than a custom job, script, or RMAN, in how it creates, manages, and deletes its archive logs while running in archivelog mode. He wants the database to automatically delete obsolete archive logs. He also wants to control the interval between archive log writes in order to stop the growth of archive logs from filling up disk space.
    I am saying this is not possible. You either configure RMAN to delete obsolete or expired archive logs based on your retention policy, or do it manually in Enterprise Manager or Grid Control by deleting obsolete or expired logs.
    Am I correct, or am I off base here?

    4.1.3 Sizing Redo Log Files
    The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance. Undersized log files increase checkpoint activity and reduce performance.
    Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
    It may not always be possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/build_db.htm#sthref237
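    To check the "switch at most once every twenty minutes" guideline against your own workload, you can count log switches per hour from V$LOG_HISTORY (a sketch):

    ```sql
    -- More than ~3 switches per hour suggests the online logs are undersized.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;
    ```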
    If you are talking about Data Guard, then:
    Automatic Deletion of Applied Archive Logs
    Archived logs, once they are applied on the logical standby database, will be automatically deleted by SQL Apply.
    This feature reduces storage consumption on the logical standby database and improves Data Guard manageability.
    See also:
    Oracle Data Guard Concepts and Administration for details
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14214/chapter1.htm#sthref269

  • "checkpoint not complete" in alert log file.

    Hi, all.
    I have got a message of "Checkpoint not complete" in alert log file.
    Thread 2 advanced to log sequence 531
    Current log# 7 seq# 531 mem# 0: \\.\REDO231
    Current log# 7 seq# 531 mem# 1: \\.\REDO232
    Thread 2 cannot allocate new log, sequence 532
    Checkpoint not complete
    Current log# 7 seq# 531 mem# 0: \\.\REDO231
    Current log# 7 seq# 531 mem# 1: \\.\REDO232
    I searched "Checkpoint not complete" issue in this forum.
    As solutions,
    1. add more redo log groups
    2. increase the size of redo log
    3. check I/O contention
    4. set LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT or
    FAST_START_MTTR_TARGET
    I think No. 4 is the most feasible first approach in our environment.
    I think No. 1 and No. 2 are not the problems in our environment.
    I asked the Oracle support center about the above issue, but
    I was told by an Oracle engineer that "if you are not getting this message frequently, you do not need to worry about it."
    Is this true? If I am not getting this message frequently, is there really no problem
    in terms of database integrity, consistency, and performance?
    I will be waiting for your advice and experience in real life.
    Thanks and Regards.

    Redo Log Tuning Advisory and Automatic Checkpoint Tuning are new features introduced in Oracle 10g; if you are on 10g you may benefit from them.
    The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, this must be balanced against the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage. As a rule of thumb, switch logs at most once every fifteen minutes.
    Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files.
    The redo logfile sizing advisory is exposed via the OPTIMAL_LOGFILE_SIZE column of V$INSTANCE_RECOVERY. It requires setting the fast_start_mttr_target parameter for the advisory to take effect and populate that column.
    Also you can obtain redo sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
    To enable automatic checkpoint tuning, unset FAST_START_MTTR_TARGET or set it to a nonzero value (measured in seconds). Setting it explicitly to 0 disables the feature. When you enable fast-start checkpointing, remove or disable (set to 0) the following initialization parameters:
    - LOG_CHECKPOINT_INTERVAL
    - LOG_CHECKPOINT_TIMEOUT
    - FAST_START_IO_TARGET
    Enabling fast-start checkpointing can be done statically using the initialization files or dynamically using -
    SQL> alter system set FAST_START_MTTR_TARGET=10;
    Best regards.

  • How to reduce redo space wait

    os:x86_64 x86_64 x86_64 GNU/Linux
    oracle:9.2.0.6
    running : Data guard
    Problem : Redo space wait is very high
    Init.ora paramaeters
    *.background_dump_dest='/u01/app/oracle/admin/PBPR01/bdump'
    *.compatible='9.2.0'
    *.control_files='/s410/oradata/PBPR01/control01.ctl','/s420/oradata/PBPR01/control02.ctl','/s430/oradata/PBPR01/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/admin/PBPR01/cdump'
    *.cursor_space_for_time=true
    *.db_block_size=8192
    *.db_cache_size=576000000
    *.db_domain='cc.com'
    *.db_file_multiblock_read_count=16
    *.db_files=150
    *.db_name='PBPR01'
    *.db_writer_processes=1
    *.dbwr_io_slaves=2
    *.disk_asynch_io=false
    *.fast_start_mttr_target=1800
    *.java_pool_size=10485760
    *.job_queue_processes=5
    *.log_archive_dest_1='LOCATION=/s470/oraarch/PBPR01'
    *.log_archive_dest_3='service=DR_PBPR01 LGWR ASYNC=20480'
    *.log_archive_format='PBPR01_%t_%s.arc'
    *.log_archive_start=true
    *.log_buffer=524288
    *.log_checkpoints_to_alert=true
    *.max_dump_file_size='500000'
    *.object_cache_max_size_percent=20
    *.object_cache_optimal_size=512000
    *.open_cursors=500
    *.optimizer_mode='CHOOSE'
    *.processes=500
    *.pga_aggregate_target=414187520
    *.replication_dependency_tracking=false
    *.undo_management=AUTO
    *.undo_retention=10800
    *.undo_tablespace=UNDOTBS1
    *.undo_suppress_errors=TRUE
    *.session_cached_cursors=20
    *.shared_pool_size=450000000
    *.user_dump_dest='/u01/app/oracle/admin/PBPR01/udump'
    SGA :
    SQL> show sga
    Total System Global Area 1108839248 bytes
    Fixed Size 744272 bytes
    Variable Size 520093696 bytes
    Database Buffers 587202560 bytes
    Redo Buffers 798720 bytes
    SQL>
    I created log groups with 2 members each, each 25 MB in size.
    Redo space waits shows as
    SQL> SELECT name, value
    FROM v$sysstat
    WHERE name = 'redo log space requests';
    NAME VALUE
    redo log space requests 152797
    this is running between 140000 and 160000
    some of the trace file error
    [oracle@hipclora6b bdump]$ cat PBPR01_lns0_23689.trc
    Dump file /u01/app/oracle/admin/PBPR01/bdump/PBPR01_lns0_23689.trc
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    ORACLE_HOME = /u01/app/oracle/product/9.2.0.6
    System name: Linux
    Node name: hipclora6b.clickipc.hipc.clickcommerce.com
    Release: 2.4.21-37.EL
    Version: #1 SMP Wed Sep 7 13:32:18 EDT 2005
    Machine: x86_64
    Instance name: PBPR01
    Redo thread mounted by this instance: 1
    Oracle process number: 34
    Unix process pid: 23689, image: [email protected]
    *** SESSION ID:(82.51071) 2008-04-14 23:40:04.122
    *** 2008-04-14 23:40:04.122 46512 kcrr.c
    NetServer 0: initializing for LGWR communication
    NetServer 0: connecting to KSR channel
    : success
    NetServer 0: subscribing to KSR channel
    : success
    *** 2008-04-14 23:40:04.162 46559 kcrr.c
    NetServer 0: initialized successfully
    *** 2008-04-14 23:40:04.172 46819 kcrr.c
    NetServer 0: Request to Perform KCRRNSUPIAHM
    NetServer 0: connecting to remote destination DR_PBPR01
    *** 2008-04-14 23:40:04.412 46866 kcrr.c
    NetServer 0: connect status = 0
    A Sample alert Log
    Thread 1 advanced to log sequence 275496
    Current log# 1 seq# 275496 mem# 0: /s420/oradata/PBPR01/redo01a.log
    Current log# 1 seq# 275496 mem# 1: /s420/oradata/PBPR01/redo01b.log
    Tue Apr 15 09:10:03 2008
    ARC0: Evaluating archive log 4 thread 1 sequence 275495
    ARC0: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
    ARC0: Beginning to archive log 4 thread 1 sequence 275495
    Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275495.arc'
    Tue Apr 15 09:10:03 2008
    Beginning global checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
    Completed checkpoint up to RBA [0x43428.2.10], SCN: 0x0000.3c1594fa
    Completed checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
    Tue Apr 15 09:10:03 2008
    ARC0: Completed archiving log 4 thread 1 sequence 275495
    Tue Apr 15 09:29:15 2008
    LGWR: Completed archiving log 1 thread 1 sequence 275496
    Creating archive destination LOG_ARCHIVE_DEST_3: 'DR_PBPR01'
    LGWR: Beginning to archive log 5 thread 1 sequence 275497
    Beginning log switch checkpoint up to RBA [0x43429.2.10], SCN: 0x0000.3c15bc33
    Tue Apr 15 09:29:16 2008
    ARC1: Evaluating archive log 1 thread 1 sequence 275496
    ARC1: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
    ARC1: Beginning to archive log 1 thread 1 sequence 275496
    Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275496.arc'
    Tue Apr 15 09:29:16 2008
    Thread 1 advanced to log sequence 275497
    Current log# 5 seq# 275497 mem# 0: /s420/oradata/PBPR01/redo05a.log
    Current log# 5 seq# 275497 mem# 1: /s420/oradata/PBPR01/redo05b.log
    Tue Apr 15 09:29:16 2008
    ARC1: Completed archiving log 1 thread 1 sequence 275496
    Log file size
    SQL> select GROUP#,MEMBERS ,sum(bytes)/(1024*1024) from v$log group by
    2 GROUP#,MEMBERS;
    GROUP# MEMBERS SUM(BYTES)/(1024*1024)
    1 2 25
    2 2 25
    3 2 25
    4 2 25
    5 2 25
    Please give your views on what can be done to reduce redo space waits.

    Below are my suggestions:
    Increase the log buffer to between 5 MB and 15 MB.
    Defer the commit with COMMIT_WRITE=NOWAIT,BATCH (note this parameter is 10g onward).
    You can also increase your redo log file size, but read the following.
    Sizing Redo Logs with Oracle 10g
    Oracle has introduced a Redo Logfile Sizing Advisor that will recommend a size for your redo logs that limits excessive log switches, incomplete and excessive checkpoints, log archiving issues, DBWR performance problems, and excessive disk I/O. All of these issues result in transactions bottlenecking within redo, and in performance degradation.
    While many DBAs' first thought is throughput of the transaction base, not many give thought to the recovery time required in relation to the amount of redo generated or the actual size of the redo log groups. With the introduction of Oracle's Mean Time to Recovery features, DBAs can now specify, through the FAST_START_MTTR_TARGET initialization parameter, just how long a crash recovery should take. Oracle will then try its best to issue the proper checkpoints during normal system operation to help meet this target.
    Since the size of redo logs and the checkpointing of data have a key role in Oracle's ability to recover within a desired time frame, Oracle will now use the value of FAST_START_MTTR_TARGET to suggest an optimal redo log size. In actuality, the setting of FAST_START_MTTR_TARGET is what triggers the new redo logfile sizing advisor; if you do not set it, Oracle will not provide a suggestion for your redo logs. If you do not have any real-time requirement for recovery, you should at least set it to its maximum value of 3600 seconds (one hour), and you will then be able to take advantage of the advisory.
    After setting the FAST_START_MTTR_TARGET initialization parameter, a DBA need only query the V$INSTANCE_RECOVERY view for the OPTIMAL_LOGFILE_SIZE column value (in MB), and then rebuild the redo log groups with this recommendation.
    A simple query to show the optimal size for redo logs:
    SQL> SELECT optimal_logfile_size
      2  FROM v$instance_recovery;
    OPTIMAL_LOGFILE_SIZE
    --------------------
                      64
    A few notes about setting FAST_START_MTTR_TARGET
    •     Specify a value in seconds (0-3600) that you wish Oracle to perform recovery within.
    •     Is overridden by LOG_CHECKPOINT_INTERVAL:
    Since LOG_CHECKPOINT_INTERVAL requests a checkpoint after a specified number of redo blocks has been written, and FAST_START_MTTR_TARGET basically attempts to size the redo logs such that a checkpoint occurs when they switch, you can easily see that these two parameters are of conflicting interest. You will need to unset LOG_CHECKPOINT_INTERVAL if you wish to use the redo log sizing advisor and have checkpoints occur with log switches. This is how it was recommended back in the v7 days, and really I can't see a reason for anything else.
    •     Is overridden by LOG_CHECKPOINT_TIMEOUT:
    LOG_CHECKPOINT_TIMEOUT controls the amount of time in between checkpoints if a log switch or the amount of redo generated has not yet triggered a checkpoint. Since our focus is now on Mean Time to Recovery (MTTR) this parameter is no longer of concern because we are asking Oracle to determine when to checkpoint based on our crash recovery requirements.
    •     Is overridden by FAST_START_IO_TARGET:
    Actually, the FAST_START_IO_TARGET parameter is deprecated, and you should switch over to the FAST_START_MTTR_TARGET parameter.
    Thanks
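    Before resizing anything, it may help to confirm where the waits come from: 'redo buffer allocation retries' points at an undersized log buffer, while 'redo log space requests' (the statistic the poster quoted) usually points at slow or undersized online logs. A sketch:

    ```sql
    -- Both counters are cumulative since instance startup, so sample
    -- them twice and compare the deltas over a busy interval.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo buffer allocation retries',
                    'redo log space requests');
    ```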
