Unable to copy redo-logs for cloning

Hello
We are using R12 with a 10.2.0.2 database.
We take DB tier backups every night and apps tier backups twice a month (15-day interval).
We normally run adautocfg.sh and adpreclone.pl on the dbTier. After this we stop all services, shut down the database, and copy all the datafiles, controlfiles, and logfiles.
This backup is useful for restoring PROD after any failure, and for cloning to another node for testing.
Now my question is: are we doing the clone procedure properly, or is there a better way to clone, for example with RMAN or some other method?
Secondly, last night all the datafiles were copied after a clean shutdown, but the logfiles were not.
Will I be able to restore the data later if I don't have the log files? Can I build new logfiles?
Will this backup be usable for cloning without the redo logs?
Please note: this is a cold backup, i.e. a consistent backup.
Thanks

I would go against the oft-repeated and common "Expert" and "Documentation" advice here.
If you are doing a COLD backup and are planning to use the backup to restore to another server, there is no harm in taking the online redo logs as well.
The risk with online redo log backups arises when the SysAdmin/DBA, while doing a full restore, also restores the online redo logs to the target database, overwriting any online redo logs present there.
For this to occur:
a. The SysAdmin/DBA happens to restore all the files in the backup set without selectively excluding the online redo logs
b. The database's online redo logs are present and good on disk
c. The DBA isn't very confident of being able to roll forward through his redo logs
It is for those reasons that it is easier to advise everyone not to back up online redo logs.
However, as I've noted above, if you are doing a cold backup and restoring to another server (with no database present at the target site), there is no harm in taking the online redo logs. Such a backup and restore makes your scripting very easy and doesn't even need a DBA.
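On the second question: yes, you can restore and open the database without the online redo log files, and Oracle will build new ones for you. Since this is a consistent (cold) backup, no redo needs to be applied at all. A minimal sketch, assuming the controlfile was restored along with the datafiles:

STARTUP MOUNT
-- The backup is consistent, so no archived redo is needed;
-- just type CANCEL at the first prompt.
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- RESETLOGS recreates the online redo log groups at the
-- paths recorded in the controlfile.
ALTER DATABASE OPEN RESETLOGS;

The same copy remains perfectly usable for cloning, for the same reason: a consistent backup needs no redo.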
Hemant K Chitale
http://hemantoracledba.blogspot.com
Here are some of my comments on the issue of backups of online redo logs :
http://hemantoracledba.blogspot.com/2008/03/backup-online-redo-logs.html

Similar Messages

  • Unable to ship Redo logs to standby DB (Oracle Data Guard)

    Hi all,
    We have configured Oracle Data Guard between our Production (NPP) and Standby (NPP_DR) databases.
    The configuration is complete; however, production is unable to ship redo logs to the standby DB.
    We keep getting the error "PING[ARC0]: Heartbeat failed to connect to standby 'NPP_DR'. Error is 12154." on the Primary DB.
    Primary and DR are on different boxes.
    Please see the logs below from the production alert log file and the npp_arc0_18944.trc trace file:
    npp_arc0_18944.trc:
    2011-01-19 09:17:38.007 62692 kcrr.c
    Error 12154 received logging on to the standby
    Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'NPP_DR'
    Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'NPP_DR'
    2011-01-19 09:17:38.007 62692 kcrr.c
    PING[ARC0]: Heartbeat failed to connect to standby 'NPP_DR'. Error is 12154.
    2011-01-19 09:17:38.007 60970 kcrr.c
    kcrrfail: dest:2 err:12154 force:0 blast:1
    2011-01-19 09:22:38.863
    Redo shipping client performing standby login
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified'
    Alert log file on Primary:
    Error 12154 received logging on to the standby
    Wed Jan 19 09:02:35 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:07:36 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:12:37 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:13:10 2011
    Incremental checkpoint up to RBA [0x2cc.2fe0.0], current log tail at RBA [0x2cc.2fe9.0]
    Wed Jan 19 09:17:38 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:22:38 2011
    Error 12154 received logging on to the standby
    Wed Jan 19 09:27:39 2011
    Error 12154 received logging on to the standby
    However, we are able to tnsping from primary to DR
    Tnsping Results
    From Primary:
    juemdbp1:oranpp 19> tnsping NPP_DR
    TNS Ping Utility for HPUX: Version 10.2.0.4.0 - Production on 19-JAN-2011 09:32:50
    Copyright (c) 1997,  2007, Oracle.  All rights reserved.
    Used parameter files:
    /oracle/NPP/102_64/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = SAP.WORLD) (PROTOCOL = TCP) (HOST = 10.80.51.101) (PORT = 49160))) (CONNECT_DATA = (SID = NPP) (SERVER = DEDICATED)))
    OK (60 msec)
    Tnsnames.ora in Primary:
    Filename......: tnsnames.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
    NPP.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjorp)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (GLOBAL_NAME = NPP.WORLD)
        )
      )
    NPP_HQ.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjorp)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)
        )
      )
    NPP_DR.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = 10.80.51.101)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)
        )
      )
    NPPLISTENER.WORLD =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = nppjorp)
          (PORT = 49160)
        )
      )
    Listener.ora in Primary
    Filename......: listener.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
    ADMIN_RESTRICTIONS_LISTENER = on
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP.WORLD)
        )
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP)
        )
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = nppjorp)
          (PORT = 49160)
        )
      )
    STARTUP_WAIT_TIME_LISTENER = 0
    CONNECT_TIMEOUT_LISTENER = 10
    TRACE_LEVEL_LISTENER = OFF
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = NPP)
          (ORACLE_HOME = /oracle/NPP/102_64)
        )
      )
    Thank You,
    Salman Qayyum

    Hi,
    Please find the remaining post ...
    Tnsnames.ora in DR:
    Filename......: tnsnames.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
    NPP.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjor)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (GLOBAL_NAME = NPP.WORLD)
        )
      )
    NPP_HQ.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = hq_nppjor)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)
        )
      )
    NPP_DR.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = nppjor)
            (PORT = 49160)
          )
        )
        (CONNECT_DATA =
          (SID = NPP)
          (SERVER = DEDICATED)
          (SERVICE_NAME = NPP_DR)
        )
      )
    NPPLISTENER.WORLD =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = nppjor)
          (PORT = 49160)
        )
      )
    Listener.ora in DR
    Filename......: listener.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
    ADMIN_RESTRICTIONS_LISTENER = on
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP.WORLD)
        )
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = NPP)
        )
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = nppjor)
          (PORT = 49160)
        )
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = 10.80.50.101)
          (PORT = 49160)
        )
      )
    STARTUP_WAIT_TIME_LISTENER = 0
    CONNECT_TIMEOUT_LISTENER = 10
    TRACE_LEVEL_LISTENER = OFF
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = NPP)
          (ORACLE_HOME = /oracle/NPP/102_64)
        )
      )
    /etc/hosts settings in Primary
    host:oranpp 25> grep nppjor /etc/hosts
    10.32.243.54    nppjor.sabic.com        nppjor
    10.32.50.115    nppjorp.sabic.com        nppjorp
    /etc/hosts settings in DR
    host:oranpp 11> grep nppjor /etc/hosts
    10.32.243.54    hq_nppjor.sabic.com     hq_nppjor
    10.80.243.54    nppjor.sabic.com        nppjor
    10.80.50.115    nppjorp.sabic.com        nppjorp
    Thank You,
    Salman Qayyum
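
    No resolution was posted in this thread, but the symptom itself narrows things down: ORA-12154 reported by ARC0 means the archiver background process, not an interactive session, failed to resolve the connect identifier configured in LOG_ARCHIVE_DEST_2. Two checks worth running on the primary (a sketch; the parameter and utility are standard, the alias is the one from this thread):

    SQL> show parameter log_archive_dest_2
    $ tnsping NPP_DR.WORLD

    The SERVICE= value in the first output must resolve under the same TNS_ADMIN environment the instance was started with, and any NAMES.DEFAULT_DOMAIN in sqlnet.ora applies (the tnsnames entries here are all in the .WORLD domain).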

  • Is There a Way to Run a Redo log for a Single Tablespace?

    I'm still fairly new to Oracle. I've been reading up on the architecture and I am getting the hang of it. Actually, I have 2 questions.
    1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
    So, in a situation where, for some reason, only a single dbf file has become corrupted, I only have to worry about replaying the log for those transactions that affect the tablespace associated with that file.
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    Thanks

    1) My first question is..."Is there a way to run the
    redo log file...but to specify something so that it
    only applies to a single tablespace and its related
    files?"
    No, you can't make a redo log file record the transaction entries for just one tablespace.
    In case a file gets corrupted, you need to apply all the archive logs since the last backup, plus the redo logs, to bring the DB back to a consistent state.
    2) Also, I would like to know if there is a query I
    can run from iSQLPlus that would allow me to view the
    datafiles that are associated with a tablespace.
    "select file_name, tablespace_name from dba_data_files" will give you the datafiles that each tablespace is made of.
    In your case you have created the tablespace with one datafile.
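
    For anyone following along, a ready-to-run version of that query (standard dictionary view; nothing here is specific to the poster's database):

    SELECT tablespace_name,
           file_name,
           ROUND(bytes/1024/1024) AS size_mb
    FROM   dba_data_files
    ORDER  BY tablespace_name, file_name;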

  • Why not use Redo log for consistent read

    Oracle 11.1.0.7:
    This might be a stupid question.
    As I understand it, if a select is issued at 7:00 AM and the data that the select is going to read changes at 7:10 AM, Oracle will still return the data as it existed at 7:00 AM. For this, Oracle needs the data in the undo segments.
    My question is: since redo also has past and current information, why can't the redo logs be used to retrieve that information? Why is undo required when redo already has all of it?

    user628400 wrote:
    Thanks. I get that piece, but isn't it the same problem with UNDO? It's overwritten as it expires, and there is no guarantee until we specifically ask Oracle to guarantee the undo retention? I guess I am trying to understand that UNDO was created for efficiency purposes, so that there is less performance overhead as compared to reading and writing from redo.
    If data was changed from 100 to 200, wouldn't both values be there in the redo logs? As I understand:
    1. Insert row with value 100 at 7:00 AM and commit. 100 will be written to the redo log.
    2. Update row to 200 at 8:00 AM and commit. 200 will be written to the redo log.
    So in essence 100 and 200 are both there in the redo logs, and if a select was issued at 7:00, the data could be read from the redo log too. Please correct me if I am understanding it incorrectly.

    I guess you didn't understand the explanation that I gave. It's not the old data that is kept in redo; it's the change vectors, including the change vectors of undo, which are useful to "recover" data when it's gone, but not useful as such for a select statement. In an undo block, by contrast, the actual value is kept. Remember that an undo block is still just a block, which can contain data like any normal block containing a table such as EMP. So redo does not hold 100 and 200 as readable values; it holds the change vectors for those operations, which are used to recover the transaction in SCN order, and would be read in that order as well. Reading old data from undo is straightforward for Oracle: the transaction table in the undo segment, which holds the entry for the transaction, knows where the old data is kept in the undo segment. You may have seen XIDUSN, XIDSLOT, and XIDSEQ in the transaction ID; they are exactly the pointers to where the undo data is kept. So for reading old versions of data, unlike redo, undo plays that role well.
    About the expiry of undo: only INACTIVE undo extents are marked for expiry. Active extents, which hold the records of an ongoing transaction, are never marked for it. You can come back after a lifetime, and if the undo is still there, your old data will have been kept safe by Oracle, since it is needed for multiversioning. Undo retention is about keeping the old data after commit, something you need not manage yourself if you are on 11g and using the Total Recall feature!
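
    As a concrete illustration of undo-based consistent reads, a flashback query reads an older version of the rows out of undo. A small sketch (EMP is just a placeholder table; the query fails with ORA-01555 if the undo has already been overwritten):

    SELECT empno, sal
    FROM   emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);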
    HTH
    Aman....

  • How to damage online redo log for simulation

    Dear All,
    Kindly help me with this: is there any way to damage an online redo log from the database level (not with an OS command like dd)?
    I need it as a test case for enabling db_block_checking and db_block_checksum (setting both parameters on).
    Will those parameters protect against redo log corruption? That is what I want to prove.
    Thanks
    Anthony

    user12215770 wrote:
    My purpose is to verify that db_block_checking and db_block_checksum can avoid redo corruption (corruption caused by a process inside the database).

    Redo corruption can also occur due to other issues, as http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams049.htm#REFRN10030 says:

    Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from making it to the disk.

    You could try the ORADEBUG POKE command to write directly into the SGA, if you know how to find the log buffer blocks. About ORADEBUG, please read http://www.juliandyke.com/Diagnostics/Tools/ORADEBUG/ORADEBUG.html.
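
    For the parameter half of the test, both settings can be changed dynamically. A sketch (FULL is the most aggressive level and adds measurable CPU overhead, so calibrate it for your system):

    ALTER SYSTEM SET db_block_checksum = FULL SCOPE=BOTH;
    ALTER SYSTEM SET db_block_checking = FULL SCOPE=BOTH;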

  • Adding redo logs for dataguard primary server

    Dear all:
    I have a physical Data Guard setup.
    I want to add new, larger redo logs on the primary server.
    Please advise what the corresponding action will be on the standby server.
    Thanks.

    Dear all:
    I have a physical Data Guard setup.
    I want to add new, larger redo logs on the primary server.
    Please advise what the corresponding action will be on the standby server.
    Thanks.

    Most of this information is kept in the control file. You have to recreate your standby control file after you add the new online redo logs. Transfer it to the standby server, together with the new online redo logs, while your standby is down. Your standby system should be able to recognize them when it goes back to standby mode.
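
    On the primary, adding the larger groups and retiring the old ones looks like this (a sketch; the paths and size are placeholders, and a group can be dropped only once it is INACTIVE and archived):

    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/prod/redo04a.log', '/u01/oradata/prod/redo04b.log') SIZE 1024M;
    -- switch until an old group goes INACTIVE, then drop it
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;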

  • Unable to get Audit logs for Data Mining Model Oracle11g

    Hi All, I followed all the steps in the two links below, but I am not getting any audit logs.
    http://download.oracle.com/docs/cd/B28359_01/datamine.111/b28130/install_odm.htm#DMADM024
    http://download.oracle.com/docs/cd/B28359_01/datamine.111/b28130/schemaobjs.htm#sthref233
    Made sure the audit_trail is set to DB.
    SQL cmds and its output shown below
    SQL> AUDIT GRANT,AUDIT,COMMENT,RENAME,SELECT ON mining model NB_SH_Clas_sample;
    Audit succeeded.
    SQL> COMMENT ON MINING MODEL NB_SH_Clas_sample IS 'i am here';
    COMMENT ON MINING MODEL NB_SH_Clas_sample IS 'i am here'
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Process ID: 31648
    Session ID: 135 Serial number: 114
    SQL> quit
    Please help me if I have left out any step.
    Thanks in Advance

    Hi.
    Please take a look at the other concurrent thread on model object auditing for more detailed information. If you have Oracle support, please file a TAR with Metalink (http://metalink.oracle.com).
    Regards, Peter.

  • Unable to get the log for Suspended process in BPM

    Hi All,
    We are integrating SAP BPM with SAP BRFPlus.
    We created a rule in a BRFPlus function and converted that function as a RFC.
    Now we imported the RFC into BPM process and did the subsequent settings as per the blog
    Calling ABAP RFC in CE BPM 7.2
    But when we check the process, it shows as suspended, and when we try to resume it, it still goes into error.
    In the BPM process we did not create any activity other than one automated activity hosting the RFC between the start and end steps.
    Can you please guide me on where to check the error log and how to resolve this error with the automated activity?
    I checked most of the articles and blogs on SDN but did not find an answer.
    Please help me.
    Thanks in advance

    Hi Siddhant,
    Thanks a lot for your valuable inputs.
    We are getting the following error.
    A technical error during invocation: Could not invoke service reference name
    We tried all the possible help from SDN but could not find a solution.
    We created a BRFplus function, converted it into an RFC, imported that RFC into the BPM process, and inserted it into an automated activity between the start and end steps.
    We got the error while testing the automated activity.
    Can you please guide us? This is very important.
    Thanks in advance

  • SAP Authorization: unable to see change logs for role assignments

    Hi,
    Please help us with this.
    When trying to find the changes made to a certain role, we are unable to see the changes; it appears the changes are not getting recorded.
    We receive the message "NO CHANGE DOCUMENT FOUND TO MATCH SPECIFIED CRITERIA"
    (message no. CD887).
    Please let us know how this can be sorted out.
    Best Regards,
    Rahul.

    Hi,
    As per your query, you can get this info in transaction SUIM. Also, when you modify any role, you have to activate it.
    Anil

  • Third party tools for redo log

    Dear,
    Are there any third-party tools that can read the redo logs of Oracle 9i?
    Many thanks

    Most third parties gave up when they realized that:
    a) Oracle now owns, and includes for free, LogMiner, and
    b) Oracle is free, and willing, to change the contents of the log files at any time, including in patch releases.
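
    For completeness, the built-in route looks like this; a minimal LogMiner sketch (the archived log path and table name are placeholders; DICT_FROM_ONLINE_CATALOG requires the source database to be open):

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_100.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    SELECT scn, operation, sql_redo FROM v$logmnr_contents WHERE seg_name = 'EMP';
    EXECUTE DBMS_LOGMNR.END_LOGMNR;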

  • Data Guard - Redo log files

    Hi,
    I have set up Data Guard. Everything is fine; archives are being transferred to the standby.
    During configuration, I also created standby redo log groups 4, 5, and 6 and copied them to the standby.
    But with real-time apply, the standby does not appear to be using standby redo log groups 4, 5, and 6; when I query v$log it shows only groups 1, 2, and 3.
    It should be using the standby redo logs for maximum availability.
    Please help.
    Thanks in advance.

    There was a similar question here just a few days ago:
    Data Guard - redo log files
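
    One point worth adding: standby redo logs never appear in v$log, which lists only the online redo log groups. On the standby, a standard-view query such as this shows whether groups 4, 5, and 6 are actually in use:

    SELECT group#, thread#, sequence#, status FROM v$standby_log;

    While redo transport is writing into a standby redo log, that group shows as ACTIVE with a non-zero SEQUENCE#.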

  • Sizing the redo log files using optimal_logfile_size view.

    Regards
    I have a specific question regarding logfile size. I have deployed a test database and was exploring how to select an optimal redo log size for performance tuning, using the OPTIMAL_LOGFILE_SIZE column of v$instance_recovery. My main goal is to reduce the redo bytes required for instance recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:
    In order to use the advisory from v$instance_recovery I had to set the fast_start_mttr_target parameter, which is not set by default, so I did these steps:
    1) SQL> show parameter fast_start_mttr_target;

    NAME                        TYPE     VALUE
    fast_start_mttr_target      integer  0
    2) Setting fast_start_mttr_target requires nullifying the following deferred parameters:
    SQL> show parameter log_checkpoint;

    NAME                        TYPE     VALUE
    log_checkpoint_interval     integer  0
    log_checkpoint_timeout      integer  1800
    log_checkpoints_to_alert    boolean  FALSE

    SQL> select isses_modifiable, issys_modifiable, isinstance_modifiable, ismodified from v$parameter where name like 'log_checkpoint_timeout';

    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE           IMMEDIATE        TRUE            FALSE

    SQL> alter system set log_checkpoint_timeout=0 scope=both;
    System altered.

    SQL> show parameter log_checkpoint_timeout;

    NAME                        TYPE     VALUE
    log_checkpoint_timeout      integer  0
    3) Now setting fast_start_mttr_target:
    SQL> select isses_modifiable, issys_modifiable, isinstance_modifiable, ismodified from v$parameter where name like 'fast_start_mttr_target';

    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE           IMMEDIATE        TRUE            FALSE

    I set fast_start_mttr_target to 1200 = 20 minutes, per the Oracle recommendation.
    4) Querying the v$instance_recovery view:
    SQL> select actual_redo_blks, target_redo_blks, target_mttr, estimated_mttr, optimal_logfile_size, ckpt_block_writes from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
                 276           165888          93             59                  361             16040

    Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120:
    SQL> alter system set fast_start_mttr_target=120 scope=both;
    System altered.

    Now the logfile size suggested by v$instance_recovery is 290 MB:
    SQL> select actual_redo_blks, target_redo_blks, target_mttr, estimated_mttr, optimal_logfile_size, ckpt_block_writes from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
                  59           165888          93             59                  290             16080

    After altering the logfile size to 290 MB, as shown by the v$log view:
    SQL> select group#, thread#, sequence#, bytes from v$log;

    GROUP# THREAD# SEQUENCE#      BYTES
         1       1        24  304087040
         2       1         0  304087040
         3       1         0  304087040
         4       1         0  304087040

    5) After altering the size I observed an anomaly: the redo blocks to be applied for recovery increased from 59 to 696, and v$instance_recovery is now suggesting a logfile size of 276 MB. Have I misunderstood something?
    SQL> select actual_redo_blks, target_redo_blks, target_mttr, estimated_mttr, optimal_logfile_size, ckpt_block_writes from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
                 696           646947         120             59                  276             18474

    Please clarify the above output. I have not been able to optimize the logfile size or achieve the goal of reducing the redo blocks to be applied for recovery; any help is appreciated.

    sunny_123 wrote:
    Sir, Oracle says that fast_start_mttr_target can be set as high as 3600 = 1 hour, as suggested by the following Oracle document:
    http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
    I set mine to 1200 = 20 minutes. Later I adjusted it to 120 = 2 minutes, as TARGET_MTTR suggested a value around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is shown in TARGET_MTTR of v$instance_recovery).

    Just to add: you are reading the 9.2 documentation, and a lot has changed since then. For example, fast_start_mttr_target was introduced in 9.2 and explicitly required the DBA to set and monitor it because of the additional checkpoint writes it can cause. From 10g onwards the parameter is automatically maintained by Oracle. Also, 9i has long been desupported, followed by 10g, so you would do better to read the latest 11g documentation, or at least that of 10.2.
    Aman....

  • Standby redo log does not exist...

    Hello,
    Oracle 11.2.0.2, running on Solaris
    This is kind of a continuation of a previous thread, but this is a different question:
    I have a DG configuration, with the primary database having 2 redo logs per redo log group.
    And, it has 2 redo logs for the standby redo log groups.
    But I just found out something very strange:
    For the standby redo log group, even though the database shows it has two redo log files per group, the second file does not actually exist on the file system.
    And, I have no idea where it is, or why the database is not complaining about it.
    See below:
    SQL> select * from v$logfile;
             GROUP# STATUS    TYPE    MEMBER                                        IS_
                 11           STANDBY /opt/oracle/oradata2/PROD/REDO01A_STDBY.log NO
                 11           STANDBY /opt/oracle/oradata3/PROD/REDO01B_STDBY.log NO    <== does not exist on file system
                 12           STANDBY /opt/oracle/oradata2/PROD/REDO02A_STDBY.log NO
                 12           STANDBY /opt/oracle/oradata3/PROD/REDO02B_STDBY.log NO    <== does not exist on file system
                 13           STANDBY /opt/oracle/oradata2/PROD/REDO03A_STDBY.log NO
                 13           STANDBY /opt/oracle/oradata3/PROD/REDO03B_STDBY.log NO    <== does not exist on file system
                 14           STANDBY /opt/oracle/oradata2/PROD/REDO04A_STDBY.log NO
                 14           STANDBY /opt/oracle/oradata3/PROD/REDO04B_STDBY.log NO    <== does not exist on file system
                 15           STANDBY /opt/oracle/oradata2/PROD/REDO05A_STDBY.log NO
                 15           STANDBY /opt/oracle/oradata3/PROD/REDO05B_STDBY.log NO    <== does not exist on file system
                  5           ONLINE  /opt/oracle/oradata1/PROD/REDO05A.log       NO
                  5           ONLINE  /opt/oracle/oradata2/PROD/REDO05B.log       NO
                  6           ONLINE  /opt/oracle/oradata1/PROD/REDO06A.log       NO
                  6           ONLINE  /opt/oracle/oradata2/PROD/REDO06B.log       NO
                  7           ONLINE  /opt/oracle/oradata1/PROD/REDO07A.log       NO
                  7           ONLINE  /opt/oracle/oradata2/PROD/REDO07B.log       NO
                  8           ONLINE  /opt/oracle/oradata1/PROD/REDO08A.log       NO
                  8           ONLINE  /opt/oracle/oradata2/PROD/REDO08B.log       NO
    18 rows selected.
    Notice below that the "B" redo logs do not exist.
    SQL> !ls -l /opt/oracle/oradata3/PROD/REDO01B_STDBY.log
    /opt/oracle/oradata3/PROD/REDO01B_STDBY.log: No such file or directory
    SQL> !ls -l /opt/oracle/oradata3/PROD/REDO02B_STDBY.log
    /opt/oracle/oradata3/PROD/REDO02B_STDBY.log: No such file or directory
    SQL> !ls -l /opt/oracle/oradata3/PROD/REDO03B_STDBY.log
    /opt/oracle/oradata3/PROD/REDO03B_STDBY.log: No such file or directory
    SQL> !ls -l /opt/oracle/oradata3/PROD/REDO04B_STDBY.log
    /opt/oracle/oradata3/PROD/REDO04B_STDBY.log: No such file or directory
    SQL> !ls -l /opt/oracle/oradata3/PROD/REDO05B_STDBY.log
    /opt/oracle/oradata3/PROD/REDO05B_STDBY.log: No such file or directory
    But here, you can see that the "A" redo logs actually do exist.
    SQL> !ls -l /opt/oracle/oradata2/PROD/REDO01A_STDBY.log
    -rw-r-----   1 oracle   dba      536871424 Jan  7  2011 /opt/oracle/oradata2/PROD/REDO01A_STDBY.log

    Hello;
    I'm able to recreate this:
    SQL> select * from v$logfile;
        GROUP# STATUS  TYPE    MEMBER                                             IS_
             3         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_3_8gtxxrl6_.log
             2         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_2_8gtxxr4f_.log
             1         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_1_8gtxxqng_.log
             4         STANDBY /u01/app/oracle/oradata/RECOVER2/redo04.log        NO
             5         STANDBY /u01/app/oracle/oradata/RECOVER2/redo05.log        NO
             6         STANDBY /u01/app/oracle/oradata/RECOVER2/redo06.log        NO

    And then:
    SQL> !ls -al /u01/app/oracle/oradata/RECOVER2/redo04.log
    ls: /u01/app/oracle/oradata/RECOVER2/redo04.log: No such file or directory

    Checking... not there, but Oracle (11.2.0.3) allows cleanup without barking. The group had only one member, so I dropped the whole group:
    SQL> ALTER DATABASE DROP LOGFILE GROUP 4;
    Database altered.
    SQL> select * from v$logfile;
        GROUP# STATUS  TYPE    MEMBER                                             IS_
             3         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_3_8gtxxrl6_.log
             2         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_2_8gtxxr4f_.log
             1         ONLINE  /u01/app/oracle/flash_recovery_area/RECOVER2/onlin YES
                               elog/o1_mf_1_8gtxxqng_.log
             5         STANDBY /u01/app/oracle/oradata/RECOVER2/redo05.log        NO
             6         STANDBY /u01/app/oracle/oradata/RECOVER2/redo06.log        NO
    SQL>

    Best Regards
    mseberg
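
    For the original poster's case, where each group should keep both members, the phantom member can be dropped from the controlfile and added back so that Oracle creates the file. A sketch using the paths from the post (the group must not be in use by redo transport at the time):

    ALTER DATABASE DROP STANDBY LOGFILE MEMBER '/opt/oracle/oradata3/PROD/REDO01B_STDBY.log';
    ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/opt/oracle/oradata3/PROD/REDO01B_STDBY.log' TO GROUP 11;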

  • Online Redo logs instead of Standby Redo logs

    RDBMS Version: 11.2.0.3 / Platform: RHEL 6.3
    To migrate a 3 TB database to a new DB server, we are going to use RMAN DUPLICATE.
    Step 1. Take a full backup of the DB + a standby control file at the primary site and transfer the backup files to the standby site.
    Step 2. At the standby site, run RMAN DUPLICATE TARGET DATABASE FOR STANDBY.
    After the above step, we don't want to create the standby redo logs, because the newly restored DB on the standby server is going to be the new Prod DB that the application will point to.
    So, can I skip the standby redo log creation and create online redo logs instead?
    As mentioned earlier, our objective is not to create a proper Data Guard standby DB setup. We just want to clone our DB to another server using RMAN DUPLICATE.

    Tom wrote:
    RDBMS Version: 11.2.0.3 / Platform: RHEL 6.3
    To migrate a 3 TB database to a new DB server, we are going to use RMAN DUPLICATE.
    Step 1. Take a full backup of the DB + a standby control file at the primary site and transfer the backup files to the standby site.
    Step 2. At the standby site, run RMAN DUPLICATE TARGET DATABASE FOR STANDBY.
    After the above step, we don't want to create the standby redo logs, because the newly restored DB on the standby server is going to be the new Prod DB that the application will point to.
    So, can I skip the standby redo log creation and create online redo logs instead?
    As mentioned earlier, our objective is not to create a proper Data Guard standby DB setup. We just want to clone our DB to another server using RMAN DUPLICATE.
    Hi,
    You say "take a full backup of the DB + a standby control file" and "we just want to clone our DB to another server using RMAN DUPLICATE".
    If you only want a clone of the production database, why are you taking a standby controlfile? And if you don't want to create a standby database, why use the DUPLICATE command with the FOR STANDBY option? You can use the DUPLICATE command to clone a database without the FOR STANDBY option.
    If instead you say "no, we do want to create a standby database and we will perform a switchover", then yes, you can use online redo logs in maximum performance mode, and you can create standby redo logs on both databases; standby redo logs are used by a database only while its role is standby.
    Regards
    Mahir M. Quluzade
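
    For the straight-clone path, a sketch of backup-based duplication without FOR STANDBY (the database name and backup location are placeholders, and it assumes the auxiliary instance is already started NOMOUNT with a suitable init.ora):

    RMAN> CONNECT AUXILIARY /
    RMAN> DUPLICATE DATABASE TO prodnew BACKUP LOCATION '/backups/prod' NOFILENAMECHECK;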

  • Online redo log group needed??

    hey guys!
    I am about to recreate the control file of the clone DB, recover it until cancel using a backup controlfile, and apply all the necessary archive logs.
    My question is: do I need the online redo log groups of the production DB? Since I have all the archive logs, I think the online redo log groups will be created while recreating the control file?
    Thanks!

    while recreating the control file, online redo log groups will be created too
    While creating the control file, the online redo log groups don't get created. After applying all the archive logs, you have to open the database with the RESETLOGS option, and then Oracle will create the redo logs for you.
    Best of Luck !!
    Daljit Singh
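
    To make that concrete: the LOGFILE clause of CREATE CONTROLFILE is where the new online log groups are defined, and the files themselves are created only at OPEN RESETLOGS. A sketch (the database name, paths, and sizes are placeholders, and every datafile must be listed):

    CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
      LOGFILE
        GROUP 1 '/u01/oradata/clone/redo01.log' SIZE 100M,
        GROUP 2 '/u01/oradata/clone/redo02.log' SIZE 100M,
        GROUP 3 '/u01/oradata/clone/redo03.log' SIZE 100M
      DATAFILE
        '/u01/oradata/clone/system01.dbf',
        '/u01/oradata/clone/sysaux01.dbf';
    -- apply the archived logs, then:
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    ALTER DATABASE OPEN RESETLOGS;  -- the log groups above are created at this point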
