RAC redo logs (Confusion)

I was reading a RAC document written by Steve Karam.
Cache Fusion for RAC
RAC provides us a multiple instance, single database system. In a RAC environment, there is one shared set of datafiles. Each instance in the “cluster” will have its own SGA (RAM areas) and binary processes. They will also have their own control files and redo log files, though these must be viewable by all nodes, or systems, in the cluster.
http://www.dba-oracle.com/t_implementation_decision_rac_clusters.htm
I'm confused about whether each instance has its own redo log files or whether they are centrally stored. The above document says that each instance has its own redo log files.
Rakesh Soni.

I'm confused whether each instance has its own redo log files or they are centrally stored. The above document says that each instance has its own redo log files.
No, it is not like that...
We do have the log_archive_format parameter; here we will be using _%t as part of the format.
t = thread, which tells Oracle which instance the log is coming from...
I am not a RAC expert, but it cannot be like that.
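For what it's worth, a quick way to see the per-instance redo threads is to query v$log; a minimal sketch (standard views, output trimmed for readability):
SQL> select thread#, group#, status from v$log order by thread#, group#;
SQL> show parameter log_archive_format
-- each instance writes to the groups of its own thread#, and %t in
-- log_archive_format stamps that thread number into the archived log name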

Similar Messages

  • Data Guard : Standby Redo Log CONFUSION

    Trying to set up test Standby db on 10.2.0
    I am quite confused about step 3.1.3 below: how is the normal redo linked with the standby redo? Should the standby redo logs not be members of the original redo groups?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    Original redo logs:
    SQL>  select * from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
             1          1         28   52428800          1 YES INACTIVE                375136 22-NOV-07
             2          1         29   52428800          1 YES INACTIVE                375138 22-NOV-07
             3          1         30   52428800          1 NO  CURRENT                 375143 22-NOV-07
    I added the below from the notes:
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
    ('/u01/oracle/oradata/db01/redo01_stb.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
    ('/u02/oracle/oradata/db01/redo02_stb.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 12
    ('/u03/oracle/oradata/db01/redo03_stb.log') SIZE 50M;
    After a few 'alter system switch logfile;' commands, I still have:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS
            10          0          0 YES UNASSIGNED
            11          0          0 YES UNASSIGNED
            12          0          0 YES UNASSIGNED
    All are UNASSIGNED; should one standby group not be ACTIVE, like the above link shows?
    Many thanks for any help

    First things first:
    From the Docs.:
    "Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database. However, the recommended number of standby redo log file groups is dependent on the number of threads on the primary database. Use the following equation to determine an appropriate number of standby redo log file groups:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database."
    You are 1 short!
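    Applying the formula to the output above (1 thread, 3 online log groups) gives (3 + 1) * 1 = 4 standby groups, so one more is needed; a minimal sketch (the group number and path are illustrative):
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 13
         ('/u04/oracle/oradata/db01/redo04_stb.log') SIZE 50M;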

  • RAC Redo Log Internal

    Hi all
    I want ask some questions about redo log generation in RAC.
    1. Does Oracle guarantee that for each committed transaction, from begin_transaction to commit, the redo and undo information resides in the redo log of one node? In other words, would it be possible that Oracle puts the begin-transaction record in the redo log of node A and the commit record in the redo log of another node B?

    Reup this thread :)
    Another question:
    What about how RAC broadcasting performs? I mean, is it true that when node A commits a transaction T, it broadcasts this information to all other nodes? Would Oracle write this information about T to node B's online redo log? For example, would the redo log of node B contain a redo record including opcode 5.4, T's transaction id and node A's thread number?
    But in my analysis on Oracle 10.2.0.1.0 RAC (NFS share), neither the dump file of the redo log nor the binary redo log contains that redo record.
    So I wonder what Oracle actually does.
    Black Thought
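    If you want to repeat that kind of analysis, one approach is to dump a redo log to a trace file and search it for commit records (opcode 5.4). DUMP LOGFILE is a diagnostic command, so treat this as a sketch for a test system; the file name is illustrative:
    SQL> ALTER SESSION SET tracefile_identifier = 'redo_dump';
    SQL> ALTER SYSTEM DUMP LOGFILE '/u01/oradata/rac/redo01_thread1.log';
    -- the dump lands in a trace file under user_dump_dest; search it for "OP:5.4"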

  • Confused about standby redo log groups

    hi masters,
    I am a little bit confused about creating redo log groups for a standby database. As per the documentation, the number of standby redo log groups depends on the following equation:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    But I don't know where to find the threads. Actually, I would like to understand threads in depth.
    How do I find the current thread?
    thanks and regards
    VD

    Is it really possible that we can install standby and primary on the same host?
    Yes, it's possible, and I have done it many times on the same machine.
    Regarding your confusion about the spfile: I agree the documentation recommends using an spfile, but that is mainly for DG Broker handling, and only if you go with DG Broker in the future.
    There is no requirement that an spfile is an integral step of a primary and standby database implementation; you can go with a pfile, but it is better to use an spfile. Anyhow, always keep the pfile from which you created the spfile. I said to make the entry within the pfile and then mount your standby database with this pfile, or you can create the spfile from this pfile after adding these parameters to it; I said that because you might otherwise be adding these parameters from the SQL prompt.
    1. Logs are not getting transferred (even though I configured the listener using Net Manager).
    2. Logs are not getting archived to the standby directory.
    3. 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION' never completes its recovery.
    4. When I tried to open the database, it always said the system datafile is not from a sufficiently old backup.
    5. I tried 'alter database recover managed standby database cancel' also.
    Read your alert log file and paste the latest log here...
    Khurram
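    Back to the original question about finding threads: a minimal sketch (run from any instance):
    SQL> select thread#, instance, status from v$thread;
    SQL> select thread#, count(*) as log_groups from v$log group by thread#;
    -- plug the counts into (max groups per thread + 1) * number of threads
    -- to get the recommended number of standby redo log groups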

  • Resizing redo log files on a 3 node RAC with single node standby database

    Hi
    On a 3-node 11g RAC system, I have to resize the redo logs on the primary database from 50M to 100M. I was planning to do the following steps:
    SQL> select group#,thread#,members,status from v$log;
    GROUP# THREAD# MEMBERS STATUS
    1 1 3 INACTIVE <-- whenever INACTIVE, the logfile group can be dropped
    2 1 3 CURRENT       & resized; 'alter system switch logfile' moves the CURRENT group
    3 1 3 INACTIVE
    4 2 3 INACTIVE
    5 2 3 INACTIVE
    6 2 3 CURRENT
    7 3 3 INACTIVE
    8 3 3 INACTIVE
    9 3 3 CURRENT
    9 rows selected.
    SQL> alter database drop logfile group 1;
    Database altered.
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1
    GROUP 1 (
    '/PROD/redo1/redo01a.log',
    '/PROD/redo2/redo01b.log',
    '/PROD/redo3/redo01c.log'
    ) SIZE 100M reuse;
    Database altered.
    However I am not sure what needs to be done for the standby. The standby_file_management is set to auto and it is single instance standby.
    SQL> select group#,member from v$logfile where type='STANDBY';
    GROUP# MEMBER
        10 /PROD/flashback/PROD/onlinelog/o1_mf_10_7b44gy67_.log
        11 /PROD/flashback/PROD/onlinelog/o1_mf_11_7b44h7gy_.log
        12 /PROD/flashback/PROD/onlinelog/o1_mf_12_7b44hjcr_.log
    Please let me know.
    Thanks
    Sumathy

    Hello;
    For the online redo and standby redo logs this won't help:
    standby_file_management is set to auto
    On the standby, cancel recovery, then drop and recreate the online redo and/or standby redo logs.
    Then start recovery again.
    Example (I have a habit of removing the old file at the OS level to avoid REUSE and conflicts):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
    alter database add standby logfile group 4
    ('/u01/app/oracle/oradata/orcl/standby_redo04.log') size 100m;
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
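    A fuller sequence on the standby might look like this (just a sketch; the group number and file path are illustrative, and each group should be idle before it is dropped):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
    SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
         ('/u01/app/oracle/oradata/orcl/standby_redo10.log') SIZE 100M;
    -- repeat the drop/add for each online and standby redo group
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;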
    Notes worth reviewing :
    Online Redo Logs on Physical Standby [ID 740675.1]
    Error At Standby Database Ora-16086: Standby Database Does Not Contain Available Standby Log Files [ID 1155773.1]
    Example of How To Resize the Online Redo Logfiles [ID 1035935.6]
    Best Regards
    mseberg

  • Brarchive not backing up offline redo log on RAC

    Hi all Oracle and SAP experts,
    After running brarchive on my newly set up RAC system, the program reports 'No offline redo log files found for processing'. Hence, none of the archive log files are being backed up.
    Tons of archive log files are generated in the archive log directory. I do not understand why brarchive is unable to locate these files.
    There are 2 Windows RAC nodes. I used the command 'brarchive -u / -c force -p initTST.sap -sd' to trigger brarchive. It is using rman to backup to a windows shared folder. And the brbackup runs successfully.
    The output from brarchive is below:
    BR0002I BRARCHIVE 7.00 (16)
    BR0006I Start of offline redo log processing: aebbxosu.svd 2009-07-24 14.07.36
    BR0477I Oracle pfile F:\oracle\TST\102\database\initTST1.ora created from spfile F:\oracle\TST\102\database\spfile.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     TST1
    oracle_home                    F:\oracle\TST\102
    oracle_profile                 F:\oracle\TST\102\database\initTST1.ora
    sapdata_home                   F:\oracle\TST
    sap_profile                    F:\oracle\TST\102\database\initTST.sap
    backup_dev_type                disk
    archive_copy_dir               \\10.11.0.101\backup\RAC_test
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    rman_compress                  no
    archive_dupl_del               only
    parallel_instances             TST2:F:\oracle\TST\102@TST2
    system_info                    tstadm RACNODE1 Windows 5.2 Build 3790 Service Pack 2 AMD64
    oracle_info                    TST 10.2.0.4.0 8192 1492 10504542 RACNODE1 UTF8 UTF8
    sap_info                       620 SAPTST TST D1583565402 R3_ORA 0020087949
    make_info                      NTAMD64 OCI_10201_SHARE Aug 22 2006
    command_line                   brarchive -u / -c force -p initTST.sap -sd
    BR0013W No offline redo log files found for processing
    BR0007I End of offline redo log processing: aebbxosu.svd 2009-07-24 14.07.54
    BR0280I BRARCHIVE time stamp: 2009-07-24 14.07.55
    BR0004I BRARCHIVE completed successfully with warnings

    I had set BR_TRACE to 15 and received the following in my trace file:
    BR0249I BR_TRACE: level 3, function BrCurrRedoGet exit with 0
    BR0249I BR_TRACE: level 2, function BrInstCheck exit with -10
    BR0248I BR_TRACE: level 2, function BrDiskStatGet entry with '\\10.11.0.101\backup\RAC_test'
    BR0250I BR_TRACE: level 2, function BrDiskStatGet exit with '19999863332864 9780518486016 9780518486016 9770967198432'
    BR0248I BR_TRACE: level 2, function arch_last_get entry with 'F:\oracle\TST\saparch\archTST1.log'
    BR0249I BR_TRACE: level 2, function arch_last_get exit with 0
    BR0248I BR_TRACE: level 2, function BrArchNameGet entry with '0 TST1'
    BR0250I BR_TRACE: level 2, function BrArchNameGet exit with 'G:\oracle\TST\oraarch\681026106_1_0.dbf'
    BR0248I BR_TRACE: level 2, function BrNameBuild entry with '41 G:\oracle\TST\oraarch\681026106_1_0.dbf NULL'
    BR0250I BR_TRACE: level 2, function BrNameBuild exit with 'G:\oracle\TST\oraarch'
    BR0248I BR_TRACE: level 2, function BrFileStatGet entry with 'G:\oracle\TST\oraarch'
    BR0250I BR_TRACE: level 2, function BrFileStatGet exit with '39171616256 0'
    BR0248I BR_TRACE: level 2, function BrArchExist entry with 'TST1'
    BR0248I BR_TRACE: level 3, function BrArchNameGet entry with '987656789 TST1'
    BR0250I BR_TRACE: level 3, function BrArchNameGet exit with 'G:\oracle\TST\oraarch\1_1_987656789.dbf'
    BR0249I BR_TRACE: level 2, function BrArchExist exit with -3
    BR0248I BR_TRACE: level 2, function BrDiskStatGet entry with '\\10.11.0.101\backup\RAC_test'
    BR0248I BR_TRACE: level 2, function BrDbDisconnect entry with 'void'
    BR0280I BRARCHIVE time stamp: 2009-07-27 09.41.42
    BR0644I BR_TRACE: location BrDbDisconnect-1, SQL statement:
    'COMMIT RELEASE'
    BR0300I BR_TRACE: SQL code: 0, number of processed rows: 0
    BR0248I BR_TRACE: level 3, function BrZombieKill entry with 'void'
    BR0250I BR_TRACE: level 3, function BrZombieKill exit with 'void'
    BR0249I BR_TRACE: level 2, function BrDbDisconnect exit with 0
    BR0013W No offline redo log files found for processing
    My current database incarnation is 681026106, but brarchive is searching for archive logs from incarnation 987656789 (1_1_987656789.dbf). As a workaround, I created dummy files (e.g. 1_1_987656789.dbf) for each node and managed to trick brarchive into believing these are the real files. Subsequent backups work fine. Thanks Michael!

  • Question on Redo logs in RAC

    DB version:11.2
    Platform : Solaris 10
    We create RAC DBs manually. Below is a log of the DB creation from Node1. The instance on Node2 is not yet created (only the binaries are installed on Node2).
    SQL> conn / as sysdba
    Connected to an idle instance.
    SQL> startup nomount pfile=/u03/oracle/11.2/db_1/dbs/initnehprd1.ora
    ORACLE instance started.
    Total System Global Area 1252643278 bytes                                      
    Fixed Size                  2219208 bytes                                      
    Variable Size             771752760 bytes                                      
    Database Buffers          469762048 bytes                                      
    Redo Buffers                8929280 bytes  
    SQL> CREATE DATABASE nehprd MAXINSTANCES 8 MAXLOGFILES 16 MAXLOGMEMBERS 4 MAXDATAFILES 1024
      2  CHARACTER SET AL32UTF8 NATIONAL CHARACTER SET AL16UTF16
      3  DATAFILE '+DG_DATA01/nehprd/nehprd_system01.dbf' SIZE 1000m EXTENT MANAGEMENT LOCAL
      4  SYSAUX DATAFILE '+DG_DATA01/nehprd/nehprd_sysaux01.dbf' SIZE 600m
      5  DEFAULT TEMPORARY TABLESPACE temp
      6  TEMPFILE '+DG_DATA01/nehprd/nehprd_temp01.dbf' SIZE 2000m EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5m
      7  UNDO TABLESPACE undotbs11 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1101.dbf' SIZE 700m
      8  LOGFILE
      9          GROUP 1 ('+DG_DATA01/nehprd/nehprd_log01.dbf') SIZE 150m,
    10          GROUP 2 ('+DG_DATA01/nehprd/nehprd_log02.dbf') SIZE 150m,
    11          GROUP 3 ('+DG_DATA01/nehprd/nehprd_log03.dbf') SIZE 150m
    12  /
    Database created.
    Elapsed: 00:00:18.95
    SQL> CREATE UNDO TABLESPACE undotbs12 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1201.dbf' SIZE 700m;
    Tablespace created.
    Elapsed: 00:00:01.30
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 4 '+DG_DATA01/nehprd/nehprd_log04.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.25
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 5 '+DG_DATA01/nehprd/nehprd_log05.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.43
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 6 '+DG_DATA01/nehprd/nehprd_log06.dbf' SIZE 150m;
    Database altered.
    But after the above activity, the following log files exist in the DB:
    6 log groups show up for each instance, and they are all in the same location, +DG_DATA01/nehprd!
    INST_ID     GROUP# STATUS  TYPE             MEMBER                                   IS_
             1          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             1          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             1          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             1          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             1          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
             1          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO
             2          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             2          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             2          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             2          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             2          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
             2          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO
    How were redo log groups 4, 5 and 6 created for thread 1, and how were redo log groups 1, 2 and 3 created for thread 2?

    Hi,
    To make things worse, when you query v$logfile, it will show 6 redo logfiles belonging to 6 redo groups for each instance.
    The fact that it shows all redo groups does not mean they belong to that instance. Try querying v$database or v$datafile: does that mean the database/datafiles belong to only one instance? Of course not.
    Isn't this a bit of a bug?
    Of course not. It's the concept.
    To understand it you need to understand the difference between an instance and a database. A database (i.e. the files) can be opened by many instances.
    An Oracle database server consists of a database and at least one database instance (commonly referred to as simply an instance). Because an instance and a database are so closely connected, the term Oracle database is sometimes used to refer to both instance and database. In the strictest sense the terms have the following meanings:
    Database
    A database is a set of files, located on disk, that store data. These files can exist independently of a database instance.
    Database instance
    An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files.
    Database: (v$database)
    CONTROLFILE (v$controlfile)
    DATAFILE (v$datafile)
    ONLINELOG (v$logfile,v$log)
    ARCHIVELOG (v$archivelog)
    SPFILE
    The views above will show the same values in every instance, because if a file (the database) is changed, it is changed for all instances. That means you do not need to use gv$ because the information is the same in all instances, and you do not need to connect to each instance to query these v$ views because the information is the same regardless of the instance.
    Instances: (v$instance)
    PARAMETERS (v$parameter)
    MEMORY STRUCTURE (e.g v$session)
    The view v$session will show information about sessions of that instance only. In RAC each instance has its own session information, so you need to query gv$session because it gathers session information from the other instances as well.
    The fact that each instance is assigned its own REDO/UNDO does not mean they are part of the instance; REDO/UNDO are part of the database. They can be written by the assigned instance and read by all instances (that's it).
    It's not a bug: when you query v$datafile, v$logfile or v$controlfile on any instance you get the same result, because it's the DATABASE (a database, i.e. the files, can be opened by many instances).
    Levi Pereira
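    A quick way to see which groups are assigned to which thread (and therefore which instance), as a small sketch:
    SQL> select l.thread#, l.group#, f.member
           from v$log l join v$logfile f on f.group# = l.group#
          order by l.thread#, l.group#;
    -- here groups 1-3 report thread# 1 (created by CREATE DATABASE) and
    -- groups 4-6 report thread# 2 (added with ADD LOGFILE THREAD 2),
    -- even though v$logfile lists every member on every instance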

  • Redo log groups/files on the local disk for RAC DB

    Is it possible/supported to place redo log files/groups on local disk for a RAC database?
    Thank you,

    Hi Mufalani,
    Are you sure about this one? I'd think that this only works as long as the nodes don't restart/crash. I imagine it like this:
    The nodes store redo logs on local disks which are NFS-exported to the other node. Everything should be fine. But when one node crashes/reboots, the other one has to perform crash recovery for the crashed instance, and this won't work when the NFS mount (with the redo logs) is inaccessible. So I would not want to do this.
    Bjoern

  • Adding bigger size Redo log groups In RAC , ASM

    Hi Folks,
    Database version - 10.1.0.4.0
    OS version - AIX 5.3
    RAC node 2 and ASM
    We had 4 redo log groups of a smaller size on both nodes. Yesterday I added 4 new groups of a bigger size using the PL/SQL Developer tool and deleted 2 old redo log groups. But I am not able to delete the remaining 2 old groups:
    ORA-01567: dropping log 2 would leave less than 2 log files for instance 1.
    Our redo log files are on SAN and both nodes point to the same storage. When I ran this query from the command prompt:
    SELECT v$logfile.member, v$logfile.group#, v$log.status, v$log.bytes
         FROM v$log, v$logfile
    WHERE v$log.group# = v$logfile.group#;
    I got the same result on both nodes.
    The problem, I suspect, is that all 4 new log groups were added to instance 2, and its 2 old groups were also deleted.
    Now my questions are:
    1. Should I have added redo log groups separately for both nodes even though the storage is the same for both nodes?
    2. Are redo log groups defined separately for each node?
    How should I assign 2 new redo log groups to instance 1?
    Regards,

    Please check:
    SQL> select instance_number, instance_name, thread# from gv$instance;
    This shows each instance with its thread ID.
    SQL> select group#, thread#, members, status from v$log;
    Check the number of groups in each thread.
    On RAC, you have to add redo log groups for each node (each thread).
    SQL> select group#, thread#, members, status from v$log;
    From your environment, I think you have 2 nodes = 2 threads.
    If these are threads 1 and 2, then adding redo groups should look like this:
    Example:
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ( '+DATA') SIZE 500M;
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 ( '+DATA') SIZE 500M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 13 ( '+DATA') SIZE 500M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 14 ( '+DATA') SIZE 500M;
    You should check that each thread has >= 2 groups and that a group has "INACTIVE" status before dropping it:
    SQL> select group#, thread#, members ,status from v$log;
    My idea: you should have 3 redo log groups for each node (thread).
    Good Luck
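    Once both threads have their bigger groups, the remaining old groups can be dropped after they go INACTIVE; a sketch (the group numbers are illustrative):
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;    -- forces a log switch and archive in all threads
    SQL> ALTER SYSTEM CHECKPOINT GLOBAL;      -- helps the old groups become INACTIVE
    SQL> select group#, thread#, status from v$log;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1; -- only once its status is INACTIVE
    SQL> ALTER DATABASE DROP LOGFILE GROUP 2;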

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with "Spotlight". I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
    The servers are not on RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
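    Before and after any change, it is worth measuring the actual redo write latency; a minimal sketch using the standard wait events (column formatting is illustrative):
    SQL> select event, total_waits,
                round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
           from v$system_event
          where event in ('log file sync', 'log file parallel write');
    -- 'log file parallel write' is LGWR's own write time; 'log file sync'
    -- is what sessions wait for at commit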

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles and many more customers who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is to have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    Slower than conventional disks on sequential I/O.
    Comment: Most Flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • Best way to move redo log from one disk group to another in ASM?

    Hi All,
    Our DB is a 10.2.0.3 RAC DB, and the database servers are Windows 2003 servers.
    We need to move more than 50 redo logs (some are regular and some are standby), which are not redundant, from one disk group to another. Say we need to move from disk group 1 to disk group 2. Here are the options we are thinking about, but we are not sure which one is best from an ease and safety perspective.
    Thank you very much for your help in advance.
    Shirley
    Option 1:
    1)     shutdown immediate
    2)     copy log files from disk group 1 to disk group2 using RMAN (need to research on this)
    3)     startup mount
    4)     alter database rename file ….
    5)     Open the database
    6)     delete the redo files from disk group 1 in ASM (how?)
    Option 2:
    1)     create a set of redo log groups in disk group 2
    2)     drop the redo log groups in disk group 1 when they are inactive and have been archived
    3)     delete the redo files associated with those dropped groups from disk group 1 (how?) (According to the Oracle manual: when you drop a redo log group, the operating system files are not deleted and you need to manually delete those files.) See the sketch after the options list.
    Option 3:
    1)     create a set of redo members in disk group 2 for each redo log group in disk group 1
    2)     drop the redo log members in disk group 1
    3)     delete the redo files from disk group 1 associated with the dropped members
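    A minimal sketch of Option 2 for one redo group and one standby group (the group numbers, the size, and the '+DG2' disk group name are illustrative; with OMF-style '+DG2' names, ASM handles the file placement and cleanup):
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 51 ('+DG2') SIZE 512M;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 61 ('+DG2') SIZE 512M;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;          -- once INACTIVE and archived
    SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 10; -- once it is not in use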

    Absolutely not, they are not even remotely similar concepts.
    OMF: Oracle Managed Files. It is an RDBMS feature: no matter what your storage technology is, Oracle will take care of file naming and location; you only have to define the size of a file, and in the case of a tablespace on an OMF DB configuration you only need to issue a command similar to this:
    CREATE TABLESPACE <TSName>;
    So the OMF environment creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
    On ASM it should only be required to specify '+DGroupName' as the datafile or redo log file argument so it can be fully managed by ASM.
    EMC: http://www.emc.com. No further comments on it.
    ~ Madrid
    http://hrivera99.blogspot.com

  • What is a redo log file? can anyone explain in simple terms

    I am confused about the difference between a redo log file and a physical datafile. Can anyone explain in simple terms?
    Thank u
    Regards,
    Vijay

    See Overview of Physical Database Structures

  • Standby Redo Log Files and Directory Structure in Standby Site

    Hi Guru's
    I just want to confirm: I know that if the directory structure is different, I need to mention these 2 parameters in the pfile.
    on primary site:
    DB_CONVERT_DATAFILE='standby','primary'
    LOG_CONVERT_DATAFILE='standby','primary'
    On secondary Site:
    DB_CONVERT_DATAFILE='primary','standby'
    LOG_CONVERT_DATAFILE='primary','standby'
    But I want to confirm whether I need to give the complete path of the directory in both of the above parameters,
    like:
    DB_CONVERT_DATAFILE='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    LOG_CONVERT_DATAFILE='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    Second confusion:
    After the standby redo log files created on the primary are transferred to the standby under the directory structure mentioned above, will restoring the backup of the primary DB along with the standby control file impact the physical standby redo logs placed in that location?
    Thanks in advance for your help

    Hello,
    Regarding your 1st question, you need to provide the complete path and not just the directory name.
    On the standby:
    db_file_name_convert='<Full path of the datafiles on primary server>','<full path of the datafiles to be stored on the standby server>';
    log_file_name_convert='<Full path of the redo logfiles on primary server>','<full path of the redo logfiles on the standby server>';
    Second confusion:
    After the standby redo log files created on the primary are transferred to the standby under the directory structure mentioned above, will restoring the backup of the primary DB along with the standby control file impact the physical standby redo logs placed in that location?
    How are you creating the standby database ? Using RMAN duplicate or through the restore/recovery options ?
    You can create the standby redo logs later.
    Regards,
    Shivananda
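    A concrete sketch of the standby-side settings, reusing the directory paths from the question (adjust to your actual layout):
    db_file_name_convert='/u01/oracle/app/oracle/oradata/primary','/u01/oracle/app/oracle/oradata/standby'
    log_file_name_convert='/u01/oracle/app/oracle/oradata/primary','/u01/oracle/app/oracle/oradata/standby'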

  • How to increase the size of Redo log files?

    Hi All,
    I have 10g R2 RAC on RHEL. As of now, I have 3 redo log files of 50MB each. I have used the redo log size advisor by setting fast_start_mttr_target=1800 to check the optimal size of the redo logs, and it is showing 400MB. Now I want to increase the size of the redo log files. How do I increase it?
    If we have to do it on production, how should we do it?
    I found the following in one of the articles:
    "The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance, however it must be balanced out with the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage."
    I did not understand the point "however it must be balanced out with the expected recovery time" in the paragraph given above.
    Can anybody help me?
    Thanks,
    Praveen.

    You don't have to shut down the database before dropping a redo log group, but make sure you have at least two other redo log groups. Also note that you cannot drop an active redo log group.
    Here is a nice link:
    http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_34.shtml
    And make sure you test this in test database first. Production should be touched only after you are really comfortable with this procedure.
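    As a sketch of the usual approach on a RAC primary (the new group numbers are illustrative, the 400M size comes from the advisor above, and omitting the file spec assumes OMF destinations are set; otherwise give explicit file names):
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 SIZE 400M;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 SIZE 400M;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 SIZE 400M;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;    -- switch out of the old groups
    SQL> select group#, thread#, status from v$log;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1; -- only when INACTIVE
    -- repeat the drop for the remaining old groups, then do the same for thread 2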

  • Where RFS exactly write redo data ?  ( archived redo log or standby redo log ) ?

    Good Morning to all ;
    I am getting a bit confused by the official Oracle documentation. REF_LINK: Log Apply Services
    Redo data transmitted from the primary database is received by the RFS on the standby system, where the RFS process writes the redo data to either archived redo log files or standby redo log files.
    On the standby site, does RFS write redo data to just one of these, or to both?
    Thanks in advance ..

    Hi GTS,
    GTS (DBA) wrote:
    Primary & standby log file sizes should be the same - this is okay.
    1) What are you trying to say about largest & smallest here? You are confusing me.
    Read: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB4752
    "Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size."
    GTS (DBA) wrote:
    2) What about group members? Should it be the same as the primary, or do I need to add some members additionally?
    Data Guard best practice for performance is to create one member per group in the standby DB. On the standby DB, one member per group is reasonable enough. Why? To avoid the write penalty of writing to more than one log file at the standby DB.
    SCENARIO 1: if in your source primary DB you have 2 log members per group, in the standby DB you can have 1 member per group; additionally, create an extra group.
                             primary   standby
    Members per group           2         1
    Number of log groups        4         5
    SCENARIO 2: you can also have this scenario, but I would not encourage it:
                             primary   standby
    Members per group           2         2
    Number of log groups        4         5
    GTS (DBA) wrote:
    All standby redo logs of the correct size have not yet been archived.
      - In this situation, can we force it on the standby site? Any possibilities?
    You cannot force it; just size your standby redo log files correctly and make sure you do not have network failures that will cause a redo gap.
    Hope there is clarity now.
    Tobi
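    To see where the redo is landing on the standby at any moment, a small sketch (run on the standby):
    SQL> select group#, thread#, sequence#, status from v$standby_log;
    -- a standby redo log group shows ACTIVE while RFS is writing into it
    SQL> select process, status, thread#, sequence# from v$managed_standby where process = 'RFS';
    -- if no suitable standby redo log is available, RFS writes to archived redo log files instead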
