Redolog doubt in RAC

Hi,
I have a small doubt regarding redo log corruption/deletion in a RAC instance:
OS: RHEL 4.5
Oracle: 10.2.0.1
DB_NAME=RAC ( 2 Instances, RAC1 and RAC2 are pointed to this DB).
One user (User1) is connected to the DB through the RAC1 instance, which has two redo log groups (one member each: redo1a.log and redo2a.log). I did a fresh log switch, so the current redo log is redo2a.log. User1 created an EMP table, inserted 20 records and committed. Note that the current redo log is still redo2a.log and there has been no log switch. No users other than User1 are connected to the DB.
Now I shutdown abort the RAC1 instance, and User1's session fails over to the RAC2 instance. When he runs select * from emp, he gets all 20 records. How is that possible? The inserted records and the commit are still only in redo2a.log of the RAC1 instance, there was no log switch, and RAC1 was shut down abruptly. How does he get the 20 records when connected to the RAC2 instance?
Please explain how this is possible, and excuse me if there is anything wrong with my question.

Whether it is a RAC or non-RAC system, the DBWn process writes dirty buffers to disk.
The DBWn process writes dirty buffers to disk under the following conditions:
1. When a checkpoint is issued.
2. When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.
3. Every 3 seconds
If the dirty buffers have not yet been written to disk, the committed changes are still safe: at commit time LGWR has already flushed the redo (including the commit record) to the online redo log, which sits on shared storage, and that redo is applied by instance (crash) recovery when the thread is next recovered.
In a RAC environment this works together with the Global Cache Service (Cache Fusion): changes made on node 1 are visible to node 2 through the global cache. When node 1 goes down abruptly, node 2 detects the failure and performs instance recovery for node 1's redo thread, reading that thread's online redo logs from shared storage. So when the user reconnects to node 2, the committed data is still available.
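As a quick way to see why this works, both threads' online redo log members live on shared storage and are visible from either node; a minimal check using the standard v$ views (run from any instance):
-- Each thread's online log members are on shared storage, readable by the surviving instance.
SELECT l.thread#, l.group#, l.status, f.member
FROM   v$log l JOIN v$logfile f ON f.group# = l.group#
ORDER  BY l.thread#, l.group#;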
Anil Malkai

Similar Messages

  • Doubts about RAC infrastructure with one disk array

    Hello everybody,
    I'm writing to you because we have a doubt about the correct infrastructure to implement RAC.
    Please, let me first explain the current design we are using for Oracle DB storage. Currently we are running several standalone instances on several servers, all of them connected to a SAN disk storage array. As we know this is a single point of failure, we keep redundant controlfiles, archived logs and redo logs both on the array and on the internal disk of each server, so in case the array completely fails we "just" need to restore the nightly cold backup, apply the archived logs and redo, and everything is OK. This can be done because we have standalone instances and we can accept this 1 hour of downtime.
    Now we want to use these servers and this array to implement a RAC solution. We know this array is our single point of failure, and we wonder if it's possible to have a multinode RAC solution (not RAC One Node) with redundant controlfiles/archived logs/redo logs on internal disks. Is it possible to have each node write full RAC controlfiles/archived logs/redo logs to internal disks and apply these files consistently when the ASM filesystem used for RAC is restored (i.e. with a softlink on an internal disk and using just one node)? Or is the recommended solution to have a second array to avoid this single point of failure?
    Thanks a lot!

    cssl wrote:
    Or maybe the recommended solution is to have a second array to avoid this single point of failure?
    Correct. This is the proper solution.
    In this case you can also decide to simply use striping on both arrays, then mirror array1's data onto array2 using ASM redundancy options.
    Also keep in mind that redundancy is also needed for the connectivity. So you need at least 2 switches to connect to both arrays, and dual HBA ports on each server, with 2 fibres running, one to each switch. You will need multipath driver software on the server to deal with the multiple I/O paths to the same storage LUNs.
    Likewise you need to repeat this for your Interconnect. 2 private switches, 2 private NICs on each server that are bonded. Then connect these 2 NICs to the 2 switches, one NIC per switch.
    Also do not forget spares. Spare switches (one each for storage and Interconnect). Spare cables - fibre and whatever is used for the Interconnect.
    Bottom line - not a cheap solution to have full redundancy. What can be done is to combine the storage connection/protocol layer with the Interconnect layer and run both over the same architecture. Oracle's Database Machine and Exadata Storage Servers do this. You can run your storage protocol (e.g. SRP) and your Interconnect protocol (TCP or RDS) over the same 40Gb Infiniband infrastructure.
    Thus only 2 Infiniband switches are needed for redundancy, plus 1 spare. With each server running a dual port HCA and a cable to each of these 2 switches.
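    As an illustration of the ASM option mentioned above, a diskgroup can be created with NORMAL redundancy and one failure group per array, so every extent is mirrored across the two arrays. A minimal sketch; the disk paths and names below are hypothetical:
    -- Mirror across the two arrays by placing each array's LUNs in its own failure group.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP array1 DISK '/dev/mapper/array1_lun1', '/dev/mapper/array1_lun2'
      FAILGROUP array2 DISK '/dev/mapper/array2_lun1', '/dev/mapper/array2_lun2';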

  • Doubt on RAC IP configurations

    Dear Legends,
    I'm trying to install RAC in VMware with OEL 5.7 64-bit and 11gR2, following the article "ORACLE-BASE - Oracle Database 11g Release 2 RAC On Linux Using VMware Server 2".
    Our Public host name "verac1.host.net" and "verac2.host.net"
    VM Settings is "BRIDGED"
    #PUBLIC
    192.168.1.180 verac1.host.net verac1
    192.168.1.181 verac2.host.net verac2
    #PRIVATE
    192.168.0.10 verac1.host.net verac1-priv
    192.168.0.11 verac2.host.net verac2-priv
    #Virtual
    192.168.1.182 verac1-vip.host.net verac1-vip
    192.168.1.183 verac2-vip.host.net verac2-vip
    May I know whether these settings are right?
    While trying to nslookup public hostname it is working from both nodes, but
    nslookup verac1-priv
    nslookup verac2-priv
    nslookup verac1-vip
    nslookup verac2-vip
    error as : ** server can't find verac1-priv: NXDOMAIN
    Should I proceed, or do I need to rectify this first?
    My Ip details
    Router: 192.168.1.10
    Thanks,
    Karthik

    Hi,
    After trying out the Doc IDs and references provided, I configured DNS on RAC1 and RAC2. Now my IP configuration is as follows:
    cat /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    #PUBLIC
    192.168.1.180 verac1.host.net      verac1
    192.168.1.181 verac2.host.net      verac2
    #PRIVATE
    10.10.2.10      verac1-priv.host.net      verac1-priv
    10.10.2.15      verac2-priv.host.net      verac2-priv
    #VIP
    10.10.1.10      verac1-vip.host.net       verac1-vip
    10.10.1.15      verac2-vip.host.net       verac2-vip
    Now the following works
    1. nslookup verac1, verac2
    [root@verac1 ~]# nslookup verac2
    Server:         192.168.1.180
    Address:        192.168.1.180#53
    Name:   verac2.host.net
    Address: 192.168.1.181
    [root@verac2 ~]# nslookup verac1
    Server:         192.168.1.181
    Address:        192.168.1.181#53
    Name:   verac1.host.net
    Address: 192.168.1.180
    2. nslookup verac1-vip, verac2-vip
    [root@verac1 ~]# nslookup verac2-vip
    Server:         192.168.1.180
    Address:        192.168.1.180#53
    Name:   verac2-vip.host.net
    Address: 10.10.1.20
    [root@verac2 ~]# nslookup verac1-vip
    Server:         192.168.1.181
    Address:        192.168.1.181#53
    Name:   verac1-vip.host.net
    Address: 10.10.1.10
    3. SCAN
    [root@verac1 ~]# nslookup verac-scan
    Server:         192.168.1.180
    Address:        192.168.1.180#53
    Name:   verac-scan.host.net
    Address: 10.10.1.11
    Name:   verac-scan.host.net
    Address: 10.10.1.12
    [root@verac1 ~]# nslookup verac-scan
    Server:         192.168.1.180
    Address:        192.168.1.180#53
    Name:   verac-scan.host.net
    Address: 10.10.1.12
    Name:   verac-scan.host.net
    Address: 10.10.1.11
    But the Following is NOT Working
    1. nslookup verac1-priv, verac2-priv
    [root@verac1 ~]# nslookup verac2-priv
    ;; connection timed out; no servers could be reached
    [root@verac2 ~]# nslookup verac1-priv
    ;; connection timed out; no servers could be reached
    2. Ping to Google or the outside world is NOT working
    [root@verac1 ~]# ping google.com
    ping: unknown host google.com
    If I disable eth1 and eth2, then the Google ping works... Not sure how to configure the DNS.
    Ref:
    Oracle 11gR2 2-node RAC on VMWare Workstation 8 – Part VII | The Gruff DBA
    Please help me fix this.
    Thanks,
    Karthik

  • Doubt in RAC

    Hi.,
    Please refer me to some good material for RAC in 10gR2.

    1) Oracle documentation for the version of the database and the operating system you need - neither of which you mention.
    2) http://www.apress.com > Oracle > RAC for Linux book by Julian Dyke, et. al.

  • How to add members in REDO group in RAC

    Hi All
    I need to know the syntax to add members to the redo log groups in a RAC database. Currently I have 4 groups (2 belonging to each thread) with one member in each group. I want to multiplex them.
    Thanks in advance

    Check out:
    http://www.lc.leidenuniv.nl/awcourse/oracle/rac.920/a96600/mancrea.htm
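    For reference, the syntax is the same as on a single instance: you add a member to each existing group, and the group numbers already imply the thread. A minimal sketch, assuming your groups are 1-4 and a hypothetical second destination '+REDO2' (with ASM/OMF you can give just the diskgroup; on a filesystem, give a full file name):
    -- Add a second member to every existing group to multiplex the online redo logs.
    ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 3;
    ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 4;
    -- Verify with: SELECT group#, thread#, member FROM v$log JOIN v$logfile USING (group#);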

  • Can we have Multiple Instance on same Node in Oracle 10g RAC

    Hi All,
    I am planning to implement RAC in Oracle 10g. Before that I have one doubt regarding RAC.
    My question is: "Can we create multiple instances on the same node (server)?"
    Is it possible?
    Any ideas or thoughts would be appreciated.
    Thanks in Advance.
    Anwar

    This is where it is important to keep the separation between 'database' and 'instance'.
    A database is the set of files that contains the data (and the redo, control files, etc). A database does nothing by itself, other than take up lots of disk space.
    An instance is the CPU cycles (running software) and the memory used to control the database.
    In Oracle RAC, you can have as many instances controlling one database [at the same time] as you want (within reason). Each instance must be able to access the disk(s) that contains the database.
    These multiple instances can be on the same computer (effectively taking up a lot of server memory and CPU for nothing) or they can be on separate computers.
    If they are on separate computers, the disk subsystems must be able to be shared across computers - this is occasionally done using operating system clusterware and is the main reason why clusterware is required at all. (This is also the toughest part of the pre-requisites in setting up a RAC and is very vendor dependent unless you use ASM.)
    These instances need a communication connection to coordinate their work (usually a separate network card for each computer) so they do not corrupt the disk when they are trying to access the same file, and possibly the same block, at the same time.
    In a RAC configuration, instances can be added, started, run, stopped and removed independently of each other (allowing a lot of high availability) or can be started and stopped as a group.
    Each instance gets its own SID, which is in no way different from a non-RAC SID. It's just the name of a service that can be invoked. The neat thing is that the SID
    a) helps the DBA keep things straight by letting us talk about 'instance A' (the Oracle software running on computer A) vs 'instance B' when starting, stopping and managing;
    b) helps the application by providing targets that can be listed in the TNSNAMES.ORA [against one service alias], which is used by Oracle Networking to provide automated load balancing or failover (instance/SID A is not available, I guess I'll try the next in the list)
    Hope that helps the concept level a bit.
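    To make the database-vs-instance distinction concrete, a quick query against the standard gv$instance view, run from any node, lists every instance currently servicing the one shared database:
    -- One database, several instances: each row is an instance with its own name and host.
    SELECT inst_id, instance_name, host_name, status FROM gv$instance;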

  • Rac node evicted and asm related

    Hi friends
    I have a few doubts about the RAC environment:
    1. In a 2-node RAC, while adding a datafile to a tablespace, if you forget to mention the '+' (diskgroup), what will happen? Will the datafile still be created, or will it throw an error? If it is created, where exactly is it located, how can users on the other node work with that tablespace, and what steps make the datafile usable for all nodes?
    2. In a RAC environment, how do I check how many sessions are connected to a particular node? (A sample query is sketched at the end of this item.)
    3. If a node is evicted due to a network failure and we then rebuild the network, are there any manual steps to bring the failed node back into the cluster, or does it become available in the cluster automatically? Which service performs this activity?
    4. While configuring Clusterware you choose the voting disk and OCR locations and the redundancy. If you go for normal redundancy, how many disks can you select for each file, one or two?

    [grid@srvtestdb1 ~]$ ps -ef|grep tns
    root 65 2 0 Aug29 ? 00:00:00 [netns]
    grid 4449 1 0 Aug29 ? 00:00:25 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
    grid 4454 1 0 Aug29 ? 00:00:23 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
    grid 4481 1 0 Aug29 ? 00:00:33 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
    grid 37028 1 0 09:38 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
    grid 37901 36372 0 09:45 pts/0 00:00:00 grep tns
    [grid@srvtestdb1 ~]$
    [grid@srvtestdb1 ~]$ srvctl config scan_listener
    SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
    SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
    SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
    [grid@srvtestdb1 ~]$
    [grid@srvtestdb1 ~]$ srvctl status scan_listener
    SCAN Listener LISTENER_SCAN1 is enabled
    SCAN listener LISTENER_SCAN1 is running on node srvtestdb1
    SCAN Listener LISTENER_SCAN2 is enabled
    SCAN listener LISTENER_SCAN2 is running on node srvtestdb1
    SCAN Listener LISTENER_SCAN3 is enabled
    SCAN listener LISTENER_SCAN3 is running on node srvtestdb1
    [grid@srvtestdb1 ~]$ srvctl status scan
    SCAN VIP scan1 is enabled
    SCAN VIP scan1 is running on node srvtestdb1
    SCAN VIP scan2 is enabled
    SCAN VIP scan2 is running on node srvtestdb1
    SCAN VIP scan3 is enabled
    SCAN VIP scan3 is running on node srvtestdb1
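    Regarding question 2 above (sessions per node), a minimal sketch using the standard gv$session view:
    -- Count user sessions per instance (inst_id identifies each node's instance).
    SELECT inst_id, COUNT(*) AS sessions
    FROM   gv$session
    WHERE  type = 'USER'
    GROUP  BY inst_id
    ORDER  BY inst_id;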

  • Need opinions for ScaleUp or ScaleOut

    Hi all,
    I need some advice to resolve my doubt on RAC, because I have to
    size a RAC environment for my business.
    I have been shown two different RAC solutions:
    1) Two nodes of Bl680c (until 4 CPU quad Core)
    2) Four nodes of Bl480c (only 2 CPU quad Core)
    The first solution is surely fine but is more expensive, and I would like
    to know if its cost is justifiable; are more CPUs an effective benefit?
    So if a BL480c server with dual CPUs costs $6,000 and a BL680c with
    4 CPUs costs $12,500 (double), in a RAC environment is it better to have 2 nodes of
    BL680c or 3-4 nodes of BL480c?
    Thanks for the advice
    Andrew

    Scale-Up is the ability of your configuration to sustain the same response time when both the workload and resources increase proportionally.
    Scale-up is 'usually' achieved by increasing the amount of hardware resources i.e. CPU and memory.
    Scale-out is related to adding hardware resources by adding another node.
    Sometimes, depending on the application design, the scale-up option on existing hardware can be more efficient than scale-out.
    The simple explanation is the overhead of maintaining RAC cache coherency and DB locks between the nodes/instances, which occupies some of the available resources and 'takes them away' from the application.
    The main goal you would like to achieve (I assume) is that the application can scale. You should understand that a correctly configured RAC system will give you higher availability than a single-instance, single-server configuration, but that does not at the same time guarantee that your application will scale too.
    So, if you consider your design from a scalability point of view, you should evaluate it in terms of application response times.
    hth,
    goran

  • Want to move datafiles, controlfiles, redolog on new ASM Disks (11gR2 RAC)

    Hi Guys,
    Setup: Two Node 11gR2 (11.2.0.1) RAC on RHEL 5.4
    Existing disks are from Old SAN & New Disks are from New SAN.
    Can I move all datafiles (+DATA), controlfiles (+CTRL) and redo logs (+REDO) onto the new ASM disks by adding the new disks to the same diskgroups and dropping the older disks from the existing diskgroups, taking advantage of the ASM rebalancing feature?
    1) Add the required disks to the DATA diskgroup:
    ALTER DISKGROUP DATA ADD DISK
    '/dev/oracleasm/disks/NEWDATA3' NAME NEWDATA_0003,
    '/dev/oracleasm/disks/NEWDATA4' NAME NEWDATA_0004,
    '/dev/oracleasm/disks/NEWDATA5' NAME NEWDATA_0005
    REBALANCE POWER 11;
    Check rebalance status from v$ASM_OPERATION.
    2) When rebalance completes, drop the old disks.
    ALTER DISKGROUP DATA DROP DISK
    NEWDATA_0000,
    NEWDATA_0001
    REBALANCE POWER 11;
    Check rebalance status from v$ASM_OPERATION.
    3) Do the same for the redo log and controlfile diskgroups.
    I hope I can do this activity even while the database is up. Is there any possibility of database block corruption? (Or is it necessary to perform the above steps with the database down?)
    Your quick responses would be appreciated; it's an urgent requirement. Thanks.
    Regards,
    Manish

    Manish Nashikkar wrote:
    Can I move all datafiles (+DATA), controlfiles (+CTRL) and redo logs (+REDO) onto the new ASM disks by adding the new disks to the same diskgroups and dropping the older disks, taking advantage of the ASM rebalancing feature? ... Is there any possibility of database block corruption, or is it necessary to perform the above steps with the database down?
    Hi Manish,
    Yes, you can do that by adding the new disks to the existing diskgroup and then dropping the old disks. The good thing is that this can be done online; however, you need to make sure the rebalance power suits your business window: a higher rebalance power makes the rebalance complete faster, but it also consumes more resources.
    Cheers
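    As noted above, the rebalance can be monitored from the ASM instance until it completes (no rows returned, or EST_MINUTES reaching 0), for example:
    -- Shows any ongoing rebalance with its power and estimated time remaining.
    SELECT group_number, operation, state, power, sofar, est_work, est_minutes
    FROM   v$asm_operation;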

  • Oracle Rac 11.2.0.3 doubts

    Hi experts,
    Current system info:
    Server 1 with Red Hat 6.5 and Oracle ASM, SAP ECC 6, Grid 11.2.0.3 standalone installation
    Target system info:
    Server 1 and server 2 running RAC 11.2.0.3 with SAP ECC 6 and Red Hat 6.5, Grid with cluster option
    We are trying to convert our current system to Oracle RAC but have some doubts.
    We are following  "Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2.0.2 and Oracle Real Application Clusters 11g Release 2: A Best Practices Guide"  so:
    On page 29 it says: "Prepare the storage location for storing the shared ORACLE_HOME directory in the cluster. The Oracle RDBMS software should be installed into an empty directory, accessible from all nodes in the cluster." The same applies to ORACLE_BASE for the RDBMS, the SAP subdirectories (sapbackup, sapcheck, sapreorg, saptrace, oraarch, etc.) and the home directories for the SAP users ora<SID> and <SID>adm, which go to a shared filesystem.
    1. Can we just use NFS for sharing them, or what is the recommended software on Red Hat for doing it?
    Because note 527843 says:
    "You must store the following components in a shared file system (cluster, NFS, or ACFS)". Here it says we can, but further down, in the Linux section, the note says:
    RAC 11.2.0.3/4 (x86 & x86_64 only):
    Oracle Clusterware 11.2.0.3/4 + ASM/ACFS 11.2.0.3/4 (Oracle Linux 5, Oracle Linux 6, RHEL 5, RHEL 6, SLES 10, SLES 11)
           Oracle Clusterware 11.2.0.3/4 + NetApp NFS or
    Oracle Clusterware 11.2.0.3/4 + EMC Celerra NFS
    It does not mention just NFS.
    2. In our test system, we want to back up all Oracle configuration files on the file systems, then remove the standalone Oracle Grid installation, install Grid with the cluster option, install the RDBMS with the RAC option, and then follow the guide. Is that correct?
    Regards

    Hi Ramon,
    1. Can we just use NFS for sharing them, or what is the recommended software on Red Hat for doing it?
    An NFS mount as suggested in the SAP documentation should work. The use of ACFS always requires a specific Oracle Grid Infrastructure (GI) Patch Set Update (PSU). Oracle Support Note 1369107.1 contains details about which GI PSU is required when you use ACFS with a specific RHEL update, SLES service pack or Oracle UEK version.
    2. In our test system, we want to back up all Oracle configuration files, remove the standalone Grid installation, install Grid with the cluster option, install the RDBMS with the RAC option, and then follow the guide. Is that correct?
    You may perform a DB backup using your backup tools and then scrap the existing Grid setup. Configure RAC and then restore the backup into the new configuration as per the SAP guidelines under
    Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2 with Oracle Real Application Clusters 11g Release 2
    Hope this helps.
    Regards,
    Deepak Kori

  • How to create a standby redolog from rac to non rac ??

    Hi,
    How do we create standby redo logs for a RAC primary with a non-RAC DR setup?
    In RAC we create them by specifying the thread number for each instance, but this will be replicated to the standby, so how should they be created on a single-instance DR?
    Please help.

    854393 wrote:
    Thanks Shivanandha,
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database. In maximum performance mode you can keep the same number of redo logs, but having one more standby redo log group for each thread is recommended, and it is mandatory for the other protection modes.
    How do we create standby redo logs for a RAC primary with a non-RAC DR setup?
    Yes, whether your DR is RAC or non-RAC, you must have both threads, because each instance's redo lives in its own thread. So to receive redo from both instances, you must have standby redo log groups for both threads even though the DR is not RAC.
    Hope This helps.
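    A minimal sketch of creating the standby redo logs for both threads on the non-RAC standby, assuming 2 threads with 2 online log groups of 50 MB each (so (2+1)*2 = 6 standby groups) and OMF configured; the group numbers are hypothetical, and without OMF you would add a file specification to each statement:
    -- Thread 1 standby redo log groups
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 50M;
    -- Thread 2 standby redo log groups
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 14 SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 15 SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 16 SIZE 50M;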

  • Unable to delete the Redolog of Thread 2 in RAC

    Hi,
    Oracle 10gR2 (10.2.0.4), RAC environment, instance names orcl1 and orcl2.
    Recently the orcl2 node crashed, so we rebuilt the machine and added the node back to the cluster.
    There was a failed attempt in DBCA (add instance part) during which redo log groups 5 and 6 were created in ASM.
    We then created instance orcl2 as thread 3 instead of thread 2 and successfully added the node to the cluster.
    The thread 2 redo logs (redo05, redo06) created by the failed DBCA add-instance attempt cannot be removed from the v$log entries.
    GROUP#  THREAD#  MEMBER                     ARCHIVED  STATUS    Size (MB)
         1        1  +ORCLDG3/orcl/redo01.log  YES       ACTIVE           50
         2        1  +ORCLDG3/orcl/redo02.log  NO        CURRENT          50
         5        2  +ORCLDG3/orcl/redo05.log  YES       UNUSED           50
         6        2  +ORCLDG3/orcl/redo06.log  NO        CURRENT          50
         8        3  +ORCLDG3/orcl/redo08.log  NO        CURRENT          50
         9        3  +ORCLDG3/orcl/redo09.log  YES       INACTIVE         50
    When I try to disable thread 2 I get the errors below, as redo logs 05 and 06 are not physically present in ASM:
    SQL> alter database disable thread 2;
    alter database disable thread 2
    ERROR at line 1:
    ORA-00313: open failed for members of log group 6 of thread 2
    ORA-00312: online log 6 thread 2: '+ORCLDG3/orcl/redo06.log'
    ORA-17503: ksfdopn:2 Failed to open file +ORCLDG3/orcl/redo06.log
    ORA-15173: entry 'redo06.log' does not exist in directory 'orcl'
    Please help in removing the thread 2 redo log files, as these warnings are written to the alert log every second and fill the mount point.
    Regards

    The method suggested in the thread is not working; please find the errors below:
    SQL> alter database clear logfile group 5;
    SQL> alter database clear logfile group 6;
    alter database clear logfile group 6
    ERROR at line 1:
    ORA-00350: log 6 of instance orcl2 (thread 2) needs to be archived
    ORA-00312: online log 6 thread 2: '+orclDG3/orcl/redo06.log'
    SQL> alter database drop logfile group 5;
    alter database drop logfile group 5
    ERROR at line 1:
    ORA-01567: dropping log 5 would leave less than 2 log files for instance orcl2 (thread 2)
    ORA-00312: online log 5 thread 2: '+orclDG3/orcl/redo05.log'
    SQL> alter database drop logfile group 6;
    alter database drop logfile group 6
    ERROR at line 1:
    ORA-01623: log 6 is current log for instance orcl2 (thread 2) - cannot drop
    ORA-00312: online log 6 thread 2: '+orclDG3/orcl/redo06.log'
    SQL> alter database disable thread 2;
    alter database disable thread 2
    ERROR at line 1:
    ORA-00313: open failed for members of log group 6 of thread 2
    ORA-00312: online log 6 thread 2: '+orclDG3/orcl/redo06.log'
    ORA-17503: ksfdopn:2 Failed to open file +orclDG3/orcl/redo06.log
    ORA-15173: entry 'redo06.log' does not exist in directory 'orcl'
    Any ideas,
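    One commonly suggested sequence for this situation, worth testing outside production first because the log files no longer physically exist in ASM, is to clear the unarchived group, disable the orphaned thread, and then drop its groups. A sketch:
    -- Re-initializes group 6 and bypasses the ORA-00350 'needs to be archived' error.
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 6;
    -- With no group in thread 2 current, the closed thread can be disabled.
    ALTER DATABASE DISABLE THREAD 2;
    -- Once the thread is disabled, its groups can be dropped.
    ALTER DATABASE DROP LOGFILE GROUP 5;
    ALTER DATABASE DROP LOGFILE GROUP 6;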

  • Redologs in RAC

    Dear all,
    We are running RAC 10.2.0.4 on Solaris 5.10
    This is a highly transactional DB. Stating this as the reason, the DBA had created 9 redo log groups on instance 1 and 6 redo log groups on instance 2. Recently we had a problem:
    MED1 - Can not allocate log, archival required
    Tue Aug 18 09:46:05 2009
    Thread 1 cannot allocate new log, sequence 53164
    All online logs needed archiving
    So he increased the number of redo log groups and the number of archiver processes (log_archive_max_processes=10).
    Can we have a different number of redo log groups (of the same size) for each instance in this RAC database?
    Please advise
    Kai

    >
    > The DBA had created 9 redo log groups on instance 1 and 6 redo log groups on instance 2 ... Can we have a different number of redo log groups (of the same size) for each instance in this RAC database?
    >
    Although it is common to have the same number of redo log groups for each instance, that is not strictly required. If the first instance really has much higher peak activity than the second, it may be a valid approach to increase the number of redo log groups for that instance. With more log groups, the chances are better that the archivers have finished archiving a group by the time it needs to become current again once the peak load is over.
    In order to make that approach successful, there has to be a phase of less activity on the first instance, though. If there is a constant high load, it won't help.
    Kind regards
    Uwe
    http://uhesse.wordpress.com
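    For completeness, a minimal sketch of adding an extra group to the busier thread only; the group number, size and diskgroup below are hypothetical, so adjust the file specification to your storage:
    -- Add one more online redo log group to thread 1 only.
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 16 ('+REDO') SIZE 512M;
    -- Check the resulting distribution of groups per thread.
    SELECT thread#, COUNT(*) AS log_groups FROM v$log GROUP BY thread# ORDER BY thread#;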

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. Which type of hard drive is required for OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from the shared drives, approximately what size of hard disk is required for each node (for just a testing environment)?
    Sincerely appreciate a reply!!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use RAW disk devices. This method is not frequently used due to the unpleasant management overhead and long-term manageability for RAW devices.
    2. Many sites use cluster filesystems. On Linux and Windows, Oracle offers OCFS2 as one (free) cluster filesystem. Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (like GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
    As for hardware--I have not seen any hardware capable of easily connecting multiple servers to internal storage. So, shared storage is always (in my experience) housed externally. You can find some articles on OTN and other sites (search Google for them) that use firewire drives or a third computer running openfiler to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!
    Message was edited by:
    Dan_Norris

  • RMAN on ASM RAC doubts

    Hi!
    I have some general questions for RAC 10gR2 (10.2.0.4 PS4) on Linux Itanium with ASM (with OMF and the default template) on raw blocks (not raw devices!).
    1) Is it important that backup/restore of DB is done from master node of the RAC or any node can be involved in this operation?
    2) When you run "alter system archive log current" in an RMAN script, does it switch the log file for all instances (in SQL*Plus it is not like that)? Until now, on a single instance (no RAC), I was using "alter system switch logfile" in RMAN with great success... Now I'd like to verify this doubt...
    3) I have read the Oracle docs about the Flash Recovery Area placed on ASM for backups, archived logs, etc. I'm fully aware of its setup, but I'd like to read something like a "best practice" on that subject with some thresholds and examples... Regardless of official Oracle saying that it is more than recommended to use that technology for this purpose in all upcoming releases, I'd like to experiment before starting to use it in the PROD environment.
    4) If we use ASM on raw blocks (this is a new feature for Red Hat 5.x Linux), how do we back up the ASM instance itself? Is RMAN capable of that?
    5) How do we ensure that, with OMF on ASM, the log files and archived log files have the SCN number in their names?
    6) If anyone has some links or notes (outside the official Oracle docs) about this subject... thanks in advance.
    Regards,
    Damir

    Hi Hermant!
    2) "Since the command specifies the INSTANCE "
    I do not think so.
    In RMAN you connect to the database, not to any particular instance... so this command should(!?) archive the CURRENT redo log of every instance. This was my doubt... still waiting for proof.
    And an example of an ad hoc full backup script to disk (AFR):
    run{
    # switch archive logs for all threads
    sql "alter system archive log current";
    backup database;
    # switch archive logs for all threads
    sql "alter system archive log current";
    BACKUP ARCHIVELOG FROM TIME "(trunc(sysdate)+(18.3/24))" filesperset 50;
    allocate channel t1 type disk;
    restore database preview;
    RESTORE DATABASE PREVIEW SUMMARY;
    release channel t1;
    }
    To be more specific: in RMAN, if you do not specify an exact instance name, are all commands meant as global (applying to all instances)?
    Hope it is clear now.
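    One way to settle the doubt above empirically is to note the highest sequence per thread before and after running the command; if both threads advance, the command acted on all enabled threads, which matches the documented behaviour of ARCHIVE LOG CURRENT without a THREAD clause. A sketch with standard views:
    -- Before and after "ALTER SYSTEM ARCHIVE LOG CURRENT", compare the current sequence per thread.
    SELECT thread#, sequence#, status FROM v$log ORDER BY thread#, sequence#;
    -- Then confirm the newly archived logs for every thread.
    SELECT thread#, MAX(sequence#) AS last_archived FROM v$archived_log GROUP BY thread# ORDER BY thread#;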
