RAC and ASM with geographically separated disks

We are planning to use RAC and ASM with four nodes. Two nodes will be at one location and the other two at a different location. The ASM disks will also be available locally at each site. Is it possible to configure ASM with geographically separated disks?
We are also considering two RAC and ASM setups with two nodes each, with Streams replication between the RACs.
I want to know which is the best method to implement: one RAC with four nodes, or two RACs with two nodes each with Streams replication between them?

Apart from the latency issue, which has been mentioned by others, you also need to mirror your disks between the two sites and be very careful about split brain scenarios under node or site crashes.
You need to mirror the disks between the sites in case one site crashes completely and loses all network access. The only way the second site can carry on is if it has access to all the data disks and the cluster control disks (terms vary for this, such as quorum or voting disk). If you do not mirror, you simply do not have a resilient design. Resilience is about eliminating all single points of failure, which means duplicating everything, i.e. mirroring.
Mirroring between two remote sites also adds volume to the data traffic between them, and it would need to be synchronous rather than asynchronous to guarantee no data loss. The commit for each transaction would then be limited by the round-trip time to write to the remote site and get the acknowledgement back.
Split brain is something you need to avoid with remote clusters, and local clusters too. If the network goes down between the two sites and both sites have local copies of all of the data and cluster control disks, which site becomes the primary and which one becomes the standby? You cannot have both sites coming back up as primary with active databases and accepting transactions! At this point you have split brain, and each side is processing transactions independently, and both databases are now deviating from each other with different data changes.
I don't think a 4-node RAC environment split across geographical distances is really viable, except under small transaction volumes where response time (transaction time) is not critical. If you have high volumes and need fast transactions then you should use only local RAC configurations, and replicate between the two sites asynchronously. Data Guard is easier, but you need to decide whether you need Streams and how to make it work for you.
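To make the mirroring point concrete: the standard approach for an extended-distance cluster is an ASM disk group with normal redundancy and one failure group per site, so every extent is mirrored across the sites. A minimal sketch only; the disk group name and device paths are placeholders, and in practice you would also need a third location (commonly an NFS-hosted voting disk) to arbitrate a site failure:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP site_a DISK '/dev/rdsk/sitea_d1', '/dev/rdsk/sitea_d2'  -- site A disks
       FAILGROUP site_b DISK '/dev/rdsk/siteb_d1', '/dev/rdsk/siteb_d2'; -- site B mirrors

With normal redundancy, ASM keeps one copy of each extent in each failure group, which is exactly the cross-site mirroring described above.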
John

Similar Messages

  • Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?

    Hi all,
    I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) Clusterware
    for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses the /dev/rdsk/cWtXdYsZ naming convention at the OS level for disks. However, how would I map this to the Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

    You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
    Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a positive benefit over Clusterware: SC provides an automatically managed, consistent namespace, whereas Clusterware on its own forces you to manage the symbolic links (or worse, mknods) to create a consistent namespace!
    So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager, you would use /dev/md/<setname>/rdsk/dXXX, and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
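    To illustrate, once the SC namespace is in place the ASM side is just a matter of pointing the disk string at it and checking what ASM discovers. A minimal sketch, assuming the ASM instance uses an spfile; the slice pattern is a placeholder (adjust dXsY to the slices you actually set aside for ASM):

    SQL> -- let ASM discover the Solaris Cluster DID devices (pattern is an assumption)
    SQL> ALTER SYSTEM SET asm_diskstring = '/dev/did/rdsk/d*s6' SCOPE=BOTH;
    SQL> -- verify which devices ASM can see and their header status
    SQL> SELECT path, header_status FROM v$asm_disk;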
    Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
    Tim
    ---

  • Cloning 11i non split to split configuration with RAC and ASM

    Hello Hussein,
    I just want to ask for some ideas on the best way to clone our UAT/DEV environment to our PROD environment.
    Right now there is no RAC and ASM setup for the source system; it is still on 9.2.0.5, but the plan is to convert to ASM + 10g RAC.
    Can you please let me know the best way to set up PROD from our UAT environment?
    Here are my options:
    1. Install a fresh prod system
    2. Convert source system to ASM + RAC before cloning to target - setup as below:
    (Source)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 32bit
    (Target)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 64bit
    3. Clone existing target system (non RAC and non ASM)
    copy source APPL directories to target
    Install 64-bit Oracle 10g on the target system
    clone/convert the source database (9i 32-bit) to the target database (10g 64-bit) using RMAN.
    Install clusterware 11gR2
    Convert database to RAC
    Can you please let me know the best approach to take? For a fresh install it will take some time to apply the current patch level and the other patches.
    Option 2 seems a bit complicated, doing a 32-bit to 64-bit clone on RAC. I would appreciate it if you could provide the doc ID for this.
    For Option 3, I am not sure how smooth the conversion from 32-bit to 64-bit will be.
    Appreciate your insights on this.
    Regards,
    jeffrey

    Hi Jeffrey,
    Since you are on 9.2.0.5, I assume you are running Oracle Apps 11i and not R12.
    1. Install a fresh prod system
    This option requires applying all patches (as you mentioned above), plus you will have to convert to ASM/RAC on the source/target instance. I would not recommend this approach since it would require extra work/time.
    2. Convert source system to ASM + RAC before cloning to target - setup as below:
    (Source)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 32bit
    (Target)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 64bit
    What are the source and target database versions?
    As per "Certified RAC Scenarios for E-Business Suite Cloning [ID 783188.1]", this is supported by Rapid Clone. So, in this case you need to convert the source instance to RAC and migrate to ASM, then use Rapid Clone to clone the application/database.
    Cloning Oracle Applications Release 11i with Rapid Clone [ID 230672.1] -- 6. Cloning a RAC System
    You will then have to convert the target database from 32-bit to 64-bit (a sketch of this step follows below).
    3. Clone existing target system (non RAC and non ASM)
    copy source APPL directories to target
    Install 64 bit Oracle 10g to the target system
    clone/convert database source (9i 32bit) to database targer (10g 64 bit) using RMAN.
    Install clusterware 11gR2
    Convert database to RAC
    Here you will have to convert to RAC/ASM on both the source and target instances. You eliminate the patching part of Option 1, but again extra work needs to be done to convert the database from 32-bit to 64-bit on the target instance, plus converting to RAC and migrating to ASM (on both instances).
    Based on the above, I would recommend you go with Option 2.
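    For the 32-bit to 64-bit step referenced in Option 2, the usual mechanism once the database files are under the 64-bit Oracle home is to recompile the PL/SQL dictionary for the new word size with utlirp.sql. This is an outline only, not the full procedure; follow the exact MOS note for your source and target versions:

    SQL> STARTUP UPGRADE                    -- restricted mode for the conversion
    SQL> @?/rdbms/admin/utlirp.sql          -- invalidate PL/SQL for the new word size
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP
    SQL> @?/rdbms/admin/utlrp.sql           -- recompile all invalid objects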
    Thanks,
    Hussein

  • Question about RAC and ASM

    Hi,
    We are thinking about building RAC using ASM for an OEM database. It will have two nodes, Oracle 10g, and a Hitachi SAN, with Solaris (or Linux). I have a few questions about RAC and ASM.
    1) Do I need to have an ASM instance running on each node? (Most likely yes, but I want to make sure.)
    2) Can I share an oracle_home between the ASM instance and the database instance? What is the best choice?
    3) I'm planning to use shared disks for all files, all databases. What are the pros/cons?
    4) What should be the installation procedure? Meaning, first create the ASM instance on each node, then install the cluster software, then build the RAC databases? Can someone explain?
    5) I believe RMAN is the only option for backup since I'm using ASM, correct?
    I'm a newbie to RAC and ASM.
    Thanks,

    user4866039 wrote:
    1) Do I need to have an ASM instance running on each node?
    Yes.
    2) Can I share an oracle_home between the ASM instance and the database instance? What is the best choice?
    In 10g you can; in 11gR2 you cannot. It might also be better to separate them because it gives you more flexibility with patching.
    3) I'm planning to use shared disks for all files, all databases. What are the pros/cons?
    If you share your oracle_home you won't be able to do rolling upgrades, so I recommend keeping the oracle_homes local.
    4) What should be the installation procedure?
    Follow the install guide for your respective version. For 10g you would install Clusterware first, then ASM, and the database last.
    5) I believe RMAN is the only option for backup since I'm using ASM, correct?
    Pretty much. You could stop your database and dump the raw devices, or use asmcmd/asmftp, but RMAN is definitely the best choice.
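    As a minimal sketch of the RMAN route (assuming a configured backup destination such as a flash recovery area; nothing here is specific to ASM, which is the point):

    $ rman target /
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;   -- reads datafiles straight out of ASM
    RMAN> LIST BACKUP SUMMARY;               -- confirm what was written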
    Bjoern

  • Clone multi-node RAC and ASM to single node

    Hi everyone.
    I need to clone a system with 3 application servers and a 2-node Oracle RAC database with ASM to a single node. The operating system is RHEL 5.
    I have looked at some Metalink notes but could not find anything: I found notes on multi-node to single-node cloning, but nothing on RAC to non-RAC.
    The eBS version is 11.5.10.2 and the database version is 10.2.0.3.
    Is this clone possible?
    Thanks very much.

    Hi User;
    Please follow the links below and see if they are helpful:
    EBS R12 with RAC and non-RAC
    Re: RAC to single instance ebs R12
    Re: Clone Oracle Apps 11.5.10.2 RacDB to Non-RAC DB
    Re: CLONING R12 RAC to NON RAC CLONING giving error RMAN-05517 temporary file
    Migrating the DB-Tier (DB and CM) to Two node non RAC cluster
    Also check:
    http://www.oracle.com/technology/pub/articles/chan_sing2rac_install.html
    Regards,
    Helios

  • Guidelines for SGA and PGA tuning for 10gR2 RAC and ASM?

    I am looking for tuning information on SGA and PGA sizing for Oracle 10g (10.2) RAC and ASM, for a 4TB mixed OLTP and DSS environment on the Solaris 10 platform.

    We are running Solaris 10 SPARC 64-bit with Oracle 10gR2 RAC Enterprise Edition and ASM, on Sun servers with 32GB of RAM, for a 4TB OLTP database.
    It is in the design phase, so I do not have an existing AWR or Statspack report yet. Is there a best-practices guideline from Oracle on how to size the SGA parameters (e.g. shared_pool_size) and the PGA for this environment?
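    There is no single answer without workload data, but 10gR2 automatic memory management gives a starting point: set sga_target (bounded by sga_max_size) and pga_aggregate_target, and let Oracle size the individual pools. A sketch only; the values are placeholders, not recommendations for a 32GB host:

    SQL> ALTER SYSTEM SET sga_max_size = 16G SCOPE=SPFILE SID='*';          -- hard upper bound for the SGA
    SQL> ALTER SYSTEM SET sga_target = 16G SCOPE=SPFILE SID='*';            -- enables automatic SGA tuning
    SQL> ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE=SPFILE SID='*';   -- aggregate PGA target

    Once the system is running, v$sga_target_advice and v$pga_target_advice (and later AWR) will tell you whether the targets should move.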

  • Oracle database 10g RAC and ASM configuration

    Hi all,
    I want to ask everybody something about Oracle 10g RAC and ASM configuration. We plan to migrate from 9i to Oracle 10g, and we will begin configuring Oracle, but we have to decide which configuration is best.
    Our materials are below:
    Hardware: RP 3440 (HP)
    OS : HPUX 11i Ver 1
    Storage: EVA 4000 (eva disk group)
    The problem is:
    Our supplier recommends HP Serviceguard + HP Serviceguard Extension for RAC + RAC with raw devices as the configuration.
    But we want to use Oracle Clusterware + RAC + ASM.
    My question is whether anybody knows which is the best configuration; we want to use ASM.
    Can we use HP Serviceguard and ASM together?
    Any documentation or links explaining Oracle RAC and ASM configuration would be appreciated.
    Thanks for your help.
    Regards.
    raitsarevo

    Hello,
    there is no extra RAC software package; the option is only offered if one of the supported cluster layers for the respective OS has been installed beforehand.
    10.1.0.3 looks like a complete redesign, but it is still a patch; you have to install 10.1.0.2 first.

  • Oracle database 10g RAC and ASM installation

    Hi all,
    I want to ask everybody something about Oracle 10g RAC and ASM configuration. We plan to migrate from 9i to Oracle 10g, and we will begin configuring Oracle, but we have to decide which configuration is best.
    Our materials are below:
    Hardware: RP 3440 (HP)
    OS : HPUX 11i Ver 1
    Storage: EVA 4000 (eva disk group)
    The problem is:
    Our supplier recommends HP Serviceguard + HP Serviceguard Extension for RAC + RAC with raw devices as the configuration.
    But we want to use Oracle Clusterware + RAC + ASM.
    My question is whether anybody knows which is the best configuration; we want to use ASM.
    Can we use HP Serviceguard and ASM together?
    Any documentation or links explaining Oracle RAC and ASM configuration would be appreciated.
    Thanks for your help.
    Regards.
    raitsarevo

    Hi Experts ,
    Using this command
    $ emca
    STARTED EMCA at Mon Mar 09 16:13:13 GMT+07:00 2009
    Enter the following information about the database to be configured
    Listener port number:
    The following error comes:
    Exception in thread "main" java.lang.NoClassDefFoundError: oracle/sysman/emSDK/conf/InventoryLoader
    at oracle.sysman.emcp.EMConfig.getFreePorts(EMConfig.java:4225)
    at oracle.sysman.emcp.EMConfig.checkParameters(EMConfig.java:994)
    at oracle.sysman.emcp.EMConfig.perform(EMConfig.java:265)
    at oracle.sysman.emcp.EMConfigAssistant.invokeEMCA(EMConfigAssistant.java:692)
    at oracle.sysman.emcp.EMConfigAssistant.performSetup(EMConfigAssistant.java:641)
    at oracle.sysman.emcp.EMConfigAssistant.statusMain(EMConfigAssistant.java:340)
    at oracle.sysman.emcp.EMConfigAssistant.main(EMConfigAssistant.java:180)
    Moreover, this command does not work on AIX 5.3:
    emca -config dbcontrol db -repos recreate
    Regards,
    Edited by: LazyDBA10g on Mar 16, 2009 6:20 AM

  • Move RAC database - need to reformat the disk storage hosting RAC and ASM

    Hello,
    Short story:
    I need to reformat the disk storage used by a two-node RAC with ASM.
    How can I get the database working on the new disk storage?
    Long story:
    We decided to change the external disk storage formatting from RAID 5 to RAID 10.
    Configuration is: 2 Solaris nodes having Oracle software installed on each node and running with ASM
    - node1: ORCL1 database and +ASM1 asm instances
    - node2: ORCL2 database and +ASM2 asm instances
    Available is a NFS storage with plenty of space.
    What would be the easiest way to move the database?
    Thank you

    Thanks a lot for your answer.
    1) There is more trouble: I also have the voting disk and the OCR on the same (and only) SAN storage. Do you know the steps to back up/restore and/or recreate these?
    As I said, I have an NFS file system available with plenty of space.
    2) I will need to back up the database; which way would you recommend, RMAN or exp?

  • RAC and ASM issue

    Hi All,
    I am getting an error message in the ASM alert log saying "NOTE: ASMB process exiting due to lack of ASM file activity".
    This leads to frequent crashes of node 1. Please check the detailed errors below and suggest a solution.
    Thu Mar 24 07:05:11 2011
    LMD0 (ospid: 32493) has not called a wait for 94 secs.
    GES: System Load is HIGH.
    GES: Current load is 55.87 and high load threshold is 20.00
    Thu Mar 24 07:06:32 2011
    LMD0 (ospid: 32493) has not called a wait for 174 secs.
    GES: System Load is HIGH.
    GES: Current load is 71.23 and high load threshold is 20.00
    Thu Mar 24 07:06:36 2011
    Trace dumping is performing id=[cdmp_20110324070635]
    Thu Mar 24 07:07:49 2011
    Trace dumping is performing id=[cdmp_20110324070635]
    Thu Mar 24 07:08:16 2011
    Waiting for clusterware split-brain resolution
    Thu Mar 24 07:18:17 2011
    Errors in file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_lmon_32484.trc (incident=60073):
    ORA-29740: evicted by member 1, group incarnation 120
    Incident details in: /u01/app/oracle/diag/asm/asm/+ASM1/incident/incdir_60073/+ASM1_lmon_32484_i60073.trc
    Thu Mar 24 07:18:19 2011
    Trace dumping is performing id=[cdmp_20110324071819]
    Errors in file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_lmon_32484.trc:
    ORA-29740: evicted by member 1, group incarnation 120
    LMON (ospid: 32484): terminating the instance due to error 29740
    System state dump is made for local instance
    System State dumped to trace file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_diag_32459.trc
    Trace dumping is performing id=[cdmp_20110324071820]
    Instance terminated by LMON, pid = 32484
    Thu Mar 24 07:18:31 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth1 172.20.223.0 configured from OCR for use as a cluster interconnect
    Interface type 1 eth0 172.20.222.0 configured from OCR for use as  a public interface
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/asm_1/dbs/arch
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.7.0.
    Using parameter settings in server-side pfile /u01/app/oracle/product/11.1.0/asm_1/dbs/init+ASM1.ora
    System parameters with non-default values:
    large_pool_size          = 12M
    instance_type            = "asm"
    cluster_database         = TRUE
    instance_number          = 1
    asm_diskstring           = "ORCL:*"
    asm_diskgroups           = "REDO01"
    asm_diskgroups           = "REDO02"
    asm_diskgroups           = "DATA"
    asm_diskgroups           = "RECOVERY"
    diagnostic_dest          = "/u01/app/oracle"
    Cluster communication is configured to use the following interface(s) for this instance
    172.20.223.25
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    Thu Mar 24 07:18:36 2011
    PMON started with pid=2, OS id=23120
    Thu Mar 24 07:18:36 2011
    VKTM started with pid=3, OS id=23123 at elevated priority
    VKTM running at (20)ms precision
    Thu Mar 24 07:18:36 2011
    DIAG started with pid=4, OS id=23127
    Thu Mar 24 07:18:37 2011
    PING started with pid=5, OS id=23129
    Thu Mar 24 07:18:37 2011
    PSP0 started with pid=6, OS id=23131
    Thu Mar 24 07:18:37 2011
    DIA0 started with pid=7, OS id=23133
    Thu Mar 24 07:18:37 2011
    LMON started with pid=8, OS id=23135
    Thu Mar 24 07:18:37 2011
    LMD0 started with pid=9, OS id=23137
    Thu Mar 24 07:18:37 2011
    LMS0 started with pid=10, OS id=23148 at elevated priority
    Thu Mar 24 07:18:37 2011
    MMAN started with pid=11, OS id=23152
    Thu Mar 24 07:18:38 2011
    DBW0 started with pid=12, OS id=23170
    Thu Mar 24 07:18:38 2011
    LGWR started with pid=13, OS id=23176
    Thu Mar 24 07:18:38 2011
    CKPT started with pid=14, OS id=23218
    Thu Mar 24 07:18:38 2011
    SMON started with pid=15, OS id=23224
    Thu Mar 24 07:18:38 2011
    RBAL started with pid=16, OS id=23237
    Thu Mar 24 07:18:38 2011
    GMON started with pid=17, OS id=23239
    lmon registered with NM - instance id 1 (internal mem no 0)
    Reconfiguration started (old inc 0, new inc 124)
    ASM instance
    List of nodes:
    0 1 2
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    * allocate domain 1, invalid = TRUE
    * allocate domain 2, invalid = TRUE
    * allocate domain 3, invalid = TRUE
    * allocate domain 4, invalid = TRUE
    * domain 0 valid = 1 according to instance 1
    * domain 1 valid = 1 according to instance 1
    * domain 2 valid = 1 according to instance 1
    * domain 3 valid = 1 according to instance 1
    * domain 4 valid = 1 according to instance 1
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Fix write in gcs resources
    Reconfiguration complete
    Thu Mar 24 07:18:40 2011
    LCK0 started with pid=18, OS id=23277
    ORACLE_BASE from environment = /u01/app/oracle
    Thu Mar 24 07:18:41 2011
    SQL> ALTER DISKGROUP ALL MOUNT
    NOTE: cache registered group DATA number=1 incarn=0xf7063e39
    NOTE: cache began mount (not first) of group DATA number=1 incarn=0xf7063e39
    NOTE: cache registered group RECOVERY number=2 incarn=0xf7063e3a
    NOTE: cache began mount (not first) of group RECOVERY number=2 incarn=0xf7063e3a
    NOTE: cache registered group REDO01 number=3 incarn=0xf7163e3b
    NOTE: cache began mount (not first) of group REDO01 number=3 incarn=0xf7163e3b
    NOTE: cache registered group REDO02 number=4 incarn=0xf7163e3c
    NOTE: cache began mount (not first) of group REDO02 number=4 incarn=0xf7163e3c
    NOTE:Loaded lib: /opt/oracle/extapi/32/asm/orcl/1/libasm.so
    NOTE: Assigning number (1,0) to disk (ORCL:ASM_DATA1)
    NOTE: Assigning number (1,1) to disk (ORCL:ASM_DATA2)
    NOTE: Assigning number (2,0) to disk (ORCL:ASM_RECO1)
    NOTE: Assigning number (3,0) to disk (ORCL:ASM_LOG1)
    NOTE: Assigning number (4,0) to disk (ORCL:ASM_LOG2)
    kfdp_query(): 5
    kfdp_queryBg(): 5
    NOTE: cache opening disk 0 of grp 1: DATA1 label:ASM_DATA1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache opening disk 1 of grp 1: DATA2 label:ASM_DATA2
    NOTE: cache mounting (not first) group 1/0xF7063E39 (DATA)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 1
    NOTE: LGWR attempting to mount thread 1 for diskgroup 1
    NOTE: LGWR mounted thread 1 for disk group 1
    NOTE: opening chunk 1 at fcn 0.10794571 ABA
    NOTE: seq=81 blk=1313
    NOTE: cache mounting group 1/0xF7063E39 (DATA) succeeded
    NOTE: cache ending mount (success) of group DATA number=1 incarn=0xf7063e39
    kfdp_query(): 6
    kfdp_queryBg(): 6
    NOTE: cache opening disk 0 of grp 2: RECO1 label:ASM_RECO1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 2/0xF7063E3A (RECOVERY)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 2
    NOTE: LGWR attempting to mount thread 1 for diskgroup 2
    NOTE: LGWR mounted thread 1 for disk group 2
    NOTE: opening chunk 1 at fcn 0.10436377 ABA
    NOTE: seq=48 blk=4298
    NOTE: cache mounting group 2/0xF7063E3A (RECOVERY) succeeded
    NOTE: cache ending mount (success) of group RECOVERY number=2 incarn=0xf7063e3a
    kfdp_query(): 7
    kfdp_queryBg(): 7
    NOTE: cache opening disk 0 of grp 3: LOG1 label:ASM_LOG1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 3/0xF7163E3B (REDO01)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 3
    NOTE: LGWR attempting to mount thread 1 for diskgroup 3
    NOTE: LGWR mounted thread 1 for disk group 3
    NOTE: opening chunk 1 at fcn 0.229332 ABA
    NOTE: seq=30 blk=10690
    NOTE: cache mounting group 3/0xF7163E3B (REDO01) succeeded
    NOTE: cache ending mount (success) of group REDO01 number=3 incarn=0xf7163e3b
    kfdp_query(): 8
    kfdp_queryBg(): 8
    NOTE: cache opening disk 0 of grp 4: LOG2 label:ASM_LOG2
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 4/0xF7163E3C (REDO02)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 4
    NOTE: LGWR attempting to mount thread 1 for diskgroup 4
    NOTE: LGWR mounted thread 1 for disk group 4
    NOTE: opening chunk 1 at fcn 0.225880 ABA
    NOTE: seq=30 blk=10556
    NOTE: cache mounting group 4/0xF7163E3C (REDO02) succeeded
    NOTE: cache ending mount (success) of group REDO02 number=4 incarn=0xf7163e3c
    kfdp_query(): 9
    kfdp_queryBg(): 9
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 1
    SUCCESS: diskgroup DATA was mounted
    kfdp_query(): 10
    kfdp_queryBg(): 10
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 2
    SUCCESS: diskgroup RECOVERY was mounted
    kfdp_query(): 11
    kfdp_queryBg(): 11
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 3
    SUCCESS: diskgroup REDO01 was mounted
    kfdp_query(): 12
    kfdp_queryBg(): 12
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 4
    SUCCESS: diskgroup REDO02 was mounted
    SUCCESS: ALTER DISKGROUP ALL MOUNT
    Thu Mar 24 08:26:28 2011
    Starting background process ASMB
    Thu Mar 24 08:26:28 2011
    ASMB started with pid=20, OS id=9597
    NOTE: ASMB process exiting due to lack of ASM file activity for 5 seconds
    Thu Mar 24 08:27:39 2011
    Starting background process ASMB
    Thu Mar 24 08:27:39 2011
    ASMB started with pid=25, OS id=10735
    NOTE: ASMB process exiting due to lack of ASM file activity for 5 seconds
    Do I need to set the compatible parameter?
    Regards,
    Vish

    It looks to me like your server is absolutely buried, and ASM may just be an innocent bystander. What is going on in the database when this happens? Also, run sar samples at 30-second intervals up to the point where this happens, to see what is going on. It is overhead, but you need to find what is causing the problem with the server(s).
    Are you swapping?
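    A sketch of the sampling suggested above, using the standard sysstat tools on Linux (adjust the sample counts to cover the window before a crash):

    $ sar -u 30 120     # CPU utilization every 30 seconds for an hour
    $ sar -q 30 120     # run-queue length, matching the load GES is reporting
    $ vmstat 30         # watch the si/so columns; non-zero values mean swapping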

  • D1000 and 10gr2 RAC and ASM

    Solaris 9
    10gr2
    Hi All,
    Is there any way two nodes can share D1000 storage? Both nodes should see all the disks, not half the disks on each node.
    If it can be done, please give me the procedure.
    I want to try Oracle 10g RAC on this using ASM.
    Thanks in advance.

    I am wondering if we can share a D1000 between the two nodes and see all the disks from both nodes. I guess we can only see half the disks from each node in split-bus mode.

  • Split off RAC and ASM

    Dear all,
    I'm currently running both the RAC database and the ASM instance from the same Oracle home.
    I want an environment in which the ASM instance runs from its own Oracle home. Could any expert advise on the procedure?

    Billy Verreynne wrote:
    Johna Pakas wrote:
    I'm currently running both the RAC database and the ASM instance from the same Oracle home. I want an environment in which the ASM instance runs from its own Oracle home. Could any expert advise on the procedure?
    Why? And you're likely mistaken if you think that this approach will somehow improve the redundancy and availability of either the RAC or ASM instances. I suggest you contact Oracle Support, request a RAC Assurance engagement, and have your configuration properly reviewed, instead of trying what seems to be a not-so-wise hacking approach.
    Re: Database Vault 11g with RAC
    That's my reason behind it.

  • Oracle RAC and ASM for SAP in SLES

    Hello,
    We are installing our SAP ERP with Oracle RAC 11.2.0.2 on SLES 11 x86_64. According to Note 527843 (Oracle RAC support in the SAP environment), this is supported for SAP, but as a requirement the ASM Cluster File System must be set up.
    But when Oracle Grid is installed, ACFS installation returns the following error:
    ACFS-9459: ADVM/ACFS is not supported on this OS version: 'sles-release-11.1-1.152'
    So Oracle Grid has been installed without ACFS, and we will not be able to use Oracle Clusterware to provide high availability for SAP.
    As this is supposed to be a feasible combination (Oracle RAC 11.2.0.2 with ASM on SLES 11 SP1, NetWeaver 7.0), I'm wondering whether it is a bug, a lack of documentation, or something I'm missing.
    Kind regards,
    Fermí

    Read the Oracle note below:
    ACFS not supported on certain platforms [ID 1075058.1]
    oracle@node1:~/app/oracle/product/grid/log/node1> tail -1 alertesiha.log
    [client(14083)]CRS-10001:ADVM/ACFS is not supported on SUSE
    These can then be ignored, since ACFS/ADVM may not be supported on your platform. As of this writing, Sun, AIX, and SuSE 10 are all supported in 11.2.0.2 only. HP-UX, SuSE 11 and RHEL 6.0, as well as Oracle UEK (2.6.32-100*), are not yet supported in 11.2.0.2. Keep in mind that this does not prevent your Grid Infrastructure stack (Clusterware and ASM) from starting; it is just an informational message on these platforms.
    Extract from SAP Note:
    RAC 11.2.0.2 (x86 & x86_64 only):
    Oracle Clusterware 11.2.0.2 + ASM/ACFS 11.2.0.2 (currently only for SLES10, RHEL5, OL 5.x (without UEK))
    Thanks
    Srikanth M

  • PRD with RAC and DR with non-RAC Data Guard

    Is it possible to implement production with a RAC database and DR with non-RAC Data Guard?
    If yes, how does the log shipping happen from the different RAC nodes to the one DR node?
    Can anybody answer please?
    Thanks
    Vignesh.

    There is also a very good MAA whitepaper on this scenario:
    MAA 10g Setup Guide: Creating a Single Instance Physical Standby Database for a RAC Primary Database
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimarySingleInstancePhysicalStandby.pdf
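    As for the mechanics: nothing special is needed for shipping, because each RAC instance ships its own redo thread to the same standby service, and managed recovery on the single-instance standby merges and applies all threads. A hedged sketch on the primary, where the service name and DB_UNIQUE_NAME are placeholders:

    SQL> -- SID='*' sets this on every RAC instance; each ships its own redo thread
    SQL> ALTER SYSTEM SET log_archive_dest_2 =
           'SERVICE=drdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=drdb'
           SCOPE=BOTH SID='*';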

  • Production RAC with ASM and DR with non-RAC and ASM

    Hi,
    I have a question about whether this configuration is feasible. The production environment is 10gR2 RAC running on a 2-node cluster. The DR site will have a single instance with ASM. Disks from the primary site will be mirrored to the DR site using EMC SRDF. Would I be able to bring up the single instance at the DR site on the mirrored ASM disks once the split is done? We are currently not incorporating Data Guard for the DR solution.
    I wanted to check whether anyone has done this before.
    Thanks

    This is feasible. SRDF is capable of providing data at the DR site that is always in a crash-consistent state, without doing anything to the production site's RAC environment. Starting the database at the DR site will cause it to run crash recovery first. The SRDF synchronous or asynchronous feature should be used in this case.
    At the DR site, your default OS device files could be different. Either change the ASM disk strings or configure the device files for the disks so that they are the same as the shared device files on the production side. This way, you can use the ASM configuration as is.
    Also, the database is to be started in non-cluster mode, by commenting out the cluster-related parameters (i.e. cluster_database) in the init*.ora parameter file.
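    A minimal sketch of the DR-site startup after the SRDF split; the parameter names are the standard ones, but treat the exact file contents as assumptions for your environment:

    # DR-site init<SID>.ora changes
    *.cluster_database=FALSE
    # comment out instance_number, thread, and other per-instance cluster parameters

    SQL> STARTUP    -- crash recovery runs automatically as the database opens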
