Private Interconnect redundancy

Grid Version : 11.2.0.2
OS : Solaris 10 on HP Proliant
We currently have a 2-node RAC running 4 live DBs.
Our current private interconnect is:
### Current Private Interconnect
169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
To provide redundancy for the private interconnect, after repeated requests our Unix team has finally attached a redundant NIC to each node, connected through a redundant Gigabit Ethernet switch.
So we need to add the NICs below to the CRS. How can we do that?
### Redundant Private Interconnect (currently attached to the servers, but not yet included in the cluster)
169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
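For reference, on 11.2.0.2 the clusterware itself can use both NICs through Redundant Interconnect/HAIP once the second interface is registered with oifcfg. A minimal sketch, assuming the new NICs appear to the OS as e1000g2 on both nodes and that 169.21.204.0 is the private subnet (the interface name is a placeholder, not taken from the post):
# run as the grid owner from $GRID_HOME/bin on one node
./oifcfg getif                                   # interfaces currently registered with the cluster
./oifcfg iflist -p -n                            # confirm the OS name and subnet of the new NIC
./oifcfg setif -global e1000g2/169.21.204.0:cluster_interconnect
# then restart the clusterware on each node in turn (crsctl stop crs / crsctl start crs as root)
# so that HAIP picks up the additional interface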

Citizen_2 wrote:
Grid Version : 11.2.0.2
OS : Solaris 10 on HP Proliant
We currently have a 2-node RAC running 4 live DBs.
Our current private interconnect is:
### Current Private Interconnect
169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
To provide redundancy for the private interconnect, after repeated requests our Unix team has finally attached a redundant NIC to each node, connected through a redundant Gigabit Ethernet switch.

You can use IPMP (IP Multipathing) in Solaris.
First, note that these should be NON-ROUTABLE addresses configured on a PRIVATE-Dedicated Switch. It would look something like this:
169.21.204.1 scnuprd186-privt1-IPMPvip.mvtrs.net scnuprd186-privt1-IPMPvip
169.21.204.2 scnuprd186-privt1-nic1.mvtrs.net scnuprd186-privt1-nic1 eth2
169.21.204.3 scnuprd186-privt1-nic2.mvtrs.net scnuprd186-privt1-nic2 eth3
169.21.204.4 scnuprd187-privt1-IPMPvip.mvtrs.net scnuprd187-privt1-IPMPvip
169.21.204.5 scnuprd187-privt1-nic1.mvtrs.net scnuprd187-privt1-nic1 eth2
169.21.204.6 scnuprd187-privt1-nic2.mvtrs.net scnuprd187-privt1-nic2 eth3
IPMP has a "real address" for each "real" interface and the IPMPvip's will "float" between the eth2 and eth3 devices depending on which one is active. Similar to the way the host vip can "float" between nodes. It is the IPMPvip addresses that are provided to the CRS configuration.
I have used this on Sun 6900's and it worked great.
Now, it can get extremely complicated if you were to also use IPMP on the public interfaces as well. It does work, you just need to pay attention to how you configure it.
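A minimal sketch of what the Solaris 10 configuration files for such a private IPMP group could look like on node 1, assuming the physical devices are e1000g2 and e1000g3 (standing in for the eth2/eth3 in the example above) and the addresses follow the layout shown; all names here are illustrative, not a verified configuration:
# /etc/hostname.e1000g2 - data (floating) address plus a non-failover test address for this NIC
scnuprd186-privt1-IPMPvip netmask + broadcast + group priv_ipmp up addif scnuprd186-privt1-nic1 deprecated -failover netmask + broadcast + up
# /etc/hostname.e1000g3 - test address only; the data address floats to this NIC on failure
scnuprd186-privt1-nic2 deprecated -failover netmask + broadcast + group priv_ipmp up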
So we need to add the NICs below to the CRS. How can we do that?
### Redundant Private Interconnect (currently attached to the servers, but not yet included in the cluster)
169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC

Similar Messages

  • RAC Private interconnect redundancy

    Hello All,
We are designing (implementation will be done later) a 2-node RAC database with GI version 12.1.0.2 and RDBMS software version 11.2.0.4.
We want to make the private interconnect redundant, but the sysadmin does not have two channels of the same bandwidth; he is offering two NICs, one 10GbE and one 1GbE.
I understand that 1GbE is sufficient for GES and GCS traffic, but will this architecture work properly? Is there any harm in having two channels of different bandwidths? Also, in case of failure of the 10GbE interface there will definitely be some performance degradation.
    Thanks,
    Hemant.

DO NOT use two different network bandwidths for your Cluster Interconnect. With two physical NICs you will either resort to NIC bonding or HAIP, the latter being the recommendation from Oracle Corp since you are using 12c. In either case both NICs will be used equally, which means some traffic on the private network will be 'slower' than the rest. You do run the risk of having performance issues with this configuration.
Also, there are two reasons for implementing multiple NICs for the Cluster Interconnect: performance and high availability. I've addressed performance above. On the HA side, dual NICs mean that if one channel goes down, the other channel is still available and the cluster can stay operational. There is a law of the universe that says if you have 10GbE on one side and 1GbE on the other, there is a 99% chance that when a channel goes down it will be the 10GbE one, which means you may not have enough bandwidth on the remaining channel.
    Cheers,
    Brian
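For reference, a quick way to see which interfaces the clusterware and instances are actually using for the interconnect (a sketch; output will vary with your configuration):
# from $GRID_HOME/bin: interfaces registered with the clusterware and their roles
./oifcfg getif
-- from SQL*Plus on any instance: addresses in use (with HAIP these are 169.254.x.x link-local addresses)
SQL> SELECT name, ip_address, is_public, source FROM v$cluster_interconnects;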

  • NICs for Private Interconnect redundancy

    DB/Grid version : 11.2.0.2
    Platform : AIX 6.1
We are going to install a 2-node RAC on AIX (that thing which is almost as good as Solaris).
    Our primary private interconnect is
    ### Primary Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
For the cluster interconnect's redundancy, the Unix team has attached an extra NIC to each node, connected through an extra Gigabit Ethernet switch.
### Redundant Private Interconnect attached to the servers
169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
(Example borrowed from citizen2's post)
Apparently I have 2 ways to implement the cluster interconnect's redundancy:
    Option1. NIC bonding at OS level
    Option2. Let grid software do it
    Question1. Which is better : Option 1 or 2 ?
Question 2.
Regarding Option 2:
From googling and OTN, I gather that during grid installation you just provide 169.21.204.0 for the cluster interconnect, and the grid software will identify the redundant NIC and switch. And if something goes wrong with the primary interconnect setup (shown above), grid will automatically re-route interconnect traffic over the redundant NIC. Is this correct?
Question 3.
My colleague tells me that unless I configure some multicasting (AIX specific) for the redundant Gigabit switch, I could get errors during installation. He doesn't clearly remember what the issue was. Has anyone faced a multicasting-related problem with this?

    Hi,
My recommendation is to use AIX EtherChannel.
EtherChannel on AIX is much more powerful and stable than HAIP.
See how to set up AIX EtherChannel on 10 Gigabit Ethernet interfaces:
http://levipereira.wordpress.com/2011/01/26/setting-up-ibm-power-systems-10-gigabit-ethernet-ports-and-aix-6-1-etherchannel-for-oracle-rac-private-interconnectivity/
If you choose to use HAIP, I recommend you read this note, and look up all the notes about HAIP bugs on AIX:
    11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]
    ASM Crashes as HAIP Does not Failover When Two or More Private Network Fails [ID 1323995.1]
About multicasting, read:
    Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1]
    Regards,
    Levi Pereira
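For reference, if HAIP is used, one way to confirm it is running and to see the link-local addresses it assigns (a sketch; output format varies by platform and version):
# as root or the grid owner, from $GRID_HOME/bin
./crsctl stat res ora.cluster_interconnect.haip -init    # should show STATE=ONLINE
# the HAIP addresses appear as 169.254.x.x aliases on the private interfaces
ifconfig -a                                              # look for 169.254.x.x on the private NICs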

  • Redundancy at Private interconnect.

    Hi,
We are planning to set up a 2-node RAC. Our system admin has provided 2 NICs for the private interconnect, and we would like to use both.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
    ether 0:21:28:69:a7:38
    My questions:
As per Oracle support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)"
we can use IPMP grouping for the interconnect, but the note is not very clear.
1) If I use an IPMP group, do I need to specify only one physical IP as cluster_interconnect, or all the IPs associated with the NICs? (Will this allow load balancing, or only failover?)
2) If we do not want to use IPMP, can I specify all the NIC IP addresses in the cluster_interconnects parameter? (I understand this allows load balancing but not failover.)
    Regards
    Veera

    user7636989 wrote:
    Hi,
    We are planning to setup a 2 node RAC. Our system admin has provided 2 nics for private interconnect. We were looking to use both as private interconnect.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
ether 0:21:28:69:a7:38
A prerequisite for IPMP is that the participating IP interfaces should be in the same IP broadcast subnet, but your output above shows them to be on different subnets (192.168.1.0/24 and 192.168.2.0/24). That would need to be fixed before you can use IPMP (which supports both failover and load balancing). And if you used IPMP, you would need to set the cluster_interconnect to be the data address(es) (the one(s) not configured with the NOFAILOVER flag).
    My questions:
    As per oracle support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)"
we can use IPMP grouping for the interconnect, but the note is not very clear.
1) If I use an IPMP group, do I need to specify only one physical IP as cluster_interconnect, or all the IPs associated with the NICs? (Will this allow load balancing, or only failover?)
2) If we do not want to use IPMP, can I specify all the NIC IP addresses in the cluster_interconnects parameter? (I understand this allows load balancing but not failover.)
    Regards
    Veera
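For reference, if IPMP is not used, the cluster_interconnects parameter can list both private addresses per instance. A sketch for 10gR2, using the node shown above and placeholder values for the second node's addresses and the instance names; note this spreads traffic across the NICs but does not provide failover:
SQL> ALTER SYSTEM SET cluster_interconnects='192.168.1.119:192.168.2.119' SCOPE=SPFILE SID='RAC1';
SQL> ALTER SYSTEM SET cluster_interconnects='<node2_nic1_ip>:<node2_nic2_ip>' SCOPE=SPFILE SID='RAC2';
-- restart the instances for the change to take effect (the parameter is not dynamic)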

  • Copper cable / GigE Copper Interface as Private Interconnect for Oracle RAC

    Hello Gurus
Can someone confirm whether copper cables (Cat5/RJ45) can be used for Gigabit Ethernet, i.e. the private interconnects, when deploying Oracle RAC 9.x or 10gR2 on Solaris 9/10?
I am planning to use 2 x GigE interfaces (one port each from the X4445 Quad Port Ethernet Adapters) and to connect them using copper cables. All the documents I came across refer to fiber cables for private interconnects connecting GigE interfaces, so I am getting a bit confused.
Would appreciate it if someone could shed some light on this.
    regards,
    Nilesh Naik
    thanks

Cat5/RJ45 can be used for Gigabit Ethernet private interconnects for Oracle RAC. I would recommend trunking the two or more interconnects for redundancy. The X4445 adapters are compatible with the Sun Trunking 1.3 software (http://www.sun.com/products/networking/ethernet/suntrunking/). If you have servers that support the Nemo framework (bge, e1000g, xge, nge, rge, ixgb), you can use the Solaris 10 link aggregation tool, dladm.
We have a couple of Sun T2000 servers and are using the onboard GigE ports for the Oracle 10gR2 RAC interconnects. We upgraded the onboard NIC drivers to e1000g and used the Solaris 10 link aggregation. The next update of Solaris will have the e1000g drivers as the default for the Sun T2000 servers.
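A minimal sketch of a Solaris 10 link aggregation over two e1000g ports (the interface names, address and aggregation key are illustrative; the switch ports must also be configured for aggregation):
# unplumb the individual interfaces first, then create the aggregation
ifconfig e1000g1 unplumb
ifconfig e1000g2 unplumb
dladm create-aggr -d e1000g1 -d e1000g2 1          # key 1 -> interface aggr1
ifconfig aggr1 plumb 192.168.10.119 netmask 255.255.255.0 up
dladm show-aggr                                    # verify the aggregation status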

  • Need procedure to change ip address on private interconnect in 11.2.0.3

Could someone please send me the procedure to change the IP address of the private interconnect in 11gR2 RAC (11.2.0.3)?
The interconnect was configured using the default HAIP resource during installation of a 2-node cluster on the AIX 6.1 platform. I have searched Metalink but cannot find a doc with the procedure to make the IP address change.
The sysadmins gave us an IP address on the wrong subnet, so now we have to change the IP address of the en1 interface.
    If anyone has steps in terms of shutting down the clusterware and correct order to make changes this would be very much appreciated.
    Thanks.

Thanks, I have seen that one also, but I was hoping to see some official documentation from Oracle on this topic. I searched Metalink and there is a doc called
"Grid Infrastructure: everything you need to know", but it does not speak to this configuration change, or even how to disable the clusterware in the event that you need to perform maintenance and do not want the clusterware to come online automatically.
Although I love Google too... if there is any official documentation on this topic, I would really appreciate knowing where it can be found.
    Thanks.
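For reference, the general shape of the change with oifcfg (a sketch only, assuming en1 remains the interconnect interface and only its subnet changes; the subnets are placeholders, and the current My Oracle Support note for your version should be checked for the exact supported procedure):
# 1. record the current configuration (as the grid owner)
$GRID_HOME/bin/oifcfg getif
# 2. register the new subnet for the interconnect while the cluster is still up
$GRID_HOME/bin/oifcfg setif -global en1/<new_subnet>:cluster_interconnect
# 3. stop the clusterware on all nodes (as root), then change the OS address on en1
$GRID_HOME/bin/crsctl stop crs
# 4. start the clusterware again and remove the old subnet definition
$GRID_HOME/bin/crsctl start crs
$GRID_HOME/bin/oifcfg delif -global en1/<old_subnet>
# to keep the stack from autostarting during maintenance: crsctl disable crs (re-enable with crsctl enable crs)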

  • INS-20802 Oracle Private Interconnect Configuration Assistant failed

    Thought I would post what information I've gathered, after facing this error during install of RAC Grid Infrastructure 11.2.0.1 on Red Hat Enterprise Linux Server release 5.5 64-bit, as Oracle Support is once again unable to help. Maybe this will save someone else some time and the aggravation of dealing with lousy Oracle Support.
    The error occurs after root.sh has successfully completed on all nodes. Oracle Net Configuration Assistant runs successfully, then Oracle Private Interconnect Configuration Assistant launches and subsequently fails with the following.
    [INS-20802] Oracle Private Interconnect Configuration Assistant failed.
    /u01/app/oraInventory/logs/installActions2010-12-13_01-26-10PM.log
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: PRIF-26: Error in update the profiles in the cluster
    INFO:
    WARNING:
    INFO: Completed Plugin named: Oracle Private Interconnect Configuration Assistant
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    I was able to find another error that coincides with the PRIF-26 error: "CRS-2324:Error(s) occurred while trying to push GPnP profile. State may be inconsistent."
    I was also able to duplicate the PRIF-26 error by trying to add a non-existent network interface via oifcfg:
    ./oifcfg setif -global jjj1/192.167.1.0:cluster_interconnect
    PRIF-26: Error in update the profiles in the cluster
My best guess is the Oracle Private Interconnect Configuration Assistant makes a call to oifcfg. When oifcfg makes an update or adds a public/private interface, some XML files are also updated or maybe cross-referenced. These files are located here: <grid_home>/gpnp/<host>/profiles/peer
Any updates/changes/additions to the private or public interfaces include changes for the Grid Plug-n-Play component, which uses the XML files. If the interface name is not contained in the XML files, my best guess is that triggers the "CRS-2324:Error(s) occurred while trying to push GPnP profile. State may be inconsistent."
I verified everything was configured correctly; the cluster verification utility reported everything was OK. I also ran the cluster verification utility against GPnP:
    ./cluvfy comp gpnp -verbose
    I also reviewed the public and private interfaces via oifcfg and they are correct:
    [oracle@ryoradb1 bin]$ ./oifcfg getif -global
    eth0 10.10.2.0 global public
    eth1 192.167.1.0 global cluster_interconnect
    [oracle@ryoradb1 bin]$ ./oifcfg iflist -p
    eth0 10.10.2.0 PRIVATE
    eth1 192.167.1.0 UNKNOWN
    My conclusion is the environment is configured correctly, in spite of the error generated by the Oracle Private Configuration Assistant.
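For reference, the GPnP profile that oifcfg updates can also be dumped directly, which is one way to check that the interface names it records match what the OS presents (a sketch; run as the grid owner):
# print the local GPnP profile XML; the network-profile section lists each adapter, its subnet and its use
$GRID_HOME/bin/gpnptool get
# compare against the interfaces the OS actually exposes
$GRID_HOME/bin/oifcfg iflist -p -n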

I understand that you have installed 11.2.0.1, not 11.2.0.2. Multicasting must be enabled if you install 11.2.0.2, and without it you may face this sort of problem because the cluster nodes would not be able to communicate with each other.
Please check ocssd.log, especially on the first node, because this file will give more information as to why the first node is not able to push the GPnP file. You have already executed cluvfy against GPnP, but to confirm whether the GPnP profile is accurate and to narrow down the problem, I would suggest you try to start the cluster in exclusive mode; if that works you can be sure that the GPnP profile is good.
Shut down CRS on all nodes and, if there are any hung processes, kill them before executing the following command to start the cluster in exclusive mode:
$GRID_HOME/bin/crsctl start crs -excl
If you are able to start the cluster in exclusive mode, then the GPnP profile is correct and the next step would be to verify the private network.
See how it goes.
FYI, I was recently analyzing the same sort of problem where a cluster was not able to access the GPnP profile, and I finally found issues on my private network. The customer had enabled IGMP snooping, which was preventing multicast communication over the private network; but that was 11.2.0.2, which is not the case here.
    Harish Kumar
    http://www.oraxperts.com

  • Crs Not Starting _ private Interconnect Down

    Hello All,
I have installed a 2-node 10g R2 (10.2.0.1) RAC on Solaris 10 T2000 machines. Yesterday CRS on my second node went down. I tried to start it but it didn't start. Then I found that the private IP (interconnect) was not pinging from either node. Node 1 was still up and working, so my users could connect to it.
But this morning I see that CRS on node 1 has also gone down.
Is this a problem with the private interconnect? My network guys are trying to bring the private interconnect back up.
If the private interconnect is down, why did node 1 go down after a few hours? I thought the private interconnect is only for communicating with node 2, and node 2 is down.
Previously my interconnect was connected with crossover cables; now I have asked them to connect it through a switch.
    Help me Out.
    Regards,
    Pankaj.

Previously my interconnect was connected with crossover cables; now I have asked them to connect it through a switch.
We are planning to do the same. Please share your experience; hope you have done this before (moving to a switch).
    (Update for record id(s): 105681546)
    QUESTION
    ========
    1.Will the database and the Clusterware need to be shutdown etc?
    2.Will our ip addresses need to be reconfigured?
    3.Are there any steps that need to be carried out before unplugging the CROSS CABLE
    and after the interconnect is connected to the switch...?
    ANSWER
    ======
    1. Yes, you have to stop CRS on each node.
2. No, not required, provided you are planning to use the same IP addresses.
    3. Steps:
    a. Stop CRS on each node. "crsctl stop crs"
    b. Replace the crossover cable with switch.
    c. Start the CRS on each node. "crsctl start crs"
We are planning to do the same. Please share your experience.
Following the above answers from Customer Support, it went smoothly: we stopped all the services and rebooted both nodes.
    Message was edited by:
    Ravi Prakash
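For reference, a sketch of the order of operations for the crossover-to-switch swap described above (the database name is illustrative; run srvctl as the oracle owner and crsctl as root):
# on one node, stop the database cleanly before touching the clusterware
srvctl stop database -d RACDB
# on each node, stop the clusterware stack
crsctl stop crs
# ... replace the crossover cable with the switch, verify the private IPs ping between nodes ...
# on each node, start the clusterware; the database is restarted through CRS
crsctl start crs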

  • Unplug of private interconnect cable restart the system!

    Dear All,
    My Database is 2 Node RAC Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production on Linux.
I had a strange experience with the private interconnect between the 2 nodes. I needed to replace the switch between the nodes with a new and better switch. When I unplugged the cable from one node, that machine restarted. Similarly, while I unplugged the cable from the other node, the same machine that had restarted earlier restarted again.
Have you ever faced such a problem? What can I investigate? Any clue please.
Also, I want to know: does it make any difference if we set the remote listener to an IP address instead of the SCAN name?
    SQL> show parameter remote_listener
    NAME TYPE VALUE
    remote_listener string 10.168.20.29:1521
This is the only thing I suspect could be creating the problem. All other configurations are as per the documentation.
    Your kind help will be appreciated.
    Regards,
    Imran

Yeah, as far as I have seen, this seems to be the problem.
CRS was already not accessible by the nodes, and when I unplugged the cable from one node it restarted, and the SCAN IP was not accessible for clients to connect to the database.
    I configured Oracle ASM on SAN device using multipath.
    Now in OEM i see:
    Serviced Databases
    Name     Disk Groups     Failure Groups     Allocated Space (GB)     Availability     Alerts
    racdbdb_racdbdb1     FRA, DATA     n/a     288.27     [Availability]      21
    RACDB-cluster     CRS     n/a     0.26     Not Monitored
    But when i run this query:
    SQL> SELECT NAME, STATE, OFFLINE_DISKS FROM V$ASM_DISKGROUP;
    NAME STATE OFFLINE_DISKS
    CRS MOUNTED 0
    DATA CONNECTED 0
    FRA CONNECTED 0
How can I check that the voting disk is functional? Kindly help.
    Regards,
    Imran
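For reference, on 11.2 one quick check of the voting files (a sketch; run from the grid home):
# lists each voting file, its state and the disk it sits on
$GRID_HOME/bin/crsctl query css votedisk
# overall CSS health check
$GRID_HOME/bin/crsctl check css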

  • Unplug of private interconnect cable but the machine didn't restarted

    Dear All,
I have a RAC Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit database on Linux.
I had a strange experience with the private interconnect between the 2 nodes. When I tested unplugging the private interconnect link on one of the nodes, the machine didn't reboot, yet the cluster log states that one of the nodes was rebooted.
    2013-10-07 12:34:18.570
    [cssd(7565)]CRS-1612:Network communication with node centaurus22 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.360 seconds
    2013-10-07 12:34:25.572
    [cssd(7565)]CRS-1611:Network communication with node centaurus22 (2) missing for 75% of timeout interval.  Removal of this node from cluster in 7.360 seconds
    2013-10-07 12:34:30.573
    [cssd(7565)]CRS-1610:Network communication with node centaurus22 (2) missing for 90% of timeout interval.  Removal of this node from cluster in 2.360 seconds
    2013-10-07 12:34:32.935
    [cssd(7565)]CRS-1607:Node centaurus22 is being evicted in cluster incarnation 272740834; details at (:CSSNM00007:) in /opt/app/11.2.0/grid/log/centaurus21/cssd/ocssd.log.
    2013-10-07 12:34:34.937
    [cssd(7565)]CRS-1625:Node centaurus22, number 2, was manually shut down
    2013-10-07 12:34:34.952
    [cssd(7565)]CRS-1601:CSSD Reconfiguration complete. Active nodes are centaurus21 .
    2013-10-07 12:34:34.965
    [crsd(8720)]CRS-5504:Node down event reported for node 'centaurus22'.
    2013-10-07 12:34:36.427
    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'Generic'.
    2013-10-07 12:34:36.428
    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'ora.SASDB'.
    2013-10-07 18:46:28.633
    Have you ever faced this problem ?
Your kind help will be appreciated.
    Thank you
    Regards,
    Izzudin Hanafie

Rebootless fencing was introduced in 11.2.0.2 Grid Infrastructure: instead of rebooting a node as in pre-11.2.0.2 when an eviction happens, Clusterware attempts to stop GI gracefully on the evicted node to avoid a node reboot.
    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.pdf

  • Shared private interconnect between 2 clusters

    Hello to everyone.
    I am just wondering if somebody could answer me or show proper direction.
Is it allowed for two or more clusters to share the same private interconnect network, or is a dedicated private interconnect required for each cluster?
Traffic is not a concern here because only Oracle Grid Infrastructure (say 11.2.0.2) will be installed on each cluster (no Oracle RAC). We are going to use the CRS failover feature.

Of course, you can share the network. Make sure that multicast is tested out before the installation.
Again, be careful with too much sharing: if the bandwidth gets so saturated that network timeouts happen, nodes will be evicted. But as long as you have a good network, this is fine.

  • What is acceptable level of Private Interconnect Latency for RAC

We have built a 3-node RAC on RHEL 5.4 on VMware.
There is a node eviction problem due to loss of the network heartbeat.
    ocssd.log:[    CSSD]2010-03-05 17:48:21.908 [84704144] >TRACE: clssnmReadDskHeartbeat: node 3, vm-lnx-rds1173, has a disk HB, but no network HB, DHB has rcfg 0, wrtcnt, 2, LATS 1185024, lastSeqNo 2, timestamp 1267791501/1961474
    Ping statistics from Node2 to Node1 are as below
    --- rds1171-priv ping statistics ---
    443 packets transmitted, 443 received, 0% packet loss, time 538119ms
    rtt min/avg/max/mdev = 0.150/2.030/630.212/29.929 ms
    [root@vm-lnx-rds1172 oracle]#
Can this be the reason for the node eviction? What is an acceptable level of private interconnect latency for RAC?

What is an acceptable level of private interconnect latency for RAC?
Normal local network latency should be enough; by the way, the timeout settings are quite generous.
Can you check whether your to-be-evicted node is running and reachable when you see the node eviction messages?
In addition to that, check the log files of the evicted node. Check for timestamps around "2010-03-05 17:48:21.908". Make sure all systems are NTP synchronized.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
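For reference, a rough way to sample interconnect latency with payloads closer to a database block size (a sketch; the host name follows the post above, and ping is only indicative since it does not reproduce UDP traffic under load):
# from node 2, 100 samples with an 8 KB payload to node 1's private name
ping -c 100 -s 8192 rds1171-priv
# look at the avg/max/mdev figures; sub-millisecond averages are typical for a healthy GbE
# interconnect, and large spikes (like the 630 ms max above) point to a network or
# virtualization problem rather than normal RAC behaviour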

  • Gig Ethernet V/S  SCI as Cluster Private Interconnect for Oracle RAC

    Hello Gurus
Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
It's for a high-availability requirement of Oracle 9i RAC. I need to know:
1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on the E6900?
2) What is the recommended private cluster interconnect for Oracle RAC: Gigabit Ethernet, or SCI with RSM?
3) How about scenarios where one has, say, 3 x Gigabit Ethernet vs 2 x SCI as the cluster's private interconnects?
4) How does interconnect traffic get distributed among multiple Gigabit Ethernet interconnects (for Oracle RAC), and does anything need to be done at the Oracle RAC level so that Oracle recognises the multiple interconnect cards and uses all of the Gigabit Ethernet interfaces for transferring packets?
5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
I have tried searching for this info but could not locate any doc that precisely clarifies these doubts.
    thanks for the patience
    Regards,
    Nilesh

    Answers inline...
    Tim
> Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
Yes, absolutely. You can configure up to 6 NICs for the private networks. Traffic is automatically striped across them if you specify clprivnet0 to Oracle RAC (9i or 10g); that covers TCP connections and UDP messages.
> 1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on the E6900?
Yes, definitely.
> 2) What is the recommended private cluster interconnect for Oracle RAC: Gigabit Ethernet, or SCI with RSM?
SCI is, or is in the process of being, EOL'ed. Gigabit is usually sufficient. Longer term you may want to consider InfiniBand or 10 Gigabit Ethernet with RDS.
> 3) How about scenarios where one has, say, 3 x Gigabit Ethernet vs 2 x SCI as the cluster's private interconnects?
I would still go for 3 x GbE because it is usually cheaper and will probably work just as well. The latency and bandwidth differences are often masked by the performance of the software higher up the stack. In short, unless you have tuned the heck out of your application and just about everything else, don't worry too much about the difference between GbE and SCI.
> 4) How does interconnect traffic get distributed among multiple Gigabit Ethernet interconnects, and does anything need to be done at the Oracle RAC level to make Oracle use all of the interfaces?
You don't need to do anything at the Oracle level. That's the beauty of using Oracle RAC with Sun Cluster as opposed to RAC on its own: the striping takes place automatically and transparently behind the scenes.
> 5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
It's completely transparent. Oracle will never see the failure.
> I have tried searching for this info but could not locate any doc that precisely clarifies these doubts.
This is all covered in a paper that I have just completed and should be published after Christmas. Unfortunately, I cannot give out the paper yet.

  • Link Based IPMP Private Interconnect - Oracle 10g

I am configuring IPMP for the Oracle 10g RAC VIP and private interconnect.
Following an Oracle white paper, we are configuring link-based IPMP in an active/active configuration.
I have information from Metalink for configuring the VIP; however, I am a little unsure about the configuration of the private interconnect.
Is it just a case of removing the cluster interconnect from oifcfg and using the cluster_interconnects parameter for the ASM and database instances?
Are there any other considerations for an Oracle 10g RAC deployment besides making the change with oifcfg and setting the cluster_interconnects initialization parameter?
    Thanks
    Paul

    The following notes are of help for you:
    How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)
    Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP (Doc ID 283107.1)
    White paper on IPMP:
    http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ha-rac-networking-ipmp-168440.pdf
    Raj Mareddi
    http://www.freeoraclehelp.com
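A minimal sketch of a link-based (no probe address) IPMP group for two private NICs on Solaris 10, assuming the interfaces are nxge1 and nxge2 and both sit in the same private subnet (the group name and address are illustrative; see Doc ID 368464.1 for the supported layouts):
# /etc/hostname.nxge1 - data address, member of the private IPMP group
192.168.1.119 netmask + broadcast + group priv_ipmp up
# /etc/hostname.nxge2 - second interface in the same group; link-based IPMP needs no test/probe addresses
group priv_ipmp up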

  • Private Interconnect: Should any nodes other than RAC nodes have one?

The contractors that set up our four-node production 10g RAC (and a standalone development server) also assigned private interconnect addresses to two Apache/APEX servers and a standalone development database server.
There are service names in the tnsnames.ora on all servers in our infrastructure referencing these private interconnects, even the non-RAC member servers. The NICs on these servers are not bound for failover with the NICs bound to the public/VIP addresses. These NICs are isolated on their own switch.
Could this configuration be related to lost heartbeats or voting disk errors? We experience RAC node expulsions and even arbitrary bounces (reboots!) of all the RAC nodes.

I do not have access to the contractors; I can only look at what they have left behind and try to figure out their intention.
I am reading the Ault/Tumma book "Oracle 10g Grid and Real Application Clusters", looking through our own settings and config files, and learning srvctl and crsctl commands from their examples. I am also googling and searching OTN through the library full of documentation.
I still have yet to figure out whether the "private interconnect" spoken about so frequently in the cluster configuration documents is the binding to the node.vip address specifications in the tnsnames.ora (bound to the first eth adapter along with the public IP addresses for the nodes), or the binding on the second eth adapter to the node.prv addresses, which are not found in the local pfile, the tnsnames.ora, or the listener.ora (but are found at the operating-system level in ifconfig). If the node.prv addresses are not the private interconnect, can anyone tell me what they are for?
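For reference, two quick checks that show which interface and subnet the clusterware and instances actually treat as the private interconnect, independent of what is in tnsnames.ora (a sketch; the path assumes a 10g CRS home):
# interfaces registered with CRS and their roles (public / cluster_interconnect)
$ORA_CRS_HOME/bin/oifcfg getif
-- what each running instance is really using for interconnect traffic
SQL> SELECT inst_id, name, ip_address, is_public FROM gv$cluster_interconnects;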
