RAC Private interconnect redundancy

Hello All,
We are designing (implementation will be done later) a 2-node RAC database with GI at version 12.1.0.2 and the RDBMS software at version 11.2.0.4.
We want to make the private interconnect redundant, but the sysadmin does not have two channels of the same bandwidth; he is offering two NICs, one 10GbE (Gigabit Ethernet) and one 1GbE.
I understand that 1GbE is sufficient for GES and GCS traffic, but will this architecture work reliably? Is there any harm in mixing two different bandwidth channels? Also, if the 10GbE interface fails, there will obviously be performance degradation.
Thanks,
Hemant.

DO NOT use two different network bandwidths for your Cluster Interconnect. With two physical NICs, you will either resort to NIC bonding or HAIP, the latter being the recommendation from Oracle Corp since you are using 12c. In either case, both NICs will be used equally, which means some traffic on the private network will be 'slower' than the rest. You run a real risk of performance issues with this configuration.
Also, there are two reasons for implementing multiple NICs for the Cluster Interconnect: performance and high availability. I've addressed performance above. On the HA side, dual NICs mean that if one channel goes down, the other channel is still available and the cluster stays operational. There is a law of the universe that says if you have 10GbE on one side and 1GbE on the other, there is a 99% chance that the channel that goes down will be the 10GbE one, which means you may not have enough bandwidth on the remaining channel.
Cheers,
Brian
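If you do end up with two candidate NICs, it is worth confirming what the hardware and the clusterware actually see before going live. A rough sketch (the interface names eth1/eth2 are illustrative, and ethtool is Linux-specific; other platforms have their own tools):

```shell
# Compare link speed of the candidate private interfaces (Linux).
for nic in eth1 eth2; do
  echo "== $nic =="
  ethtool "$nic" | grep -E 'Speed|Link detected'
done

# Interfaces currently registered with the clusterware (public/interconnect).
$GRID_HOME/bin/oifcfg getif
```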

Similar Messages

  • NICs for Private Interconnect redundancy

    DB/Grid version : 11.2.0.2
    Platform : AIX 6.1
We are going to install a 2-node RAC on AIX (that thing which is almost as good as Solaris).
    Our primary private interconnect is
    ### Primary Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
For the cluster interconnect's redundancy, the Unix team has attached an extra NIC to each node, with an extra Gigabit Ethernet switch for these NICs.
    ###Redundant Private Interconnect attached to the server
    169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
Example borrowed from citizen2's post.
Apparently I have 2 ways to implement the cluster interconnect's redundancy:
    Option1. NIC bonding at OS level
    Option2. Let grid software do it
    Question1. Which is better : Option 1 or 2 ?
    Question2.
    Regarding Option2.
From googling and OTN, I gather that during grid installation you just provide 169.21.204.0 for the cluster interconnect, and grid will identify the redundant NIC and switch. And if something goes wrong with the primary interconnect setup (shown above), grid will automatically re-route interconnect traffic through the redundant NIC setup. Is this correct?
    Question 3.
My colleague tells me that for the redundant (Gigabit) switch, unless I configure some multicasting (AIX-specific), I could get errors during installation. He wasn't clear on what it was. Has anyone faced a multicasting-related issue on this?

    Hi,
My recommendation is that you use AIX EtherChannel.
The EtherChannel of AIX is much more powerful and stable compared with HAIP.
See how to set up AIX EtherChannel on 10 Gigabit Ethernet interfaces:
http://levipereira.wordpress.com/2011/01/26/setting-up-ibm-power-systems-10-gigabit-ethernet-ports-and-aix-6-1-etherchannel-for-oracle-rac-private-interconnectivity/
If you choose to use HAIP, I recommend you read this note, and find all the notes about HAIP bugs on AIX:
    11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]
    ASM Crashes as HAIP Does not Failover When Two or More Private Network Fails [ID 1323995.1]
About multicasting, read this:
    Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1]
    Regards,
    Levi Pereira

  • How to check my RAC private interconnect working properly?

    All,
    Is there any way to check whether my RAC private interconnect is working properly or not?
    Thanks,
    Mahi

    Mahi wrote:
    All,
    Is there any way to check whether my RAC private interconnect is working properly or not?
    Thanks,
Mahi
CVU verifies the connectivity between all of the nodes in the cluster through those interfaces:
    $cluvfy comp nodecon -n all -verbose
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/cvu.htm
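Beyond cluvfy, you can cross-check what the running cluster is actually using. A sketch (oifcfg and GV$CLUSTER_INTERCONNECTS exist in 10.2/11.2; the host name here is illustrative):

```shell
# Which interfaces the clusterware has registered:
$GRID_HOME/bin/oifcfg getif

# Which interconnect each instance is actually using (run as sysdba):
sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, name, ip_address, is_public
FROM   gv$cluster_interconnects;
EOF

# Raw reachability/latency between the private addresses:
ping -c 5 scnuprd187-privt1
```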

  • Private Interconnect redundancy

    Grid Version : 11.2.0.2
    OS : Solaris 10 on HP Proliant
    Currently we have a 2-node RAC running with 4 live DBs.
    Currently our private interconnect is
    ### Current Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
To have redundancy for the private interconnect, after repeated requests, our Unix team has finally attached a redundant NIC to each node with a redundant Gigabit Ethernet switch.
    So, we need to add the below NIC to the CRS. How can we do that?
    ###Redundant Private Interconnect (currently attached to the server, but yet to be 'included' in the cluster)
    169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
    169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC

    Citizen_2 wrote:
    Grid Version : 11.2.0.2
    OS : Solaris 10 on HP Proliant
    Currently we have a 2-node RAC running with 4 live DBs.
    Currently our private interconnect is
    ### Current Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
To have redundancy for the private interconnect, after repeated requests, our Unix team has finally attached a redundant NIC to each node with a redundant Gigabit Ethernet switch.
You can use IPMP (IP MultiPath) in Solaris.
    First, note that these should be NON-ROUTABLE addresses configured on a PRIVATE-Dedicated Switch. It would look something like this:
    169.21.204.1 scnuprd186-privt1-IPMPvip.mvtrs.net scnuprd186-privt1-IPMPvip
    169.21.204.2 scnuprd186-privt1-nic1.mvtrs.net scnuprd186-privt1-nic1 eth2
    169.21.204.3 scnuprd186-privt1-nic2.mvtrs.net scnuprd186-privt1-nic2 eth3
    169.21.204.4 scnuprd187-privt1-IPMPvip.mvtrs.net scnuprd187-privt1-IPMPvip
    169.21.204.5 scnuprd187-privt1-nic1.mvtrs.net scnuprd187-privt1-nic1 eth2
    169.21.204.6 scnuprd187-privt1-nic2.mvtrs.net scnuprd187-privt1-nic2 eth3
    IPMP has a "real address" for each "real" interface and the IPMPvip's will "float" between the eth2 and eth3 devices depending on which one is active. Similar to the way the host vip can "float" between nodes. It is the IPMPvip addresses that are provided to the CRS configuration.
    I have used this on Sun 6900's and it worked great.
    Now, it can get extremely complicated if you were to also use IPMP on the public interfaces as well. It does work, you just need to pay attention to how you configure it.
    >
    So, we need to add the below NIC to the CRS. How can we do that?
    ###Redundant Private Interconnect (currently attached to the server, but yet to be 'included' in the cluster)
    169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
    169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
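For what it's worth, on 11.2.0.2 and later the extra interface can simply be registered with oifcfg and HAIP will use both. This is a sketch, not a tested procedure; the interface name eth3 is an assumption, and the subnet is taken from the example above:

```shell
# Run as the grid owner, with the clusterware up.
# Assumes the new NIC is eth3 on the 169.21.204.0 subnet.

# Register the second private interface cluster-wide.
$GRID_HOME/bin/oifcfg setif -global eth3/169.21.204.0:cluster_interconnect

# Verify both interfaces are now registered.
$GRID_HOME/bin/oifcfg getif

# Restart the clusterware one node at a time (as root) to activate it.
crsctl stop crs
crsctl start crs
```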

  • Oracle 10g RAC - Private Interconnect on Private non-routable VLAN

    In our data center there is an existing Oracle 10g RAC configured with private VLAN for Interconnect administered by a different group of DBAs.
    We are designing a new, separate Oracle 10g RAC environment to support our application.
    When we discussed with our data center folks to set up a private VLAN for our RAC Interconnect, they suggest to use the same existing Private VLAN used by other Oracle RAC configurations. In that case the Interconnect IPs will be on the same subnet as other Oracle RAC configurations.
    For example, if
RAC1 with 2 nodes is using 192.168.1.1 and 192.168.1.2 in VLAN_1 for the interconnect, they want us to use the same VLAN_1 with interconnect IPs 192.168.1.3 and 192.168.1.4 for our 2-node RAC.
Is sharing the same subnet on the same private VLAN for the interconnects of different RAC configurations supported?
Will that cause any performance hit? It means the interconnect IPs of one RAC configuration are pingable from the other RAC configuration.
    Did anyone come across such a design?
    Could not find any info on this on Metalink.
    Thanks

Yes, this is practically very feasible, as you would have only 4 machines in the IP subnet, which is far fewer than on the public subnet (which we should refrain from using for the interconnect).

  • Copper cable / GigE Copper Interface as Private Interconnect for Oracle RAC

    Hello Gurus
Can someone confirm whether copper cables (Cat5/RJ45) can be used for Gigabit Ethernet, i.e. the private interconnects, when deploying Oracle RAC 9.x or 10gR2 on Solaris 9/10?
I am planning to use 2 x GigE interfaces (one port each from X4445 quad-port Ethernet adapters) and planning to connect them using copper cables. (All the documents I came across refer to fiber cables for private interconnects connecting GigE interfaces, so I am getting a bit confused.)
Would appreciate it if someone could throw some light on this.
    regards,
    Nilesh Naik
    thanks

Cat5/RJ45 can be used for Gigabit Ethernet private interconnects for Oracle RAC. I would recommend trunking the two or more interconnects for redundancy. The X4445 adapters are compatible with the Sun Trunking 1.3 software (http://www.sun.com/products/networking/ethernet/suntrunking/). If you have servers that support the Nemo framework (bge, e1000g, xge, nge, rge, ixgb), you can use the Solaris 10 link aggregation tool, dladm.
    We have a couple of SUN T2000 servers and are using the onboard GigE ports for the Oracle 10gR2 RAC interconnects. We upgraded the onboard NIC drivers to the e1000g and used the Solaris 10 trunking software. The next update of Solaris will have the e1000g drivers as the default for the SUN T2000 servers.

  • Redundancy at Private interconnect.

    Hi,
We are planning to set up a 2-node RAC. Our system admin has provided 2 NICs for the private interconnect, and we were looking to use both.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
    ether 0:21:28:69:a7:38
    My questions:
As per Oracle Support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)"
we can use IPMP grouping for the interconnect, but it is not very clear.
1) If I use an IPMP group, do I need to specify only one physical IP as cluster_interconnect, or all the IPs associated with the NICs? (Will this allow load balancing, or only failover?)
2) If we do not want to use IPMP, can I specify all the IP addresses of the NICs in the cluster_interconnects parameter? (This will not allow failover, only load balancing.)
    Regards
    Veera

    user7636989 wrote:
    Hi,
    We are planning to setup a 2 node RAC. Our system admin has provided 2 nics for private interconnect. We were looking to use both as private interconnect.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
ether 0:21:28:69:a7:38
A prerequisite for IPMP is that the participating IP interfaces should be in the same IP broadcast subnet, but your output above shows them to be on different subnets (192.168.1.0/24 and 192.168.2.0/24). That would need to be fixed before you can use IPMP (which supports both failover and load balancing). And if you used IPMP, then you would need to set up the cluster_interconnect to be the data address(es), i.e. the one(s) not set up with the NOFAILOVER flag.
    >
    My questions:
As per Oracle Support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)"
we can use IPMP grouping for the interconnect, but it is not very clear.
1) If I use an IPMP group, do I need to specify only one physical IP as cluster_interconnect, or all the IPs associated with the NICs? (Will this allow load balancing, or only failover?)
2) If we do not want to use IPMP, can I specify all the IP addresses of the NICs in the cluster_interconnects parameter? (This will not allow failover, only load balancing.)
    Regards
    Veera
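On question 2, the CLUSTER_INTERCONNECTS parameter takes a colon-separated list of addresses and is set per instance. A sketch, using the addresses from the ifconfig output above and an illustrative SID; note that, as the question says, this gives load spreading but no failover:

```shell
# Run as sysdba; repeat with the other node's addresses for its SID.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET cluster_interconnects = '192.168.1.119:192.168.2.119'
  SCOPE = SPFILE SID = 'orcl1';
EOF
# Restart the instance for the change to take effect.
```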

  • What is acceptable level of Private Interconnect Latency for RAC

We have built a 3-node RAC on RHEL 5.4 on VMware.
There is a node eviction problem due to loss of the network heartbeat.
    ocssd.log:[    CSSD]2010-03-05 17:48:21.908 [84704144] >TRACE: clssnmReadDskHeartbeat: node 3, vm-lnx-rds1173, has a disk HB, but no network HB, DHB has rcfg 0, wrtcnt, 2, LATS 1185024, lastSeqNo 2, timestamp 1267791501/1961474
    Ping statistics from Node2 to Node1 are as below
    --- rds1171-priv ping statistics ---
    443 packets transmitted, 443 received, 0% packet loss, time 538119ms
    rtt min/avg/max/mdev = 0.150/2.030/630.212/29.929 ms
    [root@vm-lnx-rds1172 oracle]#
Can this be the reason for the node eviction? What is an acceptable level of private interconnect latency for RAC?

What is an acceptable level of private interconnect latency for RAC?
Normal local network latency should be enough. By the way, the latency settings are very generous.
Can you check whether your to-be-evicted node is running and reachable when you see the node eviction messages?
In addition to that: can you check the log files of the evicted node? Check for timestamps around "2010-03-05 17:48:21.908". Make sure all systems are NTP-synchronized.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de

  • Gig Ethernet V/S  SCI as Cluster Private Interconnect for Oracle RAC

    Hello Gurus
Can anyone please confirm if it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
It's for a High Availability requirement of Oracle 9i RAC. I need to know:
1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?
2) What is the recommended private cluster interconnect for Oracle RAC? GigE, or SCI with RSM?
3) How about scenarios where one can have, say, 3 x GigE vs. 2 x SCI as the cluster's private interconnects?
4) How does the interconnect traffic get distributed amongst the multiple Gigabit Ethernet interconnects (for Oracle RAC), and does anything need to be done at the Oracle RAC level so that Oracle recognises there are multiple interconnect cards and starts utilizing all of the Gigabit Ethernet interfaces for transferring packets?
5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
I have tried searching for this info but could not locate any doc that precisely clarifies these doubts.
    thanks for the patience
    Regards,
    Nilesh

    Answers inline...
    Tim
Can any one pls confirm if it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
Yes, absolutely. You can configure up to 6 NICs for the private networks. Traffic is automatically striped across them if you specify clprivnet0 to Oracle RAC (9i or 10g). That covers both TCP connections and UDP messages.
It's for a High Availability requirement of Oracle 9i RAC. I need to know:
1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?
Yes, definitely.
2) What is the recommended private cluster interconnect for Oracle RAC? GigE, or SCI with RSM?
SCI is, or is in the process of being, EOL'ed. Gigabit is usually sufficient. Longer term you may want to consider InfiniBand or 10 Gigabit Ethernet with RDS.
3) How about the scenarios where one can have, say, 3 x GigE vs. 2 x SCI as the cluster's private interconnects?
I would still go for 3 x GbE because it is usually cheaper and will probably work just as well. The latency and bandwidth differences are often masked by the performance of the software higher up the stack. In short, unless you have tuned the heck out of your application and just about everything else, don't worry too much about the difference between GbE and SCI.
4) How does the interconnect traffic get distributed amongst the multiple Gigabit Ethernet interconnects (for Oracle RAC), and does anything need to be done at the Oracle RAC level so that Oracle recognises there are multiple interconnect cards and starts utilizing all of the Gigabit Ethernet interfaces for transferring packets?
You don't need to do anything at the Oracle level. That's the beauty of using Oracle RAC with Sun Cluster as opposed to RAC on its own. The striping takes place automatically and transparently behind the scenes.
5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
It's completely transparent. Oracle will never see the failure.
Have tried searching for this info but could not locate any doc that can precisely clarify these doubts...
This is all covered in a paper that I have just completed and should be published after Christmas. Unfortunately, I cannot give out the paper yet.
    thanks for the patience
    Regards,
    Nilesh
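On the operational side, the striped transport paths can be inspected from the OS. A sketch (`scstat -W` is the Sun Cluster 3.x transport-path view; clprivnet0 is the striped virtual interface RAC should be pointed at):

```shell
# Status of all cluster transport paths (Sun Cluster 3.x).
scstat -W

# The striped virtual private interface presented to Oracle RAC.
ifconfig clprivnet0
```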

  • Private Interconnect: Should any nodes other than RAC nodes have one?

The contractors that set up our four-node production 10g RAC (and a standalone development server) also assigned private interconnect addresses to 2 Apache/APEX servers and a standalone development database server.
There are service names in the tnsnames.ora on all servers in our infrastructure referencing these private interconnects, even the non-RAC member servers. The NICs on these servers are not bound for failover with the NICs bound to the public/VIP addresses. These NICs are isolated on their own switch.
Could this configuration be related to lost heartbeats or voting-disk errors? We experience RAC node expulsions and even arbitrary bounces (reboots!) of all the RAC nodes.

I do not have access to the contractors; I can only look at what they have left behind and try to figure out their intention.
I am reading the Ault/Tumha book Oracle 10g Grid and Real Application Clusters, looking through our own settings and config files, learning srvctl and crsctl commands from their examples, and also googling and searching OTN through the library full of documentation.
I still have yet to figure out whether the "private interconnect" spoken about so frequently in the cluster configuration documents is the binding to the set of node.vip address specifications in the tnsnames.ora (bound to the first eth adapter along with the public IP addresses for the nodes), or the binding on the second eth adapter to the node.prv addresses, which appear in neither the local pfile, the tnsnames.ora, nor the listener.ora (but do appear at the operating-system level in ifconfig). If the node.prv addresses are not the private interconnect, then can anyone tell me what they are for?

  • Private interconnect oracle 10g RAC configuration

    Can you please answer below questions?
Why does a private interconnect need a switch, and why are straight-through cables not supported?

Hi,
Why do you need a switch between the nodes? When the network plugs are pulled out of one node in a two-node cluster, a split-brain scenario occurs (that alone is reason enough).
If you are using a crossover cable and you shut down node (A), you will lose the private network link from node (B) (this happens on some servers). Oracle RAC will not work with the private network link down; both nodes will go down and will not start until you get link on the private network. (Goodbye, high availability.)
You will be very unhappy with the error ORA-29740. Hence this note: Troubleshooting ORA-29740 in a RAC Environment [ID 219361.1]
There is no "why": Oracle RAC does not support a crossover cable because RAC depends on a switch (it's a hardware requirement), and for any problem in your environment Oracle will force you to implement a supported solution.
You will have implemented a poor environment if you do not use a GB switch for the private network.
    Oracle Words:
    *Physical Layout of the Private Interconnect*
    The basic requirements are described in the Installation Guide for each platform. Additional information about certification can be found on Metalink Certify.
    The interconnect as identified by both subnet number and interface name must be configured on all clustered nodes.
    *A switch between the clustered nodes is an absolute requirement.*
    *Cluster Interconnect in Oracle 10g and 11g [ID 787420.1]*
    Regards,
    Levi Pereira

  • RAC 11R2 Private Interconnect Issue

    Friends
We set up our Oracle Clusterware on Solaris SPARC at version 11.2.0.3 with the PSU 2 patch set. Some changes happened at the OS level, and the wrong private interconnect IPs were picked up by our Oracle Clusterware registry.
The clusterware is down and we are not able to bring it up. We need to change the private IP configuration at the Oracle Clusterware level, but the clusterware is down.
    Is there any way we can change the configuration in private Interconnect ?
    Whenever we are trying to do a change. Getting the error message "PRIF-10: failed to initialize the cluster registry"
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
    PRIF-10: failed to initialize the cluster registry
    Thank You !
    Jai

    The clusterware is down. We are not able to bring up the clusterware. There will be a need to change the private IP configuration at the Oracle Clusterware level and now the clusterware is down.
    Is there any way we can change the configuration in private Interconnect ?
    Whenever we are trying to do a change. Getting the error message "PRIF-10: failed to initialize the cluster registry"
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
PRIF-10: failed to initialize the cluster registry
This error happens when the clusterware is down and you try to change the interconnect configuration; you must start Oracle Clusterware on the node before making the changes.
We are not able to bring up the clusterware. Some changes happen at the OS level and the private Interconnect IPs
Why is your clusterware not starting? Please post the alert log and crsd.log of the cluster (only the relevant info).
If the error in crsd.log is: PROC-44: Error in network address and interface operations
this indicates a mismatch between the OS settings (oifcfg iflist) and the GPnP profile settings (profile.xml).
Restore the OS network configuration back to its original state and start Oracle Clusterware. Then try to make the changes again.
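To see the mismatch being described, you can compare what the OS reports with what the GPnP profile records. A sketch (oifcfg and gpnptool ship with 11.2 Grid Infrastructure; the grep pattern is a rough filter, not an official interface):

```shell
# Interfaces and subnets the OS currently presents:
$GRID_HOME/bin/oifcfg iflist -p -n

# Network entries recorded in the local GPnP profile:
$GRID_HOME/bin/gpnptool get 2>/dev/null | grep -o '<gpnp:Network [^>]*>'
```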

  • Need procedure to change ip address on private interconnect in 11.2.0.3

    Could someone please send me the procedure to change the ip address of the private interconnect in 11gr2 rac (11.2.0.3)
    The interconnect has been configured using the default HAIP resource during installation of a 2 node cluster on the aix 6.1 platform. I have searched metalink but cannot find a doc with the procedure to make the ip address change.
    The sys admins gave us an ip address on the wrong subnet so now we have to change the ip address of the en1 interface.
    If anyone has steps in terms of shutting down the clusterware and correct order to make changes this would be very much appreciated.
    Thanks.

Thanks, I've seen this one too, but I was hoping to find some official Oracle documentation on this topic. I searched Metalink and there is a doc called
"Grid infrastructure everything you need to know", but it does not speak to this configuration change, or even to how to disable the clusterware in the event that you need to perform maintenance and do not want the clusterware to come online automatically.
Although I love Google too... if there is any official documentation on this topic, I would really appreciate knowing where it can be found.
    Thanks.
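For the record, the usual sequence is to register the new subnet first, move the OS address, and only then drop the old entry; there is a MOS note on modifying private network information in Oracle Clusterware, and the steps should be verified there before relying on this. A sketch, with en1 and both subnets as placeholders:

```shell
# 1. With the clusterware up, register the NEW private subnet alongside the old.
$GRID_HOME/bin/oifcfg setif -global en1/10.10.10.0:cluster_interconnect

# 2. Stop the clusterware on all nodes (as root).
crsctl stop crs

# 3. Change the en1 IP address at the OS level on each node, then restart:
crsctl start crs

# 4. Once the cluster is healthy on the new subnet, drop the old registration.
$GRID_HOME/bin/oifcfg delif -global en1/192.168.1.0
```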

  • INS-20802 Oracle Private Interconnect Configuration Assistant failed

    Thought I would post what information I've gathered, after facing this error during install of RAC Grid Infrastructure 11.2.0.1 on Red Hat Enterprise Linux Server release 5.5 64-bit, as Oracle Support is once again unable to help. Maybe this will save someone else some time and the aggravation of dealing with lousy Oracle Support.
    The error occurs after root.sh has successfully completed on all nodes. Oracle Net Configuration Assistant runs successfully, then Oracle Private Interconnect Configuration Assistant launches and subsequently fails with the following.
    [INS-20802] Oracle Private Interconnect Configuration Assistant failed.
    /u01/app/oraInventory/logs/installActions2010-12-13_01-26-10PM.log
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: PRIF-26: Error in update the profiles in the cluster
    INFO:
    WARNING:
    INFO: Completed Plugin named: Oracle Private Interconnect Configuration Assistant
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    I was able to find another error that coincides with the PRIF-26 error: "CRS-2324:Error(s) occurred while trying to push GPnP profile. State may be inconsistent."
    I was also able to duplicate the PRIF-26 error by trying to add a non-existent network interface via oifcfg:
    ./oifcfg setif -global jjj1/192.167.1.0:cluster_interconnect
    PRIF-26: Error in update the profiles in the cluster
My best guess is the Oracle Private Interconnect Configuration Assistant makes a call to oifcfg. When oifcfg makes an update or adds a public/private interface, some XML files are also updated or perhaps cross-referenced. These files are located here: <grid_home>/gpnp/<host>/profiles/peer
Any updates/changes/additions to the private or public interfaces include changes for the Grid Plug and Play component, which uses the XML files. If the interface name is not contained in the XML files, my best guess is that this triggers the "CRS-2324: Error(s) occurred while trying to push GPnP profile. State may be inconsistent."
I verified everything was configured correctly; the cluster verification utility reported everything was OK. I also ran the cluster verification utility against GPnP:
    ./cluvfy comp gpnp -verbose
    I also reviewed the public and private interfaces via oifcfg and they are correct:
    [oracle@ryoradb1 bin]$ ./oifcfg getif -global
    eth0 10.10.2.0 global public
    eth1 192.167.1.0 global cluster_interconnect
    [oracle@ryoradb1 bin]$ ./oifcfg iflist -p
    eth0 10.10.2.0 PRIVATE
    eth1 192.167.1.0 UNKNOWN
    My conclusion is the environment is configured correctly, in spite of the error generated by the Oracle Private Configuration Assistant.

I understand that you have installed 11.2.0.1, not 11.2.0.2; multicasting must be enabled if you have installed 11.2.0.2, and without it you may face these sorts of problems because the cluster nodes would not be able to communicate with each other.
Please check ocssd.log, especially on the first node, because this file will give more information as to why the first node is not able to push the GPnP file. You have executed cluvfy to check GPnP, but to confirm that the GPnP profile is accurate and narrow down the problem, I would suggest you try to start the cluster in exclusive mode, so that you can be sure the GPnP profile is good.
Shut down CRS on all nodes; if there are any hung processes, kill them before executing the following command to start the cluster in exclusive mode.
$GRID_HOME/bin/crsctl start crs -excl
If you are able to start the cluster in exclusive mode, then GPnP is certainly correct, and the next step would be to verify the private network.
See how you go.
FYI, I was recently analyzing the same sort of problem, where the cluster was not able to access the GPnP profile, and I finally found issues on my private network. The customer had enabled IGMP snooping, which was blocking multicast communication over the private network; but that was 11.2.0.2, which is not the case here.
    Harish Kumar
    http://www.oraxperts.com

  • Crs Not Starting _ private Interconnect Down

Hello All,
I have installed a 2-node 10gR2 (10.2.0.1) RAC on Solaris 10 T2000 machines. Yesterday the CRS on my second node went down. I tried to start it but it didn't start. Then I found that the private IP (interconnect) was not pinging from either node. But node 1 was up and working, so my users could connect to it.
But this morning I see that CRS on node 1 has also gone down.
Is this a problem with the private interconnect? My network guys are trying to bring the private interconnect up.
If the private interconnect is down, why did node 1 go down after a few hours? I thought the private interconnect was for communication with node 2, but node 2 is down.
Previously my interconnect was connected with crossover cables; now I have asked them to connect it through a switch.
Help me out.
    Regards,
    Pankaj.

Previously my interconnect was connected with crossover cables; now I have asked them to connect it through a switch.
We are also planning to do the same. Please share your experience. I hope you have done this before: moving to a switch.
    (Update for record id(s): 105681546)
    QUESTION
    ========
    1.Will the database and the Clusterware need to be shutdown etc?
    2.Will our ip addresses need to be reconfigured?
    3.Are there any steps that need to be carried out before unplugging the CROSS CABLE
    and after the interconnect is connected to the switch...?
    ANSWER
    ======
    1. Yes, you have to stop CRS on each node.
2. No, not required, provided you are planning to use the same IP addresses.
    3. Steps:
    a. Stop CRS on each node. "crsctl stop crs"
    b. Replace the crossover cable with switch.
    c. Start the CRS on each node. "crsctl start crs"
We are also planning to do the same. Please share your experience.
Following the above answers from Customer Support, it went smoothly; we stopped all the services and rebooted both nodes.
    Message was edited by:
    Ravi Prakash
