OAS with CRS, Private Interconnect

Hi,
When using CRS with OAS only, do you still need a private interconnect like you would with RAC databases?
Thanks.

That's what I thought, but I was told that wasn't true. Did you set up your OAS/CRS combo with a private interconnect?
Thanks!

Similar Messages

  • Unplug of private interconnect cable restarts the system!

    Dear All,
    My database is a 2-node RAC, Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production, on Linux.
    I had a strange experience with the private interconnect between the 2 nodes. I needed to replace the switch between the nodes with a new and better one. When I unplugged the cable from one node, the machine restarted. Similarly, when I unplugged the cable from the other node, the same machine that had restarted earlier restarted again.
    Have you ever faced such a problem? What can I investigate? Any clue, please.
    I also want to know: what is the difference if we set the remote listener to an IP address instead of the SCAN name?
    SQL> show parameter remote_listener
    NAME TYPE VALUE
    remote_listener string 10.168.20.29:1521
    This is the only thing I suspect of causing the problem. All other configuration is as per the documentation.
    Your kind help will be appreciated.
    Regards,
    Imran
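    (For reference, a hedged sketch of pointing remote_listener at the SCAN name rather than a fixed IP, which is the usual 11.2 recommendation; the SCAN name below is made up:)
    SQL> alter system set remote_listener='racdb-scan.example.com:1521' scope=both sid='*';
    SQL> show parameter remote_listener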

    Yes, as far as I have seen, this seems to be the problem.
    CRS was already not accessible by the nodes, and when I unplugged the cable from one node it restarted, and the SCAN IP was not accessible for clients to connect to the database.
    I configured Oracle ASM on SAN device using multipath.
    Now in OEM i see:
    Serviced Databases
    Name     Disk Groups     Failure Groups     Allocated Space (GB)     Availability     Alerts
    racdbdb_racdbdb1     FRA, DATA     n/a     288.27     [Availability]      21
    RACDB-cluster     CRS     n/a     0.26     Not Monitored
    But when I run this query:
    SQL> SELECT NAME, STATE, OFFLINE_DISKS FROM V$ASM_DISKGROUP;
    NAME STATE OFFLINE_DISKS
    CRS MOUNTED 0
    DATA CONNECTED 0
    FRA CONNECTED 0
    How can I check that the voting disk is functional? Kindly help.
    Regards,
    Imran
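    (A quick way to check the voting files from the command line, assuming 11.2 Grid Infrastructure; a hedged sketch, run as the Grid home owner:)
    $ crsctl query css votedisk      # lists each voting file, its state and its disk group
    $ crsctl check cluster -all      # confirms the clusterware stack is online on every node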

  • Unplug of private interconnect cable but the machine didn't restart

    Dear All,
    I have a RAC Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit database on Linux.
    I had a strange experience with the private interconnect between the 2 nodes. When I tested unplugging the private interconnect link on one of the nodes, the machine didn't reboot. But when I checked the cluster log, it stated that one of the nodes was rebooted.
    2013-10-07 12:34:18.570
    [cssd(7565)]CRS-1612:Network communication with node centaurus22 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.360 seconds
    2013-10-07 12:34:25.572
    [cssd(7565)]CRS-1611:Network communication with node centaurus22 (2) missing for 75% of timeout interval.  Removal of this node from cluster in 7.360 seconds
    2013-10-07 12:34:30.573
    [cssd(7565)]CRS-1610:Network communication with node centaurus22 (2) missing for 90% of timeout interval.  Removal of this node from cluster in 2.360 seconds
    2013-10-07 12:34:32.935
    [cssd(7565)]CRS-1607:Node centaurus22 is being evicted in cluster incarnation 272740834; details at (:CSSNM00007:) in /opt/app/11.2.0/grid/log/centaurus21/cssd/ocssd.log.
    2013-10-07 12:34:34.937
    [cssd(7565)]CRS-1625:Node centaurus22, number 2, was manually shut down
    2013-10-07 12:34:34.952
    [cssd(7565)]CRS-1601:CSSD Reconfiguration complete. Active nodes are centaurus21 .
    2013-10-07 12:34:34.965
    [crsd(8720)]CRS-5504:Node down event reported for node 'centaurus22'.
    2013-10-07 12:34:36.427
    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'Generic'.
    2013-10-07 12:34:36.428
    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'ora.SASDB'.
    2013-10-07 18:46:28.633
    Have you ever faced this problem?
    Your kind help will be appreciated.
    Thank you
    Regards,
    Izzudin Hanafie

    Rebootless fencing was introduced in 11.2.0.2 Grid Infrastructure: instead of rebooting a node when an eviction happens, as in pre-11.2.0.2 releases, Clusterware will attempt to stop GI gracefully on the evicted node to avoid a node reboot.
    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.pdf

  • CRS Not Starting - Private Interconnect Down

    Hello All,
    I have installed a 2-node 10g R2 (10.2.0.1) RAC on Solaris 10 T2000 machines. Yesterday the CRS on my second node went down. I tried to start it but it didn't start. Then I found that the private IP (interconnect) was not pinging from either node. But node 1 was up and working, so my users could connect to it.
    But this morning I see that the CRS on node 1 has also gone down.
    Is this a problem with the private interconnect? My network guys are trying to bring the private interconnect back up.
    If the private interconnect is down, why did node 1 go down after a few hours? I thought the private interconnect was only for communicating with node 2, but node 2 is down.
    Previously my interconnect was connected with crossover cables; now I have asked them to connect the nodes through a switch.
    Help me out.
    Regards,
    Pankaj.

    Previously my interconnect was connected with crossover cables; now I have asked them to connect them through a switch.
    We are planning to do the same. Please share your experience - hope you have done this before (moving to a switch).
    QUESTION
    ========
    1. Will the database and the Clusterware need to be shut down, etc.?
    2. Will our IP addresses need to be reconfigured?
    3. Are there any steps that need to be carried out before unplugging the CROSS CABLE
    and after the interconnect is connected to the switch...?
    ANSWER
    ======
    1. Yes, you have to stop CRS on each node.
    2. No, not required, provided you are planning to use the same IP addresses.
    3. Steps:
    a. Stop CRS on each node. "crsctl stop crs"
    b. Replace the crossover cable with the switch.
    c. Start the CRS on each node. "crsctl start crs"
    We are planning to do the same. Please share your experience.
    Following the above answer from Customer Support, it went smoothly: we stopped all the services and rebooted both nodes.
    Message was edited by:
    Ravi Prakash
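    (After the switch swap, a minimal hedged check that the interconnect is back as expected:)
    $ oifcfg getif                   # interface/subnet registered as cluster_interconnect
    $ crsctl check crs               # clusterware stack healthy on each node
    SQL> select * from gv$cluster_interconnects;   -- interface actually used by each instance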

  • Mysterious IP associated with Private Interconnect appears in ifconfig output

    Grid Version: 11.2.0.4
    Platform : Oracle Linux 6.4
    We use bond1 for our private interconnect. The subnet for this is 10.5.51.xxxx.
    But another IP appears in the ifconfig and oifcfg iflist output. It is shown under bond1:1 in the ifconfig output below (the 169.254 address).
    This seems to be configured by Clusterware. Does anyone know what this is and the role it plays?
    # /u01/grid/product/11.2/bin/oifcfg iflist
    bond0  10.5.19.0
    bond1  10.5.51.0
    bond1  169.254.0.0
    bond2  10.5.12.0
    bond3  10.5.34.0
    # /u01/grid/product/11.2/bin/oifcfg getif
    bond1  10.5.51.0  global  cluster_interconnect
    bond2  10.5.12.0  global  public
    # ifconfig -a
    bond0     Link encap:Ethernet  HWaddr A0:36:9F:26:2A:28
              inet addr:10.5.19.46  Bcast:10.5.19.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:21093181 errors:0 dropped:4246520 overruns:0 frame:0
              TX packets:8781028 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:15996129115 (14.8 GiB)  TX bytes:10913945403 (10.1 GiB)
    bond1     Link encap:Ethernet  HWaddr A0:36:9F:26:2A:29
              inet addr:10.5.51.25  Bcast:10.5.51.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
              RX packets:16957228829 errors:0 dropped:91363134 overruns:91350881 frame:0
              TX packets:16507123680 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:22253066948104 (20.2 TiB)  TX bytes:18486193782633 (16.8 TiB)
    bond1:1   Link encap:Ethernet  HWaddr A0:36:9F:26:2A:29
              inet addr:169.254.123.209  Bcast:169.254.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
    <output snipped>

    This is expected. From release 11.2.0.2 (if I remember correctly) your interconnect traffic goes over a 169.254.x.x link-local network, configured by HAIP. You are meant to expose all the NICs to Oracle and let HAIP handle load balancing and failover: do not use bonding.
    I described the mechanism in a tutorial I recorded some time ago, I think it was this one:
    http://skillbuilders.com/webinars/webinar.cfm?id=79&title=Oracle%2011g%20/%2012c%20Grid%20Infrastructure%20Free%20Tutorial
    John Watson
    Oracle Certified Master DBA
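    (To see the HAIP addresses described above, a hedged sketch for 11.2.0.2+, run from the Grid home:)
    $ crsctl stat res -t -init                     # look for ora.cluster_interconnect.haip
    $ oifcfg iflist -p -n                          # the 169.254.0.0 entry is the HAIP subnet
    SQL> select name, ip_address from v$cluster_interconnects;   -- shows the 169.254.x.x address in use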

  • INS-20802 Oracle Private Interconnect Configuration Assistant failed

    Thought I would post what information I've gathered, after facing this error during install of RAC Grid Infrastructure 11.2.0.1 on Red Hat Enterprise Linux Server release 5.5 64-bit, as Oracle Support is once again unable to help. Maybe this will save someone else some time and the aggravation of dealing with lousy Oracle Support.
    The error occurs after root.sh has successfully completed on all nodes. Oracle Net Configuration Assistant runs successfully, then Oracle Private Interconnect Configuration Assistant launches and subsequently fails with the following.
    [INS-20802] Oracle Private Interconnect Configuration Assistant failed.
    /u01/app/oraInventory/logs/installActions2010-12-13_01-26-10PM.log
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: Starting 'Oracle Private Interconnect Configuration Assistant'
    INFO: PRIF-26: Error in update the profiles in the cluster
    INFO:
    WARNING:
    INFO: Completed Plugin named: Oracle Private Interconnect Configuration Assistant
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    INFO: Oracle Private Interconnect Configuration Assistant failed.
    I was able to find another error that coincides with the PRIF-26 error: "CRS-2324:Error(s) occurred while trying to push GPnP profile. State may be inconsistent."
    I was also able to duplicate the PRIF-26 error by trying to add a non-existent network interface via oifcfg:
    ./oifcfg setif -global jjj1/192.167.1.0:cluster_interconnect
    PRIF-26: Error in update the profiles in the cluster
    My best guess is that the Oracle Private Interconnect Configuration Assistant makes a call to oifcfg. When oifcfg updates or adds a public/private interface, some XML files are also updated or cross-referenced. These files are located here: <grid_home>/gpnp/<host>/profiles/peer
    Any updates/changes/additions to the private or public interfaces include changes for the Grid Plug and Play (GPnP) component, which uses these XML files. If the interface name is not contained in the XML files, my best guess is that this triggers the "CRS-2324:Error(s) occurred while trying to push GPnP profile. State may be inconsistent." error.
    I verified everything was configured correctly; the cluster verification utility reported everything was OK. I also ran the cluster verification utility against GPnP:
    ./cluvfy comp gpnp -verbose
    I also reviewed the public and private interfaces via oifcfg and they are correct:
    [oracle@ryoradb1 bin]$ ./oifcfg getif -global
    eth0 10.10.2.0 global public
    eth1 192.167.1.0 global cluster_interconnect
    [oracle@ryoradb1 bin]$ ./oifcfg iflist -p
    eth0 10.10.2.0 PRIVATE
    eth1 192.167.1.0 UNKNOWN
    My conclusion is that the environment is configured correctly, in spite of the error generated by the Oracle Private Interconnect Configuration Assistant.
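    (To cross-check the interfaces oifcfg reports against the GPnP profile mentioned above, a hedged sketch; placeholders as used in the post:)
    $ $GRID_HOME/bin/gpnptool get                                    # dumps the current GPnP profile XML
    $ grep -i adapter $GRID_HOME/gpnp/<host>/profiles/peer/profile.xml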

    I understand that you have installed 11.2.0.1, not 11.2.0.2. Multicast must be enabled if you have installed 11.2.0.2, and you may face this sort of problem because the cluster nodes would not be able to communicate with each other.
    Please check ocssd.log, especially on the first node, because this file will give more information as to why the first node is not able to push the GPnP profile. You have already executed cluvfy to check GPnP, but to confirm that the GPnP profile is accurate and narrow down the problem, I would suggest you try to start the cluster in exclusive mode so that you can be sure the GPnP profile is good.
    Shut down CRS on all nodes; if there are any hung processes, kill them before executing the following command to start the cluster in exclusive mode.
    $GRID_HOME/bin/crsctl start crs -excl
    If you are able to start the cluster in exclusive mode, then the GPnP profile is good, and the next step would be to verify the private network.
    See how you go.
    FYI, I was recently analyzing the same sort of problem, where the cluster was not able to access the GPnP profile, and I finally found issues on my private network. The customer had enabled IGMP snooping, which was blocking multicast communication over the private network - but that was 11.2.0.2, which is not the case here.
    Harish Kumar
    http://www.oraxperts.com

  • Shared private interconnect between 2 clusters

    Hello to everyone.
    I am just wondering if somebody could answer me or show proper direction.
    Is it allowed for two or more clusters to share the same private interconnect network, or is a dedicated private interconnect required for each cluster?
    Traffic is not an issue here because only Oracle Grid Infrastructure (say 11.2.0.2) will be installed on each cluster (no Oracle RAC). We are going to use the CRS failover feature.

    Of course you can share the network; just make sure that multicast is tested before the installation.
    Be careful with too much sharing, though: if the bandwidth gets so saturated that network timeouts happen, nodes will be evicted. But as long as you have a good network, this is fine.

  • Private Interconnect redundancy

    Grid Version : 11.2.0.2
    OS : Solaris 10 on HP Proliant
    Currently we have a 2-node RAC running with 4 live DBs.
    Currently our private interconnect is
    ### Current Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
    169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
    To have redundancy for the private interconnect, after repeated requests, our Unix team has finally attached a redundant NIC to each node, connected to a redundant Gigabit Ethernet switch.
    So, we need to add the NICs below to the CRS. How can we do that?
    ###Redundant Private Interconnect (currently attached to the server, but yet to be 'included' in the cluster)
    169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
    169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC

    Citizen_2 wrote:
    Grid Version : 11.2.0.2
    OS : Solaris 10 on HP Proliant
    Currently we have a 2-node RAC running with 4 live DBs.
    Currently our private interconnect is
    ### Current Private Interconnect
    169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1
    169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
    To have redundancy for the private interconnect, after repeated requests, our Unix team has finally attached a redundant NIC to each node, connected to a redundant Gigabit Ethernet switch.
    You can use IPMP (IP Multipathing) in Solaris.
    First, note that these should be NON-ROUTABLE addresses configured on a PRIVATE-Dedicated Switch. It would look something like this:
    169.21.204.1 scnuprd186-privt1-IPMPvip.mvtrs.net scnuprd186-privt1-IPMPvip
    169.21.204.2 scnuprd186-privt1-nic1.mvtrs.net scnuprd186-privt1-nic1 eth2
    169.21.204.3 scnuprd186-privt1-nic2.mvtrs.net scnuprd186-privt1-nic2 eth3
    169.21.204.4 scnuprd187-privt1-IPMPvip.mvtrs.net scnuprd187-privt1-IPMPvip
    169.21.204.5 scnuprd187-privt1-nic1.mvtrs.net scnuprd187-privt1-nic1 eth2
    169.21.204.6 scnuprd187-privt1-nic2.mvtrs.net scnuprd187-privt1-nic2 eth3
    IPMP has a "real address" for each "real" interface and the IPMPvip's will "float" between the eth2 and eth3 devices depending on which one is active. Similar to the way the host vip can "float" between nodes. It is the IPMPvip addresses that are provided to the CRS configuration.
    I have used this on Sun 6900's and it worked great.
    Now, it can get extremely complicated if you were to also use IPMP on the public interfaces as well. It does work, you just need to pay attention to how you configure it.
    >
    So, we need to add the below NIC to the CRS. How can we do that?
    ###Redundant Private Interconnect (currently attached to the server, but yet to be 'included' in the cluster)
    169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
    169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
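    (Building on the IPMP layout described above, a minimal Solaris 10 sketch of the /etc/hostname.* files for node 1; the eth2/eth3 names follow the example above, and the exact flags should be verified against the Solaris IPMP documentation:)
    # /etc/hostname.eth2
    scnuprd186-privt1-nic1 netmask + broadcast + group priv_ipmp deprecated -failover up addif scnuprd186-privt1-IPMPvip netmask + broadcast + failover up
    # /etc/hostname.eth3
    scnuprd186-privt1-nic2 netmask + broadcast + group priv_ipmp deprecated -failover up
    It is the floating IPMPvip address that is then handed to the CRS configuration, as noted above.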

  • Need procedure to change ip address on private interconnect in 11.2.0.3

    Could someone please send me the procedure to change the IP address of the private interconnect in 11gR2 RAC (11.2.0.3)?
    The interconnect was configured using the default HAIP resource during installation of a 2-node cluster on the AIX 6.1 platform. I have searched Metalink but cannot find a doc with the procedure to make the IP address change.
    The sysadmins gave us an IP address on the wrong subnet, so now we have to change the IP address of the en1 interface.
    If anyone has the steps, in terms of shutting down the clusterware and the correct order in which to make the changes, it would be very much appreciated.
    Thanks.

    Thanks, I have seen that one also, but I was hoping for some official documentation from Oracle on this topic. I searched Metalink and there is a doc called
    "Grid infrastructure everything you need to know", but it does not speak to this configuration change, or even to how to disable the clusterware when you need to perform maintenance and do not want the clusterware to come online automatically.
    I love Google too, but if there is any official documentation on this topic I would really appreciate knowing where it can be found.
    Thanks.
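    (For what it's worth, the approach usually described for 11.2 with HAIP is oifcfg-based; a hedged sketch only, with the subnets left as placeholders, to be verified against My Oracle Support before use:)
    $ oifcfg getif                                                  # note the current en1 subnet
    $ oifcfg setif -global en1/<new_subnet>:cluster_interconnect    # register the new subnet while the cluster is still up
    # crsctl stop crs        (as root, on every node)
    # change the en1 IP address at the OS level on every node
    # crsctl start crs       (as root, on every node)
    $ oifcfg delif -global en1/<old_subnet>                         # finally drop the old subnet definition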

  • Redundancy at Private interconnect.

    Hi,
    We are planning to set up a 2-node RAC. Our system admin has provided 2 NICs for the private interconnect, and we would like to use both.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
    ether 0:21:28:69:a7:38
    My questions:
    As per Oracle Support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)",
    we can use an IPMP group for the interconnect, but it is not very clear.
    1) If I use an IPMP group, do I need to specify only one physical IP as the cluster_interconnect, or all the IPs associated with the NICs? (Will this allow load balancing, or only failover?)
    2) If we do not want to use IPMP, can I specify all the NIC IP addresses in the cluster_interconnects parameter? (This will not allow failover, only load balancing.)
    Regards
    Veera

    user7636989 wrote:
    Hi,
    We are planning to setup a 2 node RAC. Our system admin has provided 2 nics for private interconnect. We were looking to use both as private interconnect.
    Operating environment
    Solaris 10
    Oracle 10g R2 (clusterware, rdbms)
    Current configuration of NICs provided for interconnect.
    nxge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
    inet 192.168.1.119 netmask ffffff00 broadcast 192.168.1.255
    ether 0:21:28:69:a7:37
    nxge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 192.168.2.119 netmask ffffff00 broadcast 192.168.2.255
    ether 0:21:28:69:a7:38
    A prerequisite for IPMP is that the participating IP interfaces should be in the same IP broadcast
    subnet, but your output above shows them to be on different subnets (192.168.1.0/24 and
    192.168.2.0/24). That would need to be fixed before you can use IPMP (which supports
    both failover and load balancing). And if you use IPMP, then you need to set the
    cluster_interconnects parameter to the data address(es) - the one(s) not set up with
    the NOFAILOVER flag.
    >
    My questions:
    As per oracle support note "How to Setup IPMP as Cluster Interconnect (Doc ID 368464.1)"
    we can use IPMP grouping for Interconnect, but it is not very clear.
    1) If I use IPMP group do i need to specify only one physical ip as cluster_interconnect or all the ips associated to NIC. (Will this allow load balancing or only failover).
    2) If we do not want to use IPMP can I specify all the IP address of NICs in cluster_interconnects parameter (This will not allow failover only load balancing).
    Regards
    Veera
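    (For question 2, a hedged sketch of how cluster_interconnects is typically set per instance in 10gR2; the instance names and the second node's addresses are made up, and as noted above this trades away failover:)
    SQL> alter system set cluster_interconnects='192.168.1.119:192.168.2.119' scope=spfile sid='RAC1';
    SQL> alter system set cluster_interconnects='192.168.1.120:192.168.2.120' scope=spfile sid='RAC2';
    -- restart the instances for the change to take effect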

  • Gig Ethernet V/S  SCI as Cluster Private Interconnect for Oracle RAC

    Hello Gurus,
    Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
    It's for a high-availability requirement of Oracle 9i RAC. I need to know:
    1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?
    2) What is the recommended private cluster interconnect for Oracle RAC? Gigabit Ethernet, or SCI with RSM?
    3) What about scenarios where one has, say, 3 x Gigabit Ethernet vs. 2 x SCI as the cluster's private interconnects?
    4) How does interconnect traffic get distributed among the multiple Gigabit Ethernet interconnects (for Oracle RAC), and does anything need to be done at the Oracle RAC level for Oracle to recognise that there are multiple interconnect cards and start utilizing all of the Gigabit Ethernet interfaces for transferring packets?
    5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
    I have tried searching for this info but could not locate any doc that precisely clarifies these doubts.
    Thanks for the patience.
    Regards,
    Nilesh

    Answers inline...
    Tim
    Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
    Yes, absolutely. You can configure up to 6 NICs for the private networks. Traffic is automatically striped across them if you specify clprivnet0 to Oracle RAC (9i or 10g) - that applies to TCP connections and UDP messages.
    1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?
    Yes, definitely.
    2) What is the recommended private cluster interconnect for Oracle RAC? Gigabit Ethernet, or SCI with RSM?
    SCI is, or is in the process of being, EOL'ed. Gigabit is usually sufficient. Longer term you may want to consider InfiniBand or 10 Gigabit Ethernet with RDS.
    3) What about, say, 3 x Gigabit Ethernet vs. 2 x SCI as the cluster's private interconnects?
    I would still go for 3 x GbE because it is usually cheaper and will probably work just as well. The latency and bandwidth differences are often masked by the performance of the software higher up the stack. In short, unless you have tuned the heck out of your application and just about everything else, don't worry too much about the difference between GbE and SCI.
    4) How does interconnect traffic get distributed among the multiple Gigabit Ethernet interconnects, and does anything need to be done at the Oracle RAC level?
    You don't need to do anything at the Oracle level. That's the beauty of using Oracle RAC with Sun Cluster as opposed to RAC on its own. The striping takes place automatically and transparently behind the scenes.
    5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
    It's completely transparent. Oracle will never see the failure.
    This is all covered in a paper that I have just completed and that should be published after Christmas. Unfortunately, I cannot give out the paper yet.

  • Copper cable / GigE Copper Interface as Private Interconnect for Oracle RAC

    Hello Gurus
    Can someone confirm whether copper cables (Cat5/RJ45) can be used for Gigabit Ethernet private interconnects when deploying Oracle RAC 9.x or 10gR2 on Solaris 9/10?
    I am planning to use 2 x GigE interfaces (one port each from X4445 Quad Port Ethernet Adapters) and to connect them using copper cables. All the documents that I came across refer to fiber cables for the private interconnects connecting GigE interfaces, so I am getting a bit confused.
    I would appreciate it if someone could throw some light on this.
    Regards,
    Nilesh Naik
    Thanks

    Cat5/RJ45 cables can be used for Gigabit Ethernet private interconnects for Oracle RAC. I would recommend trunking the two or more interconnects for redundancy. The X4445 adapters are compatible with the Sun Trunking 1.3 software (http://www.sun.com/products/networking/ethernet/suntrunking/). If you have servers that support the Nemo framework (bge, e1000g, xge, nge, rge, ixgb), you can use the Solaris 10 trunking software, dladm.
    We have a couple of SUN T2000 servers and are using the onboard GigE ports for the Oracle 10gR2 RAC interconnects. We upgraded the onboard NIC drivers to the e1000g and used the Solaris 10 trunking software. The next update of Solaris will have the e1000g drivers as the default for the SUN T2000 servers.
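    (A hedged Solaris 10 sketch of what the dladm-based aggregation looks like; the device names are illustrative:)
    # dladm create-aggr -d e1000g2 -d e1000g3 1    # aggregate two GigE ports as aggr1
    # dladm show-aggr                              # verify the aggregation
    # then configure the interconnect address on aggr1 via /etc/hostname.aggr1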

  • Private Interconnect: Should any nodes other than RAC nodes have one?

    The contractors that set up our four-node production 10g RAC (and a standalone development server) also assigned private interconnect addresses to 2 Apache/APEX servers and a standalone development database server.
    There are service names in the tnsnames.ora on all servers in our infrastructure referencing these private interconnects - even on the non-RAC member servers. The NICs on these servers are not bound for failover with the NICs bound to the public/VIP addresses. These NICs are isolated on their own switch.
    Could this configuration be related to lost heartbeats or voting disk errors? We experience RAC node evictions and even arbitrary bounces (reboots!) of all the RAC nodes.

    I do not have access to the contractors; I can only look at what they have left behind and try to figure out their intention.
    I am reading the Ault/Tumma book Oracle 10g Grid and Real Application Clusters, looking through our own settings and config files, and learning the srvctl and crsctl commands from their examples. I am also googling and searching OTN through the library full of documentation.
    I still have yet to figure out whether the private interconnect spoken about so frequently in the cluster configuration documents is the binding to the set of node.vip address specifications in tnsnames.ora (bound to the first eth adapter along with the public IP addresses for the nodes), or the binding on the second eth adapter to the node.prv addresses, which are not found in the local pfile, tnsnames.ora, or listener.ora (but are found at the operating-system level in ifconfig). If the node.prv addresses are not the private interconnect, can anyone tell me what they are for?

  • RAC 11R2 Private Interconnect Issue

    Friends
    We set up our Oracle Clusterware on Solaris SPARC, version 11.2.0.3 with the PSU 2 patch set. Some changes happened at the OS level, and the wrong private interconnect IPs were picked up in our Oracle Clusterware registry.
    The clusterware is down and we are not able to bring it up. We need to change the private IP configuration at the Oracle Clusterware level, but the clusterware is down.
    Is there any way we can change the private interconnect configuration?
    Whenever we try to make a change, we get the error message "PRIF-10: failed to initialize the cluster registry":
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
    PRIF-10: failed to initialize the cluster registry
    Thank You !
    Jai

    The clusterware is down. We are not able to bring up the clusterware. There will be a need to change the private IP configuration at the Oracle Clusterware level and now the clusterware is down.
    Is there any way we can change the configuration in private Interconnect ?
    Whenever we are trying to do a change. Getting the error message "PRIF-10: failed to initialize the cluster registry"
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect
    PRIF-10: failed to initialize the cluster registry
    This error happens when the clusterware is down and you try to change the interconnect configuration; you must start Oracle Clusterware on the node before you can make the change.
    We are not able to bring up the clusterware. Some changes happened at the OS level and the private interconnect IPs...
    Why is your clusterware not starting? Please post the cluster alert log and crsd.log (only the relevant info).
    If the error in crsd.log is "PROC-44: Error in network address and interface operations", it indicates a mismatch between the OS settings (oifcfg iflist) and the GPnP profile settings in profile.xml.
    Restore the OS network configuration back to its original state and start Oracle Clusterware. Then try to make the changes again.
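    (A hedged sketch of the sequence described above, with the old subnet left as a placeholder:)
    # 1. restore the original private IP on vnet2 at the OS level, then (as root, on each node):
    #    crsctl start crs
    $ oifcfg getif                                                   # confirm OS and registry agree again
    $ oifcfg setif -global vnet2/10.131.239.0:cluster_interconnect   # register the new subnet
    # 2. crsctl stop crs on all nodes, move the vnet2 address to the new subnet at the OS level,
    #    crsctl start crs, then: oifcfg delif -global vnet2/<old_subnet>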

  • Private interconnect of an Oracle 10g cluster

    Can you please answer the questions below?
    Is a direct connection between two nodes supported on the private interconnect of an Oracle 10g cluster?
    We know that crossover cables are not supported, but what about a Gigabit network with a straight cable?

    Hi,
    I really wouldn't suggest that approach; it is definitely not efficient and not flexible:
    - If you have 4 nodes and node 1 wants to send a message to node 4, the packet must go through nodes 2 and 3. Is that efficient? Absolutely not.
    - If you have, e.g., 2 nodes and the link goes down on one of them, the other node's link will also go down; this will most likely evict both nodes instead of one.
    - The size of your cluster is limited by the cabling, which is not flexible.
    - etc., etc. - more disadvantages than advantages.
    Cheers,
    FZheng
