Oracle 9i RAC interconnect

Hi,
I need clarification on the Oracle 9i RAC interconnect. We recently installed Oracle 9i RAC (9.2.0.4) on AIX 5.3 with an HACMP cluster for 2 nodes. We were not able to mount the second instance of the database after starting the first instance.
The problem was solved after adding the CLUSTER_INTERCONNECTS parameter to the init.ora file. On all our other servers this parameter is not set and Oracle selects the interconnect automatically; only this new server has the problem. I want to know where Oracle takes the interconnect from if we don't set the CLUSTER_INTERCONNECTS parameter. I have also verified the /etc/hosts file and the entries are present there. Requesting your help in this regard.

Where the IP information is stored and managed depends on your implementation, for example whether you are using third-party clusterware. If you are using Oracle Clusterware, it is stored in the OCR.
You can check this using the following query...
SELECT * FROM GV$CLUSTER_INTERCONNECTS;
The SOURCE column will tell you where the interconnect information was obtained.
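For illustration, here is a hedged sketch of what that query might return (the interface names, addresses, and exact SOURCE text below are made up and vary by platform and release):

SELECT inst_id, name, ip_address, source
FROM gv$cluster_interconnects;

-- INST_ID  NAME  IP_ADDRESS  SOURCE
-- -------  ----  ----------  -------------------------
--       1  en1   10.0.0.1    Oracle Cluster Repository
--       2  en1   10.0.0.2    Oracle Cluster Repository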
BTW, if you are using the CLUSTER_INTERCONNECTS parameter you may lose some of the HA features; here is something from the 10gR2 documentation:
" Failover and Failback and CLUSTER_INTERCONNECTS
Some operating systems support run-time failover and failback. However, if you use the CLUSTER_INTERCONNECTS initialization parameter, then failover and failback are disabled."
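For reference, if you do keep the parameter, here is a minimal sketch of pinning the interconnect per instance (the addresses and SIDs are hypothetical; with a plain init.ora, as in your case, the equivalent is a cluster_interconnects line in each instance's parameter file):

-- one entry per instance when using an spfile (static parameter,
-- so SCOPE=SPFILE and an instance restart are required)
ALTER SYSTEM SET cluster_interconnects = '10.0.0.1' SCOPE = SPFILE SID = 'rac1';
ALTER SYSTEM SET cluster_interconnects = '10.0.0.2' SCOPE = SPFILE SID = 'rac2';
-- init.ora equivalent, per instance: cluster_interconnects="10.0.0.1"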
Question: have you tried other options, such as NIC pairing and bonding, to get dual interconnects instead of using the CLUSTER_INTERCONNECTS parameter?
Please check Metalink Note 298891.1, which talks about configuring NIC bonding on Linux; there are similar options for other operating systems.
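If you want to double-check which interface the instances actually picked, one commonly used approach is an IPC dump from SQL*Plus (a sketch; run it as SYSDBA on each instance, and the interconnect details land in that session's trace file under user_dump_dest):

-- dumps IPC information, including the interconnect address in use
ORADEBUG SETMYPID
ORADEBUG IPC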
Answered by Murali Vallath. Also refer to this link; it will be very useful to you: http://kr.forums.oracle.com/forums/thread.jspa?threadID=625931
Hope this helps.

Similar Messages

  • Oracle 10g RAC - Private Interconnect on Private non-routable VLAN

    In our data center there is an existing Oracle 10g RAC configured with private VLAN for Interconnect administered by a different group of DBAs.
    We are designing a new, separate Oracle 10g RAC environment to support our application.
    When we discussed with our data center folks setting up a private VLAN for our RAC interconnect, they suggested using the same existing private VLAN used by the other Oracle RAC configurations. In that case the interconnect IPs will be on the same subnet as the other Oracle RAC configurations.
    For example, if
    RAC1 with 2 nodes is using 192.168.1.1 and 192.168.1.2 in VLAN_1 for the interconnect, they want us to use the same VLAN_1 with interconnect IPs 192.168.1.3 and 192.168.1.4 for our 2-node RAC.
    Is sharing the same subnet on the same private VLAN for the interconnects of different RAC configurations supported?
    Will that cause any performance hit? It means the interconnect IPs of one RAC configuration are pingable from the other RAC configurations.
    Has anyone come across such a design?
    I could not find any info on this on Metalink.
    Thanks

    Yes,
    this is practically very much feasible, as you would have only 4 machines in the IP subnet, which is far fewer than on the public subnet (which you should refrain from using for the interconnect).

  • Oracle RAC Interconnect, PowerVM VLANs, and the Limit of 20

    Hello,
    Our company has a requirement to build a multitude of Oracle RAC clusters on AIX using Power VM on 770s and 795 hardware.
    We presently have 802.1q trunking configured on our Virtual I/O Servers, and have currently consumed 12 of the 20 allowed VLANs for a virtual ethernet adapter. We have read the Oracle RAC FAQ on Oracle Metalink and it seems to discourage sharing these interconnect VLANs between different clusters. This puts us in a scalability bind: IBM limits VLANs to 20, and Oracle says there is a one-to-one relationship between VLANs, subnets, and RAC clusters. We must assume we have a fixed number of network interfaces available and that we absolutely have to leverage virtualized network hardware in order to build these environments. "Add more network adapters to VIO" isn't an acceptable solution for us.
    Does anyone know if Oracle can afford any flexibility which would allow us to host multiple Oracle RAC interconnects on the same 802.1q trunk VLAN? We will independently guarantee the bandwidth, latency, and redundancy requirements are met for proper Oracle RAC performance, however we don't want a design "flaw" to cause us supportability issues in the future.
    We'd like it very much if we could have a bunch of two-node clusters all sharing the same private interconnect. For example:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.16.4 / 255.255.255.0 / VLAN 16
    Cluster 2, node 2: 192.168.16.5 / 255.255.255.0 / VLAN 16
    Cluster 3, node 1: 192.168.16.6 / 255.255.255.0 / VLAN 16
    Cluster 3, node 2: 192.168.16.7 / 255.255.255.0 / VLAN 16
    Cluster 4, node 1: 192.168.16.8 / 255.255.255.0 / VLAN 16
    Cluster 4, node 2: 192.168.16.9 / 255.255.255.0 / VLAN 16
    etc.
    Whereas the concern is that Oracle Corp will only support us if we do this:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.17.2 / 255.255.255.0 / VLAN 17
    Cluster 2, node 2: 192.168.17.3 / 255.255.255.0 / VLAN 17
    Cluster 3, node 1: 192.168.18.2 / 255.255.255.0 / VLAN 18
    Cluster 3, node 2: 192.168.18.3 / 255.255.255.0 / VLAN 18
    Cluster 4, node 1: 192.168.19.2 / 255.255.255.0 / VLAN 19
    Cluster 4, node 2: 192.168.19.3 / 255.255.255.0 / VLAN 19
    Which eats one VLAN per RAC cluster.

    Thank you for your answer!!
    I think I roughly understand the argument behind a 2-node RAC and a 3-node or greater RAC. We, unfortunately, were provided with two physical pieces of hardware to virtualize to support production (and two more to support non-production) and as a result we really have no place to host a third RAC node without placing it within the same "failure domain" (I hate that term) as one of the other nodes.
    My role is primarily as a system engineer, and, generally speaking, our main goals are eliminating single points of failure. We may be misusing 2-node RACs to eliminate single points of failure since it seems to violate the real intentions behind RAC, which is used more appropriately to scale wide to many nodes. Unfortunately, we've scaled out to only two nodes, and opted to scale these two nodes up, making them huge with many CPUs and lots of memory.
    Other options, notably the active-passive failover cluster we have in HACMP or PowerHA on the AIX / IBM Power platform, are unattractive, as the standby node does no work yet must consume CPU and memory resources so that it is prepared for a failover of the primary node. We use HACMP / PowerHA with Oracle and it works nicely; however, Oracle RAC, even in a two-node configuration, drives load on both nodes, unlike an active-passive clustering technology.
    All that aside, I am posing the question to both IBM and our Oracle DBAs (who will ask Oracle Support). Typically the answers we get vary widely depending on the experience and skill level of the support personnel we reach on both the Oracle and IBM sides... so on a suggestion from a colleague (Hi Kevin!) I posted here. I'm concerned that the answer from Oracle Support will unthinkingly be "you can't do that, my script says to tell you the absolute most rigid interpretation of the support document" while all the time the same document talks of the use of NFS and/or iSCSI storage *eye roll*
    We have a massive deployment of Oracle EBS and honestly the interconnect doesn't even touch 100mbit speeds even though the configuration has been checked multiple times by Oracle and IBM and with the knowledge that Oracle EBS is supposed to heavily leverage RAC. I haven't met a single person who doesn't look at our environment and suggest jumbo frames. It's a joke at this point... comments like "OMG YOU DON'T HAVE JUMBO FRAMES" and/or "OMG YOU'RE NOT USING INFINIBAND WHATTA NOOB" are commonplace when new DBAs are hired. I maintain that the utilization numbers don't support this.
    I can tell you that we have 8Gb fiber channel storage and 10Gb network connectivity. I would probably assume that there were a bottleneck in the storage infrastructure first. But alas, I digress.
    Mainly I'm looking for a real-world answer to this question. Aside from violating every last recommendation and making Oracle support folk gently weep at the suggestion, are there any issues with sharing interconnects between RAC environments that will prevent its functionality and/or reduce its stability?
    We have rapid spanning tree configured, as far as I know, and our network folks have tuned the timers razor thin. We have Nexus 5k and Nexus 7k network infrastructure. The typical issues you'd find with standard spanning tree really don't affect us because our network people are just that damn good.

  • Oracle RAC interconnect performance by GRIDControl

    Hi All,
    We have Oracle 10g RAC and we manage the database through Grid Control.
    We are not able to see the <Performance> and <Interconnects> tabs on the RAC cluster page; do you guys know why?
    I logged in as sysman and went to Targets at the right corner; on the left side I could see the database list. I selected the RAC database name and clicked it; at the top left-most corner I see a link like the one below. If I click on this hyperlink (DBname) it takes me to the cluster page, but the two tabbed panes <Performance> and <Interconnects> are not enabled. Can anyone please help me with how to check this information in Grid Control?
    Cluster: DBNAME >
    Thanks in advance

    First click on the target of type Cluster Database; that will take you to the overall Cluster Database: <your cluster database name> page. There, at the top left of the page, you will see a hyperlink named Cluster: <cluster name>. Click on this cluster name hyperlink and it will take you to the Cluster page, where the interconnect tabs are enabled.
    -Harish Kumar Kalra

  • Oracle 10g RAC on Windows

    We installed Oracle 10g RAC on Windows... Everything was working fine. Now, when I install CRS, it completes at 100% and then the configuration gives this error:
    INFO: exitonly tools to be excuted passed: 0
    INFO: Starting to execute configuration assistants
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/install/crssetup.config.bat
    PROT-1: Failed to initialize ocrconfig
    Step 1: checking status of CRS cluster
    Step 2: creating directories (C:\oracle\product\10.2.0\crs)
    Step 3: configuring OCR repository
    ocr upgrade failed with (-1)
    Execution of the plugin was aborted
    INFO: Configuration assistant "Oracle Clusterware Configuration Assistant" was canceled.
    *** Starting OUICA ***
    Oracle Home set to C:\oracle\product\10.2.0\crs
    Configuration directory is set to C:\oracle\product\10.2.0\crs\cfgtoollogs. All xml files under the directory will be processed
    INFO: The "C:\oracle\product\10.2.0\crs\cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
    I am using VMware Workstation 5.
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnWindows2003UsingVMware.php
    I can see my disks from both machines. Open the "Computer Management" dialog (Start > All Programs > Administrative Tools > Computer Management).
    I use Windows XP SP2 as the host operating system.
    Any suggestion?
    Message was edited by:
    MEXMAN

    You know it's not certified?
    My first thought when I read the subject of this thread, and before reading what you posted, was that you've either done no research on RAC or didn't understand what you've read.
    You can create a RAC cluster using any operating system certified by Oracle.
    But the operating system is the least important part of creating a cluster. The questions you need to be able to address are:
    1. What is your solution for creating shared storage?
    2. What is your solution for creating the cache fusion interconnect and VIPs?
    If you don't have an answer to these questions you cannot build a RAC cluster.

  • Oracle 11g RAC on RHEL 4.0, error on 1 Node while running ./root.sh for CRS

    Hi,
    I am trying to install Oracle 11g RAC on RHEL 4.0 on VMware, and at the end of the CRS installation, when the installer asks to run the two scripts (orainstRoot.sh and root.sh), root.sh on one node (the node where runInstaller is started) throws the following error:
    [root@LRAC1 crs]# ./root.sh
    WARNING: directory '/xhdd/u01/crs/oracle/product/11.1.0' is not owned by root
    WARNING: directory '/xhdd/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/xhdd/u01/crs/oracle' is not owned by root
    WARNING: directory '/xhdd/u01/crs' is not owned by root
    WARNING: directory '/xhdd/u01' is not owned by root
    WARNING: directory '/xhdd' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    /xhdd/u01/crs/oracle/product/11.1.0/crs/bin/ocrconfig: line 78: /xhdd/u01/crs/oracle/product/11.1.0/crs/bin/ocrconfig.bin: cannot execute binary file
    /xhdd/u01/crs/oracle/product/11.1.0/crs/bin/ocrconfig: line 78: /xhdd/u01/crs/oracle/product/11.1.0/crs/bin/ocrconfig.bin: Success
    Failed to upgrade Oracle Cluster Registry configuration
    [root@LRAC1 crs]#
    While on the second node the root.sh script does not give any errors; here is the output:
    [root@LRAC2 crs]# ./root.sh
    WARNING: directory '/xhdd/u01/crs/oracle/product/11.1.0' is not owned by root
    WARNING: directory '/xhdd/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/xhdd/u01/crs/oracle' is not owned by root
    WARNING: directory '/xhdd/u01/crs' is not owned by root
    WARNING: directory '/xhdd/u01' is not owned by root
    WARNING: directory '/xhdd' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    Oracle Cluster Registry configuration upgraded successfully
    The directory '/xhdd/u01/crs/oracle/product/11.1.0' is not owned by root. Changing owner to root
    The directory '/xhdd/u01/crs/oracle/product' is not owned by root. Changing owner to root
    The directory '/xhdd/u01/crs/oracle' is not owned by root. Changing owner to root
    The directory '/xhdd/u01/crs' is not owned by root. Changing owner to root
    The directory '/xhdd/u01' is not owned by root. Changing owner to root
    The directory '/xhdd' is not owned by root. Changing owner to root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: lrac1 lrac1-priv lrac1
    node 2: lrac2 lrac2-priv lrac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/sdd1
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Cluster Synchronization Services is active on these nodes.
    lrac2
    Cluster Synchronization Services is inactive on these nodes.
    lrac1
    Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
    Any ideas on how to solve this issue?
    Thanks in advance,
    Sameer

    I tried reinstalling and it worked on VMware Server. Some Oracle products are certified on Oracle VM, but not RAC at the moment.
    You can download Oracle VM from otn.oracle.com
    Check ML Note:464754.1 for details about Certified Software on Oracle VM.
    - Virag Sharma
    http://virag.sharma.googlepages.com/

  • 802.3ad (mode=4) bonding for RAC interconnects

    Is anyone using 802.3ad (mode=4) bonding for their RAC interconnects? We have five Dell R710 RAC nodes and we're trying to use the four onboard Broadcom NetXtreme II NICs in an 802.3ad bond with src-dst-mac load balancing. Since we have the hardware to pull this off, we thought we'd give it a try and achieve some extra bandwidth for the interconnect rather than deploying the traditional active/standby interconnect using just two of the NICs. Has anyone tried this config, and what was the outcome? Thanks.

    I don't, but maybe these documents might help:
    http://www.iop.org/EJ/article/1742-6596/119/4/042015/jpconf8_119_042015.pdf?request-id=bcddc94d-7727-4a8a-8201-4d1b837a1eac
    http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1002938&p_id=1002938&p_cat=documents&p_user=nobody&p_company=994323795175833
    http://www.oracle.com/technology/global/cn/events/download/ccb/10g_rac_bp_en.pdf
    Edited by: Hub on Nov 18, 2009 10:10 AM

  • RAC Interconnect performance

    Hi,
    We are facing RAC Interconnect performance problems.
    Oracle Version: Oracle 9i RAC (9.2.0.7)
    Operating system: SunOS 5.8
    SQL> SELECT b1.inst_id, b2.value "RECEIVED",
    b1.value "RECEIVE TIME",
    ((b1.value / b2.value) * 10) "AVG RECEIVE TIME (ms)"
    FROM gv$sysstat b1, gv$sysstat b2
    WHERE b1.name = 'global cache cr block receive time'
    AND b2.name = 'global cache cr blocks received'
    AND b1.inst_id = b2.inst_id;
    INST_ID   RECEIVED   RECEIVE TIME   AVG RECEIVE TIME (ms)
          1     323849         172359              5.32220263
          2     675806          94537              1.39887778
    After a database restart the average time increases for instance 1, while instance 2 remains similar.
    Application performance degrades, and restarting the database solves the issue. This is a critical application and cannot have frequent downtime for restarts.
    What specific points should I check to improve interconnect performance?
    Thanks
    Dilip Patel.

    Hi,
    Configurations:
    Node: 1
    Hardware Model: Sun-Fire-V890
    OS: SunOS 5.8
    Release: Generic_117350-53
    CPU: 16 sparcv9 cpu(s) running at 1200 MHz
    Memory: 40.0GB
    Node: 2
    Hardware Model: Sun-Fire-V890
    OS: SunOS 5.8
    Release: Generic_117350-53
    CPU: 16 sparcv9 cpu(s) running at 1200 MHz
    Memory: 40.0GB
    CPU utilization on Node 1 never exceeds 40%.
    CPU utilization on Node 2 is between 20% and 30%.
    Application load is higher on Node 1 than on Node 2.
    I can observe the wait event "global cache cr request" in the top 5 wait events in most of the statspack reports. Application performance degrades a few days after a database restart. No major changes have been done to the application recently.
    Statspack report for Node 1:
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    XXXX          2753907139 xxxx1               1 9.2.0.7.0   YES    xxxxx
                  Snap Id     Snap Time      Sessions Curs/Sess Comment
    Begin Snap:     61688 17-Feb-09 09:10:06      253     299.4
      End Snap:     61698 17-Feb-09 10:10:06      285     271.6
       Elapsed:               60.00 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
                   Buffer Cache:     2,048M      Std Block Size:          8K
               Shared Pool Size:       384M          Log Buffer:      2,048K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:            102,034.92              4,824.60
                  Logical reads:             60,920.35              2,880.55
                  Block changes:                986.07                 46.63
                 Physical reads:              1,981.12                 93.67
                Physical writes:                 28.30                  1.34
                     User calls:              2,651.63                125.38
                         Parses:                500.89                 23.68
                    Hard parses:                 21.44                  1.01
                          Sorts:                 66.91                  3.16
                         Logons:                  3.69                  0.17
                       Executes:                553.34                 26.16
                   Transactions:                 21.15
      % Blocks changed per Read:    1.62    Recursive Call %:     22.21
    Rollback per transaction %:    2.90       Rows per Sort:      7.44
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.99       Redo NoWait %:    100.00
                Buffer  Hit   %:   96.75    In-memory Sort %:    100.00
                Library Hit   %:   98.30        Soft Parse %:     95.72
             Execute to Parse %:    9.48         Latch Hit %:     99.37
    Parse CPU to Parse Elapsd %:   90.03     % Non-Parse CPU:     92.97
    Shared Pool Statistics        Begin   End
                 Memory Usage %:   94.23   94.93
        % SQL with executions>1:   74.96   74.66
      % Memory for SQL w/exec>1:   82.93   72.26
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    db file sequential read                         1,080,532      13,191    40.94
    CPU time                                                       10,183    31.60
    db file scattered read                            456,075       3,977    12.34
    wait for unread message on broadcast channel        4,195       2,770     8.60
    global cache cr request                         1,633,056         873     2.71
    Cluster Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Global Cache Service - Workload Characteristics
    Ave global cache get time (ms):                            0.8
    Ave global cache convert time (ms):                        1.1
    Ave build time for CR block (ms):                          0.1
    Ave flush time for CR block (ms):                          0.2
    Ave send time for CR block (ms):                           0.3
    Ave time to process CR block request (ms):                 0.6
    Ave receive time for CR block (ms):                        4.4
    Ave pin time for current block (ms):                       0.2
    Ave flush time for current block (ms):                     0.0
    Ave send time for current block (ms):                      0.3
    Ave time to process current block request (ms):            0.5
    Ave receive time for current block (ms):                   2.6
    Global cache hit ratio:                                    3.9
    Ratio of current block defers:                             0.0
    % of messages sent for buffer gets:                        3.7
    % of remote buffer gets:                                   0.3
    Ratio of I/O for coherence:                                1.1
    Ratio of local vs remote work:                            10.9
    Ratio of fusion vs physical writes:                        0.0
    Global Enqueue Service Statistics
    Ave global lock get time (ms):                             0.1
    Ave global lock convert time (ms):                         0.0
    Ratio of global lock gets vs global lock releases:         1.0
    GCS and GES Messaging statistics
    Ave message sent queue time (ms):                          0.4
    Ave message sent queue time on ksxp (ms):                  1.8
    Ave message received queue time (ms):                      0.2
    Ave GCS message process time (ms):                         0.1
    Ave GES message process time (ms):                         0.0
    % of direct sent messages:                                 8.0
    % of indirect sent messages:                              49.4
    % of flow controlled messages:                            42.6
    GES Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Statistic                                    Total   per Second    per Trans
    dynamically allocated gcs resourc                0          0.0          0.0
    dynamically allocated gcs shadows                0          0.0          0.0
    flow control messages received                   0          0.0          0.0
    flow control messages sent                       0          0.0          0.0
    gcs ast xid                                      0          0.0          0.0
    gcs blocked converts                         2,830          0.8          0.0
    gcs blocked cr converts                      7,677          2.1          0.1
    gcs compatible basts                             5          0.0          0.0
    gcs compatible cr basts (global)               142          0.0          0.0
    gcs compatible cr basts (local)            142,678         39.6          1.9
    gcs cr basts to PIs                              0          0.0          0.0
    gcs cr serve without current lock                0          0.0          0.0
    gcs error msgs                                   0          0.0          0.0
    gcs flush pi msgs                              798          0.2          0.0
    gcs forward cr to pinged instance                0          0.0          0.0
    gcs immediate (compatible) conver            9,296          2.6          0.1
    gcs immediate (null) converts               52,460         14.6          0.7
    gcs immediate cr (compatible) con          752,507        209.0          9.9
    gcs immediate cr (null) converts         4,047,959      1,124.4         53.2
    gcs msgs process time(ms)                  153,618         42.7          2.0
    gcs msgs received                        2,287,640        635.5         30.0
    gcs out-of-order msgs                            0          0.0          0.0
    gcs pings refused                           70,099         19.5          0.9
    gcs queued converts                              0          0.0          0.0
    gcs recovery claim msgs                          0          0.0          0.0
    gcs refuse xid                                   1          0.0          0.0
    gcs retry convert request                        0          0.0          0.0
    gcs side channel msgs actual                40,400         11.2          0.5
    gcs side channel msgs logical            4,039,700      1,122.1         53.1
    gcs write notification msgs                     46          0.0          0.0
    gcs write request msgs                         972          0.3          0.0
    gcs writes refused                               4          0.0          0.0
    ges msgs process time(ms)                    2,713          0.8          0.0
    ges msgs received                           73,687         20.5          1.0
    global posts dropped                             0          0.0          0.0
    global posts queue time                          0          0.0          0.0
    global posts queued                              0          0.0          0.0
    global posts requested                           0          0.0          0.0
    global posts sent                                0          0.0          0.0
    implicit batch messages received           288,801         80.2          3.8
    implicit batch messages sent               622,610        172.9          8.2
    lmd msg send time(ms)                        2,148          0.6          0.0
    lms(s) msg send time(ms)                         1          0.0          0.0
    messages flow controlled                 3,473,393        964.8         45.6
    messages received actual                   765,292        212.6         10.1
    messages received logical                2,360,972        655.8         31.0
    messages sent directly                     654,760        181.9          8.6
    messages sent indirectly                 4,027,924      1,118.9         52.9
    msgs causing lmd to send msgs               33,481          9.3          0.4
    msgs causing lms(s) to send msgs            13,220          3.7          0.2
    msgs received queue time (ms)              379,304        105.4          5.0
    msgs received queued                     2,359,723        655.5         31.0
    msgs sent queue time (ms)                1,514,305        420.6         19.9
    msgs sent queue time on ksxp (ms)        4,349,174      1,208.1         57.1
    msgs sent queued                         4,032,426      1,120.1         53.0
    msgs sent queued on ksxp                 2,415,381        670.9         31.7
    GES Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Statistic                                    Total   per Second    per Trans
    process batch messages received            278,174         77.3          3.7
    process batch messages sent                913,611        253.8         12.0
    Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    db file sequential read         1,080,532          0     13,191     12     14.2
    db file scattered read            456,075          0      3,977      9      6.0
    wait for unread message on b        4,195      1,838      2,770    660      0.1
    global cache cr request         1,633,056      8,417        873      1     21.4
    db file parallel write              8,243          0        260     32      0.1
    buffer busy waits                  16,811          0        168     10      0.2
    log file parallel write           187,783          0        158      1      2.5
    log file sync                      75,143          0        147      2      1.0
    buffer busy global CR               9,713          0        102     10      0.1
    global cache open x                31,157      1,230         50      2      0.4
    enqueue                            58,261         14         45      1      0.8
    latch free                         33,398      7,610         44      1      0.4
    direct path read (lob)              9,925          0         36      4      0.1
    library cache pin                   8,777          1         34      4      0.1
    SQL*Net break/reset to clien       82,982          0         32      0      1.1
    log file sequential read              409          0         31     75      0.0
    log switch/archive                      3          3         29   9770      0.0
    SQL*Net more data to client       201,538          0         16      0      2.6
    global cache open s                 8,585        342         14      2      0.1
    global cache s to x                11,098        148         11      1      0.1
    control file sequential read        6,845          0          8      1      0.1
    db file parallel read               1,569          0          7      4      0.0
    log file switch completion             35          0          7    194      0.0
    row cache lock                     15,780          0          6      0      0.2
    process startup                        69          0          6     82      0.0
    global cache null to x              1,759         48          6      3      0.0
    direct path write (lob)               685          0          5      7      0.0
    DFS lock handle                     8,713          0          3      0      0.1
    control file parallel write         1,350          0          2      2      0.0
    wait for master scn                 1,194          0          1      1      0.0
    CGS wait for IPC msg               30,830     30,715          1      0      0.4
    global cache busy                      14          1          1     75      0.0
    ksxr poll remote instances         30,997     12,692          1      0      0.4
    direct path read                      752          0          0      1      0.0
    switch logfile command                  3          0          0    148      0.0
    log file single write                  24          0          0     13      0.0
    library cache lock                    668          0          0      0      0.0
    KJC: Wait for msg sends to c        1,161          0          0      0      0.0
    buffer busy global cache               26          0          0      6      0.0
    IPC send completion sync              261        260          0      0      0.0
    PX Deq: reap credit                 3,477      3,440          0      0      0.0
    LGWR wait for redo copy             1,751          0          0      0      0.0
    async disk IO                       1,059          0          0      0      0.0
    direct path write                     298          0          0      0      0.0
    slave TJ process wait                   1          1          0     18      0.0
    PX Deq: Execute Reply                   3          1          0      3      0.0
    PX Deq: Join ACK                        8          4          0      1      0.0
    global cache null to s                  8          0          0      1      0.0
    ges inquiry response                   16          0          0      0      0.0
    Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    PX Deq: Parse Reply                     6          2          0      1      0.0
    PX Deq Credit: send blkd                2          1          0      0      0.0
    PX Deq: Signal ACK                      3          1          0      0      0.0
    library cache load lock                 1          0          0      0      0.0
    buffer deadlock                         6          6          0      0      0.0
    lock escalate retry                     4          4          0      0      0.0
    SQL*Net message from client     9,470,867          0    643,285     68    124.4
    queue messages                     42,829     41,144     42,888   1001      0.6
    wakeup time manager                   601        600     16,751  27872      0.0
    gcs remote message                795,414    120,163     13,606     17     10.4
    jobq slave wait                     2,546      2,462      7,375   2897      0.0
    PX Idle Wait                        2,895      2,841      7,021   2425      0.0
    virtual circuit status                120        120      3,513  29273      0.0
    ges remote message                142,306     69,912      3,504     25      1.9
    SQL*Net more data from clien      206,559          0         19      0      2.7
    SQL*Net message to client       9,470,903          0         14      0    124.4
    PX Deq: Execution Msg                 313        103          2      7      0.0
    Background Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    db file parallel write              8,243          0        260     32      0.1
    log file parallel write           187,797          0        158      1      2.5
    log file sequential read              316          0         22     70      0.0
    enqueue                            56,204          0         15      0      0.7
    control file sequential read        5,694          0          6      1      0.1
    DFS lock handle                     8,682          0          3      0      0.1
    db file sequential read               276          0          2      8      0.0
    control file parallel write         1,334          0          2      2      0.0
    wait for master scn                 1,194          0          1      1      0.0
    CGS wait for IPC msg               30,830     30,714          1      0      0.4
    ksxr poll remote instances         30,972     12,681          1      0      0.4
    latch free                            356         54          1      2      0.0
    direct path read                      752          0          0      1      0.0
    log file single write                  24          0          0     13      0.0
    LGWR wait for redo copy             1,751          0          0      0      0.0
    async disk IO                         812          0          0      0      0.0
    global cache cr request                69          0          0      1      0.0
    row cache lock                         45          0          0      1      0.0
    direct path write                     298          0          0      0      0.0
    library cache pin                      29          0          0      1      0.0
    rdbms ipc reply                        29          0          0      0      0.0
    buffer busy waits                      10          0          0      0      0.0
    library cache lock                      2          0          0      0      0.0
    global cache open x                     2          0          0      0      0.0
    rdbms ipc message                 179,764     36,258     29,215    163      2.4
    gcs remote message                795,409    120,169     13,605     17     10.4
    pmon timer                          1,388      1,388      3,508   2527      0.0
    ges remote message                142,295     69,912      3,504     25      1.9
    smon timer                            414          0      3,463   8366      0.0
              -------------------------------------------------------------
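    To see where the time goes between restarts, it may also help to track the global cache wait events per instance over time; here is a sketch using the 9i event names (take snapshots and diff the values, since gv$system_event is cumulative since instance startup):

    SELECT inst_id, event, total_waits, time_waited
    FROM gv$system_event
    WHERE event LIKE 'global cache%'
    ORDER BY inst_id, time_waited DESC;
    -- time_waited is in centiseconds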

  • RAC Interconnect Transfer rate vs NIC's Bandwidth

    Hi Guru,
    I need some clarification on RAC interconnect terminology: "private interconnect transfer rate" versus "NIC bandwidth".
    We have 11gR2 RAC with multiple databases.
    So we need to find out what the current resource status is.
    We have two physical NICs on each node: 8G is for public and 2G is for private (interconnect).
    Technically, we have 4G of private network bandwidth.
    If I look at the "Private Interconnect Transfer rate" through OEM or IPTraf (a Linux tool), it is showing 20~30 MB/sec.
    There is no issue at all at the moment.
    Please correct me if I am wrong.
    The transfer rate should be fine up to 500M or 1G/sec, because the current NICs' capacity is 4G. Does that make sense?
    I'm sure there are multiple things to consider, but I'm kind of stumped on the whole transfer rate vs. bandwidth question. Is there any way to calculate what a typical transfer would be?
    Or how do I tell whether our interconnects are good enough, based on the transfer rate?
    Another question is:
    In our case, how do I set up the Warning and Critical thresholds for "Private Interconnect Transfer rate" in OEM?
    Any comments will be appreciated.
    Please advise.

    Interconnect performance sways more to latency than bandwidth IMO. In simplistic terms, memory is shared across the Interconnect. What is important for accessing memory? The size of the pipe? Or the speed of the pipe?
    A very fast small pipe will typically perform significantly better than a large and slower pipe.
    Even the size of the pipe is not that straight forward. Standard IP MTU size is 1500. You can run jumbo and super-jumbo frame MTU sizes on the Interconnect - where for example a MTU size of 65K is significantly larger than a 1500 byte MTU. Which means significantly more data can be transferred over the Interconnect at a much reduced overhead.
    Personally, I would not consider Ethernet (GigE included) for the Interconnect. Infiniband is faster, more scalable, and offers an actual growth path to 128Gb/s and higher.
    Oracle also uses Infiniband (QDR/40Gb) for their Exadata Database Machine product's Interconnect. Infiniband also enables one to run the Oracle Interconnect over RDS instead of UDP. I've seen Oracle reports to the OFED committee saying that using RDS, in comparison with UDP, reduced CPU utilisation by 50% and decreased latency by 50%.
    I also do not see the logic of having a faster public network and a slower Interconnect.
    IMO there are 2 very fundamental components in RAC that determine the speed and performance achievable with that RAC: the speed, performance, and scalability of the I/O fabric layer and of the Interconnect layer.
    And Exadata btw uses Infiniband for both these critical layers. Not fibre. Not GigE.
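    If you want to relate the observed transfer rate to actual interconnect block traffic, here is a rough sketch (it assumes an 8K block size and the 10g/11g statistic names, and it ignores messaging and non-cache traffic; divide by uptime or a snapshot interval to get a per-second rate):

    SELECT inst_id,
           SUM(value)        AS blocks_received,
           SUM(value) * 8192 AS approx_bytes_received
    FROM gv$sysstat
    WHERE name IN ('gc cr blocks received', 'gc current blocks received')
    GROUP BY inst_id;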

  • Oracle Cluster private Interconnect

    What are the different speeds and technologies that we can configure the oracle private interconnect for RAC 11g?

    ghd wrote:
    > What are the different speeds and technologies that we can configure the oracle private interconnect for RAC 11g?
    The recommended technology (looking at what Oracle's Database Machine uses) is QDR (Quad Data Rate/40Gb/s) Infiniband, using RDS (Reliable Datagram Sockets). This provides (according to Oracle testing) a 50% faster cache-to-cache block throughput with 50% less CPU time, in comparison to using UDP as the RAC Interconnect wire protocol.
    Oracle presented these results to the Infiniband/OFED members in a presentation called Oracle’s Next-Generation Interconnect Protocol (PDF).
    The Infiniband roadmap shows that the NDR (Next Data Rate) will scale to 320Gb/s.
    There is absolutely nothing I have seen from the Ethernet vendors that show GigE matching Infiniband.
    From Top 500, listing the biggest and fastest 500 clusters on this planet, Infiniband has a 41.8% market share, in comparison with the 41.4% share of GigE.
    Compare this to 2005 (when we first got Infiniband for RAC). Back then Infiniband had a 3.2% market share; GigE had a 42.8% share. So there has been incredible growth in using Infiniband as the interconnect, unlike GigE, which has been stagnant and is now in 2nd place as a Top 500 interconnect family architecture.
    What is needed for using Infiniband for Oracle RAC? A HCA (Host Channel Adapter) card for each RAC server (high speed PCI cards, dual port). An Infiniband switch (2 ports per RAC server needed). And cables of course. All these are sold by most server h/w vendors. Costs are quite comparable to 10Gb/s GigE (and even cheaper) in my experience.

  • Dedicated switches needed for RAC interconnect or not?

    Currently working on an Extended RAC cluster design implementation, I asked the network engineer for dedicated switches for the RAC interconnects.
    Here is a little background:
    There are 28 RAC clusters over 2x13 physical RAC nodes, with a separate ORACLE_HOME for each instance and at least 2 instances on each RAC node. So 13 RAC nodes will be in each site (data center). This is basically an Extended RAC solution for SAP databases on RHEL 6 using ASM and Clusterware for Oracle 11gR2. The RAC nodes are blades in a c7000 enclosure (in each site). The distance between the sites is 55+ km.
    Oracle recommends Infiniband (20Gb/s) as the network backbone, but here DWDM will be used with 2x10Gb/s links for the RAC interconnect between the sites. There will be a separate redundant 2x1Gb/s link for the production network and redundant 2x2Gb/s FC (Fibre Channel) links for the SAN/storage network (ASM traffic will go here). There will be switches for the public production network and for the SAN network.
    Oracle recommends dedicated switches (which will give acceptable latency/bandwidth) with switch redundancy to route the dedicated/non-routable VLANs for the RAC interconnect (private/heartbeat/global cache transfer) network. Since the DWDM inter-site links are 2x10Gb/s, do I still need the dedicated switches?
    If yes, then how many?
    Your inputs will be greatly appreciated.. and help me take a decision.
    Many Thanks in advance..
    Abhijit

    Absolutely agree. The chances of overload in an HA (RAC) solution, and ultimately of RAC node eviction, are very high (with very high latency), and for exactly this reason I even suggested inexpensive switches to route the VLANs for the RAC interconnect. The ASM traffic will get routed through the 2x2Gb/s FC links through SAN directors (1 in each site).
    I suggested the network folks use uplinks from the c7000 enclosure and route the RAC VLAN through these inexpensive switches for the interconnect traffic. We have another challenge here: HP has certified using the VirtualConnect/Flex-Fabric architecture for blades in the c7000 to allocate VLANs for the RAC interconnect. But this is only for one site, and does not span production/DR sites separated over a distance.
    Btw, do you have any standard switch model to recommend, and how many would you go for, for a configuration of 13 Extended RAC clusters with each cluster hosting 2+ RAC instances, hosting a total of 28 SAP instances?
    Many Thanks again!
    Abhijit

  • Teamed NICs for RAC interconnect

    Hi there,
    We have an Oracle 10g RAC with 2 nodes. There is only one NIC for the RAC interconnect in each server.
    Now we want to add one redundant NIC to each server for the RAC interconnect as well.
    Could you please point me to some documents about teamed NICs for the RAC interconnect?
    Your help is greatly appreciated!
    Thanks,
    Scott

    Search around for NIC bonding. The exact process will depend on your OS.
    For Linux, see Metalink Note 298891.1 - Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver.
    Regards,
    Greg Rahn
    http://structureddata.org

  • Need to change IP address and host name on a ORACLE 10g RAC setup.

    Hi Forum,
    I have a task to change the IP address, host name, and host-related details for an Oracle 10g RAC 2-node setup. Can anyone tell me the procedure to do this on both nodes?
    Regards
    Prakash

    change IP Address Public + VIP + Interconnect
    http://surachartopun.com/2007/01/i-want-to-change-ip-address-on-oracle.html
    change VIP name
    http://surachartopun.com/2009/01/change-vip-hostname-on-oracle-rac.html
    If you'd like to change HOSTNAME... Metalink (NOTE:220970.1)
    Can I change the public hostname in my Oracle Database 10g Cluster using Oracle Clusterware?
    Hostname changes are not supported in Oracle Clusterware (CRS), unless you want to perform a deletenode followed by a new addnode operation.
    The hostname is used to store, among other things, the flag files, and the Oracle Clusterware stack will not start if the hostname is changed.
    Changing the Private Interconnect
    o Change of the IP Address: change the IP address in the hosts file and/or DNS and make sure that ASM and the database also use the same interconnect
    o Change of the private node name used by Oracle Clusterware: requires a reinstall of Oracle Clusterware
    So, public or private hostnames can only be changed by removing/adding nodes, or by reinstalling Oracle Clusterware. VIP hostnames can be changed.
    But you can read about this idea (reinstalling CRS):
    http://www.pythian.com/news/482/changing-hostnames-in-oracle-rac
    http://surachartopun.com/2008/12/change-hostnames-oracle-rac.html
    Edited by: Surachart Opun (HunterX) on Oct 25, 2009 10:58 PM

  • SCI and RSM for Oracle 9i RAC

    Hi gurus,
    Could you please confirm whether we can configure 2 or more SCI interconnects (for private interconnects) on an E15K cluster?
    This is for a High Availability requirement for Oracle 9i RAC. What I need to know is:
    Assume that I have configured 3 SCI cards on NODE 1 and 3 SCI cards on NODE 2. Sun Cluster 3.1 is configured on both nodes.
    1) If I activate all the cluster interconnects on NODE 1 and NODE 2, will Oracle 9i RAC traffic flow over all 3 cards for EACH node, thereby providing 3 Gb/s of bandwidth?
    2) What if there is a failure in one of the cards? Will the other 2 continue to function, if a failover mechanism is configured?
    3) Do I need to do something special for Oracle to be able to recognise that 1 card has failed and that it needs to start transferring packets using the other 2 surviving cards?
    These questions assume that it is an SCI card using the RSM protocol.
    Will this also be applicable for a Gigabit Ethernet card running UDP? I have heard that SCI and RSM are a very good combination, just behind the Sun Fireplane interconnect, which also uses RSM.
    Is that true?
    Thanks a lot folks.
    Regards
    Spockaris

    > Could you please confirm whether we can configure 2 or more SCI interconnects (for private interconnects) on an E15K cluster?
    This is being tested now. More below...
    > 1) If I activate all the cluster interconnects on NODE 1 and NODE 2, will Oracle 9i RAC traffic flow over all 3 cards for EACH node, thereby providing 3 Gb/s of bandwidth?
    Bandwidth does not seem to be a problem for RAC. Latency is a problem, but adding interconnects won't solve the latency problem unless bandwidth was also the problem. I haven't been able to find any customers who have interconnect bandwidth problems on any Sun Cluster, let alone RAC. If you know of any, please let me know.
    > 2) What if there is a failure in one of the cards? Will the other 2 continue to function, if a failover mechanism is configured?
    Yes, that is how it works today for interconnects.
    > 3) Do I need to do something special for Oracle to be able to recognise that 1 card has failed and that it needs to start transferring packets using the other 2 surviving cards?
    No, under Sun Cluster, this is managed for Oracle.
    > Will this also be applicable for a Gigabit Ethernet card running UDP? I have heard that SCI and RSM are a very good combination, just behind the Sun Fireplane interconnect, which also uses RSM. Is that true?
    It is true that low latency networks help the performance of RAC clusters. However, there does not seem to be a direct correlation between interconnect latency and performance. There are a number of studies completed and in progress on this topic. We can generally say that for the same RAC configuration and the same workload, the lower latency interconnect will perform better. But beyond that, the problem gets complex quickly and YMMV.
    -- richard

  • Oracle 10g RAC installation on HP-UX 11.31 for SAP ERP 6.04

    Dear experts,
    We are trying to install SAP ERP 6.0 EHP4 with Oracle 10g RAC on HP-UX 11.31. Please note that we are using the VERITAS cluster filesystem (CFS) for this purpose and not the HP-UX ServiceGuard ClusterFileSystem.
    *As per the SAP procedure, we have installed a plain SAP system with single-instance Oracle (ref: Configuration of SAP NetWeaver for Oracle Database 10g RAC guide). Now we are first trying to install Oracle Clusterware (CRS) and then we will install the Oracle RAC software. Is this procedure right?*
    Which file (and its path) do we have to use for the CRS installation and its patch? Is it runInstaller under /oracle/stage/102_64/clusterware? With this file we can install CRS 10.2.0.1. Similarly for Oracle RAC, which files are we supposed to use, i.e. for installation and patch upgrade?
    Also, can we use the clusterware package available directly from Oracle in the SAP environment?
    We tried to install CRS with runInstaller. While running the Configuration Assistant after the root.sh script, the following commands are failing:
    /oracle/CRS/102_64/bin/racgons add_config erpprdd1:4948 erpprdd2:4948
    /oracle/CRS/102_64/bin/oifcfg setif -global  lan1/20.20.20.0:cluster_interconnect lan0/192.168.3.0:public
    /oracle/CRS/102_64/bin/cluvfy stage -post crsinst -n erpprdd1,erpprdd2
    The 1st command does not give any output if run manually.
    The 2nd command's output: PRIF-12: failed to initialize cluster support services
    The 3rd command fails every check.
    Please suggest a solution; we are expecting answers to the above-mentioned questions.
    Thanks & Regards,
    Tejas

    >
    Charles Yu wrote:
    > Q1:  Oracle RAC with 9.2.x on HP-UX?
    > A:   For an HA environment, the cluster software is MC/SG on HP-UX 11.31; there are optional components of MC/SG for supporting Oracle RAC and the SAP application. I was confused that I could not find the installation guide regarding 4.6C in an MC/SG HA environment on HP-UX.
    > Charles
    Relevant docs for Service Guard (SG) clusters are available at http://docs.hp.com. Hope you have checked for the support of Oracle 9.2.x on HP-UX 11.31.
    >
    Charles Yu wrote:
    > Q2: Any reason why you don't use a supported database version?
    > A:   Actually, in order to avoid the risk of a database upgrade and to minimize the migration risk, top level has decided to keep the same Oracle version. Indeed, we don't plan a migration of the application. On the other hand, it is complicated to do the assessment for an application migration.
    > Charles
    You can also go for a combined OS migration and DB release upgrade at a stretch, with the same downtime.

    Hello,        I created a site using Dreamweaver CC and i have used the FTP server for this. Now i want to connect it to the database that i created on my server with the table. I am unable to figure out what IP address should i provide to connect to