RAC interconnect network change

I tried to change the interconnect network card, and now Clusterware does not start.
I can see in the log that it is looking for eth1, but my interface is eth2. How can I change it?
2010-09-26 14:34:18.566: [ default][1454735104]prifcg_retrieveocrifvec: current root key = SYSTEM.css.interfaces.nodevcpdb02
2010-09-26 14:34:18.569: [ default][1454735104]prifcg_retrieveocrifvec: PROCR_KEY_NOT_EXISTING = 4, rc = 4
2010-09-26 14:34:18.569: [ default][1454735104]Number of interfaces on node 2 = 0
2010-09-26 14:34:18.569: [ default][1454735104]prifcg_retrieveocrifvec: current root key = SYSTEM.css.interfaces.global
2010-09-26 14:34:18.570: [ default][1454735104]ocrif_vec[0]:name_prifcgif=eth0, subnet_prifcgif=10.97.6.0, usetype_prifcgif=80, node_prifcgif=global
2010-09-26 14:34:18.572: [ default][1454735104]ocrif_vec[1]:name_prifcgif=eth1, subnet_prifcgif=10.20.30.0, usetype_prifcgif=1, node_prifcgif=global
2010-09-26 14:34:18.573: [GIPCXCPT][1454735104] gipcShutdownF: skipping shutdown, count 1, from [ clsgpnp0.c : 1021], ret gipcretSuccess (0)
2010-09-26 14:34:18.575: [ default][1454735104]Oifcfg exited with retval = 0
2010-09-26 15:40:02.699: [ default][4280823552]prifcgini: clsd LOG initialized
2010-09-26 15:40:02.699: [ default][4280823552] prifcgini: LEM initialized
vcpdb01:oravcp # oifcfg iflist
eth0 10.97.6.0
eth2 10.20.30.0
vcpdb01:oravcp #

Hi,
Get the current configuration:
% $ORA_CRS_HOME/bin/oifcfg getif
eth0 10.2.156.0 global public
eth1 192.168.0.0 global cluster_interconnect
Set the new configuration:
% $ORA_CRS_HOME/bin/oifcfg delif -global eth0
% $ORA_CRS_HOME/bin/oifcfg setif -global eth0/10.2.166.0:public
% $ORA_CRS_HOME/bin/oifcfg delif -global eth1
% $ORA_CRS_HOME/bin/oifcfg setif -global eth1/192.168.1.0:cluster_interconnect
syntax: oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>
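For the original question (the OCR still registers eth1, while the OS now has eth2 on subnet 10.20.30.0, per the `oifcfg iflist` output above), the same delif/setif sequence applies. A minimal sketch, assuming the subnet stays 10.20.30.0 and `$ORA_CRS_HOME` points at your Clusterware home; these are configuration commands, so run them as the Clusterware owner and verify against your own environment first:

```shell
# Compare what is stored in the OCR (eth1) with what the OS actually has (eth2)
$ORA_CRS_HOME/bin/oifcfg getif
$ORA_CRS_HOME/bin/oifcfg iflist

# Replace the stored interconnect definition: drop eth1, register eth2
$ORA_CRS_HOME/bin/oifcfg delif -global eth1
$ORA_CRS_HOME/bin/oifcfg setif -global eth2/10.20.30.0:cluster_interconnect

# Verify the new definition
$ORA_CRS_HOME/bin/oifcfg getif
```

Note that oifcfg only updates the definition stored in the OCR; Clusterware (and the instances) generally need a restart before the new interface is actually used.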

Similar Messages

  • Oracle RAC Interconnect, PowerVM VLANs, and the Limit of 20

    Hello,
    Our company has a requirement to build a multitude of Oracle RAC clusters on AIX using PowerVM on 770 and 795 hardware.
    We presently have 802.1q trunking configured on our Virtual I/O Servers and have currently consumed 12 of the 20 allowed VLANs for a virtual ethernet adapter. We have read the Oracle RAC FAQ on Oracle Metalink, and it seems to discourage sharing these interconnect VLANs between different clusters. This puts us in a scalability bind: IBM limits VLANs to 20, and Oracle says there is a one-to-one relationship between VLANs, subnets, and RAC clusters. We must assume we have a fixed number of network interfaces available and that we absolutely have to leverage virtualized network hardware in order to build these environments. "Add more network adapters to VIO" isn't an acceptable solution for us.
    Does anyone know if Oracle can offer any flexibility that would allow us to host multiple Oracle RAC interconnects on the same 802.1q trunk VLAN? We will independently guarantee that the bandwidth, latency, and redundancy requirements are met for proper Oracle RAC performance; however, we don't want a design "flaw" to cause supportability issues in the future.
    We'd like it very much if we could have a bunch of two-node clusters all sharing the same private interconnect. For example:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.16.4 / 255.255.255.0 / VLAN 16
    Cluster 2, node 2: 192.168.16.5 / 255.255.255.0 / VLAN 16
    Cluster 3, node 1: 192.168.16.6 / 255.255.255.0 / VLAN 16
    Cluster 3, node 2: 192.168.16.7 / 255.255.255.0 / VLAN 16
    Cluster 4, node 1: 192.168.16.8 / 255.255.255.0 / VLAN 16
    Cluster 4, node 2: 192.168.16.9 / 255.255.255.0 / VLAN 16
    etc.
    The concern is that Oracle Corp will only support us if we do this:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.17.2 / 255.255.255.0 / VLAN 17
    Cluster 2, node 2: 192.168.17.3 / 255.255.255.0 / VLAN 17
    Cluster 3, node 1: 192.168.18.2 / 255.255.255.0 / VLAN 18
    Cluster 3, node 2: 192.168.18.3 / 255.255.255.0 / VLAN 18
    Cluster 4, node 1: 192.168.19.2 / 255.255.255.0 / VLAN 19
    Cluster 4, node 2: 192.168.19.3 / 255.255.255.0 / VLAN 19
    Which eats one VLAN per RAC cluster.

    Thank you for your answer!!
    I think I roughly understand the argument behind a 2-node RAC and a 3-node or greater RAC. We, unfortunately, were provided with two physical pieces of hardware to virtualize to support production (and two more to support non-production) and as a result we really have no place to host a third RAC node without placing it within the same "failure domain" (I hate that term) as one of the other nodes.
    My role is primarily as a system engineer, and, generally speaking, our main goals are eliminating single points of failure. We may be misusing 2-node RACs to eliminate single points of failure since it seems to violate the real intentions behind RAC, which is used more appropriately to scale wide to many nodes. Unfortunately, we've scaled out to only two nodes, and opted to scale these two nodes up, making them huge with many CPUs and lots of memory.
    Other options, notably the active-passive failover clusters we have in HACMP or PowerHA on the AIX / IBM Power platform, are unattractive because the standby node drives no load yet must consume CPU and memory resources so that it is prepared for a failover of the primary node. We use HACMP / PowerHA with Oracle and it works nicely; however Oracle RAC, even in a two-node configuration, drives load on both nodes, unlike an active-passive clustering technology.
    All that aside, I am posing the question to both IBM and our Oracle DBAs (who will ask Oracle Support). Typically the answers we get vary widely depending on the experience and skill level of the support personnel we reach on both the Oracle and IBM sides... so on a suggestion from a colleague (Hi Kevin!) I posted here. I'm concerned that the answer from Oracle Support will unthinkingly be "you can't do that, my script says to tell you the absolute most rigid interpretation of the support document" while all the time the same document talks of the use of NFS and/or iSCSI storage. *eye roll*
    We have a massive deployment of Oracle EBS and honestly the interconnect doesn't even touch 100mbit speeds even though the configuration has been checked multiple times by Oracle and IBM and with the knowledge that Oracle EBS is supposed to heavily leverage RAC. I haven't met a single person who doesn't look at our environment and suggest jumbo frames. It's a joke at this point... comments like "OMG YOU DON'T HAVE JUMBO FRAMES" and/or "OMG YOU'RE NOT USING INFINIBAND WHATTA NOOB" are commonplace when new DBAs are hired. I maintain that the utilization numbers don't support this.
    I can tell you that we have 8Gb fiber channel storage and 10Gb network connectivity. I would probably assume that there were a bottleneck in the storage infrastructure first. But alas, I digress.
    Mainly I'm looking for a real-world answer to this question. Aside from violating every last recommendation and making Oracle support folk gently weep at the suggestion, are there any issues with sharing interconnects between RAC environments that would prevent its functionality and/or reduce its stability?
    We have rapid spanning tree configured, as far as I know, and our network folks have tuned the timers razor thin. We have Nexus 5k and Nexus 7k network infrastructure. The typical issues you'd find with standard spanning tree really don't affect us because our network people are just that damn good.

  • RAC interconnect switch?

    Hi,
    We are in the process of migrating from a 10g single instance database to a 2 node RAC (Windows Server 2008 OS, EMC storage with 2 SAN switches, …) and we have some questions about the interconnect.
    We are having difficulty selecting the correct speed for the interconnect network, and difficulty selecting the switch or switches, …
    1. Because there are 2 nodes and 4 Ethernet cables for the interconnect, should we use one switch or two? A single switch can be a solution, but it becomes a big single point of failure.
    2. Can we gain performance if we use 2 switches (bonding, …)?
    3. As mentioned, there are 4 Ethernet cables. Is it a good idea to use the existing 1Gb switches that we use for the public network, or to buy 1Gb switches that will be used only for the private interconnect?
    4. Can we use simple 16 or 8 port GigE switches?
    Maybe you can point me to some GigE and SAN switches (for the node - storage connection) which you've seen work without any problems with RAC.
    How can we best design the networks for the interconnects?
    Thanks in advance!

    user9106065 wrote:
    So the best solution for interconnection would be InfiniBand or 10GigE.
    If you look at what Oracle itself chose for their RAC hardware product range, then yes. Infiniband is a better choice.
    What do you think about InfiniBand and Windows Server OS?
    Last used Windows as a server o/s back in the mid 90's. :-)
    No idea how robust the OFED driver stack is on Windows. It ships with Oracle Linux, as Oracle uses it for their RAC products.
    What is the difference in price for InfiniBand and GigE switches?
    About the same, I would think. A 40Gb 12 to 24 port switch a few years ago was actually cheaper than a 10GigE switch of the same port count. Pricing has come down for both, though. We have recently bought a couple of 32 port QDR switches at far below $10,000 a switch.
    Cabling is needed, and HCA (PCI) cards too. The cards are, I think, cheaper than HBAs (fibre channel cards). The only issue we had in this regard was getting pizza box/blade servers with 2 PCI slots to support both an HBA and an HCA. Recent server models often have only one PCI slot, as opposed to the prior models of a few years ago. So when choosing a newer server where you need both an HBA and an HCA, make sure there are in fact 2 PCI slots in the server.
    And again, can you point me to some InfiniBand switches which you've seen work without any problems with a 2 node RAC?
    Oracle used Voltaire IB (Infiniband) switches for the first Oracle Database Machine/Exadata product. The only top500 cluster in Africa is basically (almost) next door to us here in Cape Town. They are also using Voltaire switches.
    If I'm not mistaken, the same Voltaire switches are OEM'ed and sold by Oracle/Sun and HP and others. I have an HP quote for about the same below-$10,000-per-switch price. Of course, you can get away with a much smaller switch for a 2 node RAC, and a 2nd switch is only a consideration if you can justify the cost of redundancy in the interconnect layer.
    Voltaire pretty much seems to lead the market in this respect. Cisco used to sell IB switches too, but some of these were horribly buggy (especially the ones with FC gateways). Cisco acquired TopSpin back then; we still have a couple of old 10Gb TopSpin switches (bought from Cisco) and these have been pretty rock solid through the years. But QDR (40Gb) is what one should be looking at, not the older SDR or DDR technologies.
    You should be able to shop around your existing vendors (HP, Oracle/Sun, etc.) for IB switches, with the exception of Cisco, which no longer does IB switches (afaik).

  • RAC Interconnect performance

    Hi,
    We are facing RAC Interconnect performance problems.
    Oracle Version: Oracle 9i RAC (9.2.0.7)
    Operating system: SunOS 5.8
    SQL> SELECT b1.inst_id, b2.value "RECEIVED",
    b1.value "RECEIVE TIME",
    ((b1.value / b2.value) * 10) "AVG RECEIVE TIME (ms)"
    FROM gv$sysstat b1, gv$sysstat b2
    WHERE b1.name = 'global cache cr block receive time'
    AND b2.name = 'global cache cr blocks received'
    AND b1.inst_id = b2.inst_id;
    INST_ID   RECEIVED   RECEIVE TIME   AVG RECEIVE TIME (ms)
          1     323849         172359              5.32220263
          2     675806          94537              1.39887778
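    The "AVG RECEIVE TIME (ms)" column in the query above is simply (receive time / blocks received) * 10, because the 9i time statistic is recorded in centiseconds. A minimal sketch of the arithmetic, using the instance 1 figures from the output above:

```shell
# Average CR block receive time in ms = (receive time in cs / blocks received) * 10
# Figures for instance 1 taken from the query output above.
awk 'BEGIN {
  received     = 323849   # global cache cr blocks received
  receive_time = 172359   # global cache cr block receive time (centiseconds)
  printf "%.2f ms\n", (receive_time / received) * 10
}'
```

This yields about 5.32 ms for instance 1, matching the query output, versus about 1.40 ms for instance 2.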
    After a database restart, the average receive time gradually increases for instance 1, while instance 2 remains similar.
    Application performance degrades; restarting the database resolves the issue. This is a critical application and cannot have frequent downtime for restarts.
    What specific points should I check in order to improve interconnect performance?
    Thanks
    Dilip Patel.

    Hi,
    Configurations:
    Node: 1
    Hardware Model: Sun-Fire-V890
    OS: SunOS 5.8
    Release: Generic_117350-53
    CPU: 16 sparcv9 cpu(s) running at 1200 MHz
    Memory: 40.0GB
    Node: 2
    Hardware Model: Sun-Fire-V890
    OS: SunOS 5.8
    Release: Generic_117350-53
    CPU: 16 sparcv9 cpu(s) running at 1200 MHz
    Memory: 40.0GB
    CPU utilization on Node 1 never exceeds 40%.
    CPU utilization on Node 2 is between 20% and 30%.
    Application load is higher on Node 1 than on Node 2.
    I can observe the wait event "global cache cr request" in the top 5 wait events of most statspack reports. Application performance degrades a few days after a database restart. No major changes have been made to the application recently.
    Statspack report for Node 1:
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    XXXX          2753907139 xxxx1               1 9.2.0.7.0   YES    xxxxx
                  Snap Id     Snap Time      Sessions Curs/Sess Comment
    Begin Snap:     61688 17-Feb-09 09:10:06      253     299.4
      End Snap:     61698 17-Feb-09 10:10:06      285     271.6
       Elapsed:               60.00 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
                   Buffer Cache:     2,048M      Std Block Size:          8K
               Shared Pool Size:       384M          Log Buffer:      2,048K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:            102,034.92              4,824.60
                  Logical reads:             60,920.35              2,880.55
                  Block changes:                986.07                 46.63
                 Physical reads:              1,981.12                 93.67
                Physical writes:                 28.30                  1.34
                     User calls:              2,651.63                125.38
                         Parses:                500.89                 23.68
                    Hard parses:                 21.44                  1.01
                          Sorts:                 66.91                  3.16
                         Logons:                  3.69                  0.17
                       Executes:                553.34                 26.16
                   Transactions:                 21.15
      % Blocks changed per Read:    1.62    Recursive Call %:     22.21
    Rollback per transaction %:    2.90       Rows per Sort:      7.44
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.99       Redo NoWait %:    100.00
                Buffer  Hit   %:   96.75    In-memory Sort %:    100.00
                Library Hit   %:   98.30        Soft Parse %:     95.72
             Execute to Parse %:    9.48         Latch Hit %:     99.37
    Parse CPU to Parse Elapsd %:   90.03     % Non-Parse CPU:     92.97
    Shared Pool Statistics        Begin   End
                 Memory Usage %:   94.23   94.93
        % SQL with executions>1:   74.96   74.66
      % Memory for SQL w/exec>1:   82.93   72.26
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    db file sequential read                         1,080,532      13,191    40.94
    CPU time                                                       10,183    31.60
    db file scattered read                            456,075       3,977    12.34
    wait for unread message on broadcast channel        4,195       2,770     8.60
    global cache cr request                         1,633,056         873     2.71
    Cluster Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Global Cache Service - Workload Characteristics
    Ave global cache get time (ms):                            0.8
    Ave global cache convert time (ms):                        1.1
    Ave build time for CR block (ms):                          0.1
    Ave flush time for CR block (ms):                          0.2
    Ave send time for CR block (ms):                           0.3
    Ave time to process CR block request (ms):                 0.6
    Ave receive time for CR block (ms):                        4.4
    Ave pin time for current block (ms):                       0.2
    Ave flush time for current block (ms):                     0.0
    Ave send time for current block (ms):                      0.3
    Ave time to process current block request (ms):            0.5
    Ave receive time for current block (ms):                   2.6
    Global cache hit ratio:                                    3.9
    Ratio of current block defers:                             0.0
    % of messages sent for buffer gets:                        3.7
    % of remote buffer gets:                                   0.3
    Ratio of I/O for coherence:                                1.1
    Ratio of local vs remote work:                            10.9
    Ratio of fusion vs physical writes:                        0.0
    Global Enqueue Service Statistics
    Ave global lock get time (ms):                             0.1
    Ave global lock convert time (ms):                         0.0
    Ratio of global lock gets vs global lock releases:         1.0
    GCS and GES Messaging statistics
    Ave message sent queue time (ms):                          0.4
    Ave message sent queue time on ksxp (ms):                  1.8
    Ave message received queue time (ms):                      0.2
    Ave GCS message process time (ms):                         0.1
    Ave GES message process time (ms):                         0.0
    % of direct sent messages:                                 8.0
    % of indirect sent messages:                              49.4
    % of flow controlled messages:                            42.6
    GES Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Statistic                                    Total   per Second    per Trans
    dynamically allocated gcs resourc                0          0.0          0.0
    dynamically allocated gcs shadows                0          0.0          0.0
    flow control messages received                   0          0.0          0.0
    flow control messages sent                       0          0.0          0.0
    gcs ast xid                                      0          0.0          0.0
    gcs blocked converts                         2,830          0.8          0.0
    gcs blocked cr converts                      7,677          2.1          0.1
    gcs compatible basts                             5          0.0          0.0
    gcs compatible cr basts (global)               142          0.0          0.0
    gcs compatible cr basts (local)            142,678         39.6          1.9
    gcs cr basts to PIs                              0          0.0          0.0
    gcs cr serve without current lock                0          0.0          0.0
    gcs error msgs                                   0          0.0          0.0
    gcs flush pi msgs                              798          0.2          0.0
    gcs forward cr to pinged instance                0          0.0          0.0
    gcs immediate (compatible) conver            9,296          2.6          0.1
    gcs immediate (null) converts               52,460         14.6          0.7
    gcs immediate cr (compatible) con          752,507        209.0          9.9
    gcs immediate cr (null) converts         4,047,959      1,124.4         53.2
    gcs msgs process time(ms)                  153,618         42.7          2.0
    gcs msgs received                        2,287,640        635.5         30.0
    gcs out-of-order msgs                            0          0.0          0.0
    gcs pings refused                           70,099         19.5          0.9
    gcs queued converts                              0          0.0          0.0
    gcs recovery claim msgs                          0          0.0          0.0
    gcs refuse xid                                   1          0.0          0.0
    gcs retry convert request                        0          0.0          0.0
    gcs side channel msgs actual                40,400         11.2          0.5
    gcs side channel msgs logical            4,039,700      1,122.1         53.1
    gcs write notification msgs                     46          0.0          0.0
    gcs write request msgs                         972          0.3          0.0
    gcs writes refused                               4          0.0          0.0
    ges msgs process time(ms)                    2,713          0.8          0.0
    ges msgs received                           73,687         20.5          1.0
    global posts dropped                             0          0.0          0.0
    global posts queue time                          0          0.0          0.0
    global posts queued                              0          0.0          0.0
    global posts requested                           0          0.0          0.0
    global posts sent                                0          0.0          0.0
    implicit batch messages received           288,801         80.2          3.8
    implicit batch messages sent               622,610        172.9          8.2
    lmd msg send time(ms)                        2,148          0.6          0.0
    lms(s) msg send time(ms)                         1          0.0          0.0
    messages flow controlled                 3,473,393        964.8         45.6
    messages received actual                   765,292        212.6         10.1
    messages received logical                2,360,972        655.8         31.0
    messages sent directly                     654,760        181.9          8.6
    messages sent indirectly                 4,027,924      1,118.9         52.9
    msgs causing lmd to send msgs               33,481          9.3          0.4
    msgs causing lms(s) to send msgs            13,220          3.7          0.2
    msgs received queue time (ms)              379,304        105.4          5.0
    msgs received queued                     2,359,723        655.5         31.0
    msgs sent queue time (ms)                1,514,305        420.6         19.9
    msgs sent queue time on ksxp (ms)        4,349,174      1,208.1         57.1
    msgs sent queued                         4,032,426      1,120.1         53.0
    msgs sent queued on ksxp                 2,415,381        670.9         31.7
    GES Statistics for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    Statistic                                    Total   per Second    per Trans
    process batch messages received            278,174         77.3          3.7
    process batch messages sent                913,611        253.8         12.0
    Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    db file sequential read         1,080,532          0     13,191     12     14.2
    db file scattered read            456,075          0      3,977      9      6.0
    wait for unread message on b        4,195      1,838      2,770    660      0.1
    global cache cr request         1,633,056      8,417        873      1     21.4
    db file parallel write              8,243          0        260     32      0.1
    buffer busy waits                  16,811          0        168     10      0.2
    log file parallel write           187,783          0        158      1      2.5
    log file sync                      75,143          0        147      2      1.0
    buffer busy global CR               9,713          0        102     10      0.1
    global cache open x                31,157      1,230         50      2      0.4
    enqueue                            58,261         14         45      1      0.8
    latch free                         33,398      7,610         44      1      0.4
    direct path read (lob)              9,925          0         36      4      0.1
    library cache pin                   8,777          1         34      4      0.1
    SQL*Net break/reset to clien       82,982          0         32      0      1.1
    log file sequential read              409          0         31     75      0.0
    log switch/archive                      3          3         29   9770      0.0
    SQL*Net more data to client       201,538          0         16      0      2.6
    global cache open s                 8,585        342         14      2      0.1
    global cache s to x                11,098        148         11      1      0.1
    control file sequential read        6,845          0          8      1      0.1
    db file parallel read               1,569          0          7      4      0.0
    log file switch completion             35          0          7    194      0.0
    row cache lock                     15,780          0          6      0      0.2
    process startup                        69          0          6     82      0.0
    global cache null to x              1,759         48          6      3      0.0
    direct path write (lob)               685          0          5      7      0.0
    DFS lock handle                     8,713          0          3      0      0.1
    control file parallel write         1,350          0          2      2      0.0
    wait for master scn                 1,194          0          1      1      0.0
    CGS wait for IPC msg               30,830     30,715          1      0      0.4
    global cache busy                      14          1          1     75      0.0
    ksxr poll remote instances         30,997     12,692          1      0      0.4
    direct path read                      752          0          0      1      0.0
    switch logfile command                  3          0          0    148      0.0
    log file single write                  24          0          0     13      0.0
    library cache lock                    668          0          0      0      0.0
    KJC: Wait for msg sends to c        1,161          0          0      0      0.0
    buffer busy global cache               26          0          0      6      0.0
    IPC send completion sync              261        260          0      0      0.0
    PX Deq: reap credit                 3,477      3,440          0      0      0.0
    LGWR wait for redo copy             1,751          0          0      0      0.0
    async disk IO                       1,059          0          0      0      0.0
    direct path write                     298          0          0      0      0.0
    slave TJ process wait                   1          1          0     18      0.0
    PX Deq: Execute Reply                   3          1          0      3      0.0
    PX Deq: Join ACK                        8          4          0      1      0.0
    global cache null to s                  8          0          0      1      0.0
    ges inquiry response                   16          0          0      0      0.0
    Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    PX Deq: Parse Reply                     6          2          0      1      0.0
    PX Deq Credit: send blkd                2          1          0      0      0.0
    PX Deq: Signal ACK                      3          1          0      0      0.0
    library cache load lock                 1          0          0      0      0.0
    buffer deadlock                         6          6          0      0      0.0
    lock escalate retry                     4          4          0      0      0.0
    SQL*Net message from client     9,470,867          0    643,285     68    124.4
    queue messages                     42,829     41,144     42,888   1001      0.6
    wakeup time manager                   601        600     16,751  27872      0.0
    gcs remote message                795,414    120,163     13,606     17     10.4
    jobq slave wait                     2,546      2,462      7,375   2897      0.0
    PX Idle Wait                        2,895      2,841      7,021   2425      0.0
    virtual circuit status                120        120      3,513  29273      0.0
    ges remote message                142,306     69,912      3,504     25      1.9
    SQL*Net more data from clien      206,559          0         19      0      2.7
    SQL*Net message to client       9,470,903          0         14      0    124.4
    PX Deq: Execution Msg                 313        103          2      7      0.0
    Background Wait Events for DB: EPIP  Instance: epip1  Snaps: 61688 -61698
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    db file parallel write              8,243          0        260     32      0.1
    log file parallel write           187,797          0        158      1      2.5
    log file sequential read              316          0         22     70      0.0
    enqueue                            56,204          0         15      0      0.7
    control file sequential read        5,694          0          6      1      0.1
    DFS lock handle                     8,682          0          3      0      0.1
    db file sequential read               276          0          2      8      0.0
    control file parallel write         1,334          0          2      2      0.0
    wait for master scn                 1,194          0          1      1      0.0
    CGS wait for IPC msg               30,830     30,714          1      0      0.4
    ksxr poll remote instances         30,972     12,681          1      0      0.4
    latch free                            356         54          1      2      0.0
    direct path read                      752          0          0      1      0.0
    log file single write                  24          0          0     13      0.0
    LGWR wait for redo copy             1,751          0          0      0      0.0
    async disk IO                         812          0          0      0      0.0
    global cache cr request                69          0          0      1      0.0
    row cache lock                         45          0          0      1      0.0
    direct path write                     298          0          0      0      0.0
    library cache pin                      29          0          0      1      0.0
    rdbms ipc reply                        29          0          0      0      0.0
    buffer busy waits                      10          0          0      0      0.0
    library cache lock                      2          0          0      0      0.0
    global cache open x                     2          0          0      0      0.0
    rdbms ipc message                 179,764     36,258     29,215    163      2.4
    gcs remote message                795,409    120,169     13,605     17     10.4
    pmon timer                          1,388      1,388      3,508   2527      0.0
    ges remote message                142,295     69,912      3,504     25      1.9
    smon timer                            414          0      3,463   8366      0.0
              -------------------------------------------------------------

  • RAC Interconnect Transfer rate vs NIC's Bandwidth

    Hi Guru,
    I need some clarification of RAC interconnect terminology: the difference between "private interconnect transfer rate" and "NIC bandwidth".
    We have 11gR2 RAC with multiple databases.
    So we need to find out what the current resource status is.
    We have two physical NICs on each node: 8 Gb is for the public network and 2 Gb is for the private (interconnect) network.
    Technically, we have 4 Gb of private network bandwidth.
    If I look at the "Private Interconnect Transfer rate" through OEM or IPTraf (a Linux tool), it is showing 20-30 MB/sec.
    There is no issue at all at the moment.
    Please correct me if I am wrong.
    The transfer rate should be fine up to 500 MB or 1 GB/sec, because the current NICs' capacity is 4 Gb. Does that make sense?
    I'm sure there are multiple things to consider, but I'm kind of stumped on the whole transfer rate vs. bandwidth question. Is there any way to calculate what a typical transfer would be?
    Or how do I determine whether our interconnect is good enough, based on the transfer rate?
    Another question is ....
    In our case, how do I set up the Warning and Critical thresholds for "Private Interconnect Transfer rate" in OEM?
    Any comments will be appreciated.
    Please advise.
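    One way to sanity-check the numbers is to convert the NIC capacity into the same units OEM reports (MB/sec). A quick sketch, assuming a 4 Gbit/s aggregate private link and the 20-30 MB/sec observed rate from the question (both figures taken from the post above):

    ```shell
    # Convert aggregate NIC capacity (Gbit/s) into MB/s and compute utilization.
    # Assumptions: 4 Gbit/s usable interconnect capacity, 30 MB/s observed peak.
    NIC_GBITS=4
    OBSERVED_MBS=30

    # 1 Gbit/s ~= 1000 Mbit/s; divide by 8 bits per byte to get MB/s.
    CAPACITY_MBS=$(( NIC_GBITS * 1000 / 8 ))
    UTIL_PCT=$(( OBSERVED_MBS * 100 / CAPACITY_MBS ))

    echo "capacity: ${CAPACITY_MBS} MB/s, utilization: ${UTIL_PCT}%"
    ```

    So at 20-30 MB/sec the link is running at well under 10% of its nominal capacity; as the answer that follows points out, latency rather than raw bandwidth is usually the binding constraint anyway.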

    Interconnect performance sways more to latency than bandwidth IMO. In simplistic terms, memory is shared across the Interconnect. What is important for accessing memory? The size of the pipe? Or the speed of the pipe?
    A very fast small pipe will typically perform significantly better than a large and slower pipe.
    Even the size of the pipe is not that straightforward. The standard IP MTU size is 1500 bytes. You can run jumbo and super-jumbo frame MTU sizes on the Interconnect, where for example an MTU size of 65K is significantly larger than a 1500-byte MTU. This means significantly more data can be transferred over the Interconnect at a much reduced overhead.
    Personally, I would not consider Ethernet (GigE included) for the Interconnect. Infiniband is faster, more scalable, and offers an actual growth path to 128Gb/s and higher.
    Oracle also uses Infiniband (QDR/40Gb) for their Exadata Database Machine product's Interconnect. Infiniband also enables one to run Oracle Interconnect over RDS instead of UDP. I've seen Oracle reports to the OFED committee saying that using RDS in comparison with UDP, reduced CPU utilisation by 50% and decreased latency by 50%.
    I also do not see the logic of having a faster public network and a slower Interconnect.
    IMO there are 2 very fundamental components in RAC that determines what is the speed and performance achievable with that RAC - the speed, performance and scalability of the I/O fabric layer and for the Interconnect layer.
    And Exadata btw uses Infiniband for both these critical layers. Not fibre. Not GigE.
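    To see whether jumbo frames are actually in effect end-to-end, you can check the configured MTU and then send a non-fragmentable ping sized to the jumbo payload. A sketch only: the interface name eth1, the 9000-byte MTU, and the remote IP placeholder are all assumptions, not details from the thread.

    ```shell
    # Show the MTU configured on the (assumed) interconnect NIC:
    #   ip link show eth1 | grep -o 'mtu [0-9]*'
    # A jumbo-frame setup shows e.g. "mtu 9000" instead of the default 1500.

    # Max ICMP payload that fits in one frame = MTU - 20 (IP hdr) - 8 (ICMP hdr).
    MTU=9000
    PAYLOAD=$(( MTU - 28 ))

    # With -M do, fragmentation is forbidden, so the ping only succeeds if every
    # hop (NICs and switches) really supports the jumbo MTU:
    echo "ping -M do -s ${PAYLOAD} -c 3 <remote-private-ip>"
    ```

    If that ping fails while a default-size ping works, some device in the path is still at MTU 1500 and the jumbo setting is doing nothing.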

  • Dedicated switches needed for RAC interconnect or not?

    Currently working on an Extended RAC cluster design implementation, I asked the network engineer for dedicated switches for the RAC interconnects.
    Here is a little background:
    There are 28 RAC clusters over 2x13 physical RAC nodes, with a separate ORACLE_HOME for each instance and at least 2+ instances on each RAC node. So 13 RAC nodes will be in each site (data center). This is basically an Extended RAC solution for SAP databases on RHEL 6 using ASM and Clusterware for Oracle 11gR2. The RAC nodes are blades in a c7000 enclosure (in each site). The distance between the sites is 55+ km.
    Oracle recommends Infiniband (20 Gb/s) as the network backbone, but here DWDM will be used with 2x10 Gb/s links for the RAC interconnect between the sites. There will be separate 2x1 Gb/s redundant links for the production network and 2x2 Gb/s FC (Fibre Channel) redundant links for the SAN/storage network (ASM traffic will go here). There will be switches for the public production network and for the SAN network.
    Oracle recommends dedicated switches (which will give acceptable latency/bandwidth) with switch redundancy to route the dedicated, non-routable VLANs for the RAC interconnect (private/heartbeat/global cache transfer) network. Since the DWDM interlinks are 2x10 Gb/s, do I still need the dedicated switches?
    If yes, then how many?
    Your inputs will be greatly appreciated.. and help me take a decision.
    Many Thanks in advance..
    Abhijit

    Absolutely agree. The chances of interconnect overload in an HA (RAC) solution, and ultimately RAC node eviction (from very high latency), are very real, and for exactly this reason I even suggested inexpensive switches to route the VLANs for the RAC interconnect. The ASM traffic will get routed through the 2x2 Gb/s FC links via SAN directors (one in each site).
    I suggested the network folks use uplinks from the c7000 enclosure and route the RAC VLAN through these inexpensive switches for the interconnect traffic. We have another challenge here: HP has certified the VirtualConnect/Flex-Fabric architecture for blades in the c7000 to allocate VLANs for the RAC interconnect. But this is only for one site, and does not span production/DR sites separated over a distance.
    Btw, do you have any standard switch model to select from, and how many should I go for, for a configuration of 13 Extended RAC clusters with each cluster hosting 2+ RAC instances, to host a total of 28 SAP instances?
    Many Thanks again!
    Abhijit
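    One number worth putting on the table for a 55+ km stretch is the physics floor on interconnect latency: distance alone adds round-trip delay before any switch is involved. A quick sketch of the arithmetic, using the usual rule of thumb of ~5 microseconds per km for light in fibre (an assumption here, not a measured value):

    ```shell
    # Minimum round-trip time added purely by 55 km of fibre.
    # Rule of thumb: light in fibre takes ~5 us per km, one way.
    DIST_KM=55
    US_PER_KM=5

    RTT_US=$(( DIST_KM * US_PER_KM * 2 ))   # round trip
    echo "fibre-only RTT floor: ${RTT_US} us"
    ```

    So cache-fusion block transfers pay at least roughly half a millisecond of site-to-site RTT regardless of switch choice; dedicated switches protect against queueing and contention on top of that floor, not against the distance itself.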

  • "NETWORK CHANGE DETECTED" - Broadband dropping out

    About 4 to 5 weeks ago I started finding my broadband connection would just 'drop out'.
    I'd lose it. No email or internet browsing, nothing.
    It might come back on its own, it might not.
    Generally I'd run diagnostics, and sometimes it would come back immediately; sometimes I'd have to run it a few times.
    Sometimes I'd give up and walk away.
    Often I get the message "NETWORK CHANGE DETECTED", and the connection would re-establish itself.
    It may run for minutes, and I'd be back at square one, or it could run OK for days.
    All extremely frustrating.
    I've read past posts and see that this problem has come up before, but I cannot find a "do this to fix it" response.
    Seeing that it's happened to others, I'm hoping it's something Apple can advise on.
    Does anyone know why this happens and how to fix it? I'm really at my wits' end.
    I've been in contact with my internet provider who give me the usual answers.
    Mac-Mini (2.4Ghz Intel Core 2 Duo) running OS 10.6.7
    8Gb Ram
    Netcomm Modem NB6PLus4W
    Broadband ran fine from late December 2010 until mid May 2011. No idea what's causing this issue.
    Thanks all.
    Robert

    This is interesting. I just put up my first question to the community, also with these things in common:
    I am in Australia
    with TPG on ADSL2+
    I have a Netcomm NB14WN wireless router
    Mac OS version 10.6.7, then updated to 10.6.8 - hasn't made any difference.
    Since I joined TPG I have also had, at first, very slow and unreliable speeds (worse than dial-up), and several nights a week it would repeatedly disconnect with "network change detected".
    So after numerous discussions with TPG and several 'reboots' of everything, they had a level 2 engineer check and update the servers, and now with the ethernet cable it is stable and fast (same bandwidth and speed test every time). What I still have at this stage is an unreliable wireless connection. We have checked for interference in the house etc., and now they have finally sent me a replacement modem to see if that makes a difference (should pick it up tomorrow). The problem is that some days the wireless is fine and some days it isn't. It played up again two hours ago - I put on the ethernet cable and it was fine, took the cable off a couple of hours ago and the wireless has been fine while I am typing this. So I will let you know how the other wireless router goes - apparently they check these before they send them out, but this one is from Netcomm also. I wonder if it is that - my dad and I looked it up on the net and it only sells for $100 AUS, which is pretty cheap - and when I touch it underneath it is also very hot.

  • It doesn't recognise my password for wifi. Unable to connect. Tried resetting the network, changed the WAP settings, nothing. Pswd is 9 characters. MacBook

    iPad 2 doesn't recognise my password for wifi. Unable to connect. Tried resetting the network, changed the WAP settings, nothing. Pswd is 9 characters. MacBook and iPhone accept the password, but not my iPad 2.

    Hey brainmasala,
    Thanks for using Apple Support Communities.
    Sounds like you are not able to connect to a Wi-Fi network with your iPad. The first article below has a section for "Unable to connect to a Wi-Fi network", and the second covers the recommended network settings. If after these steps it still has the same issue, you may want to restore the device to factory settings.
    iOS: Troubleshooting Wi-Fi networks and connections
    http://support.apple.com/kb/TS1398
    iOS and OS X: Recommended settings for Wi-Fi routers and access points
    http://support.apple.com/kb/HT4199
    Use iTunes to restore your iOS device to factory settings
    http://support.apple.com/kb/HT1414
    Have a nice day,
    Mario

  • Oracle 10g RAC - public network interface down

    Hi all,
    I have a question about Oracle RAC and network interface.
    We're using Oracle 10gR2 RAC with two nodes on Linux Red Hat.
    Let's assume that the public network interface goes down.
    I would like to know what happens with existing connections
    on node with network interface with problems.
    Are the connections frozen, or still active?
    Can the users continue to use these existing connections via the other node of the RAC?
    I know that the listener goes down and no new connections are allowed.
    Thank you very much!!!!

    Tads wrote:
    Hi all,
    I have a question about Oracle RAC and network interface.
    We're using Oracle 10gR2 RAC with two nodes on Linux Red Hat.
    Let's assume that the public network interface goes down.
    I would like to know what happens with existing connections
    on node with network interface with problems.
    Are connections frozen, actives?
    Can the users continue to use theses existing connections using the another node of RAC?
    If the interface is down, what do you think? All connections to this node will die. How does your application handle failover: does it attempt to reconnect, or just have a complete application failure?
    You should spend some time in a test lab where you can test this stuff for yourself. Read the documentation - there are tons of sites out there that purport to answer all of your RAC/TAF/FAN/FCF questions, but I would read and trust the documentation first.
    >
    I know that the listener goes down and any other connections is allowed.
    Thank you very much!!!!
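    Whether existing sessions survive depends mostly on client configuration, not on the dead node. With Transparent Application Failover (TAF) configured in the client's tnsnames.ora, an in-flight SELECT can resume on the surviving node. A minimal sketch; the host names, service name, and retry settings here are placeholders, not values from this thread:

    ```
    # Hypothetical tnsnames.ora entry with TAF; adjust names to your cluster.
    ORCL_TAF =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
          (LOAD_BALANCE = yes)
        )
        (CONNECT_DATA =
          (SERVICE_NAME = orcl)
          (FAILOVER_MODE =
            (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))))
    ```

    TYPE = SELECT lets a query in progress resume after reconnecting; uncommitted transactions are still rolled back, so the application must handle that case itself.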

  • 802.3ad (mode=4) bonding for RAC interconnects

    Is anyone using 802.3ad (mode=4) bonding for their RAC interconnects? We have five Dell R710 RAC nodes and we're trying to use the four onboard Broadcom NetXtreme II NICs in an 802.3ad bond with src-dst-mac load balancing. Since we have the hardware to pull this off, we thought we'd give it a try and achieve some extra bandwidth for the interconnect, rather than deploying the traditional active/standby interconnect using just two of the NICs. Has anyone tried this config, and what was the outcome? Thanks.

    I don't, but maybe these documents might help:
    http://www.iop.org/EJ/article/1742-6596/119/4/042015/jpconf8_119_042015.pdf?request-id=bcddc94d-7727-4a8a-8201-4d1b837a1eac
    http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1002938&p_id=1002938&p_cat=documents&p_user=nobody&p_company=994323795175833
    http://www.oracle.com/technology/global/cn/events/download/ccb/10g_rac_bp_en.pdf
    Edited by: Hub on Nov 18, 2009 10:10 AM
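    For reference, an 802.3ad (mode=4) bond on RHEL-style systems is usually declared along these lines. This is a sketch only: the device names, address, and hash policy are assumptions, and the switch ports must be configured for LACP as well:

    ```
    # /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
    DEVICE=bond0
    IPADDR=10.20.30.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"

    # Each slave, e.g. /etc/sysconfig/network-scripts/ifcfg-eth0:
    # DEVICE=eth0
    # MASTER=bond0
    # SLAVE=yes
    # ONBOOT=yes
    # BOOTPROTO=none
    ```

    One caveat with LACP hashing: a single flow still lands on one slave, so the bond raises aggregate throughput across many sessions rather than the speed of any single interconnect transfer.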

  • Every time there's a power cut or I turn off my router, my Macbook doesn't detect the network and doesn't accept the password. Finally, after many, many attempts at solving with 'diagnostics', rebooting...the 'network changes' detected message shows.

    How can I get it to look for a network change - or find the change immediately - and speed up the process? Is there really a change? This has really frustrated me many, many times.
    I'm using 10.6.8 and see there's an update that supposedly helps AirPort problems, but that people have problems with it. I'm having enough problems without those too!
    Thank you for any ideas.

    Take your battery out and reset your PMU.
    http://docs.info.apple.com/article.html?artnum=303319
    Next time you shut down, wait until the computer is totally shut down -- i.e., make sure you don't hear your fan running, the screen is black and that the light is out on the front of the computer -- before you close the lid.
    I had that exact thing happen once, and since I did the above, it's never happened again.
    -Bmer
    Mac Owners Support Group
    Join Us @ MacOSG.com
    ITMS: MacOSG Podcast
     An Apple User Group 

  • Network Change Detected

    At a seemingly-random point either a web page won't load (Safari tells me I am not connected to the Internet), or my email tells me it is having problems getting to the server. However, the 'bars' in my menubar show that I am still logged onto my home network and I still have a connection and a strong-enough signal. The rest of my family is still online and my iPhone is showing that it's online on the wireless connection.
    Safari offers to run Network Diagnostics: I do that, and it comes to a point where it says: "Network Change Detected: Your network configuration has changed. Click OK to proceed to the next step."
    When I do that, and Continue (without changing the options that the dialog is showing me); it completes and tells me my connection is fine, which it then is. After that I can go online, retrieve/send email, etc. ... until the next time (usually a few minutes later)
    My OS is OSX 10.6.4 and my version of safari is 5.0 (6533.16). I've got a linksys router with a password. Is there anything that I can do to my computer or my router to fix this?

    I'm sorry to bump this, but I really wish someone could shed some light on the situation. I realize now that it's probably not a problem with my MacBook and instead with my router, but if anyone has any idea why a router would all of a sudden stop letting wireless computers connect to the internet I would love your input. Thank you.

  • Every network change break the sound

    Hello,
    I've had this problem for a few months now. At first I thought it was a bad update or a misconfiguration I made, but many updates later it's still present, and I believe my other computer has the same problem.
    What happens is that every time something related to networking changes, the sound system seems to be disabled. Here are some of the actions that make it fail:
    - Connect to a VPN (which creates a TUN interface)
    - Start a VirtualBox VM with internal networking enabled (creates a vboxX interface)
    - Plug in my cellphone
    - Or simply plug in any USB thumbdrive
    What is also strange is that disconnecting and reconnecting to some VPNs makes the sound work again. Same thing when plugging in a USB drive. So that has been my workaround lately.
    I mostly listen to music with Amarok, and when the problem happens I get a notification that the sound card is disabled and it is switching to the next one; the same notification appears when it starts working again.
    I use a fully up-to-date Arch, with KDE. My headphones are plugged in via an HDMI screen. I haven't been able to find any errors in the log when it happens, but I might not be looking in the right place.
    Does somebody have any idea what's happening here?

    I already looked at the log (or journal, it's the same thing to me) and it doesn't contain any errors at those times.
    That's why I'm asking for ideas here.

  • "Network Change Detected" - repeats

    This has been happening for roughly the past two weeks. As far as I know, I have done nothing to change my Network configuration.
    Situation: Powerbook G4, running 10.4.11 -- online via my wireless connection at home. At a seemingly-random point either a web page won't load (Safari tells me I am not connected to the Internet), or my email tells me it is having problems getting to the server. However, the 'bars' in my menubar show that I am still logged onto my home network and I still have a connection and a strong-enough signal. My husband is still online on his notebook and my iPhone is showing that it's online on the wireless connection.
    Safari offers to run Network Diagnostics: I do that, and it comes to a point where it says: "Network Change Detected: Your network configuration has changed. Click OK to proceed to the next step."
    When I do that, and Continue (without changing the options that the dialog is showing me); it completes and tells me my connection is fine, which it then is. After that I can go online, retrieve/send email, etc. ... until the next time (sometimes a day, sometimes a few hours).
    What is the "Network Change" that Network Diagnostics has detected? I have not done anything to change my Network. In fact, I have my Preferences 'locked' (ever since that problem with the 'looping' by the Network preferences, several months back).
    Since I now know how to get things back working it's only a bother. But I am left wondering if there is something more dire that this signals. Any ideas?
    (The router is a Belkin; the connection is Comcast cable modem. These have not changed.)

    Hi deemacm, and a warm welcome to the forums!
    To stop the pop-up, Go to System Preferences: Security. Check the box next to "Require password to unlock each secure system preference." Then lock Security.
    Try this cure for Security update...
    http://discussions.apple.com/thread.jspa?threadID=1730909&tstart=0
    The locations are actually...
    /Library/Preferences/SystemConfiguration/preferences.plist
    /Library/Preferences/SystemConfiguration/com.apple.airport.preferences.plist
    /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist
    /Library/Preferences/SystemConfiguration/com.apple.nat.plist
    Then... Try putting these numbers in Network>TCP/IP>DNS Servers, for the Airport Interface...
    208.67.222.222
    208.67.220.220
    Then Apply
    DNS Servers are a bit like Phone books where you look up a name and it gives you the phone number, in our case, you put in apple.com and it comes back with 17.149.160.49 behind the scenes.
    These Servers have been patched to guard against DNS poisoning, and are faster/more reliable than most ISP's DNS Servers.

  • Unable to connect to wifi-network change detected

    I have OSX 10.6.8 running on my MacBook, and for the past couple of days I'm not able to connect to wifi. Network Diagnostics gives me a 'Network change detected' error every time it tries to connect. The wifi settings have not changed, and I'm able to connect with my Android tablet and phone. Any pointers appreciated.

    check this link out for more info
    http://www.ehow.com/how_4537240_using-wifi-data-plan-t.html.
    I saw this in another post
    Hope I was able to help
    If you have received information that has helped you out, please give them Kudos.
    If your problem has been solved, please click on the post that resolved the issue
