Cluster 3.2 Interconnect Weirdness

I am installing Cluster 3.2 with Solaris 10 update 3 on a V210 and V240 and I am having an interconnect problem.
I have bge2 and bge3 on each node cabled back to back via a cross-over cable. I am able to plumb up the interfaces, assign a test IP and ping back and forth with no problem so I know the cables and ports are OK on each server.
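For reference, the back-to-back test on each node looked roughly like this (the 192.168 addresses are throwaway values I picked just for the check):
[code]
# plumb one of the interconnect candidates and give it a test address
ifconfig bge2 plumb
ifconfig bge2 192.168.10.1 netmask 255.255.255.0 up   # use .2 on the peer node

# verify the cable by pinging the peer, then repeat for bge3
ping 192.168.10.2

# tear the test config back down before running scinstall
ifconfig bge2 down
ifconfig bge2 unplumb
[/code]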
After downing and unplumbing bge2 and bge3 on each node, I run scinstall from one of the nodes. Everything goes well until this point:
Cluster Creation
Log file - /var/cluster/logs/install/scinstall.log.blah
Checking installation status ... done
The Sun Cluster software is installed on "marge".
The Sun Cluster software is installed on "homer".
Started sccheck on "marge".
Started sccheck on "homer".
sccheck completed with no errors or warnings for "marge".
sccheck completed with no errors or warnings for "homer".
Configuring "homer" ... done
Rebooting "homer" ...
The node Homer boots into cluster mode but clintr status shows this:
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
clintr show:
--- Transport Adapters for homer ---
Transport Adapter: bge2
State: Disabled
Transport Type: dlpi
device_name: bge
device_instance: 2
lazy_free: 1
dlpi_heartbeat_timeout: 10000
dlpi_heartbeat_quantum: 1000
nw_bandwidth: 80
bandwidth: 70
Transport Adapter: bge3
State: Disabled
Transport Type: dlpi
device_name: bge
device_instance: 3
lazy_free: 1
dlpi_heartbeat_timeout: 10000
dlpi_heartbeat_quantum: 1000
nw_bandwidth: 80
bandwidth: 70
I am guessing that Marge will not reboot into the cluster because she cannot talk to Homer over the interconnects after he reboots.
I'm not using switches in my scinstall config, just good old network interfaces and crossover cables.
ifconfig -a on Homer shows bge0 and bge1 as my public interfaces in an IPMP configuration and clprivnet0 using the default 172.16 address range. bge2 and bge3 are nowhere to be found in the listing.
The physical ports on Homer are not lit anymore, almost as if whatever should plumb up the interfaces is missing.
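One thing I still want to rule out is whether the driver even sees a link on those ports once the node is in cluster mode; if I remember the Solaris 10 syntax right, something like this shows it without having to plumb anything:
[code]
# link state / speed / duplex as the bge driver reports them
dladm show-dev bge2
dladm show-dev bge3

# and the datalink view
dladm show-link
[/code]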
Do I need an /etc/hostname.bge2 and /etc/hostname.bge3 on each host with 'up' in it? Do I have to manually assign bge2 and bge3 to clprivnet0 (and if so, how)?
I'm missing something subtle here, any insight would be most appreciated.

I did a rebuild with current patches, did not run JASS or our in-house hardening script on the nodes, and I get the same result.
I think I might be hitting a bug in the bge driver or something.
On the first node, before running scinstall, all NIC lights are lit.
After that first node reboots from scinstall, the NIC lights stay lit until right before the CMM messages begin appearing on the console. At that point the interconnect lights go out and stay out.
No errors were detected with sccheck or in the install logs for the cluster.
I tried a rebuild of the cluster nodes using a switch (ProCurve 2626) for the interconnects rather than an ethernet cable or cross-over cable.
I have a hme interface in my V210 and V240 and I am going to use that for one of the interconnects to see if it matters.
Basically, at this point it is definitely not something physical (bad cable, bad switch port, etc.) but something in the cluster configuration from scinstall that does not like the interconnects and is keeping the cluster nodes from talking to each other.
Since the cluster isn't working anyway, I can do a clintr enable node:port,switch@port and see that the ports and switch ports show as enabled, but clintr status does not show an active interconnect and the physical ports are not lit.
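Roughly what I am running; the endpoint names below are placeholders matching my config, so adjust them for your own nodes and switch:
[code]
# overall state of the transport paths and adapters
clintr status
clintr show

# enable a cable end by hand (node:adapter paired with switch@port)
clintr enable homer:bge2,switch1@1
[/code]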
I do see references to bgeX/0 unregistered in /var/adm/messages but I haven't found information as to what this means or what to do about it exactly yet.
Closest thing so far is this:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6453203

Similar Messages

  • Is read or write possible from an MQ Cluster using Oracle Interconnect?

    Hi,
    Can we read from or write to an MQ Cluster using Oracle Interconnect? If we can, how do we do it?
    Regards,
    Koushik

    Sorry guys if I am missing something here, but would you not be better off just using the available Oracle ESB AQ Adapter?
    I've been able to pick messages up from an AQ queue and put them on both Oracle and another vendor's JMS queues, and vice versa.
    Cheers
    A.

  • Gig Ethernet vs. SCI as Cluster Private Interconnect for Oracle RAC

    Hello Gurus
    Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?
    It's for a high-availability requirement of Oracle 9i RAC. I need to know:
    1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?
    2) What is the recommended private cluster interconnect for Oracle RAC? Gigabit Ethernet, or SCI with RSM?
    3) What about scenarios with, say, 3 x Gigabit Ethernet vs. 2 x SCI as the cluster's private interconnects?
    4) How does the interconnect traffic get distributed among the multiple Gigabit Ethernet interconnects (for Oracle RAC), and does anything need to be done at the Oracle RAC level for Oracle to recognise that there are multiple interconnect cards and start using all of the Gigabit Ethernet interfaces for transferring packets?
    5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?
    I have tried searching for this but could not locate any doc that precisely clarifies these doubts.
    Thanks for the patience.
    Regards,
    Nilesh

    Answers inline...
    Tim
    > Can anyone please confirm whether it's possible to configure 2 or more Gigabit Ethernet interconnects (Sun Cluster 3.1 private interconnects) on an E6900 cluster?

    Yes, absolutely. You can configure up to 6 NICs for the private networks. Traffic is automatically striped across them if you specify clprivnet0 to Oracle RAC (9i or 10g). That covers TCP connections and UDP messages.

    > 1) Can I use Gigabit Ethernet as the private cluster interconnect for deploying Oracle RAC on an E6900?

    Yes, definitely.

    > 2) What is the recommended private cluster interconnect for Oracle RAC? Gigabit Ethernet, or SCI with RSM?

    SCI is, or is in the process of being, EOL'ed. Gigabit is usually sufficient. Longer term you may want to consider InfiniBand or 10 Gigabit Ethernet with RDS.

    > 3) What about scenarios with, say, 3 x Gigabit Ethernet vs. 2 x SCI as the cluster's private interconnects?

    I would still go for 3 x GbE because it is usually cheaper and will probably work just as well. The latency and bandwidth differences are often masked by the performance of the software higher up the stack. In short, unless you have tuned the heck out of your application and just about everything else, don't worry too much about the difference between GbE and SCI.

    > 4) How does the interconnect traffic get distributed among the multiple Gigabit Ethernet interconnects, and does anything need to be done at the Oracle RAC level to make it use all of the Gigabit Ethernet interfaces?

    You don't need to do anything at the Oracle level. That's the beauty of using Oracle RAC with Sun Cluster as opposed to RAC on its own. The striping takes place automatically and transparently behind the scenes.

    > 5) What would happen to Oracle RAC if one of the Gigabit Ethernet private interconnects fails?

    It's completely transparent. Oracle will never see the failure.

    > I have tried searching for this but could not locate any doc that precisely clarifies these doubts.

    This is all covered in a paper that I have just completed and should be published after Christmas. Unfortunately, I cannot give out the paper yet.
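    In the meantime, a minimal sketch of what "specify clprivnet0 to Oracle RAC" amounts to in practice; treat it as illustrative only (clprivnet0 is the striped virtual interface Sun Cluster presents, and the gv$ view is the 10g name, not available on 9i):
    [code]
    # clprivnet0 is the single virtual interface Sun Cluster presents to RAC;
    # the striping across the physical GbE paths happens underneath it
    ifconfig clprivnet0

    # on 10g you can verify which interconnect the instances actually picked up
    sqlplus -s / as sysdba <<'EOF'
    select inst_id, name, ip_address, source from gv$cluster_interconnects;
    EOF
    [/code]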

  • Oracle RAC interconnect performance by GRIDControl

    Hi All,
    We have Oracle 10g RAC and we manage the database through Grid Control.
    We are not able to see the <Performance> and <Interconnects> tabs on the RAC cluster page; do you guys know why?
    I logged into sysman; from <Targets> at the right corner, on the left side I could see the database list. I selected the RAC database name and clicked it. At the top left-most corner I see a link like the one below; if I click on this hyperlink (DBname), it takes me to the cluster page, but there the two tabbed panes <Performance> and <Interconnects> are not enabled. Can anyone please help me with how to check this information in Grid Control?
    Cluster: DBNAME >
    Thanks in advance

    First click on the target of type Cluster Database; that will take you to the overall Cluster Database: <your cluster database name> page. At the top of that page, on the left side, you will see a hyperlink named Cluster: <cluster name>. Click on this cluster name hyperlink and it will take you to the Cluster page, where the interconnect tabs are enabled.
    -Harish Kumar Kalra

  • VCS to Sun Cluster migration

    I am planning to migrate a 2-node cluster from VCS to Sun Cluster. How much downtime does this involve? Is there any documentation that I can reference?

    Hi all,
    In the following I have outlined the principal steps for migrating a cluster in place. This will be one of the subtopics of an upcoming blog about VCS to SC migration.
    Pavel, you should definitely revisit SC 3.2, and explicitly the BUI. Various VCS admins on different projects have told us that the gap has become so small that VCS is not worth the additional cost.
    Bear in mind that migrating in place is the most complex scenario; doing it on a completely separate platform is a much simpler process. But let's proceed with the assumptions and process:
    Let us assume a two-node cluster where you want to migrate from VCS with VxVM to Solaris Cluster and Solaris Volume Manager. I assume as well that your data is mirrored. The steps below are an outline of the migration process in principle; for the necessary cluster administration commands you need to consult the appropriate documentation.
    1. Reduce the VCS cluster to a one-node cluster and disconnect the interconnect. The interconnect has to be disconnected to allow a Solaris Cluster installation on the other node; Solaris Cluster checks the interconnect for unwanted traffic.
    2. Split the storage into two halves, and disallow access from the VCS cluster to the future Solaris Cluster half. This can be achieved, for example, by modifying the switch zoning or the LUN masking. At this point your application is still running, but you no longer have high availability or data redundancy.
    3. Install a single-node Solaris Cluster on the second host; it is advisable to start with a fresh Solaris install.
    4. Configure the full Solaris Cluster topology with a temporary copy of your data. The data has to be installed by backup/restore, because you are changing the volume manager as well. It is important here that you use different IP addresses for the logical hosts, to avoid duplicate addresses. Now the new single-node Solaris Cluster is ready to take the actual data.
    5. When you are ready for an application downtime, transfer the current data from the Veritas cluster to the Solaris Cluster once more, and shut down the remaining VCS single-node cluster.
    6. Change the IP addresses of the logical hosts in the Solaris Cluster to their final values and enable all relevant resources. From now on your application will be running on the new Solaris Cluster.
    7. Re-establish the interconnect, destroy the VCS cluster, and install the Solaris Cluster packages on the old VCS node, but do not configure the node yet.
    8. Allow both nodes access to the storage with the appropriate methods.
    9. Add the second node to the Solaris Cluster, including the Solaris Cluster device groups; this step will take another short application downtime (see the sketch after this list).
    10. Mirror your data. From this point on you have full redundancy and full high availability again.
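    A rough sketch of step 9, with placeholder host names node1 (the node already running Solaris Cluster) and node2 (the former VCS node); please check the exact options against the Solaris Cluster 3.2 documentation before using them:
    [code]
    # on node1 (already in the cluster): authorize the new node to join
    claccess allow -h node2

    # on node2: run scinstall and choose the option to add this machine
    # to an existing cluster, naming node1 as the sponsoring node
    scinstall
    [/code]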
    Cheers
    Detlef

  • CRS not starting

    OS: OEL 5 U4 x86_64
    DB: Oracle 11.2.0.1 EE
    Grid Infrastructure: Oracle 11.2.0.1
    CRS and Voting disk Storage: ASM
    Datafile and FRA storage: ASM
    I'm not sure exactly what caused this but, anyway, I changed the MTU from 1500 to 9000 online. After some time, 3 out of 4 nodes in the cluster went down, and CRS refuses to start on these nodes after trying to switch back from MTU 9000 to 1500, rebooting, and making sure disk permissions and ownership are correct. The logs are not too helpful (and cryptic), so I'm at a loss and would appreciate any ideas or help.
    The installation was successful and the RAC was up for a few days while running some tests (including a restart of a node). Currently only a single node has everything up and functional; the others are not working. Below is some output that might help:
    [root@ucstst11 bin]# ./crsctl check cluster -n ucstst11
    ucstst11:
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
    CRS-4533: Event Manager is online
    [root@ucstst11 bin]# ./crsctl start cluster -n ucstst11
    CRS-2672: Attempting to start 'ora.cssd' on 'ucstst11'
    CRS-2672: Attempting to start 'ora.diskmon' on 'ucstst11'
    CRS-2676: Start of 'ora.diskmon' on 'ucstst11' succeeded
    CRS-4404: The following nodes did not reply within the allotted time:
    ucstst11
    [root@ucstst11 bin]# ./crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
    CRS-4533: Event Manager is online
    [root@ucstst11 bin]# ./crsctl start crs
    CRS-4640: Oracle High Availability Services is already active
    CRS-4000: Command Start failed, or completed with errors.
    [root@ucstst11 bin]# oracleasm querydisk -p CRSVOL01
    Disk "CRSVOL01" is a valid ASM disk
    /dev/sdz1: LABEL="CRSVOL01" TYPE="oracleasm"
    /dev/sdcj1: LABEL="CRSVOL01" TYPE="oracleasm"
    [root@ucstst11 bin]# ll /dev/sdz1 /dev/sdcj1
    brw-rw---- 1 oracle dba 69, 113 Mar 27 19:00 /dev/sdcj1
    brw-rw---- 1 oracle dba 65, 145 Mar 27 19:00 /dev/sdz1
    [root@ucstst11 bin]# oracleasm querydisk -d CRSVOL01
    Disk "CRSVOL01" is a valid ASM disk on device [65, 145]
    From the functional node:
    [root@ucstst12 bin]# ./crsctl check cluster -all
    ucstst12:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    Cluster verification now hangs when it tries to contact the other nodes.
    Please help!

    For the most part this issue has been resolved. The SA had partially changed to jumbo frames (on the OS, but not on the switch). We reverted all the jumbo frame changes and the system is back online, except that one node (ironically, the one which had been working) is not reported via "crsctl check cluster -all", and one instance is not starting because it does not see an interconnect (weird).
    We did attempt to fully implement jumbo frames, but that did not work, hence the reversion.
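    For anyone hitting the same thing: the MTU has to match end to end (host NICs and switch ports). On OEL 5 the per-interface piece looks roughly like this; eth1 and the 9000 value are only examples for the private interconnect NIC:
    [code]
    # check the current MTU on the interconnect NIC
    ip link show eth1

    # make it persistent by adding the following line to
    # /etc/sysconfig/network-scripts/ifcfg-eth1
    #   MTU=9000
    # then bounce the interface
    ifdown eth1 && ifup eth1
    [/code]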

  • SC 3.2 second node panics on boot

    I am trying to get a two-node (potentially three, if the cluster works :) ) cluster running in a Solaris 10 x86 (AMD64) environment. The machine specifications are as follows:
    AMD 64 single core
    SATA2 HDD partitioned as / (100+ GB), swap (4 GB) and /globaldevices (1 GB)
    Solaris 10 Generic_127112-07
    Completely patched
    2 GB RAM
    NVidia nge nic
    Syskonnect skge nic
    Realtek rge nic
    Sun Cluster 3.2
    Two unmanaged gigabit switches
    The cluster setup would look like the following:
    DB03 (First node of the cluster)
    db03nge0 -- public interconnect
    db03skge0 -- private interconnect 1 -- connected to sw07
    db03rge0 -- private interconnect 2 -- connected to sw09
    /globaldevices -- local disk
    DB02 (Second node of the cluster)
    db02nge0 -- public interconnect
    db02skge0 -- private interconnect 1 -- connected to sw07
    db02rge0 -- private interconnect 2 -- connected to sw09
    /globaldevices -- local disk
    DB01 (Third node of the cluster)
    db01nge0 -- public interconnect
    db01skge0 -- private interconnect 1 -- connected to sw07
    db01rge0 -- private interconnect 2 -- connected to sw09
    /globaldevices -- local disk
    All external/public communication happens on the nge0 NIC.
    Switches sw07 and sw09 connect these machines for the private interconnect.
    All of them have a local disk partition mounted as /globaldevices.
    A fourth server, which is not part of the cluster environment, acts as a quorum server. The systems connect to the quorum server over the nge NIC. The quorum device name is cl01qs.
    Next, I did a single-node configuration on DB03 through the scinstall utility and it completed successfully. The DB03 system rebooted, acquired its quorum vote from the quorum server, and came up fine.
    Then, I added the second node to the cluster (running the scinstall command from the second node). scinstall completed successfully and the node went down for a reboot.
    I can see the following from the first node:
    db03nge0# cluster show 
    === Cluster ===
    Cluster Name:                                   cl01
      installmode:                                     disabled
      private_netaddr:                                 172.16.0.0
      private_netmask:                                 255.255.248.0
      max_nodes:                                       64
      max_privatenets:                                 10
      udp_session_timeout:                             480
      global_fencing:                                  pathcount
      Node List:                                       db03nge0, db02nge0
      === Host Access Control ===
      Cluster name:                                 cl01
        Allowed hosts:                                 Any
        Authentication Protocol:                       sys
      === Cluster Nodes ===
      Node Name:                                    db03nge0
        Node ID:                                       1
        Enabled:                                       yes
        privatehostname:                               clusternode1-priv
        reboot_on_path_failure:                        disabled
        globalzoneshares:                              1
        defaultpsetmin:                                1
        quorum_vote:                                   1
        quorum_defaultvote:                            1
        quorum_resv_key:                               0x479C227E00000001
        Transport Adapter List:                        skge0, rge0
      Node Name:                                    db02nge0
        Node ID:                                       2
        Enabled:                                       yes
        privatehostname:                               clusternode2-priv
        reboot_on_path_failure:                        disabled
        globalzoneshares:                              1
        defaultpsetmin:                                1
        quorum_vote:                                   0
        quorum_defaultvote:                            1
        quorum_resv_key:                               0x479C227E00000002
        Transport Adapter List:                        skge0, rge0
    Now, the problem part: when scinstall completes on the second node, it sends the machine for a reboot, and the second node encounters a panic and shuts itself down. This panic-and-reboot cycle keeps going unless I place the second node in non-cluster mode. The output from both nodes looks like the following:
    First Node DB03 (Primary)
    Jan 27 18:34:49 db03nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid: 2, incarnation #: 1201476860) has become reachable.
    Jan 27 18:34:49 db03nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 online
    Jan 27 18:34:49 db03nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 online
    Jan 27 18:34:49 db03nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
    Jan 27 18:34:49 db03nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
    Jan 27 18:34:49 db03nge0 Cluster.Framework: [ID 801593 daemon.notice] stdout: releasing reservations for scsi-2 disks shared with db02nge0
    Jan 27 18:34:49 db03nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
    Jan 27 18:34:59 db03nge0 genunix: [ID 446068 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is down.
    Jan 27 18:34:59 db03nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0.
    Jan 27 18:34:59 db03nge0 genunix: [ID 489438 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 being drained
    Jan 27 18:34:59 db03nge0 genunix: [ID 489438 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 being drained
    Jan 27 18:35:00 db03nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #8 completed.
    Jan 27 18:35:00 db03nge0 Cluster.Framework: [ID 801593 daemon.notice] stdout: fencing node db02nge0 from shared devices
    Jan 27 18:35:59 db03nge0 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 errors during initiation
    Jan 27 18:35:59 db03nge0 genunix: [ID 618107 kern.warning] WARNING: Path db03nge0:skge0 - db02nge0:skge0 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    Jan 27 18:35:59 db03nge0 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 errors during initiation
    Jan 27 18:35:59 db03nge0 genunix: [ID 618107 kern.warning] WARNING: Path db03nge0:rge0 - db02nge0:rge0 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    Jan 27 18:40:27 db03nge0 genunix: [ID 273354 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is dead.
    Second Node DB02 (Secondary node just added to cluster)
    Jan 27 18:33:43 db02nge0 ipf: [ID 774698 kern.info] IP Filter: v4.1.9, running.
    Jan 27 18:33:50 db02nge0 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/pools:default: Method "/lib/svc/method/svc-pools start" failed with exit status 96.
    Jan 27 18:33:50 db02nge0 svc.startd[8]: [ID 748625 daemon.error] system/pools:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) with votecount = 1 added.
    Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) with votecount = 0 added.
    Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter rge0 constructed
    Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter skge0 constructed
    Jan 27 18:34:20 db02nge0 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node db02nge0: attempting to join cluster.
    Jan 27 18:34:23 db02nge0 skge: [ID 418734 kern.notice] skge0: Network connection up on port A
    Jan 27 18:34:23 db02nge0 skge: [ID 249518 kern.notice]     Link Speed:      1000 Mbps
    Jan 27 18:34:23 db02nge0 skge: [ID 966250 kern.notice]     Autonegotiation: Yes
    Jan 27 18:34:23 db02nge0 skge: [ID 676895 kern.notice]     Duplex Mode:     Full
    Jan 27 18:34:23 db02nge0 skge: [ID 825410 kern.notice]     Flow Control:    Symmetric
    Jan 27 18:34:23 db02nge0 skge: [ID 512437 kern.notice]     Role:            Slave
    Jan 27 18:34:23 db02nge0 rge: [ID 801725 kern.info] NOTICE: rge0: link up 1000Mbps Full_Duplex (initialized)
    Jan 27 18:34:24 db02nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid: 1, incarnation #: 1201416440) has become reachable.
    Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:rge0 - db03nge0:rge0 online
    Jan 27 18:34:24 db02nge0 genunix: [ID 525628 kern.notice] NOTICE: CMM: Cluster has reached quorum.
    Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) is up; new incarnation number = 1201416440.
    Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
    Jan 27 18:34:24 db02nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
    Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:skge0 - db03nge0:skge0 online
    Jan 27 18:34:25 db02nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
    Jan 27 18:34:25 db02nge0 genunix: [ID 499756 kern.notice] NOTICE: CMM: Node db02nge0: joined cluster.
    Jan 27 18:34:25 db02nge0 cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
    Jan 27 18:34:25 db02nge0 unix: [ID 836849 kern.notice]
    Jan 27 18:34:25 db02nge0 ^Mpanic[cpu0]/thread=ffffffff8202a1a0:
    Jan 27 18:34:25 db02nge0 genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffe8000636b90 addr=30 occurred in module "cl_comm" due to a NULL pointer dereference
    Jan 27 18:34:25 db02nge0 cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
    Jan 27 18:34:25 db02nge0 unix: [ID 836849 kern.notice]
    Jan 27 18:34:25 db02nge0 ^Mpanic[cpu0]/thread=ffffffff8202a1a0:
    Jan 27 18:34:25 db02nge0 genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffe8000636b90 addr=30 occurred in module "cl_comm" due to a NULL pointer dereference
    Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
    Jan 27 18:34:25 db02nge0 unix: [ID 839527 kern.notice] cluster:
    Jan 27 18:34:25 db02nge0 unix: [ID 753105 kern.notice] #pf Page fault
    Jan 27 18:34:25 db02nge0 unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x30
    Jan 27 18:34:25 db02nge0 unix: [ID 243837 kern.notice] pid=4, pc=0xfffffffff262c3f6, sp=0xfffffe8000636c80, eflags=0x10202
    Jan 27 18:34:25 db02nge0 unix: [ID 211416 kern.notice] cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
    Jan 27 18:34:25 db02nge0 unix: [ID 354241 kern.notice] cr2: 30 cr3: efd4000 cr8: c
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  rdi: ffffffff8c932b18 rsi: ffffffffc055a8e6 rdx:               10
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  rcx: ffffffff8d10d0c0  r8:                0  r9:                0
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  rax:               10 rbx:                0 rbp: fffffe8000636cd0
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  r10:                0 r11: fffffffffbce2d40 r12: ffffffff8216a008
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  r13:              800 r14:                0 r15: ffffffff8216a0d8
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  fsb: ffffffff80000000 gsb: fffffffffbc25520  ds:               43
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]   es:               43  fs:                0  gs:              1c3
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]  trp:                e err:                0 rip: fffffffff262c3f6
    Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice]   cs:               28 rfl:            10202 rsp: fffffe8000636c80
    Jan 27 18:34:25 db02nge0 unix: [ID 266532 kern.notice]   ss:               30
    Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636aa0 unix:die+da ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636b80 unix:trap+d86 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636b90 unix:cmntrap+140 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636cd0 cl_comm:__1cKfp_adapterNget_fp_header6MpCLHC_pnEmsgb__+163 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636d30 cl_comm:__1cJfp_holderVupdate_remote_macaddr6MrnHnetworkJmacinfo_t__v_+e5 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636d80 cl_comm:__1cLpernodepathOstart_matching6MnM_ManagedSeq_4nL_NormalSeq_4nHnetworkJmacinfo_t__
    _n0C____v_+180 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636e60 cl_comm:__1cGfpconfIfp_ns_if6M_v_+195 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636e70 cl_comm:.XDKsQAiaUkSGENQ.__1fTget_idlversion_impl1AG__CCLD_+320bf51b ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636ed0 cl_orb:cllwpwrapper+106 ()
    Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636ee0 unix:thread_start+8 ()
    Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
    Jan 27 18:34:25 db02nge0 genunix: [ID 672855 kern.notice] syncing file systems...
    Jan 27 18:34:25 db02nge0 genunix: [ID 433738 kern.notice]  [1]
    Jan 27 18:34:25 db02nge0 genunix: [ID 733762 kern.notice]  33
    Jan 27 18:34:26 db02nge0 genunix: [ID 433738 kern.notice]  [1]
    Jan 27 18:34:26 db02nge0 genunix: [ID 733762 kern.notice]  2
    Jan 27 18:34:27 db02nge0 genunix: [ID 433738 kern.notice]  [1]
    Jan 27 18:34:48 db02nge0 last message repeated 20 times
    Jan 27 18:34:49 db02nge0 genunix: [ID 622722 kern.notice]  done (not all i/o completed)
    Jan 27 18:34:50 db02nge0 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c1d0s1, offset 860356608, content: kernel
    Jan 27 18:34:55 db02nge0 genunix: [ID 409368 kern.notice] ^M100% done: 92936 pages dumped, compression ratio 5.02,
    Jan 27 18:34:55 db02nge0 genunix: [ID 851671 kern.notice] dump succeeded
    Jan 27 18:35:41 db02nge0 genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_127112-07 64-bit
    Jan 27 18:35:41 db02nge0 genunix: [ID 943907 kern.notice] Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
    Jan 27 18:35:41 db02nge0 Use is subject to license terms.
    Jan 27 18:35:41 db02nge0 unix: [ID 126719 kern.info] features: 1076fdf<cpuid,sse3,nx,asysc,sse2,sse,pat,cx8,pae,mca,mmx,cmov,pge,mtrr,msr,tsc,lgpg>
    Jan 27 18:35:41 db02nge0 unix: [ID 168242 kern.info] mem = 3144188K (0xbfe7f000)
    Jan 27 18:35:41 db02nge0 rootnex: [ID 466748 kern.info] root nexus = i86pc
    I don't know what the next step is to overcome this problem. I have tried the same with the DB01 machine, but that machine also throws a kernel panic at the same point. From what I can see in the logs, it seems as if the secondary node(s) do join the cluster:
    Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) with votecount = 1 added.
    Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) with votecount = 0 added.
    Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter rge0 constructed
    Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter skge0 constructed
    Jan 27 18:34:20 db02nge0 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node db02nge0: attempting to join cluster.
    Jan 27 18:34:23 db02nge0 rge: [ID 801725 kern.info] NOTICE: rge0: link up 1000Mbps Full_Duplex (initialized)
    Jan 27 18:34:24 db02nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid: 1, incarnation #: 1201416440) has become reachable.
    Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:rge0 - db03nge0:rge0 online
    Jan 27 18:34:24 db02nge0 genunix: [ID 525628 kern.notice] NOTICE: CMM: Cluster has reached quorum.
    Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) is up; new incarnation number = 1201416440.
    Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
    Jan 27 18:34:24 db02nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
    Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:skge0 - db03nge0:skge0 online
    Jan 27 18:34:25 db02nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
    Jan 27 18:34:25 db02nge0 genunix: [ID 499756 kern.notice] NOTICE: CMM: Node db02nge0: joined cluster.
    but then, immediately, for some reason it encounters the kernel panic.
    The only thing that comes to my mind is that the skge driver is somehow causing the problem while it is part of the cluster interconnect. I don't know, but another thread somewhere on the internet describes a similar problem:
    http://unix.derkeiler.com/Mailing-Lists/SunManagers/2005-12/msg00114.html
    The next step looks like interchanging the nge and skge NICs and trying it again.
    Any help is much appreciated.
    Thanks in advance.
    tualha

    I'm not sure I can solve your problem but I have some suggestions that you might want to consider. I can't find anything in the bugs database that is identical to this, but that may be because we haven't certified the adapters you are using and thus never came across the problem.
    Although I'm not that hot on kernel debugging, looking at the stack traces seems to suggest that there might have been a problem with MAC addresses. Can you check that you have the equivalent of local-mac-address?=true set, so that each adapter has a separate MAC address? If they don't, it might confuse the cl_comm module, which seems to have had the fault.
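    On SPARC the knob I mean is the OBP variable below; these are x86 boxes, so there is no direct equivalent, and the practical check is simply that each plumbed adapter reports a distinct MAC address. A sketch only:
    [code]
    # SPARC / OBP: give every interface its own factory MAC address
    eeprom local-mac-address?=true

    # on a running node (as root), each plumbed interface should show its own,
    # distinct "ether" line
    ifconfig -a | grep -i ether
    [/code]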
    If that checks out, then I would try switching the SysKonnect adapter to the public network and making the nge adapter the other private network. Again, I don't think any of these adapters has ever been tested, so there is no guarantee they will work.
    Other ideas to try are to set the adapters not to auto-negotiate speeds, disable jumbo frames, check that they don't have any power-saving modes that might put them to sleep periodically, etc.
    Let us know if any of these make any difference.
    Tim
    ---

  • FiledownloadresourcePath

    If I have a Web Dynpro project with a file upload and download feature:
    when I click upload, the file is stored in a directory I specify;
    later on, when I try to download the file,
    how does the system know which directory I stored it in,
    or will the system only retrieve it from a specific location?

    Normally it will store it in this location, right:
    localhost\d$\usr\sap\N01\DVEBMGS01\j2ee\cluster\server0
    Something very weird happens:
    when I run my Web Dynpro locally, it exports successfully and prompts me to download;
    when I run it at http://xxx.com/webdynpro/MailApp,
    it cannot export the file; the external window pops up and closes again.
    [code]
    try {
        final byte[] content = this.getByteArrayFromResourcePath("OutboundMail.xls");
        // final byte[] content = this.getByteArrayFromResourcePath("\\\\localhost\\d$\\usr\\sap\\N01\\DVEBMGS01\\j2ee\\cluster\\server0\\tm.xls");
        final IWDCachedWebResource resource = WDWebResource.getWebResource(content, WDWebResourceType.XLS);
        resource.setResourceName("MailOutbound_Report_Summary");
        try {
            final IWDWindow window = wdComponentAPI.getWindowManager().createExternalWindow(resource.getAbsoluteURL(), "WD_Filedownload", false);
            // wdComponentAPI.getMessageManager().reportSuccess("resourcePath" + resource.getAbsoluteURL());
            window.open();
        } catch (Exception e) {
            wdComponentAPI.getMessageManager().reportException(new WDNonFatalException(e), false);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    private byte[] getByteArrayFromResourcePath(String resourcePath)
            throws FileNotFoundException, IOException {
        FileInputStream in = new FileInputStream(new File(resourcePath));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int len;
        byte[] part = new byte[10 * 1024];
        while ((len = in.read(part)) != -1) {
            out.write(part, 0, len);
        }
        in.close();   // close only after the whole file has been read
        return out.toByteArray();
    }
    public Map DownloadToExcel(ArrayList columnInfos1,String type)
              byte[] b = null;
              String linktoFile = null;
              StringBuffer err = new StringBuffer();
              StringBuffer xml_file = new StringBuffer();
              int noofelem = wdTableNode.size();
              ArrayList columnInfos = trimHeaderTexts(columnInfos1);
              String nodename = wdTableNode.getNodeInfo().getName().trim();
              String _nodename = nodename.substring(0, 1).toUpperCase()+nodename.substring(1).toLowerCase();
              xml_file.append("<?xml version='1.0' encoding='UTF-8' standalone='no'?><")
                             .append(_nodename)
                             .append(">\n");
              int size = columnInfos.size();
              for(int i =0;i<noofelem;i++){     
                        IWDNodeElement elem = wdTableNode.getElementAt(i);
                        xml_file.append("<")
                                  .append(_nodename)
                                  .append("Element>");
                         for (int j = 0; j < columnInfos.size(); j++) {
                              String attributeName = (String) columnInfos.get(j);
                              xml_file.append("<")
                                        .append(attributeName)
                                        .append(">")
                                        .append(elem.getAttributeValue(attributeName))
                                        .append("</")
                                        .append(attributeName)
                                        .append(">\n");
                         }
                        xml_file.append("</")
                                  .append(_nodename)
                                  .append("Element>\n");
                   xml_file.append("</")
                             .append(_nodename)
                             .append(">\n");
                   try {
                             //modify here
                         if ("MailInbound".equals(type)) {
                             FileOutputStream fout=new FileOutputStream("tm.xml");
                             OutputStreamWriter out=new OutputStreamWriter(fout,"UTF-8");
                                                out.write(xml_file.toString());
                                                 out.flush();
                                                 out.close();
                                                 File xmlDocument = new File("tm.xml");
                                                 generateInboundMailExcel(xmlDocument);
                         } else if ("MailOutbound".equals(type)) {
                             FileOutputStream fout=new FileOutputStream("OutboundMail.xml");
                             OutputStreamWriter out=new OutputStreamWriter(fout,"UTF-8");
                                                 out.write(xml_file.toString());
                                                 out.flush();
                                                 out.close();
                                                 File xmlDocument = new File("OutboundMail.xml");
                                                 generateOutboundMailExcel(xmlDocument);
                        //b =  xml_file.toString().getBytes("UTF-8");                         
                   //     IWDCachedWebResource xlfile = WDWebResource.getWebResource(b,WDWebResourceType.XLS);
                              //xlfile.setResourceName(wdTableNode.getNodeInfo().getName()+" List");
                             //linktoFile = xlfile.getURL();
                             //System.err.println("Link To Url: " +linktoFile);
                             }     catch (WDURLException e1) {
                                       err.append(""+e1.getCause());
                        }catch (UnsupportedEncodingException e) {
                                  err.append(""+e.getCause());
                   }catch(IOException e){
                        System.err.println("IOException:" +e.getMessage());
              //     Map m = new HashMap();
              //     m.put("data",b);
              //     m.put("url",linktoFile);
              //     m.put("error",""+err.toString());
              //     System.err.println("Hash Map:" +m.toString());
           //     return m;
    public static void generateInboundMailExcel(File xmlDocument) {
                                       try {// Creating a Workbook
                                          HSSFWorkbook wb = new HSSFWorkbook();
                                          HSSFSheet spreadSheet = wb.createSheet("spreadSheet");
                                          spreadSheet.setColumnWidth((short) 0, (short) (256 * 25));
                                          spreadSheet.setColumnWidth((short) 1, (short) (256 * 25));
                                          // Parsing XML Document
                                          DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                                          DocumentBuilder builder = factory.newDocumentBuilder();
                                          Document document = builder.parse(xmlDocument);
                                          System.err.println("tableemailinelement");
                                          NodeList nodeList = document.getElementsByTagName("TablesemailinElement");
                                          //Create font style
                                          HSSFFont font=wb.createFont();
                                          font.setFontHeightInPoints((short)10);
                                          font.setFontName("Verdana");
                                          font.setBoldweight(HSSFFont.BOLDWEIGHT_BOLD);
                                          HSSFCellStyle style=wb.createCellStyle();
                                          style.setFont(font);
                                          // Creating Rows
                                          System.out.println("create row");
                                          HSSFRow row = spreadSheet.createRow(0);
                                          HSSFCell cell = row.createCell((short) 0);
                                          cell.setCellValue("DATE IN");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 1);
                                          cell.setCellValue("EMAIL ID");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 2);
                                          cell.setCellValue("SUBJECT");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 3);
                                          cell.setCellValue("DATE RECEIVED");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 4);
                                          cell.setCellValue("ACK DELIVERY");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 5);
                                          cell.setCellValue("DATE ACK");
                                         cell.setCellStyle(style);
                                          cell = row.createCell((short) 6);
                                          cell.setCellValue("SENDER");
                                          cell.setCellStyle(style);
                                          cell = row.createCell((short) 7);
                                          cell.setCellValue("RECEIPIENT");
                                          cell.setCellStyle(style);
                                          for(int i=0;i<nodeList.getLength();i++){
                                               System.err.println("loop: node" );
                                          HSSFRow rowno=spreadSheet.createRow(i+1);     
                                          HSSFCellStyle cellStyle = wb.createCellStyle();
                                              cellStyle.setBorderRight(HSSFCellStyle.BORDER_MEDIUM);
                                              cellStyle.setBorderTop(HSSFCellStyle.BORDER_MEDIUM);
                                              cellStyle.setBorderLeft(HSSFCellStyle.BORDER_MEDIUM);
                                              cellStyle.setBorderBottom(HSSFCellStyle.BORDER_MEDIUM);
                                              //***********take note for string and date*************
                                              cell = rowno.createCell((short) 0);
                                                   cell.setCellValue(((Element) (nodeList.item(i)))
                                                       .getElementsByTagName("dateIn").item(0)
                                                       .getFirstChild().getNodeValue());
                                            cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                              cell = rowno.createCell((short) 1);
                                                   cell.setCellValue(((Element) (nodeList.item(i)))
                                                       .getElementsByTagName("emailId").item(0)
                                                       .getFirstChild().getNodeValue());
                                                   cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                             cell = rowno.createCell((short) 2);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("subject").item(0)
                                                      .getFirstChild().getNodeValue());
                                                 cell.setCellType(HSSFCell.CELL_TYPE_STRING);          
                                            cell = rowno.createCell((short) 3);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("dateReceive").item(0)
                                                      .getFirstChild().getNodeValue());     
                                                cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                            cell = rowno.createCell((short) 4);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("ackDelivery").item(0)
                                                      .getFirstChild().getNodeValue());     
                                                 cell.setCellType(HSSFCell.CELL_TYPE_STRING); 
                                            cell = rowno.createCell((short) 5);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("dateAck").item(0)
                                                      .getFirstChild().getNodeValue());     
                                                 cell.setCellType(HSSFCell.CELL_TYPE_STRING);      
                                            cell = rowno.createCell((short) 6);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("sender").item(0)
                                                      .getFirstChild().getNodeValue());
                                                cell.setCellType(HSSFCell.CELL_TYPE_STRING);          
                                            cell = rowno.createCell((short) 7);
                                                 cell.setCellValue(((Element) (nodeList.item(i)))
                                                      .getElementsByTagName("receipient").item(0)
                                                      .getFirstChild().getNodeValue());     
                                                 cell.setCellType(HSSFCell.CELL_TYPE_STRING);     
                                          }
                                          // Outputting to Excel spreadsheet
                                          System.err.println("Outputing");
                                          FileOutputStream output = new FileOutputStream(new File("tm.xls"));
                                          wb.write(output);
                                          output.flush();
                                          output.close();
                                         // byte[] bs = new byte[(int) xmlDocument.length()];
                                         // IWDCachedWebResource xlfile = WDWebResource.getWebResource(bs,WDWebResourceType.XLS);
                                         // xlfile.setResourceName("Report");
                                       } catch (IOException e) {
                                          System.out.println("IOException " + e.getMessage());
                                       } catch (ParserConfigurationException e) {
                                          System.out
                                              .println("ParserConfigurationException " + e.getMessage());
                                       } catch (SAXException e) {
                                          System.out.println("SAXException " + e.getMessage());
         public static void generateOutboundMailExcel(File xmlDocument) {
                                            try {// Creating a Workbook
                                               HSSFWorkbook wb = new HSSFWorkbook();
                                               HSSFSheet spreadSheet = wb.createSheet("spreadSheet");
                                               spreadSheet.setColumnWidth((short) 0, (short) (256 * 25));
                                               spreadSheet.setColumnWidth((short) 1, (short) (256 * 25));
                                               // Parsing XML Document
                                               DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                                               DocumentBuilder builder = factory.newDocumentBuilder();
                                               Document document = builder.parse(xmlDocument);
                                               System.err.println("tableemailoutelement");
                                               NodeList nodeList = document.getElementsByTagName("TablesemailoutElement");
                                               //Create font style
                                               HSSFFont font=wb.createFont();
                                               font.setFontHeightInPoints((short)10);
                                               font.setFontName("Verdana");
                                               font.setBoldweight(HSSFFont.BOLDWEIGHT_BOLD);
                                               HSSFCellStyle style=wb.createCellStyle();
                                               style.setFont(font);
                                               // Creating Rows
                                               System.out.println("create row");
                                               HSSFRow row = spreadSheet.createRow(0);
                                               HSSFCell cell = row.createCell((short) 0);
                                               cell.setCellValue("DATE IN");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 1);
                                               cell.setCellValue("EMAIL ID");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 2);
                                               cell.setCellValue("SUBJECT");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 3);
                                               cell.setCellValue("ACK DELIVERY");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 4);
                                               cell.setCellValue("DATE SENT");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 5);
                                               cell.setCellValue("SENDER");
                                               cell.setCellStyle(style);
                                               cell = row.createCell((short) 6);
                                               cell.setCellValue("RECEIVER");
                                               cell.setCellStyle(style);
                                                // One row per XML node; data rows start below the header row.
                                                for (int i = 0; i < nodeList.getLength(); i++) {
                                                    System.err.println("loop: node");
                                                    HSSFRow rowno = spreadSheet.createRow(i + 1);
                                                    // Note: this per-row style is created but never applied to the cells below.
                                                    HSSFCellStyle cellStyle = wb.createCellStyle();
                                                    cellStyle.setBorderRight(HSSFCellStyle.BORDER_MEDIUM);
                                                    cellStyle.setBorderTop(HSSFCellStyle.BORDER_MEDIUM);
                                                    cellStyle.setBorderLeft(HSSFCellStyle.BORDER_MEDIUM);
                                                    cellStyle.setBorderBottom(HSSFCellStyle.BORDER_MEDIUM);
                                                    // Each data cell is filled from the matching XML element; setCellValue(String)
                                                    // already makes the cell a string cell, so the setCellType call is redundant but harmless.
                                                    cell = rowno.createCell((short) 0);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("dateIn").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 1);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("emailId").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 2);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("subject").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 3);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("ackDelivery").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 4);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("dateSend").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 5);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("sender").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                    cell = rowno.createCell((short) 6);
                                                    cell.setCellValue(((Element) (nodeList.item(i)))
                                                        .getElementsByTagName("receipient").item(0)
                                                        .getFirstChild().getNodeValue());
                                                    cell.setCellType(HSSFCell.CELL_TYPE_STRING);
                                                } // close the row loop so the workbook is written once, not per row
                                                // Write the workbook out to an Excel file
                                                System.err.println("Outputting");
                                                FileOutputStream output = new FileOutputStream(new File("OutboundMail.xls"));
                                                wb.write(output);
                                                output.flush();
                                                output.close();
                                               // byte[] bs = new byte[(int) xmlDocument.length()];
                                               // IWDCachedWebResource xlfile = WDWebResource.getWebResource(bs,WDWebResourceType.XLS);
                                               // xlfile.setResourceName("Report");
                                             } catch (IOException e) {
                                                System.out.println("IOException " + e.getMessage());
                                             } catch (ParserConfigurationException e) {
                                                System.out.println("ParserConfigurationException " + e.getMessage());
                                             } catch (SAXException e) {
                                                System.out.println("SAXException " + e.getMessage());
                                             }
    [/code]

  • RAC with 10G using shared directories

    We want to test Oracle 10g with Real Application Clusters, but we do not have a SAN yet. Can we take a disk from a normal server, share it, and create a mapped network drive on the two servers where I want to install RAC, and use that as the shared disk?

    This is the article I was referring to:
    Setting Up Linux with FireWire-Based Shared Storage for Oracle9i RAC
    By Wim Coekaerts
    If you’re all fired up about FireWire and you want to set up a two-node cluster for development and testing purposes for your Oracle RAC (Real Application Clusters) database on Linux, here’s an installation and configuration QuickStart guide to help you get started. But first, a caveat: Neither Oracle nor any other vendor currently supports the patch; it is intended for testing and demonstration only.
    The QuickStart instructions step you through the installation of the Oracle database and the use of our patched kernel for configuring Linux for FireWire as well as the installation and configuration of Oracle Cluster File System (OCFS) on a FireWire shared-storage device. Oracle RAC uses shared storage in conjunction with a multinode extension of a database to allow scalability and provide failover security.
    The hardware typically used for shared storage (a fibre-channel system) is expensive (see my column on clustering with FireWire on Oracle Technology Network (OTN) for some background on shared-storage solutions and the new kernel patch). However, once you’ve installed and set up the kernel patch, you will be on your way to setting up a Linux cluster suitable for your development team to use for demo testing and QA—a solution that costs considerably less than the traditional ones.
    The patch is available to the Linux and open source community under the GNU General Public License (GPL). You can download it from the Linux Open Source Projects page, available from the Community Code section of OTN. See the Toolbox sidebar for more information.
    Figure 1: Two-node Linux cluster using FireWire shared drive
    By following this guide, you’ll install the patched kernel on each machine that will comprise a node of the cluster. You’ll basically build a two-node test configuration composed of two machines connected over a 10Base-T network, with each machine linked via FireWire to the drive used for shared storage, as shown in Figure 1.
    If you haven’t used FireWire on either machine before, be sure to install and configure the FireWire interconnect in each machine and test it with a FireWire drive or other device before you get started, to ensure that the baseline system is working. The FireWire interconnects we tested are based on Texas Instruments (TI, one of the coauthors of the IEEE specification on which FireWire is based) chipsets, and we used a 120GB Western Digital External FireWire (IEEE 1394) hard drive.
    Table 1 lists the minimum hardware requirements per node for a two-node cluster and some of the additional requirements for clusters of more than two nodes. You can use a standard laptop equipped with a PCMCIA FireWire card for any of the nodes in the cluster. We’ve successfully tested a laptop-based cluster following the same installation process described in this article.
    As shown in Table 1, for more than two nodes, you must add a four- or five-port FireWire hub to the configuration, to support connections from the additional machines to the drive. Just plug each Linux box into a port in the hub, and plug the FireWire drive into the hub as well. Without a hub, the configuration won’t have enough power for the total cable length on the bus.
    The instructions in this article are for a two-node cluster configuration. To create a cluster of more than two nodes, configure each additional node (node 3, node 4) by repeating these steps for each of the additional nodes and also be sure to do the following:
    Modify the command syntax or script files to account for the proper node number, machine name, and other details specific to the node.
    Create an extra set of log files and undo tablespaces on the shared storage for each additional node.
    It’s not yet possible to use our patched FireWire drivers to build a cluster of more than four nodes.
    Step 1: Download Everything You Need
    Before you get started, spend some time downloading all the software you’ll need from OTN. If you’re not an OTN member, you’ll have to join first, but it’s free.
    Keep in mind that these Linux kernel FireWire driver patches are true open source projects. You can download the source code and customize it for your own implementations as long as you adhere to the GPL agreement.
    See "Toolbox" for a list of the software you should download and have available before you get started.
    Step 2. Install Linux
    Once you’ve downloaded or purchased the Red Hat Linux Advanced Server 2.1 distribution (or another distribution that you’ve already gotten to work with Oracle9i Database, Release 2), you can install Linux on the local hard drive of each node (this takes about 25 minutes per node). We’ll keep the configuration basic, but you should configure one of the network cards on each machine for a private LAN (this provides the interconnect between nodes in the cluster); for example:
    hostname: node1
    ip address: 192.168.1.50
    hostname: node2
    ip address: 192.168.1.51
    Because this is a private LAN, you don’t need "real" IP addresses. Just make sure that if you do hook up either of these machines to a live network, the IP addresses don’t conflict with those of other machines. Also, be sure you download all the software you need for these machines before configuring the private network if you haven’t also configured or don’t have a second network interface card (NIC) in the machines.
    Step 3. Install Oracle9i Database
    If you haven’t done so already, you must download the Oracle software set for Oracle9i Database Release 2 (9.2.0.1.0) for Linux, or if you’re an OTN TechTracks
    For each machine that will comprise a node in the cluster, you must do the following:
    Create a mount point, /oracle/home, for the Oracle software files on the local hard disk of each machine.
    Create a new user, oracle (in either the dba or the oracle group), in /home/oracle on each machine.
    Start the Oracle Universal Installer from the CD or the mount point on the local hard disk to which you’ve copied the installation files; that is, enter runInstaller. The Oracle Universal Installer menu displays.
    From the menu, choose Cluster Manager as the first product to install, and install it with only its own node name as public and private nodes for now. Cluster Manager is just a few megabytes, so installation should take only a minute or two.
    When the installation is complete, exit from the Oracle Universal Installer and restart it (using the runInstaller script). Choose the database installation option, and do a full software-only installation (don’t create a database).
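    For the first three items, something along these lines on each node will do (the user, group, and paths follow the article; treat the exact commands as a rough sketch rather than verbatim instructions):
    # as root on each node
    mkdir -p /oracle/home                      # mount point for the Oracle software files
    groupadd dba                               # or an "oracle" group, as noted above
    useradd -g dba -d /home/oracle -m oracle   # the Oracle software owner
    chown oracle:dba /oracle/home
    # then, as the oracle user, start the installer from the CD or the staged files:
    #   ./runInstaller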
    Step 4. Configure FireWire (IEEE 1394)
    If you haven’t done so already, download the patched Linux kernel file (fw-test-kernel-2.4.20-image.tar.gz) from OTN’s Community Code area.
    Assuming that fw-test-kernel-2.4.19-image.tar.gz is available at the root mount point on each node, now do the following:
    Log on to each machine as the root user and execute these commands to uncompress and unpack the files that comprise the modules:
    cd /
    tar zxvf /fw-test-kernel-2.4.19-image.tar.gz
    modify /etc/grub.conf
    If you’re using the lilo bootloader utility instead of grub, replace grub.conf in the last statement above with /etc/lilo.conf.
    To the bottom of /etc/grub.conf or /etc/lilo.conf, add the name of the new kernel:
    title FireWire Kernel (2.4.19)
    root (hd0,0)
    kernel /vmlinuz-2.4.19 ro root=/dev/hda3
    Now reboot the system by using this kernel on both nodes. To simplify the startup process so that you don’t have to modify the boot-up commands each time, you should also add the following statements to /etc/modules.conf on each node:
    options sbp2 sbp2_exclusive_login=0
    post-install sbp2 insmod sd_mod
    post-remove sbp2 rmmod sd_mod
    During every system boot, load the FireWire drivers on each node; for example:
    modprobe ohci1394
    modprobe sbp2
    If you use dmesg (display messages from the kernel ring buffer), you should see a log message similar to the following:
    Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
    SCSI device sda: 35239680 512-byte hdwr sectors (18043 MB)
    sda: sda1 sda2 sda3
    This particular message indicates that the Linux kernel has recognized an 18GB disk with three partitions.
    The first time you use the FireWire drive, run fdisk from one of the nodes and partition the disk as you like. (If both nodes have the modules loaded while you’re running fdisk on one node, you should reboot the other system or unload and reload all the FireWire and SCSI modules to make sure the new partition table is loaded.)
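    If the drive has never been partitioned, the interactive session looks roughly like this (the device name /dev/sda is an assumption based on the dmesg output above):
    fdisk /dev/sda
    #   n - new partition, p - primary, pick a partition number, accept the size defaults
    #   w - write the partition table and exit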
    Step 5. Configure OCFS
    We strongly recommend that you use OCFS in conjunction with the patched kernel so that you don’t have to partition your disks manually. If you haven’t done so already, download the precompiled modules (fw-kernel-ocfs.tar.gz) from OTN’s Community Code area. (See the "Toolbox" sidebar for more information.)
    Untar the file on each node, and use ocfsformat on one node to format the file system on the shared disk, as in the following example:
    ocfsformat -f -l /dev/sda1 -c 128 -v ocfsvol
    -m /ocfs -n node1 -u 1011 -p 755 -g 1011
    where 1011 is the UID and GID of the Oracle account and 755 is the directory permission. The partition that we’ll use is /dev/sda1, and -c 128 means that we’ll use a 128KB cluster size; the cluster size can be 4, 8, 16, 32, 128, 256, 512, or 1,024KB.
    As the root user, create an /ocfs mountpoint directory on each node.
    To configure and load the kernel module on each node, create a configuration file /etc/ocfs.conf. For example:
    ipcdlm:
    ip_address = 192.168.1.50
    ip_port = 9999
    subnet_mask = 255.255.252.0
    type = udp
    hostname = node1 (on node2, put node2’s hostname here)
    active = yes
    Be sure that each node has the correct values with respect to IP addresses, subnet masks, and node names. Assuming you’re using the example configuration, node 1 uses the IP address 192.168.1.50, while node 2 uses 192.168.1.51.
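    So node 2’s /etc/ocfs.conf would look like this (only the node-specific values change):
    ipcdlm:
    ip_address = 192.168.1.51
    ip_port = 9999
    subnet_mask = 255.255.252.0
    type = udp
    hostname = node2
    active = yes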
    Use the insmod command to load the OCFS driver on each node. The basic syntax is as follows:
    insmod ocfs.o name=<nodename>
    For example:
    insmod /root/ocfs.o name=node1
    Each time the system boots, the module must be loaded on each node that comprises the cluster.
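    One way to handle that on a Red Hat-style system is to append the loads to /etc/rc.local on each node (the path and ordering here are an illustration, not part of the original article):
    # load the FireWire drivers, then the OCFS module (use name=node2 on the second node)
    modprobe ohci1394
    modprobe sbp2
    insmod /root/ocfs.o name=node1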
    To mount the OCFS partition, enter the following on each node:
    mount -t ocfs /dev/sda1 /ocfs
    You now have a shared file system, owned by user oracle, mounted on each node. The shared file system will be used for all data, log, and control files. The modules have also been loaded, and the Oracle database software has been installed.
    You’re now ready for the final steps—configuring the Cluster Manager software and creating a database. To streamline this process, you can create a small script (env.sh) in the Oracle home to set up the environment, as follows:
    export ORACLE_HOME=/home/Oracle/9i
    export ORACLE_SID=node1
    export LD_LIBRARY_PATH=/home/Oracle/9i/lib
    export PATH=$ORACLE_HOME/bin:$PATH
    You can do the same for the second node—just change the second line above to export ORACLE_SID=node2.
    Execute (source) this file (env.sh) when you log in or from .login scripts as root or oracle.
    Step 6. Configure Cluster Manager
    Cluster Manager maintains the status of the nodes and the Oracle instances across the cluster and runs on each node of the cluster.
    As user root or oracle, go to $ORACLE_HOME/oracm/admin on each node and create or change the cmcfg.ora and the ocmargs.ora files according to Listing 1.
    Be sure that the HostName in the cmcfg.ora file is correct for the machine — that is, node 1 has a file that contains node1, and node 2 has a file that contains node2.
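    Listing 1 is not reproduced in this post. Purely as an illustration of the kind of parameters involved (these names and values are examples, not the article’s actual listing), a two-node cmcfg.ora tends to look something like the following on node 1, with HostName=node2 in node 2’s copy:
    ClusterName=Oracle Cluster Manager, version 9i
    PrivateNodeNames=node1 node2
    PublicNodeNames=node1 node2
    ServicePort=9998
    HostName=node1
    CmDiskFile=/ocfs/cmquorum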
    Before starting the database, make sure the Cluster Manager software is running. For convenience’s sake, add Cluster Manager to the rc script. As user root on each node, set up the Oracle environment variables (source env.sh):
    cd $ORACLE_HOME/oracm/bin
    ./ocmstart.sh
    The file ocmstart.sh is an Oracle-provided sample startup script that starts both the Watchdog daemon and Cluster Manager.
    Step 7. Configure Oracle init.ora, and Create a Database
    Listing 2 contains an example init.ora in $ORACLE_HOME/dbs. You can use it on each node to create initnode1.ora and initnode2.ora, respectively, by making the appropriate adjustments—that is, change node1 to node2 throughout the listing.
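    Listing 2 likewise isn’t included here. As a rough illustration only (the parameter names and values are placeholders, and the real listing contains more), the RAC-specific part of such an init file usually boils down to something like:
    db_name=rac
    control_files=/ocfs/control01.ctl
    cluster_database=true
    cluster_database_instances=2
    node1.instance_number=1
    node1.thread=1
    node1.undo_tablespace=UNDOTBS1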
    You must now create the directories for the log files on node 1, as follows:
    cd $ORACLE_HOME
    mkdir admin ; cd admin ; mkdir node1 ; cd node1 ;
    mkdir udump ; mkdir bdump ; mkdir cdump
    Again, do the same for node 2, replacing node1 in the syntax example with node2.
    Make a link for the Oracle password file on each node (these files may not yet exist):
    cd $ORACLE_HOME/dbs
    ln -sf /ocfs/orapw orapw
    Now that you have the setup, the next step is to create a database. To simplify this process, use the shell script (create.sh) in Listing 3. Be sure to run the script from node 1 only, and be sure to run it only once. Run this script as user oracle, and if all has gone well, you will have created the database, added a second undo tablespace, and added and enabled a second log thread.
    You can start the database from either node in the cluster, as follows:
    sqlplus ’/ as sysdba’
    startup
    Finally, you can configure the Oracle listener, $ORACLE_HOME/network/admin/listener.ora, as you normally would on both nodes and start that as well.
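    For example, a minimal listener.ora on node 1 could be as simple as the following (the host and port are just the example values; node 2 gets its own host name), after which you start it with lsnrctl start:
    LISTENER =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      )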
    You should now be all set up!
    Wim Coekaerts is principal member of technical staff, Corporate Architecture, Development. His team works on continuing enhancements to the Linux kernel and publishes source code under the GPL in OTN’s Community Code section. For more information about Oracle and Linux, visit the OTN Linux Center or the Linux Forum.
    Toolbox
    Don’t tackle this as your first "getting to know Linux and Oracle project." This article is brief and doesn’t provide detailed, blow-by-blow instructions for beginners. You should be comfortable with the UNIX operating system and with Oracle database installation in a UNIX environment. You’ll need all the software and hardware items in this list:
    Oracle9i Database Release 2 (9.2.0.1.0) for Linux (Intel). Download the Enterprise Edition, which is required for Oracle RAC.
    Linux distribution. We recommend Red Hat Linux Advanced Server 2.1, but you can download Red Hat 8.0 free from Red Hat. (However, please note that Red Hat doesn’t support the downloaded version.)
    Linux kernel patch for FireWire driver support, available under the Firewire Patches section. (Note that we’re updating these constantly, so the precise name may have changed.)
    OCFS for Linux. OCFS is not strictly required, but we recommend that you use it because it simplifies installation and configuration of the storage for the cluster. The file you need is fw-kernel-ocfs.tar.gz.
    Two Intel-based PCs
    Two NICs in each machine (although we’re only concerned in these instructions with configuring the private LAN that provides the heartbeat communication between the nodes in the cluster)
    Two FireWire interconnect cards
    One large FireWire drive for shared storage
    To supplement this QuickStart, you should also take a look at the supporting documentation, especially these materials:
    Release Notes for Oracle9i for Linux (Intel)
    Oracle9i Real Application Clusters Setup and Configuration
    Oracle Cluster Management Software for Linux (Appendix F in the Oracle9i Administrator’s Reference Release 2 (9.2.0.1.0) for UNIX Systems)
    Table 1: Hardware inventory and worksheet for FireWire-based cluster
    Per-node minimum requirements (fill in your Node 1 / Node 2 details alongside):
    - Minimum CPU: 500 MHz (Celeron, AMD, Pentium)
    - Minimum RAM: 256 MB
    - Local hard drive free space: 3 GB
    - FireWire card: 1 (TI chipset)
    - Network interface cards: 2 (1 for the node interconnect; 1 for the public network)
    Per-cluster minimum requirements:
    - FireWire hard drive: 1 (300 GB)
    - 4-port FireWire hub: required for a 3-node cluster
    - 5-port FireWire hub: required for a 4-node cluster
    http://otn.oracle.com/oramag/webcolumns/2003/techarticles/coekaertsfirewiresetup.html
    Joel Pérez
    http://otn.oracle.com/experts

  • Storage live migration leaves files in the old location

    I have a Hyper-V cluster and a new Scale-Out File Server cluster. I am in the process of storage live migrating my VMs onto the SOFS cluster. Some VMs migrate over fine and others leave references to the old storage location when the migration completes.
    I have configured constrained delegation for the Hyper-V hosts and also SMB delegation on the SOFS nodes.
    Any ideas why the storage live migration is not completely migrating everything over to the new location?
    thanks so much for your time

    Yes, I have all updates on the SOFS cluster nodes and the Hyper-V cluster nodes.
    It's weird that the VM's hard drive property shows the new location, but it seems the VM XML config files are still referenced, and in use, from the old location in the VM properties.
    Thanks for the help

  • Checking the LUNs when clusterware doesn't come up

    Grid Version : 11.2.0.3.6
    Platform  : Oracle Enterprise Linux 6.2
    In our 2-node RAC, Node2 got evicted. Once Node2 booted up, CRS didn't start. I couldn't find anything significant in the Grid alert.log, ocssd.log, or crsd.log.
    On Node2, I was able to run fdisk -l on all LUNs in the OCR_VOTE diskgroup. After a few hours of headaches and escalations we discovered that the LUNs were not actually accessible to the clusterware on Node2, although fdisk -l was correctly showing the partitions.
    When the cluster was down, I wanted to check if voting disk was actually accessible to the CRS ( GI ), but I couldn't (as shown below).
    # ./crsctl start crs
    CRS-4640: Oracle High Availability Services is already active
    CRS-4000: Command Start failed, or completed with errors.
    # ./crsctl query css votedisk
    Unable to communicate with the Cluster Synchronization Services daemon.
    How can I check if the voting disk is accessible to the CRS in a node when the CRS is down ?

    There are two layers that need to be working for CRS to start.
    Storage. IMO, multipath is mandatory for managing cluster storage at the physical level. To check whether the storage is available, use the multipath -l command to get a device listing. I usually use multipath -l | grep <keyword> to list the LUNs, where keyword identifies the LUN entries. E.g.
    [root@xx-rac01 ~]# multipath -l | grep VRAID | sort
    VNX-LUN0 (360060160abf02e00f8712272de99e111) dm-8 DGC,VRAID
    VNX-LUN1 (360060160abf02e009050a27bde99e111) dm-3 DGC,VRAID
    VNX-LUN2 (360060160abf02e009250a27bde99e111) dm-9 DGC,VRAID
    VNX-LUN3 (360060160abf02e009450a27bde99e111) dm-4 DGC,VRAID
    VNX-LUN4 (360060160abf02e009650a27bde99e111) dm-0 DGC,VRAID
    VNX-LUN5 (360060160abf02e009850a27bde99e111) dm-5 DGC,VRAID
    VNX-LUN6 (360060160abf02e009a50a27bde99e111) dm-1 DGC,VRAID
    VNX-LUN7 (360060160abf02e009c50a27bde99e111) dm-6 DGC,VRAID
    VNX-LUN8 (360060160abf02e009e50a27bde99e111) dm-2 DGC,VRAID
    VNX-LUN9 (360060160abf02e00a050a27bde99e111) dm-7 DGC,VRAID
    If the LUN count is wrong and one or more LUNs are missing, I would check /var/log/messages for starters. One can also run a multipath flush and rediscovery (and up the verbosity level if errors are thrown).
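    For example (standard multipath-tools flags; use with care on a live node):
    multipath -F     # flush unused multipath device maps
    multipath -v2    # rediscover paths and print what gets set up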
    If all the LUNs are there, check device permissions and make sure that the Oracle s/w stack has access.
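    A quick way to check both, using the example LUN names from the listing above:
    ls -l /dev/mapper/VNX-LUN*                               # ownership/permissions should match the grid/ASM software owner
    dd if=/dev/mapper/VNX-LUN0 of=/dev/null bs=1M count=10   # proves the LUN is actually readable, not just visible in listings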
    The other layer that needs to be working is the Interconnect. There are two basic things to check. Does the local Interconnect interface exist? This can be checked using ifconfig. And does this Interconnect interface communicate with the other cluster nodes' Interconnect interfaces? This can be checked using ping - or, if InfiniBand is used, via ibhosts and the other ib commands.
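    For example (the interface name and address are placeholders for your private interconnect):
    ifconfig eth1              # is the private interconnect interface plumbed and UP?
    ping -c 3 192.168.10.2     # does the other node answer on its interconnect address?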
    So if CRS does not start, these two checks (storage and Interconnect) would be my first port of call, as in my experience it is one of these two layers that has failed the vast majority of the time.

  • Does the library module "libskxgxp92_64.so" exist?

    Hi All,
    While doing a failover test of the cluster nodes and databases we came across an issue, and investigating it showed that we had to link some libraries, namely libskgxp9.so (inter-node communication functions). The document that details these steps, Metalink note 254815.1, says that we have to do:
    $ cp /opt/ORCLcluster/lib/9iR2/libskxgxp92_64.so $ORACLE_HOME/lib/libskgxp9.so (for 64-bit Oracle).
    When I looked for this particular "libskxgxp92_64.so", I could not find it anywhere. Does it really exist?
    This is 9.2.0.6 on Sun Solaris, to be clustered using Veritas Cluster with the Veritas LLT interconnect.
    Input from anyone who has faced exactly this issue and has a solution would be highly appreciated.
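    For reference, a quick way to list what is actually shipped under that path (the path is the one from the note above; exact file names can differ between SFRAC versions):
    ls -l /opt/ORCLcluster/lib/9iR2/
    find /opt/ORCLcluster -name 'libskgxp*' -o -name 'libskxgxp*'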
    Thanks

    Thanks APC. Yes, I checked that and copied it from there, but I am wondering whether this file exists at all. I googled and checked with others with no answer; that is why I thought of posting on this forum. If anyone has come across this, please respond.
    Cheers

  • Aggregates, VLANs, Jumbo Frames and cluster interconnect opinions

    Hi All,
    I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
    What I have in mind as follows:
    2 x X4170 servers with 8 x NIC's in each.
    On each 4170 I was going to configure 2 aggregates with 3 nics in each aggregate as follows
    igb0 device in aggr1
    igb1 device in aggr1
    igb2 device in aggr1
    igb3 stand-alone device for iSCSI network
    e1000g0 device in aggr2
    e1000g1 device in aggr2
    e1000g2 device in aggr2
    e1000g3 stand-alone device for iSCSI network
    Now, on top of these aggregates, I was planning to create VLAN interfaces to connect to our two "public" network segments and to carry the cluster heartbeat network.
    I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a NIC going offline within the aggregate, but I could monitor that in a different manner. A sketch of the intended layering follows my questions below.
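    To make the layering concrete, this is roughly what I have in mind on each node (Solaris 10 syntax; the aggregation key, VLAN ID and address below are placeholders):
    # build aggr1 from three igb ports (key 1)
    dladm create-aggr -d igb0 -d igb1 -d igb2 1
    # VLAN 123 on top of aggr1: the tagged interface name is VID*1000 + key, i.e. aggr123001
    ifconfig aggr123001 plumb 192.168.123.11 netmask 255.255.255.0 group ipmp_pub up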
    At this point, my questions are:
    [1] Are VLANs on top of aggregates supported within Solaris Cluster? I've not seen anything in the documentation to say that it is, or is not for that matter. I do see that VLANs are supported, including support for cluster interconnects over VLANs.
    Now with the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all nic ports, whereas I can enable it for a single nic port in the e1000g.conf kernel driver. My questions are as follows:
    [2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and others say it doesn't cause a problem.
    [3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" traffic on the VLAN interfaces pointing to my "public" networks and the cluster interconnect VLAN. Does anyone have experience of this causing any issues?
    Thanks in advance for all comments/suggestions.

    For 1) the question is really "Do I need to enable Jumbo Frames if I don't want to use them (on neither the public nor the private network)" - the answer is no.
    For 2) each cluster needs to have its own separate set of VLANs.
    Greets
    Thorsten

  • Private interconnect of an Oracle 10g cluster

    Can you please answer the questions below?
    Is a direct connection between two nodes supported on the private interconnect of an Oracle 10g cluster?
    We know that crossover cables are not supported, but what about a Gigabit network with a straight cable?

    Hi,
    I really wouldn't suggest that approach; it is definitely not efficient and not flexible:
    - If you have 4 nodes and node 1 wants to send a message to node 4, the packet must go through nodes 2 and 3. Is that efficient? Absolutely not.
    - If you have, say, 2 nodes and the link goes down on one of them, the other node's link goes down as well, which will most likely evict both nodes instead of one.
    - The number of clusterware nodes is limited by the cabling, which is not flexible.
    - etc., etc. - more disadvantages than advantages.
    Cheers
    FZheng

  • RAC 10g on Sun Cluster 3.1 U3 and Interconnect

    Hello,
    I have the following Interconnects on my Sun Cluster:
    ce5: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
         inet 1.1.1.1 netmask ffffff80 broadcast 1.1.1.127
         ether 0:3:ba:95:fa:23
    ce5: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 6
         ether 0:3:ba:95:fa:23
         inet6 fe80::203:baff:fe95:fa23/10
    ce0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 7
         inet 1.1.0.129 netmask ffffff80 broadcast 1.1.0.255
         ether 0:3:ba:95:f9:97
    ce0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 7
         ether 0:3:ba:95:f9:97
         inet6 fe80::203:baff:fe95:f997/10
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 8
         inet 1.1.193.1 netmask ffffff00 broadcast 1.1.193.255
         ether 0:0:0:0:0:1
    During the RAC installation, the installer asks which interface to use for the RAC interconnect, and I do not know whether it matters which interface I choose, because whichever single interface I pick would still seem to be a SPOF.
    Can anybody help??
    Thank you very much

    Sorry for the late reply, but the interface to pick is the clprivnet0. This load-balances over the available private interconnects under the covers and so does not represent a single point of failure.
    Tim
    ---
