Sun Cluster Node ID Order

I installed Sun Cluster 3.1 and the installation seemed successful. However, the node IDs and private hostnames appear to be swapped: "comdb1" has node ID 2 and "comdb2" has node ID 1. I installed the software from "comdb1", so it should have been assigned node ID 1, right? Below is some output from 'scconf -p':
Cluster node name: comdb1
Node ID: 2
Node enabled: yes
Node private hostname: clusternode2-priv
Node quorum vote count: 1
Node reservation key: 0x4472697D00000002
Node transport adapters: ce4 ce1
Cluster node name: comdb2
Node ID: 1
Node enabled: yes
Node private hostname: clusternode1-priv
Node quorum vote count: 1
Node reservation key: 0x4472697D00000001
Node transport adapters: ce4 ce1
Thank you in advance.

Consequently, when installing Oracle 10g RAC, the database instance "db1" is created on node 2 and "db2" on node 1, since RAC relies on the private hostname and the node ID. Otherwise I wouldn't care how the cluster software assigns its node IDs.
Thanks again,
Luke
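For reference, a quick way to confirm which node ID each node actually holds (a sketch assuming the standard Sun Cluster 3.1 tools; the /usr/sbin path for clinfo is an assumption):
# on each node, print the local node's numeric node ID
/usr/sbin/clinfo -n
# or dump the full name-to-ID mapping from the cluster configuration
scconf -p | grep -i "node id"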

Similar Messages

  • Connected to an idle instance on Sun Cluster nodes

    I have two Sun Cluster nodes sharing common storage.
    Two instances:
    test1 on node A
    test2 on node B
    My requirement is as follows:
    Log in to node B.
    export ORACLE_SID=test1
    sqlplus / as sysdba
    But I get
    "connected to an idle instance"
    Is there any way to connect to the node A instance from node B?

    I found the answer:
    sqlplus <sysdbauser>/<password>@test1 as sysdba
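    For completeness, a minimal tnsnames.ora entry that the @test1 connect identifier could resolve to (the host name nodeA and port 1521 are assumptions; match them to your listener):
    test1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = test1))
      )
    Note that connecting as SYSDBA over the network this way requires a password file on node A (remote_login_passwordfile set to EXCLUSIVE) and the SYSDBA user's password.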

  • Sun Cluster fails when switching; mount /global gives an I/O error

    Hi all,
    I am having a problem when switching over between two Sun Cluster nodes.
    Environment:
    Two nodes with Solaris 8 (Generic_117350-27), two Sun D2 arrays, VxVM 3.2, and Sun Cluster 3.0.
    Problem description:
    scswitch failed, so I ran scshutdown and booted both nodes. One node failed to come up because of a VxVM boot failure.
    The other node boots normally but cannot mount the /global directories. Mounting manually works fine.
    # mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    # vxdg import globdg
    # vxvol -g globdg startall
    # mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
    # echo $?
    0
    port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    Need help urgently
    Jeff

    I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
    Tim
    ---
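    For what it's worth, a simple way to compare installed patch revisions between the two Solaris 8 nodes (standard Solaris commands, nothing cluster-specific):
    # dump the patch list on each node, then diff the two files
    showrev -p > /var/tmp/`hostname`.patches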

  • SUNWjass on Sun Cluster

    Hi,
    I would like to harden the Sun Cluster nodes using SUNWjass.
    Can anybody tell me which profiles I need to apply? When I apply the Cluster Security Hardening driver profile, the cluster interconnect stops functioning until I disable the IP filter.
    I am seeking suggestions on the filter entries in the /etc/ipf/ipf.conf file (a sketch follows below).
    Thanks and Regards
    Ushas Symon
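    Not an authoritative answer, but a sketch of the kind of /etc/ipf/ipf.conf entries that keep IP Filter from blocking the cluster interconnect (the adapter names ce1 and ce5 and the clprivnet0 virtual interface are assumptions; substitute your own transport adapters):
    # pass all traffic on the private interconnect adapters and the
    # cluster private virtual interface so heartbeats are never filtered
    pass in  quick on ce1 all
    pass out quick on ce1 all
    pass in  quick on ce5 all
    pass out quick on ce5 all
    pass in  quick on clprivnet0 all
    pass out quick on clprivnet0 all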

    Hi Tim,
    I would like clarification on the same question. There are many profiles that can be applied as part of hardening (e.g. Cluster Config, Cluster Security, Server Config, Server Security). For a Sun Cluster in a failover configuration, do I need to apply both Server Security and Cluster Security, or only one of them?
    I am afraid it would make changes, and if something goes wrong I will have to back out the JASS profile.
    Just for clarification
    Thanks and Regards
    Ushas

  • Veritas Volume Replicator under Sun Cluster

    Hi,
    Can we use VVR under Sun Cluster? We want to replicate data from the Sun Cluster nodes to a separate box that is not part of the cluster. Is there any special configuration needed?
    Thanks, and I appreciate any response.

    Don't forget you will also need a VxVM cluster functionality license to use VxVM in a Sun Cluster; it is a separate license key from the base VxVM license.

  • 11gR2 non-RAC using ASM on Sun Cluster (two nodes, but non-RAC)

    I am going to do a Grid Infrastructure installation for non-RAC using ASM in a two-node Sun Cluster environment.
    How do I create candidate disks in Solaris Cluster (SPARC) for installing the Grid home on ASM? Please provide the steps if anyone knows.

    Please refer to the thread Re: 11GR2 ASM in non-rac node not starting... failing with error ORA-29701
    and this doc http://docs.oracle.com/cd/E11882_01/install.112/e24616/presolar.htm#CHDHAAHE
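    To expand on the candidate-disk part of the question, a rough outline of how raw slices are usually prepared for ASM on Solaris (the device name c2t5d0s6 and the grid:asmadmin owner/group are assumptions, not from the thread):
    # create a slice (e.g. s6) on the shared LUN with format(1M), skipping
    # cylinder 0, then hand the raw slice to the Grid Infrastructure owner
    chown grid:asmadmin /dev/rdsk/c2t5d0s6
    chmod 660 /dev/rdsk/c2t5d0s6
    # during the grid install, set the ASM disk discovery string to match,
    # e.g. /dev/rdsk/c2t5d0s*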

  • Sun Cluster 3.2, Solaris 10, corrupted IPMP group on one node

    Hello folks,
    I recently made a network change on nodename2 to add some resilience to IPMP (adding a second interface but still using a single IP address).
    After a reboot, I cannot keep this host from rebooting. For the minute or so that it stays up, scstat gives the following output, which seems to suggest a problem with the IPMP configuration. I rolled back my IPMP change, but the IPMP group still does not show up in scstat.
    nodename2|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 0 1 Offline
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 nodename2
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    scstat:  unexpected error.
    I did manage to run scstat on nodename1 while nodename2 was still up between reboots; here is that result (it does not show any IPMP groups for nodename2):
    nodename1|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 faulted
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 -
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Degraded
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: nodename1 sc_ipmp1 Online bge2 Online
    IPMP Group: nodename1 sc_ipmp0 Online bge0 Online
    -- IPMP Groups in Zones --
    Zone Name Group Status Adapter Status
    I believe that I should be able to delete the IPMP group for the second node from the cluster and re-add it, but I'm not sure how to go about doing this. I welcome your comments or thoughts on what I can try before rebuilding this node from scratch.
    -AG
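    Before rebuilding, it may be worth checking that the /etc/hostname.* files on nodename2 still describe one consistent group; a minimal link-based IPMP sketch for a single data address across two adapters (the bge0/bge2 adapter names and the sc_ipmp0 group name are taken from the output above; the exact layout is an assumption):
    # /etc/hostname.bge0 -- data address in group sc_ipmp0
    nodename2 netmask + broadcast + group sc_ipmp0 up
    # /etc/hostname.bge2 -- second adapter, same group, no test address
    group sc_ipmp0 up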

    I was able to restart both sides of the cluster. Now both sides are online, but neither side can access the shared disk.
    Lots of warnings. I will keep poking....
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename2
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface ce3; disabling probe-based failure detection on it
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface bge0; disabling probe-based failure detection on it
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename2: attempting to join cluster.
    NOTICE: CMM: Node nodename1 (nodeid: 1, incarnation #: 1279727883) has become reachable.
    NOTICE: clcomm: Path nodename2:bge3 - nodename1:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename2: joined cluster.
    NOTICE: CCR: Waiting for repository synchronization to finish.
    WARNING: CCR: Invalid CCR table : dcs_service_9 cluster global.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    ==> WARNING: DCS: Error looking up services table
    ==> WARNING: DCS: Error initializing service 9 from file
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d22 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename2 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename1
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename1: attempting to join cluster.
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 errors during initiation
    WARNING: Path nodename1:bge3 - nodename2:bge3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: bge3: link down
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: CMM: Node nodename2 (nodeid: 2, incarnation #: 1279728026) has become reachable.
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename1: joined cluster.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d26 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename1 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    nsrexecd
    mount: /dev/md/jms-ds/dsk/d100 is already mounted or /opt/esbshares is busy

  • Invalid node name in Sun Cluster 3.1 installation

    Dear all,
    I need your advice on a Sun Cluster 3.1 8/05 installation.
    My colleague was installing Sun Cluster 3.1 8/05 on two Sun Netra 440 servers with the hostnames 01-in-01 and 01-in-02. But when he wanted to configure the cluster, a problem occurred.
    The error message was:
    running scinstall: invalid node name
    When we changed the hostnames to in-01 and in-02, the cluster could be configured without problems.
    Why did this happen?
    Is it related to the hostnames beginning with a digit? If so, can you point me to documentation that states this?
    Or maybe you have another explanation?
    Thank you for your help.
    regards,
    Henry

    A bug is being logged against this (though obviously you could manually fix the shell script yourself if you were in a hurry).
    The problem partly stems from RFC 1123 relaxing RFC 952's restriction that the first character of a hostname must be alphabetic. See man hosts for more info. I guess our code didn't catch up :-)
    Tim
    ---

  • Node shutdown in Sun Cluster

    I have a two-node cluster configured for high availability.
    My resource group is online on node1,
    so the resources, the logical hostname resource and my application resource, are online on node1.
    When node1 is shut down, the resource group fails over to node2 and comes online. But when node1 is brought back, the logical hostname is plumbed on node1 as well, so both nodes have the logical hostname plumbed (from the ifconfig -a output),
    which is causing the problem.
    My question is: does Sun Cluster check the status of resources in the resource group on the node where my resource group is offline? If it does, what additional configuration is required?

    This is a pretty old post and you probably have the answer by now (or have abandoned all hope), but it seems to me that what you want is to set up the resource/resource group dependencies for node1.
    If node1 is coming online under the logical hostname without all the resources coming up, you just don't have the resource dependency set up. You can do this in the SunPlex Manager GUI pretty easily. That should keep the node from being added to the logical hostname resource group until X dependencies are met (what X stands for is entirely up to you; I didn't see which resource you want to come up first listed).
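    For the command-line route (SunPlex Manager does the same thing underneath), a hedged Sun Cluster 3.1 sketch; the resource names app-rs and lh-rs are placeholders:
    # make the application resource depend on the logical hostname resource
    scrgadm -c -j app-rs -y Resource_dependencies=lh-rs
    # verify the property afterwards
    scrgadm -pvv | grep -i resource_dependencies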

  • Do I need 3 machines for Sun Two-Node Cluster HA for Apache?

    The vendor is trying to set up Sun Cluster HA for Apache and said it requires a dedicated machine
    for monitoring purposes (the Sun Cluster HA for Apache probe?). Is this monitoring machine mandatory, and does it have to be dedicated to that purpose? Can I use one machine to monitor more than one cluster? The vendor wants two extra machines to monitor two sets of two-node clusters, and the documentation I read from Sun only mentions that a third machine is required for the admin console.
    Also, if I only have a two-node cluster, can I configure Apache as a scalable and a failover service at the same time, or do I have to choose one? Thanks!

    Incorrect. You can set up a Solaris Cluster on a single node if you wish, although two nodes give you redundancy. The probe for Apache runs on the node on which Apache is executing.
    If you have a two-node cluster you can have multiple implementations of Apache, failover and scalable, and mix them until you run out of resources: CPU, network bandwidth, IP addresses, etc. (a command-line sketch follows below).
    Tim
    ---
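    To make the mixing concrete, a rough Sun Cluster 3.1 sketch of registering one failover and one scalable Apache service side by side (all group/resource/hostname names, node names, and the Bin_dir path are placeholders, not taken from the thread):
    # register the Apache resource type once
    scrgadm -a -t SUNW.apache
    # failover instance: logical hostname plus Apache in a failover RG
    scrgadm -a -g apache-fo-rg -h node1,node2
    scrgadm -a -L -g apache-fo-rg -l apache-lh
    scrgadm -a -j apache-fo-rs -g apache-fo-rg -t SUNW.apache \
        -x Bin_dir=/usr/apache/bin \
        -y Network_resources_used=apache-lh -y Port_list=8080/tcp
    # scalable instance: shared-address RG plus a scalable RG that depends on it
    scrgadm -a -g apache-sa-rg -h node1,node2
    scrgadm -a -S -g apache-sa-rg -l apache-sa
    scrgadm -a -g apache-scal-rg -y Maximum_primaries=2 -y Desired_primaries=2 \
        -y RG_dependencies=apache-sa-rg
    scrgadm -a -j apache-scal-rs -g apache-scal-rg -t SUNW.apache \
        -x Bin_dir=/usr/apache/bin -y Scalable=TRUE \
        -y Network_resources_used=apache-sa -y Port_list=80/tcp
    # bring everything online
    scswitch -Z -g apache-fo-rg,apache-sa-rg,apache-scal-rg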

  • Sun Cluster with three nodes

    I need a manual or advice for introducing a third node into a RAC with Sun Cluster. I don't know whether the quorum votes readjust automatically or I have to add new quorum votes manually, whether I have to add a third mediator in SVM, etc.
    Many thanks, and sorry for my English.

    After you have added your nodes to the cluster you will need to expand the RG's node list to include the new nodes if you need the RG to run on them. This is not automatic. Something like:
    # clrg set -n <nodelist> <rg_name>
    is what you need.
    I'm not sure I understand what you said about the quorum count. Only nodes and quorum devices (QD) or quorum servers (QS) get a vote; cabinets do not. Each node gets one vote, and a QD/QS gets a vote count equal to the number of nodes it connects to minus one. Thus a two-node cluster with one QD has 3 votes, and a four-node cluster with one fully connected QD/QS has 7 votes (after re-adding it); see the clquorum sketch below.
    Hope that helps,
    Tim
    P.S. <shameless plug> I can recommend a good book on the product: "Oracle Solaris Cluster Essentials" ;-)
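    If you do end up reworking the quorum configuration by hand, the newer command set makes it straightforward (assuming Sun Cluster 3.2 or later, matching the clrg command above; the DID names d3 and d5 are placeholders):
    # show the current vote counts
    clquorum status
    # replace a two-node quorum disk with one that all three nodes can see
    clquorum remove d3
    clquorum add d5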

  • Sun Cluster with multiple nodes

    Hi,
    I am planning to set up Sun Cluster with 3+ nodes.
    Is it possible to configure Sun Cluster with just two network interfaces on each box (one for public access and a second for the cluster interconnect via a dedicated switch)?
    Thanks in advance for the help.

    Yes, but only if the adapters support VLAN tagging.
    That way you can create two tagged VLANs on each adapter, one for public and one for private traffic (see the interface-naming sketch below). You'd also need a switch that supports VLAN tagging, but I think these are fairly common now.
    If your NICs do not support tagging, you'd need another adapter to allow Sun Cluster to install.
    Regards,
    Tim
    ---
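    As a concrete illustration of the tagged-VLAN interface naming on Solaris (the VLAN IDs 10 for public and 20 for private, and the bge0/bge1 adapters, are assumptions): the logical instance number is VLAN ID x 1000 + driver instance.
    # VLAN 10 on bge0 -> logical interface bge10000 (public network)
    ifconfig bge10000 plumb 192.168.10.11 netmask 255.255.255.0 up
    # VLAN 20 on bge1 -> bge20001; this tagged name is what you would give
    # scinstall as the cluster transport adapter instead of plain bge1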

  • Can I use Sun VTS to test on a Sun Cluster 3.1 node?

    Hi,
    I have seen some online documentation which suggests Sun VTS is not supported for use on a cluster node -
    http://docs.sun.com/app/docs/doc/819-2993/intro-7?l=en&a=view&q=VTS
    However there is not much detail there, and I was wondering if anyone out there has used SunVTS on a running cluster node before, or if there is an alternative we could use for SC 3.1?
    The node is an E2900 running Solaris 10 with SC 3.1.
    I would be interested in testing CPU and memory only, no shared disks.
    Thanks :)

    My guess is that by running VTS you could impose serious load on the system that would affect the node's behavior, leading to e.g. a failover. Furthermore, some of the storage-related tests might interfere in a problematic way.
    Since VTS is a debugging tool, I would recommend booting a suspect node in non-cluster mode. Then carefully enable only the CPU and memory tests. Don't run any storage tests.
    Note that I have not done this myself, and as it is not supported, you do it at your own risk.
    Greets
    Thorsten
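    In line with the advice above, the usual way to take a node out of the cluster for this kind of test is to boot it in non-cluster mode (the SunVTS launcher path varies by VTS release and is an assumption here):
    ok boot -x
    # once the node is up outside the cluster, start SunVTS and enable only
    # the CPU and memory tests, leaving every disk/storage test disabled
    /usr/sunvts/bin/startsunvts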

  • Help, node panic in Sun Cluster 3.3

    We have a 2-node cluster whose nodes are connected to the same storage.
    The resource owner of the cluster lost its power accidentally, which caused the other node in the cluster to panic.
    We have checked the log of the panicked node. It said "reservation conflict" before the panic.
    Does anybody know what is wrong?

    Hi.
    What storage is used as shared storage?
    How many paths does each node have to this storage? (The scdidadm note below shows how to check.)
    A cluster node reserves the storage LUNs for dedicated access (storage reservation), but at failover the spare node may not be able to set the reservation on the shared devices, which causes the panic.
    Check these settings and recommendations:
    https://blogs.oracle.com/js/entry/prevent_reservation_conflict_panic_if
    Regards.
    Edited by: Nik on 10.08.2012 1:37
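    To answer the first two questions yourself, the DID mapping shows how many paths each node has to the shared LUNs (the command is present in both Sun Cluster 3.2 and 3.3):
    # list every DID instance and the physical paths behind it on each node
    scdidadm -L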

  • Information about Sun Cluster 3.1 2005Q4 and Storage Foundation 4.1

    Hi,
    I have two Sun Fire V440 servers with the latest Solaris 9 release (9/05) and the latest cluster patches, QLogic fibre HBAs, and seven disks shared on an EMC Clariion CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node, I see the disks in a different order, and Veritas SF 4.1 also picks up the disks in a different order.
    1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
    2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
    I'm very confused by Veritas Storage Foundation.
    Thanks!
    J-F Aubin

    This combination does not work today, but it will be available later.
    Since Sun and Veritas are two separate companies, it takes more
    time than expected to synchronize releases. Products supported by
    Sun for Sun Cluster installation undergo extensive testing, which also
    takes time.
    -- richard
