Invalid node name in Sun Cluster 3.1 installation

Dear all,
I need your advice on a Sun Cluster 3.1 8/05 installation.
My colleague was installing Sun Cluster 3.1 8/05 on two Sun Netra 440 servers with the hostnames 01-in-01 and 01-in-02. But when he wanted to configure the cluster, a problem occurred.
The error message is:
running scinstall: invalid node name
When we changed the hostnames to in-01 and in-02, the cluster could be configured without problems.
Why did this problem happen?
Is it related to the hostnames beginning with a digit? If so, can you point me to documentation that states this?
Or maybe you have another explanation?
Thank you for your help.
regards,
Henry

A bug is being logged against this (though obviously you could manually fix the shell script yourself if you were in a hurry).
The problem partly stems from RFC 1123, which relaxed RFC 952's limitation of the first character of a hostname to alphabetic characters only. See man hosts for more info. I guess our code didn't catch up :-)
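For illustration, a minimal shell sketch of the difference between the two rules (hypothetical; not the actual scinstall check):

# RFC 952 required the first character of a hostname to be a letter:
case "$nodename" in
    [a-zA-Z]*) echo "valid first character under RFC 952" ;;
    *)         echo "invalid first character under RFC 952" ;;
esac
# RFC 1123 also permits a leading digit, so a name like 01-in-01 is legal:
case "$nodename" in
    [a-zA-Z0-9]*) echo "valid first character under RFC 1123" ;;
    *)            echo "invalid first character under RFC 1123" ;;
esac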
Tim
---

Similar Messages

  • Node shutdown in sun cluster

    I have a two-node cluster configured for high availability.
    My resource group is online on node1,
    so the resources, the logical hostname resource and my application resource, are online on node1.
    When node1 is shut down, the resource group fails over to node2 and comes online. But when node1 is brought back, the logical hostname is plumbed up on node1 as well, so both nodes have the logical hostname plumbed (as seen in the ifconfig -a output),
    which is causing the problem.
    My question is: does Sun Cluster check the status of resources in the resource group on the node where my resource group is offline? If it does, what additional configuration is required?

    This is a pretty old post and you probably have the answer by now (or have abandoned all hope), but it seems to me that what you want is to reset the resource/resource group dependencies for node1.
    If node1 is coming online under the logical hostname without all the resources coming up, you just don't have the resource dependencies set up. You can do this in the SunPlex Manager GUI pretty easily. That should keep the node from being added to the logicalhostname resource group until X dependencies are met (what X stands for is entirely up to you; I didn't see the resource you want to come up first listed). See the command-line sketch below.
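    For the command line, a hedged sketch using the SC 3.1 scrgadm command (the resource names are placeholders for your logical hostname and application resources):
    # make one resource depend on another being online first
    scrgadm -c -j dependent-rs -y Resource_dependencies=required-rs
    # verify the dependency took effect
    scrgadm -pvv | grep -i resource_dependencies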

  • Cannot import a disk group after sun cluster 3.1 installation

    Installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    Dumping the private region of every single disk in the volume manager returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And once you installed Sun Cluster, you could no longer import your disk groups.
    The first thing you need to know is that you must register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any change to the disk group, e.g. permission changes, volume creation, etc.
    You need to use the scsetup menu to achieve this, though it can also be done from the command line with an scconf command.
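    As a hedged sketch (dg1 is from the post; the node list is a placeholder), the scconf registration would look something like this:
    # register the VxVM disk group dg1 as a cluster device group
    scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
    # after later disk group changes, resynchronize the device group
    scconf -c -D name=dg1,sync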
    Having said that, I'm still confused by the error. See if the above solves the problem first.
    Regards,
    Tim
    ---

  • Sun Cluster + meta set shared disks -

    Guys, I am looking for some instructions that I believe most Sun administrators would know.
    I am trying to create some cluster resource groups and resources etc., but before that I am creating the file systems that are going to be used by the two nodes in the Sun Cluster 3.2. We use SVM.
    I have some drives that I plan to use for this specific cluster resource group that is yet to be created.
    I know I have to create a metaset, since that's how the other resource groups in my environment are already set up, so I will go with the same approach.
    # metaset -s TESTNAME
    Set name = TESTNAME, Set number = 5
    Host Owner
    server1
    server2
    Mediator Host(s) Aliases
    server1
    server2
    # metaset -s TESTNAME -a /dev/did/dsk/d15
    metaset: server1: TESTNAME: drive d15 is not common with host server2
    # scdidadm -L | grep d6
    6 server1:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    6 server2:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    # scdidadm -L | grep d15
    15 server1:/dev/rdsk/c10t6005076307FFC4520000000000004121d0 /dev/did/rdsk/d15
    Do you see what I am trying to say? If I want to add d6 to the metaset it will go through fine, but not d15, since it shows up against only one node, as you can see from the scdidadm output above.
    Please let me know how I can share drive d15 with the other node, the same as d6. Thanks much for your help.
    -Param
    Edited by: paramkrish on Feb 18, 2010 11:01 PM
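    A minimal sketch of the usual rescan sequence in SC 3.2, assuming the LUN behind d15 is actually mapped to both hosts (if it is not, fix the SAN/storage mapping first):
    # on the node that does not see the disk (server2 here)
    devfsadm -C                  # rebuild the /devices and /dev entries
    cldevice populate            # update the cluster DID namespace
    cldevice list -v | grep d15  # confirm d15 now maps to both nodes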

    Hi, thanks for your reply. You got me wrong: I am not asking you to be liable for the changes you recommend, since I know that would not be reasonable when asking for help. I am aware this is not a support site but a forum to exchange information that people are already aware of.
    We have a support contract, but it only covers the Sun hardware, and those support folks are somewhat OK when it comes to Solaris and setup, but they are not experts. I will certainly seek their help when needed; that's my last option. Since I thought this problem might be something trivial, I quickly posted a question in this forum.
    We do have a test environment, but it does not have two nodes, just one node with zone clusters. Hence I don't see this problem there, and I think "cldev populate" would be of no use to me in the test environment either, since we don't have two nodes.
    I will check the logs as you suggested and will get back if I find something. If you have any other thoughts, feel free to let me know (don't worry about the risks; I know I can take care of that).
    -Param

  • Apply one non-kernel Solaris10 patch at Sun Cluster ***Beginner Question***

    Dear Sir/Madam,
    Our two Solaris 10 servers are running Sun Cluster 3.3. One server, "cluster-1", has one online running zone, "classical". The other server, "cluster-2", has two online running zones, "romantic" and "modern". We are trying to install a regular non-kernel patch, #145200-03, on cluster-1 live; it has no prerequisites and needs no reboot afterwards. Our goal is to install this patch in the global zone and the three local zones, i.e., classical, romantic and modern, on both cluster servers, cluster-1 and cluster-2.
    Unfortunately, when we began patching cluster-1, it patched the running zone "classical" but then produced the following errors, which prevented it from continuing with the zones "romantic" and "modern", which are running on cluster-2. And when we try to patch cluster-2, we get a similar patching error about failing to boot the non-global zone "classical", which is on cluster-1.
    Any idea how I could resolve this? Do we have to shut down the cluster in order to apply this patch? I would prefer to apply the patch with Sun Cluster running. If not, what is the preferred way to apply a simple non-reboot patch to all the zones on both nodes in the cluster?
    I'd like to hear from folks who have experience with patching in Sun Cluster.
    Thanks, Mr. Channey
    p.s. Below is output from the patch #145200-03 run, plus the zoneadm and clrg output on cluster-1
    root@cluster-1# patchadd 145200-03
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    145200-03
    Preparing checklist for non-global zone check...
    Checking non-global zones...
    Failed to boot non-global zone romantic
    exiting
    root@cluster-1# zoneadm list -iv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    15 classical running /zone-classical native shared
    - romantic installed /zone-romantic native shared
    - modern installed /zone-modern native shared
    root@cluster-1# clrg status
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    classical cluster-1 No Online
    cluster-2 No Offline
    romantic cluster-1 No Offline
    cluster-2 No Online
    modern cluster-1 No Offline
    cluster-2 No Online

    Hi Hartmut,
    I kind of got the idea; just want to make sure. The zones 'romantic' and 'modern' show "installed" as their current status on cluster-1. These two zones are in fact running and online on cluster-2. So I will issue your commands below on cluster-2 to detach these zones to the "configured" status:
    cluster-2 # zoneadm -z romantic detach
    cluster-2 # zoneadm -z modern detach
    Afterwards, I apply the Solaris patch on cluster-2. Then I go to cluster-1 and apply the same Solaris patch. Once I am done patching both cluster-1 and cluster-2, I will
    go back to cluster-2 and run the following commands to force these zones back to the "installed" status:
    cluster-2 # zoneadm -z romantic attach -f
    cluster-2 # zoneadm -z modern attach -f
    Correct? Please let me know if I am wrong or if there's any step missing. Thanks much, Humphrey

  • Sun cluster patch for solaris 10 x86

    I have Solaris 10 6/06 installed on an x4100 box with two-node clustering using Sun Cluster 3.1 8/05. I just want to know whether any recent patches are available for the OS to prevent cluster-related bugs. What are they? My kernel patch is 118855-19.
    If any further input is needed, let me know.

    Well, I would run the S10 Update Manager and get the latest patches that way.
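    For example, from the command line (a hedged sketch; smpatch assumes the system is registered with the update service):
    # list the patches the update manager would apply
    smpatch analyze
    # download and apply them
    smpatch update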
    Tim
    ---

  • How to add a new instance to a 2 node SQL 2008 R2 Cluster under the same networkname

    Hello All,
    I'm fairly new to SQL and have just deployed a new 2-node SQL 2008 R2 cluster.
    During installation, I created a network name SQLCLU and a named instance I01.
    I would now like to add another instance (I02) under the same network name.
    What is the proper procedure to do this?
    I don't believe that I should do this using the action "New SQL Server failover cluster" from the setup menu, but rather the action "New installation or add features to an existing installation".
    With the second option, however, I'm not sure how I should make this instance clustered.
    Should I also run the "New installation or add features to an existing installation" action on the 2nd node in the cluster?
    Many thanks for your advice!
    Filip

    You cannot use the same network name for two instances. You need to use a different network name for the second instance, I02.
    A failover cluster instance contains:
    - A combination of one or more disks in a Microsoft Cluster Service (MSCS) cluster group, also known as a resource group. Each resource group can contain at most one instance of SQL Server.
    - A network name for the failover cluster instance.
    - One or more IP addresses assigned to the failover cluster instance.
    - One instance of SQL Server that includes SQL Server, SQL Server Agent, the Full-text Search (FTS) service, and Replication. You can install a failover cluster with SQL Server only, Analysis Services only, or SQL Server and Analysis Services.
    http://msdn.microsoft.com/en-in/library/ms179410(v=sql.105).aspx
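    As a hedged sketch of the unattended equivalent (the network name, IP and account below are placeholders; the linked page has the authoritative parameter list):
    :: on the first node, create the new clustered instance with its own network name
    setup.exe /q /ACTION=InstallFailoverCluster /INSTANCENAME=I02 ^
      /FAILOVERCLUSTERNETWORKNAME=SQLCLU2 ^
      /FAILOVERCLUSTERIPADDRESSES="IPv4;10.0.0.42;Cluster Network 1;255.255.255.0" ^
      /FEATURES=SQLENGINE /SQLSYSADMINACCOUNTS="DOMAIN\dbadmins"
    :: on the second node, join it to the new instance
    setup.exe /q /ACTION=AddNode /INSTANCENAME=I02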

  • Sun Cluster question

    Hello everyone
    I've inherited an Oracle Solaris system holding ASE Sybase databases. The system consists of two nodes in a Sun Cluster. Each of the nodes hosts two Sybase database instances; one node is active and the other stands by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system is shifted to the second node to keep the environment going. That works fine.
    My intended scenario:
    Each node holds two database instances, and both nodes ARE working at the same time, so that each serves one instance of the database. In the event of a failure on one node, the other should assume the role of BOTH database instances until the first one is fixed.
    The question is: is that possible? And if it is, does it require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
    Thanks a lot in advance

    What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
    Also the current SUNW.HAStoragePlus(5) manpage document:
        Note - SUNW.HAStoragePlus does not support file systems created on ZFS volumes.
        You cannot use SUNW.HAStoragePlus to manage a ZFS storage pool that contains a file system for which the ZFS mountpoint property is set to legacy or none. [...]
    Greets
    Thorsten

  • Sun Cluster 3.0 MQ Series 5.2 configuration

    Hi All,
    we have to review the MQ Series installation/configuration on two Solaris 8 machines clustered with Sun Cluster 3.0. The present configuration has a global filesystem /var/mqm with one queue manager.
    According to the Sun Cluster 3.1 data service for WebSphere MQ (5.3 ndr), there are two possible filesystem layouts:
    FFS: with local qmgrs (data and log) on each cluster node
    GFS: with global-filesystem qmgrs (data and log).
    Are there any special considerations about the shmem and ipc directories in <qmgr>/data?
    Does this scenario also apply to 3.0/5.2?
    Does the FFS configuration allow persistent messages to fail over at takeover?
    Are there any data services/docs available for MQ on 3.0?
    Thanks in advance.

    To deploy multiple qmgrs requires /var/mqm to be mounted as a GFS; the reason for this is to overcome IPC key clashes. The recommended file system layout is as follows (-> represents a symlink), assuming two qmgrs, qmgr1 & qmgr2:
    Using FFS (recommended; /local/mqm etc. are mounted as FFS via /etc/vfstab)
    /var/mqm -> /global/mqm
    /global/mqm/qmgrs/qmgr1 -> /local/mqm/qmgr/qmgr1
    /global/mqm/qmgrs/qmgr2 -> /local/mqm/qmgr/qmgr2
    /global/mqm/log/qmgr1 -> /local/mqm/log/qmgr1
    /global/mqm/log/qmgr2 -> /local/mqm/log/qmgr2
    Using GFS (mainly early SC3.0 as HAStoragePlus wasn't available until later on)
    All mounted as GFS with /etc/vfstab
    /var/mqm -> /global/mqm
    /global/mqm/qmgrs/qmgr1
    /global/mqm/qmgrs/qmgr2
    /global/mqm/log/qmgr1
    /global/mqm/log/qmgr2
    Finally, FFS (Failover File System) is recommend because, at present, whenever GFS is used for the qmgr & log files, MQ Series is unable to determine that the qmgr may have been started on another node. e.g. Assuming GFS, and MQ Series is started on Node A, it is possible (but don't do it) to start MQ Series on Node B.
    The Sun Cluster Agent provides some protection against this. Instead it's recommened to deploy FFS as above.
    The agent for WebSphere MQ for SC 3.1 is available and supported on SC3.0 update 3 as well as SC3.1. There is also a patch available for the WebSphere MQ Agent which deals with IPC cleanup, for single or multiple qmgrs.
    Docs available can be found at
    http://docs.sun.com/db/prod/7192#hic - Just select Sun Cluster Data Service for WebSphere MQ
    Finally, the above scenario also applies to SC3.0/5.2 as well as SC3.1/5.3, and both GFS and FFS allow persistent messages to remain available after a failover.
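    A minimal sketch of creating the FFS-style layout above (paths exactly as in the listing; the /local and /global file systems are assumed to be mounted already via /etc/vfstab):
    # /var/mqm is local to each node, so create this link on every node
    ln -s /global/mqm /var/mqm
    # these links live on the global file system, so create them once
    ln -s /local/mqm/qmgr/qmgr1 /global/mqm/qmgrs/qmgr1
    ln -s /local/mqm/qmgr/qmgr2 /global/mqm/qmgrs/qmgr2
    ln -s /local/mqm/log/qmgr1 /global/mqm/log/qmgr1
    ln -s /local/mqm/log/qmgr2 /global/mqm/log/qmgr2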
    Regards
    Neil

  • SUn CLuster 3.2 install - scinstall on x86 32 bit Solaris 5.10

    OK - I have two machines with 32-bit x86 Solaris 5.10. I installed the Sun Cluster 3.2 software, but every time I try scinstall it says it is rebooting the other node, and the other node never brings Sun Cluster up.
    Questions:
    1. The private interface does not come up on reboot - should it, and should I have an entry for cluster-priv1 in /etc/hosts?
    2. I know I am supposed to run svcs -xv - I then try to enable the cluster service, but everything says disabled.
    I have tried this five times with no luck. I have lots of cluster experience and can get Oracle CRS working fine.
    Any thoughts?

    Yeahh, guys!!!
    I was trying to establish a two-node cluster using VirtualBox + Solaris x86 + Sun Cluster 3.2. The node where I ran scinstall to configure my cluster environment was supposed to reboot the other node at the end of the configuration process, but it hung at the "Rebooting node01..." message simply because it could not establish the cluster.
    After seeing your comments, I changed Solaris x86 to Solaris Express Community Edition and Sun Cluster to Cluster Express, and now everything is working fine!
    Thanks!
    Jansen Sena <[email protected]>

  • SUN Cluster 3.2, Solaris 10, Corrupted IPMP group on one node.

    Hello folks,
    I recently made a network change on nodename2 to add some resilience to IPMP (adding a second interface while still using a single IP address).
    After a reboot, I cannot keep this host from rebooting. For the one minute that it stays up, I get the following result from scstat, which seems to suggest a problem with the IPMP configuration. I rolled back my IPMP change, but scstat still does not register the IPMP group.
    nodename2|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 0 1 Offline
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 nodename2
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    scstat:  unexpected error.
    I did manage to run scstat on nodename1 while nodename2 was still up between reboots; here is that result (it does not show any IPMP group(s) on nodename2):
    nodename1|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 faulted
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 -
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Degraded
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: nodename1 sc_ipmp1 Online bge2 Online
    IPMP Group: nodename1 sc_ipmp0 Online bge0 Online
    -- IPMP Groups in Zones --
    Zone Name Group Status Adapter Status
    I believe that I should be able to delete the IPMP group for the second node from the cluster and re-add it, but I'm not sure how to go about doing this. I welcome your comments or thoughts on what I can try before rebuilding this node from scratch.
    -AG
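    One detail visible in the boot log below: in.mpathd disables probe-based failure detection because no test addresses are configured. A hedged /etc/hostname.bge0 sketch for a single-IP IPMP group with a test address (host names are placeholders):
    # /etc/hostname.bge0 - data address plus a non-failover test address
    nodename2 netmask + broadcast + group sc_ipmp0 up \
    addif nodename2-test deprecated -failover netmask + broadcast + up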

    I was able to restart both sides of the cluster. Now both sides are online, but neither side can access the shared disk.
    Lots of warnings. I will keep poking....
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename2
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface ce3; disabling probe-based failure detection on it
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface bge0; disabling probe-based failure detection on it
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename2: attempting to join cluster.
    NOTICE: CMM: Node nodename1 (nodeid: 1, incarnation #: 1279727883) has become reachable.
    NOTICE: clcomm: Path nodename2:bge3 - nodename1:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename2: joined cluster.
    NOTICE: CCR: Waiting for repository synchronization to finish.
    WARNING: CCR: Invalid CCR table : dcs_service_9 cluster global.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    ==> WARNING: DCS: Error looking up services table
    ==> WARNING: DCS: Error initializing service 9 from file
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d22 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename2 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename1
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename1: attempting to join cluster.
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 errors during initiation
    WARNING: Path nodename1:bge3 - nodename2:bge3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: bge3: link down
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: CMM: Node nodename2 (nodeid: 2, incarnation #: 1279728026) has become reachable.
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename1: joined cluster.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d26 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename1 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    nsrexecd
    mount: /dev/md/jms-ds/dsk/d100 is already mounted or /opt/esbshares is busy

  • 2 node Sun Cluster 3.2, resource groups not failing over.

    Hello,
    I am currently running two V490s connected to a 6540 Sun StorageTek array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can process the resource groups properly. The other node appears to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month and have learned a lot and fixed a lot of other issues that came up, but the cluster is just not working properly. Here is some output.
    bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
    clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
    bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
    clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
    clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
    bash-3.00# clresource status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ftp-rs coins01:zftp01 Offline Offline
    coins02:zftp01 Offline Offline - LogicalHostname offline.
    xprcoins coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    xprcoins-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    DataWatch-hasp-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    BDSarchive-res coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    I am really at a loss here. Any help appreciated.
    Thanks

    My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
    From your description I can guess that you want to fail over a resource group between non-global zones. And it looks like the zone coins01:zcoins01 is reported as not being in the cluster membership.
    Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify whether the zone zcoins01 is really properly up and running.
    Specifically, you need to verify that it reached the multi-user milestone and that all cluster-related SMF services are running correctly (i.e., verify "svcs -x" in the non-global zone).
    You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e., in the global zone); a VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups are mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
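    For the zone health check mentioned above, a quick sketch from the global zone (zone name taken from the post):
    # confirm nothing is in maintenance inside the non-global zone
    zlogin zcoins01 svcs -x
    # confirm the multi-user milestone was reached
    zlogin zcoins01 svcs milestone/multi-user-server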
    Regards
    Thorsten

  • Sun cluster Node ID Order

    I installed Sun Cluster 3.1 and it all seemed successful. However, the node IDs and the private hostnames seem twisted: "comdb1" has node ID 2 and "comdb2" has node ID 1. I installed the software from "comdb1", so it should have used that as node 1, right? I pasted below some info from 'scconf -p'.
    Cluster node name: comdb1
    Node ID: 2
    Node enabled: yes
    Node private hostname: clusternode2-priv
    Node quorum vote count: 1
    Node reservation key: 0x4472697D00000002
    Node transport adapters: ce4 ce1
    Cluster node name: comdb2
    Node ID: 1
    Node enabled: yes
    Node private hostname: clusternode1-priv
    Node quorum vote count: 1
    Node reservation key: 0x4472697D00000001
    Node transport adapters: ce4 ce1
    Thank you in advance.

    Consequently, when installing Oracle 10g RAC, the database instance "db1" is created on node 2 and "db2" on node 1, since RAC relies on the private node name and the node ID. Otherwise I wouldn't bother with how the cluster software assigns its node IDs.
    Thanks again,
    Luke

  • Connected to an idle instance in sun cluster nodes.

    I have two Sun Cluster nodes sharing common storage.
    Two schemas:
    test1 for nodeA
    test2 for nodeB.
    My requirement is as follows:
    Log in to node B.
    export ORACLE_SID=test1
    sqlplus / as sysdba
    But I am getting
    "connected to an idle instance"
    Is there any way to connect to the node A schema from node B?

    I found the answer:
    sqlplus <sysdbauser>/<password>@test1 as sysdba
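    This works because @test1 makes SQL*Plus connect through the listener via a TNS alias rather than attaching to a (non-existent) local instance. A hedged tnsnames.ora sketch (the HOST value is a placeholder for wherever test1 runs):
    TEST1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = test1))
      )
    Note that connecting as sysdba through the listener requires the remote instance to have a password file (created with orapwd) and remote_login_passwordfile set accordingly.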

  • 11g r2 non rac using asm for sun cluster os (two node but non-rac)

    I am going to do a grid installation for non-RAC using ASM in a two-node Sun Cluster environment.
    How do I create a candidate disk in Solaris Cluster (SPARC OS) to install the grid home in ASM? Please provide the steps if anyone knows.

    Please refer to the thread Re: 11GR2 ASM in non-rac node not starting... failing with error ORA-29701
    and this doc http://docs.oracle.com/cd/E11882_01/install.112/e24616/presolar.htm#CHDHAAHE
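    As a hedged outline of the usual approach on Solaris Cluster (device names and ownership below are placeholders; the Oracle doc linked above is authoritative): present a shared LUN through the DID namespace, hand the raw slice to the grid owner, and point ASM discovery at it.
    # confirm the DID device is seen by both nodes
    cldevice list -v | grep d10
    # on both nodes, give the grid owner access to the raw slice
    chown oracle:dba /dev/did/rdsk/d10s6
    chmod 660 /dev/did/rdsk/d10s6
    # then set the ASM disk discovery string in the ASM instance, e.g.
    # ASM_DISKSTRING='/dev/did/rdsk/*s6'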
