EP Cluster Patching

Dear Portal Gurus,
Please let me know how to upgrade/patch an EP cluster setup (WAS/EP, CI/DI, and so on).
Please also let me know the sequence.
What are the dos and don'ts in a cluster environment?
Thanks.
Jack

Jack,
Each of the SAP Notes associated with the patch should tell you what to do in a clustered environment.
Basically, in EP6 SP2 you patch the primary node and it then syncs up with the rest of the nodes in the cluster.
Hope that helps.
Regards,
Keith

Similar Messages

  • Cluster Patch on Solaris Zones

    Hi all,
    We recently have a requirement to apply a patch cluster on some of our
    Solaris zone VMs. In that case, do we also need to patch the global zone?
    What precautionary measures should we take in case things break and the
    system goes into panic mode?
    Please provide the necessary steps and procedures. Thanks a lot.
    Regards,
    LP

    There are many available patch clusters; however, you would normally use the 'Recommended Patch Cluster' or the latest dated cluster, for example the "10/08 Patch Bundle". Note that the latest bundles have been split into multiple parts because they have grown much larger than the older bundles.
    Please refer to the [download instructions|http://sunsolve.sun.com/html/cluster_bundle.html] and the readme for the relevant cluster that you download. Note that a password is required to apply these patch clusters and that the password can be found in the readme.
    If you run into any problems with the clusters, please contact [email protected].
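    Not from the original reply, but a rough sketch of the usual precautions before applying a Recommended cluster to a host with zones (the BE name, zone names and paths are examples; the exact install script and any passcode are described in each cluster's readme):
    # zoneadm list -cv              # record which non-global zones exist and their state before patching
    # lucreate -n s10_prepatch      # keep a bootable copy of the current BE as a fallback (ZFS root; UFS roots also need -m options)
    # init S                        # drop to single-user mode, as the cluster readme recommends
    ... apply the cluster per its readme ...
    # init 6                        # reboot; if the patched system misbehaves, luactivate s10_prepatch and reboot again
    The cluster is normally applied from the global zone; patchadd then propagates the applicable patches into the non-global zones, so yes, the global zone does get patched.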

  • J2SE Cluster Patches

    The application I am trying to install on Solaris 10 has the dependency below.
    "On the server where you will install the SAS Web Client component, you must install
    the J2SE Cluster Patches for Solaris. To download these patches, search for “J2SE
    Cluster Patches” for your version of Solaris at http://www.sun.com/."
    I am not able to locate the above patches on sun.com.
    Could anyone point me to the right URL?
    Thanks.

    This page includes the J2SE patches
    http://sunsolve.sun.com/show.do?target=patch-access
    You need to scroll down a bit in the recommended patches menu.

  • What is the difference between cluster patch and bundle patch

    Hi everyone,
    Can anyone please tell me the difference between a cluster patch and a bundle patch, and for what purpose we apply each?

    There is no inherent difference between a patch cluster and a patch bundle that I am aware of.
    It's just a name for a collection of patches.
    For whatever reason, the regular collections of recommended patches that Sun puts out happen to be referred to as the Recommended patch cluster.
    The "all patches to bring you up to 5/08" collection that Sun is putting out happens to be called the 5/08 patch bundle.
    I don't think they meant anything by the distinction, but that's the first time I've seen the term "patch bundle" used.
    So maybe some new patch delivery/installation technology is implied. But I doubt it.

  • To detect the cluster patch installed

    Hi,
    I am installing a cluster patch. Once the installation is complete, I want to check the cluster patch information installed on the system.
    Also, if I later install another cluster patch, will it overwrite the information already on the system or create new entries?
    I am looking for these details for Solaris 9 and Solaris 10.

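    For reference, the standard way to check this on both Solaris 9 and 10 is showrev or patchadd; patch records accumulate under /var/sadm rather than being overwritten, and a newer revision of a patch simply supersedes the older one. A small sketch (the patch ID is just an example):
    # showrev -p | grep 120011      # is patch 120011 (example ID) installed, and at which revision?
    # patchadd -p | wc -l           # count all installed patch records
    # ls /var/sadm/patch            # one directory per installed patch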

  • Passcode for installing cluster patch

    Hi,
    I installed Solaris 10 for x86 and now I want to install the patch cluster.
    I downloaded the cluster zip and unzipped it into a directory.
    When I start the installation with the "install_cluster" script, it asks me for a passcode, saying it is needed to allow install_cluster to continue running.
    I have never seen a passcode for a patch cluster before.
    Can anyone help me, please?

    This thread prompted me to try to reproduce what was seen by the original poster.
    I have an old Pentium-III 440BX system with Solaris 10 GA (3/05) installed and never patched.
    So ...
    1. Ignore my earlier comments about signed versus unsigned patches.
    It doesn't apply to the full cluster, only to individual patches that you might download at some point.
    2. Yes, it pays to read the documentation.
    The README file is clear about the passcode of "newboot".
    3. That README file also cautions about the potential for multiple reboots,
    depending on how far down-rev the system might be.
    The document suggests three reboots could happen.
    My old box needed four.
    The cluster update script initiated the reboots.
    The old 440BX chipset board interpreted the reboot signal
    as a kernel page fault panic, sync'd the filesystems, and bounced the box.
    No manual intervention was needed to cause the reboots.
    I just had to start the update process all over again after each restart,
    then sit back and watch the return codes tell me that such-and-so patch was
    already installed, then finally get far enough along to apply the next patches per the script.
    I'm now fully patched. Running showrev on a bunch of the patch numbers confirms it.
    Curious, though. GRUB was installed but it's not being used.
    The old x86 box is still using the original bootloader. No big deal.
    I had been thinking of wiping that 7.5GB drive and installing the current release, anyway.
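    A small sketch of the restart loop described above (the directory name is an assumption; the passcode "newboot" and the "already installed" behaviour come from the cluster README and the post):
    # cd /var/tmp/10_x86_Recommended
    # ./install_cluster             # enter the passcode from the README ("newboot") when prompted
    ... the system may reboot one or more times; after each reboot:
    # cd /var/tmp/10_x86_Recommended && ./install_cluster   # already-applied patches are reported and skipped
    # uname -v                      # once done, the kernel patch revision shows up here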

  • Sun cluster patch for solaris 10 x86

    I have Solaris 10 6/06 installed on an x4100 box with two-node clustering using Sun Cluster 3.1 8/05. I just want to know whether there are any recent patches available for the OS to prevent cluster-related bugs, and what they are. My kernel patch is 118855-19.
    Any input is welcome - let me know.

    Well, I would run S10 updatemanager and get the latest patches that way.
    Tim
    ---
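    The command-line equivalent of Update Manager, in case the GUI isn't handy (standard Solaris 10 tools; the system must be registered with the update service):
    # smpatch analyze               # list the patches the update service recommends for this host
    # smpatch update                # download and apply them; patches needing single-user mode or a reboot are deferred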

  • Problem: installing Recommend Patch Cluster: patch 140900-10 fails

    I don't know if this is the correct location for this thread (admin/moderator, move as required).
    We have a number of Solaris x86 hosts (four of them) and I'm trying to get them patched and up to date.
    I obtained the latest Recommended Patch Cluster (dated 28Sep2010).
    I installed it on host A yesterday, 2 hours 9 mins, no problems.
    I tried to install it on host B today, and that was just a critical failure.
    Host B failed to install 140900-01, because the non-global zones rejected the patch: dependency 138218-01 wasn't installed in the non-global zones.
    Looking closer, it appears that 138218-01 is for global zone installation only, so it makes sense that the non-global zones rejected it.
    However, both hosts were in single-user mode when installing the cluster and both hosts have non-global zones.
    140900-01 installed on host A with log:
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    | Dryrun complete.
    | No changes were made to the system.
    |
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    |
    | Installation of <SUNWcsu> was successful.
    140900-01 failed on host B with this log:
    | Validating patches...
    |
    | Loading patches installed on the system...
    |
    | Done!
    |
    | Loading patches requested to install.
    |
    | Done!
    |
    | Checking patches that you specified for installation.
    |
    | Done!
    |
    |
    | Approved patches will be installed in this order:
    |
    | 140900-01
    |
    |
    | Preparing checklist for non-global zone check...
    |
    | Checking non-global zones...
    |
    | Checking non-global zones...
    |
    | Restoring state for non-global zone X...
    | Restoring state for non-global zone Y...
    | Restoring state for non-global zone Z...
    |
    | This patch passes the non-global zone check.
    |
    | None.
    |
    |
    | Summary for zones:
    |
    | Zone X
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Y
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Z
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    | Fatal failure occurred - impossible to install any patches.
    | X: For patch 140900-01, required patch 138218-01 does not exist.
    | Y: For patch 140900-01, required patch 138218-01 does not exist.
    | Z: For patch 140900-01, required patch 138218-01 does not exist.
    | Fatal failure occurred - impossible to install any patches.
    | ----
    | patchadd exit code : 1
    | application of 140900-01 failed : unhandled subprocess exit status '1' (exit 1 branch)
    | finish time : 2010.10.02 09:47:18
    | FINISHED : application of 140900-01
    It looks as if the patch strategy is completely different for the two hosts; both are Solaris x86 (A is a V40z; B is a V20z).
    Trying to use "-G" with patchadd complains that the patch is for the global zone only and stops (i.e. it's almost as if it thinks the installation is being attempted in a non-global zone).
    Without "-G" I get the same behavior as the patch cluster install attempt (no surprise there).
    I have inherited these machines; I am told each of them was upgraded to Solaris 10u6 in June 2009.
    Other than the 4-5 months it took oracle to get our update entitlement to us, we've had no problems.
    I'm at a loss here, I don't see anyone else having this type of issue.
    Can someone point the way...?
    What am I missing...?

    Well, I guess what you see isn't what you get.
    Guess I'm used to the fact USENET just left things formatted the way they were :-O
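    Not part of the original thread, but one quick way to see whether the dependency really is missing inside each zone as opposed to the global zone (zone name X is from the log above):
    # showrev -p | grep 138218              # is 138218-01 recorded in the global zone?
    # zlogin X showrev -p | grep 138218     # same check inside non-global zone X (the grep runs in the global zone)
    If the global zone shows 138218-01 but the non-global zones do not, their patch databases are out of step with the global zone, which would explain why the dependency check rejects 140900-01 only there.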

  • Sun Cluster - Patching implication

    Hi,
    Currently, my system is running in an HA configuration. Will the Sun recommended patches affect the stability of the cluster? Anything to watch out for pre-patch? I need advice.
    Thank you

    Please check the individual patch READMEs. They will document anything special that needs to be done. After that, you should follow the patching procedure discussed in:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2de?a=view
    Regards,
    Tim
    ---
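    The procedure in that chapter is, roughly, a rolling node-by-node approach for rebooting patches. A sketch of its shape (Sun Cluster 3.1-era commands; the patch ID and path are only examples, and the individual patch READMEs always take precedence):
    # scswitch -S -h node1          # evacuate all resource groups and device groups from node1
    # shutdown -g0 -y -i0           # take node1 down to the ok prompt
    ok boot -sx                     # boot it in non-cluster, single-user mode
    # patchadd /var/tmp/144500-19   # apply the patch(es)
    # init 6                        # reboot node1 back into the cluster, verify, then repeat on the next node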

  • PatchPro interractive cluster patch

    Hi!
    Why is there no option on the PatchPro web site to select Solaris 9 MU8 and Sun Cluster 3.1 8/05 to generate a patch list?
    Does that mean that no patches are required for these products?

    To clarify the problem:
    When patch 118822-30 was applied it failed with:
    Installing 118822-30...
    /platform/sun4u/kernel/drv/sparcv9/trapstat: undefined symbol 'page_num_user_pagesizes'
    /platform/sun4u/kernel/drv/sparcv9/trapstat: undefined symbol 'userszc_2_szc'
    /platform/sun4u/kernel/drv/sparcv9/trapstat: undefined symbol 'segkmem_alloc_permanent'
    WARNING: mod_load: cannot load module 'trapstat'
    /platform/sun4u/kernel/misc/sparcv9/kmdbmod: undefined symbol 'kdi_tlb_page_unlock'
    /platform/sun4u/kernel/misc/sparcv9/kmdbmod: undefined symbol 'kdi_tlb_page_lock'
    /platform/sun4u/kernel/misc/sparcv9/kmdbmod: undefined symbol 'kdi_watchdog_disable'
    /platform/sun4u/kernel/misc/sparcv9/kmdbmod: undefined symbol 'kdi_watchdog_restore'
    WARNING: mod_load: cannot load module 'kmdbmod'
    /kernel/drv/sparcv9/kmdb: undefined symbol 'kctl_modload_activate'
    /kernel/drv/sparcv9/kmdb: undefined symbol 'kctl_deactivate'
    /kernel/drv/sparcv9/kmdb: undefined symbol 'kctl_detach'
    /kernel/drv/sparcv9/kmdb: undefined symbol 'kctl_attach'
    WARNING: mod_load: cannot load module 'kmdb'
    WARNING: kmdb: unable to resolve dependency, module 'misc/kmdbmod' not found
    This patch failure has messed up the ability to create zones, too.
    Should I try to re-apply the patch?
    I don't know why the above errors are occurring.

  • Cluster patches

    Does anyone know how to find the list of patches for a Solaris Cluster version? On SunSolve there were lists for 3.1, 3.2, etc. I am actually looking for the latest patches for Solaris Cluster 3.1.

    Thanks, Tim. I know how to search, but there used to be a page that listed all of the recommended patches for each version of cluster and platform, so they were all in one list - easier to work with that way. I am wondering if this still exists somewhere. MyOracleSupport is different and not always easy to find things on. A patchset / cluster / bundle would sure be nice...

  • Solaris 8 Patch Cluster 200801 installation  / disk utilization spikes

    Hello All,
    Since we installed the latest patch cluster, our disk utilization has shot up substantially. Is anyone else having the same experience? If you are, could you please help us identify some areas to look at?
    Thanks very much.
    Yi

    Hi again. I just fixed my problem. Apparently, when I unzipped the file and then burned it onto CD, the data got corrupted. I fixed it by copying the zip file onto the CD and then extracting the patches on the Solaris machine.
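    Not answered in the thread, but the usual first steps for pinning down a disk-utilization jump on Solaris 8, using only bundled tools:
    # iostat -xn 5                  # watch per-device busy %, service times and queue lengths
    # sar -d 5 12                   # sample disk activity for a minute to see which devices are hot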

  • 118872-03 ksh patch breaks set construct wildcard

    Not sure where to report a Solaris 10 bug - anyone know? So I will post it here... Solaris 10 SPARC patch 118872-03 breaks the set-construct wildcard. For example:
    % ls [A-Z]*
    returns filenames in both upper and lower case.
    Backing out 118872-03 fixes the problem; 118872-02 works.
    -- Frogdeep

    OK... not so sure the problem is with 118872-03 now. Looking carefully, even with the 118872-02 patch I still have the problem. So some other patch is causing it. Let's say my system is fully patched.
    On a system with only the Recommended cluster patches applied:
    % touch A aa B C D ee
    % ls [a-z]
    [a-z]: No such file or directory
    On fully patched box:
    % touch A aa B C D ee
    % ls [a-z]
    A B C D
    Doh! Help! Something is not right :*(
    -- Frogdeep
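    One thing worth ruling out here (not mentioned in the thread, so treat it as an assumption): outside the C locale, a range such as [a-z] can legitimately match upper-case names because of dictionary collation. A quick check:
    % locale                                  # which locale (and LC_COLLATE) is this session using?
    % env LC_ALL=C /bin/sh -c 'ls [a-z]*'     # under C collation the range should match lower-case names only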

  • Whole root or sparse root zones and patching

    Hi all,
    A while back, I did some cluster patching tests on a system with only sparse root zones, and one with whole root zones... and I seem to recall that the patch time was about equal, which surprised me. I had thought that with the sparse root model you are mainly patching the global zone, and even though patchadd may need to run through the sparse NGZs, it isn't doing much other than updating the /var/sadm info in the NGZs.
    Has anyone seen this to be true, or seen major patching improvements using a "sparse" root NGZ model over a "whole" root model?
    thanks much.

    My testing showed the same results and I was a bit surprised as well. As I dug into it further, my understanding was that the majority of the patch application time goes into figuring out what to patch, not actually copying files around. That work must be done for the sparse zones in the same way as for the full root zones; we just save the few milliseconds of actually backing up and replacing the file.
    I suspect there is a large amount of slack that could be optimized in the patching process (both with and without zones), but I don't understand it nearly well enough to say that with any authority.
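    For anyone unfamiliar with the two models, the difference is fixed at zone creation time (a minimal zonecfg sketch; zone names and paths are examples):
    # zonecfg -z sparse1 'create; set zonepath=/zones/sparse1'    # default template: sparse root; /usr, /lib, /platform and /sbin are inherited read-only from the global zone
    # zonecfg -z whole1 'create -b; set zonepath=/zones/whole1'   # blank template: whole root; every package is copied into the zone
    Either way, patchadd still has to evaluate and record each patch in every non-global zone's /var/sadm database, which fits the near-identical timings described above.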

  • Adding node back into cluster after removal...

    Hi,
    I removed a cluster node using "scconf -r -h <node>" (I carried out all the other usual removal steps before getting this command to work).
    Because this is a pair+1 cluster and the node I was trying to remove was physically attached to the quorum device (SCSI), I had to create a dummy node before the removal command above would work.
    I reinstalled Solaris, the SC3.1u4 framework, patches etc. and then tried to run scinstall again on the node (I first reintroduced the node to the cluster using scconf -a -T node=<node>).
    However, during the scinstall I got the following problem:
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    scrconf: RPC: Unknown host
    scinstall:  Failed communications with "bogusnode"
    scinstall: scinstall did NOT complete successfully!
    Press Enter to continue:
    I was not sure what to do at this point, but since the other cluster nodes could now see my 'new' node again, I removed the dummy node, rebooted the new node and said a little prayer...
    Now, my node will not boot as part of the cluster:
    Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: CMM: Node bogusnode (nodeid = 4) with votecount = 0 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205346037.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #18 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    NOTICE: CMM: Node (nodeid = 4) with votecount = 0 removed.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #19 completed.
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    cladm: CLCLUSTER_ENABLE: No such device
    UNRECOVERABLE ERROR: Sun Cluster boot: Could not initialize cluster framework
    Please reboot in non cluster mode(boot -x) and Repair
    syncing file systems... done
    WARNING: CMM: Node being shut down.
    Program terminated
    {1} ok
    Any ideas how I can recover from this situation without having to reinstall the node again?
    (I have a flash archive with the OS, SC3.1u4 framework etc., so it's not the end of the world, but...)
    Thanks a mil if you can help here!
    - headwrecked

    Hi - I got this problem sorted...
    I basically just removed (scinstall -r) the SC3.1u4 software from the node that was not booting, and then re-installed the software (this time the dummy node had been removed, so it did not try to contact that node and the scinstall completed without any errors).
    I think the only problem with the procedure I used to remove and re-add the node was that I forgot to remove the dummy node before re-adding the actual cluster node...
    If anyone can confirm this to be the case, great - if not, well, it's working now, so this thread can be closed.
    root@n20-1-sup # /usr/cluster/bin/scinstall -r
    Verifying that no unexpected global mounts remain in /etc/vfstab ... done
    Verifying that no device services still reference this node ... done
    Archiving the following to /var/cluster/uninstall/uninstall.1036/archive:
    /etc/cluster ...
    /etc/path_to_inst ...
    /etc/vfstab ...
    /etc/nsswitch.conf ...
    Updating vfstab ... done
    The /etc/vfstab file was updated successfully.
    The original entry for /global/.devices/node@1 has been commented out.
    And, a new entry has been added for /globaldevices.
    Mounting /dev/dsk/c3t0d0s6 on /globaldevices ... done
    Attempting to contact the cluster ...
    Trying "n20-2-sup" ... okay
    Trying "n20-3-sup" ... okay
    Attempting to unconfigure n20-1-sup from the cluster ... failed
    Please consider the following warnings:
    scrconf: Failed to remove node (n20-1-sup).
    scrconf: All two-node clusters must have at least one shared quorum device.
    Additional housekeeping may be required to unconfigure
    n20-1-sup from the active cluster.
    Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
    Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
    ** Removing Sun Cluster framework packages **
    Removing SUNWkscspmu.done
    Removing SUNWkscspm..done
    Removing SUNWksc.....done
    Removing SUNWjscspmu.done
    Removing SUNWjscspm..done
    Removing SUNWjscman..done
    Removing SUNWjsc.....done
    Removing SUNWhscspmu.done
    Removing SUNWhscspm..done
    Removing SUNWhsc.....done
    Removing SUNWfscspmu.done
    Removing SUNWfscspm..done
    Removing SUNWfsc.....done
    Removing SUNWescspmu.done
    Removing SUNWescspm..done
    Removing SUNWesc.....done
    Removing SUNWdscspmu.done
    Removing SUNWdscspm..done
    Removing SUNWdsc.....done
    Removing SUNWcscspmu.done
    Removing SUNWcscspm..done
    Removing SUNWcsc.....done
    Removing SUNWscrsm...done
    Removing SUNWscspmr..done
    Removing SUNWscspmu..done
    Removing SUNWscspm...done
    Removing SUNWscva....done
    Removing SUNWscmasau.done
    Removing SUNWscmasar.done
    Removing SUNWmdmu....done
    Removing SUNWmdmr....done
    Removing SUNWscvm....done
    Removing SUNWscsam...done
    Removing SUNWscsal...done
    Removing SUNWscman...done
    Removing SUNWscgds...done
    Removing SUNWscdev...done
    Removing SUNWscnmu...done
    Removing SUNWscnmr...done
    Removing SUNWscscku..done
    Removing SUNWscsckr..done
    Removing SUNWscu.....done
    Removing SUNWscr.....done
    Removing the following:
    /etc/cluster ...
    /dev/did ...
    /devices/pseudo/did@0:* ...
    The /etc/inet/ntp.conf file has not been updated.
    You may want to remove it or update it after uninstall has completed.
    The /var/cluster directory has not been removed.
    Among other things, this directory contains
    uninstall logs and the uninstall archive.
    You may remove this directory once you are satisfied
    that the logs and archive are no longer needed.
    Log file - /var/cluster/uninstall/uninstall.1036/log
    root@n20-1-sup #
    Ran the scinstall again:
    >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
    scinstall -ik \
    -C N20_Cluster \
    -N n20-2-sup \
    -M patchdir=/var/cluster/patches \
    -A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe5 \
    -m endpoint=:qfe1,endpoint=switch1 \
    -m endpoint=:qfe5,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?
    Checking device to use for global devices file system ... done
    Installing patches ... failed
    scinstall: Problems detected during extraction or installation of patches.
    Adding node "n20-1-sup" to the cluster configuration ... skipped
    Skipped node "n20-1-sup" - already configured
    Adding adapter "qfe1" to the cluster configuration ... skipped
    Skipped adapter "qfe1" - already configured
    Adding adapter "qfe5" to the cluster configuration ... skipped
    Skipped adapter "qfe5" - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Copying the config from "n20-2-sup" ... done
    Copying the postconfig file from "n20-2-sup" if it exists ... done
    Copying the Common Agent Container keys from "n20-2-sup" ... done
    Setting the node ID for "n20-1-sup" ... done (id=1)
    Verifying the major number for the "did" driver with "n20-2-sup" ... done
    Checking for global devices global file system ... done
    Updating vfstab ... done
    Verifying that NTP is configured ... done
    Initializing NTP configuration ... done
    Updating nsswitch.conf ...
    done
    Adding clusternode entries to /etc/inet/hosts ... done
    Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
    IP Multipathing already configured in "/etc/hostname.qfe2".
    Verifying that power management is NOT configured ... done
    Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
    Ensure network routing is disabled ... done
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    Log file - /var/cluster/logs/install/scinstall.log.938
    Rebooting ...
    Mar 13 13:59:13 n20-1-sup reboot: rebooted by root
    Terminated
    root@n20-1-sup # syncing file systems... done
    rebooting...
    R
    LOM event: +103d+20h44m26s host reset
    screen not found.
    keyboard not found.
    Keyboard not present. Using lom-console for input and output.
    Sun Netra T4 (2 X UltraSPARC-III+) , No Keyboard
    Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.10.1, 4096 MB memory installed, Serial #52960491.
    Ethernet address 0:3:ba:28:1c:eb, Host ID: 83281ceb.
    Initializing 15MB Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    Configuring devices.
    devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    Loading smf(5) service descriptions: 24/24
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205416931.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #23 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    NOTICE: CMM: Votecount changed from 0 to 1 for node n20-1-sup.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #24 completed.
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe1
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe5
    did subpath /dev/rdsk/c1t3d0s2 created for instance 2.
    did subpath /dev/rdsk/c2t3d0s2 created for instance 12.
    did subpath /dev/rdsk/c1t3d1s2 created for instance 3.
    did subpath /dev/rdsk/c1t3d2s2 created for instance 6.
    did subpath /dev/rdsk/c1t3d3s2 created for instance 7.
    did subpath /dev/rdsk/c1t3d4s2 created for instance 8.
    did subpath /dev/rdsk/c1t3d5s2 created for instance 9.
    did subpath /dev/rdsk/c1t3d6s2 created for instance 10.
    did subpath /dev/rdsk/c1t3d7s2 created for instance 11.
    did subpath /dev/rdsk/c2t3d1s2 created for instance 13.
    did subpath /dev/rdsk/c2t3d2s2 created for instance 14.
    did subpath /dev/rdsk/c2t3d3s2 created for instance 15.
    did subpath /dev/rdsk/c2t3d4s2 created for instance 16.
    did subpath /dev/rdsk/c2t3d5s2 created for instance 17.
    did subpath /dev/rdsk/c2t3d6s2 created for instance 18.
    did subpath /dev/rdsk/c2t3d7s2 created for instance 19.
    did instance 20 created.
    did subpath n20-1-sup:/dev/rdsk/c0t6d0 created for instance 20.
    did instance 21 created.
    did subpath n20-1-sup:/dev/rdsk/c3t0d0 created for instance 21.
    did instance 22 created.
    did subpath n20-1-sup:/dev/rdsk/c3t1d0 created for instance 22.
    Configuring DID devices
    t_optmgmt: System error: Cannot assign requested address
    obtaining access to all attached disks
    n20-1-sup console login:
