Patch cluster 3/26/10 will hose X4500 disk management tasks in multiuser mode

I just want to warn users about a potential problem you may encounter if you have a Sun X4500 system.
After applying patch cluster 3/26/10 on a Sun X4500 (Thumper), I can no longer do many disk management tasks in multiuser mode, such as
format
zpool create
and possibly many more commands involving disk devices will just hang.
Drop to single-user mode, however, and you CAN still do these management tasks; I had to create my zpool in single-user mode.
zfs create pool/zfsset still works in multiuser mode.
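For example, this is roughly what I had to do (the pool and device names below are placeholders, not my actual layout):
# init S
# zpool create tank mirror c0t1d0 c1t1d0
# init 3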
Thank you for your attention.

I'm having exactly the same problem on an x4500 server running OpenSolaris snv_127. We suffered a disk failure (c7t0d0), so I resilvered one of the pools onto a spare drive, which succeeded:
# zpool history data
<snip>
2010-04-20.14:45:08 zpool offline data c7t0d0
2010-04-20.14:58:19 zpool replace -f data c7t0d0 c7t1d0
# zpool status data
pool: data
state: ONLINE
scrub: resilver completed after 43h58m with 0 errors on Thu Apr 22 10:56:50 2010
config:
NAME         STATE     READ WRITE CKSUM
data         ONLINE       0     0     0
  raidz2-0   ONLINE       0     0     0
    c11t1d0  ONLINE       0     0     0
    c10t1d0  ONLINE       0     0     0
    c13t1d0  ONLINE       0     0     0
    c12t0d0  ONLINE       0     0     0
    c8t1d0   ONLINE       0     0     0
    c12t1d0  ONLINE       0     0     0
    c10t0d0  ONLINE       0     0     0
    c10t7d0  ONLINE       0     0     0
    c12t2d0  ONLINE       0     0     0
    c8t0d0   ONLINE       0     0     0
    c7t1d0   ONLINE       0     0     0  381G resilvered
As you can see, the drive c7t0d0 has been successfully replaced and no longer appears in any zpool:
# zpool status | grep c7t0d0
# iostat -En | grep c7t0d0
c7t0d0 Soft Errors: 2 Hard Errors: 4 Transport Errors: 0
# cfgadm -l | grep c7t0d0
sata1/0::dsk/c7t0d0 disk connected configured ok
However, when I attempt to unconfigure the SATA port in preparation for replacing the drive, it doesn't work:
# cfgadm -c unconfigure sata1/0
Unconfigure the device at: /devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1:0
This operation will suspend activity on the SATA device
Continue (yes/no)? yes
cfgadm: Hardware specific failure: Failed to unconfig device at ap_id: /devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1:0
Also, just pulling out the drive doesn't work; you end up with an unusable, unconfigured port:
# cfgadm -l | grep unusable
sata6/0 disk connected unconfigured unusable
It's highly frustrating, because it means to replace disks we'll have to shut this server down. Not ideal!
I'm fairly certain this used to work fine.
If anyone has a solution, I'm all ears. I'm going to update this box to snv_130 soon (mainly to get the new zfs resilvering code) but I'm not hopeful about this bug/issue.
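One avenue I may try before resorting to a shutdown is the cfgadm SATA plugin's hardware-specific port operations, though I haven't verified that they get past this particular failure:
# cfgadm -x sata_reset_port sata1/0
# cfgadm -x sata_port_deactivate sata1/0
# cfgadm -x sata_port_activate sata1/0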
Regards,
Alasdair

Similar Messages

  • How to backout a Recommended Patch cluster deployment in Solstice disksuite

Hello Admins,
I am planning to use the following plan of action for deploying the latest Solaris 8 Recommended Patch Cluster on the production servers I support. My
concern is this: if the patching activity fails, or the applications and Oracle databases don't come up after deploying the patch cluster, how do I revert the system to its original state using the submirrors which I detached prior to patching?
1) Will shut down the applications and the databases on the server.
2) Will capture the output of the following commands:
    df -k
    ifconfig -a
    contents of the files /etc/passwd /etc/shadow /etc/vfstab /etc/system
    metastat -p
    netstat -rn
    prtvtoc /dev/rdsk/c1t0d0s0
    prtvtoc /dev/rdsk/c1t1d0s0
    3) We bring the system to the ok prompt
4) We will try to boot the system from both the disks which are part of the d10 metadevice for the root filesystem
    =======================================================================================
    user1@myserver>pwd ; df -k / ; ls -lt | egrep '(c1t0d0s0|c1t1d0s0)' ; prtconf -vp | grep bootpath ; metastat d10
    /dev/dsk
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435895 4740117 43% /
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t0d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t1d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a
    bootpath: '/pci@1c,600000/scsi@2/disk@0,0:a'
    d10: Mirror
    Submirror 0: d11
    State: Okay
    Submirror 1: d12
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks
    d11: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s0 0 No Okay
    d12: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s0 0 No Okay
    user1@myserver>
    ===================================================================================
    ok nvalias backup_root <disk path>
    Redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want to access them. For example:
    ok printenv boot-device
    boot-device= disk net
    ok setenv boot-device disk backup_root net
    boot-device= disk backup_root net
    In the event of primary root disk failure, the system automatically boots from the secondary submirror. To test the secondary submirror, boot the system manually, as follows:
    ok boot backup_root
    user1@myserver>metadb -i
    flags first blk block count
    a m p luo 16 1034 /dev/dsk/c1t0d0s7
    a p luo 1050 1034 /dev/dsk/c1t0d0s7
    a p luo 2084 1034 /dev/dsk/c1t0d0s7
    a p luo 16 1034 /dev/dsk/c1t1d0s7
    a p luo 1050 1034 /dev/dsk/c1t1d0s7
    a p luo 2084 1034 /dev/dsk/c1t1d0s7
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    user1@myserver>df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435896 4740116 43% /
    /dev/md/dsk/d40 2053605 929873 1062124 47% /usr
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    /dev/md/dsk/d30 2053605 937231 1054766 48% /var
    swap 2606008 24 2605984 1% /var/run
    swap 6102504 3496520 2605984 58% /tmp
    /dev/md/dsk/d60 13318206 8936244 4248780 68% /u01
    /dev/md/dsk/d50 5161437 2916925 2192898 58% /opt
    user1@myserver>metastat d40
    d40: Mirror
    Submirror 0: d41
    State: Okay
    Submirror 1: d42
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d41: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s4 0 No Okay
    d42: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s4 0 No Okay
    user1@myserver>metastat d30
    d30: Mirror
    Submirror 0: d31
    State: Okay
    Submirror 1: d32
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d31: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s3 0 No Okay
    d32: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s3 0 No Okay
    user1@myserver>metastat d50
    d50: Mirror
    Submirror 0: d51
    State: Okay
    Submirror 1: d52
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10487070 blocks
    d51: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s5 0 No Okay
    d52: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s5 0 No Okay
    user1@myserver>metastat -p
    d10 -m d11 d12 1
    d11 1 1 c1t0d0s0
    d12 1 1 c1t1d0s0
    d20 -m d21 d22 1
    d21 1 1 c1t0d0s1
    d22 1 1 c1t1d0s1
    d30 -m d31 d32 1
    d31 1 1 c1t0d0s3
    d32 1 1 c1t1d0s3
    d40 -m d41 d42 1
    d41 1 1 c1t0d0s4
    d42 1 1 c1t1d0s4
    d50 -m d51 d52 1
    d51 1 1 c1t0d0s5
    d52 1 1 c1t1d0s5
    d60 -m d61 d62 1
    d61 1 1 c1t0d0s6
    d62 1 1 c1t1d0s6
    user1@myserver>pkginfo -l SUNWmdg
    PKGINST: SUNWmdg
    NAME: Solstice DiskSuite Tool
    CATEGORY: system
    ARCH: sparc
    VERSION: 4.2.1,REV=1999.11.04.18.29
    BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
    DESC: Solstice DiskSuite Tool
    PSTAMP: 11/04/99-18:32:06
    INSTDATE: Apr 16 2004 11:10
    VSTOCK: 258-6252-11
    HOTLINE: Please contact your local service provider
    STATUS: completely installed
    FILES: 150 installed pathnames
    6 shared pathnames
    19 directories
    1 executables
    7327 blocks used (approx)
    user1@myserver>
    =======================================================================================
5) After successfully testing the above, we will bring the system to single-user mode
    # reboot -- -s
6) Detach the following submirrors:
# metadetach -f d10 d12
# metadetach -f d30 d32
# metadetach -f d40 d42
# metadetach -f d50 d52
# metastat =====> (to check if the submirrors are successfully detached)
    7) Applying patch on the server
After patch installation is complete, we will be rebooting the server to single-user mode
    # reboot -- -s
confirming that patch installation was successful (uname -a).
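For example (showrev here is just an extra sanity check):
user1@myserver>uname -a
user1@myserver>showrev -p | wc -l =====> (compare the patch count before and after patching)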
8) Will be booting the server to multiuser mode (init 3) and confirming with the database and application teams that the
applications/databases are working fine. Once confirmed successful, we will be reattaching the detached submirrors:
# metattach d10 d12
# metattach d30 d32
# metattach d40 d42
# metattach d50 d52
    # metastat d10 =====> (to check the submirror is successfully reattached)
    user1@myserver>uname -a ; cat /etc/release ; date
    SunOS myserver 5.8 Generic_117350-04 sun4u sparc SUNW,Sun-Fire-V210
    Solaris 8 HW 12/02 s28s_hw1wos_06a SPARC
    Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
    Assembled 12 December 2002
    Mon Apr 14 17:10:09 BST 2008
    -----------------------------------------------------------------------------------------------------------------------------

Recommended patch sets are, and to the best of my knowledge always have been, regenerated twice a month.
I think you're thinking of maintenance releases, where they generate a new CD image which can be used to do an upgrade install.
    They try to generate those every 6 months, but the schedule often slips.
    The two most recent were sol10 11/06 and sol10 8/07.

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

Has anyone noticed this? On the Solaris 8 and 9 patch cluster readmes, it shows sol 9 patches have been added to the sol 8 cluster and sol 8 patches have been added to the sol 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zipfile became so large that it was subsequently supplanted by 117191-xx and that has also been supplanted when its zipfile became very large, by 118558-xx.
    Consequently you will never see any newer version than 112233-12 on that particular patch.
What does uname -a show you for that system?
Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
If you have login privileges to Sunsolve, find Infodoc 76028.

Latest patch cluster dated March 14 installed on Solaris 10

I installed the latest patch cluster on my Solaris 10 system and afterwards the Solaris Management Console would not work. I kept getting the error that the server was not running. I followed the limited troubleshooting guidelines for stopping and starting the service, but it still did not work. After reviewing the patch cluster I discovered there was a patch that updated SMC. There were no special instructions for this patch. I removed the patch and SMC worked fine. Has anyone had any problems with patch 121308-12?

Patch 119313-19 (SPARC) or 119314-20 (x86) will resolve your problem.
But these patches are NOT included in the Recommended patches.
    You need to join Sun's SunSolve or Sun's standard support.

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
If I then create a flash archive and use it, the jumpstart process puts the locale info back and the problem appears again.
It's not critical, as I don't need to be on the latest Patch Cluster, but I wondered if I'm the only one having issues?

    If you open the directory in CDE's file manager, right click on the zipped file and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
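From the command line the equivalent is roughly as follows; the zip file and install script names here are placeholders and vary between cluster releases:
# unzip 10_x86_Recommended.zip
# cd 10_x86_Recommended
# ./install_cluster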
    Lee

  • MB.ACT light OFF --- after Solaris 9 recommended patch cluster applied

    After applying the recommended patch cluster to several SunFire V210s/V240s the "system ready" or "activity" light on the front panel no longer illuminates. The systems come up fine and are fully functional, just no light.
    While this has absolutely no impact on usability it is a bit disconcerting for the folks in the server room when glancing at the rack and not seeing the typical green light.
    I first noticed the problem in mid 2007 and have tried patch clusters as recent as last month, but the light remains out.
    One interesting discovery is that if I reset (hard) the ALOM module the light comes back on. However, if the system is subsequently rebooted for any reason the light does not come back on...
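(For reference, the hard reset I'm doing is from the ALOM prompt; syntax quoted from memory, so double-check it against your ALOM docs:
sc> resetsc -y
)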
    If anyone can shed some light on the issue I would greatly appreciate it!
    -Sap

    continued.
    I removed patch 138217-01 (svccfg & svcprop patch) and re-applied it.
    Then, after booting into S-mode, I tried to continue with the broken patch 139555-08 (kernel).
    It ends with
    ! root@jumpstart:/var/spool/patch # patchadd 139555-08
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Preparing checklist for non-global zone check...
    Checking non-global zones...
    This patch passes the non-global zone check.
    139555-08
    Summary for zones:
    Zone oem
    Rejected patches:
    None.
    Patches that passed the dependency check:
    139555-08
    Patching global zone
    Adding patches...
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    Pkgadd failed. See /var/tmp/139555-08.log.21103 for details
    Patchadd is terminating.
! root@jumpstart:/var/spool/patch #
The logfile shows:
    ! root@jumpstart:/var/spool/patch # vi /var/tmp/139555-08.log.21103
    Dryrun complete.
    No changes were made to the system.
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed.  This installation
    will attempt to overwrite this package.
pkgadd: ERROR: couldn't lock in /var/run/.patchSafeMode/root/var/sadm/install (server running?): Resource temporarily unavailable
    Installation of <SUNWarc> failed (internal error).
No changes were made to the system.
Help!
    -- Nick

  • Solaris reboot loop after recommended patch cluster

    Hi,
I have a problem with my Solaris 10 x86 install on my Intel notebook. After installing the latest recommended patch cluster and the necessary reboot, my laptop stays in a loop.
One of the last lines I was able to see:
panic[cpu0]/thread=fec1d660: boot_mapin():No pp for pfnum=20fff
    fec33b4c genunix:main+1b ()
    skipping system dump - ....
    rebooting...
Any ideas?
    cu+thx

    This horrible patch caught me out too. I found this and it fixed my problem.
    http://weblog.erenkrantz.com/weblog/software/solaris
    <snip>
[...at the Solaris boot prompt; enable kmdb, debugging, and single-user
so that you can remove the patch and reboot...]
boot -kds
[...wait for it to boot...]
physmax:w
:c
[...you'll see 'stop on write of'...]
physmax/X
[...you'll see something like the following line:
physmax: bff7f
this is a hex number; add one; so if you see bff7f,
your next line will need to be bff80...]
physmax/W bff80
:c
[...system will boot and go into single user mode...
now, go toss those patches...]
    patchrm 118844-19 120662-03 118345-12 118376-04 \
    118565-03 118886-01 119076-10 118813-01 \
    118881-02 120082-07 119851-02
    shutdown -i6 -y -g0 "sun should test their patches"
    </snip>

  • Problem: installing Recommend Patch Cluster: patch 140900-10 fails

I don't know if this is the correct location for this thread (admins/moderators, move as required).
We have a number of Solaris x86 hosts (four) and I'm trying to get them patched and up to date.
    I obtained the latest Recommended Patch Cluster (dated 28Sep2010).
    I installed it on host A yesterday, 2 hours 9 mins, no problems.
    I tried to install it on host B today, and that was just a critical failure.
Host B failed to install 140900-01 because the non-global zones rejected the patch: dependency 138218-01 wasn't installed in the non-global zones.
Looking closer, it appears that 138218-01 is for global zone installation only, so it makes sense that the non-global zones rejected it.
However, both hosts were in single-user mode when installing the cluster and both hosts have non-global zones.
    140900-01 installed on host A with log:
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    | Dryrun complete.
    | No changes were made to the system.
    |
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    |
    | Installation of <SUNWcsu> was successful.
140900-01 failed on host B with this log:
    | Validating patches...
    |
    | Loading patches installed on the system...
    |
    | Done!
    |
    | Loading patches requested to install.
    |
    | Done!
    |
    | Checking patches that you specified for installation.
    |
    | Done!
    |
    |
    | Approved patches will be installed in this order:
    |
    | 140900-01
    |
    |
    | Preparing checklist for non-global zone check...
    |
    | Checking non-global zones...
    |
    | Checking non-global zones...
    |
    | Restoring state for non-global zone X...
    | Restoring state for non-global zone Y...
    | Restoring state for non-global zone Z...
    |
    | This patch passes the non-global zone check.
    |
    | None.
    |
    |
    | Summary for zones:
    |
    | Zone X
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Y
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Z
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    | Fatal failure occurred - impossible to install any patches.
    | X: For patch 140900-01, required patch 138218-01 does not exist.
    | Y: For patch 140900-01, required patch 138218-01 does not exist.
    | Z: For patch 140900-01, required patch 138218-01 does not exist.
    | Fatal failure occurred - impossible to install any patches.
    | ----
    | patchadd exit code : 1
| application of 140900-01 failed : unhandled subprocess exit status '1' (exit 1 branch)
    | finish time : 2010.10.02 09:47:18
    | FINISHED : application of 140900-01
It looks as if the patch strategy is completely different for the two hosts; both are Solaris x86 (A is a V40z; B is a V20z).
Trying to use "-G" with patchadd complains that the patch is for global zones only and stops (i.e. it's almost as if it thinks installation is being attempted in a non-global zone).
Without "-G", same behavior as the patch cluster install attempt (no surprise there).
I have inherited these machines; I am told each of them was upgraded to Solaris 10u6 in June 2009.
    Other than the 4-5 months it took oracle to get our update entitlement to us, we've had no problems.
    I'm at a loss here, I don't see anyone else having this type of issue.
    Can someone point the way...?
    What am I missing...?
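For what it's worth, a quick way to compare the two hosts would be to check whether 138218-01 is actually present in the global zone and inside a non-global zone (zone name X as in the log above):
# showrev -p | grep 138218
# zlogin X 'showrev -p | grep 138218'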

    Well, I guess what you see isn't what you get.
    Guess I'm used to the fact USENET just left things formatted the way they were :-O

  • Installing new patch cluster after os is hardened with Jass 4.2

Hi, I have a Solaris 10 system that's hardened with Jass 4.2. What is the correct way/best practice to apply the latest recommended patch cluster?
Will applying the latest recommended patch cluster 'un-harden' the system?
    thanks.


  • Need patchset/patch cluster for solaris 10 5/09 update 7 to update8

Please let me know where I can download the patch cluster or patch set to upgrade Solaris 10 5/09 from Update 7 to Update 8:
    Solaris 10 5/09 s10s_u7wos_08 SPARC to Solaris 10 5/09 s10s_u8wos_08 SPARC
It's a 64-bit SPARC.
    Thanks in advance.
    With Regards
    Suresh Rao Jc
    Edited by: user13549783 on 14-Apr-2012 05:55

    Moderator Advice:
    Welcome to the OTN forums.
    This is your very first post to the forums, so before you go any further, a piece of advice:
    When you give your post a title of "Hi All" it does not tell anyone anything about what your inquiry might be.
    The topic title that you place onto each one of your posts is just as important as what you place into your post. It tells everyone whether they need to spend any time with it. It tells everyone whether you are worth spending time with, whether you will be able to understand what they type in a reply, how technical they can be in a response.
    These are technical forums. They are not chat rooms for casual feel-good gatherings. They are not `blogs`. The site is also NOT a way to make contact to Oracle's technical support staff. The OTN community is from the entire globe and is from end-users that happen to have an interest in the topics you read here.
    Spend some time reading the forum FAQ,
    https://wikis.oracle.com/display/Forums/Forums+FAQ
    which is linked at the top corner of every page.
    Also, spend some time with the forums Terms Of Use
    http://www.oracle.com/us/legal/terms/index.html
    which is linked at the bottom of every page.
    In particular, spend some time reviewing section #6 in that T.O.U.
    When you make your posts more informative and easier to read then responses will be more helpful.
    ... now go back and edit your post. Change the title to something that better describes your actual question.

  • Solaris 10 SPARC Recommended Patch Cluster for  2008 quarter 3 version

    Dear All,
Could anyone please guide me on where I can download the "Solaris 10 SPARC Recommended Patch Cluster" for the 2008 quarter 3 version?
I have checked sunsolve.sun.com and I'm able to find only the latest release.
Please guide me.
    Thanks and regards,
    veera

    Ok,
Here's a cute little formula that uses your system's parameters to gain a heads-up on the expected or estimated time to complete the patch run.
You will need a few things to prepare:
Number of "real" CPUs (not hyperthreads)
Speed of each CPU as a whole number, i.e. 2.87 GHz = 2,867
The total number of patches in the cluster, i.e. Sept 15th = 381 (Solaris 10)
    Network factor if using NFS = 2.5
    local cluster file factor = 1.5
    Patch cluster on CDROM factor = 3.25
    Now, combine all of those elements in this equation:
    { ( #1 * #2 ) / #3 * factor(#4 or #5 or #6) }
    This will yield a number in minutes of patch run time. Then all you will need to add is the standard boot up time to get the total of all patching and reboot.
    Of course you could write a script to extract all of this information and feed it to "bc -l" to get a quick figure.
    Example, on one of my Solaris 10 boxes with the information filled in to the equation:
    #> echo "((4*2660)/381)*1.5"|bc -l
    #> 41.88976377952755905511
    It actually took 44.2 minutes to complete the patching of this box plus another 12 minutes to reboot. But all in all a pretty fair estimate I think.
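A quick sketch of such a script (the psrinfo parsing and the factor/patch-count values are examples you would adjust for your own system and cluster):
#!/bin/sh
# rough patch-run estimate using the formula above
NCPU=`psrinfo -p`                                          # number of physical CPUs
SPEED=`psrinfo -v | awk '/operates at/ {print $6; exit}'`  # MHz of the first CPU
PATCHES=381                                                # patch count from the cluster README
FACTOR=1.5                                                 # 2.5 NFS, 1.5 local files, 3.25 CDROM
echo "($NCPU * $SPEED) / $PATCHES * $FACTOR" | bc -l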

  • Pkgadd failed when applying the patch cluster

    Hi,
When I apply the patch cluster to servers that have been jumpstarted to Sol10U3 and U4, I am getting a package add failure. The only way around this is to add an install user in passwd:
    install:x:0:1:installpatch:/root:/bin/true
    Is there something that needs to be configured to avoid this or is the install user required?
    Thanks

    See the Solaris FAQ:
    http://www.science.uva.nl/pub/solaris/solaris2.html#q5.59
    5.59) Patch installation often fails with "checkinstall" errors.
    When installing a patch, the Solaris 2.5+ patch installation procedure will execute the script "checkinstall" with uid nobody.
    If any of the patch files or if any part of the path leading up to the patch directory cannot be read by nobody, an error similar to the following will appear:
    patchadd .                    # or ./installpatch .
    Generating list of files to be patched...
    Verifying sufficient filesystem capacity (exhaustive method) ...
    Installing patch packages...
    pkgadd: ERROR: checkinstall script did not complete successfully
You can work around this in two ways: one is to make sure that the user "nobody" can read all patch files and can execute a "pwd" in the patch directory; the other is to add an account "install" to /etc/passwd:
         install:x:0:1:installpatch braindamage:/:/bin/true
    Installpatch and patchadd use "nobody" as a fallback if it cannot find the "install" user.
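For the first workaround, something along these lines (the patch directory path is just an example, and remember the parent directories need to be readable by "nobody" as well):
# chmod -R o+r /var/spool/patch/10_Recommended
# find /var/spool/patch/10_Recommended -type d -exec chmod o+rx {} \;
# su nobody -c "cd /var/spool/patch/10_Recommended && /bin/pwd"     (quick check; assumes the nobody account has a usable shell)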

  • Patch cluster question.

What if, on my Solaris 10 05/08 system, the following patches (about 30 of them) are not found:
118731-01 122660-10 119254-59 138217-01 ...
How can I fill in these 30 patches? Do I run patchadd separately for each one, or apply the whole patch cluster in S (single-user) mode? Do I do it in the global zone? Must I stop the zones before running patchadd?
Please help.

    Hi Hartmut,
    I kind of got the idea. Just want to make sure. The zones 'romantic' and 'modern' show "installed" as the current status at cluster-1. These 2 zones are in fact running and online at cluster-2. So I will issue your commands below at cluster-2 to detach these zones to "configured" status :
    cluster-2 # zoneadm -z romantic detach
    cluster-2 # zoneadm -z modern detach
    Afterwards, I apply the Solaris patch at cluster-2. Then, I go to cluster-1 and apply the same Solaris patch. Once I am done patching both cluster-1 and cluster-2, I will
    go back to cluster-2 and run the following commands to force these zones back to "installed" status :
    cluster-2 # zoneadm -z romantic attach -f
    cluster-2 # zoneadm -z modern attach -f
    CORRECT ?? Please let me know if I am wrong or if there's any step missing. Thanks much, Humphrey
    root@cluster-1# zoneadm list -iv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    15 classical running /zone-classical native shared
    - romantic installed /zone-romantic native shared
    - modern installed /zone-modern native shared

  • Patch cluster backout

    How to remove patch cluster without going through patchrm command on each patch individually?
The latest cluster that we installed is not testing well. One application is generating a Segmentation Fault error due to an updated version of the X Windows library. See more info on this bug:
    http://forums.vni.com/showthread.php?p=1599#post1599
I only see two options: either implement a fix for this bug (if one is available out there somewhere) or remove the entire patch bundle. If I knew which patch is the culprit, I might be able to uninstall that patch itself instead of removing the entire bundle.
    Need Help. Thanks!
    MK

    If you are talking about the Recommended Patch Cluster for SunOS 10 :
    http://sunsolve.sun.com/private-cgi/show.pl?target=patches/patch-access
I have downloaded the cluster to my home directory and reviewed it for a possible uninstall script, which in this case doesn't exist.
I did read the README file located in the cluster, which states the following:
    SAVE AND BACKOUT OPTIONS:
    By default, the cluster installation procedure uses the patchadd
    command save feature to save the base objects being patched. Prior to
    installing the patches the cluster installation script will first
    determine if enough system disk space is available in /var/sadm/patch
    to save the base objects and will terminate if not. Patches can only
    be individually backed out with the original object restored if the
    save option was used when installing this cluster. Please later refer
    to the patchrm command manual page for instructions and more
    information. It is possible to override the save feature by using the
    [-nosave] option when executing the cluster installation script. Using
    the nosave option, however, means that you will not be able to backout
    individual patches if the need arises.
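If the patches were installed with that default save option, one rough approach (a sketch only; the cluster directory and the patch_order file layout are assumptions about how your particular cluster unpacked) is to feed the cluster's patch list to patchrm in reverse order:
# cd /var/tmp/10_Recommended
# tail -r patch_order | while read p; do patchrm $p; done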

  • 9_Recommended patch cluster breaks ssh

    Hi all,
    I recently installed the latest 9_Recommended patch cluster and it has rendered ssh inoperable. I get the following error when I try to start ssh:
    ld.so.1: sshd: fatal: libgss.so.1: open failed: No such file or directory
    In doing some research, it seems that there is now a dependency on the SUNWgss packages. I either need to find a place where I can get this package from or remove the patch that has broken ssh.
    Any ideas?

    I have a case open as well. Here is the current response:
    RFE for patch 113273-11
    Take a look at this
    But he did not give any approved workaround so I pinged him again and
    await feedback.
    For the "unable to initialize mechanism library
    [usr/lib/gss/gl/mech_krb5.so]" (see bug 6392328)
    The following workarounds were provided by PTS
    and one of my customers confimred that workaound suggestion
    number 1 worked for him
    1) Turn off GSS-API support in ssh_config(4) and
    sshd_config(4)
    Add to /etc/ssh/ssh_config and /etc/ssh/sshd_config:
    GSSAPIAuthentication=no
    GSSAPIKeyExchange=no
    2) Configure /etc/krb5/krb5.conf (even if Kerberos V is not in use) so
    it contains no syntax errors.
    attached krb5.conf was given to me by my backline support for you to
    use.
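A minimal sketch of applying workaround 1 on Solaris 9 (the restart step is an assumption; adjust it to however your sshd is started):
# echo "GSSAPIAuthentication=no" >> /etc/ssh/ssh_config
# echo "GSSAPIKeyExchange=no" >> /etc/ssh/ssh_config
# echo "GSSAPIAuthentication=no" >> /etc/ssh/sshd_config
# echo "GSSAPIKeyExchange=no" >> /etc/ssh/sshd_config
# /etc/init.d/sshd restart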
    All patches for SSH Solaris9
    Solaris 9 Patches:
    >>>
    113273-10 SunOS 5.9: /usr/lib/ssh/sshd Patch
    114356-06 SunOS 5.9: /usr/bin/ssh Patch
    Solaris 9 Backport Patches:
    Those patches are:
    112908-24
    117177-02
    114356-07
    113273-11
    Based on previous patch dependencies you will want to confirm that
    these previously released patches are also installed:
    112908-01
    112921-07
    112922-02
    112923-03
    112924-01
