Latest patch cluster dated March 14 installed on Solaris 10

I installed the latest patch cluster on my Solaris 10 system and afterwards the Solaris Management Console would not work. I kept getting the error that the server was not running. I followed the limited troubleshooting guidelines for stopping and starting the service, but it still did not work. After reviewing the patch cluster I discovered there was a patch that updated SMC. There were no special instructions for this patch. I removed the patch and SMC worked fine. Has anyone had any problems with patch 121308-12?

Patches 119313-19 (SPARC) and 119314-20 (x86) will resolve your problem.
But these patches are NOT included in the Recommended patches.
You need to join Sun's SunSolve or Sun's standard support.
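
For reference, a minimal sketch of applying that fix (assuming the patch was downloaded and unpacked under /var/tmp, and that SMC's server is controlled by the stock /etc/init.d/init.wbem script):
# showrev -p | grep 119313        (is the SPARC patch already there? use 119314 on x86)
# patchadd /var/tmp/119313-19     (apply it from wherever you unpacked it)
# /etc/init.d/init.wbem stop
# /etc/init.d/init.wbem start
# /etc/init.d/init.wbem status    (confirm the SMC/WBEM server is running again)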

Similar Messages

  • GLOBAL ZONE stuck in shutting_down after applying latest patch cluster

    Hi,
    after installing the latest patch cluster, my zones on the system are not accessible
    root@typhoon # zoneadm list -civ
    ID NAME STATUS PATH BRAND IP
    0 global shutting_down / native shared
    - typhoon-log installed /local/zone/typhoon-log native shared
    - typhoon-ftp installed /local/zone/typhoon-ftp native shared
    - typhoon-rndrepos installed /local/zone/typhoon-rndrepos native shared
    - typhoon-drpreview installed /local/zone/typhoon-test native shared
    - typhoon-ossec installed /local/zone/typhoon-ossec native shared
    - typhoon-drjboss installed /local/zone/typhoon-drjboss native shared
    - typhoon-webdata installed /datapool/web/zones/webdata native shared
    - typhoon-webzone installed /datapool/web/zones/typhoon-webzone native shared
    # uname -a
    SunOS typhoon 5.10 Generic_139556-08 i86pc i386 i86pc
    there is one service reporting in maintenance
    # svcs -xv
    svc:/system/filesystem/volfs:default (Volume Management filesystem)
    State: maintenance since Wed Jul 15 20:04:07 2009
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/man -s 7FS volfs
    See: /var/svc/log/system-filesystem-volfs:default.log
    Impact: This service is not running.
    I tried stopping and starting vold, but without any effect.
    The / filesystem is on a ZFS pool:
    # zpool list
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    datapool 576G 105G 471G 18% ONLINE -
    datapool2 148G 111G 37.0G 74% ONLINE -
    rpool 120G 37.1G 82.9G 30% ONLINE -
    root@typhoon # zpool status
    pool: datapool
    state: ONLINE
    scrub: scrub completed after 0h31m with 0 errors on Thu Jul 16 02:31:53 2009
    config:
    NAME STATE READ WRITE CKSUM
    datapool ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c2d1s3 ONLINE 0 0 0
    c1d1s3 ONLINE 0 0 0
    errors: No known data errors
    pool: datapool2
    state: ONLINE
    scrub: scrub completed after 1h53m with 0 errors on Wed Jul 15 23:53:06 2009
    config:
    NAME STATE READ WRITE CKSUM
    datapool2 ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c2d0s0 ONLINE 0 0 0
    c1d0s3 ONLINE 0 0 0
    errors: No known data errors
    pool: rpool
    state: ONLINE
    scrub: scrub completed after 0h26m with 0 errors on Thu Jul 16 06:20:39 2009
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c1d1s0 ONLINE 0 0 0
    c2d1s0 ONLINE 0 0 0
    errors: No known data errors
    root@typhoon #
    # tail /var/svc/log/system-filesystem-volfs:default.log
    [ Jul 16 08:03:11 Method "start" exited with status 0 ]
    Thu Jul 16 08:03:11 2009 fatal: mounting of "/vol" failed
    [ Jul 16 08:03:11 Stopping because all processes in service exited. ]
    [ Jul 16 08:03:11 Executing stop method (:kill) ]
    [ Jul 16 08:03:11 Executing start method ("/lib/svc/method/svc-volfs start") ]
    [ Jul 16 08:03:11 Method "start" exited with status 0 ]
    Thu Jul 16 08:03:11 2009 fatal: mounting of "/vol" failed
    [ Jul 16 08:03:11 Stopping because all processes in service exited. ]
    [ Jul 16 08:03:11 Executing stop method (:kill) ]
    [ Jul 16 08:03:11 Restarting too quickly, changing state to maintenance ]
    Please advise (this was a test run before implementing this patch cluster on the production server).
    I need a solution to get my zones back up and running, and then I can decide what to do with this patch cluster.
    thx

    Sound familiar?
    [http://opensolaris.org/jive/thread.jspa?threadID=105001&tstart=0]
    This guy killed a process as workaround
    [http://alittlestupid.com/2009/07/04/solaris-zone-stuck-in-shutting_down-state/]
    We patched some SPARC systems recently with no issues, though that's little consolation to you x86 admins.
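
    Not from the original thread, but as a hedged first step on the volfs side you could check why /vol fails to mount and then clear the maintenance state once the cause is fixed (service name taken from the svcs -xv output above):
    # ls -ld /vol                                    (the mount point must exist)
    # mount | grep /vol                              (check nothing else is already mounted there)
    # svcadm clear svc:/system/filesystem/volfs:default
    # svcs -l svc:/system/filesystem/volfs:default   (verify it comes back online)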

  • Latest Patch cluster (Feb 11, 2010) unable to run in Single user mode.

    Hi ,
    This is my first post in this forum.
    Hope you people will help to resolve the issue.
    My issue: I have downloaded the latest patch cluster (Feb 11, 2010) and tried to install it from single-user mode.
    Nothing happened. I waited for more than 30 minutes, still no progress.
    Then I killed the process and tried to run it in multi-user mode (to verify whether there was an issue with the extraction or the file) -- it works (and I killed that process too)!
    Anyone facing this strange issue ?
    Appreciate your help to complete the patching.
    Thanks.
    sas.

    Hi,
    If you're encountering issues with a patch cluster bundle, your best bet is to contact [email protected]. From the description it sounds like there may be a filesystem not mounted in single-user mode that the cluster is trying to patch.
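
    A minimal check along those lines, assuming the cluster was unpacked under /var/tmp (adjust the path and directory name to wherever you extracted it): in single-user mode, mount the local filesystems from /etc/vfstab before starting the installer.
    # mountall -l                       (mount all local filesystems listed in /etc/vfstab)
    # df -k                             (confirm /var, /opt, /usr etc. are really mounted)
    # cd /var/tmp/10_Recommended
    # ./installcluster --s10cluster     (pass code per the cluster README)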

  • Latest Patch Cluster

    Hi,
    Can someone help me find and download the latest patch cluster package for a SPARC machine?
    Thanks
    Karthik

    Moderator Action:
    Not a hardware question. It is an OS question.
    Your post has been moved from the hardware forum you had put it into
    to a Solaris forum.
    (we guessed Solaris 10 -- you didn't bother to mention what you are using.)
    Suggestion:
    Since patches and patch clusters are only available to those with service contract login credentials to Oracle Support, you might try to log into MOS and search over there.
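
    If the box is registered with a valid support contract, one hedged alternative to downloading the cluster by hand is the bundled smpatch client on Solaris 10 (entitlement registration via sconadm is assumed to be done already):
    # smpatch analyze          (list patches the system is missing)
    # smpatch download         (fetch them into the download directory)
    # smpatch update           (apply the downloaded patches)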

  • Non-privileged commands fail after patch cluster install

    For several years, to allow users to access system functions for higher clock resolutions in their
    applications, we have made the following mods to two files (per Sun recommendations):
    In the /etc/system file, add lines:
    * 10 ms clock resolution
    set higher_tick = 1
    In /etc/user_attr, add:
    dssuser::::type=normal;defaultpriv=proc_clock_highres,file_link_any,proc_info,proc_session,proc_fork,proc_exec
    Everything was fine until I installed the latest patch cluster (from 2-14-2011) on our Netra 210. No issues installing the cluster. Upon reboot, the dssuser cannot do some simple things, like use ftp or ping. Messages like this:
    ping: unknown host
    ftp: socket: Permission denied.
    The root user has no problems using any of these commands. As soon as I comment out the dssuser line from the /etc/user_attr file, everything works as expected. But we need the highres clock settings for our applications. I am unfamiliar with the workings and intent of the entries in user_attr so any help would be most appreciated.
    Mark

    The problem is that after patch installation, the special privileges for any user defined in the /etc/user_attr file were modified to append the string "!net_privaddr", specifically denying use of common commands such as ping and ftp. The output of ppriv:
    $ ppriv $$
    14870: -sh
    flags = <none>
    E: basic,proc_clock_highres,!net_privaddr
    I: basic,proc_clock_highres,!net_privaddr
    P: basic,proc_clock_highres,!net_privaddr
    L: all
    I had to add the keyword net_privaddr in the user_attr file like:
    dssuser::::type=normal;defaultpriv=proc_clock_highres,file_link_any,proc_info,proc_session,proc_fork,proc_exec,net_privaddr
    to get past this problem. A bug or what?
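
    For what it's worth, instead of hand-editing /etc/user_attr, the same defaultpriv entry can be managed with usermod -K, which rewrites the user_attr entry for you. A sketch using the dssuser account from above (the "basic" keyword expands to the standard basic privilege set, so this is close to, but not character-for-character, the list the original poster spelled out); verify afterwards with ppriv:
    # usermod -K defaultpriv=basic,proc_clock_highres,net_privaddr dssuser
    # su - dssuser -c 'ppriv $$'       (the E/I/P sets should now include net_privaddr)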

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
    If I then create a flash archive and use this flash archive the jumpstart process then puts the locale info back and the problem appears again.
    It's not critical, as I don't need to be on the latest Patch Cluster, but I wondered if I'm the only one having issues?

    Open the directory in CDE's file manager, right-click on the zipped file, and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
    Lee
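
    On the locale error itself, a hedged check is to confirm that the en_GB.ISO8859-1 locale named in /etc/default/init is actually still installed after patching; if it is not, either reinstall the locale packages or fall back to C:
    # locale -a | grep en_GB          (is en_GB.ISO8859-1 still on the system?)
    # grep LC_ /etc/default/init      (which locales does init expect at boot?)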

  • SPARC10 not booting after 2.5.1 Recommended Patch Cluster Install

    Hi everyone,
    Just finished installing the "latest" patch cluster on my ancient SPARC10, and now it doesn't boot. Here's what I see:
    ok boot /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0
    screen not found.
    Can't open input device.
    Keyboard not present. Using tty for input and output.
    SPARCstation 10 (1 X 390Z55), No Keyboard
    ROM Rev. 2.25, 112 MB memory installed, Serial #6332710.
    Ethernet address 8:0:20:12:7f:99, Host ID: 7260a126.
    Rebooting with command: /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0
    Boot device: /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0 File and args:
    not found: abort_enable
    not found: abort_enable
    not found: tod_validate
    not found: tod_fault_reset
    krtld: error during initial load/link phase
    Memory Address not Aligned
    Type help for more information
    ok
    At first I thought I had a RAM problem. I've swapped RAM, but that didn't make it boot. I then thought I had a system board problem. I've swapped that as well, and still no dice. I wish I'd kept the list of patches that successfully installed, like my gut told me to, but I didn't. Searching Google and the Sun websites didn't turn up any fixes for the problem, so I'm hoping I can find help here.
    Did the patches clobber something the machine needs to boot? I believe the OS disks are mirrored, and I've tried booting off each submirror, but the result is the same.
    If I can avoid booting off of CD-ROM, that would be ideal since I don't have an external CD-ROM available to boot from.
    Thanks for any help!

    Ouch !
    Haven't seen that one in years !
    It's your choice of that ancient Operating Environment, combined with a CPU faster than 419MHz.
    I hope you still have the extra OE CD that was packed with that system when it was first shipped from Sun.
    Otherwise you may just be out of luck with using Solaris 2.5.1.
    Here's a link at a non-Sun third-party web site to SRDB document 20576:
    http://www.sunshack.org/data/sh/2.1/infoserver.central/data/syshbk/collections/intsrdb/20576.html
    Perhaps MAAL and haroldb can contribute their thoughts on this.
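
    If the firmware/CPU-speed angle doesn't pan out and you can get at boot media after all, a rough, hedged recovery path is to boot from the media, mount the slice under one root submirror, and back the suspect kernel patch out against that mounted root (device name below is a placeholder; on 2.5.1-era patches the backout tooling ships with the patch itself):
    ok boot cdrom -s
    # mount /dev/dsk/c0t3d0s0 /a      (the slice under one root submirror -- placeholder device)
    # ls /a/var/sadm/patch            (recover the list of patches that were applied)
    Then back out the suspect kernel patch against /a using the patch's own backout procedure (backoutpatch with the 2.5.1-era patches; patchrm -R /a on later releases).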

  • Solaris 8 Patch Cluster 200801 installation  / disk utilization spikes

    Hello All,
    Since we installed the latest Patch Cluster, our disk utilization has shot up substantially. Is anyone else having the same experience? If so, could you please help us identify some areas to look at?
    Thanks very much.
    Yi

    Hi again. I just fixed my problem. Apparently when I unzipped the file and then burned it onto CD, the data got corrupted. I fixed it by copying the zip file onto CD and then extracting the patches onto the Solaris machine.
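
    A hedged way to catch that kind of media corruption before patching is to compare checksums of the zip on the download host and on the burned CD (cksum is available on Solaris 8; the file name below is a placeholder, and /cdrom/cdrom0 assumes vold has mounted the disc):
    $ cksum 8_Recommended.zip                    (on the machine where it was downloaded)
    $ cksum /cdrom/cdrom0/8_Recommended.zip      (on the target, after mounting the CD)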

  • Patch cluster pass code - NEW from Sun just to inconvenience you

    Wow. A few of the admins here, myself included, had to throw the latest patch cluster onto a few of our servers. Sun is now requiring a pass code as an argument! And they've hidden the pass code in the README with the excuse that it's to get you to read it.
    Anyway, this "pass code" is "s10cluster". So, you have to run "./installcluster --s10cluster" to get the thing to install. {grumble} A pass code to install a cluster. Ridiculous.{grumble}
    That said, some of the other changes they've made to the install script are very welcome.
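
    For anyone hitting this cold, a minimal invocation looks like the following (the unpack directory name is assumed from a default unzip; the cluster README remains the authoritative source for the pass code):
    # unzip 10_Recommended.zip
    # cd 10_Recommended
    # ./installcluster --s10cluster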

    Sorry for spamming; I just realized what I did wrong after
    3 hours of programming.
    ({id'} has a missing @ mark :)

  • Definite Way to Determine Patch Level / Patch Cluster Version?

    Operating System is Solaris 10 x86 09/10
    I need to know the official way to determine the latest patch cluster applied to the system.
    I have tried the following:
    uname -a
    showrev -p
    cat /etc/release
    They give me the Solaris version, patches applied, and kernel versions but they do not give me the name or version of the latest patch set/cluster applied.
    For example, I know that the patch set for 01/2013 was applied. How can I prove this? Is there a command that will display that?
    Is there another way to show the dates of the latest patches applied?
    Basically I need to be able to show a third-party that the system has been updated to a certain date. Right now I have a feeling that if they see 09/10 they will assume that is when the system was last updated.
    Thank you in advance.

    In $ORACLE_HOME/inventory/ContentsXML/comps.xml, all the information related to installed one-offs is kept. In fact, OPatch reads this info and reports it during lsinventory.
    But it is recommended to handle this file very carefully; if it is damaged, your home is almost certainly gone, because no update can be done without a proper inventory.
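
    Back on the Solaris side of the question, a hedged way to show when patching last happened is to look at the timestamps of the per-patch directories that patchadd leaves behind, and at the cluster installer's own log (the log location below is the usual default; verify on your system):
    # ls -lt /var/sadm/patch | head          (newest patch install directories first)
    # ls -lt /var/sadm/install_data | head   (cluster installer logs, if a cluster installer was used)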

  • How can install the latest SUN patch cluster on individual zones?

    I have a global zone running Solaris 10 (11/06). I have a number of whole root zones running. I only want to apply the latest Sun Recommended patch cluster to a few of the individual local zones. Can this be done? How do I run install_cluster at the global and have it apply to only some of the local zones?
    thanks in advance,
    Brock

    bgammel wrote:
    I have a global zone running Solaris 10 (11/06). I have a number of whole root zones running. I only want to apply the latest Sun Recommended patch cluster to a few of the individual local zones. Can this be done? How do I run install_cluster at the global and have it apply to only some of the local zones?It really makes no sense to talk about a patch cluster this way. For some individual patches in the cluster, it may be possible for others not.
    If a patch applies to the kernel, it makes no sense to apply to some zones and not others (or not the global zone). Otherwise you'd have files out of sync with the running kernel.
    For other patches that are purely application-specific, it could be just fine. Something that patches 'sed' for instance should apply cleanly to a whole-root zone.
    You'll need to read the notes for the individual patches.
    Darren
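
    As a sketch of the userland-only case Darren describes (zone name and patch ID are placeholders), a patch that is not marked global-zone-only can be applied inside a single non-global zone by running patchadd from within that zone:
    # zlogin myzone
    myzone# patchadd /var/tmp/123456-01     (patch copied into the zone beforehand)
    myzone# showrev -p | grep 123456        (confirm it registered in this zone only)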

  • Management console problem after installing recommended patch cluster

    Hello everyone.
    I just got a new Sun 440 server; it didn't come with the Solaris OS installed, so I installed Solaris 9 12/03 on it. The installation went fine and I could start the Management Console with no problems at all. Then I went to sunsolve.sun.com to download the latest recommended patch cluster for Solaris 9 (this was yesterday, Sept. 06, 2006). I downloaded the file, checked it with md5sum, unzipped it, went to single-user mode and ran the install_patch script. After the patch cluster finished installing I rebooted the server for the patches to take effect, logged in, and tried to run Management Console 2.1. It starts OK, but I cannot use any of the services it offers because it says (in the console events):
    "Server Not Running: no solaris management console server was available on the specified server. Please ensure there is a Solaris Management Console server available on the specified host and that it is running"
    When I click on "See Exception" I get (for a particular service that is... in this case com.sun.admin.fsmgr.client.VFsMgr - for managing file systems):
    java.rmi.RemoteException: Server RMI is null:
    at sun.com.management.viperimpl.client.ViperClient.lookupServer(ViperClient.java:376)and so on...
    I already checked that the service is running on port 898 and it seems that no other app is using that same port. I also restarted the service with its init script (stop/status/start) but I get the same results.
    I don't know Solaris very well and don't know what else to do; please help.
    Thanks in advance.

    Well, I guess what you see isn't what you get.
    Guess I'm used to the fact USENET just left things formatted the way they were :-O
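
    Given how closely this matches the Solaris 10 report at the top of this page (where backing out the SMC-related patch helped), a hedged follow-up is to identify which recently applied patch touched the SMC/WBEM packages and trial-remove it (the patch ID below is a placeholder; check your cluster's README for the SMC-related entries):
    # showrev -p | grep -i wbem        (look for patches touching the WBEM/SMC packages)
    # patchrm <patch-id>               (back out the suspect patch, then restart SMC and retest)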

  • How to backout a Recommended Patch cluster deployment in Solstice disksuite

    Hello Admin's,
    I am planning to use the following plan of action for deploying the latest Solaris 8 Recommended patch cluster on the production servers I support. My
    concern is: if the patching activity fails, or the applications and Oracle databases don't come up after deploying the patch cluster, how do I revert the system to its original state using the submirrors which I detached prior to patching the system:
    1) Will shutdown the applications and the databases on the server:
    2) Will capture the output of the following commands :
    df -k
    ifconfig -a
    contents of the files /etc/passwd /etc/shadow /etc/vfstab /etc/system
    metastat -p
    netstat -rn
    prtvtoc /dev/rdsk/c1t0d0s0
    prtvtoc /dev/rdsk/c1t1d0s0
    3) We bring the system to the ok prompt
    4) We will try to boot the system from both of the disks which are part of the d10 metadevice for the root filesystem
    =======================================================================================
    user1@myserver>pwd ; df -k / ; ls -lt | egrep '(c1t0d0s0|c1t1d0s0)' ; prtconf -vp | grep bootpath ; metastat d10
    /dev/dsk
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435895 4740117 43% /
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t0d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t1d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a
    bootpath: '/pci@1c,600000/scsi@2/disk@0,0:a'
    d10: Mirror
    Submirror 0: d11
    State: Okay
    Submirror 1: d12
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks
    d11: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s0 0 No Okay
    d12: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s0 0 No Okay
    user1@myserver>
    ===================================================================================
    ok nvalias backup_root <disk path>
    Redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want to access them. For example:
    ok printenv boot-device
    boot-device= disk net
    ok setenv boot-device disk backup_root net
    boot-device= disk backup_root net
    In the event of primary root disk failure, the system automatically boots from the secondary submirror. To test the secondary submirror, boot the system manually, as follows:
    ok boot backup_root
    user1@myserver>metadb -i
    flags first blk block count
    a m p luo 16 1034 /dev/dsk/c1t0d0s7
    a p luo 1050 1034 /dev/dsk/c1t0d0s7
    a p luo 2084 1034 /dev/dsk/c1t0d0s7
    a p luo 16 1034 /dev/dsk/c1t1d0s7
    a p luo 1050 1034 /dev/dsk/c1t1d0s7
    a p luo 2084 1034 /dev/dsk/c1t1d0s7
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    user1@myserver>df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435896 4740116 43% /
    /dev/md/dsk/d40 2053605 929873 1062124 47% /usr
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    /dev/md/dsk/d30 2053605 937231 1054766 48% /var
    swap 2606008 24 2605984 1% /var/run
    swap 6102504 3496520 2605984 58% /tmp
    /dev/md/dsk/d60 13318206 8936244 4248780 68% /u01
    /dev/md/dsk/d50 5161437 2916925 2192898 58% /opt
    user1@myserver>metastat d40
    d40: Mirror
    Submirror 0: d41
    State: Okay
    Submirror 1: d42
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d41: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s4 0 No Okay
    d42: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s4 0 No Okay
    user1@myserver>metastat d30
    d30: Mirror
    Submirror 0: d31
    State: Okay
    Submirror 1: d32
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d31: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s3 0 No Okay
    d32: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s3 0 No Okay
    user1@myserver>metastat d50
    d50: Mirror
    Submirror 0: d51
    State: Okay
    Submirror 1: d52
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10487070 blocks
    d51: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s5 0 No Okay
    d52: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s5 0 No Okay
    user1@myserver>metastat -p
    d10 -m d11 d12 1
    d11 1 1 c1t0d0s0
    d12 1 1 c1t1d0s0
    d20 -m d21 d22 1
    d21 1 1 c1t0d0s1
    d22 1 1 c1t1d0s1
    d30 -m d31 d32 1
    d31 1 1 c1t0d0s3
    d32 1 1 c1t1d0s3
    d40 -m d41 d42 1
    d41 1 1 c1t0d0s4
    d42 1 1 c1t1d0s4
    d50 -m d51 d52 1
    d51 1 1 c1t0d0s5
    d52 1 1 c1t1d0s5
    d60 -m d61 d62 1
    d61 1 1 c1t0d0s6
    d62 1 1 c1t1d0s6
    user1@myserver>pkginfo -l SUNWmdg
    PKGINST: SUNWmdg
    NAME: Solstice DiskSuite Tool
    CATEGORY: system
    ARCH: sparc
    VERSION: 4.2.1,REV=1999.11.04.18.29
    BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
    DESC: Solstice DiskSuite Tool
    PSTAMP: 11/04/99-18:32:06
    INSTDATE: Apr 16 2004 11:10
    VSTOCK: 258-6252-11
    HOTLINE: Please contact your local service provider
    STATUS: completely installed
    FILES: 150 installed pathnames
    6 shared pathnames
    19 directories
    1 executables
    7327 blocks used (approx)
    user1@myserver>
    =======================================================================================
    5) After successfully testing the above we will bring the system to the single user mode
    # reboot -- -s
    6) Detach the following sub-mirrors :
    # metadetach -f d10 d12
    #metadetach -f d30 d32
    #metadetach -f d40 d42
    #metadetach -f d50 d52
    # metastat =====> (to check that the submirrors are successfully detached)
    7) Applying patch on the server
    After patch installation is complete will be rebooting the server to single user mode
    # reboot -- -s
    confirming that the patch installation was successful (uname -a).
    8) Will be booting the server to multi-user mode (init 3) and confirming with the database and application teams that the
    applications/databases are working fine. Once confirmed successful, will be reattaching the submirrors:
    # metattach d10 d12
    # metattach d30 d32
    #metattach d40 d42
    # metattach d50 d52
    # metastat d10 =====> (to check the submirror is successfully reattached)
    user1@myserver>uname -a ; cat /etc/release ; date
    SunOS myserver 5.8 Generic_117350-04 sun4u sparc SUNW,Sun-Fire-V210
    Solaris 8 HW 12/02 s28s_hw1wos_06a SPARC
    Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
    Assembled 12 December 2002
    Mon Apr 14 17:10:09 BST 2008
    -----------------------------------------------------------------------------------------------------------------------------

    Recommended patch sets are, and to the best of my knowledge always have been, regenerated twice a month.
    I think you're thinking of maintenance releases, where they generate a new CD image which can be used to do an upgrade install.
    They try to generate those every 6 months, but the schedule often slips.
    The two most recent were sol10 11/06 and sol10 8/07.
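
    On the actual rollback question (which the reply above doesn't address): with a submirror detached before patching, one commonly described approach is to come up on the untouched side and rebuild the root mirror from it. This is only a rough sketch using the d10/d11/d12 and c1t0d0s0/c1t1d0s0 names from the plan above; the /etc/vfstab and /etc/system repointing in particular needs to be prepared and tested before anyone relies on it in production:
    ok boot backup_root -as                  (boot the untouched side via the alias from step 4; -a lets you point the kernel at a pre-prepared copy of /etc/system with the md rootdev entries removed)
    Repoint /etc/vfstab at /dev/dsk/c1t1d0s0 (and the matching rdsk device) instead of /dev/md/dsk/d10, then:
    # metaclear -f d10 d11 d12               (discard the stale root mirror definitions)
    # reboot -- -s                           (come up cleanly on the bare slice)
    # metainit -f d12 1 1 c1t1d0s0           (rebuild: one-way mirror from the untouched slice)
    # metainit d10 -m d12
    # metaroot d10                           (metaroot rewrites /etc/system and /etc/vfstab for the new root mirror)
    # reboot
    # metainit -f d11 1 1 c1t0d0s0           (finally re-add the patched side; it resyncs from d12)
    # metattach d10 d11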

  • Solaris reboot loop after recommended patch cluster

    Hi,
    I have a problem with my Solaris 10 x86 install on my Intel notebook. After installing the latest recommended patch cluster and the necessary reboot, my laptop stays in a loop.
    one of the last lines I was able to see:
    panic[cpu0]/thread=fec1d660: boot_mapin(): No pp for pfnum=20fff
    fec33b4c genunix:main+1b ()
    skipping system dump - ....
    rebooting...
    any ideas ?
    cu+thx

    This horrible patch caught me out too. I found this and it fixed my problem.
    http://weblog.erenkrantz.com/weblog/software/solaris
    <snip>
    [...at the Solaris boot prompt; enable kmdb, debugging, and single-user
    so that you can remove the patch and reboot...]
    boot -kds
    [...wait for it to boot...]
    physmax:w
    :c
    [...you'll see 'stop on write of'...]
    physmax/X
    [...you'll see something like the following line:
    physmax: bff7f
    this is a hex number; add one; so if you see bff7f,
    your next line will need to be bff80...]
    physmax/W bff80
    :c
    [...system will boot and go into single user mode...
    now, go toss those patches...]
    patchrm 118844-19 120662-03 118345-12 118376-04 \
    118565-03 118886-01 119076-10 118813-01 \
    118881-02 120082-07 119851-02
    shutdown -i6 -y -g0 "sun should test their patches"
    </snip>

  • Problem: installing Recommend Patch Cluster: patch 140900-10 fails

    I don't know if this is correct location for this thread (admin/moderator move as required).
    We have a number of Solaris x86 hosts (i.e. 4) and I'm trying to get them patched and up to date.
    I obtained the latest Recommended Patch Cluster (dated 28Sep2010).
    I installed it on host A yesterday, 2 hours 9 mins, no problems.
    I tried to install it on host B today, and that was just a critical failure.
    Host B failed to install 140900-01: the non-global zones rejected the patch because dependency 138218-01 wasn't installed in the non-global zones.
    Looking closer, it appears that 138218-01 is for global zone installation only, so it makes sense that the non-global zones rejected it.
    However, both hosts were in single-user mode when installing the cluster, and both hosts have non-global zones.
    140900-01 installed on host A with log:
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    | Dryrun complete.
    | No changes were made to the system.
    |
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    |
    | Installation of <SUNWcsu> was successful.
    140900-01 failed on host B with this log:
    | Validating patches...
    |
    | Loading patches installed on the system...
    |
    | Done!
    |
    | Loading patches requested to install.
    |
    | Done!
    |
    | Checking patches that you specified for installation.
    |
    | Done!
    |
    |
    | Approved patches will be installed in this order:
    |
    | 140900-01
    |
    |
    | Preparing checklist for non-global zone check...
    |
    | Checking non-global zones...
    |
    | Checking non-global zones...
    |
    | Restoring state for non-global zone X...
    | Restoring state for non-global zone Y...
    | Restoring state for non-global zone Z...
    |
    | This patch passes the non-global zone check.
    |
    | None.
    |
    |
    | Summary for zones:
    |
    | Zone X
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Y
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Z
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    | Fatal failure occurred - impossible to install any patches.
    | X: For patch 140900-01, required patch 138218-01 does not exist.
    | Y: For patch 140900-01, required patch 138218-01 does not exist.
    | Z: For patch 140900-01, required patch 138218-01 does not exist.
    | Fatal failure occurred - impossible to install any patches.
    | ----
    | patchadd exit code : 1
    | application of 140900-01 failed : unhandled subprocess exit status '1' (exit 1 b
    | ranch)
    | finish time : 2010.10.02 09:47:18
    | FINISHED : application of 140900-01
    It looks as if the patch strategy is completely different for the two hosts; both are Solaris x86 (A is a V40z; B is a V20z).
    Trying to use "-G" with patchadd complains that the patch is for global zones only and stops (i.e. it's almost as if it thinks installation is being attempted in a non-global zone).
    Without "-G", same behavior as the patch cluster install attempt (no surprise there).
    I have inherited these machines; I am told each of them was upgraded to Solaris 10u6 in June 2009.
    Other than the 4-5 months it took oracle to get our update entitlement to us, we've had no problems.
    I'm at a loss here, I don't see anyone else having this type of issue.
    Can someone point the way...?
    What am I missing...?
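
    Not a definitive answer, but a hedged way to confirm the diagnosis on host B is to compare where 138218-01 is actually registered, since the zones reject 140900-01 when they cannot see that prerequisite (the zone names are the X/Y/Z placeholders from the log):
    # showrev -p | grep 138218                    (global zone)
    # zlogin X showrev -p | grep 138218           (repeat for each non-global zone)
    # zoneadm list -cv                            (confirm the zones on A and B are in comparable states)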

