Patch cluster question.

What if, in Solaris 10 05/08, the following patches are missing:
118731-01 122660-10 119254-59 138217-01 .... about 30 in total
How can I install these 30 patches? Do I run patchadd separately for each one, or install the whole patch cluster in single-user mode? Should this be done in the global zone, and must I stop the zones before running patchadd?
Please help.
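A minimal sketch of one common approach, run from the global zone (the unpack location and script name below are assumptions; note that patchadd run in the global zone applies patches to non-global zones by default unless -G is used):

# Option 1: install the whole Recommended cluster in single-user mode
init S
cd /var/tmp/10_Recommended        # assumed unpack location
./install_cluster                 # script name varies with cluster vintage

# Option 2: add only the missing patches, unpacked into one directory
patchadd -M /var/tmp/patches 118731-01 122660-10 119254-59 138217-01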

Hi Hartmut,
I kind of got the idea. Just want to make sure. The zones 'romantic' and 'modern' show "installed" as their current status at cluster-1. These 2 zones are in fact running and online at cluster-2. So I will issue your commands below at cluster-2 to detach these zones to "configured" status:
cluster-2 # zoneadm -z romantic detach
cluster-2 # zoneadm -z modern detach
Afterwards, I apply the Solaris patch at cluster-2. Then I go to cluster-1 and apply the same Solaris patch. Once I am done patching both cluster-1 and cluster-2, I will
go back to cluster-2 and run the following commands to force these zones back to "installed" status:
cluster-2 # zoneadm -z romantic attach -f
cluster-2 # zoneadm -z modern attach -f
Correct? Please let me know if I am wrong or if any step is missing. Thanks much, Humphrey
root@cluster-1# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
15 classical running /zone-classical native shared
- romantic installed /zone-romantic native shared
- modern installed /zone-modern native shared
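For reference, a condensed sketch of the sequence described above (host and zone names as in the post; on newer Solaris 10 updates, attach -u is a gentler alternative to attach -f because it updates the detached zone to match the patched global zone rather than skipping validation):

cluster-2 # zoneadm -z romantic detach
cluster-2 # zoneadm -z modern detach
cluster-2 # (apply the Solaris patch in the global zone)
cluster-1 # (apply the same patch)
cluster-2 # zoneadm -z romantic attach -u    # or attach -f, as in the original plan
cluster-2 # zoneadm -z modern attach -u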

Similar Messages

  • Need patchset/patch cluster for solaris 10 5/09 update 7 to update8

    Please let me know where I can download the patch cluster or patch set to upgrade Solaris 10 5/09 from Update 7 to Update 8.
    Solaris 10 5/09 s10s_u7wos_08 SPARC to Solaris 10 5/09 s10s_u8wos_08 SPARC
    It's a 64-bit SPARC.
    Thanks in advance.
    With Regards
    Suresh Rao Jc
    Edited by: user13549783 on 14-Apr-2012 05:55

    Moderator Advice:
    Welcome to the OTN forums.
    This is your very first post to the forums, so before you go any further, a piece of advice:
    When you give your post a title of "Hi All" it does not tell anyone anything about what your inquiry might be.
    The topic title that you place onto each one of your posts is just as important as what you place into your post. It tells everyone whether they need to spend any time with it. It tells everyone whether you are worth spending time with, whether you will be able to understand what they type in a reply, how technical they can be in a response.
    These are technical forums. They are not chat rooms for casual feel-good gatherings. They are not blogs. The site is also NOT a way to make contact with Oracle's technical support staff. The OTN community spans the entire globe and is made up of end-users that happen to have an interest in the topics you read here.
    Spend some time reading the forum FAQ,
    https://wikis.oracle.com/display/Forums/Forums+FAQ
    which is linked at the top corner of every page.
    Also, spend some time with the forums Terms Of Use
    http://www.oracle.com/us/legal/terms/index.html
    which is linked at the bottom of every page.
    In particular, spend some time reviewing section #6 in that T.O.U.
    When you make your posts more informative and easier to read then responses will be more helpful.
    ... now go back and edit your post. Change the title to something that better describes your actual question.

  • JMS/Queue cluster question

              Hi,
              I have some very basic cluster questions on JMS queues. Let's say I have 3 WLS instances in a cluster.
              Q1> I create the queue on WLS #1 only - then all the other WLS instances (#2 and #3) should have a stub
              in their JNDI tree for the queue, which points to the queue on #1 - right? Basically what I am trying to
              achieve is to have the queue on one server and have all the other servers hold a pointer to it - I believe
              this is possible in a WLS cluster - right?
              Q2> Is there any way a client of a queue running on a WLS instance can tell whether the queue handle it is
              using is local (i.e. in the same server) or remote? Is the API createQueue("./queuename") going to help here?
              Q3> Is there any way to create a queue dynamically - I guess JMX is the answer, right? But I will take this
              question a bit further - let's say the answer to Q1 is yes. In that case, if server #1 crashes, then #2 and
              #3 have no queues. So if they try to create a replica of the queue (as on server #1) - pointing to the same
              file store - can they do it? I want only one of them to succeed in creating the queue, and the queue should
              have all the data of the #1 queue (a 1-to-1 replica).
              All I want is the concept of a primary and secondary queue in a cluster: keep using the primary queue, but
              if it fails, use the secondary queue - similar to the HttpSession replication concept in clusters. My
              cluster's purpose is more failover than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So lets take the scenarion on case of JMS cluster
              > and 7.0.
              >
              > I do not understand what u mean by HA framework?
              An HA framework is a third party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as u have said I can create a distrubuted Queue with a fwd delay attribute
              > set to 0 if I have the consumer only in one server say server #1.
              > But still if the server #1 goes down u say that the Queues in server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 Queue when
              > it went down -right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              >
              > Why cant the other servers see them - they all point to the same store right??
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers can not share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS Queues. Lets say Q1>I
              > >have 3 WLS
              > >> in cluster. I create the queue in only WLS#1 - then all the other WLS
              > >(#2 and #3)
              > >> should have a stub in their JNDI tree for the Queue which points to the
              > >Queue in
              > >> #1 - right?
              > >
              > >Its not a stub. But essentially right.
              > >
              > >> Basically what I am trying to acheive is to have the queue in one server
              > >> and all the other servers have a pointer to it - I beleive this is possible
              > >in WLS
              > >> cluster - right??
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client to the queue running on a WLS can tell whether
              > >the
              > >> Queue handle its using is local (ie in the same server) or remote. Is
              > >the API createQueue(./queuename)
              > >> going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3>Is there any way to create a Queue dynamically - I guess JMX is the
              > >answer -right?
              > >> But I will take this question a bit further - lets say Q1 answer is yes.
              > >In this
              > >> case if server #1 crashes - then #2 and #3 have no Queues. So if they
              > >try to create
              > >> a replica of the Queue (as on server#1) - pointing to the same filestore
              > >- can they
              > >> do it??
              > >> - I want only one of them to succed in creating the Queue and also the
              > >Queue
              > >> should have all the data of the #1 Queue (1 to 1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity fine, or are
              > >willing
              > >to ensure this manually then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of primary and secondary queue in a cluster.
              > >Go on using
              > >> the primary queue - but if it fails use the 2ndry queue. Kind of HttpSession
              > >replication
              > >> concept in clusters. My cluster purpose is more for failover rather than
              > >loadbalancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination
              > >you want used most. Optionally, 7.0 will automatically forward messages
              > >from distr. dest
              > >members that have no consumers to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed
              > >destination features built into WL 7.0. If 7.0 is not an option, you can
              > >still approximate a simple
              > >distributed destination when running JMS servers in a "single-tier"
              > >configuration.
              > > Single-tier indicates
              > >that there is a local JMS server on each server that a connection factory
              > >is targeted at. Here is a
              > >typical scenario, where producers randomly pick which server and consequently
              > >which part of the
              > >distributed destination to produce to, while consumers in the form of MDBs
              > >are pinned to a particular
              > >destination and are replicated homogenously to all destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the
              > >distributed queue "A". Remember, the JMS servers (and WL servers) must
              > >be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively become
              > >the distributed destination. Each destination should have the same name
              > >"A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take
              > >care to set the destination's "JNDINameReplicated" parameter to false.
              > >The "JNDINameReplicated" parameter is available in 7.0, 6.1SP3
              > >or later, or 6.1SP2 with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a
              > >JMS server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with destination
              > >"A", configure its destination to be "JNDI_A". Do not specify a connection
              > >factory URL when configuring the MDB, as it can
              > >use the server's default JNDI context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a
              > >session as usual. Then producers
              > >look up the destination by calling javax.jms.QueueSession.createQueue(String).
              > > The parameter to
              > >createQueue requires a special syntax, the syntax is "./<queue name>", so
              > >"./A" works in this example.
              > >This will return a physical destination of the distributed destination that
              > >is local to the producer's
              > >connection. This syntax is available on 7.0, 6.1SP3 or later, and 6.1SP2
              > >with patch CR072612.
              > >
              > >This design pattern allows for high availability, as if one server goes
              > >down, the distributed destination
              > >is still available and only the messages on that one server become unavailable.
              > > It also allows for high
              > >scalability as speedup is directly proportional to the number of servers
              > >on which the distributed
              > >destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
              > >
              

  • Latest Patch Cluster

    Hi,
    Can someone help me find and download the latest patch cluster package for a SPARC machine?
    Thanks
    Karthik

    Moderator Action:
    Not a hardware question. It is an OS question.
    Your post has been moved from the hardware forum you had put it into
    to a Solaris forum.
    (We guessed Solaris 10 -- you didn't bother to mention what you are using.)
    Suggestion:
    Since patches and patch clusters are only available to those with service contract login credentials to Oracle Support, you might try to log into MOS and search over there.

  • Management console problem after installing recommended patch cluster

    Hello everyone.
    I just got a new SUN 440 server, it didn't come with the solaris os installed and so I installed solaris 9 12/03 to it. The installation went fine and I could start the management console with no problems at all. Then I went to sunsolve.sun.com to download the latest recommended patch clusters for solaris 9 (this was yesterday Sept.-06-2006), I downloaded the file, checked it with md5sum, unzipped it, went to single user mode and ran the install_patch script. After the patch cluster finished installing I rebooted the server for the patches to take effect, I logged in, tried to run console management 2.1 and it starts OK but I cannot use any services it offers because it says (in the console events):
    "Server Not Running: no solaris management console server was available on the specified server. Please ensure there is a Solaris Management Console server available on the specified host and that it is running"
    When I click on "See Exception" I get (for a particular service that is... in this case com.sun.admin.fsmgr.client.VFsMgr - for managing file systems):
    java.rmi.RemoteException: Server RMI is null:
    at sun.com.management.viperimpl.client.ViperClient.lookupServer(ViperClient.java:376)and so on...
    I already checked that the service is running on port 898 and it seems that there is no other app. using that same port. I also restarted the service using /etc/init.d.init stop/status/start but I get the same results.
    I don't know solaris very much and don't know what else to do, please help.
    Thanks in advance.
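    A hedged restart-and-verify sketch (the init script and client paths below are assumptions based on a stock Solaris 9 install; adjust them to your system):

    /etc/init.d/init.wbem stop
    /etc/init.d/init.wbem start
    netstat -an | grep 898          # confirm the SMC server is listening again
    /usr/sadm/bin/smc &             # then retry the console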

    Well, I guess what you see isn't what you get.
    Guess I'm used to the fact USENET just left things formatted the way they were :-O

  • How to backout a Recommended Patch cluster deployment in Solstice disksuite

    Hello Admin's,
    I am planning to use the following plan of action for deploying the latest Solaris 8 Recommended patch cluster on the production servers I support. My
    concern is: if the patching activity fails, or the applications and Oracle databases don't come up after deploying the patch cluster, how do I revert the system to its original state using the submirrors I detached prior to patching?
    1) Shut down the applications and the databases on the server.
    2) Capture the output of the following commands:
    df -k
    ifconfig -a
    contents of the files /etc/passwd /etc/shadow /etc/vfstab /etc/system
    metastat -p
    netstat -rn
    prtvtoc /dev/rdsk/c1t0d0s0
    prtvtoc /dev/rdsk/c1t1d0s0
    3) Bring the system to the ok prompt.
    4) Try to boot the system from both of the disks that are part of the d10 metadevice for the root filesystem.
    =======================================================================================
    user1@myserver>pwd ; df -k / ; ls -lt | egrep '(c1t0d0s0|c1t1d0s0)' ; prtconf -vp | grep bootpath ; metastat d10
    /dev/dsk
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435895 4740117 43% /
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t0d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t1d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a
    bootpath: '/pci@1c,600000/scsi@2/disk@0,0:a'
    d10: Mirror
    Submirror 0: d11
    State: Okay
    Submirror 1: d12
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks
    d11: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s0 0 No Okay
    d12: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s0 0 No Okay
    user1@myserver>
    ===================================================================================
    ok nvalias backup_root <disk path>
    Redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want to access them. For example:
    ok printenv boot-device
    boot-device= disk net
    ok setenv boot-device disk backup_root net
    boot-device= disk backup_root net
    In the event of primary root disk failure, the system automatically boots from the secondary submirror. To test the secondary submirror, boot the system manually, as follows:
    ok boot backup_root
    user1@myserver>metadb -i
    flags first blk block count
    a m p luo 16 1034 /dev/dsk/c1t0d0s7
    a p luo 1050 1034 /dev/dsk/c1t0d0s7
    a p luo 2084 1034 /dev/dsk/c1t0d0s7
    a p luo 16 1034 /dev/dsk/c1t1d0s7
    a p luo 1050 1034 /dev/dsk/c1t1d0s7
    a p luo 2084 1034 /dev/dsk/c1t1d0s7
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    user1@myserver>df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435896 4740116 43% /
    /dev/md/dsk/d40 2053605 929873 1062124 47% /usr
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    /dev/md/dsk/d30 2053605 937231 1054766 48% /var
    swap 2606008 24 2605984 1% /var/run
    swap 6102504 3496520 2605984 58% /tmp
    /dev/md/dsk/d60 13318206 8936244 4248780 68% /u01
    /dev/md/dsk/d50 5161437 2916925 2192898 58% /opt
    user1@myserver>metastat d40
    d40: Mirror
    Submirror 0: d41
    State: Okay
    Submirror 1: d42
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d41: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s4 0 No Okay
    d42: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s4 0 No Okay
    user1@myserver>metastat d30
    d30: Mirror
    Submirror 0: d31
    State: Okay
    Submirror 1: d32
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d31: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s3 0 No Okay
    d32: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s3 0 No Okay
    user1@myserver>metastat d50
    d50: Mirror
    Submirror 0: d51
    State: Okay
    Submirror 1: d52
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10487070 blocks
    d51: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s5 0 No Okay
    d52: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s5 0 No Okay
    user1@myserver>metastat -p
    d10 -m d11 d12 1
    d11 1 1 c1t0d0s0
    d12 1 1 c1t1d0s0
    d20 -m d21 d22 1
    d21 1 1 c1t0d0s1
    d22 1 1 c1t1d0s1
    d30 -m d31 d32 1
    d31 1 1 c1t0d0s3
    d32 1 1 c1t1d0s3
    d40 -m d41 d42 1
    d41 1 1 c1t0d0s4
    d42 1 1 c1t1d0s4
    d50 -m d51 d52 1
    d51 1 1 c1t0d0s5
    d52 1 1 c1t1d0s5
    d60 -m d61 d62 1
    d61 1 1 c1t0d0s6
    d62 1 1 c1t1d0s6
    user1@myserver>pkginfo -l SUNWmdg
    PKGINST: SUNWmdg
    NAME: Solstice DiskSuite Tool
    CATEGORY: system
    ARCH: sparc
    VERSION: 4.2.1,REV=1999.11.04.18.29
    BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
    DESC: Solstice DiskSuite Tool
    PSTAMP: 11/04/99-18:32:06
    INSTDATE: Apr 16 2004 11:10
    VSTOCK: 258-6252-11
    HOTLINE: Please contact your local service provider
    STATUS: completely installed
    FILES: 150 installed pathnames
    6 shared pathnames
    19 directories
    1 executables
    7327 blocks used (approx)
    user1@myserver>
    =======================================================================================
    5) After successfully testing the above, bring the system to single-user mode:
    # reboot -- -s
    6) Detach the following submirrors:
    # metadetach -f d10 d12
    # metadetach -f d30 d32
    # metadetach -f d40 d42
    # metadetach -f d50 d52
    # metastat =====> (to check that the submirrors are successfully detached)
    7) Apply the patch cluster on the server.
    After patch installation is complete, reboot the server to single-user mode:
    # reboot -- -s
    Confirm that the patch installation was successful (uname -a).
    8) Boot the server to multi-user mode (init 3) and confirm with the database and application teams that the
    applications/databases are working fine. Once confirmed, reattach the submirrors:
    # metattach d10 d12
    # metattach d30 d32
    # metattach d40 d42
    # metattach d50 d52
    # metastat d10 =====> (to check the submirror is successfully reattached)
    user1@myserver>uname -a ; cat /etc/release ; date
    SunOS myserver 5.8 Generic_117350-04 sun4u sparc SUNW,Sun-Fire-V210
    Solaris 8 HW 12/02 s28s_hw1wos_06a SPARC
    Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
    Assembled 12 December 2002
    Mon Apr 14 17:10:09 BST 2008
    -----------------------------------------------------------------------------------------------------------------------------
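    A quick verification sketch, hedged and using the metadevice names from the plan above, to confirm before patching that each mirror is now running on a single submirror and that the detached pre-patch copies are intact:

    metastat -p                       # each mirror line should now list only one attached submirror
    metastat d12 d32 d42 d52          # the detached (pre-patch) submirrors should still report State: Okay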

    Recommended patch sets are, to the best of my knowledge, regenerated twice a month and always have been.
    I think you're thinking of maintenance releases, where they generate a new CD image that can be used to do an upgrade install.
    They try to generate those every 6 months, but the schedule often slips.
    The two most recent were Solaris 10 11/06 and Solaris 10 8/07.

  • Non-privileged commands fail after patch cluster install

    For several years, to allow users to access system functions for higher clock resolutions in their
    applications, we have made the following mods to two files (per Sun recommendations):
    In the /etc/system file, add lines:
    * 10 ms clock resolution
    set higher_tick = 1
    In /etc/user_attr, add:
    dssuser::::type=normal;defaultpriv=proc_clock_highres,file_link_any,proc_info,proc_session,proc_fork,proc_exec
    Everything was fine until I installed the latest patch cluster (from 2-14-2011) on our Netra 210. No issues installing the cluster. Upon reboot, the dssuser cannot do some simple things, like use ftp or ping. Messages like this:
    ping: unknown host
    ftp: socket: Permission denied.
    The root user has no problems using any of these commands. As soon as I comment out the dssuser line from the /etc/user_attr file, everything works as expected. But we need the highres clock settings for our applications. I am unfamiliar with the workings and intent of the entries in user_attr so any help would be most appreciated.
    Mark

    The problem is that after the patch installation, the special privileges for any user defined in the /etc/user_attr file were modified to append the string "!net_privaddr", specifically denying the use of common commands such as ping and ftp. The output of ppriv:
    $ ppriv $$
    14870: -sh
    flags = <none>
    E: basic,proc_clock_highres,!net_privaddr
    I: basic,proc_clock_highres,!net_privaddr
    P: basic,proc_clock_highres,!net_privaddr
    L: all
    I had to add the keyword net_privaddr in the user_attr file like:
    dssuser::::type=normal;defaultpriv=proc_clock_highres,file_link_any,proc_info,proc_session,proc_fork,proc_exec,net_privaddr
    to get past this problem. A bug or what?
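    A hedged sketch of the same change made without hand-editing /etc/user_attr (usermod -K should be equivalent on Solaris 10; dssuser is the account name from the post):

    # set the account's default privileges (equivalent to the user_attr line above)
    usermod -K defaultpriv=proc_clock_highres,file_link_any,proc_info,proc_session,proc_fork,proc_exec,net_privaddr dssuser
    # log in again as dssuser and confirm the negated privilege is gone
    su - dssuser -c 'ppriv $$'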

  • Solaris 2.6 Patch cluster

    Does anyone out there know where I can get the last Solaris 2.6 patch cluster? I know it's de-supported now, but sadly some legacy applications are still out there and need to run on old software/hardware.
    Many thanks in advance,
    Jonathan

    if you run a pstack against the core, I might
    recognize the failure. Usually, logging is of no
    use, as we don't log a crash. . .Thanks. I'll do that. It just generated a new core file.
    >
    I don't know of any problems with installing 4.15p7,
    nor of installing later hotfixes on top of it, and
    I've been supporting Messaging Server since 4.0 came
    out. . . .
    If you've had file permission issues since 4.05,
    that's truly news to me, unless it was that pesky
    problem with Admin Server, where it would change the
    permissions of the logs when it went to roll them
    over. I don't remember when we fixed that one, but
    we did..We've had file permission issues with the patches since 4.0. When we'd upgrade some of the files would be owned by root and not exec'able by nobody. Some of the patches would fail when the patch process would try to bind to the directory (I don't know why). We've used NMS since the 3.x version. It has also been a long time since we tried to patch to app because our experience was bad. It was stable for a long time (once we disabled shared folders and resolved the nscd issue).

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

    Has anyone noticed this? The Solaris 8 and 9 patch cluster READMEs show that Solaris 9 patches have been added to the Solaris 8 cluster and Solaris 8 patches have been added to the Solaris 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zipfile became so large that it was supplanted by 117191-xx, which in turn was supplanted by 118558-xx when its zipfile became very large.
    Consequently you will never see any newer version than 112233-12 of that particular patch.
    What does uname -a show you for that system?
    Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
    If you have login privileges to SunSolve, find Infodoc 76028.

  • Solaris 8 x86 patch cluster installation

    Hi. I recently installed Solaris 8 x86 on my Intel P4 machine. I got through the installation fine, but I am now running into problems installing the 8_x86 Recommended patch cluster.
    Whenever I run the install_cluster script, it says every patch failed to install due to return code 1. I even tried installing the patches manually via patchadd, but I end up getting a message saying the patch directory is not valid.
    I basically downloaded the 8_x86 Recommended patch cluster zip file on a Windows 2000 machine, unzipped it, and burned the patches onto a CD. I then copied the patches from the CD onto my Solaris machine and tried to install them that way. So far this doesn't work, and I don't know what I'm doing wrong.
    Does anyone know how to fix this problem? Thanks.

    Hi again. I just fixed my problem. Apparently when I unzipped the file and then burned it onto CD, the data got corrupted. I fixed it by copying the zip file itself onto the CD and then extracting the patches on the Solaris machine.
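    A minimal sketch of that working sequence (the zip file name, unpack directory and install_cluster script name are the usual ones for the 8_x86 Recommended cluster and may differ for other vintages; assumes unzip is present on the Solaris box):

    # copy the still-zipped cluster from the CD, then unpack it on Solaris itself
    cp /cdrom/cdrom0/8_x86_Recommended.zip /var/tmp
    cd /var/tmp && unzip 8_x86_Recommended.zip
    cd 8_x86_Recommended && ./install_cluster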

  • Latest patch cluster dated March 14 installed on Solaris 10

    I installed the latest patch cluster on my Solaris 10 system, and afterwards the Solaris Management Console would not work. I kept getting the error that the server was not running. I followed the limited troubleshooting guidelines for stopping and starting the service, but it still did not work. After reviewing the patch cluster I discovered there was a patch that updated SMC. There were no special instructions for this patch. I removed the patch and SMC worked fine. Has anyone had any problems with patch 121308-12?

    Patch 119313-19 (SPARC) or 119314-20 (x86) will resolve your problem.
    But these patches are NOT included in the Recommended patches.
    You need to join Sun's SunSolve or Sun's standard support.
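    For reference, a hedged sketch of the backout-and-check steps described above (patch IDs exactly as given in the thread):

    showrev -p | grep 121308          # confirm which revision of the SMC-related patch is installed
    patchrm 121308-12                 # back it out if SMC is broken
    patchadd 119313-19                # or install the fix the reply points to (SPARC example)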

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
    If I then create a flash archive and use this flash archive the jumpstart process then puts the locale info back and the problem appears again.
    It's not critical, as I don't need to be on the latest patch cluster, but I wondered if I'm the only one having issues?
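    A small hedged sketch for checking what the cluster wrote back and trimming it, rather than editing /etc/default/init by hand each time (paths as in the post; keep a backup copy):

    grep '^LC_' /etc/default/init     # see which locale categories the patch restored
    cp /etc/default/init /etc/default/init.orig
    egrep -v '^LC_(COLLATE|CTYPE|MESSAGES|MONETARY|NUMERIC|TIME)=' /etc/default/init.orig > /etc/default/init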

    If you open the directory in CDE's file manager, right click on the zipped file and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
    Lee

  • Patch cluster 3/26/10 will hose X4500 disk management task in multiuser

    I just want to warn users about a potential problem you may encounter if you have a Sun X4500 system.
    After applying the patch cluster from 3/26/10 on a Sun X4500 (Thumper), I can no longer perform many disk management tasks in multi-user mode; commands such as
    format
    zpool create
    and possibly many more (anything involving disk devices) just hang.
    Drop to single-user mode, however, and you CAN still do these management tasks. I had to create a zpool in single-user mode.
    zfs create pool/zfsset still works in multi-user mode.
    Thank you for your attention.

    I'm having exactly the same problem on an x4500 server running OpenSolaris snv_127. We suffered a disk failure (c7t0d0), so I resilvered one of the pools onto a spare drive, which succeeded:
    # zpool history data
    <snip>
    2010-04-20.14:45:08 zpool offline data c7t0d0
    2010-04-20.14:58:19 zpool replace -f data c7t0d0 c7t1d0
    # zpool status data
    pool: data
    state: ONLINE
    scrub: resilver completed after 43h58m with 0 errors on Thu Apr 22 10:56:50 2010
    config:
    NAME STATE READ WRITE CKSUM
    data ONLINE 0 0 0
    raidz2-0 ONLINE 0 0 0
    c11t1d0 ONLINE 0 0 0
    c10t1d0 ONLINE 0 0 0
    c13t1d0 ONLINE 0 0 0
    c12t0d0 ONLINE 0 0 0
    c8t1d0 ONLINE 0 0 0
    c12t1d0 ONLINE 0 0 0
    c10t0d0 ONLINE 0 0 0
    c10t7d0 ONLINE 0 0 0
    c12t2d0 ONLINE 0 0 0
    c8t0d0 ONLINE 0 0 0
    c7t1d0 ONLINE 0 0 0 381G resilvered
    As you can see, the drive c7t0d0 has been successfully replaced and no longer appears in any zpool:
    # zpool status | grep c7t0d0
    # iostat -En | grep c7t0d0
    c7t0d0 Soft Errors: 2 Hard Errors: 4 Transport Errors: 0
    # cfgadm -l | grep c7t0d0
    sata1/0::dsk/c7t0d0 disk connected configured ok
    However, when I attempt to unconfigure the SATA port in preparation for replacing the drive, it doesn't work:
    # cfgadm -c unconfigure sata1/0
    Unconfigure the device at: /devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1:0
    This operation will suspend activity on the SATA device
    Continue (yes/no)? yes
    cfgadm: Hardware specific failure: Failed to unconfig device at ap_id: /devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1:0
    Also just pulling out the drive doesn't work, you end up with an unusable unconfigured port:
    # cfgadm -l | grep unusable
    sata6/0 disk connected unconfigured unusable
    It's highly frustrating, because it means to replace disks we'll have to shut this server down. Not ideal!
    I'm fairly certain this used to work fine.
    If anyone has a solution, I'm all ears. I'm going to update this box to snv_130 soon (mainly to get the new zfs resilvering code) but I'm not hopeful about this bug/issue.
    Regards,
    Alasdair
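    For reference, the replacement sequence attempted above, condensed (pool, device and attachment-point names exactly as in the post; the cfgadm step is the one that fails here):

    zpool offline data c7t0d0
    zpool replace -f data c7t0d0 c7t1d0
    zpool status data                     # wait for the resilver to finish
    cfgadm -c unconfigure sata1/0         # fails with "Hardware specific failure"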

  • GLOBAL ZONE stuck in shutting_down after applying latest patch cluster

    Hi,
    After installing the latest patch cluster, my zones on the system are not accessible:
    root@typhoon # zoneadm list -civ
    ID NAME STATUS PATH BRAND IP
    0 global shutting_down / native shared
    - typhoon-log installed /local/zone/typhoon-log native shared
    - typhoon-ftp installed /local/zone/typhoon-ftp native shared
    - typhoon-rndrepos installed /local/zone/typhoon-rndrepos native shared
    - typhoon-drpreview installed /local/zone/typhoon-test native shared
    - typhoon-ossec installed /local/zone/typhoon-ossec native shared
    - typhoon-drjboss installed /local/zone/typhoon-drjboss native shared
    - typhoon-webdata installed /datapool/web/zones/webdata native shared
    - typhoon-webzone installed /datapool/web/zones/typhoon-webzone native shared
    # uname -a
    SunOS typhoon 5.10 Generic_139556-08 i86pc i386 i86pc
    There is one service reporting maintenance:
    # svcs -xv
    svc:/system/filesystem/volfs:default (Volume Management filesystem)
    State: maintenance since Wed Jul 15 20:04:07 2009
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/man -s 7FS volfs
    See: /var/svc/log/system-filesystem-volfs:default.log
    Impact: This service is not running.
    I tried stopping and starting vold, but without any effect.
    The / filesystem is on a ZFS pool:
    # zpool list
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    datapool 576G 105G 471G 18% ONLINE -
    datapool2 148G 111G 37.0G 74% ONLINE -
    rpool 120G 37.1G 82.9G 30% ONLINE -
    root@typhoon # zpool status
    pool: datapool
    state: ONLINE
    scrub: scrub completed after 0h31m with 0 errors on Thu Jul 16 02:31:53 2009
    config:
    NAME STATE READ WRITE CKSUM
    datapool ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c2d1s3 ONLINE 0 0 0
    c1d1s3 ONLINE 0 0 0
    errors: No known data errors
    pool: datapool2
    state: ONLINE
    scrub: scrub completed after 1h53m with 0 errors on Wed Jul 15 23:53:06 2009
    config:
    NAME STATE READ WRITE CKSUM
    datapool2 ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c2d0s0 ONLINE 0 0 0
    c1d0s3 ONLINE 0 0 0
    errors: No known data errors
    pool: rpool
    state: ONLINE
    scrub: scrub completed after 0h26m with 0 errors on Thu Jul 16 06:20:39 2009
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c1d1s0 ONLINE 0 0 0
    c2d1s0 ONLINE 0 0 0
    errors: No known data errors
    root@typhoon #
    # tail /var/svc/log/system-filesystem-volfs:default.log
    [ Jul 16 08:03:11 Method "start" exited with status 0 ]
    Thu Jul 16 08:03:11 2009 fatal: mounting of "/vol" failed
    [ Jul 16 08:03:11 Stopping because all processes in service exited. ]
    [ Jul 16 08:03:11 Executing stop method (:kill) ]
    [ Jul 16 08:03:11 Executing start method ("/lib/svc/method/svc-volfs start") ]
    [ Jul 16 08:03:11 Method "start" exited with status 0 ]
    Thu Jul 16 08:03:11 2009 fatal: mounting of "/vol" failed
    [ Jul 16 08:03:11 Stopping because all processes in service exited. ]
    [ Jul 16 08:03:11 Executing stop method (:kill) ]
    [ Jul 16 08:03:11 Restarting too quickly, changing state to maintenance ]
    Please advise (this was a test run before implementing this patch cluster on the production server).
    I need a solution to get my zones running again, and then I can decide what to do with this patch cluster.
    Thanks
    Edited by: Denis@SDN on Jul 16, 2009 1:06 AM

    Sound familiar?
    [http://opensolaris.org/jive/thread.jspa?threadID=105001&tstart=0]
    This guy killed a process as a workaround:
    [http://alittlestupid.com/2009/07/04/solaris-zone-stuck-in-shutting_down-state/]
    We patched some SPARC systems recently with no issues, though that's little consolation to you x86 admins.
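    If the volfs maintenance state is the only visible symptom, a hedged first step (not a guaranteed fix for the global zone's shutting_down state) is to clear the service and watch whether the /vol mount failure simply recurs:

    svcadm clear svc:/system/filesystem/volfs:default
    svcs -l svc:/system/filesystem/volfs:default
    tail -f /var/svc/log/system-filesystem-volfs:default.log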

  • SPARC10 not booting after 2.5.1 Recommended Patch Cluster Install

    Hi everyone,
    Just finished installing the "latest" patch cluster on my ancient SPARC10, and now it doesn't boot. Here's what I see:
    ok boot /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0
    screen not found.
    Can't open input device.
    Keyboard not present. Using tty for input and output.
    SPARCstation 10 (1 X 390Z55), No Keyboard
    ROM Rev. 2.25, 112 MB memory installed, Serial #6332710.
    Ethernet address 8:0:20:12:7f:99, Host ID: 7260a126.
    Rebooting with command: /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0
    Boot device: /iommu/sbus/espdma@f,400000/esp@f,800000/sd@0,0 File and args:
    not found: abort_enable
    not found: abort_enable
    not found: tod_validate
    not found: tod_fault_reset
    krtld: error during initial load/link phase
    Memory Address not Aligned
    Type help for more information
    ok
    At first I thought I had a RAM problem. I've swapped RAM, but that didn't make it boot. I then thought I had a system board problem. I've swapped that as well, and still no dice. I wish I'd have kept the list of patches that successfully installed like my gut told me to, but I didn't. Searching Google and the Sun websites didn't turn up any fixes to the problem, so I'm hoping I can find help here.
    Did the patches clobber something the machine needs to boot? I believe the OS disks are mirrored, and I've tried booting off each submirror, but the result is the same.
    If I can avoid booting off of CD-ROM, that would be ideal since I don't have an external CD-ROM available to boot from.
    Thanks for any help!

    Ouch !
    Haven't seen that one in years !
    It's your choice of that ancient Operating Environment, combined with a cpu faster than 419MHz.
    I hope you still have the extra OECD disk that was packed with that system when it was first shipped from Sun.
    Otherwise you may just be out of luck with using Solaris 2.5.1
    Here's a link at a non-Sun 3rd party web site to SRDB document *20576*
    http://www.sunshack.org/data/sh/2.1/infoserver.central/data/syshbk/collections/intsrdb/20576.html
    Perhaps MAAL and haroldb can contribute their thoughts on this.
