Patching a SunCluster node

Hi,
we are trying to patch a Sun Cluster environment:
2 nodes, Solaris 10 and SC 3.2.
We want to patch the systems in non-cluster mode, but during the patch the system is rebooted and comes back up in cluster mode.
One workaround would be to boot one node into multi-user mode without cluster services and force patching in multi-user mode.
Does anybody have any experience or ideas?
Thanks
Greets
Bjoern

For SunCluster environments, the following method can be used:
- use Sun Connection to analyze needed patches with a simulate job
- create a patch set for manual installation (with create_patchset*)
- manually reboot into non-cluster mode (boot -sx)
- manually add the patch set with install_all_patches*
- reboot in cluster mode
*If you do not have these scripts, they can be obtained from your support representative.
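As a rough sketch only of the cycle described above, assuming the patch set was staged under a hypothetical /var/tmp/patchset directory (boot -sx is entered at the OpenBoot ok prompt, not in the shell):

```shell
# At the OBP prompt, boot single-user in non-cluster mode:
#   ok boot -sx
# Once up outside the cluster, apply the prepared patch set:
cd /var/tmp/patchset       # hypothetical staging directory for the patch set
./install_all_patches
# When all patches are on, reboot back into cluster mode:
init 6
```

Repeat the same cycle on the second node once the first has rejoined the cluster cleanly.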

Similar Messages

  • Rolling Patch on 11gR2 nodes

    Environment:
    Operating System: Red Hat Enterprise Linux Server release 5.7 (Tikanga), kernel 2.6.18-274.7.1.el5 (64-bit)
    RDBMS:   Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    It is a 3 Node RAC.
    I would like the steps, in brief and in detail, to apply PSU 7 using the ROLLING UPGRADE method on the 3-node RAC.
    Please also point me to any link or details.
    Thanks Sivaprasad.S

    IMP NOTE:
    TIME TAKEN FOR EACH NODE: ~1hr 30 mins
    STEP 01: CREATE PSU DIRECTORY -- ON ALL NODES
    $ cd $ORACLE_HOME
    $ mkdir psu
    STEP 02: DOWNLOAD AND UPLOAD THE LATEST OPATCH TO THE SERVER -- ON ALL NODES
    copy the <NAME>.zip file into the psu folder on all the servers that are part of the cluster
    STEP 03: DOWNLOAD AND UPLOAD THE GI AND DB PATCH TO THE SERVER -- ON ALL NODES
    copy the <NAME>.zip file into the psu folder on all the servers that are part of the cluster
    STEP 04: STOP THE AGENT -- ON NODE1 FIRST
    $ . oraenv
    ORACLE_SID = [agent] ? agent
    $ emctl stop agent
    STEP 05: SHUT DOWN THE INSTANCE -- ON NODE1 FIRST
    $ . oraenv
    ORACLE_SID = [orcl1] ? orcl1
    $ srvctl stop instance -i orcl1 -d orcl -o immediate
    STEP 06: STOP CRS -- ON NODE1 FIRST
    # cd $CRS_HOME/bin
    # ./crsctl stop crs
    STEP 07: APPLY PATCH FOR GRID (GI) -- ON NODE1 FIRST
    As the root user on NODE1, execute:
    <GRID_HOME>/OPatch/opatch auto <GRID_HOME>/psu -oh <GRID_HOME> -ocmrf <GRID_HOME>/OPatch/ocm/bin/ocm.rsp
    STEP 08: APPLY PATCH FOR THE DATABASE HOME (DB) -- ON NODE1 FIRST
    As the oracle user:
    $ $ORACLE_HOME/OPatch/opatch apply -local
    STEP 09: START CRS
    $ cd $CRS_HOME/bin
    $ sudo ./crsctl start crs
    STEP 10: VERIFICATION
    Verify that an $ORACLE_HOME/psu/*********/catbundle_PSU_<database SID>_ROLLBACK.sql file exists for each database associated with this ORACLE_HOME.
    Repeat STEP 04, STEP 05, STEP 07 and STEP 08 on NODE2 and NODE3, one after the other.
    LAST BUT NOT LEAST:
    RUN catbundle.sql -- only on the last instance
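Purely as an illustrative sketch (not a substitute for the patch readme): assuming passwordless SSH and placeholder host names rac1..rac3, the per-node rolling sequence above amounts to something like:

```shell
# One node at a time; host, database and instance names are placeholders.
for i in 1 2 3; do
  ssh oracle@rac$i "emctl stop agent"
  ssh oracle@rac$i "srvctl stop instance -i orcl$i -d orcl -o immediate"
  ssh root@rac$i   '$GRID_HOME/OPatch/opatch auto $GRID_HOME/psu -ocmrf $GRID_HOME/OPatch/ocm/bin/ocm.rsp'
  ssh oracle@rac$i '$ORACLE_HOME/OPatch/opatch apply -local'   # DB home, per the readme
  ssh root@rac$i   '$CRS_HOME/bin/crsctl start crs'
done
```

Only move on to the next node once the previous one is back up and its instance has rejoined the cluster.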

  • RAC Patching & status of nodes

    Hi All
    1. After patching in a RAC environment, how can we make sure that a particular node is patched properly? Is there a table available to query?
    2. What are the ways to identify the status (alive or dead) of a particular node in a RAC environment? We can check with srvctl, crs_stat from a surviving node, through OEM, etc. Any other methods?
    Thanks

    1. Use "opatch lsinventory" to list all installed patches in a particular Oracle home.
    2. Manually log in to each instance ^^
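For example (the home path and patch number here are placeholders, not from this thread), a quick check for one specific patch might look like:

```shell
# List the Oracle home's patch inventory and search for a given patch ID.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1   # placeholder path
$ORACLE_HOME/OPatch/opatch lsinventory | grep 1234567        # hypothetical patch number
```

Run it against each node's home to confirm the patch landed everywhere.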

  • Patching on a Dual node

    Hi ,
    Can you please explain to me what the process of applying a patch on a dual-node eBiz environment should be?
    For example, if all of my application tier (forms, reports, admin, apache etc.) is on one node and my database is on the other node:
    when applying a patch, should I apply the patch on both nodes, or only on the application tier node?
    Is there any difference between applying a patch on a single node and multinode, please?
    kind regards
    Krish

    for ex if all of my application tier (forms, reports, admin, apache etc) is on one node and my database on other node, when applying a patch, should i be applying the patch on both nodes? or should i be applying on only application tier node?
    With this configuration, you only need to apply it on the application tier node.
    is there any difference applying a patch on a single node and multinode please
    If you have more than one application tier (more than one APPL_TOP), then you need to apply the patch on each node.

  • Reservation conflict after patching

    Hi All,
    I just added the latest SunCluster patches and some adjustments to the ssd.conf file. After this I got reservation conflicts on all LUNs that belong to the cluster. I'm able to switch a resource group to the newly patched node after doing a scsi scrub on all the LUNs that belong to the RG.
    Am I way lost here? Is it a normal state for a disk path not to have any reservation keys?
    /BR
    ulf

    Hi,
    I'm pretty sure that SCSI3-PGR is used here. The default_fencing for all disks is global, and it seems it is derived from the global_fencing setting, which was set to prefer3.
    Even though it looks OK, I still got the reservation panics when moving a disk to the other node.
    It seems like scrubbing and recreating the keys is the only way out here, and only then can I patch the second node.
    # grep access_mode /etc/cluster/ccr/infrastructure
    cluster.quorum_devices.2.properties.access_mode scsi3
    All shared disks look the same.
    # ./scsi -c inkeys -d /dev/did/rdsk/d80
    Reservation keys(2):
    0x460a6dfb00000002
    0x460a6dfb00000001
    # ./scsi -c inresv -d /dev/did/rdsk/d80
    Reservations(1):
    0x460a6dfb00000002
    type ---> 5
    # ./scsi -c status -d /dev/did/rdsk/d80
    status...0
    /BR
    Ulf

  • Patch 115780-03

    Installing patch 115780-03 returns the following: One or more patch packages included in
    115780-03 are not installed on this system.
    The problem is that I do not know what else is required for a successful installation.
    The only other dependency of this patch is patch 110934-12, which is already applied.
    If anyone has had a similar problem, please comment.
    Thanks.
    Lumenca

    This is a patch for the package SUNWgnome-wm, which needs to be installed first.
    I wonder why somebody wants to have GNOME on a SunCluster node ;-)
    bye,
    jono
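On Solaris 10 you can confirm whether that package is present before retrying the patch:

```shell
# Verify the GNOME window manager package is installed;
# patchadd 115780-03 requires its target packages to be present on the system.
pkginfo -l SUNWgnome-wm
```

If pkginfo reports the package is not found, install it (or skip the patch) before running patchadd again.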

  • What are the preferred methods for backing up a cluster node bootdisk?

    Hi,
    I would like to use flarcreate to back up the boot disks for each of the nodes in my cluster... but I cannot see this method mentioned in any cluster documentation...
    Has anybody used flash backups for cluster nodes before (and, more importantly, successfully restored a cluster node from a flash image)?
    Thanks very much,
    Trevor

    Hi, some background on this - I need to patch some production cluster nodes, and obviously would like to back up the root disk of each node before doing this.
    What I really need is some advice about the best method to back up & patch my cluster nodes (with a recovery method also).
    The Sun documentation for this says to use ufsdump, which I have used in the past - but will FLAR do the same job? Has anyone had experience using FLAR to restore a cluster node?
    Or does someone have other solutions for patching the nodes? Maybe offline my root mirror (SVM), patch the root disk, and, barring any major problems, online the mirror again??
    Cheers, Trevor
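A sketch of the split-mirror and FLAR ideas above (metadevice names and paths are placeholders, not taken from this thread):

```shell
# Option A: detach one submirror of the SVM root mirror as a rollback copy.
metadetach d0 d20        # d0 = root mirror, d20 = second submirror (placeholders)
# ...patch the node, reboot, verify it rejoins the cluster cleanly...
metattach d0 d20         # reattach and resync the submirror once satisfied
# Option B: create a flash archive of the boot disk before patching.
flarcreate -n node1-prepatch /backup/node1-prepatch.flar   # placeholder name/path
```

Option A has the advantage that rollback is just booting from the detached submirror; the open question in this thread (whether a FLAR restore brings back a working cluster node) applies only to Option B.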

  • Patch 4775612 issue for Migration of 11.5.10.2 from Windows to Linux

    Hi All
    I am planning to migrate Oracle Applications 11.5.10.2 from Windows 2003 to Oracle Enterprise Linux. I have a dual-node system: my application is on the APPS node, and my CONCURRENT MANAGER and DB are on the DB node. I applied patch *4775612* on both nodes after shutting down the APPLICATIONS services and the CONCURRENT MANAGER services on the DB node, but it failed. The errors are:
    ERROR ON APPS NODE:
    ===================
    Error was found when copying the DLLs from AU_TOP/bin to PROD_TOP/bin
    adrelink is exiting with status 1
    End of adrelink session
    Date/time is Wed Jun 24 15:38:30 AST 2009
    An error occurred while relinking application programs.
    Continue as if it were successful [No] :
    ERROR ON DB NODE:
    =================
    An error occurred while relinking application programs.
    Continue as if it were successful [No] :
    You should check the file
    e:\oracle\prodappl\admin\PROD\log\4775612.log
    please advise
    Regards
    Edited by: user457577 on Jun 24, 2009 4:21 AM

    Hi,
    Thanks for your reply. I followed your instructions and stopped all services on the DB node (as my CONCURRENT MANAGER is on the DB node) and applied the patch. It succeeded, but when I tried to apply the same patch on the APPLICATIONS node I got the following error:
    Error executing cp -pf d:/oracle/prodappl/au/11.5.0/bin/ar*.dll d:/oracle/prodappl/ar/11.5.0/bin
    Please make sure file isn't being used.
    Copy DLLs of AS to d:/oracle/prodappl/as/11.5.0/bin
    Copy DLLs of BOM to d:/oracle/prodappl/bom/11.5.0/bin
    Copy DLLs of DT to d:/oracle/prodappl/dt/11.5.0/bin
    Copy DLLs of ENG to d:/oracle/prodappl/eng/11.5.0/bin
    Copy DLLs of FA to d:/oracle/prodappl/fa/11.5.0/bin
    Copy DLLs of FF to d:/oracle/prodappl/ff/11.5.0/bin
    Copy DLLs of FND to d:/oracle/prodappl/fnd/11.5.0/bin
    Copy DLLs of GL to d:/oracle/prodappl/gl/11.5.0/bin
    Copy DLLs of INV to d:/oracle/prodappl/inv/11.5.0/bin
    Copy DLLs of MRP to d:/oracle/prodappl/mrp/11.5.0/bin
    Copy DLLs of GMA to d:/oracle/prodappl/gma/11.5.0/bin
    Copy DLLs of PAY to d:/oracle/prodappl/pay/11.5.0/bin
    Copy DLLs of PO to d:/oracle/prodappl/po/11.5.0/bin
    Copy DLLs of WIP to d:/oracle/prodappl/wip/11.5.0/bin
    Error was found when copying the DLLs from AU_TOP/bin to PROD_TOP/bin
    adrelink is exiting with status 1
    End of adrelink session
    Date/time is Wed Jun 24 19:05:39 AST 2009
    An error occurred while relinking application programs.
    Continue as if it were successful [No] :
    You should check the file
    d:\oracle\prodappl\admin\PROD\log\4775612
    for errors.
    This time I not only stopped all the services but disabled them as well, and restarted the servers. I also checked adrelink.log and it displayed the same error message.
    Please advise.
    Regards
    Edited by: user457577 on Jun 24, 2009 6:18 AM

  • How to failover SCAN VIP and SCAN Listener from one node to another?

    Environment:
    O.S :          HP-UX  B.11.31 U ia64
    RDBMS:   Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    It is a 2 Node RAC.
    Question:
    How do I fail over the SCAN VIP and SCAN LISTENER running on node 1 to node 2?
    What is the relation between the standard LISTENER and the SCAN LISTENER?
    Why do we need the LISTENER when we have the SCAN LISTENER?
    When I tried SRVCTL STOP LISTENER, I thought the SCAN LISTENER and SCAN IP would fail over, but they did not.
    Also please clarify if I can use SRVCTL RELOCATE SCAN -i 1 -n Node1.
    Actually I am trying to move the SCAN listeners so that when I do the PSU 7 patching on one node, no incoming connection attempt will spawn
    a process and thereby open files in $ORACLE_HOME (which would prevent the patch from being applied).
    Please clarify my queries.
    Thanks,  Sivaprasad.S

    Hi Sivaprasad,
    1. The following link will help you for SCAN VIP and SCAN LISTENER failover from 1 node to another.
    http://heliosguneserol.wordpress.com/2012/10/19/how-to-relocate-scan_listener-from-one-node-to-another-node-on-rac-system/
    http://oracledbabay.blogspot.co.uk/2013/05/steps-to-change-scan-ip-address-in.html
    2. The standard LISTENER is specific to the node on which it runs; it cannot be relocated. SCAN listeners are not replacements for the node listeners. A set of cluster resources called SCAN listeners runs on three nodes in the cluster (or on all nodes if there are fewer than three). Regardless of how many nodes you have, there will be at most three SCAN listeners. So there is no direct relation between the standard LISTENER and the SCAN LISTENER.
    3. Hmmm, let me put this simply. All the RAC services (ASM, database, services, nodeapps) register with the SCAN LISTENER. If any of these services goes down, the SCAN LISTENER knows its status, and if a client requests a node/service that is down, the SCAN LISTENER redirects the client request to the least loaded node. All of this happens without the client's knowledge. The standard LISTENER, as usual, handles incoming requests to connect to the database. So we need both the LISTENER and the SCAN LISTENER.
    4. If you run SRVCTL STOP LISTENER, it stops the default listener on the specified node_name, or the listeners given in a list of listener names, that are registered with Oracle Clusterware on that node. No failover will happen in this case.
    5. Yes, you can relocate the SCAN if you want to.
    Hope this helps!!
    Regards,
    Pradeep. V
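For reference, on 11.2 the relocation mentioned in point 5 can be done with srvctl; a sketch (the target node name here is a placeholder):

```shell
# See where the SCAN VIPs and SCAN listeners currently run.
srvctl status scan
srvctl status scan_listener
# Move SCAN listener 1 (its SCAN VIP follows it) to node2 before patching node1.
srvctl relocate scan_listener -i 1 -n node2
```

Repeat for any other SCAN listener ordinals running on the node to be patched.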

  • Applying patch on RAC seetup

    Dear all,
    Tomorrow I need to apply a bug-fix patch on my RAC database. Please tell me the step-by-step process for applying it.
    It is a 2-node RAC.
    Database version: 10.2.0.4
    OS: Linux RHEL 5. I have a couple of doubts about it:
    1) To apply the patch, do we need to bring down both nodes, or is bringing down one node and applying the patch on that node enough, to minimise the downtime?
    2) Correct me if my steps are wrong.
    Steps to down the database:
    1.on both the nodes:
    1.database down.
    2.asm down
    3.node apps down
    2. Is it enough to keep the downloaded interim patch in the OPatch directory under the Oracle home on one of the two nodes?
    3.set PATH environment variable
    4.set HOME environment variable
    5.Go to opatch folder , and go to patch number folder.
    6.opatch apply
    after that
    7. node apps up
    8. asm up
    9. database up. Will that do? And there is no need to apply it on both nodes... applying it on one node is enough, right?
    Or else, if any one of you has detailed documentation, please post the link here or mail it to my id
    [email protected] Regards,
    Vamsi.

    I have applied such patches several times, and I am confident that I can apply any patch at any time.
    But I'm sorry to say I can't tell you the steps; I'm afraid I'll miss something, and the smallest mistake can make your activity miserable. There is no generic method to apply a CPU/PSU patch. Methods differ.
    The best guide for you is the readme attached to the patch.
    The generic method to apply the patch (probably):
    1. Be cool, calm and confident
    2. Without being lazy, go through each and every line of the readme.
    3. Put the steps on a notepad (your interpretation of the readme)
    4. start the patch application process
    I know I have not answered your question, but this is the way which I follow.
    Regards,
    S.K.
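For what it's worth, a very rough sketch of the single-node flow the question describes (database, instance and staging names are placeholders; the patch readme always takes precedence, and it will say whether the patch is rolling-installable at all):

```shell
# On one node at a time, as the oracle user, with that node's stack down:
srvctl stop instance -d orcl -i orcl1     # placeholder db/instance names
cd /u01/stage/1234567                     # hypothetical unzipped patch directory
$ORACLE_HOME/OPatch/opatch apply -local   # patch only this node's home
srvctl start instance -d orcl -i orcl1
```

Whether ASM and nodeapps must also be stopped, and whether the patch must go on both homes, is exactly what the readme specifies per patch.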

  • Oracle Apps R12 Shared ApplTier Patching

    Hi
    I have a small doubt related to the R12 shared appl tier. Could you please clarify this for me?
    As per the Metalink notes and blogs, in an R12 shared appl tier the INST_TOPs exist separately on the multiple nodes. The INST_TOP holds all the configuration files, log files, xml, dbc and certificate files.
    If we want to apply apps patches, is patching on one node enough, or do we have to apply the patch on every node?
    Correct me if I am wrong: if a patch brings changes to APPL_TOP, COMMON_TOP or the techstack homes, then since it is a shared appl tier, one-time patching is enough. But if it also brings configuration-file changes to INST_TOP, do we have to apply it on every node?
    Then what is the advantage of the R12 shared appl tier...
    waiting for your reply.
    Edited by: user7239280 on Dec 23, 2009 11:58 AM

    Hi,
    As per the Metalink notes and blogs in R12 shared appltier INST_TOPs will be there seperately in multiple nodes. In INST_TOP all configuration files, logfiles, xml, dbc, certificates files will be there. If we want to apply apps patches means patching on one node is enough or we have to apply patch on every node?
    Correct. Just make sure you run AutoConfig on each application tier node once you apply the patch.
    Correct me if i am wrong. If patch brings any changes to APPL_TOP, COMMON_TOP, Techstack Homes related means its a shared appl_tier so one time patching is enough. If it brings parallely configuration files changes to INST_TOP files related also means we have to apply on every node? Then what is the advantage in R12 Share Appl Tier...
    You do not have to apply the patch on each node, just on one node, then run AutoConfig. By doing this, you save time and reduce the downtime, as the patches need to be applied only once.
    You may also see (Note: 745580.1 - Apply Patches in a Shared Application Tier File System Environment).
    Regards,
    Hussein
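Assuming a standard R12 layout, the "patch once, AutoConfig everywhere" flow described above might look like this (adpatch prompts interactively for its details):

```shell
# On the node holding the shared APPL_TOP, apply the patch once:
adpatch
# Then, on EVERY application tier node, regenerate the node-specific
# INST_TOP configuration:
$ADMIN_SCRIPTS_HOME/adautocfg.sh
```

The patch touches the shared file system only once; the per-node AutoConfig runs refresh each node's own INST_TOP.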

  • 10.2.0.4 patch on single instance standby database

    Hi All,
    We have a production database, 10.2.0.3, running as a 2-node RAC on Solaris 10. We have patched these
    databases to 10.2.0.4 (both nodes) on production without any issue.
    We have a physical standby database (10.2.0.3) with 2 nodes on Solaris 10, but we stopped node2 some time
    back and currently it is a single-instance standby database. When we try to apply the patch to the standby database,
    it shows both nodes during installation, and the installation fails when it tries to search for node2.
    What's the solution for this problem? Is there any document on how to patch a single-instance standby database when
    production is running RAC?

    I think you are basically saying that you have a 2-node RAC cluster with 1 node down (possibly permanently) and you want to patch just 1 of the nodes?
    It's not overly surprising that the installer is trying to patch the other node, when as far as your inventory is concerned you have a 2-node cluster.
    Have you tried running the installer with the -local option?
    This should just patch your local node. Obviously the dictionary changes will get applied via MRP from the primary db.
    jason.
    http://jarneil.wordpress.com
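The -local run suggested above might look like this (the staging path is a placeholder for this sketch):

```shell
# From the unzipped 10.2.0.4 patch set staging area, patch only this node's home:
cd /u01/stage/10204/Disk1      # hypothetical staging directory
./runInstaller -local
# Afterwards, confirm what the local inventory recorded:
$ORACLE_HOME/OPatch/opatch lsinventory
```

With -local, the installer updates only the local node's home and inventory, so it will not go looking for the stopped node2.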

  • Patch status in ad_patch

    Hi,
    I have 1 DB tier and 2 apps servers. One apps tier is used for the CM and other services, but node 2 has everything else minus the CM. When I applied the patch on both nodes, the query returned 2 rows. Is that correct?
    BUG_NUMBER BUG_ID CREATION_
    8623536 373661 16-JUN-11
    8623536 373519 16-JUN-11
    Please advise on dual-node architecture patching.
    thanks

    I have 1 dbtier and 2 apps servers. One apps tier is using for CM and other serverices but node 2 has everythings else minus CM. When I applied patch on both node it returns 2 rows. Is that correct?
    BUG_NUMBER BUG_ID CREATION_
    8623536 373661 16-JUN-11
    8623536 373519 16-JUN-11
    Please advice in dual node arcitecture patching.
    Have you tried checking the creation_date with something like to_char(creation_date, 'hh24:mi') to see if the timings correspond to when you applied the patches to the tiers? Your patch log would have the times applied, to verify.
    In my case, I have 11.5.10.2 with a shared APPL_TOP for most of the modules, but have a separate external apps tier outside our local network sitting in a DMZ - but I can see only one record for each bug, at least for all that were applied in the last 100 days.
    Regards,
    Edited by: DBA_EBiz_EBS on Jun 17, 2011 3:23 PM
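The timing check suggested above could be run like this (the credentials are placeholders):

```shell
# Show the time of day each patch record was created in AD_BUGS,
# to match against the apply times in the patch logs.
sqlplus -s apps/<password> <<'SQL'
SELECT bug_number, bug_id,
       TO_CHAR(creation_date, 'DD-MON-RR HH24:MI') AS applied_at
FROM   ad_bugs
WHERE  bug_number = '8623536';
SQL
```

Note the format mask uses MI for minutes; MM would print the month.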

  • Adding node back into cluster after removal...

    Hi,
    I removed a cluster node using "scconf -r -h <node>" (carried out all the other usual removal steps before getting this command to work).
    Because this is a pair+1 cluster and the node I was trying to remove was physically attached to the quorum device (scsi), I had to create a dummy node before the removal command above would work.
    I reinstalled Solaris, the SC3.1u4 framework, patches etc. and then tried to run scinstall again on the node (having first reintroduced the node to the cluster using scconf -a -T node=<node>).
    However, during the scinstall I got the following problem:
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    scrconf: RPC: Unknown host
    scinstall:  Failed communications with "bogusnode"
    scinstall: scinstall did NOT complete successfully!
    Press Enter to continue:
    Was not sure what to do at this point, but since the other cluster nodes could now see my 'new' node again, I removed the dummy node, rebooted the new node and said a little prayer...
    Now my node will not boot as part of the cluster:
    Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: CMM: Node bogusnode (nodeid = 4) with votecount = 0 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205346037.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #18 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    NOTICE: CMM: Node (nodeid = 4) with votecount = 0 removed.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #19 completed.
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    cladm: CLCLUSTER_ENABLE: No such device
    UNRECOVERABLE ERROR: Sun Cluster boot: Could not initialize cluster framework
    Please reboot in non cluster mode(boot -x) and Repair
    syncing file systems... done
    WARNING: CMM: Node being shut down.
    Program terminated
    {1} ok
    Any ideas how I can recover this situation without having to reinstall the node again?
    (I have a flash archive with the OS, SC3.1u4 framework etc., so it's not the end of the world, but...)
    Thanks a mil if you can help here!
    - headwrecked

    Hi - got this problem sorted...
    Basically I just removed (scinstall -r) the SC3.1u4 software from the node which was not booting, and then re-installed the software (this time the dummy node had been removed, so it did not try to contact that node and the scinstall completed without any errors).
    I think the only problem with the procedure I used to remove and re-add the node was that I forgot to remove the dummy node before re-adding the actual cluster node again...
    If anyone can confirm this to be the case then great - if not... well, it's working now, so this thread can be closed.
    root@n20-1-sup # /usr/cluster/bin/scinstall -r
    Verifying that no unexpected global mounts remain in /etc/vfstab ... done
    Verifying that no device services still reference this node ... done
    Archiving the following to /var/cluster/uninstall/uninstall.1036/archive:
    /etc/cluster ...
    /etc/path_to_inst ...
    /etc/vfstab ...
    /etc/nsswitch.conf ...
    Updating vfstab ... done
    The /etc/vfstab file was updated successfully.
    The original entry for /global/.devices/node@1 has been commented out.
    And, a new entry has been added for /globaldevices.
    Mounting /dev/dsk/c3t0d0s6 on /globaldevices ... done
    Attempting to contact the cluster ...
    Trying "n20-2-sup" ... okay
    Trying "n20-3-sup" ... okay
    Attempting to unconfigure n20-1-sup from the cluster ... failed
    Please consider the following warnings:
    scrconf: Failed to remove node (n20-1-sup).
    scrconf: All two-node clusters must have at least one shared quorum device.
    Additional housekeeping may be required to unconfigure
    n20-1-sup from the active cluster.
    Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
    Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
    ** Removing Sun Cluster framework packages **
    Removing SUNWkscspmu.done
    Removing SUNWkscspm..done
    Removing SUNWksc.....done
    Removing SUNWjscspmu.done
    Removing SUNWjscspm..done
    Removing SUNWjscman..done
    Removing SUNWjsc.....done
    Removing SUNWhscspmu.done
    Removing SUNWhscspm..done
    Removing SUNWhsc.....done
    Removing SUNWfscspmu.done
    Removing SUNWfscspm..done
    Removing SUNWfsc.....done
    Removing SUNWescspmu.done
    Removing SUNWescspm..done
    Removing SUNWesc.....done
    Removing SUNWdscspmu.done
    Removing SUNWdscspm..done
    Removing SUNWdsc.....done
    Removing SUNWcscspmu.done
    Removing SUNWcscspm..done
    Removing SUNWcsc.....done
    Removing SUNWscrsm...done
    Removing SUNWscspmr..done
    Removing SUNWscspmu..done
    Removing SUNWscspm...done
    Removing SUNWscva....done
    Removing SUNWscmasau.done
    Removing SUNWscmasar.done
    Removing SUNWmdmu....done
    Removing SUNWmdmr....done
    Removing SUNWscvm....done
    Removing SUNWscsam...done
    Removing SUNWscsal...done
    Removing SUNWscman...done
    Removing SUNWscgds...done
    Removing SUNWscdev...done
    Removing SUNWscnmu...done
    Removing SUNWscnmr...done
    Removing SUNWscscku..done
    Removing SUNWscsckr..done
    Removing SUNWscu.....done
    Removing SUNWscr.....done
    Removing the following:
    /etc/cluster ...
    /dev/did ...
    /devices/pseudo/did@0:* ...
    The /etc/inet/ntp.conf file has not been updated.
    You may want to remove it or update it after uninstall has completed.
    The /var/cluster directory has not been removed.
    Among other things, this directory contains
    uninstall logs and the uninstall archive.
    You may remove this directory once you are satisfied
    that the logs and archive are no longer needed.
    Log file - /var/cluster/uninstall/uninstall.1036/log
    root@n20-1-sup #
    Ran the scinstall again:
    >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
    scinstall -ik \
    -C N20_Cluster \
    -N n20-2-sup \
    -M patchdir=/var/cluster/patches \
    -A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe5 \
    -m endpoint=:qfe1,endpoint=switch1 \
    -m endpoint=:qfe5,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?
    Checking device to use for global devices file system ... done
    Installing patches ... failed
    scinstall: Problems detected during extraction or installation of patches.
    Adding node "n20-1-sup" to the cluster configuration ... skipped
    Skipped node "n20-1-sup" - already configured
    Adding adapter "qfe1" to the cluster configuration ... skipped
    Skipped adapter "qfe1" - already configured
    Adding adapter "qfe5" to the cluster configuration ... skipped
    Skipped adapter "qfe5" - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Copying the config from "n20-2-sup" ... done
    Copying the postconfig file from "n20-2-sup" if it exists ... done
    Copying the Common Agent Container keys from "n20-2-sup" ... done
    Setting the node ID for "n20-1-sup" ... done (id=1)
    Verifying the major number for the "did" driver with "n20-2-sup" ... done
    Checking for global devices global file system ... done
    Updating vfstab ... done
    Verifying that NTP is configured ... done
    Initializing NTP configuration ... done
    Updating nsswitch.conf ...
    done
    Adding clusternode entries to /etc/inet/hosts ... done
    Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
    IP Multipathing already configured in "/etc/hostname.qfe2".
    Verifying that power management is NOT configured ... done
    Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
    Ensure network routing is disabled ... done
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    Log file - /var/cluster/logs/install/scinstall.log.938
    Rebooting ...
    Mar 13 13:59:13 n20-1-sup reboot: rebooted by root
    Terminated
    root@n20-1-sup # syncing file systems... done
    rebooting...
    R
    LOM event: +103d+20h44m26s host reset
    screen not found.
    keyboard not found.
    Keyboard not present. Using lom-console for input and output.
    Sun Netra T4 (2 X UltraSPARC-III+) , No Keyboard
    Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.10.1, 4096 MB memory installed, Serial #52960491.
    Ethernet address 0:3:ba:28:1c:eb, Host ID: 83281ceb.
    Initializing 15MB Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    Configuring devices.
    devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    Loading smf(5) service descriptions: 24/24
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205416931.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #23 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    NOTICE: CMM: Votecount changed from 0 to 1 for node n20-1-sup.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #24 completed.
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe1
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe5
    did subpath /dev/rdsk/c1t3d0s2 created for instance 2.
    did subpath /dev/rdsk/c2t3d0s2 created for instance 12.
    did subpath /dev/rdsk/c1t3d1s2 created for instance 3.
    did subpath /dev/rdsk/c1t3d2s2 created for instance 6.
    did subpath /dev/rdsk/c1t3d3s2 created for instance 7.
    did subpath /dev/rdsk/c1t3d4s2 created for instance 8.
    did subpath /dev/rdsk/c1t3d5s2 created for instance 9.
    did subpath /dev/rdsk/c1t3d6s2 created for instance 10.
    did subpath /dev/rdsk/c1t3d7s2 created for instance 11.
    did subpath /dev/rdsk/c2t3d1s2 created for instance 13.
    did subpath /dev/rdsk/c2t3d2s2 created for instance 14.
    did subpath /dev/rdsk/c2t3d3s2 created for instance 15.
    did subpath /dev/rdsk/c2t3d4s2 created for instance 16.
    did subpath /dev/rdsk/c2t3d5s2 created for instance 17.
    did subpath /dev/rdsk/c2t3d6s2 created for instance 18.
    did subpath /dev/rdsk/c2t3d7s2 created for instance 19.
    did instance 20 created.
    did subpath n20-1-sup:/dev/rdsk/c0t6d0 created for instance 20.
    did instance 21 created.
    did subpath n20-1-sup:/dev/rdsk/c3t0d0 created for instance 21.
    did instance 22 created.
    did subpath n20-1-sup:/dev/rdsk/c3t1d0 created for instance 22.
    Configuring DID devices
    t_optmgmt: System error: Cannot assign requested address
    obtaining access to all attached disks
    n20-1-sup console login:
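    The console log above ends with the node rejoining the cluster after a normal reboot. For patching in non-cluster mode, the method from the answer above can be sketched as a console session (illustrative only; the create_patchset and install_all_patches scripts come from your support representative, and their exact invocation may differ):

```
# 1. In Sun Connection, run a simulate job to determine the needed patches.
# 2. On the node, build a patch set for manual installation:
./create_patchset
# 3. From the OpenBoot prompt, boot into non-cluster, single-user mode:
ok boot -sx
# 4. Install the patch set while cluster services are not running:
./install_all_patches
# 5. Reboot the node back into cluster mode:
reboot
```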

  • ISE MnT Node purge running for 6 days straight

    Hi all,
    Our ISE 1.1.4 patch 2 MnT node appears to be stuck in a DB purge. I am getting e-mail alerts that say "Hourly purge skipped as purge is already running." Also, when I try to run a backup of the MnT node I receive the message: "Cannot submit full backup when data purging is in process."
    We had received the "maximum open cursors exceeded" error. When that happened I re-synchronized the deployment, which restarted the services on the MnT node. This cleared the open cursors error but left us where we are now. I was hoping it would clear itself with time, but it hasn't.
    All I can think to do is restart the ISE services on the MnT node, but I'm a little worried about what might happen if I do that in the middle of a purge. Of course we don't have a recent backup of MnT (see above) and I would not like to lose the historical data.
    I haven't opened a TAC case yet; I will if there are no answers here. We can't patch or upgrade above 1.1.4 patch 2 because we're waiting on a fix for an unrelated bug.
    Any ideas appreciated, thank you.

    Hi Leroy Plock,
    Let me explain the root cause of the "maximum open cursors exceeded" error.
    ISE 1.1.4 Patch 1 introduced a new hourly purge mechanism, and the 'ORA-01000: maximum open cursors exceeded' error is a side effect of this feature.
    Every hour a purge process is triggered and a connection to the MnT database is opened. This connection should be closed as soon as the purge transaction completes, but it is not, so 24 connections are left open each day.
    Oracle is configured with a limit of 1500 open cursors, and the database will not open cursors beyond this value.
    At 24 leaked connections per day, the 1500-cursor limit is exhausted in roughly 62 days (1500/24), at which point the error appears.
    Restarting the MnT node once a month frees these cursors and avoids the issue. This defect is addressed in ISE 1.1.4 Patch 4.
    The defect for this issue is CSCuh70984
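    The exhaustion timeline in the answer can be double-checked with a quick calculation (the figures, one leaked cursor per hourly purge and a 1500 open-cursor limit, are taken from the post above):

```python
# One purge connection is leaked per hour and never closed,
# so 24 cursors accumulate per day against Oracle's open_cursors limit.
leaked_per_hour = 1
leaked_per_day = 24 * leaked_per_hour          # 24 cursors/day
open_cursors_limit = 1500                      # configured Oracle limit

days_until_error = open_cursors_limit // leaked_per_day
print(days_until_error)  # 62 days before ORA-01000 appears
```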
