Sun Cluster failed to switch over

Hi,
I have configured a two-node Sun Cluster, and it had been working fine until now.
Since yesterday I have been unable to fail over the cluster to the second node;
instead, the resources are stopped and started again on the first node.
When I run the command "scswitch -z -g oracle_failover_rg -h MFIN-SOL02" on the first node, I get these messages on the console:
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.010.007.120:0, remote = 000.000.000.000:0, start = -2, end = 6
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
Please suggest how I can solve this problem.

Those messages aren't important here; I think they are just a side effect of the fault monitor being stopped.
As I said in the previous post, you need to diagnose this bit by bit. Try the procedure manually, i.e. stop Oracle on node 1, manually switch the disks and storage over to node 2, mount the file system, bring up the logical address, and start the database.
I expect there is something wrong with your configuration, e.g. incorrect listener configuration.
There is also a way of increasing the debug level for the Oracle agent. This is documented in the manuals IIRC.
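For example, on your setup the manual pass might look roughly like this (the device group name oradg and mount point /oradata are hypothetical, bge0 is a guess, and the logical IP is taken from your console message):
MFIN-SOL01# su - oracle -c "lsnrctl stop"          (stop the listener, then shut the database down cleanly)
MFIN-SOL01# umount /oradata
MFIN-SOL01# scswitch -z -D oradg -h MFIN-SOL02     (move the disk device group)
MFIN-SOL02# mount /oradata
MFIN-SOL02# ifconfig bge0 addif 10.10.7.120 up     (plumb the logical address)
MFIN-SOL02# su - oracle -c "lsnrctl start"         (then start the database)
Whichever step fails by hand is where to look.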
Regards,
Tim
---

Similar Messages

  • Sun Cluster failed when switching; mount /global/ gives an I/O error

    Hi all,
    I am having a problem switching between two Sun Cluster nodes.
    Environment:
    Two nodes with Solaris 8 (Generic_117350-27), 2 Sun D2 arrays & Vxvm 3.2 and Sun Cluster 3.0.
    Problem description:
    scswitch failed, so I ran scshutdown and booted both nodes. One node failed to come up because of a VxVM boot failure.
    The other node boots normally but cannot mount the /global directories. Mounting manually works fine:
    # mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    # vxdg import globdg
    # vxvol -g globdg startall
    # mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
    # echo $?
    0
    port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    Need help urgently
    Jeff

    I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
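    On Solaris 8 you can check what is installed with showrev; IIRC the linker patch was 109147, but verify the number against SunSolve:
    # showrev -p | grep 109147
    If the patch revision differs between the two nodes, that would fit the symptom.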
    Tim
    ---

  • QFS metadata resource on Sun Cluster failed

    Hi,
    I'm trying to configure QFS in a cluster environment and hit errors while configuring the metadata resource. I tried different types of QFS and none of them worked.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
    n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
    n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
    n1u332 - has: No /dsk/ string (nodev) in device.Inappropriate path in FS has device component: nodev.FS has: validate_qfsdevs() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    any QFS expert here?

    Hi,
    Yes, we have 5.2; here is the wiki link: http://wikis.sun.com/display/SAMQFSDocs52/Home
    I have added the file system through the web console, and it is mounted and working fine.
    After creating the file system I tried to put it under Sun Cluster's management, but that requires a metadata resource, and creating the metadata resource gave me the errors above.
    I need to use the QFS file system in a non-RAC environment, just mounting and using it. I could mount it on two machines in shared mode and in highly available mode; in both cases writes on the second node are about 3 times slower than on the node that hosts the metadata server, while read speed is the same. Could you please let me know whether it is the same in your environment? If so, what do you think the reason is? I can see that both sides write to the storage directly, so why is it so slow on one node?
    regards,
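    For reference, the 'shared' complaint in the second attempt above means the vfstab entry itself must carry the shared mount option on every node; a shared QFS line looks roughly like this (family-set name and mount point hypothetical):
    sharedqfs1  -  /sharedqfs  samfs  -  no  shared
    The matching mcf entries also have to exist on all nodes before SUNW.qfs validation will pass.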

  • Failed to create resource - error in Sun Cluster 3.2

    Hi All,
    I have a 2-node cluster in place. When I try to create a resource, I get the following error.
    Can anybody tell me why? I have Sun Cluster 3.2 on Solaris 10.
    I have created zpool called testpool.
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
    clrs: sun011:test011z - : no error
    clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
    clrs: (C891200) Failed to create resource "hasp-testpool-res".
    Regards
    Kumar

    Thorsten,
    testpool was created on one of the cluster nodes and is accessible from both nodes in the cluster, but while it is imported on one node it cannot be accessed from the other. If the other node needs access, we have to export the pool and import it on that node.
    The storage LUNs allocated to testpool are accessible from all the nodes in the cluster, and I am able to import and export testpool from all of them.
    Regards
    Kumar
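    One thing worth ruling out (a common gotcha, not confirmed from this thread): HAStoragePlus imports the pool itself, so the pool should not be imported anywhere when the resource is created. Roughly:
    # zpool export testpool        (on whichever node currently has it imported)
    # clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
    If the pool is still imported on a node when VALIDATE runs, creation can fail with an unhelpful error like the one above.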

  • SUN Cluster.PMF.pmfd Failed to stay up

    Dear All,
    Please help. I am facing a problem and am unable to start the Sun Cluster concurrent manager resource group; it shows status "starting" but never starts. Please find the log below.
    Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
    Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 8 seconds.
    Oct 16 14:06:32 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
    Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
    Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 16 seconds.
    Oct 16 14:06:48 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
    Kindly help me resolve the issue.
    Regards,

    Thanks, but I am still unable to resolve the issue.
    My setup is below:
    Database tier:
    Sun Cluster 3.2u3
    Oracle EBS 12.1.3
    Two-node Sun Cluster: active node a1 and passive node b1.
    Application tier:
    App01
    I want to move the concurrent manager from app01 to the database tier. My action plan was as follows:
    Step 1: I cloned the application (app01) to the DB on the primary host and enabled only batch processing, with everything else disabled, using the same virtual host that is defined for the database resource group LH (vhost).
    The problem is that when I start CM this way, it starts but immediately stops. When I cloned again with the physical host, CM started and works fine, but I need someone to tell me how to start it manually and move the CM resource into Sun Cluster.
    Question: can I use the same LH host for the application, or do I need to use the physical name of the primary node during the cloning process (as I said, we use the same LH host for the DB tier), or do I need to add a new virtual host for CM?
    thanks
    Regards,
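    For what it's worth, the ld.so.1 error in the log (/usr/lib/secure/libschost.so.1: open failed) usually means the cluster's libschost library is not visible in the secure runtime path, so the start script dies and PMF keeps restarting it, which matches the "Failed to stay up" messages. From memory of the HA-EBS agent docs (verify the source path on your system first):
    # find /usr/cluster -name libschost.so.1
    # ln -s /usr/cluster/lib/libschost.so.1 /usr/lib/secure/libschost.so.1
    # ln -s /usr/cluster/lib/64/libschost.so.1 /usr/lib/secure/64/libschost.so.1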

  • tmboot fails when started by a Sun Cluster Agent (Data Service)

    Hi,
    we are using TUXEDO 6.5 in conjunction with ADC's Singl.eView.
    Starting TUXEDO by calling tmboot -y from the command prompt works fine.
    Starting TUXEDO from within a Sun Cluster Agent, which internally calls tmboot -y as well, fails with the following error messages:
    004859.lbgas1!GWTDOMAIN.5762: 08022002: TUXEDO Version 6.5 SunOS 5.5.1 Generic_103640-29 sun4u sparc SUNW,Ultra-1.
    004859.lbgas1!GWTDOMAIN.5762: LIBTUX_CAT:262: INFO: Standard main starting
    004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1223: WARN: Reach system open max(0) limit
    004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1122: ERROR: Unable to allocate free fd structure
    004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1124: ERROR: Unable to open listening endpoint
    004901.lbgas1!GWTDOMAIN.5762: LIBTUX_CAT:250: ERROR: tpsvrinit() failed
    ubbconfig entries for GWTDOMAIN:
    GWTDOMAIN SRVGRP="GWADMGROUP" SRVID=2
    REPLYQ = N
    RESTART = Y
    GRACE = 0
    MAXGEN = 5
    CLOPT = ""
    RQADDR = "GWADMGROUP"
    Which configuration do I have to change, the TUXEDO one or the Sun one?
    Can I drop an application server in ubbconfig?
    Lutz

    Patch #232, CR041488 seems to address this:
    GWTDOMAIN connection problem when maxfiles kernel parm set > 32K
    Perhaps the Cluster Agent sets a higher ulimit when it runs.
    See if you are running the latest patch.
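    A quick way to test that theory is to compare the limits in both environments and, if they differ, cap the descriptor limit in the agent's start method just before tmboot (sketch; the value 1024 is arbitrary):
    $ ulimit -n          (in your interactive shell)
    $ plimit $$          (Solaris: show the limits of the current process)
    and in the agent's start script:
    ulimit -n 1024
    tmboot -y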
         Scott Orshan

  • Apache with PHP Fails to Validate in Sun Cluster

    Greetings,
    I have Sun Cluster 3.2u2 running with two nodes and have Apache 2.2.11 running successfully in failover mode on shared storage. However, I just installed PHP 5.2.10 and added the line "LoadModule php5_module modules/libphp5.so" to httpd.conf. I am now getting "Command {/global/data/local/apache/bin/apachectl configtest >/dev/null 2>&1} failed: httpd cannot parse httpd.conf, Failed to validate configuration." when I try to start the resource. I can start Apache just fine outside of the cluster, and when I run configtest manually, it replies "Syntax OK".
    Anyone have any ideas why the Cluster software doesn't like the PHP module even though configtest passes with Syntax OK?
    Many thanks,
    Tim

    Found it. Sun Cluster was apparently smart enough to know I was missing the correct PHP AddType lines in httpd.conf.

  • Sun Cluster 3.2 upgrade fails

    Dear mate,
    when I upgraded the cluster from 3.1 to 3.2 using Live Upgrade, the following error messages came out; any hints or ideas will be appreciated.
    The PBE (sol8) is Solaris 8 with Sun Cluster 3.1; the ABE (sol10) is Solaris 10 and is to be upgraded to Sun Cluster 3.2.
    # cd /Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
    # ./scinstall -u update -R /sol10
    scinstall: "SUNWesu" is not installed in "/sol10".
    scinstall: scinstall did NOT complete successfully!
    # luupgrade -p -n sol10 -s /mnt/Solaris_10/Product SUNWesu
    Validating the contents of the media </mnt/Solaris_10/Product>.
    Mounting the BE <sol10>.
    ERROR: The boot environment <sol10> supports non-global zones.The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
    cat: cannot open /tmp/.liveupgrade.6951.16469/.lmz.list
    Thanks and Regards,
    Donald

    Hi Tim,
    Thanks for the information.
    I got the following result.
    # pkginfo -R /sol10 SUNWesu
    ERROR: information for "SUNWesu" was not found
    # pkginfo SUNWesu
    system SUNWesu Extended System Utilities
    The PBE has SUNWesu while the ABE is missing this package. Does this mean SUNWesu wasn't carried over from the PBE to the ABE? If so, what is the alternative way to get it in there?
    thanks and Regards,
    Donald
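    If luupgrade -p keeps refusing because of the zones check, one possible (untested) workaround is to add the package straight into the mounted ABE with pkgadd's alternate-root option and then re-run scinstall:
    # lumount sol10 /sol10
    # pkgadd -R /sol10 -d /mnt/Solaris_10/Product SUNWesu
    # luumount sol10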

  • SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)

    Dear All;
    I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
    The SAP and Oracle software was installed successfully, and I was able to cluster the Oracle DB; it is tested and working fine.
    For SAP I did the following configuration:
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
    When I try to bring sap-ci-res-grp online with
    # clresourcegroup online -M sap-ci-res-grp
    it executes the startsap scripts successfully, as follows:
    Sun Microsystems Inc.     SunOS 5.10     Generic     January 2005
    stty: : No such device or address
    stty: : No such device or address
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    saposcol on host eccprd01 started
    Starting SAP Instance ASCS00
    Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
    saposcol on host eccprd01 started
    Running /usr/sap/PRD/SYS/exe/run/startj2eedb
    Trying to start PRD database ...
    Log file: /export/home/prdadm/startdb.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    /usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
    Starting SAP Instance SCS01
    Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    Instance on host eccprd01 started
    Instance on host eccprd01 started
    and then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
    Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    [... the same two messages repeat every few seconds for both resources, from 12:26:22 through 12:27:46, until the start method gives up ...]
    Can anyone help me figure out whether there is an error in my configuration, or what the cause of this problem is? Thanks in advance.
    ARSSES

    Hi all.
    I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
    Scenario:
    Central Instance (not in cluster): started on one node
    Dialog Instance (not in cluster): started on the other node
    When I create the resource for SUNW.sap_as, like
    clrs create -g sap-rg -t SUNW.sap_as .....etc etc
    I get lots of WAITING FOR DISPATCHER TO COME UP in /var/adm/messages.
    Then after the timeout it gives up.
    Any clue? What is it trying to connect to, or waiting for? I have noticed that it happens before the startup script runs....
    TIA
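    When the agent sits in that "Waiting for ... dispatcher" loop, it can help to check by hand, as the SAP admin user, whether the instance processes ever actually come up (sketch; the instance number is taken from the original post above):
    # su - prdadm -c "sapcontrol -nr 01 -function GetProcessList"
    If the processes show up green there but the agent still loops, note that the probe is waiting for a classic central-instance dispatcher, which an SCS/ASCS instance does not have, so the resource type and parameters are worth double-checking.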

  • 11g R2 non-RAC using ASM on Sun Cluster (two-node but non-RAC)

    I am going to do a grid installation for non-RAC using ASM in a two-node Sun Cluster environment.
    How do I create candidate disks in Solaris Cluster (SPARC) so the grid home can be installed on ASM? Please provide the steps if anyone knows them.

    Please refer to the thread Re: 11GR2 ASM in non-rac node not starting... failing with error ORA-29701
    and this doc http://docs.oracle.com/cd/E11882_01/install.112/e24616/presolar.htm#CHDHAAHE
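    In outline (a sketch, not a verified procedure; device names hypothetical), the candidate disks for ASM under Solaris Cluster are usually the shared DID devices, re-owned so the grid user can open them:
    # cldevice list -v                        (map the shared LUNs to DID names)
    # chown oracle:dba /dev/did/rdsk/d5s6     (repeat per candidate slice, on every node)
    Then point the ASM discovery string at /dev/did/rdsk/d*s6 during the grid installation; the Oracle document linked above has the supported details.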

  • Sun cluster 3.1 on Solaris 10 update1

    Hi All,
    Good day !!!
    I am trying to build Sun Cluster 3.1 on the Solaris 10 Update 1 operating system.
    I am using Sun V240 servers. If I plumb bge1 and bge2 (the second and third interfaces) and reboot the server, the system does not come up:
    it prompts an error saying "init" failed and stops responding.
    Also, if I create the /etc/defaultrouter file and put in a gateway, the system does not come up.
    Kindly let me know whether Solaris 10 Update 1 is supported for the cluster or not.
    Thanks,
    nagaraju

    Hi, I am not sure what your network setup looks like, but I assume you have configured your first port as the public network. You do not need to touch any other interface manually for the cluster setup; during installation you just give the names of your private network ports to the scinstall procedure, and it does all the setup for you.
    The V240 is supported for SC3.1.
    Regards
    Hartmut

  • DS6 in a zone on a Sun Cluster

    I have a Sun Cluster that I am trying to configure, and I don't know whether I am trying to do something wrong, so I thought I would ask.
    I am using Sun Cluster 3.2 on a pair of Sun T2000s with a Fibre Channel disk array attached to both nodes. I have configured the disk array with two file systems, one for each server. I have configured two resource groups in the global zone and set up an HAStoragePlus resource for each file system. I am able to fail the file systems over between the two nodes successfully. On each of the file systems I have installed a zone. Each zone is started and stopped by the resource type provided by the SUNWsczone package, and that resource is in the same resource group as the HAStoragePlus resource.
    At this point I have created a resource group for the zone to manage the directory server. After creating the resource group, I am trying to create a resource for the directory service HA service. When I use the clresource command, it complains that the resource group does not contain a logical hostname. Using the services provided by the SUNWsczone package, I created a logical hostname that is assigned to the zone in question. Is there a way to install the Directory Server HA resource into the resource group for the zone?

    Philippe,
    DS 6 Sun Cluster Agent was not tested with SC 3.2 in Zones.
    Zone support came with SC 3.2, and DS 6 Cluster Agent was built with SC 3.1, tested with SC 3.1 and 3.2 in the Global zone.
    Regards,
    Ludovic.
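    Independent of that caveat, the validation message itself just says the resource group lacks a LogicalHostname resource; in SC 3.2 syntax one would be added with something like (names hypothetical):
    # clreslogicalhostname create -g ds-rg -h ds-lhost ds-lhost-rs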

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I am performing a DB backup, the backup job should not fail if node 1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always uses the cluster IP instead of the physical IPs of the individual nodes.
    Explanation:
    Whether it is a 2-node or a 4-node cluster, when the cluster software is installed we configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we have to specify, whether it is an RMAN backup or an application JDBC connection. Failing over to another node is the job of the cluster IP, so wherever we have a cluster configuration, we must specify the CLUSTER IP in all the failover-sensitive places.
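    As a concrete illustration (hostname and service name hypothetical), the tnsnames.ora entry that RMAN or JDBC uses should carry the cluster's logical address rather than a node name:
    PROD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = db-cluster-vip)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = PROD))
      )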
    Hope it helps..
    Thanks
    LaserSoft

  • Sun Cluster 3.3 Mirror 2 SAN storages (storagetek) with SVM

    Hello all,
    I would like to know if you have any best practice for mirroring two storage systems with SVM on Sun Cluster without corrupting or losing data on the storage.
    I have currently enabled multipathing on the FC paths (stmsboot), then configured the cluster and created the SVM mirror with the DID devices.
    I have a few points I want to check for potential problems:
    a) 4 quorum votes. As I have two (2) nodes and 2 storage arrays (and I need to know which array is up), I have 4 votes, so the cluster needs 3 votes in order to start. Is there any solution to this, like cldevice combine?
    b) The mirror is at the SVM level, so when a failover happens the metasets move to the other node. Is there any chance the mirror comes up from the second SAN instead of the first and causes some kind of corruption? Is there some way to protect the storage better?
    c) The StorageTek has an option for snapshots; is there a good way of using this feature or not?
    d) Is there any problem with failing over global file systems (the global mount option)? The only thing that may write to this file system is the application itself, which belongs to the same resource group, so when it needs to fail over it will stop all the processes accessing this file system and it should be OK to unmount it.
    Best regards to all of you,
    PiT

    Thank you very much for your answers Tim; they are really very helpful. I only have some comments on them so they are fully answered:
    a) All answered for me. I think I will add the vote from only one storage array, and if that array goes down I will tell the customer to check the quorum status and add the second array as the QD (see the example after this post). The quorum server is not a bad idea, but if the network is down for some reason I think bad things will happen, so I don't want to rely on that.
    b) I think you are clear enough.
    c) I think you are clear enough! (Just as I thought it would go for the snapshots....)
    d) Finally, if this file system is on a metadevice that is started from the first node, and the second node is proxying to the first node for the metaset disks, is there any chance of the file system/metaset group being locked so that we cannot take it over?
    Thanks in advance,
    Pit
    (I will also look at the document you mention, many thanks)
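    For reference, moving the quorum vote around on SC 3.3 is a pair of one-liners (DID name hypothetical), so the plan in point (a) is easy to script or document for the customer:
    # clquorum add d10          (register a shared DID device as the quorum device)
    # clquorum remove d10       (deregister it again)
    # clquorum status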

  • Jboss configuration on Sun Cluster 3.1

    Hi.
    I am using the generic data service (GDS) to manage a JBoss instance under Sun Cluster. The command is as follows.
    scrgadm -a -j jboss_resource -g cluster_failover_rg -t SUNW.gds \
    -y Scalable=false -y Start_timeout=900 \
    -y Stop_timeout=420 -x Probe_timeout=300 \
    -y Port_list="8080/tcp" \
    -y Resource_dependencies=oracle_server_resource \
    -x Start_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh start"' \
    -x Stop_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh stop"' \
    -x Child_mon_level=0 -x Failover_enabled=true -x Stop_signal=9
    My JBoss script takes about 8 to 10 minutes to start completely, as it is designed to start about 10 child processes. Hence I set the timeout to 15 minutes.
    But while starting the resource I see the following messages on the console:
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Here msm is the logical hostname I have selected, and port 8080 is used by the JBoss instance.
    After throwing these error messages, the cluster software fails over to the other node and, after several attempts, changes the status to offline.
    I tried starting the instance manually and it worked fine.
    Please let me know if I am missing something.
    Thanks in advance for the help.

    Found the solution: I added a delay at the end of the start script. This is presumably because JBoss takes some time to bind the ports on the logical hostname.
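    A fixed delay works, but a slightly more robust ending for the start script is to poll until the port is actually bound (untested Bourne shell sketch; the timeout values are arbitrary):
    # wait up to 10 minutes for JBoss to bind port 8080
    i=0
    while [ $i -lt 60 ]; do
        if netstat -an | grep '\.8080 ' | grep LISTEN >/dev/null; then
            exit 0        # port is up; report a successful start to GDS
        fi
        sleep 10
        i=`expr $i + 1`
    done
    exit 1                # JBoss never bound the port within the timeout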
