Sun Cluster 3.2 upgrade fails
Dear mate,
When I upgrade the cluster from 3.1 to 3.2 using Live Upgrade, the following error message comes out. Any hints or ideas will be appreciated.
The PBE (sol8) is Solaris 8 with Sun Cluster 3.1; the ABE (sol10) is Solaris 10 and will be upgraded to Sun Cluster 3.2.
# cd /Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
# ./scinstall -u update -R /sol10
scinstall: "SUNWesu" is not installed in "/sol10".
scinstall: scinstall did NOT complete successfully!
# luupgrade -p -n sol10 -s /mnt/Solaris_10/Product SUNWesu
Validating the contents of the media </mnt/Solaris_10/Product>.
Mounting the BE <sol10>.
ERROR: The boot environment <sol10> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
cat: cannot open /tmp/.liveupgrade.6951.16469/.lmz.list
Thanks and Regards,
Donald
Hi Tim,
Thanks for the information.
I got the following result.
# pkginfo -R /sol10 SUNWesu
ERROR: information for "SUNWesu" was not found
# pkginfo SUNWesu
system SUNWesu Extended System Utilities
The PBE has SUNWesu while the ABE is missing this package. Does this mean SUNWesu wasn't carried over from the PBE to the ABE? If so, what is the alternative way to get it there?
Thanks and Regards,
Donald
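If SUNWesu simply never made it into the ABE, one possible workaround (a sketch only; the /sol10 and /mnt paths are taken from this thread, so verify them on your system first) is to add the package directly into the ABE's root with pkgadd:

```shell
# Sketch: add the missing package directly into the ABE's root path.
# Paths are the ones quoted in this thread; confirm them locally.
pkgadd -R /sol10 -d /mnt/Solaris_10/Product SUNWesu

# Confirm it is now registered in the ABE before re-running scinstall:
pkginfo -R /sol10 SUNWesu
```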
Similar Messages
-
Upgrade from Solaris 8 SPARC with Sun cluster 3.1u3 to Solaris 10 SPARC
Dear All,
We are planning an upgrade of the OS from Solaris 8 SPARC to Solaris 10 SPARC on a two-node active-standby clustered system.
The major software we currently have on the Solaris 8 system is:
1: Sun Cluster 3.1u3
2: Oracle 9i 9.2.0.8
3: Veritas File System Vxfs v4.0
4: Sun Solaris 8 2/04 SPARC
Any pointers as to what sequence and how the upgrade should be done?
Thanks in advance.
Regards,
Rayyes

I know it can be quite complicated and complex, but Sun provides detailed documentation; at least in our case (Solaris 9 to 10) it was very helpful.
You might get better help in the cluster forum http://forums.sun.com/forum.jspa?forumID=842
-- Nick -
Upgrading the Solaris OS (9 to 10) in a Sun Cluster 3.1 environment
Hi all,
I have to upgrade the OS from Solaris 9 to 10 under Sun Cluster 3.1.
Sun Cluster 3.1
data service - Netbackup 5.1
Questions:
1. Best ways to upgrade from Solaris 9 to 10, and problems encountered while upgrading the OS?
2. Is Sun Trunking supported in Sun Cluster 3.1?
Regards
Ramana

Hi Ramana,
We used Live Upgrade when upgrading Solaris 9 to 10, and it is the best method for minimizing downtime and risk, but you have to follow the proper procedure, which is not the same as for a standalone Solaris system. Live Upgrade with Sun Cluster is different: you have to take global devices and Veritas Volume Manager into consideration when creating the new boot environment.
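The kind of boot-environment creation described above might be sketched as follows (the device names are hypothetical; the key point is that each node's /global/.devices/node@N file system needs its own separate slice in the new boot environment):

```shell
# Sketch only: hypothetical device names; substitute your own slices.
# On a clustered node, give the global-devices file system its own
# file system in the new boot environment rather than merging it into /.
lucreate -n sol10 \
  -m /:/dev/dsk/c0t1d0s0:ufs \
  -m /global/.devices/node@1:/dev/dsk/c0t1d0s6:ufs
```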
Thanks/Regards
Sadiq -
Sun Cluster failover fails; mounting /global gives an I/O error
Hi all,
I am having a problem when switching over between two Sun Cluster nodes.
Environment:
Two nodes with Solaris 8 (Generic_117350-27), two Sun D2 arrays, VxVM 3.2, and Sun Cluster 3.0.
Problem description:
scswitch failed, so we ran scshutdown and booted both nodes. One node failed to come up because of a VxVM boot failure.
The other node boots up normally but cannot mount the /global directories; mounting manually works fine.
# mount /global/stripe01
mount: I/O error
mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
# vxdg import globdg
# vxvol -g globdg startall
# mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
# echo $?
0
port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
mount: I/O error
mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
Need help urgently.
Jeff

I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
Tim
--- -
Failed to create resource - Error in Sun cluster 3.2
Hi All,
I have a 2-node cluster in place. When I try to create a resource, I get the following error.
Can anybody tell me why I am getting this? I have Sun Cluster 3.2 on Solaris 10.
I have created zpool called testpool.
clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
clrs: sun011:test011z - : no error
clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
clrs: (C891200) Failed to create resource "hasp-testpool-res".
Regards
Kumar

Thorsten,
testpool was created on one of the cluster nodes and is accessible from both nodes in the cluster. However, it can only be imported on one node at a time and is not accessible from the other node while imported; if the other node needs access, we have to export testpool and import it on that node.
The storage LUNs allocated to testpool are accessible from all nodes in the cluster, and I am able to import and export testpool from all of them.
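One common cause of this VALIDATE failure, worth ruling out (the truncated error above does not confirm it): SUNW.HAStoragePlus expects to import the zpool itself, so the pool should be exported from every node before the resource is created. A sketch:

```shell
# On whichever node currently has the pool imported:
zpool export testpool

# Then create the resource; HAStoragePlus imports the pool on the
# node where the resource group comes online, and moves it on failover.
clrs create -g test-rg -t SUNW.HAStoragePlus \
    -p Zpools=testpool hasp-testpool-res
```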
Regards
Kumar -
SUN Cluster.PMF.pmfd Failed to stay up
Dear All,
Please help. I am facing a problem and am unable to start the Sun Cluster concurrent manager resource group; it shows status "starting" but does not come up. Please find the log below.
Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 8 seconds.
Oct 16 14:06:32 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 16 seconds.
Oct 16 14:06:48 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
Kindly help to resolve the issue.
Regards,

Thanks, but I am still unable to resolve the issue.
Please see below are my setup:
Database tier:
Sun Cluster 3.2u3
Oracle EBS 12.1.3
Two-node Sun Cluster: active node a1 and passive node b1.
Application Tier:
App01
I want to move the concurrent manager from app01 to the database tier; below was my action plan.
Step 1: cloned the application (app01) to the DB primary host and enabled only batch processing, with everything else disabled, using the same virtual host that is defined for the database resource group logical host (vhost).
The problem is that when I start the CM, it starts but immediately stops. When I cloned again using the physical host instead,
the CM started and is working fine, but I need someone to tell me how I can start it manually and move the CM resource into Sun Cluster.
Question: Can I use the same logical host for the application, or do I need to use the physical name of the primary node during the cloning process (as I said, we are using the same logical host for the DB tier), or do I need to add a new virtual host for the CM?
thanks
Regards, -
QFS metadata resource on Sun Cluster fails
Hi,
I'm trying to configure QFS in a cluster environment; while configuring the metadata resource I hit errors. I tried different types of QFS and none of them worked.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
n1u332 - has: No /dsk/ string (nodev) in device.Inappropriate path in FS has device component: nodev.FS has: validate_qfsdevs() failed.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
Any QFS experts here?

Hi,
Yes, we have 5.2; here is the wiki link: http://wikis.sun.com/display/SAMQFSDocs52/Home
I added the file system through the web console, and it is mounted and working fine.
After creating the file system, I tried to put it under Sun Cluster's management, but it asked for a metadata resource, and when creating the metadata resource I got the errors mentioned above.
I need to use the QFS file system in a non-RAC environment, just mounting and using it. I could mount it on two machines in shared mode and in highly available mode; in both cases, writes on the second node are about 3 times slower than on the node hosting the metadata server, while read speed is the same. Could you please let me know whether it is the same in your environment? If so, what do you think the reason is? I can see that both sides write to the storage directly, so why is it so slow on one node?
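For the "shared" option error in the second attempt above, the fix is usually in /etc/vfstab on each node: a shared QFS file system must carry the shared mount option everywhere. A sketch of such an entry (the family-set name haqfs is illustrative; take the real one from /etc/opt/SUNWsamfs/mcf):

```shell
# Sketch of an /etc/vfstab line for a shared QFS file system.
# Fields: device  device_to_fsck  mount_point    FS_type  pass  at_boot  options
#
#   haqfs   -     /global/haqfs   samfs    -     no       shared
#
# After editing vfstab on every node, re-try the resource creation:
scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
```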
regards, -
Any one did an upgrade from Sun cluster 3.0 07/01 release to 12/01 release
Was there any problem in the upgrade, and what was the approach?
Any suggestions?

This upgrade is delivered as a patch set and some
additional packages. The upgrade procedure is
basically the same as a normal patch procedure.
You might consider getting the latest core Sun Cluster
patch as well, just to be sure you are at the latest rev.
The additional packages provide new features. The
most popular is the Generic Data Service which can
really save development time for simple agents.
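In outline, such a patch-style upgrade might look like an ordinary patch run (a sketch only; the staging directory and patch IDs are hypothetical placeholders, and the real IDs come from the patch README):

```shell
# Sketch: install the upgrade patch set plus the latest core SC patch.
# 111111-01 and 222222-02 are placeholders; use the IDs from the README.
patchadd -M /var/tmp/sc-patches 111111-01 222222-02

# Then add the new-feature packages, e.g. the Generic Data Service:
pkgadd -d /var/tmp/sc-packages SUNWscgds
```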
-- richard -
tmboot fails when started by a Sun Cluster Agent (Data Service)
Hi,
we are using TUXEDO 6.5 in conjunction with ADC's Singl.eView.
Starting TUXEDO by calling tmboot -y works fine at the command prompt.
Starting TUXEDO from within a Sun Cluster Agent, which internally calls tmboot -y as
well, fails with the following error messages:
004859.lbgas1!GWTDOMAIN.5762: 08022002: TUXEDO Version 6.5 SunOS 5.5.1 Generic_103640-29
sun4u sparc SUNW,Ultra-1.
004859.lbgas1!GWTDOMAIN.5762: LIBTUX_CAT:262: INFO: Standard main starting
004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1223: WARN: Reach system open max(0)
limit
004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1122: ERROR: Unable to allocate free
fd structure
004901.lbgas1!GWTDOMAIN.5762: LIBGWT_CAT:1124: ERROR: Unable to open listening
endpoint
004901.lbgas1!GWTDOMAIN.5762: LIBTUX_CAT:250: ERROR: tpsvrinit() failed
ubbconfig entries for GWTDOMAIN:
GWTDOMAIN SRVGRP="GWADMGROUP" SRVID=2
REPLYQ = N
RESTART = Y
GRACE = 0
MAXGEN = 5
CLOPT = ""
RQADDR = "GWADMGROUP"
Which configuration do I have to change, a TUXEDO one or a Sun one?
Can I drop an application server in ubbconfig?
Lutz

Patch #232, CR041488 seems to address this:
GWTDOMAIN connection problem when maxfiles kernel parm set > 32K
Perhaps the Cluster Agent sets a higher ulimit when it runs.
See if you are running the latest patch.
Scott Orshan
-
Apache with PHP Fails to Validate in Sun Cluster
Greetings,
I have Sun Cluster 3.2u2 running with two nodes and have Apache 2.2.11 running successfully in failover mode on shared storage. However, I just installed PHP 5.2.10 and added the line "LoadModule php5_module modules/libphp5.so" to httpd.conf. I am now getting "Command {/global/data/local/apache/bin/apachectl configtest >/dev/null 2>&1} failed: httpd cannot parse httpd.conf, Failed to validate configuration." when I try to start the resource. I can start Apache just fine outside of the cluster, and when I run configtest manually, it replies "Syntax OK".
Anyone have any ideas why the Cluster software doesn't like the PHP module even though configtest passes with Syntax OK?
Many thanks,
Tim

Found it. Sun Cluster was apparently smart enough to know I was missing the correct PHP AddType lines in httpd.conf.
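For anyone hitting the same failure: the resource's VALIDATE step simply runs apachectl configtest, so it can be reproduced by hand, and the PHP stanza in question typically looks like this (a sketch; the apachectl path is the one from this thread):

```shell
# Typical PHP 5 lines in httpd.conf (sketch):
#   LoadModule php5_module modules/libphp5.so
#   AddType application/x-httpd-php .php

# Reproduce what the cluster's VALIDATE step runs:
/global/data/local/apache/bin/apachectl configtest
```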
-
Sun Cluster failed to switchover
Hi,
I have configured a two-node Sun Cluster, and it was working fine until now.
Since yesterday, I have been unable to fail the cluster over to the second node;
instead, the resources are stopped and started again on the first node.
When I use the command "scswitch -z -g oracle_failover_rg -h MFIN-SOL02" on the first node, I get these messages on the console:
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.010.007.120:0, remote = 000.000.000.00
0:0, start = -2, end = 6
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
Please suggest how to solve this problem.

Those messages aren't important here. I think they might be related to the fault monitor being stopped.
As I said in the previous post, you need to diagnose this bit by bit. Try the procedure manually, i.e. stop Oracle on node 1, manually switch the disks and storage over to node 2, mount the file system, bring up the logical address, and start the database.
I expect there is something wrong with your configuration, e.g. incorrect listener configuration.
There is also a way of increasing the debug level for the Oracle agent. This is documented in the manuals IIRC.
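The manual walk-through suggested above might be sketched as follows on a Sun Cluster 3.1-era system (the device group, mount point, interface, and address are all hypothetical placeholders):

```shell
# After stopping Oracle on node 1, take over piece by piece on MFIN-SOL02.
# oracle_dg, /oracle_data, ce0 and the address are hypothetical.
scswitch -z -D oracle_dg -h MFIN-SOL02    # move the device group
mount /oracle_data                        # mount the file system
ifconfig ce0 addif 10.10.7.120 up         # bring up the logical address
su - oracle -c "sqlplus / as sysdba" <<EOF
startup
EOF
```

Working through these steps by hand usually shows exactly which stage (storage, file system, address, or listener/database) is the one that breaks during an automated failover.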
Regards,
Tim
--- -
Sun Cluster 3.2, Zones, HA-Oracle, & FSS
I have a customer who wants to deploy a cluster using Solaris 10 Zones. By creating the resource groups with nodeA:zoneA and nodeB:zoneA, the Oracle resource group will be contained in the respective zone.
Should the zone be created after the Sun Cluster software has been installed?
When installing Oracle, should the binaries and related files reside in the zone or in the global zone?
When configuring FSS, should this be done after the resources have been configured?
Thanks in advance,
Ryan

The Oracle binaries are not big at all, and there is not much I/O happening on that file system. You can easily create a UFS file system for each zone and mount it into the zone via lofs, or you can create a zpool for the binaries. My personal take would be to include them in the root path of the zones, and you are set.
You must install the binaries in every zone your Oracle database can fail over to. To reduce maintenance work during upgrades, I would limit the binary installation to the zones in the node list of your Oracle resource group; if you install the binaries in all nodes/zones of the cluster, you have more work when it comes to an upgrade.
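The lofs approach mentioned above could be sketched like this (a sketch only; the zone name zoneA and both paths are hypothetical):

```shell
# Sketch: loopback-mount an Oracle binaries file system into a zone.
# /oracle/binaries (in the global zone), /u01/app/oracle and zoneA
# are hypothetical names; substitute your own.
zonecfg -z zoneA <<EOF
add fs
set dir=/u01/app/oracle
set special=/oracle/binaries
set type=lofs
end
commit
EOF
```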
Kind Regards
Detlef -
SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)
Dear All;
I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
The SAP and Oracle software were installed successfully, and I could successfully cluster the Oracle DB; it is tested and working fine.
For SAP I did the following configuration:
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
When trying to bring sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp,
it executes the startsap scripts successfully, as follows:
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
stty: : No such device or address
stty: : No such device or address
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
saposcol on host eccprd01 started
Starting SAP Instance ASCS00
Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
saposcol on host eccprd01 started
Running /usr/sap/PRD/SYS/exe/run/startj2eedb
Trying to start PRD database ...
Log file: /export/home/prdadm/startdb.log
Instance Service on host eccprd01 started
Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
/usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
Starting SAP Instance SCS01
Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
Instance Service on host eccprd01 started
Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
Instance on host eccprd01 started
Instance on host eccprd01 started
and then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:28 eccprd01 last message repeated 1 time
[the same "Waiting for SAP Central Instance main dispatcher to come up." messages for sap-ci-scs-res and sap-ci-Ascs-res repeat every ~3 seconds from 12:26:28 through 12:27:46, when the resource group fails over]
Can anyone help me find whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
ARSSES

Hi all,
I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
Scenario:
Central Instance (not in cluster): started on one node
Dialog Instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as ...etc. etc.
in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP...
Then after the timeout it gives up.
Any clue? What is it trying to connect to or waiting for? I have noticed that it happens before the startup script...
TIA -
11g R2 non-RAC using ASM on Sun Cluster (two nodes but non-RAC)
I am going to do a Grid Infrastructure installation for non-RAC using ASM in a two-node Sun Cluster environment.
How do I create candidate disks in Solaris Cluster (SPARC OS) to install the Grid home in ASM? Please provide the steps if anyone knows them.

Please refer to the thread Re: 11GR2 ASM in non-rac node not starting... failing with error ORA-29701
and this doc http://docs.oracle.com/cd/E11882_01/install.112/e24616/presolar.htm#CHDHAAHE -
Beta Refresh Release Now Available! Sun Cluster 3.2 Beta Program
The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
To apply for the Sun Cluster 3.2 Beta program, please visit:
https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
or contact Eric Redmond <[email protected]>.
New Features in Sun Cluster 3.2
Ease of use
* New Sun Cluster Object Oriented Command Set
* Oracle RAC 10g improved integration and administration
* Agent configuration wizards
* Resources monitoring suspend
* Flexible private interconnect IP address scheme
Availability
* Extended flexibility for fencing protocol
* Disk path failure handling
* Quorum Server
* Cluster support for SMF services
Flexibility
* Solaris Container expanded support
* HA ZFS
* HDS TrueCopy campus cluster
* Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
* Multi-terabyte disk and EFI label support
* Veritas Volume Replicator 5.0 support
* Veritas Volume Manager 4.1 support on x86 platform
* Veritas Storage Foundation 5.0 File System and Volume Manager
OAMP
* Live upgrade
* Dual partition software swap (aka quantum leap)
* Optional GUI installation
* SNMP event MIB
* Command logging
* Workload system resource monitoring
Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
Sun Cluster 3.2 beta supports the following Data Services
* Apache (shipped with the Solaris OS)
* DNS
* NFS V3
* Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB

Without speculating on the release date of Sun Cluster 3.x or even its feature list, I would like to understand what risk Sun would take if Sun Cluster supported ZFS as a failover file system. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
BTW: this means that even Veritas will have to do something about ZFS!!!
If VCS is a much better option, it would be interesting to understand which features are missing from Sun Cluster to make it really competitive.
Thanks
Hartmut