Sun Cluster migration to VCS

Hi,
I would like to migrate from SC 3.1 to VCS.
The cluster has VxVM disk groups registered in SC, running VxVM 4.0.
To really cut the downtime, can I shut down the entire cluster and just import the disk groups directly from the VCS 5.0 host (without unregistering them in SC first)?
Thanks for the reply.
WH

You should check whether any SCSI reservations are left on the disks after shutting down the cluster.
IIRC a regular shutdown should clear the reservations.
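If keys do remain, a rough sketch of the check and the import (the device path and the disk group name "appdg" are placeholders; the inkeys check has to run on an SC node, since the scsi utility ships with Sun Cluster):
# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/rdsk/c2t0d0s2   (list any SCSI-3 keys left on a LUN)
Then on the VCS 5.0 host:
# vxdisk -o alldgs list      (the deported disk group should be visible here)
# vxdg -C import appdg       (-C clears stale import locks; use with care)
# vxvol -g appdg startall    (start all volumes in the group)
Since the group was created under VxVM 4.0, it will import at the old disk group version on the 5.0 host; run vxdg upgrade only once you are sure you will not fail back.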

Similar Messages

  • VCS to Sun Cluster migration

    I am planning to migrate a 2-node cluster from VCS to Sun Cluster. How much downtime does this involve? Is there any documentation I can reference?

    Hi all,
    Below I have outlined the principal steps for migrating a cluster in place. This will be one of the subtopics of an upcoming blog post about VCS to SC migration.
    Pavel, you should definitely revisit SC 3.2, and explicitly the BUI. Various VCS admins on different projects have told us the gap has become so small that VCS is not worth the additional cost.
    Bear in mind that migrating in place is the most complex scenario; doing it on a completely separate platform is a much simpler process. But let's proceed with the assumptions and process:
    Let us assume a two-node cluster where you want to migrate from VCS with VxVM to Solaris Cluster and Solaris Volume Manager. I assume as well that your data is mirrored. The steps below are a principal outline of the migration process; for the necessary cluster administration commands, consult the appropriate documentation.
    1. Reduce the VCS cluster to a one-node cluster and disconnect the interconnect. The interconnect has to be disconnected to allow a Solaris Cluster installation on the other node; Solaris Cluster checks the interconnect for unwanted traffic.
    2. Split the storage into two halves and disallow access from the VCS cluster to the future Solaris Cluster part. This can be achieved, for example, by modifying the switch zoning or the LUN masking. At this point your application is still running, but you have no high availability and no data redundancy any more.
    3. Install a single-node Solaris Cluster on the second host; it is advisable to start with a fresh Solaris install.
    4. Configure the full Solaris Cluster topology with a temporary copy of your data. The data has to be installed by backup/restore, because you are changing the volume manager as well. It is important to use different IP addresses for the logical hosts here to avoid duplicate addresses. Now the new single-node Solaris Cluster is ready to take the actual data.
    5. When you are ready for an application downtime, transfer the actual data from the Veritas cluster to the Solaris Cluster again, and shut down the remaining VCS single-node cluster.
    6. Change the IP addresses of the logical hosts in the Solaris Cluster to their final values and enable all relevant resources. From now on your application will be running on the new Solaris Cluster.
    7. Reestablish the interconnect, destroy the VCS cluster, and install the Solaris Cluster packages on the old VCS node, but do not configure the node yet.
    8. Allow data access to the storage for both nodes with the appropriate methods.
    9. Add the second node to the Solaris Cluster, including the Solaris Cluster device groups; this step will take another short application downtime.
    10. Mirror your data; a sketch of these last two steps follows below. From this point you have full redundancy and full high availability again.
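    Steps 9 and 10 might look roughly like this with SVM, assuming the disk set and mirror (appset, d10) were built in step 4 and d5 is the DID device of the reattached half (all names are placeholders):
    # metaset -s appset -a -h node2                    (step 9: add the second node to the disk set)
    # metaset -s appset -a /dev/did/rdsk/d5            (hand the set the disks from the old VCS half)
    # metainit -s appset d12 1 1 /dev/did/rdsk/d5s0    (build a submirror on those disks)
    # metattach -s appset d10 d12                      (attach it; the resync restores full redundancy)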
    Cheers
    Detlef

  • Sun Cluster Migration

    Dear Experts,
    We are going to migrate three 2-node Sun Cluster systems from old Sun hardware to new Sun hardware. The old hardware is EOL.
    What information needs to be gathered from the old systems in order to build the new cluster systems?
    Is the Sun Explorer output enough, or is there other information we need to gather beyond it?
    Awaiting your reply.
    Regards,
    R. Rajesh Kannan.


  • Migrate Sun Cluster (+ RAC) disks to new hardware running Sun Cluster (+ RAC)

    Hello,
    We have old hardware (V490s) running Sun Cluster 3.2 + Oracle RAC 10.2.0.4.0 connected to a SAN. We need to move to T4s. Oracle advised against including the new hardware in the existing cluster, so we are planning on building a new cluster with the T4s and the same software (Solaris 10, Sun Cluster 3.2, RAC 10.2.0.4.0).
    When ready, we plan to shut down the existing cluster, zone the new cluster to the existing disks, and bring everything up on the new hardware (simply stated).
    Will it work?
    Any gotchas, like needing to clear disk IDs, or Sun Cluster panicking? RAC panicking? Any reference docs out there?
    Thanks
    user12961096

    [[[Do we absolutely need that in our new setup or could we forgo that additional layer? Would Sun Cluster give us anything that the OS + RAC doesn't give us?]]]
    Yes, Oracle Solaris Cluster does make things a lot easier. It looks after your device space and gives you consistent DID devices for CRS/RAC. It gives you the choice of sQFS, raw metasets, or ASM. It has clprivnet, which is a lot easier to manage and performs better than an IPMP solution. The node failure detection time is <= 10 seconds, which is quicker than CRS on its own, and it uses SCSI fencing instead of a STONITH approach. Finally, you have all the off-the-shelf agents that Solaris Cluster offers.
    However, if you are only doing RAC, you just want ASM, you don't need the last few seconds of failure detection that OSC gives you, and you think STONITH is good enough for your fencing purposes, then CRS on its own is perfect. There are many, many deployments both with and without OSC; it's not a simple yes/no answer.
    Having worked for the Solaris Cluster group, I'm still slightly biased toward including it rather than going without. Others have the alternate view! :-)
    Hope that helps,
    Tim
    ---

  • Migrating a Sun Cluster Running Oracle to New Hardware

    Has anyone attempted this? Essentially we are moving a Sun Cluster from one location to hardware at another location while maintaining the same node names. From what I can tell, I need to (on an install LAN):
    1) Load the OS
    2) Configure the IPs
    3) Install Sun Cluster
    4) Install Oracle Parallel Server/RAC
    5) Restore the data on a per node basis
    6) Restore the shared data
    7) Adjust, tweak, and run
    Are there any pitfalls or suggestions on the approach? The shop is relatively new to clustering, much less Oracle clustering, and the original cluster was installed by admins who are long gone.

    I would say that Apple should be able to update your 36-month maintenance agreement with an OS X Server 10.4 serial number.
    As far as I know, the structure of 10.3 and 10.4 serial numbers is different (this wasn't the case between 10.2 and 10.3), so I'm short of a technical answer here.
    Maybe you could try:
    /System/Library/ServerSetup/serversetup -setServerSerialNumber xxxx-xxx-xxx-x-xxx-xxx-xxx-xxx-xxx-xxx-x
    in a Terminal window on the server. It's theoretically the same as using Server Admin, but maybe this could help.

  • Beta Refresh Release Now Available!  Sun Cluster 3.2 Beta Program

    The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
    To apply for the Sun Cluster 3.2 Beta program, please visit:
    https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
    or contact Eric Redmond <[email protected]>.
    New Features in Sun Cluster 3.2
    Ease of use
    * New Sun Cluster Object Oriented Command Set
    * Oracle RAC 10g improved integration and administration
    * Agent configuration wizards
    * Resources monitoring suspend
    * Flexible private interconnect IP address scheme
    Availability
    * Extended flexibility for fencing protocol
    * Disk path failure handling
    * Quorum Server
    * Cluster support for SMF services
    Flexibility
    * Solaris Container expanded support
    * HA ZFS
    * HDS TrueCopy campus cluster
    * Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
    * Multi-terabyte disk and EFI label support
    * Veritas Volume Replicator 5.0 support
    * Veritas Volume Manager 4.1 support on x86 platform
    * Veritas Storage Foundation 5.0 File System and Volume Manager
    OAMP
    * Live upgrade
    * Dual partition software swap (aka quantum leap)
    * Optional GUI installation
    * SNMP event MIB
    * Command logging
    * Workload system resource monitoring
    Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
    Sun Cluster 3.2 beta supports the following Data Services
    * Apache (shipped with the Solaris OS)
    * DNS
    * NFS V3
    * Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB

    Without speculating on the release date of Sun Cluster 3.x or its feature list, I would like to understand what risk Sun would take if Sun Cluster supported ZFS as a failover filesystem. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
    BTW: this means that even Veritas will have to do something about ZFS!
    If VCS is a much better option, it would be interesting to understand which features are missing from Sun Cluster to make it really competitive.
    Thanks
    Hartmut

  • DS6 in Sun Cluster

    Hi
    I am migrating DS 5.2 to DS 6.1. I have DS 5.2 and Messaging Server 6.2 in a cluster environment. I am OK with the migration process, but how do I replace the existing LDAP resource in the cluster with the new setup? My messaging resource depends on the LDAP (5.2) resource. I would like to add a new LDAP resource to the cluster and link the messaging resource to the new LDAP while keeping my old LDAP resource offline. Can anybody guide me on what to do and how?
    Thanks in advance

    I believe you need to make a new resource group for DS 6. Once it is up and running, it should be easy to change the MS dependency to the new RG.
    I am not familiar enough with Sun Cluster dependencies (and I don't have a cluster running right now) to give you the exact details, but it seems like it is just a matter of changing the name of the resource group MS depends on. See the sketch below.
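    A minimal sketch with the SC 3.2 object-oriented CLI, assuming the old and new LDAP resources are called ds52-rs and ds6-rs and the messaging resource ms-rs (all placeholders; on SC 3.1 the same property change would be done with scrgadm -c -j):
    # clresource disable ds52-rs                             (take the 5.2 resource offline but keep it defined)
    # clresource set -p Resource_dependencies=ds6-rs ms-rs   (repoint the messaging resource at the new LDAP resource)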
    Regards,
    Ludovic.

  • Sun Cluster: virtual IP address

    Hi,
    What is the virtual IP address and how do I configure it?
    For example, should it be defined in /etc/hosts? In DNS?
    Thank you,

    [[[Is this correct to have Apache HA?]]]
    Apache can be set up as a failover resource (so it is active only on one node at a time) or a scalable resource (where it would be active on multiple nodes at the same time).
    [[[Just an aside question: HAStoragePlus is NFS sharing? What is the difference between an NFS resource and a mount resource (I saw Veritas differentiate between them)? In case I set up a shared disk, is it an NFS or a mount resource?]]]
    HAStoragePlus is not NFS sharing. HAStoragePlus lets you create HA storage (it's called HAStoragePlus because there was an earlier-generation data service (aka clustering agent, a la VCS) called HAStorage). It lets you wrap a shared storage device and fail it back and forth between multiple nodes of the cluster.
    NFS sharing has to be handled using the SUNW.nfs data service (in other words, the NFS clustering agent), and only if you want to set up NFS as an HA service. Otherwise, you can use standard NFS.
    A mount resource is (I'm guessing here) any resource that can be mounted; in other words, a filesystem.
    An NFS resource is a resource that is shared out via NFS.
    [[[Also, a basic question: The shared disk should not be mounted in /etc/vfstab. Correct? It should only be present when doing format on each node. Right? It is SC that manages the mounting of the file system? This should be up before testing Apache HA…no?]]]
    That is correct. Sun Cluster will handle mounting/unmounting the filesystem and importing/deporting the disk set (in the Veritas world it is called a disk group).
    When you build your cluster resource group (aka VCS service group), you will have to build the dependency tree (just as you would in VCS):
    1) Create an empty RG
    2) Create the HAStoragePlus resource
    3) Create the logical hostname resource
    4) Create the Apache resource
    5) Define the Apache resource's dependencies on the logical hostname (virtual IP) and HAStoragePlus (filesystem) resources so that Apache can start.
    At each stage, you can test whether the RG is working as it should before proceeding to the next level.
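    A minimal sketch of those five steps with the SC 3.2 CLI; the mount point, the hostname web-vip (which must resolve on every node, e.g. via /etc/hosts, answering the question above), and all resource names are placeholders:
    # clresourcegroup create web-rg
    # clresource create -g web-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/global/web web-hasp-rs
    # clreslogicalhostname create -g web-rg -h web-vip web-lh-rs
    # clresource create -g web-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin -p Port_list=80/tcp -p Resource_dependencies=web-hasp-rs,web-lh-rs web-apache-rs
    # clresourcegroup online -emM web-rg      (manage the group, enable its resources, bring it online)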

  • Sun Cluster, VxVM mode - "mode: enabled: cluster inactive"

    Hi,
    I have installed Sun Cluster 3.2 on Solaris 9 (Solaris 9 9/05). I want to make it an active-active setup with shared Veritas DGs. The setup also has VxVM 5 (Veritas-5.0_MP1_RP4.4) with rolling patch 4, and Solaris has all the latest patches applied via updatemanager. The shared storage comes from a DMX800.
    In order to get VxVM into cluster mode I have installed licenses for CVM and VCS, and also the ORCLudlm (3.3.4.8) package.
    The Sun Cluster install has all the necessary framework packages, but VxVM refuses to enter cluster mode:
    # vxdctl -c mode
    mode: enabled: cluster inactive
    The issue is that the udlm daemon "dlmmon" isn't starting.
    I also see the errors below:
    cacao: Error: Fail to start cacao agent. (instance default)
    Error: Fail to start cacao agent. (instance default)
    And the messages file on nodeA shows the error below:
    [ID 988885 daemon.error] libpnm error: can't connect to PNMd on nodeB
    I am at my wits' end on how to resolve this issue :(
    Any help is appreciated.
    Regards,
    Ashish

    Well, it could be the problem I ran into... I went round and round for ages trying to figure out what was wrong before I realised my mistake.
    Assuming you have VxVM/CVM licensed properly, check that ORCLudlm is installed on all nodes. Then create your rac-framework-rg and ensure you have a rac-framework-rs, a rac-udlm-rs AND a rac-cvm-rs resource. Unless you have all of these and they can be enabled and brought online, you'll have exactly the problem you are seeing.
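    In other words, something along these lines (SC 3.2 syntax; the names follow the usual convention but treat them as placeholders):
    # clresourcegroup create -n node1,node2 -p Maximum_primaries=2 -p Desired_primaries=2 rac-framework-rg
    # clresource create -g rac-framework-rg -t SUNW.rac_framework rac-framework-rs
    # clresource create -g rac-framework-rg -t SUNW.rac_udlm -p Resource_dependencies=rac-framework-rs rac-udlm-rs
    # clresource create -g rac-framework-rg -t SUNW.rac_cvm -p Resource_dependencies=rac-framework-rs rac-cvm-rs
    # clresourcegroup online -emM rac-framework-rg
    Once those resources are online on both nodes, vxdctl -c mode should report the cluster as active.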
    Hope that helps,
    Tim
    Edited by: Tim.Read on Feb 19, 2008 4:08 AM
    Ooops missed the rac-udlm-rs ... Doh!

  • Sun Cluster 3.1 setup

    Dear All,
    Soon we will be upgrading our Sun Cluster 3.1. I am now working on a testing site.
    What I am trying to set up is two servers with SC 3.1 to simulate the migration procedure; however, we don't have a SAN at the testing site, so I am stuck on the quorum configuration.
    I was told SC 3.1 can be set up without any SAN, using only local disks, but I cannot locate any related documentation.
    Could anyone please help with any tips? How can I set up the quorum device on NFS or even just a local disk?
    Thanks and Regards,
    Donald
    Edited by: Foo Donald on 2011/7/14 at 1:07 AM

    Hi Nik,
    I have set up a Sun Cluster 3.2 quorum server on a third system, listening on port 9000.
    Please correct me if I am wrong, but it seems the Sun Cluster 3.1 command scconf cannot see the quorum server; it cannot specify the IP or the port.
    The testing site is on Solaris 8 + Sun Cluster 3.1; it will be upgraded to Solaris 10 + Sun Cluster 3.2 via Live Upgrade.
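    For reference, once a node is on Sun Cluster 3.2, adding the quorum server should look roughly like this (syntax from memory, so double-check clquorum(1CL); "qs1" is a placeholder name):
    # clquorum add -t quorum_server -p qshost=<quorum-server-IP> -p port=9000 qs1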
    Thanks and Regards,
    Donald

  • Sun Cluster 3.1 I/O error

    Hi,
    I have two cluster nodes running Solaris 9/05 with Sun Cluster 3.1. After a migration from Hitachi AMS1000 storage to a Sun StorageTek 9985V, when I shut down one node in the cluster, the mounted volumes on the second node give I/O errors. I have already installed the new patches for the OS, cluster, and SAN, but the problem still persists. Please help me.
    Regards,
    Arun

    Arun,
    You say you migrated to the 9985V - did you do that with backup and restore or with a replication technology? If it was the latter, you might have inadvertently copied over some SCSI reservation keys. Otherwise, I can't see any reason for the problem.
    SCSI keys can be removed (with extreme care) using the scsi and pgre commands in the /usr/cluster/lib/sc directory.
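    For the record, listing the keys is harmless; only the scrub step is destructive. A sketch (d4 is a placeholder DID device):
    # /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2       (list SCSI-3 PGR keys)
    # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2  (list SCSI-2 PGRE keys)
    # /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d4s2        (remove all SCSI-3 keys; extreme care)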
    Tim
    ---

  • Does the Sun Cluster 3.0/3.1 HA Oracle agent support the Oracle spfile?

    When defining the resource, the 'parameter_file' property is usually set to the Oracle pfile. Is it possible to use an Oracle spfile?
    It is said that if 'parameter_file' is left NULL, it falls back to the Oracle default. Suppose it is left NULL and the Oracle spfile is created in the default location: will the agent use the spfile?

    You did not specify which Sun Cluster version and Oracle version you are running.
    With SC 3.1, my understanding is that beginning with Oracle 9i it is possible to use the spfile.
    If you leave the "parameter_file" property empty (NULL), then the default 9i behaviour should work:
    search under $ORACLE_HOME/dbs in the order:
    1. spfile${ORACLE_SID}.ora
    2. spfile.ora
    3. init${ORACLE_SID}.ora
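    So clearing the property should be enough for the 9i default lookup to apply; a sketch with the SC 3.1 CLI (the resource name ora-db-rs is a placeholder):
    # scrgadm -c -j ora-db-rs -x Parameter_file=""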
    Greets
    Thorsten

  • Encountered ORA-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all out there
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
    ORA-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC server resources.
    We tried the attached workaround from Oracle. The workaround works the first time, but it no longer works once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem and managed to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if the following MOS note helps.
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • SAP 7.0 on Sun Cluster 3.2 (Solaris 10 / SPARC)

    Dear All,
    I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
    The SAP and Oracle software was installed successfully, and I was able to cluster the Oracle DB; it is tested and working fine.
    For SAP I did the following configuration:
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
    When trying to bring sap-ci-res-grp online with
    # clresourcegroup online -M sap-ci-res-grp
    it executes the startsap scripts successfully, as follows:
    Sun Microsystems Inc.     SunOS 5.10     Generic     January 2005
    stty: : No such device or address
    stty: : No such device or address
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    saposcol on host eccprd01 started
    Starting SAP Instance ASCS00
    Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
    saposcol on host eccprd01 started
    Running /usr/sap/PRD/SYS/exe/run/startj2eedb
    Trying to start PRD database ...
    Log file: /export/home/prdadm/startdb.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    /usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
    Starting SAP Instance SCS01
    Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    Instance on host eccprd01 started
    Instance on host eccprd01 started
    Then it repeats the following warnings in /var/adm/messages until the resource group fails over to the other node:
    Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    [the same two messages repeat for both resources every few seconds, from 12:26:22 to 12:27:46, until the start method gives up]
    Can anyone help me find out whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
    ARSSES

    Hi all.
    I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
    Scenario:
    Central Instance (not in cluster): started on one node
    Dialog Instance (not in cluster): started on the other node
    When I create the resource for SUNW.sap_as like
    clrs create -g sap-rg -t SUNW.sap_as .....etc etc
    I get lots of "WAITING FOR DISPATCHER TO COME UP" messages in /var/adm/messages.
    Then, after the timeout, it gives up.
    Any clue? What is it trying to connect to or waiting for? I have noticed that it is something before the startup script...
    TIA

  • Creating a logical hostname in Sun Cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    For registering a logical hostname resource in a failover group, what exactly do I need to specify?
    For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the nodes in the Sun Cluster)? Can I get clarification on this?

    Thanks, Thorsten, for your continued help...
    The output of clrs status abc_lg:
    === Cluster Resources ===

    Resource Name    Node Name    State      Status Message
    -------------    ---------    -----      --------------
    abc_lg           node1        Offline    Offline
                     node2        Offline    Offline

    The status is offline...
    The output of clresourcegroup status:
    === Cluster Resource Groups ===

    Group Name    Node Name    Suspended    Status
    ----------    ---------    ---------    ------
    abc_rg        node1        No           Unmanaged
                  node2        No           Unmanaged
    You say the resource should be enabled after creating it. I am using GDS, and I am just following the steps provided to achieve high availability (in the developer's guide...).
    In my failover resource group I have 1) a logical hostname resource and
    2) an application resource.
    When I bring the failover resource group online, what should the status of the group and of the resources in it be?
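    Unmanaged means the group has never been put under RGM control, so nothing in it can go online yet. A minimal sketch using the names from your output:
    # clresourcegroup online -emM abc_rg      (-M manages the group, -e enables its resources, -m enables monitoring)
    After that, clresourcegroup status should show abc_rg Online on one node, and clrs status should show abc_lg Online there as well.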
