SUN Cluster 3.2 SNMP polling

Hi,
I want to know if it is possible to do SNMP polling for Sun Cluster 3.2 with an SNMP tool such as snmpwalk.
What command should I run with snmpwalk? I also need to know how to activate SNMP polling on the Sun Cluster.
Which values can be polled from the sun-cluster-mib?
Thanks.

Why did you post the same post 8 times?
.7/M.

Similar Messages

  • Query Sun Cluster 3.2 With SNMP?

    Hello,
    Is there a way to glean cluster information from SNMP without the use of Sun Management Center (SMC)? My understanding is that the cluster can be configured to send traps to an SNMP monitoring host, but can the cluster nodes be queried in any way using something such as snmpget?
    Thank you and any and all help appreciated.
    Regards,
    Peter

    please check this blog
    http://blogs.oracle.com/SC/entry/sun_cluster_3_2_snmp
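    For reference, a minimal sketch of enabling and walking the cluster event MIB on a Sun Cluster 3.2 node (the community string, management host name, and the default SNMP port 11161 are assumptions; verify the port with "cacaoadm get-param snmp-adaptor-port"):
    # clsnmpmib enable event                        (enable the cluster event MIB on this node)
    # clsnmphost add -c public mgmt-host            (authorize the polling host/community)
    # snmpwalk -v2c -c public clusternode1:11161 .1.3.6.1.4.1.42
    The walk starts at the Sun enterprise OID (1.3.6.1.4.1.42); the blog entry above describes the objects the event MIB exposes.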

  • Beta Refresh Release Now Available!  Sun Cluster 3.2 Beta Program

    The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
    To apply for the Sun Cluster 3.2 Beta program, please visit:
    https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
    or contact Eric Redmond <[email protected]>.
    New Features in Sun Cluster 3.2
    Ease of use
    * New Sun Cluster Object Oriented Command Set
    * Oracle RAC 10g improved integration and administration
    * Agent configuration wizards
    * Resources monitoring suspend
    * Flexible private interconnect IP address scheme
    Availability
    * Extended flexibility for fencing protocol
    * Disk path failure handling
    * Quorum Server
    * Cluster support for SMF services
    Flexibility
    * Solaris Container expanded support
    * HA ZFS
    * HDS TrueCopy campus cluster
    * Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
    * Multi-terabyte disk and EFI label support
    * Veritas Volume Replicator 5.0 support
    * Veritas Volume Manager 4.1 support on x86 platform
    * Veritas Storage Foundation 5.0 File System and Volume Manager
    OAMP
    * Live upgrade
    * Dual partition software swap (aka quantum leap)
    * Optional GUI installation
    * SNMP event MIB
    * Command logging
    * Workload system resource monitoring
    Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
    Sun Cluster 3.2 beta supports the following Data Services
    * Apache (shipped with the Solaris OS)
    * DNS
    * NFS V3
    * Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB

    Without speculating on the release date of Sun Cluster 3.x or even its feature list, I would like to understand what risk Sun would be taking if Sun Cluster supported ZFS as a failover file system. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
    BTW: this means that even Veritas will have to do something about ZFS!
    If VCS is a much better option, it would be interesting to understand what features are missing from Sun Cluster to make it really competitive.
    Thanks
    Hartmut

  • Configuration of LUNs to Sun Cluster

    Hi,
    I have a 2-node Sun Cluster (3.2) running on 2 x E2900, Solaris 10...
    Basically, there are 3 installed databases running on the development environment and I need to cluster all 3 in the global zone, do some failovers, and then engage Sun PS to come on site and configure the production cluster environment...
    Usually I have already configured metasets or ZFS before the DBA installs the DB, while everything is nice and neat; my question, however, is what is the best way to cluster the LUNs when they already hold data which I cannot (or would prefer not to) lose.
    I believe the creation of LUNs in a metaset will destroy the data, and obviously ZFS pools will also destroy any data... hopefully this is a simple question from an SC novice :)
    Thanks...

    Thanks Tim, that answers the question... one more though :)
    I was advised to install a single-node cluster and then add the 2nd node to the config later. I've done this, but when I try to do the add it seems I have a problem with the cluster interconnects and receive the messages:
    Adding cable to the cluster configuration ... failed
    scrconf: Failed to add cluster transport cable - does not exist
    scinstall: Failed to update cluster configuration ("-m endpoint=<server>:ce3,endpoint=switch1")
    The heartbeats are ce3 and ce7, which I know are working OK. I've tried everything from the 1st node, but when I enter:
    # scstat -W
    nothing is shown, although when I do a scconf -p I can see the node transport adapters OK... so how do I give the 2nd node access to the cluster interconnects? I've tried clsetup, adding the interconnects via option 4, and I remember configuring them during installation...
    Again any input would be greatly received...
    Thanks...
    Steve..
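    A sketch of registering the missing transport pieces from the first node before re-running the add (the adapter and switch names are taken from the messages above; the joining node name is hypothetical):
    # scconf -a -B type=switch,name=switch1              (register the transport switch, if absent)
    # scconf -a -A trtype=dlpi,name=ce3,node=node2       (register the adapter on the joining node)
    # scconf -a -m endpoint=node2:ce3,endpoint=switch1   (add the cable)
    Repeat for ce7/switch2, verify with scconf -p, then retry the scinstall add.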

  • Sun Cluster question

    Hello everyone
    I've inherited an Oracle Solaris system holding ASE Sybase databases. The system consists of two nodes inside a Sun Cluster. Each of the nodes hosts 2 Sybase database instances, where one of the nodes is active and the other is standing by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
    My intended scenario:
    Each node holds 2 database instances, and both nodes ARE working at the same time so that each one serves one instance of the database. In the event of failure on one node, the other one should assume the role of BOTH database instances till the first one gets fixed.
    The question is: is that possible? And if it is, does it require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
    Thanks a lot in advance

    What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
    Also, the current SUNW.HAStoragePlus(5) manpage documents:
            Note -   SUNW.HAStoragePlus does not support file systems
                     created on ZFS volumes.
                     You cannot use SUNW.HAStoragePlus to manage a
                     ZFS storage pool that contains a file system for
                     which the ZFS mountpoint property is set to
                     legacy or none. [...]
    Greets
    Thorsten
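    For the record, the usual pattern for this kind of active/active setup is two failover resource groups with opposite node-list preferences, one per Sybase instance, e.g. (a sketch; group and node names are hypothetical):
    # clresourcegroup create -p nodelist=nodeA,nodeB sybase1-rg
    # clresourcegroup create -p nodelist=nodeB,nodeA sybase2-rg
    Each group then carries its own logical host, storage, and Sybase resources, so either node can host both groups after a failure.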

  • Deploy HA Zones with Sun Cluster

    Hi
    I have 2 physical Solaris 10 servers with a StorEdge array for the shared storage.
    I have installed Sun Cluster 3.3 on both nodes and sorted out the quorum and shared drive, using a ZFS file system for a mount point.
    Next I installed a non-global zone on 1 node, with the zone path on the shared file system.
    When I switch the shared file system over, the zone is not installed on the 2nd node.
    So when I try to install the zone on the 2nd node,
    I get a "Rootpath is already mounted on this filesystem" error.
    Does anyone know how to set up a Sun Cluster with HA zones, please?

    The option to forcibly attach a zone was added to zoneadm with a Solaris 10 Update release. With that option, the procedure to configure and install a zone for HA Container use can be as follows.
    The assumption is there is already a RG configured with a HASP resource managing the zpool for the zone rootpath:
    a) Switch the RG online on node A
    b) Configure (zonecfg) and install (zoneadm) the zone on node A on shared storage
    c) Boot the zone and go through interactive sysidcfg within "zlogin -C zonename"
    d) Switch the RG hosting the HASP resource for the pool to node B
    e) Configure (zonecfg) the zone on node B
    f) "Install" the zone by forcibly attaching it: zoneadm -z <zonename> attach -F
    The user can then test if the zone boots on node B, halt it, and proceed with the sczbt resource registration as described within http://download.oracle.com/docs/cd/E18728_01/html/821-2677/index.html.
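    A minimal command-level sketch of that procedure (the zone name and zonepath are hypothetical; the zonepath must live on the shared zpool managed by the HASP resource):
    # zonecfg -z ha-zone                         (on node A)
    zonecfg:ha-zone> create
    zonecfg:ha-zone> set zonepath=/hapool/zones/ha-zone
    zonecfg:ha-zone> commit
    # zoneadm -z ha-zone install
    # zoneadm -z ha-zone boot ; zlogin -C ha-zone
    Then switch the RG to node B, repeat the zonecfg step there, and attach:
    # zoneadm -z ha-zone attach -F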
    Regards
    Thorsten

  • Sun Cluster 3.2 + Apache Web Server (Apache HA)

    Hi All,
    I am trying to set up an HA Apache cluster with Sun Cluster 3.2 and Apache version 2.2.15.
    I have run into a problem where the resource cannot be created:
    clresource: (C273000) apache-web.nntele.net-rs: Invalid resource name
    clresource: (C891200) Failed to create resource "apache-web.nntele.net-rs".
    I ran the following commands:
    /usr/cluster/bin/clresourcegroup create -p nodelist=node2.nntele.net,node1.nntele.net apache-server-rg
    /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus:8 -g apache-server-rg -p FilesystemMountPoints=/global/apache global_apache-rs
    /usr/cluster/bin/clresourcegroup online -emM apache-server-rg
    /usr/cluster/lib/ds/apache/configureApache.ksh copyConfiguration node2.nntele.net /global/apache /usr/local/apache2/conf/httpd.conf /usr/local/apache2/htdocs /global/apache/apache-web.nntele.net-rs apache-web.nntele.net-rs
    /usr/cluster/bin/clreslogicalhostname create -g apache-server-rg -h web.nntele.net -N [email protected],[email protected] web-nntele-net-rs
    /usr/cluster/bin/clresource create -t SUNW.apache:4.1 -g apache-server-rg -p Resource_dependencies=global_apache-rs,web-nntele-net-rs -p Port_list=80/tcp -p Bin_dir=/global/apache/apache-web.nntele.net-rs/bin apache-web.nntele.net-rs
    Please help me... Thanks in advance.
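    For what it's worth, the C273000 error is most likely caused by the periods in the resource name: Sun Cluster resource names may only contain letters, digits, hyphens, and underscores. A sketch of the last command with a legal name substituted:
    /usr/cluster/bin/clresource create -t SUNW.apache:4.1 -g apache-server-rg -p Resource_dependencies=global_apache-rs,web-nntele-net-rs -p Port_list=80/tcp -p Bin_dir=/global/apache/apache-web.nntele.net-rs/bin apache-web-nntele-net-rs
    (The configureApache.ksh argument would need the same rename.)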

    You can't do it with the agent directly as it stands, as far as I can tell. However, you could do it indirectly by creating a pool and binding that pool to the FX scheduler. You then create a zone, bind the zone to the pool, and put the HA Oracle instance in the zone. If you do that on both nodes you achieve what you want (I think). I've never tried this, but it would appear to be feasible.
    i.e. your RG list would be:
    node1:zone1,node2:zone2
    Regards,
    Tim
    ---

  • Sun Cluster 3.0 and VxVM 3.2 problems at boot

    I have a little problem with a two-node cluster (2 x 480R + 2 x 3310 with a single RAID controller).
    Every 3310 has 3 (RAID 5) LUNs.
    I've mirrored these 3 LUNs with VxVM, and I've also mirrored the 2 internal (OS) disks.
    One of the disks of the first 3310 is the quorum disk.
    Every time I boot the nodes, I read an error at "block 0" of the quorum disk, and then an annoying resynchronization of the mirrors starts (sometimes also of the OS mirror).
    Why does this happen?
    Thanks.
    Regards,
    Mauro.

    We did another test today and again the resource group went into a STOP_FAILED state. On this occasion, the export of the corresponding ZFS pool timed out. We were able to successfully bring the resource group online on the desired cluster node, and subsequent failovers worked fine. There's something strange happening when the zpool is being exported (e.g., error correction?). Once the zpool is exported, further imports of it seem to work fine.
    When we first had the problem, we were able to manually export and import the zpools, though they did take quite some time to export/import.
    "zpool list" shows we have a total of 7 zpools.
    "zfs list" shows we have a total of 27 zfs file systems.
    Is there any specific Sun or otherwise links to any problems with Sun Cluster and ZFS?

  • Messaging Server on Sun Cluster

    Hi,
    My messaging server is Sun ONE Messaging Server version 6.1, Directory Server version 5.2, Sun Cluster 3.1.
    I installed the Directory Server to use the name "ldap.ezone.com", and I want to configure the Messaging Server to use "mail.ezone.com" for Sun Cluster.
    But I cannot configure the Messaging Server to use this name.
    Can you let me know how to install the Messaging Server and run the "configure" script so that it uses "mail.ezone.com"?
    Thanks
    Lee.
    /etc/hosts
    192.168.5.40 beef.ezone.com beef
    192.168.5.44 ldap.ezone.com ldap
    192.168.5.50 mail.ezone.com mail
    imta.cnf after running the configure script =====================
    ! IMTA configuration file
    ! part I : rewrite rules
    ! Domain Rewrite Rules.
    ! Uncomment this line to use domain rewrite rules
    ! from the configuration file instead of the domain database.
    ! Please refer to the iMS documentation for details.
    !<IMTA_TABLE:domains.rules
    ! Rules to select local users
    $* $A$E$F$U%[email protected]
    ldap.ezone.com $U%[email protected]
    ezone.com $U%[email protected]
    ! ims-ms
    .ims-ms-daemon $U%$H.ims-ms-daemon@ims-ms-daemon
    ! native
    .native-daemon $U%$H.native-daemon@native-daemon
    ! pipe
    .pipe-daemon $U%$H.pipe-daemon@pipe-daemon
    ! tcp_local
    ! Rules for top level internet domains
    <IMTA_TABLE:internet.rules
    ! tcp_intranet
    ! Do mapping lookup for internal IP addresses
    [] $E$R${INTERNAL_IP,$L}$U%[$L]@tcp_intranet-daemon
    .ezone.com $U%$H.ezone.com@tcp_intranet-daemon
    * $U%$&0.ezone.com
    ! reprocess
    reprocess $U%reprocess.ldap.ezone.com@reprocess-daemon
    reprocess.ldap.ezone.com $U%reprocess.ldap.ezone.com@reprocess-daemon
    ! process
    process $U%process.ldap.ezone.com@process-daemon
    process.ldap.ezone.com $U%process.ldap.ezone.com@process-daemon
    ! defragment
    defragment $U%defragment.ldap.ezone.com@defragment-daemon
    defragment.ldap.ezone.com $U%defragment.ldap.ezone.com@defragment-daemon
    ! conversion
    conversion $U%conversion.ldap.ezone.com@conversion-daemon
    conversion.ldap.ezone.com $U%conversion.ldap.ezone.com@conversion-daemon
    ! bitbucket
    bitbucket $U%bitbucket.ldap.ezone.com@bitbucket-daemon
    bitbucket.ldap.ezone.com $U%bitbucket.ldap.ezone.com@bitbucket-daemon
    ! deleted
    deleted-daemon $U%$H@deleted-daemon
    .deleted-daemon $U%$H@deleted-daemon
    ! inactive
    inactive-daemon $U%$H@inactive-daemon
    .inactive-daemon $U%$H@inactive-daemon
    ! hold
    hold-daemon $U%$H@hold-daemon
    .hold-daemon $U%$H@hold-daemon
    ! part II : channel blocks
    defaults notices 1 2 4 7 copywarnpost copysendpost postheadonly noswitchchannel immnonurgent maxjobs 7 defaulthost ezone.com ezone.com
    ! delivery channel to local /var/mail store
    l subdirs 20 viaaliasrequired maxjobs 7 pool LOCAL_POOL
    ldap.ezone.com
    ! ims-ms
    ims-ms defragment subdirs 20 notices 1 7 14 21 28 backoff "pt5m" "pt10m" "pt30m" "pt1h" "pt2h" "pt4h" maxjobs 2 pool IMS_POOL fileinto $U+$S@$D
    ims-ms-daemon
    ! native
    native defragment subdirs 20 maxjobs 1
    native-daemon
    ! pipe
    pipe single defragment subdirs 20
    pipe-daemon
    ! tcp_local
    tcp_local smtp mx single_sys remotehost inner switchchannel identnonenumeric subdirs 20 maxjobs 7 pool SMTP_POOL maytlsserver maysaslserver saslswitchchannel tcp_auth
    tcp-daemon
    ! tcp_intranet
    tcp_intranet smtp mx single_sys subdirs 20 dequeue_removeroute maxjobs 7 pool SMTP_POOL maytlsserver allowswitchchannel saslswitchchannel tcp_auth
    tcp_intranet-daemon
    ! tcp_submit
    tcp_submit submit smtp mx single_sys mustsaslserver maytlsserver
    tcp_submit-daemon
    ! tcp_auth
    tcp_auth smtp mx single_sys mustsaslserver
    tcp_auth-daemon
    ! tcp_tas
    tcp_tas smtp mx single_sys allowswitchchannel mustsaslserver maytlsserver deliveryflags 2
    tcp_tas-daemon
    ! tcp_lmtps
    tcp_lmtps lmtp subdirs 20
    tcp_lmtps-daemon
    ! reprocess
    reprocess
    reprocess-daemon
    ! process
    process
    process-daemon
    ! defragment
    defragment
    defragment-daemon
    ! conversion
    conversion
    conversion-daemon
    ! bitbucket
    bitbucket
    bitbucket-daemon
    ! deleted
    deleted
    deleted-daemon
    ! inactive
    inactive
    inactive-daemon
    ! hold
    hold slave
    hold-daemon

    I'm sorry, I don't see any problem with your imta.cnf, nor do I understand exactly what the problem is. All you say is, "And I want to configure the messaging server to use "mail.ezone.com" for Sun Cluster.
    But I can not configure the messaging server to use this name."
    What prevents you from configuring Messaging Server to use "mail.ezone.com" as its name?
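    One hedged pointer: for an HA deployment, the configure script is normally run while the logical host is online on the local node, supplying the logical name as the fully qualified host name when prompted, along these lines (the resource group name is hypothetical; the install path is the Messaging Server 6 default):
    # scswitch -z -g mail-rg -h beef            (bring mail.ezone.com online on this node)
    # /opt/SUNWmsgsr/sbin/configure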

  • Does sun Cluster 3.0/3.1 HA Oracle agent support to use Oracle spfile?

    When defining the resource, the 'parameter_file' property is usually set to the Oracle pfile. Is it possible to use an Oracle spfile?
    It is said that if 'parameter_file' is left NULL, it defaults to the Oracle default. Suppose I leave it NULL and an Oracle spfile is created in the default location: will it use the spfile?

    You did not specify which Sun Cluster version and Oracle version you are running.
    Within SC 3.1, my understanding is that beginning with Oracle 9i it is possible to use the spfile.
    If you leave the "parameter_file" property empty (NULL), then the default behaviour for 9i should work:
    search under $ORACLE_HOME/dbs in the order:
    1. spfile${ORACLE_SID}.ora
    2. spfile.ora
    3. init${ORACLE_SID}.ora
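    A sketch of clearing the property on an existing HA Oracle resource so that this search order applies (the resource name is hypothetical; Parameter_file is an extension property, hence -x with the SC 3.1 command set):
    # scrgadm -c -j ora-db-rs -x Parameter_file=""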
    Greets
    Thorsten

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all out there
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC server resources.
    We tried the attached workaround from Oracle. This workaround works well the 1st time, but it doesn't work anymore once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem as the above and was able to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if following MOS note helps.
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)

    Dear All;
    I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
    The SAP and Oracle software was successfully installed, and I could successfully cluster the Oracle DB; it is tested and working fine.
    For SAP I did the following configurations:
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
    When I try to bring sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp,
    it executes the startsap scripts successfully, as follows:
    Sun Microsystems Inc.     SunOS 5.10     Generic     January 2005
    stty: : No such device or address
    stty: : No such device or address
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    saposcol on host eccprd01 started
    Starting SAP Instance ASCS00
    Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
    saposcol on host eccprd01 started
    Running /usr/sap/PRD/SYS/exe/run/startj2eedb
    Trying to start PRD database ...
    Log file: /export/home/prdadm/startdb.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    /usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
    Starting SAP Instance SCS01
    Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    Instance on host eccprd01 started
    Instance on host eccprd01 started
    and then it repeats the following warnings in /var/adm/messages till it fails over to the other node:
    Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:28 eccprd01 last message repeated 1 time
    [... the same two "Waiting for SAP Central Instance main dispatcher to come up." messages for sap-ci-scs-res and sap-ci-Ascs-res repeat every few seconds until Jun 4 12:27:46, where the excerpt ends ...]
    Can anyone help me: is there an error in the configuration, or what is the cause of this problem? ...thanks in advance
    ARSSES

    Hi all.
    I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
    Scenario:
    Central instance (not in cluster): started on one node
    Dialog instance (not in cluster): started on the other node
    When I create the resource for SUNW.sap_as like
    clrs create -g sap-rg -t SUNW.sap_as ..... etc etc
    in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP....
    Then after the timeout it gives up.
    Any clue? What does it try to connect to, or wait for? I have noticed that it's something before the startup script...
    TIA

  • Creating Logical hostname in sun cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    For registering a logical hostname resource in a failover group, what exactly do I need to specify?
    For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the nodes in the Sun Cluster)? Can I get clarification on this?
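    A short sketch for context: a logical hostname is a floating IP address that moves with the resource group, not one of the node addresses. It needs its own entry in /etc/hosts (or the naming service) on all nodes, and is registered roughly like this (hostname, group, and resource names are hypothetical):
    # clreslogicalhostname create -g my-failover-rg -h my-logical-host my-lh-rs
    Clients connect to my-logical-host, and Sun Cluster plumbs that address on whichever node currently hosts the group.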

    Thanks Thorsten for your continued help...
    The output of clrs status abc_lg:
    === Cluster Resources ===
    Resource Name    Node Name    State      Status Message
    abc_lg           node1        Offline    Offline
                     node2        Offline    Offline
    The status is offline...
    The output of clresourcegroup status:
    === Cluster Resource Groups ===
    Group Name    Node Name    Suspended    Status
    abc_rg        node1        No           Unmanaged
                  node2        No           Unmanaged
    You say that the resource should be enabled after creating the resource. I am using GDS and I am just following the steps provided to achieve high availability (in the developer's guide...).
    I have 1) a logical hostname resource
    2) an application resource in my failover resource group
    When I bring the failover resource group online, what should the status of my failover resource group and of the resources in my resource group be?
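    Both groups above are Unmanaged, which explains the Offline resources. A sketch of the usual sequence (the group name is taken from the output above), matching the -emM form used elsewhere in this thread:
    # clresourcegroup online -emM abc_rg
    -M puts the group under RGM management and -e enables its resources; the group should then show Online on one node, with its resources Online there.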

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a remote file system that is created on a SAN disk (SAN LUN) between two Sun Solaris 10 servers.
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly, I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
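    A sketch of what the global file system looks like in practice (the metadevice path and mount point are hypothetical); the same vfstab entry goes on both nodes, and the UFS file system is mounted with the global option:
    /dev/md/datads/dsk/d100 /dev/md/datads/rdsk/d100 /global/data ufs 2 yes global,logging
    After that, /global/data is visible and read/writable from both nodes at once.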
    Regards,
    Tim
    ---

  • Connected to an idle instance in sun cluster nodes.

    I have two Sun Cluster nodes sharing common storage.
    Two instances (SIDs):
    test1 on node A
    test2 on node B.
    My requirement is as below:
    Log in to node B.
    export ORACLE_SID=test1
    sqlplus / as sysdba
    But I get
    "connected to an idle instance"
    Is there any way to connect to the node A instance from node B?

    I found the answer:
    sqlplus <sysdbauser>/<password>@test1 as sysdba
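    For that connection string to resolve, tnsnames.ora on node B needs an entry pointing at the node (or logical host) where test1 runs, along these lines (the host name and listener port are hypothetical):
    TEST1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = test1))
      )
    The earlier "connected to an idle instance" happened because ORACLE_SID=test1 on node B points at local shared memory, where no test1 instance is running.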
