Cluster 3.2 SUNW.LogicalHostname deprecated? bug?

Just wondering if this is normal for Cluster 3.2: the LogicalHostname/virtual interface is tagged as DEPRECATED when it is configured. I would have expected the opposite, with the real IP on the base interface marked DEPRECATED and the virtual address left alone, so that the virtual address would be used as the source IP for outbound traffic.
Is there any way to change this behaviour in the cluster?
nxge3: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 3
inet 10.10.10.81 netmask ffffff00 broadcast 10.10.10.255
groupname sc_ipmp1
ether 0:14:4f:66:2a:6b
nxge3:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
inet 10.10.10.73 netmask ffffff00 broadcast 10.10.10.255

I do agree that this could cause problems if applications are not cluster aware, but I would like the option. Right now the application (Oracle) is set up with dependencies on the logical hostname, so it will fail over when the address does.
It still does not make sense to me that they set the logical IP as DEPRECATED when the default system behaviour produces the same effect.
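
(If anyone wants to experiment, here is a minimal manual test using the interface and address shown above; the Solaris ifconfig deprecated/-deprecated modifiers toggle the flag by hand, although the cluster's hafoip methods may simply re-apply it the next time the resource is brought online.)
# clear the flag on the logical interface by hand
ifconfig nxge3:1 -deprecated
# verify the DEPRECATED flag is gone
ifconfig nxge3:1
# then switch the resource group away and back and check whether the flag returns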

Similar Messages

  • LogicalHostname IP won't fail over when one member of the cluster dies

    Hi There,
    I've set up a failover cluster with 2 servers. The cluster IP is set up as a logicalhostname, and each server has two network cards configured as an IPMP group.
    I can test the IPMP failover on each server by failing a network card and checking that the IP address fails over.
    I can test that the logical hostname fails over by switching the resource group from one node to the other.
    BUT
    If I drop one member of the cluster the failover fails
    Nov 4 15:09:06 nova cl_runtime: NOTICE: clcomm: Path nova:qfe2 - gambit:qfe2 errors during initiation
    Nov 4 15:09:06 nova cl_runtime: WARNING: Path nova:ce1 - gambit:bge1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    Nov 4 15:09:06 nova cl_runtime: WARNING: Path nova:qfe2 - gambit:qfe2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    Nov 4 15:09:08 nova Cluster.PNM: PNM daemon system error: SIOCLIFADDIF failed.: Network is down
    Nov 4 15:09:08 nova Cluster.PNM: production can't plumb 130.159.17.1.
    Nov 4 15:09:08 nova SC[SUNW.LogicalHostname,test-vle,vle1,hafoip_prenet_start]: IPMP logical interface configuration operation failed with <-1>.
    Nov 4 15:09:08 nova Cluster.RGM.rgmd: Method <hafoip_prenet_start> failed on resource <vle1> in resource group <test-vle>, exit code <1>, time used: 0% of timeout <300 seconds>
    Nov 4 15:09:08 nova ip: TCP_IOC_ABORT_CONN: local = 130.159.017.001:0, remote = 000.000.000.000:0, start = -2, end = 6
    Nov 4 15:09:08 nova ip: TCP_IOC_ABORT_CONN: aborted 0 connection
    scswitch: Resource group test-vle failed to start on chosen node and may fail over to other node(s)
    Any ideas would be appreciated, as I don't understand how it all fails over correctly when the cluster is up but fails when one member is down.

    Hi,
    looking at the messages, the problem seems to be with the network setup on nova. I would suggest trying to configure the logical IP on nova manually to see if that works. If it does not, that should tell you where the problem is.
    Or are you saying that manually switching the RG works, but when a node dies and the cluster switches the RG it doesn't? That would be strange.
    You should also post the status of your network on nova in the failure case. There might be something wrong with your IPMP setup. Or did the public net fail completely when you killed the other node?
    Regards
    Hartmut
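
    (To try the manual test suggested above, something like the following on nova; ce0 here is only a placeholder for whichever NIC carries the public IPMP group, so substitute the real interface, and clean up afterwards.)
    # plumb the logical address by hand on the public interface (interface name assumed)
    ifconfig ce0 addif 130.159.17.1 netmask + broadcast + up
    # check that it appeared, then remove it again
    ifconfig -a | grep 130.159.17.1
    ifconfig ce0 removeif 130.159.17.1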

  • Failover to deprecated interface

    Hi,
    I have a 2-node cluster (SC 3.2) with link-based IPMP. I've configured a very basic resource group with a LogicalHostname and a simple Apache start script to test failover.
    I have 2 interfaces (ce0 and ce4) with the management IP on ce0. My problem is that when I start the logicalhostname resource it is created on ce4; this also happens when I fail the resource group over to the other node.
    I have cut/pasted some information below and any input would be gratefully received. Please note I have had to remove the first two octets as our security policy requires; if any more output is needed please let me know.
    On another note, I created a dependency from apache-test to testip-res, so what is the best command to view the actual dependency? (In VCS it would be a simple hares command.)
    root@server1 # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet xx.xx.69.224 netmask ffffff80 broadcast xx.xx.69.255
    groupname man_ipmp
    ether 0:14:4f:7c:c5:1c
    ce3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
    inet xx.xx.0.130 netmask ffffff80 broadcast xx.xx.0.255
    ether 0:14:4f:43:67:5
    ce4: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
    groupname man_ipmp
    ether 0:14:4f:43:67:6
    ce4:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
    inet xx.xx.69.225 netmask ffffff80 broadcast xx.xx.69.255
    ce7: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet xx.xx.1.2 netmask ffffff80 broadcast xx.xx.1.127
    ether 0:14:4f:43:79:21
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 6
    inet xx.xx.4.2 netmask fffffe00 broadcast xx.xx.5.255
    ether 0:0:0:0:0:2
    root@server1 # cat /etc/hostname.ce0
    server1 group man_ipmp up
    root@server1 # cat /etc/hostname.ce4
    group man_ipmp up
    root@server1 # cat /etc/hosts
    # Internet host table
    127.0.0.1 localhost
    xx.xx.69.224 server1 loghost
    xx.xx.69.227 server2
    # Logical IP for SunCluster test
    xx.xx.69.225 test-ip
    root@server1 # clrg show
    === Resource Groups and Resources ===
    Resource Group: sterg
    RG_description: <NULL>
    RG_mode: Failover
    RG_state: Managed
    Failback: False
    Nodelist: server1 server2
    --- Resources for Group sterg ---
    Resource: testip-res
    Type: SUNW.LogicalHostname:2
    Type_version: 2
    Group: sterg
    R_description:
    Resource_project_name: default
    Enabled{server2}: True
    Enabled{server1}: True
    Monitored{server2}: True
    Monitored{server1}: True
    Resource: apache-res
    Type: SUNW.gds:6
    Type_version: 6
    Group: sterg
    R_description:
    Resource_project_name: default
    Enabled{server2}: True
    Enabled{server1}: True
    Monitored{server2}: True
    Monitored{server1}: True

    I've checked with my networking colleagues and this is what is expected. The logical host should end up on the NIC with the least number of logical IP addresses plumbed.
    Is it actually causing any problem? If so, please post back and I can take it up with my colleagues.
    The commands to show resource dependency would be:
    clrs show -v <resource_name> | grep -i depend
    For resource groups you can get dependencies and affinities using:
    clrg show -v <rg_name> | egrep -e "affinity|depend"
    Tim
    ---
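
    (As a concrete illustration with the resource names from this thread; the clrs set line is only a sketch of how such a dependency is normally recorded, in case it turns out to be missing.)
    # show the dependencies of the apache resource
    clrs show -v apache-res | grep -i depend
    # and the dependencies/affinities of the whole group
    clrg show -v sterg | egrep -e "affinity|depend"
    # a dependency on the logical host is normally set like this
    clrs set -p Resource_dependencies=testip-res apache-res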

  • Failing to create HA nfs storage on a shared 3310 HW Raid cluster 3.2

    Hi,
    I'm working on testing clustering on a couple of V240s, running identical Solaris 10 10/08 and Sun Cluster 3.2. In trying things out, I may have messed up the cluster, so I may want to back the cluster out and start over. Is that possible, or do I need to install Solaris fresh?
    But first, the problem. I have the array connected to both machines and working. I mount one LUN on /global/nfs using the device /dev/did/dsk/d4s0. Then I ran the commands:
    # clrt register SUNW.nfs
    # clrt register SUNW.HAStoragePlus
    # clrt list -v
    Resource Type Node List
    SUNW.LogicalHostname:2 <All>
    SUNW.SharedAddress:2 <All>
    SUNW.nfs:3.2 <All>
    SUNW.HAStoragePlus:6 <All>
    # clrg create -n stnv240a,stnv240b -p PathPrefix=/global/nfs/admin nfs-rg
    I enabled them just now so:
    # clrg status
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    nfs-rg stnv240a No Online
    stnv240b No Offline
    Then:
    # clrslh create -g nfs-rg cluster
    # clrslh status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    cluster stnv240a Online Online - LogicalHostname online.
    stnv240b Offline Offline
    I'm guessing that 'b' is offline because it's the backup.
    Finally, I get:
    # clrs create -t HAStoragePlus -g nfs-rg -p AffinityOn=true -p FilesystemMountPoints=/global/nfs nfs-stor
    clrs: stnv240b - Invalid global device path /dev/did/dsk/d4s0 detected.
    clrs: (C189917) VALIDATE on resource nfs-stor, resource group nfs-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource nfs-stor in resource group nfs-rg on node stnv240b failed.
    clrs: (C891200) Failed to create resource "nfs-stor".
    On stnv240a:
    # df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/did/dsk/d4s0 49G 20G 29G 41% /global/nfs
    and on stnv240b:
    # df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/did/dsk/d4s0 49G 20G 29G 41% /global/nfs
    Any help? Like I said, this is a test setup. I've started over once. So I can start over if I did something irreversible.

    I still have the issue. I reinstalled from scratch and installed the cluster. Then I did the following:
    $ vi /etc/default/nfs
    GRACE_PERIOD=10
    $ ls /global//nfs
    $ mount /global/nfs
    $ df -h
    Filesystem size used avail capacity Mounted on
    /dev/global/dsk/d4s0 49G 20G 29G 41% /global/nfs
    $ clrt register SUNW.nfs
    $ clrt register SUNW.HAStoragePlus
    $ clrt list -v
    Resource Type Node List
    SUNW.LogicalHostname:2 <All>
    SUNW.SharedAddress:2 <All>
    SUNW.nfs:3.2 <All>
    SUNW.HAStoragePlus:6 <All>
    $ clrg create -n stnv240a,stnv240b -p PathPrefix=/global/nfs/admin nfs-rg
    $ clrslh create -g nfs-rg patience
    clrslh: IP Address 204.155.141.146 is already plumbed at host: stnv240b
    $ grep cluster /etc/hosts
    204.155.141.140 stnv240a stnv240a.mns.qintra.com # global - cluster
    204.155.141.141 cluster cluster.mns.qintra.com # cluster virtual address
    204.155.141.146 stnv240b stnv240b.mns.qintra.com patience patience.mns.qintra.com # global v240 - cluster test
    $ clrslh create -g nfs-rg cluster
    $ clrs create -t HAStoragePlus -g nfs-rg -p AffinityOn=true -p FilesystemMountPoints=/global/nfs nfs-stor
    clrs: stnv240b - Failed to analyze the device special file associated with file system mount point /global/nfs: No such file or directory.
    clrs: (C189917) VALIDATE on resource nfs-stor, resource group nfs-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource nfs-stor in resource group nfs-rg on node stnv240b failed.
    clrs: (C891200) Failed to create resource "nfs-stor".
    Now, on the second machine (stnv240b), /dev/global does not exist, but the file system mounts anyway. I guess that's cluster magic?
    $ cat /etc/vfstab
    /dev/global/dsk/d4s0 /dev/global/dsk/d4s0 /global/nfs ufs 1 yes global
    $ df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/global/dsk/d4s0 49G 20G 29G 41% /global/nfs
    $ ls -l /dev/global
    /dev/global: No such file or directory
    I followed the other thread and ran devfsadm and scgdevs.
    One other thing I notice: both nodes mount my global devices filesystem on node@1:
    /dev/md/dsk/d6 723M 3.5M 662M 1% /global/.devices/node@1
    /dev/md/dsk/d6 723M 3.5M 662M 1% /global/.devices/node@1
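
    (Both validation failures above point at the device namespace on stnv240b. A minimal sketch of how that is usually rebuilt and checked on that node; the commands are standard Sun Cluster 3.2 tools and d4 is simply the DID number from this thread.)
    # rebuild the device and DID namespace on the node that complains
    devfsadm
    scgdevs
    # on 3.2 the equivalent is also available as:
    cldevice populate
    # then confirm both nodes agree on the DID mapping of the shared LUN
    cldevice show d4
    ls -l /dev/global/dsk/d4s0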

  • Cluster 3.0 hafoip_stop problem

    I have two Netra 1405Ts running Solaris 9 and Sun Cluster 3.0.
    Most of my resources switch over (either manually or automatically) with no
    issues; however, it looks like my hafoip is getting hung up somehow, causing
    the host that is moving to standby to reboot.
    Any help/ideas would be great!
    log sample:
    Nov 26 14:11:34 host last message repeated 107 times
    Nov 26 14:11:42 host Cluster.PNM: PNM system error: SIOCLIFREMOVEIF of qfe2:0: Address family not supported by protocol family
    Nov 26 14:11:42 host SC[SUNW.LogicalHostname,psql-harg,theq,hafoip_stop]: NAFO logical interface configuration operation failed with <-1>.
    Nov 26 14:11:42 host Cluster.RGM.rgmd: Method <hafoip_stop> failed on resource <hostname> in resource group <psql-harg>, exit code <1>
    Nov 26 14:11:42 host Cluster.RGM.rgmd: fatal: Aborting this node because method <hafoip_stop> failed on resource <hostname> and Failover_mode is set to HARD
    Thanks!

    Sounds like a misconfiguration. If you have a cluster of hosts: h1 and h2 and a logical host lh1, then the /etc/hosts file on both nodes should be something like:
    192.168.0.1 h1
    192.168.0.2 h2
    192.168.0.3 lh1
    (or some other suitable subnet).
    You would then register your resources as follows:
    scrgadm -a -g dummy-rg
    scrgadm -a -L -g dummy-rg -l lh1
    etc. This would create a movable logical host for you. If you configure it with the hostname of h1 or h2 itself, all hell will break loose.
    Tim
    ---
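
    (To complete the example above, the group created with those scrgadm commands would then be brought online and exercised with the SC 3.0 scswitch command; dummy-rg, lh1 and h2 are the placeholder names from the reply.)
    # bring the resource group (and its logical host lh1) online for the first time
    scswitch -Z -g dummy-rg
    # and move it to the other node to test that lh1 fails over
    scswitch -z -g dummy-rg -h h2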

  • I want to know how to setup the logicaladdress (cluster+HA+oracle)

    I want to know how to setup the logicaladdress (cluster+HA+oracle)

    Please have a look at:
    http://docs.sun.com/app/docs/doc/819-0703/6n343k6g0?q=SUNW.LogicalHostname&a=view
    for how to add a Logical Hostname resource to a resource group (in this case the RG where you configured HA Oracle).
    There is nothing special about HA Oracle in that regard.
    If you configure your oracle_listener resource for that specific logical hostname resource, then you should configure a dependency from the oracle_listener to the LH.
    By default the RG will have the property Implicit_network_dependencies set to true, which should be enough.
    If this property is false, I recommend adding the LH resource name to the Resource_dependencies property of the oracle_listener resource.
    For a general overview I recommend reading:
    http://docs.sun.com/app/docs/doc/819-0703/
    Greets
    Thorsten
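
    (A minimal sketch of those two steps; ora-rg, ora-lh, ora-lh-rs and ora-lsnr-rs are made-up placeholder names, not from the original post.)
    # add a logical hostname resource to the existing HA Oracle resource group
    clreslogicalhostname create -g ora-rg -h ora-lh ora-lh-rs
    # only needed if Implicit_network_dependencies is false on the RG:
    clresource set -p Resource_dependencies=ora-lh-rs ora-lsnr-rs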

  • DSEE 6.0  won't start under Cluster 3.2u2

    DS v6.0 was originally a native package install; I then installed DS v6.3.1 with patch 125278-08, which was my mistake as I describe below, and after going back to DS v6.0 I still have the same error.
    I think the problem is with the cluster agent and its binaries.
    Solaris 10 x86
    bash-3.00# hostname
    solnodo1
    bash-3.00# clrg status
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    ldap-rg solnodo1 No Offline
    solnodo2 No Offline
    bash-3.00# clrs status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ldap-lh-rs solnodo1 Offline Offline - LogicalHostname offline.
    solnodo2 Offline Offline - LogicalHostname offline.
    ldap-hastp-rs solnodo1 Offline Offline
    solnodo2 Offline Offline
    ds--opt-iplanet-ds-63-instances-PCUS-LDAP solnodo1 Offline Offline - Successfully stopped Sun Java(TM) System Directory Server.
    solnodo2 Offline Offline - Successfully stopped Sun Java(TM) System Directory Server.
    bash-3.00# clrt list
    SUNW.LogicalHostname:3
    SUNW.SharedAddress:2
    SUNW.HAStoragePlus:6
    SUNW.ds6ldap
    bash-3.00# df -h
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c1t0d0s0 7.1G 5.4G 1.6G 77% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.0G 1.2M 1.0G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1
    7.1G 5.4G 1.6G 77% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 1.0G 44K 1.0G 1% /tmp
    swap 1.0G 32K 1.0G 1% /var/run
    /vol/dev/dsk/c0t0d0/sc_32u2_dvd
    1013M 1013M 0K 100% /cdrom/sc_32u2_dvd
    /dev/did/dsk/d2s5 486M 5.5M 432M 2% /global/.devices/node@1
    /hgfs 16G 4.0M 16G 1% /hgfs
    /dev/did/dsk/d8s5 486M 5.5M 432M 2% /global/.devices/node@2
    bash-3.00# /opt/SUNWdsee/ds6/bin/dsadm -V
    [dsadm]
    dsadm : 6.0 B2007.025.1834
    [slapd 32-bit]
    Sun Microsystems, Inc.
    Sun-Java(tm)-System-Directory/6.0 B2007.025.1834 32-bit
    ns-slapd : 6.0 B2007.025.1834
    Slapd Library : 6.0 B2007.025.1834
    Front-End Library : 6.0 B2007.025.1834
    [slapd 64-bit]
    Sun Microsystems, Inc.
    Sun-Java(tm)-System-Directory/6.0 B2007.025.1834 64-bit
    ns-slapd : 6.0 B2007.025.1834
    Slapd Library : 6.0 B2007.025.1834
    Front-End Library : 6.0 B2007.025.1834
    bash-3.00# pkginfo -l SUNWldap-directory-ha
    PKGINST: SUNWldap-directory-ha
    NAME: Sun Java(TM) System Directory Server Component for Cluster
    CATEGORY: system
    ARCH: i386
    VERSION: 6.0,REV=2006.07.07
    BASEDIR: /opt/SUNWdsee
    VENDOR: Sun Microsystems, Inc.
    DESC: Sun Java(TM) System Directory Server Component for Cluster
    PSTAMP: carabosse20060707032224
    INSTDATE: Mar 25 2010 07:27
    HOTLINE: Please contact your local service provider
    STATUS: completely installed
    FILES: 13 installed pathnames
    3 shared pathnames
    4 directories
    8 executables
    998 blocks used (approx)
    bash-3.00# clrg online ldap-rg
    bash-3.00# clrs status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ldap-lh-rs solnodo1 Online Online - LogicalHostname online.
    solnodo2 Offline Offline - LogicalHostname offline.
    ldap-hastp-rs solnodo1 Online Online
    solnodo2 Offline Offline
    ds--opt-iplanet-ds-63-instances-PCUS-LDAP solnodo1 Starting Online - Completed successfully.
    solnodo2 Offline Offline - Successfully stopped Sun Java(TM) System Directory Server.
    bash-3.00# tail -f /var/adm/messages
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <ldap-lh-rs>, resource group <ldap-rg>, node <solnodo2>, time used: 0% of timeout <300 seconds>
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <ldap-lh-rs>, resource group <ldap-rg>, node <solnodo2>, timeout <300> seconds
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 944595 daemon.notice] 39 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_stop>:tag=<ldap-rg.ldap-lh-rs.1>: Calling security_clnt_connect(..., host=<solnodo2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar 25 09:21:30 solnodo2 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.010.010.050:0, remote = 000.000.000.000:0, start = -2, end = 6
    Mar 25 09:21:30 solnodo2 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_stop> completed successfully for resource <ldap-lh-rs>, resource group <ldap-rg>, node <solnodo2>, time used: 0% of timeout <300 seconds>
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_postnet_stop> for resource <ldap-hastp-rs>, resource group <ldap-rg>, node <solnodo2>, timeout <1800> seconds
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 944595 daemon.notice] 39 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hastorageplus/hastorageplus_postnet_stop>:tag=<ldap-rg.ldap-hastp-rs.11>: Calling security_clnt_connect(..., host=<solnodo2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar 25 09:21:30 solnodo2 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hastorageplus_postnet_stop> completed successfully for resource <ldap-hastp-rs>, resource group <ldap-rg>, node <solnodo2>, time used: 0% of timeout <1800 seconds>
    Mar 25 09:21:30 solnodo2 Cluster.Framework: [ID 801593 daemon.notice] stdout: no longer primary for ldap-ds
    Can somebody help me?
    Edited by: bootbk on Mar 26, 2010 7:26 AM

  • Sun Cluster: Graph resources and resource groups dependencies

    Hi,
    Is there anything like the scfdot (http://opensolaris.org/os/community/smf/scfdot/) to graph resource dependencies in Sun Cluster?
    Regards,
    Ciro

    Solaris 10 8/07 s10s_u4wos_12b SPARC
    + scha_resource_get -O TYPE -R lh-billapp-rs
    + echo SUNW.LogicalHostname:2
    + [ -z sa-billapp-rs ]
    + NETRS=sa-billapp-rs lh-billapp-rs
    + [ true = true -a ! -z sa-billapp-rs lh-billapp-rs ]
    cluster2dot.ksh[193]: test: syntax error
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES -R sa-billapp-rs
    DEP=
    + [ true = true -a ! -z sa-billapp-rs lh-billapp-rs ]
    cluster2dot.ksh[193]: test: syntax error
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES -R lh-billapp-rs
    DEP=
    + [   !=   ]
    + echo \t\t"lh-billapp-rs";
    + 1>> /tmp/clu-dom3-resources.dot
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES_WEAK -R lh-billapp-rs
    DEP_WEAK=
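
    (The trace fails because $NETRS expands to two resource names, "sa-billapp-rs lh-billapp-rs", and is used unquoted inside test/[ ], which is what produces "cluster2dot.ksh[193]: test: syntax error". The actual script line is not shown here, so the following is only a sketch of the usual fix, with guessed variable names:)
    # unquoted: [ true = true -a ! -z $NETRS ]  -> word-splits and breaks test
    # quoted and using -n, the expansion stays a single argument:
    if [ "$DO_NET" = "true" ] && [ -n "$NETRS" ]; then
        for rs in $NETRS; do
            scha_resource_get -O RESOURCE_DEPENDENCIES -R "$rs" | tr -s '\n' ' '
        done
    fi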

  • Errors when adding resources to rg in zone cluster

    Hi guys,
    I managed to create and bring up a zone cluster, create an RG and add an HAStoragePlus resource (zpool), but I am getting errors when I try to add an LH resource. Here's the output I find relevant:
    root@node1:~# zpool list
    NAME           SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    rpool         24.6G  10.0G  14.6G  40%  1.00x  ONLINE  -
    zclusterpool   187M  98.5K   187M   0%  1.00x  ONLINE  -
    root@node1:~# clzonecluster show ztestcluster
    === Zone Clusters ===
    Zone Cluster Name:                              ztestcluster
      zonename:                                        ztestcluster
      zonepath:                                        /zcluster/ztestcluster
      autoboot:                                        TRUE
      brand:                                           solaris
      bootargs:                                        <NULL>
      pool:                                            <NULL>
      limitpriv:                                       <NULL>
      scheduling-class:                                <NULL>
      ip-type:                                         shared
      enable_priv_net:                                 TRUE
      resource_security:                               SECURE
      --- Solaris Resources for ztestcluster ---
      Resource Name:                                net
        address:                                       192.168.10.55
        physical:                                      auto
      Resource Name:                                dataset
        name:                                          zclusterpool
      --- Zone Cluster Nodes for ztestcluster ---
      Node Name:                                    node2
        physical-host:                                 node2
        hostname:                                      zclnode2
        --- Solaris Resources for node2 ---
      Node Name:                                    node1
        physical-host:                                 node1
        hostname:                                      zclnode1
        --- Solaris Resources for node1 ---
    Now I want to add an LH (zclusterip, 192.168.10.55) to the resource group named z-test-rg.
    root@zclnode2:~# cat /etc/hosts
    # Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    # Internet host table
    ::1 localhost
    127.0.0.1 localhost loghost
    #zone cluster
    192.168.10.51   zclnode1
    192.168.10.52   zclnode2
    192.168.10.55   zclusterip
    root@zclnode2:~# cluster status
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      State
    z-test-rg        zclnode1        No             Online
                     zclnode2        No             Offline
    === Cluster Resources ===
    Resource Name        Node Name      State       Status Message
    zclusterpool-rs      zclnode1       Online      Online
                         zclnode2       Offline     Offline
    root@zclnode2:~# clrg show
    === Resource Groups and Resources ===
    Resource Group:                                 z-test-rg
      RG_description:                                  <NULL>
      RG_mode:                                         Failover
      RG_state:                                        Managed
      Failback:                                        False
      Nodelist:                                        zclnode1 zclnode2
      --- Resources for Group z-test-rg ---
      Resource:                                     zclusterpool-rs
        Type:                                          SUNW.HAStoragePlus:10
        Type_version:                                  10
        Group:                                         z-test-rg
        R_description:
        Resource_project_name:                         default
        Enabled{zclnode1}:                             True
        Enabled{zclnode2}:                             True
        Monitored{zclnode1}:                           True
        Monitored{zclnode2}:                           True
    The error, for lh resource:
    root@zclnode2:~# clrslh create -g z-test-rg -h zclusterip zclusterip-rs
    clrslh:  No IPMP group on zclnode1 matches prefix and IP version for zclusterip
    Any ideas?
    Much appreciated!

    Hello,
    First of all, I found a mistake in my previous config: instead of an IPMP group, a plain NIC had been added to the zone cluster. I fixed that (I created the IPMP group zclusteripmp0 on top of net11):
    root@node1:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    clprivnet0        ip         ok           --         --
       clprivnet0/?   static     ok           --         172.16.3.66/26
       clprivnet0/?   static     ok           --         172.16.2.2/24
    lo0               loopback   ok           --         --
       lo0/v4         static     ok           --         127.0.0.1/8
       lo0/v6         static     ok           --         ::1/128
       lo0/zoneadmd-v4 static    ok           --         127.0.0.1/8
       lo0/zoneadmd-v6 static    ok           --         ::1/128
    net0              ip         ok           sc_ipmp0   --
    net1              ip         ok           sc_ipmp1   --
    net2              ip         ok           --         --
       net2/?         static     ok           --         172.16.0.66/26
    net3              ip         ok           --         --
       net3/?         static     ok           --         172.16.0.130/26
    net4              ip         ok           sc_ipmp2   --
    net5              ip         ok           sc_ipmp2   --
    net11             ip         ok           zclusteripmp0 --
    sc_ipmp0          ipmp       ok           --         --
       sc_ipmp0/out   dhcp       ok           --         192.168.1.3/24
    sc_ipmp1          ipmp       ok           --         --
       sc_ipmp1/static1 static   ok           --         192.168.10.11/24
    sc_ipmp2          ipmp       ok           --         --
       sc_ipmp2/static1 static   ok           --         192.168.30.11/24
       sc_ipmp2/static2 static   ok           --         192.168.30.12/24
    zclusteripmp0     ipmp       ok           --         --
       zclusteripmp0/zoneadmd-v4 static ok    --         192.168.10.51/24
    root@node1:~# clzonecluster export ztestcluster
    create -b
    set zonepath=/zcluster/ztestcluster
    set brand=solaris
    set autoboot=true
    set enable_priv_net=true
    set ip-type=shared
    add net
    set address=192.168.10.55
    set physical=auto
    end
    add dataset
    set name=zclusterpool
    end
    add attr
    set name=cluster
    set type=boolean
    set value=true
    end
    add node
    set physical-host=node2
    set hostname=zclnode2
    add net
    set address=192.168.10.52
    set physical=zclusteripmp0
    end
    end
    add node
    set physical-host=node1
    set hostname=zclnode1
    add net
    set address=192.168.10.51
    set physical=zclusteripmp0
    end
    end
    And then I tried again to add the LH, but I get the same error:
    root@node2:~# zlogin -C ztestcluster
    [Connected to zone 'ztestcluster' console]
    zclnode2 console login: root
    Password:
    Last login: Mon Jan 19 15:28:28 on console
    Jan 19 19:17:24 zclnode2 login: ROOT LOGIN /dev/console
    Oracle Corporation      SunOS 5.11      11.2    June 2014
    root@zclnode2:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    clprivnet0        ip         ok           --         --
       clprivnet0/?   inherited  ok           --         172.16.3.65/26
    lo0               loopback   ok           --         --
       lo0/?          inherited  ok           --         127.0.0.1/8
       lo0/?          inherited  ok           --         ::1/128
    zclusteripmp0     ipmp       ok           --         --
       zclusteripmp0/? inherited ok           --         192.168.10.52/24
    root@zclnode2:~# cluster status
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      State
    z-test-rg        zclnode1        No             Offline
                     zclnode2        No             Online
    === Cluster Resources ===
    Resource Name        Node Name      State       Status Message
    zclusterpool-rs      zclnode1       Offline     Offline
                         zclnode2       Online      Online
    root@zclnode2:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    clprivnet0        ip         ok           --         --
       clprivnet0/?   inherited  ok           --         172.16.3.65/26
    lo0               loopback   ok           --         --
       lo0/?          inherited  ok           --         127.0.0.1/8
       lo0/?          inherited  ok           --         ::1/128
    zclusteripmp0     ipmp       ok           --         --
       zclusteripmp0/? inherited ok           --         192.168.10.52/24
    root@zclnode2:~# clreslogicalhostname create -g z-test-rg -h zclusterip zclusterip-rs
    clreslogicalhostname:  No IPMP group on zclnode1 matches prefix and IP version for zclusterip
    root@zclnode2:~#
    To answer your first question, yes - all global nodes and zone cluster nodes have entries for zclusterip:
    root@zclnode2:~# cat /etc/hosts
    # Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    # Internet host table
    ::1 localhost
    127.0.0.1 localhost loghost
    #zone cluster
    192.168.10.51   zclnode1
    192.168.10.52   zclnode2
    192.168.10.55   zclusterip
    root@zclnode2:~# ping zclnode1
    zclnode1 is alive
    When I tried the command you mentioned, it first gave me an error (there was a space between the interfaces); then I changed the RG name to fit mine (z-test-rg) and it (partially) worked:
    root@zclnode2:~# clrs create -g z-test-rg -t LogicalHostname -p Netiflist=sc_ipmp0@1,sc_ipmp0@2 -p Hostnamelist=zclusterip zclusterip-rs
    root@zclnode2:~# clrg show
    === Resource Groups and Resources ===
    Resource Group:                                 z-test-rg
      RG_description:                                  <NULL>
      RG_mode:                                         Failover
      RG_state:                                        Managed
      Failback:                                        False
      Nodelist:                                        zclnode1 zclnode2
      --- Resources for Group z-test-rg ---
      Resource:                                     zclusterpool-rs
        Type:                                          SUNW.HAStoragePlus:10
        Type_version:                                  10
        Group:                                         z-test-rg
        R_description:
        Resource_project_name:                         default
        Enabled{zclnode1}:                             True
        Enabled{zclnode2}:                             True
        Monitored{zclnode1}:                           True
        Monitored{zclnode2}:                           True
      Resource:                                     zclusterip-rs
        Type:                                          SUNW.LogicalHostname:5
        Type_version:                                  5
        Group:                                         z-test-rg
        R_description:
        Resource_project_name:                         default
        Enabled{zclnode1}:                             True
        Enabled{zclnode2}:                             True
        Monitored{zclnode1}:                           True
        Monitored{zclnode2}:                           True
    root@zclnode2:~# cluster status
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      State
    z-test-rg        zclnode1        No             Offline
                     zclnode2        No             Online
    === Cluster Resources ===
    Resource Name        Node Name      State       Status Message
    zclusterip-rs        zclnode1       Offline     Offline
                         zclnode2       Online      Online - LogicalHostname online.
    zclusterpool-rs      zclnode1       Offline     Offline
                         zclnode2       Online      Online
    root@zclnode2:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    clprivnet0        ip         ok           --         --
       clprivnet0/?   inherited  ok           --         172.16.3.65/26
    lo0               loopback   ok           --         --
       lo0/?          inherited  ok           --         127.0.0.1/8
       lo0/?          inherited  ok           --         ::1/128
    sc_ipmp0          ipmp       ok           --         --
       sc_ipmp0/?     inherited  ok           --         192.168.10.55/24
    zclusteripmp0     ipmp       ok           --         --
       zclusteripmp0/? inherited ok           --         192.168.10.52/24
    root@zclnode2:~# ping zclusterip
    zclusterip is alive
    root@zclnode2:~# clrg switch -n zclnode1 z-test-rg
    root@zclnode2:~# cluster status
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      State
    z-test-rg        zclnode1        No             Online
                     zclnode2        No             Offline
    === Cluster Resources ===
    Resource Name        Node Name      State       Status Message
    zclusterip-rs        zclnode1       Online      Online - LogicalHostname online.
                         zclnode2       Offline     Offline - LogicalHostname offline.
    zclusterpool-rs      zclnode1       Online      Online
                         zclnode2       Offline     Offline
    root@zclnode2:~# ping zclusterip
    no answer from zclusterip
    root@zclnode2:~# ping zclusterip
    no answer from zclusterip
    root@zclnode2:~#
    So the LH was added and the RG can switch over to the other node, but zclusterip is pingable only from the zone-cluster node that holds the RG; I cannot ping zclusterip from the zone-cluster node that does not hold the RG, nor from any global-cluster node (node1, node2)...
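
    (One detail worth checking, offered only as a guess from the ipadm output above: the logical address 192.168.10.55/24 was plumbed on sc_ipmp0, whose own address sits on 192.168.1.0/24, while the 192.168.10.0/24 addresses live on sc_ipmp1/zclusteripmp0. If that is the mismatch, a sketch of recreating the resource against the IPMP group that actually hosts that subnet would look like the following; the group name and the node IDs 1 and 2 are assumptions.)
    # remove the resource that landed on the wrong IPMP group
    clrs disable zclusterip-rs
    clrs delete zclusterip-rs
    # recreate it, pinning it to the IPMP group on the 192.168.10.0/24 subnet
    clrs create -g z-test-rg -t LogicalHostname \
        -p Netiflist=sc_ipmp1@1,sc_ipmp1@2 \
        -p HostnameList=zclusterip zclusterip-rs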

  • Information on the Virtual IP Address of cluster resource group

    Hi,
    I would like to know how we can find the virtual IP address of a resource group, i.e. the IP address associated with the SUNW.LogicalHostname resource of the resource group.
    Is it always present in the output of the command 'ifconfig -a'?
    What is the difference between IPMP and the virtual IP address of a resource group? Can these IP addresses be common / the same?
    Thanks,
    Chaitanya

    Chaitanya,
    There seems to be a little confusion in your question so let me try and explain.
    Resource groups do not necessarily have a logical (virtual) IP address. They will only have a logical IP address if you configure one by adding a SUNW.LogicalHostname resource to the resource group. When you create that resource, you give it the hostname of the IP address you want it to control. This hostname should be in the name service, i.e. /etc/hosts, NIS, DNS or LDAP.
    When you have added such a resource to a resource group (RG), bringing the RG online will result in the logical IP address being plumbed up on one of the NICs that form the IPMP group hosting the relevant subnet. So, if you have two NICs (ce0, ce1) in an IPMP group that supports the 129.156.10.x/24 subnet and you added a logical IP address 129.156.10.42 to RG foo-rg, then bringing foo-rg online will result in 129.156.10.42 being plumbed on either ce0 or ce1. ifconfig -a will then show this in the output, usually as ce0:1 or ce0:2 or ce1:3, etc.
    An IPMP group is a Solaris feature that supports IP multi-pathing. It uses either ICMP probes or link detection to determine the health of a network interface. In the event that a NIC fails, the IP addresses that are hosted on that NIC are transferred to the remaining NIC. So, if your host has an IP of 129.156.10.41 plumbed on ce0 and it fails, it will be migrated to ce1.
    That's a very short description of a much more detailed topic. Please have a look at the relevant sections in the Solaris and Solaris Cluster documentation on docs.sun.com.
    Hope that helps,
    Tim
    ---
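
    (A short sketch of what is described above, using the placeholder names foo-rg and 129.156.10.42 from the reply; the resource names foo-lh and foo-lh-rs are additional placeholders.)
    # create the logical hostname resource in the RG and bring the RG online
    clreslogicalhostname create -g foo-rg -h foo-lh foo-lh-rs
    clrg online -M foo-rg
    # the logical address now appears on one NIC of the IPMP group
    ifconfig -a | grep 129.156.10.42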

  • [Patch Info] TRACKING BUG FOR CUMULATIVE MLR#6 ON TOP OF BPEL PM 10.1.3.3.1

    This is the cumulative patch for the recently released BPEL PM 10.1.3.3.1.
    Below is the patch list contained in the readme.txt.
    # WARNING: Failure to carefully read and understand these requirements may
    # result in your applying a patch that can cause your Oracle Server to
    # malfunction, including interruption of service and/or loss of data.
    # If you do not meet all of the following requirements, please log an
    # iTAR, so that an Oracle Support Analyst may review your situation. The
    # Oracle analyst will help you determine if this patch is suitable for you
    # to apply to your system. We recommend that you avoid applying any
    # temporary patch unless directed by an Oracle Support Analyst who has
    # reviewed your system and determined that it is applicable.
    # Requirements:
    # - You must have located this patch via a Bug Database entry
    # and have the exact symptoms described in the bug entry.
    # - Your system configuration (Oracle Server version and patch
    # level, OS Version) must exactly match those in the bug
    # database entry - You must have NO OTHER PATCHES installed on
    # your Oracle Server since the latest patch set (or base release
    # x.y.z if you have no patch sets installed).
    # - [Oracle 9.0.4.1 & above] You must have Perl 5.00503 (or later)
    # installed under the ORACLE_HOME, or elsewhere within the host
    # environment.
    # Refer to the following link for details on Perl and OPatch:
    # http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=189489.1
    # If you do NOT meet these requirements, or are not certain that you meet
    # these requirements, please log an iTAR requesting assistance with this
    # patch and Support will make a determination about whether you should
    # apply this patch.
    # 10.1.3.3.1 Bundle Patch 6823628
    # DATE: March 14, 2008
    # Platform Patch for : Generic
    # Product Version # : 10.1.3.3.1
    # Product Patched : Oracle(R) SOA
    # Bugs Fixed by 10.1.3.3.1 Initial patch 6492514 :
    # Bug 5473225 - PATCH01GENESIS HOT UNABLE TO CATCH AN EXCEPTION DURING A
    # TRANSFORM
    # Bug 5699423 - PARTNERLINK PROPERTY THAT SET BPELXPROPERTY FUNCTION DOESN'T
    # WORK
    # Bug 5848272 - STATEFUL WEBSERVICES DEMO ON OTN DOES NOT WORK 10.1.3.1
    # Bug 5872799 - ANT DEPLOY BPEL TEST FAILS/RUNS ON DEFAULT DOMAIN NOT
    # SPECIFIED TARGET DOMAIN
    # Bug 5883401 - ALLOW A WAY TO CREATE EMPTY NODES - AND USE FOR REQUIRED
    # NODES
    # Bug 5919412 - SAMPLE DEMO BPEL PROCESSES MIMESERVICE MIMEREQUESTER AXIS
    # JAVA EXAMPLE ERROR
    # Bug 5924483 - ESB SHOULD SUPPORT SOAP EDNPOINT LOCATION DYNAMIC UDDI LOOKUP
    # Bug 5926809 - ORAPARSEESCAPEDXML XPATH EXPRESSION FAILED TO EXECUTE
    # FOTY0001 TYPE ERROR
    # Bug 5937320 - STRANGE BEHAVIOUR CALLING FROM BPEL TO BPEL GETTING
    # NULLPOINTEREXCEPTION.
    # Bug 5944641 - BPA BLUEPRINT NOT AVAIALBLE IN JDEVELOPER
    # Bug 5945059 - JAVA.LANG.NULLPOINTEREXCEPTION SENDING EMAILS WITH PAYLOADS
    # LARGER THAT 1MB
    # Bug 5962677 - WS RESPONSE IS EMPTY SOAP BODY IN ONE-WAY CALL
    # Bug 5963425 - WHEN THE OUTCOMES FOR A HT CHANGED & IMPORTED - UPDATE
    # CONNECTION ROLES IN BPEL
    # Bug 5964097 - AQ ADAPTER DEPLOYMENT CAUSES OPMN TO PERFORM A FORCEFUL
    # SHUTDOWN IN SOA
    # Bug 5971534 - CANNOT GRANT USER TASK VIEWS TO GROUPS, ONLY TO USERS.
    # Bug 5989367 - REFER TO SR 6252219.993 BPEL 10.1.3 ONLY COPIES IN ASSIGN,
    # IN 10.1.2 IT CREATES
    # Bug 5989527 - ENHANCEMENT WARNING SHOULD BE GIVEN UPON UPLOAD IF BPEL
    # PROCESS IS OPEN IN ARIS
    # Bug 5997936 - ESB FAULT DOES NOT GET PROPAGATED TO BPEL
    # Bug 6000575 - PERF NEED ESB PURGE SCRIPT TO PURGE BY DATE AND PROCESS
    # Bug 6001796 - POSTING OF DATE RECEIVED FROM XML GATEWAY TO BPEL FAILED IN
    # ESB
    # Bug 6005407 - BPEL PROCESS DOESN'T PROPOGATE FAULT THROWN BY BPEL
    # SUB-PROCESS
    # Bug 6017846 - MIMETYPE OF EMAIL NOTIFICATION IS NOT SET THROUGH HUMAN TASK
    # Bug 6027734 - DECISION SERVICE IMPORT - LOCATING DECISION SERVICE IN .DECS
    # FILE IMPROPER
    # Bug 6028985 - EXCEEDED MAXIMUM NUMBER OF SUBSCRIBERS FOR QUEUE
    # ORAESB.ESB_CONTROL
    # Bug 6041508 - CREATING/UPDATING DVM CAUSE EXCEPTION
    # Bug 6053708 - FTP ADAPTER DOES NOT SUPPORT ENCRYPTED PASSWORD IN
    # OC4J-RA.XML
    # Bug 6054034 - INDEX4,INDEX5 AND INDEX6 CANNOT BE USED IN BPEL CONSOLE
    # Bug 6068801 - BACKPORT OF BPEL ON WEBLOGIC - VERSION 10.1.3.3
    # Bug 6070991 - HT EXPORT DOES NOT EXPORT PARAMETERS, ALLOW PARTICIPANTS TO
    # INVITE OTHERS
    # Bug 6071001 - WSIF HTTP BINDING NOT WORKING FROM ESB
    # Bug 6073311 - STRESS SCOPE NOT FOUND ON CALLBACK - WRONG (DUPE)
    # SUBSCRIPTION IN TABLE
    # Bug 6081070 - JMS ADAPTER REJECTION HANDLER CREATE 0 BYTE FILES
    # Bug 6083419 - DECISION SERVICE SCOPE NEED TO HAVE A SPECIAL INDICATOR
    # Bug 6085799 - HUMAN TASK ADDED IN SCOPE IN JDEV IS NOT UPDATED TO BPA
    # SERVER
    # Bug 6085933 - EXPORT AND EXPLORE SHOULD USE USER LANGUAGE AND NOT ENGLISH
    # ALWAYS
    # Bug 6086281 - STRING INDEX OUT OF RANGE ERROR FOR COBOL COPYBOOK WITH PIC
    # CLAUSE HAVING S
    # Bug 6086453 - DOMAINS CREATED IN A CLUSTER GETS NOT PROPAGATED TO NEW OR
    # EXISTING NODES
    # Bug 6087484 - MULTIPLE HEADER SETTING CAUSES ESB EXCEPTION
    # Bug 6087645 - ESB SHOULD ALLOW USER PICK RUNTIME PROTOCOL (HTTP/HTTPS)
    # Bug 6110231 - TRANSLATION NOT BASED ON MQ CCSID CHARSET
    # Bug 6120226 - BPEL IS NOT SETTING THE APPS CONTEXT CORRECTLY
    # Bug 6120323 - COMPLETIONPERSISTPOLICY ON DOMAIN LEVEL HAS DISAPPEARED
    # Bug 6125184 - ESB JMS SESSION ROLLBACK ORACLE.JMS.AQJMSEXCEPTION
    # Bug 6127824 - [AIA2.0] CURRENT XREF IMPLEMENTATION IS MISSING REQUIRED
    # INDEXES ON XREF SCHEMA
    # Bug 6128247 - HTTPCONNECTOR POST() METHOD SHOULD RAISE EXCEPTION FOR ALL
    # STATUS CODES EXCEPT 2
    # Bug 6131159 - ENABLE USERS TO CHOOSE XSD WHEN CREATING A BPEL PROCESS FROM
    # BLUE PRINT
    # Bug 6132141 - PROCESS_DEFAULT TABLE STILL CONTAINS INFORMATION FROM
    # UNDEPLOYED PROCESSES
    # Bug 6133190 - ENABLING ESB CONSOLE HTTP/S IS MAKING THE CONSOLE TO COME UP
    # BLANK.
    # Bug 6139681 - BPEL WSDL LINK IN CLUSTERED RUNTIME POINTS TO A SINGLE NODE
    # Bug 6141259 - BASICHEADERS NOT PUTTING WWW-AUTHENTICATE HEADERS FOR HTTP
    # BINDING IN BPEL
    # Bug 6148021 - BPEL NATIVE SCHEMA FOR COBOL COPYBOOK WITH IMPLIED DECIMAL
    # LOSES DIGIT IN OUTPUT
    # Bug 6149672 - XOR DATA - CONDITION EXPRESSION SPECIFICATION IS NOT
    # INTUITIVE IN BPMN MODELS
    # Bug 6152830 - LOSING CONDITIONAL EXPRESSIONS CREATED IN JDEV UPON MERGE
    # Bug 6158128 - BASICHEADERS NOT PUTTING WWW-AUTHENTICATE HEADERS FOR HTTP
    # BINDING
    # Bug 6166991 - WHEN STARTING SOA SUITE,, PROCESSES FAIL DUE TO UNDEFINED
    # WSDL
    # Bug 6168226 - LOCATION-RESOLVER EXCEPTION THROWN IN OPMN LOGS
    # Bug 6187883 - CHANGES FOR BPEL RELEASE ON JBOSS- VERSION 10.1.3.3
    # Bug 6206148 - [AIA2.0] NEW FUNCTION REQUEST, XREFLOOKUPPOPULATEDCOLUMNS()
    # Bug 6210481 - BPEL PROCESS WORKS INCORRECTLY WHEN AN ACTIVITY HAS MULTIPLE
    # TRANSITIONCONDITION
    # Bug 6240028 - WEBSERVICE THAT DOES NOT CHALLENGE FOR BASIC CREDENTIALS
    # CANNOT BE INVOKED
    # Bug 6257116 - MULTIPLE HEADER SETTING CAUSES ESB EXCEPTION
    # Bug 6258925 - MESSAGE RECEIVED BY THE TARGET ENDPOINT VIA HTTP POST IS
    # MISSING THE XML HEADER
    # Bug 6259686 - TOO MANY UNNECESSARY WORKFLOW E-MAIL NOTIFICATIONS GENERATED
    # Bug 6267726 - 10.1.3.3 ORACLE APPLICATIONS ADAPTER - NOT ABLE TO CAPTURE
    # BUSINESS EVENT
    # Bug 6272427 - WEBSPHERE BPEL FAILS FOR DATA RETRIEVAL OF SIZE 500+ KB
    # Bug 6276995 - MERGE SCOPE NAME IS NOT UPDATED WHEN CHANGED IN THE SERVER
    # Bug 6280570 - XPATH EXPRESSION ERROR IN MEDIATOR FOR ASSIGNING USER-DEFINED
    # CONTEXT VALUES
    # Bug 6282339 - RETRYCOUNT DOES NOT WORK PROPERLY
    # Bug 6311039 - ONE RECORD IS INSERTED TO SYNC_STORE IF
    # COMPLETIONPERSISTPOLICY SET TO FAULTED
    # Bug 6311809 - [AIA2.0] NON-RETRYABLE ERRORS ARE NOT POSTED ON ESB_ERROR
    # TOPIC
    # Bug 6314784 - THE PRIORITY DEFINED IN THE BPA SUITE IS NOT TRANSFERRED TO
    # THE JDEV CORRECTLY
    # Bug 6314982 - THREADPOOL RACE CONDITION IN ADAPTER INITIALIZATION MESSAGES
    # NOT PROCESSED
    # Bug 6315104 - (SET)CLASSNAME MISSING IN TSENSOR JAXB OBJECTS
    # Bug 6316554 - CONSUME FUNCTIONALITY OF JMS ADAPTER FOR BEA WEBLOGIC DOES
    # NOT WORK
    # Bug 6316950 - FILEADAPTER HARPER ENHANCEMENTS SYNC WRITE AND CHUNKED
    # INTERACTION SPEC
    # Bug 6317398 - THE ICON FOR COMPUTING DIFFERENCE IS MISSING IN JDEV REFRESH
    # FROM SERVER DIALOG
    # Bug 6320506 - IMPORT FAILS WHEN THERE IS AN UNNAMED CASE
    # Bug 6321011 - CANNOT PROCESS 0 BYTE FILE USING FTP ADAPTER
    # Bug 6325749 - TRACKING BUG FOR TRACKING ADDITIONAL CHANGES TO BUG #6032044
    # Bug 6328584 - NEED A NEW XPATH EXPRESSION TO GET ATTACHMENT CONTENT VIA
    # SOAP INVOKATION
    # Bug 6333788 - COLLAPSING OF CONSECUTIVE ASSIGN TASKS BREAKS BAM SENSOR
    # Bug 6335773 - BUILD.XML CONTAINS DO NOT EDIT .. - WHILE <CUSTOMIZE> TASK
    # MUST BE IN <BPELC>
    # Bug 6335805 - AQ ADAPTER OUTBOUND DOESN'T RECONNECT AFTER FAILURE
    # Bug 6335822 - [AIA2.0] PSRPERFESB - RUNTIME DVM PERFORMANCE OVERHEAD IN ABS
    # USE CASE
    # Bug 6339126 - CHECKPOINT BPEL JAVA METHOD DOESN'T WORK IN BPEL 10.1.3.3
    # Bug 6342899 - OUTLINECHANGE.XML NOT UPDATE WITH ACTIVITY FROM NEW BRANCH
    # Bug 6343299 - ESB CONCRETE WSDL NAMESPACE SHOULD BE DIFFERENT FROM IMPORTED
    # WSDL NAMESPACE
    # Bug 6372741 - DEHYDRATION DATABASE KEEPS GROWING IN 10.1.3.3
    # Bug 6401295 - NXSD SHOULD SUPPORT ESCAPING THE TERMINATED/QUOTED/SURROUNDED
    # DELIMITERS
    # Bug 6458691 - DIST DIRECTORY FOR 10.1.3.3.1 NEEDS UPDATE
    # Bug 6461516 - BPEL CONSOLE CHANGES FOR DISPLAYING RELEASE 10.1.3.3.1
    # Bug 6470742 - CHANGE THE VERSION NUMBER AND BUILD INFO IN ABOUT DIALOG IN
    # ESB
    # BUG ADDED IN MLR#1, 6671813 :
    # Bug 6494921 - ORABPEL-02154 IF LONG DOMAIN AND SUITECASE NAMES IN USE
    # BUGS ADDED IN MLR#2, 6671831 :
    # Bug 6456519 - ERROR IN BPEL CONSOLE THREADS TAB:SERVLETEXCEPTION CANNOT GET
    # DISPATCHER TRACE
    # Bug 6354719 - WHICH JGROUP CONFIGURATION PARAMETER IMPACTS BPEL CLUSTER
    # ACTIVITY
    # Bug 6216169 - SCOPE NOT FOUND ERROR WHILE DELIVERING EXPIRATION MESSAGE OF
    # ONALARM
    # Bug 6395060 - ORA-01704 ON INSERTING A FAULTED INVOKE ACTIVITY_SENSOR
    # Bug 6501312 - DEHYDRATION DATABASE KEEPS GROWING IN 10.1.3.3 #2
    # Bug 6601020 - SEARCHBASE WHICH INCLUDES PARENTHESIS IN THE NAMES DOES NOT
    # WORK
    # Bug 6182023 - WAIT ACTIVITY FAILS TO CONTINUE IN CLUSTER WHEN PROCESSING
    # NODE GOES DOWN
    # BUGS ADDED IN MLR#3, 6723162 :
    # Bug 6725374 - INSTANCE NOT FOUND IN DATASOURCE
    # Bug 4964824 - TIMED OUT IF SET CORRELATIONSET INITIATE YES IN REPLY
    # ACTIVITY
    # Bug 6443218 - [AIA2.0]BPEL PROCESS THAT REPLIES A CAUGHT FAULT AND THEN
    # RETHROWS IT IS STUCK
    # Bug 6235180 - BPPEL XPATH FUNCTION XP20 CURRENT-DATETIME() IS RETURNING AN
    # INCORRET TIME
    # Bug 6011665 - BPEL RESTART CAUSES ORABPEL-08003 FAILED TO READ WSDL
    # Bug 6731179 - INCREASED REQUESTS CAUSE OUTOFMEMORY ERRORS IN OC4J_SOA WHICH
    # REQUIRES A RESTART
    # Bug 6745591 - SYNC PROCESS <REPLY> FOLLOWED BY <THROW> CASE CAUSING
    # OUTOFMEMORY ERRORS
    # Bug 6396308 - UNABLE TO SEARCH FOR HUMAN TASK THAT INCLUDES TASK HISTORY
    # FROM PREVIOUS TASK
    # Bug 6455812 - DIRECT INVOCATION FROM ESB ROUTING SERVICE FAILS WHEN CALLED
    # BPEL PROCESS
    # Bug 6273370 - ESBLISTENERIMPL.ONFATALERROR GENERATING NPE ON CUSTOM ADAPTER
    # Bug 6030243 - WORKFLOW NOTIFICATIONS FAILING WITHOUT BPELADMIN USER
    # Bug 6473280 - INVOKING A .NET 3.0 SOAP SERVICE EXPOSED BY A ESB ENDPOINT
    # GIVES A NPE
    # BUGS ADDED IN MLR#4, 6748706 :
    # Bug 6336442 - RESETTING ESB REPOSITORY DOES NOT CLEAR DB SLIDE REPOSITORY
    # Bug 6316613 - MIDPROCESS ACTIVATION AGENT DOES NOT ACTIVATED FOR RETIRED
    # BPEL PROCESS
    # Bug 6368420 - SYSTEM IS NOT ASSIGNING TASK FOR REAPPROVAL AFTER REQUEST
    # MORE INFO SUBMITTED
    # Bug 6133670 - JDEV: UNABLE TO CREATE AN INTEGRATION SERVER CONNETION WHEN
    # ESB IS ON HTTPS
    # Bug 6681055 - TEXT ATTACHMENT CONTENT IS CORRUPTED
    # Bug 6638648 - REQUEST HEADERS ARE NOT PASSED THROUGH TO THE OUTBOUND HEADER
    # Bug 5521385 - [HA]PATCH01:ESB WILL LOSE TRACKING DATA WHEN JMS PROVIDER IS
    # DOWN
    # Bug 6759068 - WORKLIST APPLICATION PERFORMANCE DEGRADATION W/ SSL ENABLED
    # FOR BPEL TO OVD
    # BUGS ADDED IN MLR#5, 6782254 :
    # Bug 6502310 - AUTOMATED RETRY ON FAILED INVOKE WITH CORRELATIONSET INIT
    # FAILS
    # Bug 6454795 - FAULT POLICY CHANGE NEEDS RESTART OF BPEL SERVER
    # Bug 6732064 - FAILED TO READ WSDL ERROR ON THE CALLBACK ON RESTARTING BPEL
    # OC4J CONTAINER
    # Bug 6694313 - ZERO BYTE FILE WHEN REJECTEDMESSAGEHANDLERS FAILS
    # Bug 6686528 - LINK IN APPLICATION.XML FILES CHANGED TO HARD LINKS WHEN MORE
    # THAN 1 HT PRESENT
    # Bug 6083024 - TEXT AND HTML DOC THAT RECEIVED AS ATTACHMENTS WERE EITHER
    # BLANK OR GARBLED
    # Bug 6638648 - REQUEST HEADERS ARE NOT PASSED THROUGH TO THE OUTBOUND HEADER
    # Bug 6267726 - 10.1.3.3 ORACLE APPLICATIONS ADAPTER - NOT ABLE TO CAPTURE
    # BUSINESS EVENT
    # Bug 6774981 - NON-RETRYABLE ERRORS ARE NOT POSTED ON ESB_ERROR TOPIC
    # Bug 6789177 - SFTP ADAPTER DOES NOT SUPPORT RENAMING FILES
    # Bug 6809593 - BPEL UPGRADE TO 10.1.3.3.1 WITH ESB CALLS FAILS DUE TO
    # CACHING OF PLNK - SERVICE
    # BUGS ADDED IN MLR#6, 6823628 :
    # Bug 6412909 - <BPELX:RENAME> DOES NOT ADD XMLNS DECLARATION AUTOMATICALLY
    # Bug 6753116 - OUTPUT FROM HUMAN TASK IS NOT CONSISTENT WITH SCHEMA ORDERING
    # Bug 6832205 - BAD VERIFICATIONSERVICE PERFORMANCE IF LDAP SERVICE HAS HUGE
    # DATA
    # Bug 6189268 - CALLING BPEL PROCESS VIA SOAP FROM ESB FAILS WITH
    # NAMENOTFOUNDEXCEPTION
    # Bug 6834402 - JMS ADAPTER IMPROPERLY CASTS XAQUEUESESSION TO QUEUESESSION
    # Bug 6073117 - TASK SERVICE DOESN'T RENDER THE TASK ACTIONS
    # Bug 6054263 - REUSING SOAP WSDL IN RS CAUSES SOAP ACTION'S NS TO BE STRIPPED AWAY
    # Bug 6489703 - ESB: NUMBER OF LISTENERS > 1 GIVES JMS EXCEPTION UNDER STRESS
    # Bug 5679542 - FTP ADAPTER: COULD NOT PARSE TIME:
    # JAVA.LANG.STRINGINDEXOUTOFBOUNDSEXCEPTION
    # Bug 6770198 - AQ ACTIVATIONINSTANCES >1 DOESN'T WORK IN ESB
    # Bug 6798779 - ESB ROUTING RULES CORRUPTED ON RE-REGISTERING WITH ROUTING ORDER IN WSDL CHANGED
    # Bug 6617974 - BACKPORT REQUEST FOR MOVING FILES FUNCTION OF FTP ADAPTER
    # Bug 6705707 - VALIDATION ON ESB CAN'T HANDLE NESTED SCHEMAS
    # Bug 6414848 - FTP ADAPTER ARCHIVE FILENAME FOR BPEL IS BEING SCRAMBLED AFTER THE 10.1.3.3 UPGR
    # Bug 5990764 - INFORMATION IS LOST WHEN BPEL PROCESS IS POLLING FOR MAILS WITH ATTACHMENTS
    # Bug 6802070 - ORA-12899 SUBSCRIBER_ID/RES_SUBSCRIBER COLUMN SMALL FOR LONG
    # DOMAIN AND PROCESS
    # Bug 6753524 - WRONG SERVICE ENDPOINT OPEN WHEN TEST WEB SERVICE OF ESB
    # Bug 6086434 - PROBLEM IN BPEL FILE ADAPTER WHILE READING A FIXED LENGTH
    # FILE
    # Bug 6823374 - BPEL 10.1.3.3.1 BAM SENSOR ACTION FAILS WITH BAM 11
    # Bug 6819677 - HTTP STATUS 202 RETURNED INSTEAD OF SOAP FAULT
    # Bug 6853301 - MQ ADAPTER REJECTED MESSAGES IS NOT REMOVED FROM THE RECOVERY
    # QUEUE
    # Bug 6847200 - 10.1.3.3.1 PATCH (#6748706) HAS STOPPED FTP ADAPTER POLLING IN SFTP MODE
    # Bug 6895795 - AQ OUTBOUND DOESN'T WORK WITH MLR#6
    Please refer to this for your work.

    David,
    You are right, there are some changes incorporated in the latest MLR #16 to the configuration files and to the dehydration store metrics (such as performance, fields, ...).
    However, I would not suggest continuing to work on Olite, even for development/test purposes, as you might get stuck with strange errors, and the only solution would be to re-install SOA Suite if your Olite gets corrupted. There may be ways to get your Olite back into shape, but trust me, it is not that simple.
    Also, once you have developed and stress-tested all your test-case scenarios on a TEST Advanced installation, it is simple to mimic the same setup on the actual production box, because you know exactly how it behaves.
    So, go for a brand new SOA 10.1.3.4 MLR #5 (or) 10.1.3.3.1 MLR #16 SOA Suite Advanced installation with Oracle DB 10.2.0.3 as its dehydration store.
    Hope this helps!
    Cheers
    Anirudh Pucha

  • didadm: unable to determine hostname error on Sun Cluster 4.0 - Solaris 11

    Trying to install Sun Cluster 4.0 on Sun Solaris 11 (x86-64).
    iSCSI shared quorum disks are available in /dev/rdsk/. I ran:
    devfsadm
    cldevice populate
    But I don't see the DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued, I get the following error. Has anyone seen the same error?
    - didadm: unable to determine hostname.
    I found that in Cluster 3.2 there was Bug 6380956: didadm should exit with an error message if it cannot determine the hostname.
    The Sun Cluster command didadm (didadm -l in particular) requires the hostname to function correctly. It uses the standard C library function gethostname() to achieve this.
    Early in the cluster boot, prior to the service svc:/system/identity:node coming online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with the shared quorum disk DID?
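    A minimal sequence for checking whether the DID layer has picked up anything at all (these are the standard Oracle Solaris Cluster 4.0 commands; the exact output will of course differ on your system):
    # re-scan devices and rebuild the DID namespace
    devfsadm
    cldevice populate
    # list the DID mappings and their status
    cldevice list -v
    cldevice status
    ls -l /dev/did/rdsk
    If cldevice itself also complains about the hostname, the problem is below the cluster layer, which is what the reply below digs into.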

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform or is it some guest VM? (That might contribute to the problems).
    Next, after you installed Solaris 11, did the system boot cleanly and all the services come up? (svcs -x). If it did boot cleanly, what did 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above host name commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
    Regards,
    Tim
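    A minimal sketch of the checks Tim suggests, run on the affected node (standard Solaris 11 commands; substitute your own node name where needed):
    svcs -x                              # any services in maintenance?
    svcs -l svc:/system/identity:node    # is the identity service online?
    uname -n                             # what does the node think it is called?
    getent hosts `uname -n`              # does that name resolve?
    If uname -n returns nothing, or the name does not resolve, that is the problem to fix first; scdidadm and cldevice depend on it.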
    ---

  • Cluster Disk Logical Disk Performance Counters Problem

    I have a problem with an Oracle cluster. It's a two-node Windows Server 2008 R2 SP1 cluster using MS Failover Clustering. I have SCOM 2012 SP1 with the latest OS MP (the agent runs with Local System rights). All other clusters are showing performance counters fine,
    but this cluster only shows free-space counters for one cluster disk. It has 5 cluster disks (MBR formatted). Logical disk performance counters are working fine. So my questions:
    What performance counters does the new "Cluster Disk % and MB free space" rule use? Is it the same old Logical Disk performance counter that collects data, for example, for drive C:?
    What should I check on this cluster? The performance view for my cluster groups is not showing all disks, just one, even though there are 5 cluster disks.
    Only one Cluster Disk is showing under performance view!
    Health Explorer from Cluster group shows all Cluster disks!
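    One thing worth checking directly on the cluster node is whether the underlying LogicalDisk free-space counters are exposed for all five clustered volumes in the first place; a minimal check with the built-in typeperf tool (the counter paths below assume the default English counter names):
    typeperf -q LogicalDisk
    typeperf "\LogicalDisk(*)\% Free Space" -sc 1
    If a clustered volume is missing from that output, no management pack rule will be able to collect its free-space data.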

    We have this issue with normal Windows Server 2008 SQL servers. I can see the disks in the Health Explorer view but no collection rules in the performance view. The rule does not have any overrides and it is enabled by default. We use the latest Core OS pack and Cluster
    MP. Is this another bug in the MS management pack?

  • Removing synchronous ajax as planned deprecation is a bad idea

    Well that wasn't a question, but I have some serious misgivings about your proposed idea to get rid of synchronous ajax. Your site says:
    Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 / SeaMonkey 2.27), synchronous requests on the main thread have been deprecated due to the negative effects to the user experience.
    As a simple developer of web forms, I assume that I am using the "main thread" and will be affected. Some of my forms are built dynamically and data is sometimes fetched dynamically as well. If I open a form for a user I may need to pre-populate the form fields with some data. In that case, I'd have to get the data (via synchronous ajax) before showing the form. This is a simple case and I think a common one. Am I missing something or am I just screwed?

    See:
    *https://developer.mozilla.org/Mozilla/Firefox/Releases/30/Site_Compatibility
    It hasn't been removed in Firefox 30; it is merely tagged as deprecated:
    <blockquote>Synchronous XMLHttpRequest has been deprecated
    * Bug 969671 – Warn about use of sync XHR in the main thread
    The async argument of the XMLHttpRequest constructor is true by default, and false makes the request synchronous. Developers now get a warning in the Web Console if a synchronous request is used on the main thread, or outside of workers, since it is now considered deprecated due to its negative effects on the user experience.</blockquote>
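    For the form pre-population case described in the question, the usual replacement is to keep the request asynchronous and only show the form once the data has arrived. A minimal sketch using plain XMLHttpRequest; the URL and element IDs are made up for illustration:
    // Fetch JSON asynchronously, then hand the parsed data to a callback.
    function loadFormData(url, onReady) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);   // true = asynchronous (also the default)
      xhr.onload = function () {
        if (xhr.status === 200) {
          onReady(JSON.parse(xhr.responseText));
        }
      };
      xhr.send();
    }
    // Usage: populate the fields, then reveal the form.
    loadFormData('/form-defaults', function (data) {          // URL is an assumption
      document.getElementById('name').value = data.name;      // field ID is an assumption
      document.getElementById('myForm').style.display = '';   // show the form only now
    });
    The user never sees a half-empty form, and the main thread is not blocked while the data is fetched.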

  • JumpStart install of Sun Cluster 3.1

    Hi,
    I'm trying to use JumpStart to install a two-node cluster. Solaris 9 installs, but I get problems when the cluster software begins to install. I get these error messages:
    Performing setup for Sun Cluster autoinstall ... nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: retrying: /a/autoscinstalldir
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    n
    It does not install further. Could somebody give some suggestions?
    thank you, pcurran
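    The errors point at rpcbind/NFS on the install server (scadm in this log). A minimal check from the client side, assuming the autoinstall directory is shared from scadm (the share path below is a placeholder; use whatever your JumpStart setup actually exports):
    # is rpcbind answering on the install server?
    rpcinfo -p scadm
    # is the autoinstall directory actually shared?
    showmount -e scadm
    # try the mount by hand
    mount -F nfs scadm:/export/autoscinstalldir /a/autoscinstalldir
    If rpcinfo or showmount fail, fix rpcbind/NFS sharing on scadm before re-running the JumpStart install.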

    Hi All,
    I have decided to first install Sun Cluster 3.1 (trial version) manually:
    1) Installed sun_web_console. The installation completed successfully, but I got this statement at the end:
    "Server not started, No management application registed"
    2) Went ahead to run "scinstall" but got this error:
    ** Installing SunCluster 3.1 framework **
    SUNWscr.....failed
    scinstall: Installation of "SUNWscr" failed
    Below is the "log file":
    ** Installing SunCluster 3.1 framework **
    SUNWscr
    pkgadd -S -d /cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages -n -a /var/cluster/run/scinstall/scinstall.admin.11374
    SUNWscr failed
    pkgadd: ERROR: cppath(): unable to stat </cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/SUNWscr/reloc/etc/cluster/ccr/rgm_rt_SUNW.LogicalHostname:2>
    pkgadd: ERROR: cppath(): unable to stat </cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/SUNWscr/reloc/etc/cluster/ccr/rgm_rt_SUNW.SharedAddress:2>
    ERROR: attribute verification of </etc/cluster/ccr/rgm_rt_SUNW.LogicalHostname:2> failed
    ERROR: attribute verification of </etc/cluster/ccr/rgm_rt_SUNW.SharedAddress:2> failed
    pathname does not exist
    Reboot client to install driver.
    Installation of <SUNWscr> partially failed.
    scinstall: Installation of "SUNWscr" failed
    scinstall: scinstall did NOT complete successfully!
    Could someone please give some direction?
    thank you, pcurran
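    The cppath()/attribute-verification errors usually mean the SUNWscr payload on the media is incomplete or unreadable (a bad burn is a common cause). A minimal sanity check against the media, using the same paths that appear in the log:
    # are the files pkgadd complains about actually present on the media?
    ls -l /cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/SUNWscr/reloc/etc/cluster/ccr/
    # verify the spooled package contents against its pkgmap
    pkgchk -d /cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages SUNWscr
    If either check fails, re-download or re-burn the Sun Cluster 3.1 media and try scinstall again.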
