Logical hostname netmask

I am evaluating a Solaris Cluster (x86_64). I want to try "High Availability MySQL Database Replication with Solaris Zone Cluster" from the Sun BluePrints series, and I have a problem with the netmask for the logical hostname. The cluster sets it to class C (255.255.255.0), but the mask should be 255.255.255.240. How do I change the netmask for a logical hostname?

/etc/netmasks ?
The netmask for the physical IP address and the logical IP address cannot be different if the addresses are part of the same subnet. So, if your physical node/host/zone has a public address of 192.168.1.32/24 and the logical host address is 192.168.1.33, then the netmask must also be /24. Furthermore, you cannot have a physical IP address of 192.168.1.32/24 and a logical IP address of 10.1.1.33, because there would be no adapter to host that address on.
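For what it's worth, the netmask that gets plumbed for a logical hostname is normally looked up in the netmasks database (/etc/netmasks or the name service), so an entry along the following lines on each node is what the cluster would pick up; the network number here is only an illustration, substitute your own:
192.168.1.0    255.255.255.240
The logical address typically has to be taken offline and brought online again (for example by switching its resource group) before a changed mask takes effect.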
Does that make sense?
Regards,
Tim
---

Similar Messages

  • Problem in creating logical hostname resource

    Hi all,
    I have a cluster configured on 10.112.10.206 and 10.112.10.208.
    I have a resource group testrg.
    I want to create a logical hostname resource testhost.
    I have given the IP 10.112.10.245 to testhost in the /etc/hosts file.
    I am creating the logical hostname resource with the command below:
    clrslh create -g testrg testhost
    I am doing this on 206.
    As soon as I do, the other node (208) becomes unreachable: I am not able to ping 208, although ssh from 206 to 208 still works.
    I am also not able to ping 10.112.10.245.
    Please help.

    So, the physical IP addresses of your two nodes are:
    10.112.10.206 node1
    10.112.10.208 node2
    And your logical host is:
    10.112.10.245 testhost
    Have you got a netmask set for this network? Is it 255.255.255.0 and is it set in /etc/netmasks?
    It's most likely that this is the cause of the problem if you have different netmasks on the interfaces.
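    For example, you could compare the two nodes with something like this (the grep pattern is just an illustration for this subnet):
    # grep 10.112.10 /etc/netmasks          (the entry, if present, should be identical on both nodes)
    # ifconfig -a | grep netmask            (shows what is actually plumbed on the interfaces)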
    Tim
    ---

  • How to add network information for failover zones with logical hostname?

    Hello!
    As stated in [http://docs.sun.com/app/docs/doc/819-3069/ds_template-21?a=view], I must not configure network addresses for a zone when I manage these with a logical hostname:
    "If you require the SUNW.LogicalHostName resource type to manage all the zone's addresses, configure a SUNW.LogicalHostName resource with a list of the zone's addresses and do not configure them by using the zonecfg utility."
    But when I start the zone for the first time using "zlogin -C", it does not ask me any questions about the network. Of course, there is no adapter configured. But how do I add information like routes or nameservers to the system when using a logical hostname?
    TIA
    Stephan

    Hi Stephan,
    I can only assume that when the zone was configured via zonecfg without any network interfaces, sysidcfg did not ask you for the default route or name service; as such, you will need to set those parts up manually.
    Please take a look at the FAQs for zones, i.e. http://opensolaris.org/os/community/zones/faq/ in particular
    http://opensolaris.org/os/community/zones/faq/#u5
    http://opensolaris.org/os/community/zones/faq/#cfg_defroute
    Finally, if you require a NIS client then please see http://docs.sun.com/app/docs/doc/816-5166/ypinit-1m?a=view
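    As a rough sketch of that manual setup inside the zone (the addresses are placeholders, and the details differ between shared-IP and exclusive-IP zones):
    # echo "192.168.1.1" > /etc/defaultrouter              (default gateway, picked up at boot)
    # echo "nameserver 192.168.1.10" >> /etc/resolv.conf   (DNS server)
    and make sure the hosts line in /etc/nsswitch.conf includes dns.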
    Regards
    Neil

  • Creating Logical hostname in sun cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    For registering a logical hostname resource in a failover resource group, what exactly do I need to specify?
    For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the nodes in the cluster)? Can I get clarification on this?

    Thanks, Thorsten, for your continued help...
    The output of clrs status abc_lg:
    === Cluster Resources ===
    Resource Name    Node Name    State      Status Message
    -------------    ---------    -----      --------------
    abc_lg           node1        Offline    Offline
                     node2        Offline    Offline
    The status is offline...
    The output of clresourcegroup status:
    === Cluster Resource Groups ===
    Group Name    Node Name    Suspended    Status
    ----------    ---------    ---------    ---------
    abc_rg        node1        No           Unmanaged
                  node2        No           Unmanaged
    You say that the resource should be enabled after creating it. I am using GDS and I am just following the steps provided (in the developer's guide...) to achieve high availability.
    I have 1) a logical hostname resource and
    2) an application resource in my failover resource group.
    When I bring the failover resource group online, what should the status of the resource group be, and what should the status of the resources in it be?
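    For reference, an unmanaged resource group is usually brought under RGM control and online with something like:
    # clresourcegroup online -eM abc_rg        (-M manages the group, -e enables its resources)
    after which clrs status should show the logical hostname Online on one of the nodes.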

  • Single interface instead of logical hostname

    Hello!
    Currently we have a two-node Solaris Cluster running which serves as an NFS server. Both nodes have their own interface, and a logical hostname is used for NFS.
    Now we want to provide NFS for a second network with the same cluster. We are short of addresses in this network, so the best solution would be a single interface which is only activated on the node that runs the NFS server:
    bge0: First network (and virtual hostname if active)
    bge1: Second network if active
    bge2+3: Cluster transport
    Is this setup possible with sun cluster?
    TIA
    Stephan

    If I understand your problem statement correctly, then the answer is no.
    Tim
    ---

  • SNMP Trap and logical hostname

    Consider a case in which a cluster node has two public network interfaces in an IPMP group and a logical hostname (IP address) is also assigned to one of the interfaces.
    If an SNMP trap is generated from this cluster node, what would be the source IP address in the IP header of the trap?
    Is there a way to restrict the source IP addr to one of the interfaces?

    There is only one IPMP group, so I'm not sure what you mean by active/standby groups. If your question meant to ask whether the interfaces within the group are in active/standby, the answer is no.
    Even if they were active/standby and the trap always travels over the active interface, the question still remains whether the source IP address would be the physical address or the logical address, assuming the logical address is also assigned to the same interface.

  • SC 3.2 - logical hostname create problem

    Hello
    I am running Sun Cluster 3.2 in a two-node cluster with IPMP, and we are using IBM storage and metasets (Solaris Volume Manager) in the cluster.
    While trying to create the Sun Cluster logical hostname resource, I get the error below. I was able to successfully create the data resource, but the logical hostname resource gives this error.
    Command executed: /usr/cluster/bin/clreslogicalhostname create -g test-rg -p Resource_project_name=default -p R_description=Failover\ network\ resource\ for\ SUNW.LogicalHostname:3 -N group1@1:node1,group1@2:node22 -h test-rs test-rs
    Error message:
    clreslogicalhostname: <test-rs> cannot be mapped to an IP address
    Please advise.

    Hi,
    The value you specified with the -h option is the logical hostname that should be mapped to an IP address; it is not the resource name. The mapping is possible if you have an entry for this hostname either in the /etc/hosts file or in the name service that you are using. Make sure that you have an entry in the /etc/hosts file for "test-rs" and retry the create operation.
    BTW, you need not specify the -h option if your hostname is the same as the resource name and the resource name is resolvable.
    From man page of clrslh command:
    -h lhost[,…]
    --logicalhost lhost[,…]
    Specifies the list of logical hostnames that this resource represents. You must use the -h option either when more than one logical hostname is to be associated with the new logical hostname resource or when the logical hostname does not have the same name as the resource itself. All logical hostnames in the list must be on the same subnet. If you do not specify the -h option, the resource represents a single logical hostname whose name is the name of the resource itself.
    You can use -h instead of setting the HostnameList property with -p. However, you cannot use -h and explicitly set HostnameList in the same command.
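    So in this case a minimal sequence would look something like the following (the address is an example only and must fit your public subnet):
    # echo "10.1.1.50   test-rs" >> /etc/hosts             (on every cluster node)
    # clreslogicalhostname create -g test-rg test-rs       (-h can be omitted because the hostname equals the resource name)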
    Thanks,
    Prasanna Kunisetty

  • Problems registering Logical Hostname resource

    I'm trying to set up HA Oracle.
    I installed SC 3.1 and configured the logical devices with Volume Manager.
    I installed Oracle 9i on the local disks of the cluster nodes.
    I have two 280Rs in the cluster.
    I installed HA for Oracle from the SC 3.1 CD.
    I configure in /etc/hosts in both nodes:
    node 0:
    /etc/hosts
    <IP> bbddtlatam
    node 1:
    /etc/hosts
    <same IP> bbddtlatam
    I configured an IPMP group for each interface (on both machines):
    # ifconfig qfe0 group iptest
    and
    #scrgadm -a -t SUNW.oracle_server
    #scrgadm -a -t SUNW.oracle_listener
    #scrgadm -a -g resource_name_group
    but when I try to register the logical hostname:
    # scrgadm -a -L -g resource_name_group -l bbddtlatam
    it fails with:
    Call to rpc.fed failed for resource bbddtlatam, method hafoip_validate.
    Validation of resource bbddtlatam in resource group bbdd-ora-rg on node latamint-9 failed.
    In /var/adm/messages
    Jun 15 10:31:46 latamint-9 Cluster.RGM.fed: [ID 115461 daemon.error] in libsecurity __rpc_getlocaluid failed
    Jun 15 10:31:46 latamint-9 Cluster.RGM.fed: [ID 991800 daemon.error] in libsecurity transport tcp is not a loopback transport
    Jun 15 10:31:46 latamint-9 Cluster.RGM.fed: [ID 299417 daemon.error] in libsecurity strong Unix authorization failed
    Jun 15 10:31:46 latamint-9 Cluster.RGM.rgmd: [ID 217093 daemon.error] Call failed: FE_RUN: RPC: Authentication error; why = Client credential too weak
    Jun 15 10:31:46 latamint-9 Cluster.RGM.rgmd: [ID 581180 daemon.error] launch_validate: call to rpc.fed failed for resource <bbddtlatam>, method <hafoip_validate>
    Any idea????
    Thanks.....

    Looking at your error,
    this all looks like a problem with RPC.
    Check whether the RPC daemons are running fine.
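    For example, you could check along these lines on the node reporting the error (service names are the Solaris 10 ones, adjust for your release):
    # svcs -x network/rpc/bind       (is rpcbind online?)
    # rpcinfo -p localhost           (are the expected RPC programs registered?)
    # svcs -x                        (anything, cluster-related or not, in maintenance?)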

  • Determining Logical Hostname

    We have a (non-cluster-aware) client-server application that has several background client programs that do asynchronous processing. In principle, these can be distributed among an arbitrary number of hosts, although in practice usually all reside on a single host.
    These clients are administered by a control program. Details of all background clients are kept in a table, which includes the host name. The client program compares that host name with the result of the UNIX program 'hostname'. If they are the same, it controls the program directly. If not, it uses rsh to log in to the correct host.
    The problem is that 'hostname' prints the physical host. When failover occurs, this changes, and the table becomes out of date.
    Just changing the table contents to the logical host name is not possible, as then all operations would be via 'rsh', and this is generally disabled in production systems for security reasons.
    My question is, how can I determine the logical hostname, so that the control program can determine whether an rsh is necessary in a clustered environment?
    Thanks,
    Roger

    Know this reply is a bit late but I thought I'd post it as I'd like to know if anyone has a better way of doing this within a script.
    How we achieved it was as follows:
    Use uname -n to determine the physical host name, then use a case statement to define the logical hosts which may be present on each physical host, e.g.:
    physhost=`uname -n`
    case $physhost in
        physhost1) logihost1=fred
                   logihost2=albert ;;
        physhost2) logihost1=albert ;;   # etc.
    esac
    The reason for using the case is that we list all logical hosts for all our physical hosts in one script and use the same script on all servers.
    Then use vxdg list and grep for the disk groups which belong to the logical hosts, e.g.:
    disk_dg=`vxdg list | egrep -v 'NAME|rootdg|sc_dg' | grep -v ${logihost2}dg | wc -l`
    If this command is run on physhost1 and sets disk_dg to 1, then logihost1 (fred) is present on this machine.
    Does anyone have a slicker way of doing this?

  • Including Logical Hostname as a Resource Dependency

    Hi Folks,
    We're setting up a Calendar Server and assign a Logical Hostname resource that the calendar clients connect to.
    These are the steps that are used:
    I created a Fail Over Resource Group called MS_RG_BUDDY and bring it online
    scrgadm -a -g MS_RG_BUDDY -h mars,venus
    I created a Logical Hostname resource EVERGREEN for this Fail Over resource group
    scrgadm -a -L -g MS_RG_BUDDY -l EVERGREEN
    scrgadm -c -j EVERGREEN -y R_description="LogicalHostname resource for evergreen"
    Bring the Resource Group Online
    scswitch -Z -g MS_RG_BUDDY
    I create an HAStoragePlus resource called disk-evergreen with two cluster file systems ( /var/opt/xyz and /var/opt/abc )
    scrgadm -a -j disk-evergreen -g MS_RG_BUDDY -t SUNW.HAStoragePlus -x FileSystemMountPoints="/var/opt/xyz,/var/opt/abc" -x AffinityOn=TRUE
    Enable the HAStoragePlus resource
    scswitch -e -j disk-evergreen
    Create a resource with a dependency on the HAStoragePlus resource
    scrgadm -a -j core-rs -t SUNW.iws -g MS_RG_BUDDY -x IWS_serverroot=/opt/SUNWiws -y Resource_dependencies=disk-evergreen
    I have a question. In the above command we created a resource dependency on the HAStoragePlus resource; should we also create a dependency on the logical hostname, i.e. should the Resource_dependencies property look like
    Resource_dependencies=evergreen,disk-evergreen
    Is this correct? Should we bother? Are there any other factors we might want to consider?
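    For reference, adding the logical hostname to the dependency list after the fact would look something like this (resource names taken from the steps above, SC 3.1 scrgadm syntax):
    # scrgadm -c -j core-rs -y Resource_dependencies=EVERGREEN,disk-evergreen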

    Hartmut,
    Thanks for your response. I think I will use the explicit dependency :) since all our clients connect to the cluster via this LogicalHostname.
    Are you referring to this property?
    (CAL-RED-RG) Res Group network dependencies: True
    It was obtained by running the following command:
    # ./scrgadm -pvv -g CAL-RED-RG
    Res Group name: CAL-RED-RG
    (CAL-RED-RG) Res Group RG_description: <NULL>
    (CAL-RED-RG) Res Group mode: Failover
    (CAL-RED-RG) Res Group management state: Managed
    (CAL-RED-RG) Res Group RG_project_name: default
    (CAL-RED-RG) Res Group RG_SLM_type: manual
    (CAL-RED-RG) Res Group RG_affinities: <NULL>
    (CAL-RED-RG) Res Group Auto_start_on_new_cluster: True
    (CAL-RED-RG) Res Group Failback: False
    (CAL-RED-RG) Res Group Nodelist: shaw telstra
    (CAL-RED-RG) Res Group Maximum_primaries: 1
    (CAL-RED-RG) Res Group Desired_primaries: 1
    (CAL-RED-RG) Res Group RG_dependencies: <NULL>
    (CAL-RED-RG) Res Group network dependencies: True
    (CAL-RED-RG) Res Group Global_resources_used: <All>
    (CAL-RED-RG) Res Group Pingpong_interval: 3600
    (CAL-RED-RG) Res Group Pathprefix: <NULL>
    (CAL-RED-RG) Res Group system: False
    (CAL-RED-RG) Res Group Suspend_automatic_recovery: False
    (CAL-RED-RG) Res name: redmeadows
    (CAL-RED-RG:redmeadows) Res R_description: LogicalHostname resource for redmeadows
    (CAL-RED-RG:redmeadows) Res resource type: SUNW.LogicalHostname:2
    (CAL-RED-RG:redmeadows) Res type version: 2
    (CAL-RED-RG:redmeadows) Res resource group name: CAL-RED-RG
    (CAL-RED-RG:redmeadows) Res resource project name: default
    (CAL-RED-RG:redmeadows{shaw}) Res enabled: True
    (CAL-RED-RG:redmeadows{telstra}) Res enabled: True
    (CAL-RED-RG:redmeadows{shaw}) Res monitor enabled: True
    (CAL-RED-RG:redmeadows{telstra}) Res monitor enabled: True
    (CAL-RED-RG:redmeadows) Res strong dependencies: <NULL>
    (CAL-RED-RG:redmeadows) Res weak dependencies: <NULL>
    (CAL-RED-RG:redmeadows) Res restart dependencies: <NULL>
    (CAL-RED-RG:redmeadows) Res offline restart dependencies: <NULL>
    (CAL-RED-RG:redmeadows) Res property name: Retry_interval
    (CAL-RED-RG:redmeadows:Retry_interval) Res property class: standard
    (CAL-RED-RG:redmeadows:Retry_interval) Res property description: Time in which monitor attempts to restart a failed resource Retry_count times.
    (CAL-RED-RG:redmeadows:Retry_interval) Res property type: int
    (CAL-RED-RG:redmeadows:Retry_interval) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: Retry_count
    (CAL-RED-RG:redmeadows:Retry_count) Res property class: standard
    (CAL-RED-RG:redmeadows:Retry_count) Res property description: Indicates the number of times a monitor restarts the resource if it fails.
    (CAL-RED-RG:redmeadows:Retry_count) Res property type: int
    (CAL-RED-RG:redmeadows:Retry_count) Res property value: 2
    (CAL-RED-RG:redmeadows) Res property name: Thorough_probe_interval
    (CAL-RED-RG:redmeadows:Thorough_probe_interval) Res property class: standard
    (CAL-RED-RG:redmeadows:Thorough_probe_interval) Res property description: Time between invocations of a high-overhead fault probe of the resource.
    (CAL-RED-RG:redmeadows:Thorough_probe_interval) Res property type: int
    (CAL-RED-RG:redmeadows:Thorough_probe_interval) Res property value: 60
    (CAL-RED-RG:redmeadows) Res property name: Cheap_probe_interval
    (CAL-RED-RG:redmeadows:Cheap_probe_interval) Res property class: standard
    (CAL-RED-RG:redmeadows:Cheap_probe_interval) Res property description: Time between invocations of a quick fault probe of the resource.
    (CAL-RED-RG:redmeadows:Cheap_probe_interval) Res property type: int
    (CAL-RED-RG:redmeadows:Cheap_probe_interval) Res property value: 60
    (CAL-RED-RG:redmeadows) Res property name: Failover_mode
    (CAL-RED-RG:redmeadows:Failover_mode) Res property class: standard
    (CAL-RED-RG:redmeadows:Failover_mode) Res property description: Modifies recovery actions taken when the resource fails.
    (CAL-RED-RG:redmeadows:Failover_mode) Res property type: enum
    (CAL-RED-RG:redmeadows:Failover_mode) Res property value: HARD
    (CAL-RED-RG:redmeadows) Res property name: PRENET_START_TIMEOUT
    (CAL-RED-RG:redmeadows:PRENET_START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:PRENET_START_TIMEOUT) Res property description: Maximum execution time allowed for Prenet_Start method.
    (CAL-RED-RG:redmeadows:PRENET_START_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:PRENET_START_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: MONITOR_CHECK_TIMEOUT
    (CAL-RED-RG:redmeadows:MONITOR_CHECK_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:MONITOR_CHECK_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Check method.
    (CAL-RED-RG:redmeadows:MONITOR_CHECK_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:MONITOR_CHECK_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: MONITOR_STOP_TIMEOUT
    (CAL-RED-RG:redmeadows:MONITOR_STOP_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:MONITOR_STOP_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Stop method.
    (CAL-RED-RG:redmeadows:MONITOR_STOP_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:MONITOR_STOP_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: MONITOR_START_TIMEOUT
    (CAL-RED-RG:redmeadows:MONITOR_START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:MONITOR_START_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Start method.
    (CAL-RED-RG:redmeadows:MONITOR_START_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:MONITOR_START_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: UPDATE_TIMEOUT
    (CAL-RED-RG:redmeadows:UPDATE_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:UPDATE_TIMEOUT) Res property description: Maximum execution time allowed for Update method.
    (CAL-RED-RG:redmeadows:UPDATE_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:UPDATE_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: VALIDATE_TIMEOUT
    (CAL-RED-RG:redmeadows:VALIDATE_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:VALIDATE_TIMEOUT) Res property description: Maximum execution time allowed for Validate method.
    (CAL-RED-RG:redmeadows:VALIDATE_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:VALIDATE_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: STOP_TIMEOUT
    (CAL-RED-RG:redmeadows:STOP_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:STOP_TIMEOUT) Res property description: Maximum execution time allowed for Stop method.
    (CAL-RED-RG:redmeadows:STOP_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:STOP_TIMEOUT) Res property value: 300
    (CAL-RED-RG:redmeadows) Res property name: START_TIMEOUT
    (CAL-RED-RG:redmeadows:START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:redmeadows:START_TIMEOUT) Res property description: Maximum execution time allowed for Start method.
    (CAL-RED-RG:redmeadows:START_TIMEOUT) Res property type: int
    (CAL-RED-RG:redmeadows:START_TIMEOUT) Res property value: 500
    (CAL-RED-RG:redmeadows) Res property name: CheckNameService
    (CAL-RED-RG:redmeadows:CheckNameService) Res property class: extension
    (CAL-RED-RG:redmeadows:CheckNameService) Res property description: Name service check flag
    (CAL-RED-RG:redmeadows:CheckNameService) Res property pernode: False
    (CAL-RED-RG:redmeadows:CheckNameService) Res property type: boolean
    (CAL-RED-RG:redmeadows:CheckNameService) Res property value: TRUE
    (CAL-RED-RG:redmeadows) Res property name: NetIfList
    (CAL-RED-RG:redmeadows:NetIfList) Res property class: extension
    (CAL-RED-RG:redmeadows:NetIfList) Res property description: List of IPMP groups on each node
    (CAL-RED-RG:redmeadows:NetIfList) Res property pernode: False
    (CAL-RED-RG:redmeadows:NetIfList) Res property type: stringarray
    (CAL-RED-RG:redmeadows:NetIfList) Res property value: sc_ipmp0@1 sc_ipmp0@2
    (CAL-RED-RG:redmeadows) Res property name: HostnameList
    (CAL-RED-RG:redmeadows:HostnameList) Res property class: extension
    (CAL-RED-RG:redmeadows:HostnameList) Res property description: List of hostnames this resource manages
    (CAL-RED-RG:redmeadows:HostnameList) Res property pernode: False
    (CAL-RED-RG:redmeadows:HostnameList) Res property type: stringarray
    (CAL-RED-RG:redmeadows:HostnameList) Res property value: redmeadows
    (CAL-RED-RG) Res name: disk-red-rs
    (CAL-RED-RG:disk-red-rs) Res R_description: Failover data service resource for SUNW.HAStoragePlus:4
    (CAL-RED-RG:disk-red-rs) Res resource type: SUNW.HAStoragePlus:4
    (CAL-RED-RG:disk-red-rs) Res type version: 4
    (CAL-RED-RG:disk-red-rs) Res resource group name: CAL-RED-RG
    (CAL-RED-RG:disk-red-rs) Res resource project name: default
    (CAL-RED-RG:disk-red-rs{shaw}) Res enabled: True
    (CAL-RED-RG:disk-red-rs{telstra}) Res enabled: True
    (CAL-RED-RG:disk-red-rs{shaw}) Res monitor enabled: True
    (CAL-RED-RG:disk-red-rs{telstra}) Res monitor enabled: True
    (CAL-RED-RG:disk-red-rs) Res strong dependencies: <NULL>
    (CAL-RED-RG:disk-red-rs) Res weak dependencies: <NULL>
    (CAL-RED-RG:disk-red-rs) Res restart dependencies: <NULL>
    (CAL-RED-RG:disk-red-rs) Res offline restart dependencies: <NULL>
    (CAL-RED-RG:disk-red-rs) Res property name: Retry_interval
    (CAL-RED-RG:disk-red-rs:Retry_interval) Res property class: standard
    (CAL-RED-RG:disk-red-rs:Retry_interval) Res property description: Time in which monitor attempts to restart a failed resource Retry_count times.
    (CAL-RED-RG:disk-red-rs:Retry_interval) Res property type: int
    (CAL-RED-RG:disk-red-rs:Retry_interval) Res property value: 300
    (CAL-RED-RG:disk-red-rs) Res property name: Retry_count
    (CAL-RED-RG:disk-red-rs:Retry_count) Res property class: standard
    (CAL-RED-RG:disk-red-rs:Retry_count) Res property description: Indicates the number of times a monitor restarts the resource if it fails.
    (CAL-RED-RG:disk-red-rs:Retry_count) Res property type: int
    (CAL-RED-RG:disk-red-rs:Retry_count) Res property value: 2
    (CAL-RED-RG:disk-red-rs) Res property name: Failover_mode
    (CAL-RED-RG:disk-red-rs:Failover_mode) Res property class: standard
    (CAL-RED-RG:disk-red-rs:Failover_mode) Res property description: Modifies recovery actions taken when the resource fails.
    (CAL-RED-RG:disk-red-rs:Failover_mode) Res property type: enum
    (CAL-RED-RG:disk-red-rs:Failover_mode) Res property value: SOFT
    (CAL-RED-RG:disk-red-rs) Res property name: POSTNET_STOP_TIMEOUT
    (CAL-RED-RG:disk-red-rs:POSTNET_STOP_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:POSTNET_STOP_TIMEOUT) Res property description: Maximum execution time allowed for Postnet_stop method.
    (CAL-RED-RG:disk-red-rs:POSTNET_STOP_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:POSTNET_STOP_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: PRENET_START_TIMEOUT
    (CAL-RED-RG:disk-red-rs:PRENET_START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:PRENET_START_TIMEOUT) Res property description: Maximum execution time allowed for Prenet_Start method.
    (CAL-RED-RG:disk-red-rs:PRENET_START_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:PRENET_START_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: MONITOR_CHECK_TIMEOUT
    (CAL-RED-RG:disk-red-rs:MONITOR_CHECK_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:MONITOR_CHECK_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Check method.
    (CAL-RED-RG:disk-red-rs:MONITOR_CHECK_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:MONITOR_CHECK_TIMEOUT) Res property value: 90
    (CAL-RED-RG:disk-red-rs) Res property name: MONITOR_STOP_TIMEOUT
    (CAL-RED-RG:disk-red-rs:MONITOR_STOP_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:MONITOR_STOP_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Stop method.
    (CAL-RED-RG:disk-red-rs:MONITOR_STOP_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:MONITOR_STOP_TIMEOUT) Res property value: 90
    (CAL-RED-RG:disk-red-rs) Res property name: MONITOR_START_TIMEOUT
    (CAL-RED-RG:disk-red-rs:MONITOR_START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:MONITOR_START_TIMEOUT) Res property description: Maximum execution time allowed for Monitor_Start method.
    (CAL-RED-RG:disk-red-rs:MONITOR_START_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:MONITOR_START_TIMEOUT) Res property value: 90
    (CAL-RED-RG:disk-red-rs) Res property name: INIT_TIMEOUT
    (CAL-RED-RG:disk-red-rs:INIT_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:INIT_TIMEOUT) Res property description: Maximum execution time allowed for Init method.
    (CAL-RED-RG:disk-red-rs:INIT_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:INIT_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: UPDATE_TIMEOUT
    (CAL-RED-RG:disk-red-rs:UPDATE_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:UPDATE_TIMEOUT) Res property description: Maximum execution time allowed for Update method.
    (CAL-RED-RG:disk-red-rs:UPDATE_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:UPDATE_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: VALIDATE_TIMEOUT
    (CAL-RED-RG:disk-red-rs:VALIDATE_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:VALIDATE_TIMEOUT) Res property description: Maximum execution time allowed for Validate method.
    (CAL-RED-RG:disk-red-rs:VALIDATE_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:VALIDATE_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: STOP_TIMEOUT
    (CAL-RED-RG:disk-red-rs:STOP_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:STOP_TIMEOUT) Res property description: Maximum execution time allowed for Stop method.
    (CAL-RED-RG:disk-red-rs:STOP_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:STOP_TIMEOUT) Res property value: 1800
    (CAL-RED-RG:disk-red-rs) Res property name: START_TIMEOUT
    (CAL-RED-RG:disk-red-rs:START_TIMEOUT) Res property class: standard
    (CAL-RED-RG:disk-red-rs:START_TIMEOUT) Res property description: Maximum execution time allowed for Start method.
    (CAL-RED-RG:disk-red-rs:START_TIMEOUT) Res property type: int
    (CAL-RED-RG:disk-red-rs:START_TIMEOUT) Res property value: 90
    (CAL-RED-RG:disk-red-rs) Res property name: Zpools
    (CAL-RED-RG:disk-red-rs:Zpools) Res property class: extension
    (CAL-RED-RG:disk-red-rs:Zpools) Res property description: The list of zpools
    (CAL-RED-RG:disk-red-rs:Zpools) Res property pernode: False
    (CAL-RED-RG:disk-red-rs:Zpools) Res property type: stringarray
    (CAL-RED-RG:disk-red-rs:Zpools) Res property value: <NULL>
    (CAL-RED-RG:disk-red-rs) Res property name: FilesystemCheckCommand
    (CAL-RED-RG:disk-red-rs:FilesystemCheckCommand) Res property class: extension
    (CAL-RED-RG:disk-red-rs:FilesystemCheckCommand) Res property description: Command string to be executed for file system checks
    (CAL-RED-RG:disk-red-rs:FilesystemCheckCommand) Res property pernode: False
    (CAL-RED-RG:disk-red-rs:FilesystemCheckCommand) Res property type: stringarray
    (CAL-RED-RG:disk-red-rs:FilesystemCheckCommand) Res property value: <NULL>
    (CAL-RED-RG:disk-red-rs) Res property name: AffinityOn
    (CAL-RED-RG:disk-red-rs:AffinityOn) Res property class: extension
    (CAL-RED-RG:disk-red-rs:AffinityOn) Res property description: For specifying affinity switchover
    (CAL-RED-RG:disk-red-rs:AffinityOn) Res property pernode: False
    (CAL-RED-RG:disk-red-rs:AffinityOn) Res property type: boolean
    (CAL-RED-RG:disk-red-rs:AffinityOn) Res property value: TRUE
    (CAL-RED-RG:disk-red-rs) Res property name: GlobalDevicePaths
    (CAL-RED-RG:disk-red-rs:GlobalDevicePaths) Res property class: extension
    (CAL-RED-RG:disk-red-rs:GlobalDevicePaths) Res property description: The list of HA global device paths
    (CAL-RED-RG:disk-red-rs:GlobalDevicePaths) Res property pernode: False
    (CAL-RED-RG:disk-red-rs:GlobalDevicePaths) Res property type: stringarray
    (CAL-RED-RG:disk-red-rs:GlobalDevicePaths) Res property value: <NULL>
    (CAL-RED-RG:disk-red-rs) Res property name: FilesystemMountPoints
    (CAL-RED-RG:disk-red-rs:FilesystemMountPoints) Res property class: extension
    (CAL-RED-RG:disk-red-rs:FilesystemMountPoints) Res property description: The list of file system mountpoints
    (CAL-RED-RG:disk-red-rs:FilesystemMountPoints) Res property pernode: False
    (CAL-RED-RG:disk-red-rs:FilesystemMountPoints) Res property type: stringarray
    (CAL-RED-RG:disk-red-rs:FilesystemMountPoints) Res property value: /opt/shaw /cal/shaw

  • Logical Hostname

    Hi All ,
    Below is my requirement
    1) Two-node cluster
    2) To create a failover NFS resource group
    I want to create a logical hostname resource. What are the procedure and requirements for configuring the logical hostname?
    I am very confused about creating a logical hostname resource.
    Please help me create a logical hostname.
    Regards,
    R. Rajesh Kannan.

    Have you looked at the example in http://docs.sun.com/app/docs/doc/819-2979/z4000275997776?a=view ?
    I would have thought that was fairly clear. If not, which specific bit needs explaining?
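    For orientation, a minimal sketch using the SC 3.2 commands (the resource group and hostname names here are placeholders; the hostname must resolve to an unused address on the public subnet, e.g. via /etc/hosts on every node):
    # clresourcegroup create -n node1,node2 nfs-rg
    # clreslogicalhostname create -g nfs-rg nfs-lh
    # clresourcegroup online -eM nfs-rg
    The logical hostname is a floating address that follows the resource group, so it should not be one of the nodes' own physical addresses.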
    Thanks,
    Tim
    ---

  • Logical Hostname fails to create

    Hi,
    I have installed Sun Cluster 3.2 on Solaris 10 10/08 with kernel patch 138888-01.
    The cluster is a test cluster running on one node.
    I have created 2 local zones with exclusive IP.
    1. clrg create -n HOST:zone1,HOST:zone2 test-rg
    2. I have added test-ip to /etc/hosts in the 2 zones
    3. When I try to run the clrslh command to create the logical hostname, I get the following errors
    (I am using the GUI to create the logical hostname.)
    clrslh (C189917) VALIDATE on resource OP-rs resource group test-rg exited with non-zero exit status
    clrslh (C720144) validation on resource IP-rs in resource group test-rg on node HOST;ZONE failed
    clrslh (C891200) Failed to create resource IP-rs
    Can any one help to resolve this problem?
    Thanks
    Yacov

    Hi
    I have some information regarding this problem.
    The /var/adm/messages* content regarding the problem
    May 28 09:26:23 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <test-ip>, resource group <test-rg>, node <za-dr-it-sp1:uat-mozambique2>, timeout <300> seconds
    May 28 09:26:23 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 896918 daemon.notice] 10 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<uat-mozambique2.test-rg.test-ip.2>: Calling security_clnt_connect(..., host=<za-dr-it-sp1>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    May 28 09:26:24 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <test-ip>, resource group <test-rg>, time used: 0% of timeout <300, seconds>
    When creating the logical hostname:
    Command executed: /usr/cluster/bin/clreslogicalhostname create -g test-rg -p Resource_project_name= -p Failover_mode=NONE -p R_description=Failover\ network\ resource\ for\ SUNW.LogicalHostname:3 -h test-ip test-ip
    Error message:
    clreslogicalhostname: (C189917) VALIDATE on resource test-ip, resource group test-rg, exited with non-zero exit status.
    clreslogicalhostname: (C720144) Validation of resource test-ip in resource group test-rg on node za-dr-it-sp1:uat-mozambique2 failed.
    clreslogicalhostname: (C891200) Failed to create resource "test-ip".

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    It may sound strange, but I need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG, so I guess I am not understanding the question.
    Is there an application agent which demands to have a network resource in the RG? Sometimes the VALIDATE method of such agents refuses to work if there is no network resource in the RG.
    If so, tell us a bit more about the application. Is it GDS based and generated by the Sun Cluster Agent Builder? The Agent Builder has a "non Network Aware" option; if you select that while building your app, it ought to work without a network resource in the RG.
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING you to create a LogicalHostname?
    HTH,
    -ashu

  • Restrict HA-NFS to logical hostname resource

    I have a 2 node cluster running 2 HA-NFS pools (one active on each node at a time). The two nodes each have a connection to our storage network (a private network, but not the same as the cluster interconnect networks). They also each have connections to the public network.
    I have everything working fine from a clustering standpoint. However, when I try to connect to the logical hostname of either of the NFS resource groups from FreeBSD, things fail. When I did a snoop, I saw the FreeBSD box talking to the address defined by my logical hostname for that resource group, and then all of a sudden the cluster node responding with its non-cluster address from that interface. As a result, the FreeBSD machine apparently rejects the packet, since it came from a different source (despite it being UDP). Is it possible to restrict the NFS traffic to just the logical hostname for the resource group?
    For example:
    dfshares <clusternode> would return nothing, as the cluster node has no shares specifically listed.
    But
    dfshares <resourcegroup-lh-rs> would return the shares for that resource group.
    Basically, can we bind SUNW.HA-NFS to a specific interface/address like we can with samba?

    Hi,
    1. Have a look at http://blogs.sun.com/SC/entry/why_a_logical_ip_is , which should explain part of the problem. I doubt that you can configure the NFS server to listen on a specific interface or address only.
    2. I am not sure why your NFS client refuses the response from the NFS server. Or is it some firewall on your system that does this?
    3. If you switch NFS to TCP, this "problem" should not occur.
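    For example, from the FreeBSD client a TCP mount might look something like this (the share path and mount point are placeholders):
    # mount -t nfs -o tcp resourcegroup-lh:/export/share /mnt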
    Regards
    Hartmut

  • Difference between shared address and logical hostname

    Hello,
    For Cluster 3.2, could someone explain the difference between "clreslogicalhostname create" and "clressharedaddress create"? I thought the former was to share a logical hostname (depending on DNS to resolve it to the IP address) between nodes, and the latter to share an IP address (with no reliance on address resolution). But examples of the latter use a hostname and not an IP address, which leads to my confusion.
    Thank you for clarifying this!

    You've got things a little bit confused.
    A logical hostname resource is an IP address (and associated hostname) that is placed in a fail-over resource group and used by the application(s) in that resource group. Such a resource group is resident on one node only and thus the IP address only exists, i.e. is plumbed in, on that cluster node too.
    An example of such a service would be, say, an NFS service, with the IP address being the one you mount shares from. Alternatively, it could be an HA-Oracle database with the IP address being that of the listener that clients connect to.
    A shared address resource is an IP address (and associated hostname) that is placed in a fail-over resource group but is used by applications in a scalable resource group. The shared address IP/hostname becomes a global IP address, such that it is physically plumbed in, e.g. on bge0:1, on the cluster node where the fail-over resource group is mastered and also plumbed in on the loopback interface, e.g. lo:2, of the cluster nodes that run the scalable services that depend on it.
    An example of such a service would be a scalable web server, e.g. Apache or Sun Java Web Server, that is run on multiple cluster nodes at once and has requests load-balanced to them directly by the cluster. The scalable IP address would provide a single address that clients could use, yet the requests can be serviced by one of several nodes.
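    Both commands therefore take a hostname that must resolve to an IP address (via /etc/hosts or the name service), which is why the examples look so similar. As a rough sketch with placeholder names:
    # clreslogicalhostname create -g failover-rg lh-host      (failover address for the group's applications)
    # clressharedaddress create -g sa-rg sa-host              (global address used by scalable services)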
    Hope that helps,
    Tim
    ---
