Zone Cluster: Creating a logical host resource fails
Hi All,
I am trying to create a logical host resource with two logical addresses as part of the resource, but the command is failing. Here is what I run to create the resource:
clrslh create -Z pgsql-production -g pgsql-rg -h pgprddb-prod,pgprddb-voip pgsql-halh-rs
And I am presented with this failure:
clrslh: specified hostname(s) cannot be hosted by any adapter on bfieprddb01
clrslh: Hostname(s): pgprddb-prod pgprddb-voip
I have pgprddb-prod and pgprddb-voip defined in the /etc/hosts files on the two global cluster nodes and also within the two zones in the zone cluster.
I have also modified the zone cluster configuration as described in the following thread:
http://forums.sun.com/thread.jspa?threadID=5420128
This is what I have done to the zone cluster:
clzc configure pgsql-production
clzc:pgsql-production> add net
clzc:pgsql-production:net> set address=pgprddb-prod
clzc:pgsql-production:net> end
clzc:pgsql-production> add net
clzc:pgsql-production:net> set address=pgprddb-voip
clzc:pgsql-production:net> end
clzc:pgsql-production> verify
clzc:pgsql-production> commit
clzc:pgsql-production> quit
Am I missing something here? Help, please :)
I did read a blog post mentioning that the logical host resource is not supported with exclusive-ip zones at the moment, but I have checked my configuration and I am running with ip-type=shared.
Any suggestions would be greatly appreciated.
Thanks
I managed to fix the issue, I got the hint from the following thread:
http://72.5.124.102/thread.jspa?threadID=5432115&tstart=15
It turns out that you can only put more than one logical hostname in a single logical host resource if they all reside on the same subnet. I therefore had to create one logical host resource per subnet by doing the following in the global zone:
clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-prod pgsql-halh-prod-rs
clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-voip pgsql-halh-voip-rs
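The subnet rule can be checked mechanically: two hostnames may share one logical host resource only when IP AND netmask yields the same network address for both. A minimal runnable sketch; the addresses and the /24 mask below are made up for illustration, not the real pgprddb-prod/pgprddb-voip values:

```shell
# network_of prints the network address for a dotted-quad IP and netmask.
network_of() {
    # $1 = IP, $2 = netmask; split both into octets, then AND octet-wise
    set -- $(echo "$1" | tr '.' ' ') $(echo "$2" | tr '.' ' ')
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

net_prod=$(network_of 192.168.10.21 255.255.255.0)   # e.g. pgprddb-prod
net_voip=$(network_of 192.168.20.21 255.255.255.0)   # e.g. pgprddb-voip

if [ "$net_prod" = "$net_voip" ]; then
    echo "same subnet: one clrslh resource can hold both hostnames"
else
    echo "different subnets: create one clrslh resource per subnet"
fi
```

With real addresses on different subnets this prints the second message, which is exactly the situation that forced the two separate clrslh resources above.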
Thanks for reading :)
Similar Messages
-
Logical host resource in SC3.2 missing IPMP
I was under the impression that when creating a logical host resource in Solaris Cluster 3.2, that it will read the hosts and hostname.* files to get the IPMP info and do this automatically for you. I am not seeing this behavior and am hoping someone can shed some light on this for me. Here is the relevant data, any help is greatly appreciated. If it matters, this is Solaris Cluster 3.2 on Solaris 10, running in a 25k domain.
# grep cluster1vip hosts
10.230.108.28 cluster1vip cluster1vip.example.com
10.230.108.29 cluster1vip-ce1 #ipmp_test_cluster1vip1
10.230.108.30 cluster1vip-ce5 #ipmp_test_cluster1vip2
# cat hostname.ce1
cluster1vip netmask + broadcast + group ipmp_sc1 up \
addif cluster1vip-ce1 deprecated -failover netmask + up
# cat hostname.ce5
cluster1vip-ce5 deprecated -failover netmask + broadcast + group ipmp_sc1 up
When I bring the resource online, these are the settings that get applied:
ce6:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
inet 10.230.108.28 netmask ffffff80 broadcast 10.230.108.127
Again, any help or guidance is greatly appreciated.
For probe-based IPMP the following has always worked for me:
- IPMP interfaces of same media type
- SPARC needs local-mac-address? set to true in the OBP
- Supported interface (if using the probe method: http://docs.sun.com/app/docs/doc/816-4554/mpoverview?a=view)
/etc/hosts
192.168.1.1 hawkeye
192.168.1.2 ipmp-test1
192.168.1.3 ipmp-test2
For the cluster node 'hawkeye'
/etc/hostname.ce0 before the IPMP change:
hawkeye group sc_ipmp0 -failover
/etc/hostname.ce0 after the IPMP change:
hawkeye group sc_ipmp0 failover up addif ipmp-test1 -failover deprecated up
/etc/hostname.ce1:
group sc_ipmp0 failover up addif ipmp-test2 -failover deprecated up
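One cheap sanity check before rebooting: every name used in the /etc/hostname.* files must resolve in /etc/hosts, or logical-host validation will fail later. A runnable sketch using the sample names from above against a mocked hosts file (point it at the real /etc/hosts in practice):

```shell
# Mock hosts file with the sample entries from this thread.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.1.1 hawkeye
192.168.1.2 ipmp-test1
192.168.1.3 ipmp-test2
EOF

missing=0
for name in hawkeye ipmp-test1 ipmp-test2; do
    # awk scans every field after the address column for an exact match
    if awk -v n="$name" '{ for (i = 2; i <= NF; i++) if ($i == n) found = 1 } END { exit !found }' "$hosts_file"; then
        echo "$name: ok"
    else
        echo "$name: MISSING from hosts file"
        missing=1
    fi
done
rm -f "$hosts_file"
```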
Reboot the cluster node and you should be good to go. -
Similar logical host resource configured on 2 different resource groups
hi,
I had an issue with logical host setup as follows.
Background
We had a failover resource group RGA which is running a critical Oracle instance and listener process.
It has a logical host resource ora-lh-rs configured with logical host pointing to VirtualHostA.
I need to create another failover resource group called RGB with another Oracle instance.
Now at present this Oracle instance (Not under cluster control yet) is listening to the same logical host - VirtualHostA as RGA.
When I create this new resource group RGB, I need to configure it to share the same logical host as RGA.
The problem is: if I add the logical host resource ora-lh-rs with VirtualHostA to RGB, I foresee an IP address conflict when RGB fails over to the other cluster node (assuming Sun Cluster does not simply block me from adding the same logical host resource to RGB in the first place).
Question
Is there any other way to overcome this issue other than to configure the second Oracle instance to listen on a different logical IP?
To add, RGB is a lower-priority Oracle instance and will always need to be on the same node as RGA.
Any suggestions are welcome. My apologies if this question sounds stupid; I do not have a UAT environment to test on.
Thanks
ldd76
Hi Tim,
Thanks very much for your kind advice. I must say that I have very little experience with SUN Cluster so I beg your pardon if my concept is too superficial.
The reason I cannot put both Oracle instances in the same RG is that this was my boss's initial proposal. I will speak to her again about the LOG_ONLY option.
But can I confirm that the commands look like this after I add the resources to RGA:
scrgadm -cj rgb-ora-rs -y Failover_mode=LOG_ONLY (Oracle server)
scrgadm -cj rgb-lsn-rs -y Failover_mode=LOG_ONLY (Oracle Listener)
scrgadm -cj rgb-hasp-rs -y Failover_mode=LOG_ONLY (HA Storage Plus)
As for the logical host question: it is legacy that both Oracle instances listen on the same logical IP (the new, less critical instance is just not under cluster control yet).
My gut feeling for the reason is that the less important instance must always run on the node where RGA is running, since it is used for data warehousing with data replicated from the more critical instance; anywhere else it would be meaningless. That is my guess as to why only one logical IP is used.
If the direction is still to use RG_affinities, then if I tweak the following steps a bit will it work?
1) Create RGB.
2) Activate RGB - scswitch -Z -g RGB
3) Offline RGB - scswitch -F -g RGB
4) Failover RGA to the other node
5) Failover RGB to the other node
6) Offline RGB on the other node
7) Failover RGA back to the original node
8) Failover RGB back to the original node
Thanks. -
Solaris Cluster - two machines - logical host
Good morning!
I am a complete Solaris Cluster dummy, but... I need to create a cluster and install an application:
I have two V440, with Solaris 10;
I need to put the two machines in the cluster;
I have CE0 of each of the machines plugged into the network;
I have CE1 and CE2 of each machine connected together via a crossover cable;
According to the documentation "Oracle Solaris Cluster HA for Alliance Access" there are prerequisites (http://docs.oracle.com/cd/E19680-01/html/821-1547/ciajejfa.html) such as creating an HAStoragePlus resource and a logical host resource group;
Could anyone give me some tips on how to create this cluster and the prerequisites?
Thanks!
Edited by: user13045950 on 05/12/2012 05:04
Edited by: user13045950 on 05/12/2012 05:06
Hi,
a good source of information for the beginner is: http://www.oracle.com/technetwork/articles/servers-storage-admin/how-to-install-two-node-cluster-166763.pdf
To create a highly available logical IP address just do
clrg create <name-for-resource-group>
clrslh create -g <name-for-resource-group> <name-of-ip-address> # This IP address should be available in /etc/hosts on both cluster nodes.
clrg online -M <name-for-resource-group>
Regards
Hartmut -
Cluster changing hostname & Logical host
Hello, I have Sun Cluster 3.1 update 4.
The cluster is 2 x V440 + 1 x V240, installed on Solaris 8.
Has anyone actually succeeded in changing the hostnames of all the nodes and the logical host on a Sun Cluster system like the above?
I saw a procedure that is not really supported:
1. Reboot cluster nodes into non-cluster node (reboot -- -x)
2. Change the hostname of the system (nodenames, hosts etc)
3. Change hostname on all nodes within the files under /etc/cluster/ccr
4. Regenerate the checksums for each changed file using ccradm -i /etc/cluster/ccr/FILENAME -o
5. Reboot every cluster node into the cluster
Does it work?
Thanks & Regards
So if I understand you correctly, you have two metasets already created and mounted. If so, this is a fairly tricky process (outlined from memory):
1. Backup your data
2. Shut down the RGs using scswitch -F -g <RGnames>, make the RGs unmanaged
3. Unmount the file systems
4. Deconstruct the metasets and mediators
5. Shutdown the cluster and boot -x
6. Change the hostnames in /etc/inet/hosts, etc
7. Change the CCR and re-checksum it
8. Reboot the cluster into cluster mode
9. Re-construct metasets and mediators with new host names
10. scswitch -Z -g <RGs>
If you recreate your metasets in the same way as they were originally created and the state replicas haven't changed in size, then the data should be intact.
Note - I have not tried this process in a long time. Remember also that changing the CCR as described previously is officially unsupported (partly because of the risks involved).
Regards,
Tim
--- -
Creating Logical Hostname Resource - Resource contains invalid hostnames
I am desperately trying to create a shared ip address that my two-node zone cluster will utilize for a failover application. I have added the hostname/ip address pair to /etc/hosts and /etc/inet/ipnodes on both global nodes as well as within each zone cluster node. I then attempt to run the following:
# clrslh create -Z test -g test-rg -h foo.bar.com test-hostname-rs
which yields the following:
clrslh: host1.example.com:test - The hostname foo.bar.com is not authorized to be used in this zone cluster test.
clrslh: host1.example.com:test - Resource contains invalid hostnames.
clrslh: (C189917) VALIDATE on resource test-hostname-rs, resource group test-rg, exited with non-zero exit status.
clrslh: (C720144) Validation of resource test-hostname-rs in resource group test-rg on node host1 failed.
clrslh: (C891200) Failed to create resource "test:test-hostname-rs".
I have searched high and low. The only thing I found was the following:
http://docs.sun.com/app/docs/doc/820-4681/m6069?a=view
It states: "Use the clzonecluster(1M) command to configure the hostnames to be used for this zone cluster and then rerun this command to create the resource."
I do not understand what it is saying. My guess is that I need to apply a hostname to the zone cluster. Granted, I don't know how to accomplish this. Help?
The procedure to authorize the hostnames for the zone cluster is below:
clzc configure <zonecluster> (this brings you into the zone cluster scope, like below)
clzc:<zonecluster>> add net
clzc:<zonecluster>:net> set address=<hostname>
clzc:<zonecluster>:net> end
clzc:<zonecluster>> commit
clzc:<zonecluster>> info (to verify the hostname)
After this operation, run the clrslh command to create the logical host resource
and the command should pass.
Thanks,
Prasanna Kunisetty -
IS42SP2 creating a Match Review Configuration fails
Hi,
I'm trying to create an MR configuration in my IS42SP2 installation. But I'm getting the following error:
I'm using a MS SQL Server 2008 database. The duplicate and status tables (TDUPLICATE, TSTATUS) have all necessary fields.
The very same tables work fine with my IS42SP0 Installation.
The error seems to occur when IS tries to create the TDUPLICATES_ACT table.
Probably the DDL statement is not created properly.
Is there any log file where I could have a look at?
Has anyone experienced the same?
Thanks for your Support!
Message was edited by: Martin Bernhardt
Meanwhile I checked the SQL Server log. This is how the DDL Statement looks:
CREATE TABLE TDUPLICATES_ACT (
JOBID null NOT NULL,
SOURCE null(100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
CUSTOMERID null(100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
GROUP_NUMBER null(10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
ORIG_GROUP_NUMBER null(10) COLLATE SQL_Latin1_General_CP1_CI_AS ,
GROUP_RANK null(1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
ORIG_GROUP_RANK null(1) COLLATE SQL_Latin1_General_CP1_CI_AS ,
IS_FINISHED_ CHAR(1) ,
CONSTRAINT PK_TDUPLICATES_ACT PRIMARY KEY (JOBID,SOURCE,CUSTOMERID)
It seems that a column type is not initialized. The Match Review is created, though, and I can run it. When I open the task in the worklist
to list the duplicate groups, I'm getting this error:
Message was edited by: Martin Bernhardt -
Creating logical host on zone cluster causing SEG fault
As noted in previous questions, I've got a two node cluster. I am now creating zone clusters on these nodes. I've got two problems that seem to be showing up.
I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
I am now trying to configure the resource groups and resources on additional zone clusters.
In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
This error appears to be happening on the other node, i.e. not the one that I'm building from.
Anyone seen anything like this? Any thoughts on where I should go with it?
Thanks.
Hi,
"In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running."
Look at the stack from your core dump and see whether it matches this bug:
6763940 clzc dumped core after zones were installed
As far as I know, the above bug is harmless and no functionality should be impacted. It is already fixed in a later release.
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
The above message is not enough to figure out what's wrong. Please check the following:
1) Check /var/adm/messages on the nodes, look at what was logged around the time the above message was printed, and see whether that gives more clues.
2) Also see whether there is a core dump associated with the above message; that might provide more information.
If you need more help, please provide the output for the above.
Thanks,
Prasanna Kunisetty -
Problem in creating logical hostname resource
Hi all,
I have a cluster configured on 10.112.10.206 and 10.112.10.208
i have a resource group testrg
I want to create a logical hostname resource testhost
I have given a ip 10.112.10.245 in /etc/hosts file for testhost
I am creating the logical hostname resource with the command below:
clrslh create -g testrg testhost
I am doing this on 206
As soon as I do, the other node (208) becomes unreachable: I am not able to ping 208, though ssh from 206 to 208 still works.
I am also not able to ping 10.112.10.245
Please help.
So, the physical IP addresses of your two nodes are:
10.112.10.206 node1
10.112.10.208 node2
And your logical host is:
10.112.10.245 testhost
Have you got a netmask set for this network? Is it 255.255.255.0, and is it set in /etc/netmasks?
It's most likely that this is the cause of the problem if you have different netmasks on the interfaces.
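A quick way to verify this: compute the network each address falls in under the assumed netmask; all three must match. A runnable sketch using the addresses quoted in this thread, with 255.255.255.0 assumed:

```shell
# net prints the network address for a dotted-quad IP and netmask.
net() {
    # $1 = IP, $2 = netmask; split into octets and AND them pairwise
    set -- $(echo "$1" | tr '.' ' ') $(echo "$2" | tr '.' ' ')
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

mask=255.255.255.0
for ip in 10.112.10.206 10.112.10.208 10.112.10.245; do
    echo "$ip -> network $(net "$ip" "$mask")"
done
# Under a /24 all three land in 10.112.10.0, so the logical host is fine;
# a mismatched netmask on an interface puts it in a different network,
# which can produce exactly the unreachability described above.
```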
Tim
--- -
Errors when adding resources to rg in zone cluster
Hi guys,
I managed to create and bring up a zone cluster, create an RG and add an HAStoragePlus resource (zpool), but I get errors when I try to add a logical host (lh) resource. Here's the output I find relevant:
root@node1:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 24.6G 10.0G 14.6G 40% 1.00x ONLINE -
zclusterpool 187M 98.5K 187M 0% 1.00x ONLINE -
root@node1:~# clzonecluster show ztestcluster
=== Zone Clusters ===
Zone Cluster Name: ztestcluster
zonename: ztestcluster
zonepath: /zcluster/ztestcluster
autoboot: TRUE
brand: solaris
bootargs: <NULL>
pool: <NULL>
limitpriv: <NULL>
scheduling-class: <NULL>
ip-type: shared
enable_priv_net: TRUE
resource_security: SECURE
--- Solaris Resources for ztestcluster ---
Resource Name: net
address: 192.168.10.55
physical: auto
Resource Name: dataset
name: zclusterpool
--- Zone Cluster Nodes for ztestcluster ---
Node Name: node2
physical-host: node2
hostname: zclnode2
--- Solaris Resources for node2 ---
Node Name: node1
physical-host: node1
hostname: zclnode1
--- Solaris Resources for node1 ---
Now I want to add a logical host (zclusterip - 192.168.10.55) to a resource group named z-test-rg.
root@zclnode2:~# cat /etc/hosts
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
# Internet host table
::1 localhost
127.0.0.1 localhost loghost
#zone cluster
192.168.10.51 zclnode1
192.168.10.52 zclnode2
192.168.10.55 zclusterip
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Online
zclnode2 No Offline
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterpool-rs zclnode1 Online Online
zclnode2 Offline Offline
root@zclnode2:~# clrg show
=== Resource Groups and Resources ===
Resource Group: z-test-rg
RG_description: <NULL>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: zclnode1 zclnode2
--- Resources for Group z-test-rg ---
Resource: zclusterpool-rs
Type: SUNW.HAStoragePlus:10
Type_version: 10
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
The error, for lh resource:
root@zclnode2:~# clrslh create -g z-test-rg -h zclusterip zclusterip-rs
clrslh: No IPMP group on zclnode1 matches prefix and IP version for zclusterip
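For what it's worth, the check behind this message is roughly "does some IPMP group on the node already host an address in the logical host's subnet?". A minimal runnable sketch; the group/address pairs are hypothetical, loosely mirroring the ipadm output later in this thread, and only prefix lengths that are multiples of 8 are handled:

```shell
lh=192.168.10.55
prefix=24

# same_net: true when two IPs share the first prefix/8 octets.
same_net() {
    # $1, $2 = dotted-quad IPs; $3 = prefix length (multiple of 8 only)
    fields=$(($3 / 8))
    [ "$(echo "$1" | cut -d. -f1-$fields)" = "$(echo "$2" | cut -d. -f1-$fields)" ]
}

match=none
for pair in sc_ipmp0:192.168.1.3 zclusteripmp0:192.168.10.52; do
    group=${pair%%:*}
    addr=${pair#*:}
    if same_net "$lh" "$addr" "$prefix"; then
        match=$group
    fi
done
echo "IPMP group matching $lh/$prefix: $match"
```

If no group's address shares the logical host's subnet, the validation fails with the error above; the real check also considers the IP version.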
Any ideas?
Much appreciated!
Hello,
First of all, I detected a mistake in my previous config: instead of an IPMP group, a "plain" NIC had been added to the cluster. I rectified that (I created the IPMP group zclusteripmp0 out of net11):
root@node1:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? static ok -- 172.16.3.66/26
clprivnet0/? static ok -- 172.16.2.2/24
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
lo0/zoneadmd-v4 static ok -- 127.0.0.1/8
lo0/zoneadmd-v6 static ok -- ::1/128
net0 ip ok sc_ipmp0 --
net1 ip ok sc_ipmp1 --
net2 ip ok -- --
net2/? static ok -- 172.16.0.66/26
net3 ip ok -- --
net3/? static ok -- 172.16.0.130/26
net4 ip ok sc_ipmp2 --
net5 ip ok sc_ipmp2 --
net11 ip ok zclusteripmp0 --
sc_ipmp0 ipmp ok -- --
sc_ipmp0/out dhcp ok -- 192.168.1.3/24
sc_ipmp1 ipmp ok -- --
sc_ipmp1/static1 static ok -- 192.168.10.11/24
sc_ipmp2 ipmp ok -- --
sc_ipmp2/static1 static ok -- 192.168.30.11/24
sc_ipmp2/static2 static ok -- 192.168.30.12/24
zclusteripmp0 ipmp ok -- --
zclusteripmp0/zoneadmd-v4 static ok -- 192.168.10.51/24
root@node1:~# clzonecluster export ztestcluster
create -b
set zonepath=/zcluster/ztestcluster
set brand=solaris
set autoboot=true
set enable_priv_net=true
set ip-type=shared
add net
set address=192.168.10.55
set physical=auto
end
add dataset
set name=zclusterpool
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=node2
set hostname=zclnode2
add net
set address=192.168.10.52
set physical=zclusteripmp0
end
end
add node
set physical-host=node1
set hostname=zclnode1
add net
set address=192.168.10.51
set physical=zclusteripmp0
end
end
And then I tried again to add the lh, but got the same error:
root@node2:~# zlogin -C ztestcluster
[Connected to zone 'ztestcluster' console]
zclnode2 console login: root
Password:
Last login: Mon Jan 19 15:28:28 on console
Jan 19 19:17:24 zclnode2 login: ROOT LOGIN /dev/console
Oracle Corporation SunOS 5.11 11.2 June 2014
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Offline
zclnode2 No Online
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterpool-rs zclnode1 Offline Offline
zclnode2 Online Online
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# clreslogicalhostname create -g z-test-rg -h zclusterip zclusterip-rs
clreslogicalhostname: No IPMP group on zclnode1 matches prefix and IP version for zclusterip
root@zclnode2:~#
To answer your first question, yes - all global nodes and zone cluster nodes have entries for zclusterip:
root@zclnode2:~# cat /etc/hosts
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
# Internet host table
::1 localhost
127.0.0.1 localhost loghost
#zone cluster
192.168.10.51 zclnode1
192.168.10.52 zclnode2
192.168.10.55 zclusterip
root@zclnode2:~# ping zclnode1
zclnode1 is alive
When I tried the command you mentioned, it first gave me an error (there was a space between the interfaces); then I changed the RG to fit mine (z-test-rg) and it (partially) worked:
root@zclnode2:~# clrs create -g z-test-rg -t LogicalHostname -p Netiflist=sc_ipmp0@1,sc_ipmp0@2 -p Hostnamelist=zclusterip zclusterip-rs
root@zclnode2:~# clrg show
=== Resource Groups and Resources ===
Resource Group: z-test-rg
RG_description: <NULL>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: zclnode1 zclnode2
--- Resources for Group z-test-rg ---
Resource: zclusterpool-rs
Type: SUNW.HAStoragePlus:10
Type_version: 10
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
Resource: zclusterip-rs
Type: SUNW.LogicalHostname:5
Type_version: 5
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Offline
zclnode2 No Online
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterip-rs zclnode1 Offline Offline
zclnode2 Online Online - LogicalHostname online.
zclusterpool-rs zclnode1 Offline Offline
zclnode2 Online Online
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
sc_ipmp0 ipmp ok -- --
sc_ipmp0/? inherited ok -- 192.168.10.55/24
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# ping zclusterip
zclusterip is alive
root@zclnode2:~# clrg switch -n zclnode1 z-test-rg
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Online
zclnode2 No Offline
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterip-rs zclnode1 Online Online - LogicalHostname online.
zclnode2 Offline Offline - LogicalHostname offline.
zclusterpool-rs zclnode1 Online Online
zclnode2 Offline Offline
root@zclnode2:~# ping zclusterip
no answer from zclusterip
root@zclnode2:~# ping zclusterip
no answer from zclusterip
root@zclnode2:~#
So, the lh was added and the rg can switch over to the other node, but zclusterip is pingable only from the zone-cluster node that holds the rg; I cannot ping zclusterip from the zone-cluster node that does not hold the rg, nor from any global cluster node (node1, node2)... -
The hostname test01 is not authorized to be used in this zone cluster
Hi,
I am having problems registering a LogicalHostname in a zone cluster.
Here my steps:
- create the ZoneCluster
# clzc configure test01
clzc:test01> info
zonename: test01
zonepath: /export/zones/test01
autoboot: true
brand: cluster
bootargs:
pool: test
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
sysid:
name_service not specified
nfs4_domain: dynamic
security_policy: NONE
system_locale: en_US.UTF-8
terminal: vt100
timezone: Europe/Berlin
node:
physical-host: farm01a
hostname: test01a
net:
address: 172.19.115.232
physical: e1000g0
node:
physical-host: farm01b
hostname: test01b
net:
address: 172.19.115.233
physical: e1000g0
- create a RG
# clrg create -Z test01 test01-rg
- create Logicalhostname (with error)
# clrslh create -g test01-rg -Z test01 -h test01 test01-ip
clrslh: farm01b:test01 - The hostname test01 is not authorized to be used in this zone cluster test01.
clrslh: farm01b:test01 - Resource contains invalid hostnames.
clrslh: (C189917) VALIDATE on resource test01-ip, resource group test01-rg, exited with non-zero exit status.
clrslh: (C720144) Validation of resource test01-ip in resource group test01-rg on node test01b failed.
clrslh: (C891200) Failed to create resource "test01:test01-ip".
Here are the entries in /etc/hosts from farm01a and farm01b:
172.19.115.119 farm01a # Cluster Node
172.19.115.120 farm01b loghost
172.19.115.232 test01a
172.19.115.233 test01b
172.19.115.252 test01
Hope somebody could help.
regards,
Sascha
Edited by: sbrech on 13.05.2009 11:44
When I scanned my last example of a zone cluster, I spotted that I had added my logical host to the zone cluster's configuration:
create -b
set zonepath=/zones/cluster
set brand=cluster
set autoboot=true
set enable_priv_net=true
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=applh
set physical=auto
end
add dataset
set name=applpool
end
add node
set physical-host=deulwork80
set hostname=deulclu
add net
set address=172.16.30.81
set physical=e1000g0
end
end
add sysid
set root_password=nMKsicI310jEM
set name_service=""
set nfs4_domain=dynamic
set security_policy=NONE
set system_locale=C
set terminal=vt100
set timezone=Europe/Berlin
end
I am referring to:
add net
set address=applh
set physical=auto
end
So as far as I can see, this is missing from your configuration. Sorry for leading you the wrong way.
Detlef -
2 Logical Host, 2 Web Servers, Big Problem?
I am setting up a sun HA cluster using 2 E4500 servers. I have created 2 logical hosts, each one needs to host a Netscape iPlanet 4.0 web server sitting at port 80. Each logical host is serving up web applications for different clients.
If I need to fail over one of the logical hosts so that they are both running on the same system, the newly imported instance of the web server will fail because port 80 is already in use by the logical host that is on that physical host.
At first this seemed totally wrong. Each logical host should be able to run applications on any port they need to. Then someone who has a lot more time on Solaris told me that this was not the case, and each logical host had to steer clear of using the same ports as other logical hosts in the same cluster.
Can someone clue me into what is reality?
Any good documentation that tells how to set this stuff up?
Thanks!
Bruce
Hi,
The best way to resolve this would be to try it out.
Instead of going through an entire cluster install/configuration
process, you may want to try setting up two different web servers
on a single node. You may want to set up a virtual interface (like
le0:1, hme0:1, etc.) for this. You could then try connecting to
individual web servers on port 80. If this works, then the two
webserver/two node cluster implementation should also work.
Hope this helps.
Thanks,
Gopinath. -
Zone Cluster - oracle_server resource create
I am having a problem trying to create an oracle_server resource for my zone cluster.
I have a 2-node zone cluster that utilizes a shared storage zpool to house an Oracle installation and its database files. This is a test system so don't worry about the Oracle setup. I obviously wouldn't put the application and database files on the same storage.
When I run the following command from a global zone member:
clrs create -Z test -g test-rg -t SUNW.oracle_server -p Connect_string=user/password -p ORACLE_SID=SNOW -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p Alert_log_file=/u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump -p Restart_type=RESOURCE_RESTART -p resource_dependencies=test-storage-rs test-oracle_server-rs
I get the following errors:
clrs: taloraan.admin.uncw.edu:test - Validation failed. ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
clrs: taloraan.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
clrs: taloraan.admin.uncw.edu:test - PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/initSNOW.ora nor server PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/spfileSNOW.ora exists
clrs: taloraan.admin.uncw.edu:test - This resource depends on a HAStoragePlus resouce that is not online on this node. Ignoring validation errors.
clrs: tatooine.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
clrs: (C189917) VALIDATE on resource test-oracle_server-rs, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource test-oracle_server-rs in resource group test-rg on node tatooine failed.
clrs: (C891200) Failed to create resource "test:test-oracle_server-rs".
So obviously, the clrs command cannot find the files (which are located on my shared storage). I am guessing I need to point the command at a global mountpoint.
Regardless, can anyone shed some light on how I make this happen?
I am referencing http://docs.sun.com/app/docs/doc/821-0274/chdiggib?a=view
The section that reads "Example 3 Registering Sun Cluster HA for Oracle to Run in a Zone Cluster"
The storage is mounted, but it only shows up inside the active node; you can't "see" it from a global cluster member. I am now trying to add the listener but am hitting a dead end.
# clrs create -Z test -g test-rg -t SUNW.oracle_listener -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p LISTENER_NAME=test-ora-lsnr test-oracle_listener
clrs: taloraan.admin.uncw.edu:test - ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
clrs: (C189917) VALIDATE on resource test-oracle_listener, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource test-oracle_listener in resource group test-rg on node taloraan failed.
clrs: (C891200) Failed to create resource "test:test-oracle_listener".
Is LISTENER_NAME something assigned to the listener by Oracle, or is it simply something of my choosing?
Also, how can I see a detailed status listing of the zone cluster? When I execute "cluster status", it doesn't give a verbose listing of my zone cluster. I have tried "clzc status test" but am not given much output either. I would like to see output that lists all of my resources. -
Error when creating zone cluster
Hello,
I have the following setup: Solaris 11.2 x86, cluster 4.2. I have already configured the cluster and it's up and running. I am trying to create a zone cluster, but getting the following error:
>>> Result of the Creation for the Zone cluster(ztestcluster) <<<
The zone cluster is being configured with the following configuration
/usr/cluster/bin/clzonecluster configure ztestcluster
create
set zonepath=/zclusterpool/znode
set brand=cluster
set ip-type=shared
set enable_priv_net=true
add sysid
set root_password=********
end
add node
set physical-host=node2
set hostname=zclnode2
add net
set address=192.168.10.52
set physical=net1
end
end
add node
set physical-host=node1
set hostname=zclnode1
add net
set address=192.168.10.51
set physical=net1
end
end
add net
set address=192.168.10.55
end
java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1234)
at java.util.regex.Matcher.reset(Matcher.java:308)
at java.util.regex.Matcher.<init>(Matcher.java:228)
at java.util.regex.Pattern.matcher(Pattern.java:1088)
at com.sun.cluster.zcwizards.zonecluster.ZCWizardResultPanel.consoleInteraction(ZCWizardResultPanel.java:181)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.cliConsoleInteraction(IteratorLayout.java:563)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.displayPanel(IteratorLayout.java:623)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.run(IteratorLayout.java:607)
at java.lang.Thread.run(Thread.java:745)
ERROR: System configuration error
As a result of a change to the system configuration, a resource that this
wizard will create is now invalid. Review any changes that were made to the
system after you started this wizard to determine which changes might have
caused this error. Then quit and restart this wizard.
Press RETURN to close the wizard
No errors in /var/adm/messages.
Any ideas?
Thank you!
I must be making some obvious, stupid mistake, because I still get that "not enough space" error:
root@node1:~# clzonecluster show ztestcluster
=== Zone Clusters ===
Zone Cluster Name: ztestcluster
zonename: ztestcluster
zonepath: /zcluster/znode
autoboot: TRUE
brand: solaris
bootargs: <NULL>
pool: <NULL>
limitpriv: <NULL>
scheduling-class: <NULL>
ip-type: shared
enable_priv_net: TRUE
resource_security: SECURE
--- Solaris Resources for ztestcluster ---
Resource Name: net
address: 192.168.10.55
physical: auto
--- Zone Cluster Nodes for ztestcluster ---
Node Name: node2
physical-host: node2
hostname: zclnode2
--- Solaris Resources for node2 ---
Node Name: node1
physical-host: node1
hostname: zclnode1
--- Solaris Resources for node1 ---
root@node1:~# clzonecluster install ztestcluster
Waiting for zone install commands to complete on all the nodes of the zone cluster "ztestcluster"...
clzonecluster: (C801046) Command execution failed on node node2. Please refer to the console for more information
clzonecluster: (C801046) Command execution failed on node node1. Please refer to the console for more information
But I have enough FS space. I increased the virtual HDD to 25 GB on each node. After the global cluster installation, I still have 16 GB free on each node. During the install I constantly check the free space, and it should be enough (only about 500 MB is consumed by downloaded packages, which leaves about 15.5 GB free). And every time, the installation fails at the "apply-sysconfig" checkpoint...
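When chasing this kind of "not enough space" failure, it can help to check the pool that backs the zonepath on each node, and then the zone install logs for the real error behind the failed checkpoint (the paths below are typical Solaris 11 locations, not taken from this thread, and can vary by release):

```
# Free space in the pool backing the zonepath, run on each node
zfs list -o name,used,avail -r zclusterpool
df -h /zclusterpool

# Recent zone administration logs
ls -lt /var/log/zones/
```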
Cluster Resource Failed in Clustered Role
Hello All.
We’re running our VMs on a cluster of Windows Server 2012 R2 Hyper-V hosts.
We frequently encounter the following error in the Cluster Events:
“Cluster resource 'Virtual Machine VM123' of type 'Virtual Machine' in clustered role 'VM123' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart
it. Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.”
VM123 is the hostname of a VM. We encounter this error for most of the VMs; however, the VMs keep running in the cluster.
Can anybody please explain what this error means?
Please help and advise.
Regards,
Hasan Bin Hasib
Hi Hasan Bin Hasib,
Since your description doesn't include any detailed clues beyond the VM123 resource being in a failed state, you can first run cluster validation to verify that your cluster configuration is correct, and install the recommended hotfixes and updates.
Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
https://support.microsoft.com/en-us/kb/2920151?wa=wsignin1.0
Understanding how Failover Clustering Recovers from Unresponsive Resources
http://blogs.msdn.com/b/clustering/archive/2013/01/24/10388009.aspx
I’m glad to be of help to you!
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]