Zone cluster and NFS
Hiya folks.
The setup: two global-cluster nodes running Solaris Cluster 3.3, with a zone cluster configured across them. There is an NFS share from a NetApp filer that can be mounted on both global zones.
I am not aware of a way to present this NFS share to the zone cluster.
This is a failover cluster setup, so there will not be any parallel I/O from the other cluster node.
I have heard that the path I should follow is a loopback file system. I would appreciate your advice. Thanks in advance.
Cheers
osp
Hi,
I had been confused by the docs and needed confirmation before replying.
You have to issue the clnas command from the global zone, but you can use the -Z <zoneclustername> option to operate on the zone cluster itself. For example:
# clnas add -t netapp -p userid=nasadmin -f <passwd-file> -Z <zc> <appliance>
# clnas add-dir -Z <zc> -d <dir> <appliance>
Your proposal (note: it must be clnas, not clns):
clns -t netapp -u nasadmin -f /home/nasadmin/passwd.txt -Z zc1 netapp_nfs_vfiler1
clns add-dir -d /nfs_share1 netapp_nfs_vfiler1
is not quite correct.
To your concerns: "Should the -u user and the password be vfiler users, or are they UNIX users?" This is the vfiler user.
"Where does the share get presented on the zone cluster?" Good question. Just give it a try.
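Putting the corrections together, the full sequence might look like this sketch (it uses the names from this thread; the final show command is only there to verify, and its exact options may vary by release):

```shell
# Run from the global zone; -Z scopes the operation to the zone cluster.
clnas add -t netapp -p userid=nasadmin -f /home/nasadmin/passwd.txt -Z zc1 netapp_nfs_vfiler1
clnas add-dir -Z zc1 -d /nfs_share1 netapp_nfs_vfiler1
# Verify what the zone cluster now knows about the filer:
clnas show -Z zc1
```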
Let us know whether that worked.
Hartmut
Similar Messages
-
How to configure Oracle database in a failover zone cluster
Setup: Oracle database and zone configured on highly available local filesystems.
Two node cluster.
Oracle database running inside the zone.
Note: I don't have a zone cluster.
1. I need to make the zone and the oracle database highly available.
2. Can I configure the Oracle data service directly to run in the zone, or will it involve creating an sczsh_config script to do the same?
I have been going through the guides and searching the net, but haven't found any help so far. It's so much simpler to configure this environment in Veritas cluster. Hope I find some help here.
TIA,
Sudhir
The Oracle Solaris Cluster concepts guide has some information on which zone model to choose:
http://docs.oracle.com/cd/E18728_01/html/821-2682/gcbkf.html#scrolltoc
When managing a Solaris zone with the HA Zones agent, the cluster basically regards the non-global zone as a blackbox.
As such you can either start the Oracle database as part of the runlevel/SMF startup of the non-global zone, or you can use the sczsh or sczsmf component and use your own scripts to start, probe and stop the Oracle database.
Usage of the standard HA Oracle data service is not supported in combination with the HA Zones agent.
If you require more fine-grained control of services running in non-global zones, why not set up a zone cluster and then have HA Oracle fail over the Oracle database between non-global zones?
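As an illustration of the sczsh route mentioned above: the component is driven by a config file that you edit and then register. The following is only a sketch; the script paths are hypothetical, and the variable names are from memory of the sczsh_config template, so verify them against the copy shipped in /opt/SUNWsczone/sczsh/util.

```shell
# Edit a copy of the sczsh_config template, roughly along these lines:
RS=oracle-db-rs                 # resource name to create (hypothetical)
RG=zone-rg                      # resource group containing the HA zone
PARAMETERDIR=/var/opt/sczsh     # directory for the component's parameter files
Zonename=orazone                # the non-global zone managed by the HA Zones agent
ServiceStartCommand="/u01/scripts/start_oracle.sh"   # hypothetical scripts
ServiceStopCommand="/u01/scripts/stop_oracle.sh"
ServiceProbeCommand="/u01/scripts/probe_oracle.sh"
# Then register the resource from the edited config:
# ./sczsh_register -f ./sczsh_config
```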
Regards
Thorsten -
The hostname test01 is not authorized to be used in this zone cluster
Hi,
I have problems to register a LogicalHostname to a Zone Cluster.
Here my steps:
- create the ZoneCluster
# clzc configure test01
clzc:test01> info
zonename: test01
zonepath: /export/zones/test01
autoboot: true
brand: cluster
bootargs:
pool: test
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
sysid:
name_service not specified
nfs4_domain: dynamic
security_policy: NONE
system_locale: en_US.UTF-8
terminal: vt100
timezone: Europe/Berlin
node:
physical-host: farm01a
hostname: test01a
net:
address: 172.19.115.232
physical: e1000g0
node:
physical-host: farm01b
hostname: test01b
net:
address: 172.19.115.233
physical: e1000g0
- create a RG
# clrg create -Z test01 test01-rg
- create Logicalhostname (with error)
# clrslh create -g test01-rg -Z test01 -h test01 test01-ip
clrslh: farm01b:test01 - The hostname test01 is not authorized to be used in this zone cluster test01.
clrslh: farm01b:test01 - Resource contains invalid hostnames.
clrslh: (C189917) VALIDATE on resource test01-ip, resource group test01-rg, exited with non-zero exit status.
clrslh: (C720144) Validation of resource test01-ip in resource group test01-rg on node test01b failed.
clrslh: (C891200) Failed to create resource "test01:test01-ip".
Here the entries in /etc/hosts from farm01a and farm01b
172.19.115.119 farm01a # Cluster Node
172.19.115.120 farm01b loghost
172.19.115.232 test01a
172.19.115.233 test01b
172.19.115.252 test01
Hope somebody could help.
regards,
Sascha
Edited by: sbrech on 13.05.2009 11:44
When I scanned my last example of a zone cluster, I spotted that I had added my logical host to the zone cluster's configuration:
create -b
set zonepath=/zones/cluster
set brand=cluster
set autoboot=true
set enable_priv_net=true
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=applh
set physical=auto
end
add dataset
set name=applpool
end
add node
set physical-host=deulwork80
set hostname=deulclu
add net
set address=172.16.30.81
set physical=e1000g0
end
end
add sysid
set root_password=nMKsicI310jEM
set name_service=""
set nfs4_domain=dynamic
set security_policy=NONE
set system_locale=C
set terminal=vt100
set timezone=Europe/Berlin
end
I am refering to:
add net
set address=applh
set physical=auto
end
So as far as I can see, this is missing from your configuration. Sorry for leading you the wrong way earlier.
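Concretely, applying that point to the test01 zone cluster from the original post would look something like this sketch (run from the global zone; the interactive clzc session is shown as comments):

```shell
# Add the logical hostname to the zone-cluster configuration:
clzc configure test01
# clzc:test01> add net
# clzc:test01:net> set address=test01
# clzc:test01:net> end
# clzc:test01> commit
# clzc:test01> exit
# Then retry creating the resource:
clrslh create -g test01-rg -Z test01 -h test01 test01-ip
```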
Detlef -
Scalable Apache service in a zone cluster
Hi,
I have set up a 2-virtual node zone cluster (on a physical 2-node cluster).
I am trying to set up an apache resource group (scalable) on the two nodes, but I don't get the zone cluster nodes as valid options to use while running clsetup and/or the web gui wizard.
They only see the two global zones and one failover zone I have configured on the cluster.
Is there no way to set up a scalable apache service on the zone cluster? Any help will be appreciated.
Thanks,
D
Never mind... I had to run the clrg, clrs and clressharedaddress commands from within the zone-cluster nodes and not from the global zone.
It's going to take a while to come to terms with the clustering within the cluster.
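For anyone else landing here, a rough sketch of such a scalable setup, run from inside a zone-cluster node (all resource/group names, the hostname, and the port are hypothetical, not taken from this thread):

```shell
# Failover group holding the shared address:
clrg create sa-rg
clressharedaddress create -g sa-rg -h web-host web-sa-rs
# Scalable group for Apache on both zone-cluster nodes:
clrg create -p Maximum_primaries=2 -p Desired_primaries=2 apache-rg
clrt register SUNW.apache
clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin \
  -p Scalable=true -p Port_list=80/tcp \
  -p Resource_dependencies=web-sa-rs apache-rs
# Bring both groups online and under management:
clrg online -M sa-rg apache-rg
```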
:) -
Creating logical host on zone cluster causing SEG fault
As noted in previous questions, I've got a two node cluster. I am now creating zone clusters on these nodes. I've got two problems that seem to be showing up.
I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
I am now trying to configure the resource groups and resources on additional zone clusters.
In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
This error appears to be happening on the other node, ie: not the one that I'm building from.
Has anyone seen anything like this, or have any thoughts on where I should go with it?
Thanks.
Hi,
"In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running." Look at the stack from your core dump and see whether it matches this bug:
6763940 clzc dumped core after zones were installed
As far as I know, the above bug is harmless and no functionality should be impacted. It is already fixed in a later release.
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11" The above message is not enough to figure out what's wrong. Please look at the below:
1) Check /var/adm/messages on the nodes and look at the messages printed around the same time as the above message to see whether they give more clues.
2) Also see whether there is a core dump associated with the above message and that might also provide more information.
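The two diagnostics above can be sketched as shell commands (the core-file path below is an assumption; check coreadm(1M) on your nodes for the real dump location and naming pattern):

```shell
# 1) Scan the system log for the validate failure and surrounding messages:
grep -n "hafoip_validate" /var/adm/messages
# 2) If a core file exists, dump its stack to compare with bug 6763940
#    (hypothetical path; adjust to your coreadm settings):
pstack /var/core/core.clzc.1234 | head -20
```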
If you need more help, please provide the output for the above.
Thanks,
Prasanna Kunisetty -
Add New Device to Zone Cluster without Reboot
We have a zone cluster running Oracle RAC on it.
We need to add a new DID device for use by Oracle RAC.
How can I add a new DID device without rebooting the zone cluster?
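For what it's worth, a sketch of adding a DID device to a zone-cluster configuration (the zone-cluster name and device path are hypothetical; whether the device becomes usable without a zone-cluster reboot depends on the release, so check the release notes for your version):

```shell
# Run from the global zone; the interactive session is shown as comments.
clzonecluster configure zc-rac
# clzc:zc-rac> add device
# clzc:zc-rac:device> set match=/dev/did/rdsk/d9s*
# clzc:zc-rac:device> end
# clzc:zc-rac> commit
# clzc:zc-rac> exit
```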
Edited by: klkumar10 on Jan 5, 2011 9:41 PM
Zone Cluster - oracle_server resource create
I am having a problem trying to create an oracle_server resource for my zone cluster.
I have a 2-node zone cluster that utilizes a shared storage zpool to house an Oracle installation and its database files. This is a test system so don't worry about the Oracle setup. I obviously wouldn't put the application and database files on the same storage.
When I run the following command from a global-zone member:
clrs create -Z test -g test-rg -t SUNW.oracle_server -p Connect_string=user/password -p ORACLE_SID=SNOW -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p Alert_log_file=/u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump -p Restart_type=RESOURCE_RESTART -p resource_dependencies=test-storage-rs test-oracle_server-rs
I get the following errors:
clrs: taloraan.admin.uncw.edu:test - Validation failed. ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
clrs: taloraan.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
clrs: taloraan.admin.uncw.edu:test - PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/initSNOW.ora nor server PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/spfileSNOW.ora exists
clrs: taloraan.admin.uncw.edu:test - This resource depends on a HAStoragePlus resouce that is not online on this node. Ignoring validation errors.
clrs: tatooine.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
clrs: (C189917) VALIDATE on resource test-oracle_server-rs, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource test-oracle_server-rs in resource group test-rg on node tatooine failed.
clrs: (C891200) Failed to create resource "test:test-oracle_server-rs".
So obviously, the clrs command cannot find the files (which are located on my shared storage). I am guessing I need to point the command at a global mount point.
Regardless, can anyone shed some light on how I make this happen?
I am referencing http://docs.sun.com/app/docs/doc/821-0274/chdiggib?a=view
The section that reads "Example 3 Registering Sun Cluster HA for Oracle to Run in a Zone Cluster".
The storage is mounted, but it only shows up inside the active node. You can't "see" it from a global cluster member. I am now trying to add the listener but am hitting a dead end.
# clrs create -Z test -g test-rg -t SUNW.oracle_listener -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p LISTENER_NAME=test-ora-lsnr test-oracle_listener
clrs: taloraan.admin.uncw.edu:test - ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
clrs: (C189917) VALIDATE on resource test-oracle_listener, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource test-oracle_listener in resource group test-rg on node taloraan failed.
clrs: (C891200) Failed to create resource "test:test-oracle_listener".
Is the LISTENER_NAME something that is assigned to the listener by Oracle, or is it simply something of my choosing?
Also, how can I see a detailed status listing of the zone cluster? When I execute "cluster status", it doesn't give a verbose listing of my zone cluster. I have tried "clzc status test" but am not afforded much in the way of output. I would like to see output that lists all of my resources.
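For what it's worth, a sketch of commands that usually give a more detailed per-resource view of a zone cluster from the global zone ("test" is the zone-cluster name from this thread; exact options vary by release):

```shell
# Resource groups and resources scoped to the zone cluster:
clrg status -Z test
clrs status -Z test
# Zone-cluster node state:
clzonecluster status test
```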
Zone Cluster: Creating a logical host resource fails
Hi All,
I am trying to create a logical host resource with two logical addresses to be part of the resource; however, the command is failing. Here is what I run to create the resource:
clrslh create -Z pgsql-production -g pgsql-rg -h pgprddb-prod,pgprddb-voip pgsql-halh-rs
And I am presented with this failure:
clrslh: specified hostname(s) cannot be hosted by any adapter on bfieprddb01
clrslh: Hostname(s): pgprddb-prod pgprddb-voip
I have pgprddb-prod and pgprddb-voip defined in the /etc/hosts files on the 2 global cluster nodes and also within the two zones in the zone cluster.
I have also modified the zone cluster configuration as described in the following thread:
http://forums.sun.com/thread.jspa?threadID=5420128
This is what I have done to the zone cluster:
clzc configure pgsql-production
clzc:pgsql-production> add net
clzc:pgsql-production:net> set address=pgprddb-prod
clzc:pgsql-production:net> end
clzc:pgsql-production> add net
clzc:pgsql-production:net> set address=pgprddb-voip
clzc:pgsql-production:net> end
clzc:pgsql-production> verify
clzc:pgsql-production> commit
clzc:pgsql-production> quit
Am I missing something here, help please :)
I did read a blog post mentioning that the logical host resource is not supported with exclusive-ip zones at the moment, but I have checked my configuration and I am running with ip-type=shared.
Any suggestions would be greatly appreciated.
Thanks
I managed to fix the issue; I got the hint from the following thread:
http://72.5.124.102/thread.jspa?threadID=5432115&tstart=15
It turns out that you can only define more than one logical host in a single resource if they all reside on the same subnet. I therefore had to create two logical host resources, one per subnet, by running the following in the global zone:
clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-prod pgsql-halh-prod-rs
clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-voip pgsql-halh-voip-rs
Thanks for reading :) -
Scalable service instance deregistered on multi-zone cluster
I have a pair of systems clustered with multiple zones configured on each. The zones are not clustered, however the dataservices are run on the zones in pairs. Some services are failover, some are scalable.
The problem arises with the scalable resources. There are multiple instances of the same application running on different zones (by instances I mean dev on one pair, tst on another pair, etc.). These instances all use the same ports on different IP addresses, where the IPs are configured as shared addresses. If I stop the application on one zone, the ports that it uses are deregistered on all of the zones, thereby killing instances that I'm not working on. This happens even for instances of the application that have not yet been configured into the cluster as data services. This defeats the purpose of having zones if I can't work on them in isolation.
I could cluster the zones, but I have no need to fail over the whole zone, and I need to have both failover and scalable resources so I'd need double the number of zones if I clustered the zones themselves.
If anyone has some thoughts I'd appreciate it.
Edited by: taccooper on Dec 8, 2009 10:14 AM
Hi,
you are hitting a restriction with scalable addresses and normal zones (zone nodes); let me elaborate a bit.
Sun Cluster supports three types of local-zone models:
1. The failover zone, where a zone is installed on shared storage and failed over between the nodes. Note that scalable addresses do not work here.
2. Zone nodes, where you fail over resource groups between zones. Scalable addresses are supported here, but a port can be bound to only one address.
3. Zone clusters, where you have an almost complete cluster running between the zones. Zone clusters are isolated from each other, so here you are completely free in deploying scalable addresses. The zones of a zone cluster are of the special brand cluster.
You have configured model 2, but you need model 3 to deploy what you want. The bad news is that you have to delete your zones and reinstall them with the clzonecluster utility. If you do not want this, you must configure different ports for the multiple instances of your application. That is the only way to keep model 2.
Hope that helps
Detlef -
Failover on zone cluster configured for apache on zfs filesystem takes 30 M
Hi all
I have configured a zone cluster for the Apache service, using a ZFS file system as the highly available storage.
The failover takes around 30 minutes, which is not acceptable. My configuration steps are outlined below:
1) configured a 2 node physical cluster.
2) configured a quorum server.
3) configured a zone cluster.
4) created a resource group in the zone cluster.
5) created a resource for logical hostname and added to the above resource group
6) created a resource for Highavailable storage ( ZFS here) and added to the above resource group
7) created a resource for apache and added to the above resource group
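For reference, steps 5-7 above might be sketched like this from inside the zone cluster (all resource, group, and pool names are hypothetical, not taken from Sid's setup):

```shell
# 5) Logical hostname resource:
clrslh create -g apache-rg -h apache-lh apache-lh-rs
# 6) HA storage resource managing the ZFS pool:
clrt register SUNW.HAStoragePlus
clrs create -g apache-rg -t SUNW.HAStoragePlus -p Zpools=webpool webpool-rs
# 7) Apache resource depending on storage and the logical host:
clrt register SUNW.apache
clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin \
  -p Resource_dependencies=webpool-rs,apache-lh-rs apache-rs
clrg online -M apache-rg
```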
The failover takes 30 minutes and shows "pending offline/online" most of the time.
I reduced the number of retries to 1, but to no avail.
Any help will be appreciated
Thanks in advance
Sid
Sorry guys for the late reply,
I tried switching the owner of the RG between the two nodes, which takes a reasonable amount of time. But the failover for a dry run takes 30 minutes.
The same setup with SVM works fine, but I want to have ZFS in my zone cluster.
Thanks in advance
Sid -
Error when creating zone cluster
Hello,
I have the following setup: Solaris 11.2 x86, cluster 4.2. I have already configured the cluster and it's up and running. I am trying to create a zone cluster but am getting the following error:
>>> Result of the Creation for the Zone cluster(ztestcluster) <<<
The zone cluster is being configured with the following configuration
/usr/cluster/bin/clzonecluster configure ztestcluster
create
set zonepath=/zclusterpool/znode
set brand=cluster
set ip-type=shared
set enable_priv_net=true
add sysid
set root_password=********
end
add node
set physical-host=node2
set hostname=zclnode2
add net
set address=192.168.10.52
set physical=net1
end
end
add node
set physical-host=node1
set hostname=zclnode1
add net
set address=192.168.10.51
set physical=net1
end
end
add net
set address=192.168.10.55
end
java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1234)
at java.util.regex.Matcher.reset(Matcher.java:308)
at java.util.regex.Matcher.<init>(Matcher.java:228)
at java.util.regex.Pattern.matcher(Pattern.java:1088)
at com.sun.cluster.zcwizards.zonecluster.ZCWizardResultPanel.consoleInteraction(ZCWizardResultPanel.java:181)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.cliConsoleInteraction(IteratorLayout.java:563)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.displayPanel(IteratorLayout.java:623)
at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.run(IteratorLayout.java:607)
at java.lang.Thread.run(Thread.java:745)
ERROR: System configuration error
As a result of a change to the system configuration, a resource that this
wizard will create is now invalid. Review any changes that were made to the
system after you started this wizard to determine which changes might have
caused this error. Then quit and restart this wizard.
Press RETURN to close the wizard
No errors in /var/adm/messages.
Any ideas?
Thank you!
I must be making some obvious, stupid mistake, because I still get that "not enough space" error:
root@node1:~# clzonecluster show ztestcluster
=== Zone Clusters ===
Zone Cluster Name: ztestcluster
zonename: ztestcluster
zonepath: /zcluster/znode
autoboot: TRUE
brand: solaris
bootargs: <NULL>
pool: <NULL>
limitpriv: <NULL>
scheduling-class: <NULL>
ip-type: shared
enable_priv_net: TRUE
resource_security: SECURE
--- Solaris Resources for ztestcluster ---
Resource Name: net
address: 192.168.10.55
physical: auto
--- Zone Cluster Nodes for ztestcluster ---
Node Name: node2
physical-host: node2
hostname: zclnode2
--- Solaris Resources for node2 ---
Node Name: node1
physical-host: node1
hostname: zclnode1
--- Solaris Resources for node1 ---
root@node1:~# clzonecluster install ztestcluster
Waiting for zone install commands to complete on all the nodes of the zone cluster "ztestcluster"...
clzonecluster: (C801046) Command execution failed on node node2. Please refer to the console for more information
clzonecluster: (C801046) Command execution failed on node node1. Please refer to the console for more information
But I have enough FS space. I increased the virtual HDD to 25 GB on each node. After the global cluster installation, I still have 16 GB free on each node. During the install I constantly check the free space and it should be enough (only about 500 MB is consumed by downloaded packages, which leaves about 15.5 GB free). And every time, the installation fails at the "apply-sysconfig" checkpoint...
High available address with zone cluster
Hi,
I went through the Sun documentation and didn't fully understand how I should ensure the high availability of an IP address "inside" a Solaris zone cluster.
I installed a new zone cluster with the following configuration:
zonename: proxy
zonepath: /proxy_zone
autoboot: true
brand: cluster
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
net:
address: 193.219.80.85
physical: auto
sysid:
root_password: ********************
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: ansi
node:
physical-host: cluster1
hostname: proxy_cl1
net:
address: 193.219.80.92/27
physical: vnet0
defrouter: 193.219.80.65
node:
physical-host: cluster2
hostname: proxy_cl2
net:
address: 193.219.80.94/27
physical: vnet0
defrouter: 193.219.80.65
clzc:proxy>
clzc:proxy>
After installation, I've tried to configure a new resource group with a logicalhostname resource in it inside the zone cluster:
/usr/cluster/bin/clresourcegroup create -n proxy_cl1,proxy_cl2 sharedip
and got the following error:
clresourcegroup: (C145848) proxy_cl1: Invalid node
Is there any other way to make an IP address inside the "proxy" zone cluster highly available?
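A likely cause, judging by the other threads in this digest, is that the command was run in the global zone without being scoped to the zone cluster. A sketch of the two scoped variants (zone-cluster name "proxy" from the config above):

```shell
# From the global zone, scoped with -Z:
clresourcegroup create -Z proxy -n proxy_cl1,proxy_cl2 sharedip
# Or, after logging in to a zone-cluster node:
clresourcegroup create -n proxy_cl1,proxy_cl2 sharedip
```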
Thanks.
Errors when adding resources to rg in zone cluster
Hi guys,
I managed to create and bring up a zone cluster, create an RG, and add a HAStoragePlus resource (a zpool), but I get errors when I try to add a logical hostname resource. Here's the output I find relevant:
root@node1:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 24.6G 10.0G 14.6G 40% 1.00x ONLINE -
zclusterpool 187M 98.5K 187M 0% 1.00x ONLINE -
root@node1:~# clzonecluster show ztestcluster
=== Zone Clusters ===
Zone Cluster Name: ztestcluster
zonename: ztestcluster
zonepath: /zcluster/ztestcluster
autoboot: TRUE
brand: solaris
bootargs: <NULL>
pool: <NULL>
limitpriv: <NULL>
scheduling-class: <NULL>
ip-type: shared
enable_priv_net: TRUE
resource_security: SECURE
--- Solaris Resources for ztestcluster ---
Resource Name: net
address: 192.168.10.55
physical: auto
Resource Name: dataset
name: zclusterpool
--- Zone Cluster Nodes for ztestcluster ---
Node Name: node2
physical-host: node2
hostname: zclnode2
--- Solaris Resources for node2 ---
Node Name: node1
physical-host: node1
hostname: zclnode1
--- Solaris Resources for node1 ---
Now I want to add a logical hostname (zclusterip, 192.168.10.55) to the resource group named z-test-rg.
root@zclnode2:~# cat /etc/hosts
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
# Internet host table
::1 localhost
127.0.0.1 localhost loghost
#zone cluster
192.168.10.51 zclnode1
192.168.10.52 zclnode2
192.168.10.55 zclusterip
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Online
zclnode2 No Offline
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterpool-rs zclnode1 Online Online
zclnode2 Offline Offline
root@zclnode2:~# clrg show
=== Resource Groups and Resources ===
Resource Group: z-test-rg
RG_description: <NULL>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: zclnode1 zclnode2
--- Resources for Group z-test-rg ---
Resource: zclusterpool-rs
Type: SUNW.HAStoragePlus:10
Type_version: 10
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
The error, for lh resource:
root@zclnode2:~# clrslh create -g z-test-rg -h zclusterip zclusterip-rs
clrslh: No IPMP group on zclnode1 matches prefix and IP version for zclusterip
Any ideas?
Much appreciated!
Hello,
First of all, I spotted a mistake in my previous config: instead of an IPMP group, a "plain" NIC was added to the cluster. I rectified that (I created the zclusteripmp0 IPMP group out of net11):
root@node1:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? static ok -- 172.16.3.66/26
clprivnet0/? static ok -- 172.16.2.2/24
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
lo0/zoneadmd-v4 static ok -- 127.0.0.1/8
lo0/zoneadmd-v6 static ok -- ::1/128
net0 ip ok sc_ipmp0 --
net1 ip ok sc_ipmp1 --
net2 ip ok -- --
net2/? static ok -- 172.16.0.66/26
net3 ip ok -- --
net3/? static ok -- 172.16.0.130/26
net4 ip ok sc_ipmp2 --
net5 ip ok sc_ipmp2 --
net11 ip ok zclusteripmp0 --
sc_ipmp0 ipmp ok -- --
sc_ipmp0/out dhcp ok -- 192.168.1.3/24
sc_ipmp1 ipmp ok -- --
sc_ipmp1/static1 static ok -- 192.168.10.11/24
sc_ipmp2 ipmp ok -- --
sc_ipmp2/static1 static ok -- 192.168.30.11/24
sc_ipmp2/static2 static ok -- 192.168.30.12/24
zclusteripmp0 ipmp ok -- --
zclusteripmp0/zoneadmd-v4 static ok -- 192.168.10.51/24
root@node1:~# clzonecluster export ztestcluster
create -b
set zonepath=/zcluster/ztestcluster
set brand=solaris
set autoboot=true
set enable_priv_net=true
set ip-type=shared
add net
set address=192.168.10.55
set physical=auto
end
add dataset
set name=zclusterpool
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=node2
set hostname=zclnode2
add net
set address=192.168.10.52
set physical=zclusteripmp0
end
end
add node
set physical-host=node1
set hostname=zclnode1
add net
set address=192.168.10.51
set physical=zclusteripmp0
end
end
And then I tried again to add the logical hostname, but got the same error:
root@node2:~# zlogin -C ztestcluster
[Connected to zone 'ztestcluster' console]
zclnode2 console login: root
Password:
Last login: Mon Jan 19 15:28:28 on console
Jan 19 19:17:24 zclnode2 login: ROOT LOGIN /dev/console
Oracle Corporation SunOS 5.11 11.2 June 2014
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Offline
zclnode2 No Online
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterpool-rs zclnode1 Offline Offline
zclnode2 Online Online
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# clreslogicalhostname create -g z-test-rg -h zclusterip zclusterip-rs
clreslogicalhostname: No IPMP group on zclnode1 matches prefix and IP version for zclusterip
root@zclnode2:~#
To answer your first question, yes - all global nodes and zone cluster nodes have entries for zclusterip:
root@zclnode2:~# cat /etc/hosts
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
# Internet host table
::1 localhost
127.0.0.1 localhost loghost
#zone cluster
192.168.10.51 zclnode1
192.168.10.52 zclnode2
192.168.10.55 zclusterip
root@zclnode2:~# ping zclnode1
zclnode1 is alive
When I tried the command you mentioned, it first gave me an error (there was a space between the interfaces); then I changed the RG name to fit mine (z-test-rg) and it (partially) worked:
root@zclnode2:~# clrs create -g z-test-rg -t LogicalHostname -p Netiflist=sc_ipmp0@1,sc_ipmp0@2 -p Hostnamelist=zclusterip zclusterip-rs
root@zclnode2:~# clrg show
=== Resource Groups and Resources ===
Resource Group: z-test-rg
RG_description: <NULL>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: zclnode1 zclnode2
--- Resources for Group z-test-rg ---
Resource: zclusterpool-rs
Type: SUNW.HAStoragePlus:10
Type_version: 10
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
Resource: zclusterip-rs
Type: SUNW.LogicalHostname:5
Type_version: 5
Group: z-test-rg
R_description:
Resource_project_name: default
Enabled{zclnode1}: True
Enabled{zclnode2}: True
Monitored{zclnode1}: True
Monitored{zclnode2}: True
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Offline
zclnode2 No Online
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterip-rs zclnode1 Offline Offline
zclnode2 Online Online - LogicalHostname online.
zclusterpool-rs zclnode1 Offline Offline
zclnode2 Online Online
root@zclnode2:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
clprivnet0 ip ok -- --
clprivnet0/? inherited ok -- 172.16.3.65/26
lo0 loopback ok -- --
lo0/? inherited ok -- 127.0.0.1/8
lo0/? inherited ok -- ::1/128
sc_ipmp0 ipmp ok -- --
sc_ipmp0/? inherited ok -- 192.168.10.55/24
zclusteripmp0 ipmp ok -- --
zclusteripmp0/? inherited ok -- 192.168.10.52/24
root@zclnode2:~# ping zclusterip
zclusterip is alive
root@zclnode2:~# clrg switch -n zclnode1 z-test-rg
root@zclnode2:~# cluster status
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
z-test-rg zclnode1 No Online
zclnode2 No Offline
=== Cluster Resources ===
Resource Name Node Name State Status Message
zclusterip-rs zclnode1 Online Online - LogicalHostname online.
zclnode2 Offline Offline - LogicalHostname offline.
zclusterpool-rs zclnode1 Online Online
zclnode2 Offline Offline
root@zclnode2:~# ping zclusterip
no answer from zclusterip
root@zclnode2:~# ping zclusterip
no answer from zclusterip
root@zclnode2:~#
So, the logical hostname was added and the RG can switch over to the other node, but zclusterip is pingable only from the zone-cluster node that holds the RG; I cannot ping zclusterip from the zone-cluster node that does not hold the RG, nor from any global-cluster node (node1, node2)...
When you try to create a zone on an NFS filesystem, zoneadm is unhappy:
Zonepath /zones/test-brew is on an NFS mounted file-system.
A local file-system must be used.
could not verify zonepath /zones/test-brew because of the above errors.
zoneadm: zone test-brew failed to verify
Why is this not allowed? Systems have been using diskless setups for ages; in fact this was fairly common in the 80s and 90s. Is there a good technical reason why this is not allowed, or is it a case of "we don't think you should be doing this"? It would make zone migrations a lot easier :).
murphys.law wrote:
When you try to create a zone on an NFS filesystem, zoneadm is unhappy:
Zonepath /zones/test-brew is on an NFS mounted file-system.
A local file-system must be used.
could not verify zonepath /zones/test-brew because of the above errors.
zoneadm: zone test-brew failed to verify
Why is this not allowed? My guess would be that it has something to do with server uptime. NFS is a stateless protocol, so if you lose access to the NFS server, the server hosting the zone will just sit there and hang until the NFS service is restored.
If you want multiple servers to have the possibility of serving the zone, you should look into zone clustering.
alan -
I have a zone cluster up and running at present. One of my nodes died and I have replaced the hardware with another server.
How do I go about bringing this node back into the zone cluster? I plan on using the same hostname/IP.
Also, how do I import/copy the zone and its configuration to my new node?
What if both machines died and my zone was on shared storage that is still accessible? How would I import this zone into a fresh zone cluster environment?