Cluster to Cluster communication
Hi Folks,
I have 2 UNIX boxes, Cluster A and Cluster B. I want Cluster A to communicate
with Cluster B.
How can I do that?
Thanks
- Robot
Similar Messages
-
Help needed setting up cluster communication network
Hi,
I need some help setting up a two node cluster communication network.
I have two virtual Servers each with four virtual nics.
I want to create a network for:
Management
Heartbeat
Live Migration
iSCSI
What type of switch would I need to create for each of them? (External/Internal/Private)
I presume that they would all have to be external as they will have to be able to communicate with the other host in the cluster through the physical network.
Would I need to configure VLANs on the physical switch to keep the traffic isolated?
Thanks
That's all. I don't see any reason to create a 4th node. Regarding the iSCSI target - for the test you can install this one,
https://www.starwindsoftware.com/starwind-virtual-san-free, on the Hyper-V host, which can create iSCSI disks. Just create an Internal network for the iSCSI traffic (add a third adapter to the
FS nodes, of course) which provides network visibility between the VMs and the hypervisor (you can use the external network too, of course).
Regarding live migration - this is something you cannot test with a single Hyper-V host; you'll need at least two. You can try ESX, which can expose virtualization instructions to the VMs, and create two Hyper-V VMs to test live migration.
1) It's actually possible to install StarWind Virtual SAN on the same hosts where the hypervisor (in this case Hyper-V) runs. That would build a so-called hyper-converged setup, something Microsoft is going to introduce with Windows Server 2016 (vNext) and Storage
Spaces Direct - except MSFT requires at least 4 nodes and StarWind can do 2 and 3 :)
2) All hypervisors except Hyper-V support "nested" virtualization these days, so for very lean labs e.g. ESXi sounds like a better choice.
Cheers,
Anton Kolomyeytsev [MVP]
StarWind Software Chief Architect
Hi VR38DETT,
I may need your help with a couple of Starwind Virtual SAN questions.
Thanks -
Problem encountered in Cluster Communication for WLS 6.1 SP2
Hi,
Currently, we have an admin server with 2 clusters of managed servers. We realise
that the managed servers in one cluster will communicate with the servers
in the other cluster using port 7001. Is there any way to stop the communication
between the 2 clusters, as this causes our servers to hang when one of the managed
servers hangs? Thanks.
We are using WebLogic 6.1 SP 2.
Regards,
apple
They are different. We have configured them differently.
"Sree Bodapati" <[email protected]> wrote:
>Is the multicast address for the two clusters different?
>
>sree
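WebLogic 6.1 clusters discover their members over IP multicast, so two clusters sharing the same multicast address and port will see each other's heartbeats. A hypothetical sanity check on cluster definitions (the names and addresses below are illustrative, not from the post):

```python
# Each cluster's multicast endpoint, as you would configure it in the
# WebLogic console. Addresses here are made-up examples.
clusters = {
    "ClusterA": ("237.0.0.1", 7001),
    "ClusterB": ("237.0.0.2", 7001),
}

def find_collisions(clusters):
    """Return pairs of cluster names that share a multicast address/port."""
    seen = {}
    collisions = []
    for name, endpoint in clusters.items():
        if endpoint in seen:
            collisions.append((seen[endpoint], name))
        else:
            seen[endpoint] = name
    return collisions

print(find_collisions(clusters))  # [] -> no overlap, clusters stay isolated
```

The same port is fine as long as the multicast address differs (and vice versa); only the identical pair causes cross-talk.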
-
Weblogic cluster - communication problems
We have 2 app servers clustered and they seem to be running fine other than the
following issue. Usually once, maybe twice, a day the servers are unable to communicate.
It happens early in the morning when load is low, so it does not seem to be
a load issue. Could something be timing out? We are using 6.1 and we have JMS
on one of the servers. The only RMI we are doing between the 2 app servers is
the JMS calls, which is where we are seeing the problem. Any help would be great!
Thank you. We will try this.
"Zach" <[email protected]> wrote:
>Check. SP3. This is likely a DGC timeout mismatch problem that was
>fixed
>in SP3.
>_sjz.
>
>"Rajesh Mirchandani" <[email protected]> wrote in message
>news:[email protected]...
>> If you are on WLS 6.1SP2, upgrade to SP3.
>>
>> Ryan Stuetzer wrote:
>>
>> > Zach,
>> >
>> > Another symptom we are noticing (during the time we are having the
>communication
>> > problem) is that the weblogic console displays the state of the
>cluster(s) as
>> > not running. It appears as if they are not participating in the cluster,
>however
>> > they are running and processing our client requests normally according
>to our
>> > logs.
>> >
>> > Thanks,
>> > Ryan
>> >
>> > "Zach" <[email protected]> wrote:
>> > >What type of exceptiosn are you receiving?
>> > >_sjz.
>> > >
>> > >"Ryan Stuetzer" <[email protected]> wrote in message
>> > >news:[email protected]...
>> > >>
>> > >> We have 2 app servers clustered and they seem to be running fine
>other
>> > >than the
>> > >> following issue. Usually, once, maybe twice a day, the servers
>are
>> > >unable
>> > >to communicate...
>> > >> It happens early in the morning when load is low... So it does
>not
>> > >seem to
>> > >be
>> > >> a load issue. Could something be timing out? We are using 6.1
>and we
>> > >have
>> > >JMS
>> > >> on one of the servers. The only RMI we are doing between the 2
>app
>> > >servers
>> > >are
>> > >> the JMS calls, which is where we are seeing the problem. Any help
>would
>> > >be
>> > >great
>> > >> !!!!
>> > >
>> > >
>>
>> --
>> Rajesh Mirchandani
>> Developer Relations Engineer
>> BEA Support
>>
>>
>
>
-
Problem encountered in Cluster Communication for SP2
Hi,
Currently, we have an admin server with 2 clusters of managed servers. We realise
that the managed server in the cluster will communicate with the other servers
in the other cluster using port 7001. Is there any way to stop the communication
between the 2 clusters, as this causes our servers to hang when one of the managed
servers is hanged? Thanks.
We are using WebLogic 6.1 SP 2.
Regards,
apple
Our application is running on a server in cluster mode. How will this affect the
communication between clusters?
"Wenjin Zhang" <[email protected]> wrote:
>
Unless your cluster servers run on port 7001 and you have your applications
from
one cluster talk to another cluster. Check your application flows first.
"apple" <[email protected]> wrote:
Hi,
Currently, we have an admin server with 2 clusters of managed servers.
We realise
that the managed server in the cluster will communicate with the other
servers
in the other cluster using port 7001. Is there any way to stop the communication
between the 2 clusters, as this causes our servers to hang when oneof
the managed
servers is hanged? Thanks.
We are using WebLogic 6.1 SP 2.
Regards,
apple -
Cluster Network communication doesn't go correctly through adapters
Hi!
I have a cluster configured from VMM with logical networks for cluster communication, live migration, and a public network with management. These networks sit on a team of two physical adapters with a converged switch. For each network a virtual adapter is created on
every cluster host.
Communication in the team behaves oddly: inbound traffic uses one adapter and outbound traffic uses the second adapter, where there should be aggregation. The team is LACP, and we tried both Hyper-V Port and Address Hash - same behavior in both
configurations.
Anybody know what to do for aggregated communication?
Thanks
I'd like to push this up to the LabVIEW dev folks and ask why a string-outputting version of an enum could not be created and made functional as a type def. I understand the difference between enums and rings, but I would like to suggest that the powers that be create a data type for strings, similar to the enum, which updates all instances of a type def when changes are made to the string content within.
To draw a parallel: an enum pairs the string label with a number and packages the two as part of the type def. Instead, could we not pair the string label with a string and again package the two as a type def? You could choose to have the label and string content match or not as needed, but the pair would update universally.
Here is one instance where this would be useful. I have been using the Asynchronous Message Communication Library lately, which utilizes the standard LabVIEW queue functions. While the standard queue functions can accept various data types, the AMC library is limited to string messages only, and rewriting the entire AMC lib is time-prohibitive.
So it would be very convenient to have something that looks like a combo box constant as a type def to feed into the AMC libraries instead of individual string constants. It would significantly reduce errors when repeatedly typing the same string constants, only to find the mistakes the hard way after hours of debug. -
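For comparison, outside LabVIEW the requested "label paired with a string" type def exists in several languages. A Python sketch of the same idea (the message names and queue here are illustrative, not the AMC library's API): a str-based Enum pairs a symbolic label with a string payload, so changing the value in one place updates every use of the label.

```python
from enum import Enum

# A "type def" pairing each label with a string payload: edit the
# value here and every use of Messages.START picks up the change.
class Messages(str, Enum):
    START = "start acquisition"
    STOP = "stop acquisition"

# A string-only queue (like AMC's string messages) can consume the
# enum's value while the code refers only to the symbolic label.
def enqueue(queue, msg: Messages):
    queue.append(msg.value)

q = []
enqueue(q, Messages.START)
print(q)  # ['start acquisition']
```

Typos in the label (`Messages.STRAT`) fail immediately instead of surfacing hours later as an unmatched string, which is exactly the debugging pain described above.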
This cluster has been up and working for maybe a year and a half the way it is. There are two nodes, running Server 2012. In addition to a couple network interfaces devoted to VM traffic each node has:
Management Interface: 192.168.1.0/24
iSCSI Interface: 192.168.1.0/24
Internal Cluster Interface: 192.168.99.0/24
The iSCSI interfaces have to be on the same subnet as the management interfaces due to limitations in the shared storage. Basically, if I segregated it I wouldn't be able to access the shared storage itself for any kind of management or maintenance tasks.
I have restricted the iSCSI traffic to only use the one interface on each cluster node but I noticed that one of the cluster networks is connecting the management interface on one cluster node member with the iSCSI interface on the other cluster node member.
I would like for the cluster network to be using the management interface on both cluster node members so as not to interfere with iSCSI traffic. Can I change this?
Binding order of interfaces is the same on both boxes, but maybe I did that after I created the cluster; not sure.
Hi MnM Show,
Tim is correct. If you are using iSCSI storage and using the network to get to it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This
network should be disabled for cluster communications so that it is dedicated to only storage-related traffic.
This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During creation of the cluster, iSCSI traffic will be detected and the network
will be disabled from cluster use. This network should be set to lowest in the binding order.
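Failover clustering groups node interfaces into cluster networks by subnet, which is why a management NIC on one node can end up paired with the iSCSI NIC on the other when both sit in 192.168.1.0/24. A quick way to see which interfaces get lumped together (a Python sketch; the subnets are the ones from the post, but the host addresses are made up for illustration):

```python
import ipaddress
from collections import defaultdict

# Interfaces per node as described above: management and iSCSI share
# 192.168.1.0/24, so the cluster treats them as one network.
interfaces = [
    ("node1-mgmt", "192.168.1.10/24"),
    ("node1-iscsi", "192.168.1.11/24"),
    ("node1-cluster", "192.168.99.10/24"),
    ("node2-mgmt", "192.168.1.20/24"),
    ("node2-iscsi", "192.168.1.21/24"),
    ("node2-cluster", "192.168.99.20/24"),
]

networks = defaultdict(list)
for name, cidr in interfaces:
    # Group each interface under the subnet it belongs to.
    networks[ipaddress.ip_interface(cidr).network].append(name)

for net, members in networks.items():
    print(net, "->", members)
```

All four management/iSCSI interfaces land in one group, so the only way to keep cluster traffic off the iSCSI NIC is to disable cluster use on it, not to steer it per-interface.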
The related article:
Configuring Windows Failover Cluster Networks
http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
-
What are the host network requirements for a 2012 R2 failover cluster using Fibre Channel?
I've seen comments on here regarding how the heartbeat signal isn't really required anymore - is that true? We started using Hyper-V in its infancy and have upgraded gleefully every step of the way. With 2012 R2, we also upgraded from 1Gb iSCSI
to 8Gb Fibre Channel. Currently, I have three NICs in use on each host: one for "No cluster communication" on its own VLAN; another set to "Allow cluster network communication on this network" but NOT allowing clients, on
a different VLAN; and lastly the public network, which allows cluster comms and clients (public VLAN).
Is it still necessary to have all three of these NICs in use? If the heartbeat isn't necessary any more, is there any reason to not have two public IPs and do away with the rest of the network? (two for fault tolerance) Does Live Migration
still use Ethernet if FC is available? I wasn't sure what all has changed with these requirements since Hyper-V first came out.
If it matters, we have 5 servers w/160GB RAM, 8 NICs, dual HBAs connected to redundant FC switches, going to two SANs. We're running around 30 VMs right now.
Can someone share their knowledge with me regarding the proper setup for my environment? Many thanks!
Hi,
You can set up a cluster with a single network, but that leaves you with a single point of failure on the networking front; it is still recommended to have a heartbeat network.
Live migration would still happen through Ethernet; it has nothing to do with FC. Don't get confused: you had iSCSI for storage, which used one of your VLANs, and now you have FC for your storage.
Your hardware specs looks good. You can set up the following networks -
1. Public Network - Team two or more NICs (based on bandwidth aggregation)
2. Heartbeat Network - Don't use a teamed adapter
3. Live Migration - Team two or more NICs (based on bandwidth aggregation)
Plan properly and draw out the design so you can visualize and remove single points of failure everywhere.
Feel free to ask if you have some more queries.
Regards
Prabhash -
Deactivating and Activating Brokers all the time in Conv Cluster normal?
hi *,
I just wanted to know if this is normal behaviour.
We have a conventional cluster with 8 + 1 nodes:
8 workers and 1 master.
In our cluster jmq logfiles we very often get these messages:
[26/Sep/2008:13:54:16 CEST] [B1179]: Activated broker
Address = mq://example.com:36105/?instName=jmqDivAccA2&brokerSessionUID=6667329240416509952
StartTime = 1222429874806
ProtocolVersion = 410
[26/Sep/2008:13:54:16 CEST] [B1072]: Closed cluster connection to broker mq://example.com:36105/?instName=jmqDivAccA2&brokerSessionUID=6667329240416509952
[26/Sep/2008:13:54:16 CEST] [B1180]: Deactivated broker
Address = mq://example.com:36105/?instName=jmqDivAccA2&brokerSessionUID=6667329240416509952
StartTime = 1222429874806
ProtocolVersion = 410
[26/Sep/2008:13:54:16 CEST] [B1179]: Activated broker
Address = mq://example.com:39105/?instName=jmqCzeAccA2&brokerSessionUID=8358993850445347840
StartTime = 1222429866204
ProtocolVersion = 410
[26/Sep/2008:13:54:16 CEST] [B1072]: Closed cluster connection to broker mq://example.com:39105/?instName=jmqCzeAccA2&brokerSessionUID=8358993850445347840
[26/Sep/2008:13:54:16 CEST] [B1180]: Deactivated broker
Address = mq://example.com:39105/?instName=jmqCzeAccA2&brokerSessionUID=8358993850445347840
StartTime = 1222429866204
ProtocolVersion = 410
[26/Sep/2008:13:54:16 CEST] [B1179]: Activated broker
Address = mq://example.com:37100/?instName=jmqSsfAccA1&brokerSessionUID=1687473902759115264
StartTime = 1222235763760
ProtocolVersion = 410
[26/Sep/2008:13:54:16 CEST] [B1072]: Closed cluster connection to broker mq://example.com:37100/?instName=jmqSsfAccA1&brokerSessionUID=1687473902759115264
[26/Sep/2008:13:54:16 CEST] [B1180]: Deactivated broker
Address = mq://example.com:37100/?instName=jmqSsfAccA1&brokerSessionUID=1687473902759115264
StartTime = 1222235763760
ProtocolVersion = 410
Any idea why this happens so often? Right now all the nodes reside on the same hardware (very low delay) and use almost no CPU.
Regards, Chris
Hi Linda,
this cluster is for test cases on the same machine and uses the very same binaries,
so there is for sure no version conflict or anything similar.
Can you send me the line for how to turn on debug for cluster communication?
every broker is started like this:
nohup imqbrokerd -name jmqXXXXXXXXA1 -port 39100 -bgnd -Dimq.cluster.url=file:/home/caps/XXXXXXXXXX/broker.config -varhome /home/caps/XXXXXXXX/jmqXXXXA1 > jmqXXXXA1.out 2>&1 &
and the broker config is like this:
imq.cluster.brokerlist=servername:35100,servername:35105,servername:36100,servername:36105,servername:37100,servername:37105,servername:38100,servername:38105,servername:39100,servername:39105
imq.cluster.masterbroker=servername:34100
After our system was patched yesterday I can not reproduce this behaviour anymore,
BUT it would be great if you could send me the lines for debugging.
Regards, Chris
-
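For reference, the imq.cluster.brokerlist value above is just a comma-separated list of host:port entries. A small Python helper (hypothetical, not part of MQ) to split it for scripting against the brokers:

```python
def parse_brokerlist(value: str):
    """Split an imq.cluster.brokerlist value into (host, port) tuples."""
    brokers = []
    for entry in value.split(","):
        # rpartition tolerates hostnames containing no colon ambiguity;
        # everything after the last ':' is the port.
        host, _, port = entry.strip().rpartition(":")
        brokers.append((host, int(port)))
    return brokers

brokerlist = ("servername:35100,servername:35105,servername:36100,"
              "servername:36105,servername:37100,servername:37105,"
              "servername:38100,servername:38105,servername:39100,"
              "servername:39105")
print(parse_brokerlist(brokerlist)[0])  # ('servername', 35100)
```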
Unable to bring up ASM on 2nd node of a 2-node Cluster
Having a very weird problem on a 2-node cluster. I can only bring up one ASM instance at a time. If I bring up the second, it hangs. This is what the second (hung) instance puts in the alert log:
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /ORAUTL/oraasm/product/ASM/dbs/arch
Autotune of undo retention is turned off.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
large_pool_size = 12582912
instance_type = asm
cluster_interconnects = 192.168.0.12
cluster_database = TRUE
instance_number = 2
remote_login_passwordfile= EXCLUSIVE
background_dump_dest = /ORAUTL/oraasm/admin/+ASM2/bdump
user_dump_dest = /ORAUTL/oraasm/admin/+ASM2/udump
core_dump_dest = /ORAUTL/oraasm/admin/+ASM2/cdump
pga_aggregate_target = 0
Cluster communication is configured to use the following interface(s) for this instance
192.168.0.12
Fri Nov 21 21:10:48 2008
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=5428
DIAG started with pid=3, OS id=5430
PSP0 started with pid=4, OS id=5432
LMON started with pid=5, OS id=5434
LMD0 started with pid=6, OS id=5436
LMS0 started with pid=7, OS id=5438
MMAN started with pid=8, OS id=5442
DBW0 started with pid=9, OS id=5444
LGWR started with pid=10, OS id=5446
CKPT started with pid=11, OS id=5448
SMON started with pid=12, OS id=5458
RBAL started with pid=13, OS id=5475
GMON started with pid=14, OS id=5487
Fri Nov 21 21:10:49 2008
lmon registered with NM - instance id 2 (internal mem no 1)
Fri Nov 21 21:10:49 2008
Reconfiguration started (old inc 0, new inc 2)
ASM instance
List of nodes:
0 1
Global Resource Directory frozen
Communication channels reestablished
After this it hangs. i've checked everything. CRS is fine.
I suspect its the kernel revision. This is a cluster of two v890's. Kernel rev is 127127-11. Anyone seen this issue ?
thanks
Responses in-line:
Have you got any issue reported from the Lock Monitor (LMON)? (Those messages in the alert.log are summaries of the reconfiguration event.)
No issues that I have seen. I see trc files on both nodes for lmon, but neither contains errors.
Do you have any post issues on the date the issue began (something with "Reconfiguration started")?
This is a new build. It's going to be a DR environment (Data Guard physical standby), so we've never managed to get ASM up yet.
Do you have any other errors on the second node on the date the issue appears (some ORA-27041 or other messages)?
No errors at all.
What is the result of a crs_stat -t?
HA Resource Target State
ora.vzdfwsdbp01.LISTENER_VZDFWSDBP01.lsnr ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.gsd ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.ons ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.vip ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp02.LISTENER_VZDFWSDBP02.lsnr ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.gsd ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.ons ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.vip ONLINE ONLINE on vzdfwsdbp02
ASM isn't registered with CRS/OCR yet. I did add it at one time, but it didn't seem to make any difference.
What is the release of your installation, 10.2.0.4? Otherwise check whether you can upgrade CRS, ASM and your RDBMS to that release.
CRS, ASM and Oracle will be 10.2.0.3. Can't go to 10.2.0.4 yet as the primary site is at 10.2.0.3 on a live system.
Can you please tell us what is the OS / Hardware in use?
Solaris 10, Sun v890
$ uname -a
SunOS dbp02 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V890
What is the result of that on the second node:
Even a startup nomount hangs on the second node.
connect sqlplus / as sysdba;
startup nomount
desc v$asmdiskgroup;
select name, mount from v$diskgroup;
In the case that no group is mounted do
alter database mount diskgroup 'your diskgroupname';
What is the result of that?
thanks
-toby
-
CUC Cluster not functioning correctly
Saw a few posts and documents relating to this issue, but they don't match up perfectly with my particular scenario. Basically, the customer could no longer log into the Unity Pub, so we had to do a rebuild as nothing was working. The Sub took over as it should, a co-worker rebuilt their Pub, and the split-brain effect never went away. In fact, they aren't communicating at all almost a week later. Here are the things I've checked so far:
DB Replication: (from the subs perspective)
DB and Replication Services: ALL RUNNING
Cluster Replication State: Only available on the PUB
DB Version: ccm9_1_1_10000_11
Repltimeout set to: 300s
PROCESS option set to: 1
Cluster Detailed View from XXXXX-UCXN02 (2 Servers):
PING CDR Server REPL. DBver& REPL. REPLICATION SETUP
SERVER-NAME IP ADDRESS (msec) RPC? (ID) & STATUS QUEUE TABLES LOOP? (RTMT)
XXXXX-UCXN01 10.200.9.21 0.575 Yes (2) Connected 0 match Yes (2)
XXXXX-UCXN02 10.103.9.22 0.067 Yes (3) Connected 0 match Yes (2)
(Pubs perspective)
DB and Replication Services: ALL RUNNING
DB CLI Status: No other dbreplication CLI is running...
Cluster Replication State: BROADCAST SYNC Completed on 1 servers at: 2015-01-23-17-19
Last Sync Result: SYNC COMPLETED 603 tables sync'ed out of 603
Sync Errors: NO ERRORS
DB Version: ccm9_1_1_10000_11
Repltimeout set to: 300s
PROCESS option set to: 1
Cluster Detailed View from XXXXX-UCXN01 (2 Servers):
PING CDR Server REPL. DBver& REPL. REPLICATION SETUP
SERVER-NAME IP ADDRESS (msec) RPC? (ID) & STATUS QUEUE TABLES LOOP? (RTMT) & details
XXXXX-UCXN01 10.200.9.21 0.084 Yes (2) Connected 0 match Yes (2) PUB Setup Completed
XXXXX-UCXN02 10.103.9.22 0.663 Yes (3) Connected 0 match Yes (2) Setup Completed
Clusters look good:
admin:show network cluster
10.200.9.21 xxxxx-ucxn01.xxxxx.local xxxxx-ucxn01 Publisher authenticated
10.103.9.22 xxxxx-ucxn02.xxxxx.local xxxxx-ucxn02 Subscriber authenticated using TCP since Fri Jan 23 16:42:15 2015
Server Table (processnode) Entries
10.200.9.21
10.103.9.22
Successful
Overall, they are in that split-brain mode and working with CUCM, but I'm not sure why it hasn't corrected itself. Both the Pub and Sub have been restarted to no effect. Any ideas on why this is still happening? I am in the process of pulling logs.
Error shown by CUC at the Admin page after login:
Communication is not functioning correctly between the servers in the Cisco Unity Connection cluster. To review server status for the cluster, go to the Tools > Cluster Management page of Cisco Unity Connection Serviceability.
Check NTP. Ensure Unity is synced to a stable, good-stratum source - preferably stratum 1, 2 or 3.
On your version, time slips can cause memory leaks in servm. This in turn affects cluster communication.
You said you couldn't access the Pub. The Pub would have been on high CPU - another symptom of this issue.
You can confirm by looking at the core dumps - 'utils core active list'
See if there are any servm core dumps. Most likely the server is affected by CSCug53756 / CSCud58000
HTH
Anirudh -
Failover Cluster 2008 R2 - VM lose connectivity after live migration
Hello,
I have a failover cluster with 3 server nodes running. I have 2 VMs running on one of the hosts without problems, but when I do a live migration of a VM to another host, the VM loses network connectivity. For example, if I leave a ping running, the ping command
gets 2 responses, then loses 3 packets, then 1 response again, then 4 packets lost again, and so on. If I live migrate the VM back to the original host, everything goes OK again.
The behavior is the same for both VMs, but I did a test with a new VM, and with that new VM everything works fine; I can live migrate it to every host.
Any advice?
Cristian L Ruiz
Hi Cristian Ruiz,
What are your current host NIC settings? From your description it seems you are using an incorrect network NIC design. If you are using iSCSI storage, it needs a dedicated
network in the cluster.
If your NIC teaming is configured as switch independent + dynamic, please try disabling VMQ in the VM settings to narrow down the issue.
More information:
VMQ Deep Dive, 1 of 3
http://blogs.technet.com/b/networking/archive/2013/09/10/vmq-deep-dive-1-of-3.aspx
Hi!
Thank you for your reply!
Yes, we are using iSCSI storage, but it has its own NICs (2 independent NICs just to connect the server to the storage), and they are configured not to be used for cluster communication. The team configuration is just for the LAN connectivity.
The NIC teaming is configured using the BACS4 software from a Dell server, in Smart Load Balancing and Failover mode (as you can see here:
http://www.micronova.com.ar/cap01.jpg). The link you passed is for Windows Server 2012 and we are running Windows Server 2008 R2, BUT as you can see in the following capture, the NICs have that feature disabled
(http://www.micronova.com.ar/cap02.jpg).
One test I'm thinking of doing is to remove the teaming configuration and test with just one independent NIC for the LAN connection. But I do not know if you would suggest another option.
Thanks in advance.
Cristian L Ruiz
Sorry, another option I'm thinking of is to update the driver version. But the server is in production and I need to schedule a downtime window to test that.
Cristian L Ruiz -
Packets sent out the wrong Interface on Hyper-V 2012 Failover Cluster
Here is some background information:
2 Dell PowerEdge servers running Windows Server 2012 w/ Hyper-V in a Failover Cluster environment. Each has:
1 NIC for Live Migration 192.168.80.x/24 (connected to a private switch)
1 NIC for Cluster Communication 192.168.90.x/24 (connected to a private switch)
1 NIC for iscsi 192.168.100.x/24 (connected to a private switch)
1 NIC for host management with a routable public IP (*connected to corp network) w/ gateway on this interface
1 NIC for Virtual Machine traffic (*connected to corp network)
All NICs are up; we can ping the IPs between servers on the private network and on the public-facing networks. All functions of Hyper-V are working, the failover cluster reports all interfaces are up, and we receive no errors. Live migration
works fine. In the live migration settings I have restricted it to use the 2 NICs (live migration or cluster comm).
My problem is that our networking/security group sees, on occasion (about every 10 minutes, with a few other packets thrown in at different times), SYN packets destined for the 192.168.80.3 interface go out of the public interface and get dropped
at our border router. These should be heading out of the 192.168.80.x or 192.168.90.x interfaces without ever hitting our corporate network. Anyone have an idea of why this might be happening? Traffic is on TCP 445.
Appreciate the help.
Nate
Hi,
Please check live migration and Cluster Communication network settings in cluster:
In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
If the console tree is collapsed, expand the tree under the cluster that you want to configure.
Expand Networks.
Right-click the network that you want to modify settings for, and then click Properties.
There are two options:
Allow cluster network communication on this network (with the sub-option "Allow clients to connect through this network")
Do not allow cluster network communication on this network
If the network is used only for cluster node communication, clear the "Allow clients to connect through this network" option.
Check that and give us feedback for further troubleshooting, for more information please refer to following MS articles:
Modify Network Settings for a Failover Cluster
http://technet.microsoft.com/en-us/library/cc725775.aspx
Lawrence
TechNet Community Support -
Failover Cluster Manager - Partioned Networks Server 2012
Hello,
I have a DR cluster that I am trying to validate. It has 3 nodes, each with a teamed vEthernet adapter for cluster communications and live migration. I can start the cluster service on 2 of the 3 nodes without the networks entering a partitioned state. However,
I can only ping the IPs for those adapters from their own server. Also, it doesn't matter which 2 nodes are brought up;
any order will produce the same results. Validation gives the following error for connections between all nodes:
Network interfaces DRHost3.m1ad.xxxxxx.biz - vEthernet
(vNIC-LiveMig) and DRHost2.m1ad.xxxxxx.biz - vEthernet (vNIC-LiveMig) are on
the same cluster network, yet address 192.168.xxx.xx is not reachable from
192.168.xxx.xx using UDP on port 3343.
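The validation error above is about UDP reachability on the cluster heartbeat port 3343. A crude ad-hoc probe for that kind of check is sketched below in Python; note it only proves reachability when something on the far side actually answers, since plain UDP produces no reply by itself, so treat a False as "no response", not proof of a block.

```python
import socket

def udp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Send one UDP datagram and report whether any reply came back.

    No reply within the timeout, or an ICMP port-unreachable error,
    both count as 'no response' for this crude check.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"ping", (host, port))
        sock.recvfrom(1024)
        return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False
    finally:
        sock.close()

# Example (address placeholder as in the post; substitute a node's IP):
# udp_probe("192.168.xxx.xx", 3343)
```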
Update: I have created a specific inbound rule on the server firewalls for port 3343. The networks no longer show as partitioned. However, I still receive the same errors about communication on port 3343 to and from all nodes on the LiveMig and ClustPriv
networks. Any help would be appreciated.
Brian Gilmore, Lead IT Technician, Don-Nan Pump & Supply
Windows IP Configuration
Host Name . . . . . . . . . . . . : DRHost1
Primary Dns Suffix . . . . . . . : m1ad.don-nan.biz
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : m1ad.don-nan.biz
Ethernet adapter vEthernet (VM Public Network):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
Physical Address. . . . . . . . . : 14-FE-B5-CA-35-6C
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::609a:8da3:7bce:c32f%31(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.9.113(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.9.1
DHCPv6 IAID . . . . . . . . . . . : 1091894965
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
DNS Servers . . . . . . . . . . . : 192.168.9.21
192.168.9.23
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter vEthernet (vNIC-ClusterPriv):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-14
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::2481:996:cf44:dc3d%32(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.108.31(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 1258298840
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter vEthernet (vNIC-LiveMig):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #4
Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-15
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::f884:a35d:aa43:720e%33(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.109.31(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 1358962136
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter BC-PCI3 - iSCSI1:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (NDIS VBD Client) #49
Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-3C
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 192.168.107.22(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter BC-PCI4 - iSCSI2:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (NDIS VBD Client) #50
Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-3E
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 192.168.107.23(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Disabled
Windows IP Configuration
Host Name . . . . . . . . . . . . : DRHost3
Primary Dns Suffix . . . . . . . : m1ad.don-nan.biz
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : m1ad.don-nan.biz
Ethernet adapter vEthernet (VM Public Network):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
Physical Address. . . . . . . . . : D0-67-E5-FB-A2-3F
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::9928:4d4f:4862:2ecd%31(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.9.115(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
IPv4 Address. . . . . . . . . . . : 192.168.9.119(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.9.1
DHCPv6 IAID . . . . . . . . . . . : 1104177125
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
DNS Servers . . . . . . . . . . . : 192.168.9.21
192.168.9.23
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter vEthernet (vNIC-ClusterPriv):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-18
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::3d99:312c:8f31:6411%32(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.108.33(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 1258298840
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter vEthernet (vNIC-LiveMig):
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #4
Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-19
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::d859:b18a:71d6:8cef%33(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.109.33(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 1358962136
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter BC-PCI3-iSCSI1:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (NDIS VBD Client) #49
Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-60
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 192.168.107.26(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Disabled
Ethernet adapter BC-PCI4-iSCSI2:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (NDIS VBD Client) #50
Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-62
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 192.168.107.27(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Disabled
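The subnets above map one traffic type per network (192.168.9.0 management, 192.168.108.0 cluster private, 192.168.109.0 live migration, 192.168.107.0 iSCSI). Failover Clustering auto-detects each subnet as a separate cluster network; as a hedged sketch using the FailoverClusters PowerShell module (the "Cluster Network N" names are hypothetical placeholders, check yours with the first command):

```powershell
# List the cluster networks that Failover Clustering detected from the subnets
Get-ClusterNetwork | Format-Table Name, Address, Role

# Role values: 0 = None (keep iSCSI out of cluster use),
# 1 = Cluster only (heartbeat/CSV), 3 = Cluster and Client (management).
(Get-ClusterNetwork "Cluster Network 1").Role = 3   # 192.168.9.0   - management
(Get-ClusterNetwork "Cluster Network 2").Role = 1   # 192.168.108.0 - cluster private
(Get-ClusterNetwork "Cluster Network 3").Role = 1   # 192.168.109.0 - live migration
(Get-ClusterNetwork "Cluster Network 4").Role = 0   # 192.168.107.0 - iSCSI
```

Setting the iSCSI network's role to 0 keeps cluster and client traffic off the storage path.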
Brian Gilmore, Lead IT Technician, Don-Nan Pump & Supply
Hyper-V Failover Cluster - 4 NIC - QoS
Dear All,
Please have a look at the link below:
http://technet.microsoft.com/en-us/library/jj735302.aspx#bkmk_1
I am trying to achieve the third configuration on that list (4 NICs in two teams).
I have built two NIC teams as suggested in that configuration: one for Virtual Machines (Switch Independent, Hyper-V Port) and one for the Management OS (Switch Independent, Dynamic). The TCP/IP settings on the Virtual Machines team have everything configured: IP address, subnet mask, gateway, and DNS. The Management OS team has only an IP address and subnet mask defined.
1. Is this the only configuration required on the Management OS NIC team, or do I need to provide a gateway and DNS as well? If a gateway is required on both teams, how are the two gateways handled, and will they work?
2. Secondly, how do I add tNICs to the Management OS NIC team for cluster communication, heartbeat, and live migration? What TCP/IP settings (IP address, gateway, DNS) should those three tNICs have? Do they need to be on different networks, i.e. a different network address for each of the three tNICs?
3. Thirdly, how do I add vNICs to the Hyper-V virtual switch when the virtual machines belong to different VLANs? What configuration is required on Hyper-V for different VLANs to work on virtual machines?
4. Fourthly, what configuration is required at the network/switch level?
5. Is the configuration for both teams correct?
Thanks in advance.
Hi Junkie,
Here is a link regarding teaming mode:
http://blogs.technet.com/b/keithmayer/archive/2012/10/16/nic-teaming-in-windows-server-2012-do-i-need-to-configure-my-switch.aspx#.UsU-6ySQ-M9
As for the cluster network, please refer to the following article:
http://technet.microsoft.com/en-us/library/dn550728.aspx#BKMK_NICTeam
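The two-team layout described above could be sketched as follows (a hedged example assuming Windows Server 2012 R2 on a Hyper-V host; the adapter, team, and switch names are hypothetical, substitute your own). Giving a default gateway only to the Management tNIC sidesteps the two-gateway problem, and each tNIC sits on its own subnet/VLAN:

```powershell
# Team for virtual machines: Switch Independent / Hyper-V Port
New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Team for the Management OS: Switch Independent / Dynamic
New-NetLbfoTeam -Name "Team-Mgmt" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# External vSwitch on the Management OS team, then tNICs (host vNICs) on it
New-VMSwitch -Name "vSwitch-Mgmt" -NetAdapterName "Team-Mgmt" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "Management"  -SwitchName "vSwitch-Mgmt"
Add-VMNetworkAdapter -ManagementOS -Name "ClusterPriv" -SwitchName "vSwitch-Mgmt"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMig"     -SwitchName "vSwitch-Mgmt"

# Tag each non-management tNIC onto its own VLAN (VLAN IDs are examples);
# configure a gateway and DNS only on the Management tNIC's IP settings.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ClusterPriv" -Access -VlanId 108
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMig"     -Access -VlanId 109
```

For the physical switch, the team members' ports would need to carry the corresponding VLANs as trunk/tagged ports; no port-channel/LACP configuration is needed in Switch Independent mode.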
Best Regards
Elton Ji