Nexus 1000V VEM Issues
I am having some problems with VSM/VEM connectivity after an upgrade that I'm hoping someone can help with.
I have a two-host ESXi cluster that I am upgrading from vSphere 5.0 to 5.5u1, along with a Nexus 1000V upgrade from SV2(2.1) to SV2(2.2). I upgraded vCenter without issue (I'm using the vCSA), but when I attempted to upgrade ESXi-1 to 5.5u1 using VUM, it complained that a VIB was incompatible. After tracing this VIB to the 1000V VEM, I created an ESXi 5.5u1 installer package containing the SV2(2.2) VEM VIB for ESXi 5.5 and attempted to use VUM again, but was still unsuccessful.
I removed the VEM VIB from the vDS and the host and was able to upgrade the host to 5.5u1. I tried to add it back to the vDS and was given the error below:
vDS operation failed on host esxi1, Received SOAP response fault from [<cs p:00007fa5d778d290, TCP:esxi1.gooch.net:443>]: invokeHostTransactionCall
Received SOAP response fault from [<cs p:1f3cee20, TCP:localhost:8307>]: invokeHostTransactionCall
An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
I installed the VEM VIB manually at the CLI with 'esxcli software vib install -d /tmp/cisco-vem-v164-4.2.1.2.2.2.0-3.2.1.zip' and I'm able to add it to the vDS, but when I connect the uplinks and migrate the L3 Control VMkernel, I get the following errors: the VSM complains about the SPROM when the module comes online, then it eventually drops the VEM.
2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esxi1 detected as module 3
2014 Mar 29 15:34:54 n1kv %VDC_MGR-2-VDC_CRITICAL: vdc_mgr has hit a critical error: SPROM data is invalid. Please reprogram your SPROM!
2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2014 Mar 29 15:37:14 n1kv %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 3 (heartbeats lost)
2014 Mar 29 15:37:19 n1kv %STP-2-SET_PORT_STATE_FAIL: Port state change req to PIXM failed, status = 0x41e80001 [failure] vdc 1, tree id 0, num ports 1, ports state BLK, opcode MTS_OPC_PIXM_SET_MULT_CBL_VLAN_BM_FOR_MULT_PORTS, msg id (2274781), rr_token 0x22B5DD
2014 Mar 29 15:37:21 n1kv %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
I have tried gracefully removing ESXi-1 from the vDS and cluster and reformatting it with a fresh install of ESXi 5.5u1, but when I try to join it to the N1KV it throws the same error.
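In case it helps others, the manual workaround was roughly the following (the VIB name is from my environment; check `esxcli software vib list` on your host first):

```
# On the host (in maintenance mode), before the 5.5u1 upgrade:
esxcli software vib list | grep cisco-vem      # identify the old VEM VIB
esxcli software vib remove -n cisco-vem-v164-esx
# After the host upgrade, install the SV2(2.2) VEM for ESXi 5.5:
esxcli software vib install -d /tmp/cisco-vem-v164-4.2.1.2.2.2.0-3.2.1.zip
vem status                                     # confirm the VEM module loads
```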
Hi,
The SET_PORT_STATE_FAIL message is usually thrown when there is a communication issue between the VSM and the VEM while the port-channel interface is being programmed.
What is the uplink port-profile configuration?
Are other hosts using this uplink port profile successfully?
Is the upstream configuration the same on an affected host and a working host (i.e., is the control VLAN allowed where necessary)?
Per kpate's post, the control VLAN needs to be a system VLAN on the uplink port profile.
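On the uplink port profile that looks something like this (VLAN IDs illustrative; substitute your control/packet VLANs):

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 260-261
  system vlan 260-261
  no shutdown
  state enabled
```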
The VDC SPROM message is a cosmetic defect:
https://tools.cisco.com/bugsearch/bug/CSCul65853/
HTH,
Joe
Similar Messages
-
[Nexus 1000v] VEM can't be add into VSM
hi all,
following my lab, I have some problems with the Nexus 1000V where the VEM can't be added to the VSM.
+ The VSM is already installed on ESX1 (standalone or HA) and you can see:
Cisco_N1KV# show module
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V active *
Mod Sw Hw
1 4.2(1)SV1(4a) 0.0
Mod MAC-Address(es) Serial-Num
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
Mod Server-IP Server-UUID Server-Name
1 10.4.110.123 NA NA
+ On ESX2, where the VEM is installed:
[root@esxhoadq ~]# vem status
VEM modules are loaded
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 3 128 1500 vmnic0
VEM Agent (vemdpa) is running
[root@esxhoadq ~]#
Any advice on this?
Thanks so much.
Hi,
I'm having a similar issue: the VEM installed on the ESXi host is not showing up on the VSM.
Can you check the following output and tell me what might be wrong?
This is the VEM status:
~ # vem status -v
Package vssnet-esx5.5.0-00000-release
Version 4.2.1.1.4.1.0-2.0.1
Build 1
Date Wed Jul 27 04:42:14 PDT 2011
Number of PassThru NICs are 0
VEM modules are loaded
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 4 128 1500 vmnic0
DVS Name Num Ports Used Ports Configured Ports MTU Uplinks
VSM11 256 40 256 1500 vmnic2,vmnic1
Number of PassThru NICs are 0
VEM Agent (vemdpa) is running
~ # vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port
18 UP UP F/B* 0 vmnic1
19 DOWN UP BLK 0 vmnic2
* F/B: Port is BLOCKED on some of the vlans.
Please run "vemcmd show port vlans" to see the details.
~ # vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 18 native_vlan 1 CBL 0
vlan(111) cbl 1, vlan(112) cbl 1,
~ # vemcmd show port vlans
Native VLAN Allowed
LTL VSM Port Mode VLAN State Vlans
18 T 1 FWD 111-112
19 A 1 BLK 1
~ # vemcmd show card
Card UUID type 2: ebd44e72-456b-11e0-0610-00000000108f
Card name: esx
Switch name: VSM11
Switch alias: DvsPortset-0
Switch uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
Card domain: 1
Card slot: 1
VEM Tunnel Mode: L2 Mode
VEM Control (AIPC) MAC: 00:02:3d:10:01:00
VEM Packet (Inband) MAC: 00:02:3d:20:01:00
VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
VEM SPAN MAC: 00:02:3d:30:01:00
Primary VSM MAC : 00:50:56:ac:00:42
Primary VSM PKT MAC : 00:50:56:ac:00:44
Primary VSM MGMT MAC : 00:50:56:ac:00:43
Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
Management IPv4 address: 10.1.240.30
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 111
Card packet VLAN: 112
Card Headless Mode : Yes
Processors: 8
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16712336
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
~ #
On VSM
VSM11# sh svs conn
connection vcenter:
ip address: 10.1.240.38
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: New Datacenter
admin:
max-ports: 8192
DVS uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 4.1.0 build-345043
VSM11# sh svs ?
connections Show connection information
domain Domain Configuration
neighbors Svs neighbors information
upgrade Svs upgrade information
VSM11# sh svs dom
SVS domain config:
Domain id: 1
Control vlan: 111
Packet vlan: 112
L2/L3 Control mode: L2
L3 control interface: NA
Status: Config push to VC successful.
VSM11# sh port
^
% Invalid command at '^' marker.
VSM11# sh run
!Command: show running-config
!Time: Sun Nov 20 11:35:52 2011
version 4.2(1)SV1(4a)
feature telnet
username admin password 5 $1$QhO77JvX$A8ykNUSxMRgqZ0DUUIn381 role network-admin
banner motd #Nexus 1000v Switch#
ssh key rsa 2048
ip domain-lookup
ip domain-lookup
hostname VSM11
snmp-server user admin network-admin auth md5 0x389a68db6dcbd7f7887542ea6f8effa1 priv 0x389a68db6dcbd7f7887542ea6f8effa1 localizedkey
vrf context management
ip route 0.0.0.0/0 10.1.240.254
vlan 1,111-112
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 111-112
no shutdown
system vlan 111-112
description "System profile"
state enabled
port-profile type vethernet servers11
vmware port-group
switchport mode access
switchport access vlan 11
no shutdown
description "Data Profile for VM Traffic"
port-profile type ethernet vm-uplink
vmware port-group
switchport mode access
switchport access vlan 11
no shutdown
description "Uplink profile for VM traffic"
state enabled
vdc VSM11 id 1
limit-resource vlan minimum 16 maximum 2049
limit-resource monitor-session minimum 0 maximum 2
limit-resource vrf minimum 16 maximum 8192
limit-resource port-channel minimum 0 maximum 768
limit-resource u4route-mem minimum 32 maximum 32
limit-resource u6route-mem minimum 16 maximum 16
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
interface mgmt0
ip address 10.1.240.124/24
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
svs-domain
domain id 1
control vlan 111
packet vlan 112
svs mode L2
svs connection vcenter
protocol vmware-vim
remote ip address 10.1.240.38 port 80
vmware dvs uuid "c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78" datacenter-name New Datacenter
max-ports 8192
connect
vsn type vsg global
tcp state-checks
vnm-policy-agent
registration-ip 0.0.0.0
shared-secret **********
log-level
thank you
Michel -
Nexus 1000v VEM module bouncing between hosts
I'm receiving these error messages on my N1KV and don't know how to fix them. I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue. How do I debug this?
My setup,
Two physical hosts running ESXi 5.1, the vCenter appliance, and an N1KV with two system uplinks and two iSCSI uplinks per host. Let me know if you need more output from logs or commands, thanks.
N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
N1KV# sh module
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V ha-standby
2 0 Virtual Supervisor Module Nexus1000V active *
3 248 Virtual Ethernet Module NA ok
Mod Sw Hw
1 4.2(1)SV2(1.1a) 0.0
2 4.2(1)SV2(1.1a) 0.0
3 4.2(1)SV2(1.1a) VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
Mod MAC-Address(es) Serial-Num
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
2 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
3 02-00-0c-00-03-00 to 02-00-0c-00-03-80 NA
Mod Server-IP Server-UUID Server-Name
1 192.168.54.2 NA NA
2 192.168.54.2 NA NA
3 192.168.51.100 03000200-0400-0500-0006-000700080009 NA
* this terminal session
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-1
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 51
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:b6:0c:b2
Primary VSM PKT MAC : 00:50:56:b6:35:3f
Primary VSM MGMT MAC : 00:50:56:b6:d5:12
Standby VSM CTRL MAC : 00:50:56:b6:96:f2
Management IPv4 address: 192.168.51.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : No
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669760
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: True
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 52
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:b6:0c:b2
Primary VSM PKT MAC : 00:50:56:b6:35:3f
Primary VSM MGMT MAC : 00:50:56:b6:d5:12
Standby VSM CTRL MAC : 00:50:56:b6:96:f2
Management IPv4 address: 192.168.52.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : Yes
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669764
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
! ports 1-6 connected to physical host A
interface GigabitEthernet1/0/1
description VMWARE ESXi Trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 1 mode active
! ports 7-12 connected to phys host B
interface GigabitEthernet1/0/7
description VMWARE ESXi Trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 2 mode active
OK, after deleting the N1KV VMs and vCenter and then reinstalling everything, I got the error again:
N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
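A quick way to quantify the flapping from a captured log (sample lines taken from the output above):

```shell
# Count module online/offline transitions in a saved VSM log.
cat > /tmp/n1kv.log <<'EOF'
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
EOF
echo "online:  $(grep -c 'MOD_ONLINE:' /tmp/n1kv.log)"
echo "offline: $(grep -c 'MOD_OFFLINE:' /tmp/n1kv.log)"
```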
Host A
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 1
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 52
VEM Control (AIPC) MAC: 00:02:3d:10:02:00
VEM Packet (Inband) MAC: 00:02:3d:20:02:00
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
VEM SPAN MAC: 00:02:3d:30:02:00
Primary VSM MAC : 00:50:56:b6:96:f2
Primary VSM PKT MAC : 00:50:56:b6:11:b6
Primary VSM MGMT MAC : 00:50:56:b6:48:c6
Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
Management IPv4 address: 192.168.52.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : Yes
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669764
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: No
Host B
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 51
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:a8:f5:f0
Primary VSM PKT MAC : 00:50:56:a8:3c:62
Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
Standby VSM CTRL MAC : 00:50:56:a8:30:d5
Management IPv4 address: 192.168.51.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : No
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669760
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: True
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
I used the Nexus 1000V Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
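Worth noting: the `vemcmd show card` output from both hosts reports the exact same Card UUID (03000200-0400-0500-0006-000700080009). The VSM uses that UUID, which comes from the server's BIOS/SMBIOS, to identify a VEM, so two hosts presenting the same UUID will fight over the same module slot. A quick sanity check on saved output (sample lines inline; feed each host's real `vemcmd show card` output instead):

```shell
# Extract and compare the Card UUID from each host's `vemcmd show card`.
uuid_a=$(printf '%s\n' 'Card UUID type 2: 03000200-0400-0500-0006-000700080009' \
  | awk -F': ' '/Card UUID/ {print $2}')
uuid_b=$(printf '%s\n' 'Card UUID type 2: 03000200-0400-0500-0006-000700080009' \
  | awk -F': ' '/Card UUID/ {print $2}')
if [ "$uuid_a" = "$uuid_b" ]; then
  echo "DUPLICATE card UUID: $uuid_a"
fi
```

If they match, the fix is outside the 1000V: give each server a unique SMBIOS UUID (BIOS setting or vendor tool), then re-add the hosts.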
Here is the other output you requested:
N1KV# show vms internal info dvs
DVS INFO:
DVS name: [N1KV]
UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
Description: [(null)]
Config version: [1]
Max ports: [8192]
DC name: [Galaxy]
OPQ data: size [1121], data: [data-version 1.0
switch-domain 2
switch-name N1KV
cp-version 4.2(1)SV2(1.1a)
control-vlan 1
system-primary-mac 00:50:56:a8:f5:f0
active-vsm packet mac 00:50:56:a8:3c:62
active-vsm mgmt mac 00:50:56:a8:b4:a4
standby-vsm ctrl mac 0050-56a8-30d5
inband-vlan 1
svs-mode L3
l3control-ipaddr 192.168.54.2
upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
cntl-type-mcast 0
profile dvportgroup-26 trunk 1,51-57,110
profile dvportgroup-26 mtu 9000
profile dvportgroup-27 access 51
profile dvportgroup-27 mtu 1500
profile dvportgroup-27 capability l3control
profile dvportgroup-28 access 52
profile dvportgroup-28 mtu 1500
profile dvportgroup-28 capability l3control
profile dvportgroup-29 access 53
profile dvportgroup-29 mtu 1500
profile dvportgroup-30 access 54
profile dvportgroup-30 mtu 1500
profile dvportgroup-31 access 55
profile dvportgroup-31 mtu 1500
profile dvportgroup-32 access 56
profile dvportgroup-32 mtu 1500
profile dvportgroup-34 trunk 220
profile dvportgroup-34 mtu 9000
profile dvportgroup-35 access 220
profile dvportgroup-35 mtu 1500
profile dvportgroup-35 capability iscsi-multipath
end-version 1.0
push_opq_data flag: [1]
show svs neighbors
Active Domain ID: 2
AIPC Interface MAC: 0050-56a8-f5f0
Inband Interface MAC: 0050-56a8-3c62
Src MAC Type Domain-id Node-id Last learnt (Sec. ago)
0050-56a8-30d5 VSM 2 0201 1020.45
0002-3d40-0202 VEM 2 0302 1.33
I cannot add Host A to the N1KV; it errors out with:
vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
Host B (192.168.51.100) was added fine; then I moved a VMkernel interface to the N1KV, which brought up the VEM and started the VEM flapping errors. -
Need download link for Cisco Nexus 1000V InterCloud
We had a similar issue with 5.2(1)SV3(1.3) and found this in the release notes:
ERSPAN
If the ERSPAN source and destination are in different subnets, and if the ERSPAN source is an L3 control VM kernel NIC attached to a Cisco Nexus 1000V VEM, you must enable proxy-ARP on the upstream switch.
If you do not enable proxy-ARP on the upstream switch (or router, if there is no default gateway), ERSPAN packets are not sent to the destination.
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/5_x/release_notes/b_Cisco_N1KV_VMware_521SV313_ReleaseNotes.html#concept_652D9BADC4B04C0997E7F6C29A2C8B1F
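On the upstream switch that amounts to enabling proxy-ARP on the SVI facing the ERSPAN source, for example (interface and addressing illustrative):

```
interface Vlan51
  ip address 192.168.51.1/24
  ip proxy-arp
```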
After enabling 'ip proxy-arp' on the upstream SVI it started working properly. -
VN-Link Hardware require Nexus 1000v yes or not?
I have a problem with VN-Link in hardware. When I create a port profile in UCS Manager and create a Port Profile Client, vCenter creates the port group too. But when I attach the network to a virtual machine by selecting the port group in vCenter, I can't see the virtual machine guest on the VM tab in UCS Manager.
The final question: does VN-Link in hardware require the Nexus 1000V to be installed on ESX, yes or no? The UCS Manager GUI documentation says a DVS switch is required.
Thank you for the reply. I have successfully turned on VN-Link in hardware by following this video --> http://tinyurl.com/23p896k
and I have installed the Nexus 1000V VEM on ESX to enable VN-Link in hardware.
I need to test the performance of the CNA card (Palo) and report to my CEO.
- How do I test it?
- What tool should I use?
PS. Sorry for my English. -
Nexus 1000v VSM can't communicate with the VEM
This is the configuration I have on my vsm
!Command: show running-config
!Time: Thu Dec 20 02:15:30 2012
version 4.2(1)SV2(1.1)
svs switch edition essential
no feature telnet
banner motd #Nexus 1000v Switch#
ssh key rsa 2048
ip domain-lookup
ip host Nexus-1000v 172.16.0.69
hostname Nexus-1000v
errdisable recovery cause failed-port-state
vem 3
host vmware id 78201fe5-cc43-e211-0000-00000000000c
vem 4
host vmware id e51f2078-43cc-11e2-0000-000000000009
priv 0xa2cb98ffa3f2bc53380d54d63b6752db localizedkey
vrf context management
ip route 0.0.0.0/0 172.16.0.1
vlan 1-2
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type ethernet vmware-uplinks
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1-3967,4048-4093
channel-group auto mode on
no shutdown
system vlan 2
state enabled
port-profile type vethernet Management
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet vMotion
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet ServidoresGestion
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet L3-VSM
capability l3control
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
system vlan 2
state enabled
port-profile type vethernet VSG-Data
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
port-profile type vethernet VSG-HA
vmware port-group
switchport mode access
switchport access vlan 2
no shutdown
state enabled
vdc Nexus-1000v id 1
limit-resource vlan minimum 16 maximum 2049
limit-resource monitor-session minimum 0 maximum 2
limit-resource vrf minimum 16 maximum 8192
limit-resource port-channel minimum 0 maximum 768
limit-resource u4route-mem minimum 1 maximum 1
limit-resource u6route-mem minimum 1 maximum 1
interface mgmt0
ip address 172.16.0.69/25
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-1
boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-2
boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-2
svs-domain
domain id 1
control vlan 1
packet vlan 1
svs mode L3 interface mgmt0
svs connection vcenter
protocol vmware-vim
remote ip address 172.16.0.66 port 80
vmware dvs uuid "ae 31 14 50 cf b2 e7 3a-5c 48 65 0f 01 9b b5 b1" datacenter-name DTIC Datacenter
admin user n1kUser
max-ports 8192
connect
vservice global type vsg
tcp state-checks invalid-ack
tcp state-checks seq-past-window
no tcp state-checks window-variation
no bypass asa-traffic
vnm-policy-agent
registration-ip 172.16.0.70
shared-secret **********
policy-agent-image bootflash:/vnmc-vsmpa.2.0.0.38.bin
log-level
For some reason my VSM can't see the VEM. It could before, but then my server crashed without my having done a copy run start, and when it booted up all my config except the uplinks was lost.
When I tried to configure the connection again, it didn't work.
I'm also attaching a screen capture of the vDS and a capture of the regular switch.
I would very much appreciate any help you can give, and will provide any configuration details you might need.
Thank you so much.
Carlos,
Looking at vds.jpg, you do not have any VEM VMkernel interface attached to the L3-VSM port-profile. To fix the VSM-VEM communication problem, either migrate your VEM management VMkernel interface to the L3-VSM port-profile on the vDS, or create a new VMkernel port on your VEM/host and attach it to the L3-VSM port-profile. -
Nexus 1000V private-vlan issue
Hello
I need to transmit both the private VLANs (as a promiscuous trunk) and regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
Thank you in advance
Lucas
The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
You only need an interface on the ESX host if you decide to use L3 control. In that case you would create a new VMkernel interface on the ESX host, or use an existing one. -
Nexus 1000v - port-channel "refresh"
Hi All,
My question is, does anyone have any information on this 1000v command:
Nexus-1000v(config)# port-channel internal device-id table refresh
I am looking for a way for the port-channel interface to be automatically removed from the 1000v once the VEM has been deleted; currently the port-channel interface does not disappear when the VEM is removed. This seems to cause problems once the same VEM is re-added later on: ports are getting sent into quarantine and ending up in invalid states (e.g. a NoPortProfile state when there is actually a port-profile attached).
Anyway, if anyone can explain the above command or tell me how to find out more, that would be great. I can't find it documented anywhere, and the context-sensitive help in NX-OS is vague at best.
Brendan,
I don't have much information on that command, but I do know it won't remove any unused port-channels. They have to be manually deleted if they're no longer needed.
The port-channel ID will remain even after a VEM is removed, in case the assigned VEM comes back. When a VEM is decommissioned permanently, I'll do a "no vem x" to also remove the host entry for that VEM from the VSM. This way the module slot number can be re-assigned to the next new VEM inserted. After adding/removing VEMs, just do a "show port-channel summary" to see any unused port-channel IDs, and delete them. It's a quick and painless task.
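For reference, the cleanup after a permanent VEM removal looks something like this on the VSM (slot and channel numbers are examples):

```
show port-channel summary        ! spot port-channels with no member ports
configure terminal
  no interface port-channel 2    ! delete the stale channel
  no vem 4                       ! free the module slot and host entry
end
copy running-config startup-config
```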
I would hope this wouldn't be a common issue - how often are you deleting/removing VEMs?
Regards,
Robert -
Weird syslog format messages with Nexus 1000v
I'm trying out the Nexus 1000v, and have the VEM configured to write logs to my syslog server. The thing is, the messages are in a weird format that my log management tools cannot parse. Here is an example:
<189>: 2012 Oct 21 15:22:40 UTC: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on unknown_session
I found the documentation rather amusing, where it states "The syslog client functionality is RFC-5424 compliant" - it doesn't look like they've even read the RFC! The format is closer to the older (but more often found in the wild) RFC 3164, though it isn't compliant with that either :/
Anyway, I guess the main issue here is that the hostname of the 1000v is not being added to the logs (it is set in my config). Any ideas how I can fix this?
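One stopgap until the missing-hostname issue is sorted is to rewrite the header at the collector. A sed sketch against the sample line above (the hostname `n1kv-vem1` is made up):

```shell
# Insert a hostname after the syslog priority field.
echo '<189>: 2012 Oct 21 15:22:40 UTC: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on unknown_session' \
  | sed -E 's/^(<[0-9]+>): /\1 n1kv-vem1: /'
```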
Thanks!
Hi,
Do you have vCenter installed on a Windows 2012 server? The installation will not continue until vCenter is installed.
Hardik -
Cisco Nexus 1000v on Hyper-v 2012 R2
Dears;
I have deployed Cisco Nexus 1000v on Hyper-V 2012 R2 hosts, and I'm in the phase of testing and exploring features. While doing this I removed the Nexus virtual switch (VEM) from a host. It disappeared from the host, but I couldn't use the uplink previously attached to the switch, as the uplink still appeared attached on the Nexus 1000v. I tried to remove it several ways; eventually the host became unusable and I had to rebuild it.
The question here: there is no mention in the Cisco documents of how to uninstall or remove the VEM attached to a host. Can anyone help with this?
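For what it's worth, on the Hyper-V side the rough shape of a clean removal (the switch name below is a placeholder, not from any particular deployment) is to delete the logical switch from the host before uninstalling the VEM package, so the uplink NIC is released first:

PS C:\> Get-VMSwitch                                 # confirm the Nexus logical switch name
PS C:\> Remove-VMSwitch -Name "n1kv-LogicalSwitch"   # detach the switch, freeing the uplink NIC
# then uninstall the Cisco VEM package via Programs and Features (or msiexec /x)

This is a sketch of the general order of operations, not a validated Cisco procedure.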
Thanks
Regards
Zoning is generally a term used with fibre channel, but I think I understand what you mean.
Microsoft Failover Clusters rely on shared storage. So you would configure your storage so that it is accessible from all three nodes of the cluster. Any LUN you want to be part of the cluster should be presented to all nodes. With iSCSI,
it is recommended to use two different IP subnets and configure MPIO. The LUNs have to be formatted as NTFS volumes. Run the cluster validation wizard once you think you have things configured correctly. It will help you find any potential
configuration issues.
After you have run cluster validation and there aren't any warnings left that you can't resolve, build the cluster. The cluster will form with the available LUNs as cluster storage. Configure the storage as Cluster Shared Volumes
for the VMs, and leave the witness disk as the witness. By default, the cluster will take the smallest LUN as the witness disk. If you are using the cluster only for Hyper-V (recommended), you do not need to assign drive letters to any of the disks.
You do not need, nor is it recommended, to use pass-through disks. There are many downsides to pass-through disks and maybe one benefit, and that one is very iffy.
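A sketch of that flow in PowerShell (node, cluster, and disk names are placeholders):

PS C:\> Test-Cluster -Node node1,node2,node3             # cluster validation wizard, CLI form
PS C:\> New-Cluster -Name HVCL1 -Node node1,node2,node3  # forms the cluster; eligible LUNs become cluster storage
PS C:\> Get-ClusterResource                              # review disks; smallest LUN becomes witness by default
PS C:\> Add-ClusterSharedVolume -Name "Cluster Disk 2"   # promote a data LUN to a CSV for VM placement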
. : | : . : | : . tim -
Nexus 1000v repo is not available
Hi everyone.
The Cisco yum repo for the Nexus 1000v is not available at the moment. I am wondering: has Cisco ended its experiment with the free Nexus 1000v, or do I need to contact someone (who?) to ask them to fix this problem?
PS Link to the repo: https://cnsg-yum-server.cisco.com/yumrepo
Let's set the record straight here - to avoid confusion.
1. VEMs will continue to forward traffic in the event one or both VSMs are unavailable - this requires the VEM to remain online and not reboot while both VSMs are offline. VSM communication is only required for config changes (and LACP negotiation prior to 1.4).
2. If no VSM is reachable and a VEM is rebooted, only the system VLANs will come up in a forwarding state; all other non-system VLANs will remain down. This solves the chicken-and-egg problem of a VEM needing initial network connectivity to a VSM in order to obtain its programming.
The ONLY VLANs & vEth Profiles that should be set as system vlans are:
1000v-Control
1000v-Packet
Service Console/VMkernel for Mgmt
IP Storage (iSCSI or NFS)
Everything else should not be defined as a system VLAN, including VMotion - a common mistake.
**Remember that for a vEth port profile to behave like a system profile, the VLAN must be defined as a system VLAN on BOTH the vEth and Eth port profiles - a two-factor check. This allows port profiles that are perhaps not critical, yet share the same VLAN ID, to behave differently.
There are a total of 16 profiles that can include system VLANs. If you exceed this, you can potentially run into issues where the opaque data pushed from vCenter is truncated, causing programming errors on your VEMs. Adhering to the limitations above should never lead to this situation.
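As a sketch of that two-factor check (the VLAN IDs and profile names here are made up):

port-profile type ethernet SYSTEM-UPLINK
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  system vlan 10,20          <- system VLANs declared on the Eth (uplink) profile
  state enabled
port-profile type vethernet VMK-MGMT
  switchport access vlan 10
  system vlan 10             <- the same VLAN must also be a system VLAN here to be protected
  state enabled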
Regards,
Robert -
Cisco Nexus 1000v stops inheriting
Guys,
I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP uplink port profile, which means that not all VLANs get presented down a given trunk port - it's like it gets completely out of sync somehow. An example is below.
THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
show int trunk
Po9 100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
sh run int po9
interface port-channel9
inherit port-profile DATA-UP
switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
show int trunk
Po2 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
sh run int po2
interface port-channel2
inherit port-profile DATA-UP
I have no idea why this keeps happening. When I remove the manual static trunk configuration on po9, everything is fine; a few days later, it happens again. And it's not just po9 - at least 3 port-channels are affected.
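For reference, the temporary fix I apply looks roughly like this (a sketch - the exact no-form syntax may differ by release):

n1kv# configure terminal
n1kv(config)# interface port-channel 9
n1kv(config-if)# no switchport trunk allowed vlan add 438-439,446,449-450,591,850   <- drop the interface-level override
n1kv(config-if)# inherit port-profile DATA-UP                                       <- re-sync with the profile's trunk list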
My DATA-UP port-profile configuration looks like this, and all port-channels should reflect the allowed VLANs, but some are way out.
port-profile type ethernet DATA-UP
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
channel-group auto mode on sub-group cdp
no shutdown
state enabled
The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
The Cisco Nexus version is 4.2.1
Anyone seen this problem?
Cheers
Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure.
If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
There are two main guides. One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade. They're not very long guides, and should be easy to follow.
1000v Upgrade Guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
VEM Upgrade Guides:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
In a nutshell the procedure looks like this:
-Backup of VSM Config
-Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
-Upgrade standby VSM
-Perform switchover
-Upgrade image on old active (current standby)
-Upgrade VEM modules
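On the VSM side, much of the above is driven by a single command on releases that support in-service upgrade (the image filenames below are placeholders for whatever release you're moving to):

n1kv# copy running-config bootflash:backup-cfg       <- step 1: config backup
n1kv# show install all impact kickstart bootflash:n1000v-kickstart.bin system bootflash:n1000v-system.bin
n1kv# install all kickstart bootflash:n1000v-kickstart.bin system bootflash:n1000v-system.bin
n1kv# show module                                    <- afterwards, both VSMs should show the new version

The "install all" command upgrades the standby VSM, performs the switchover, and then upgrades the former active for you.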
One decision you'll need to make is whether to use Update Manager for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when. It allows you to migrate VMs off a host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. It's a non-stop process, so there's a little less control from that perspective. My own preference: for any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
Let me know if you have any other questions.
Regards,
Robert -
Nexus 1K VEM module shutdown (with DELL BLADE server)
Hello, This is Vince.
I am doing a PoC with an important customer.
Can anyone help me explain what the problem is?
I have found a couple of strange situations with a Nexus 1000V on Dell blade servers.
The network diagram is like below.
I installed vSphere ESXi on each of the two Dell blade servers.
As the diagram shows, each server is connected to a Cisco N5K via an M8024 Dell blade switch.
- Two N1KV VMs are installed on the ESXi hosts (as primary and secondary, of course).
- The N5Ks are connected to the M8024 in a vPC.
- The VSM and VEM check each other via a Layer 3 control interface.
- The uplink port-profile's port-channel load balancing is MAC pinning.
interface control0
ip address 10.10.100.10/24
svs-domain
domain id 1
control vlan 1
packet vlan 1
svs mode L3 interface control0
port-profile type ethernet Up-Link
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1-2,10,16,30,77-78,88,100,110,120-121,130
switchport trunk allowed vlan add 140-141,150,160-161,166,266,366
service-policy type queuing output N1KV_SVC_Uplink
channel-group auto mode on mac-pinning
no shutdown
system vlan 1,10,30,100
state enabled
n1000v# show module
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V ha-standby
2 0 Virtual Supervisor Module Nexus1000V active *
3 332 Virtual Ethernet Module NA ok
4 332 Virtual Ethernet Module NA ok
Mod Sw Hw
1 4.2(1)SV2(2.1a) 0.0
2 4.2(1)SV2(2.1a) 0.0
3 4.2(1)SV2(2.1a) VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)
4 4.2(1)SV2(2.1a) VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)
Mod Server-IP Server-UUID Server-Name
1 10.10.10.10 NA NA
2 10.10.10.10 NA NA
3 10.10.10.101 4c4c4544-0038-4210-8053-b5c04f485931 10.10.10.101
4 10.10.10.102 4c4c4544-0043-5710-8053-b4c04f335731 10.10.10.102
Let me explain the strange things that happened.
If I move the primary N1KV VSM from the ESXi host that is module 3 to the other ESXi host (module 4), the VEM suddenly shuts down.
Here are the syslogs.
2013 Dec 20 15:45:22 n1000v %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 4 (heartbeats lost)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Ethernet4/7 is detached (module removed)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Ethernet4/8 is detached (module removed)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet1 is detached (module removed)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet17 is detached (module removed)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet9 is detached (module removed)
2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet37 is detached (module removed)
2013 Dec 20 15:46:53 n1000v %VEM_MGR-2-MOD_OFFLINE: Module 4 is offline
To make it work again, I have to do two things.
First, the vSwitch load-balancing policy must be set to source-MAC-based
(port-ID-based is the default).
Second, the order of the switch (NIC teaming) failover is very important.
If I change this order, the VEM goes offline very soon.
Here is a screen capture of these options (you may not understand the Korean labels).
In my opinion, the main problem is the link between the ESXi hosts and the M8024s.
As you saw, each ESXi host is connected to two M8024 Dell blade switches separately.
I read the manual on the N1K's uplink load-balancing options.
Even though there are 16 different port-channel load-balancing methods,
only the source-MAC method should be used if the upstream switches do not support a matching port-channel option.
But I don't know exactly why this situation happens.
Can anyone help me make this work better?
Thanks in advance.
Best Regards,
Vince
There's not enough information to determine the cause from those two outputs alone. All those commands tell us is that the VSM is removing/attaching the VEM.
The normal cause of a VEM flapping is a problem with control communication. The loss of 6 consecutive heartbeats will cause the VEM to detach from the VSM. We need to isolate the reason why.
-Which version of 1000v & ESX?
-Are multiple VEMs affected or just one?
-Are the VSM's interfaces hosted on the DVS or vSwitch?
-What is the network topology between the VEM and VSM (primarily the control path)?
-Do you have a Cisco SR #? I can take a look into it. TAC is your best course of action for an issue like this. There will likely need to be live troubleshooting of your network environment to determine the cause.
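A few commands that usually help narrow down heartbeat loss (run from the VSM and from the ESXi host's shell; the control0 address is taken from the config posted above):

n1kv# show svs domain                <- check domain id and control mode (L2/L3)
n1kv# show svs neighbors             <- does the VSM still see the VEM?
~ # vemcmd show card                 <- on the host: domain id, card state, VSM MACs
~ # vmkping -I vmk0 10.10.100.10     <- can the L3 control vmknic reach the VSM?

If the vmkping fails during the flap, the problem is in the path between the host's control vmknic and the VSM, not in the 1000v itself.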
Regards,
Robert -
Nexus 1000V and strange ping behavior
Hi,
I am using a Nexus 1000v and FI 6248s with Nexus 5Ks in a redundant architecture, and I see strange behavior with VMs.
I am using port profiles without any problems, but in one case I have this issue:
I have 2 VMs assigned to the same port profile
When the 2 VMs are on the same ESX host, I can ping (from one VM) the gateway and the other VM. Now, when I move one of the VMs to another ESX host (same chassis or not):
From both, I can ping the gateway and a remote IP, but the VMs are unreachable between themselves,
and a remote PC is able to ping both VMs.
I checked the MAC table: from the N5K it's OK, from the FI 6248 it's OK, but on the N1K I am unable to see the MAC address of either VM.
What I tried (I cleared the MAC table at each step):
Assigned the VM to another vmnic - it works.
On UCS I moved it to another vmnic - it works.
On UCS I changed the QoS policy - it works.
I reassigned it, and the old behavior returned.
I checked all trunk links - they're OK.
So I don't understand why I have this strange behavior, and how can I troubleshoot it more deeply?
I would like to avoid this if possible, but the next step will be to create a new vmnic, assign the same policy, then delete the old vmnic and recreate it.
Regards
From what you mentioned, here are my thoughts.
When the two VMs are on the same host, they can reach each other. This is because they're switched locally in the VEM, so this doesn't tell us much other than that the VEM is working as expected.
When you move one of the VMs to a different UCS ESX host, the path changes. Let's assume you've moved one VM to a different host, within the UCS system.
UCS-Blade1(Host-A) - VM1
UCS-Blade2(Host-B) - VM2
There are two path options from VM1 -> VM2:
VM1 -> Blade1 Uplink -> Fabric Interconnect A -> Blade 2 Uplink -> VM2
or
VM1-> Blade1 Uplink -> Fabric Interconnect A -> Upstream Switch -> Fabric Interconnect B -> Blade 2 Uplink -> VM2
For these two options I've seen many instances where the FIRST works fine but the second doesn't. Why? As you can see, option 1 has a path from Host A to FI-A and back down to Host B. In this path there's no northbound switching outside of UCS. This requires both VMs to be pinned to host uplinks going to the same Fabric Interconnect.
In the second option, the path goes from Host-A up to FI-A, then northbound to the upstream switch, then eventually back down to FI-B and Host-B. When this path is taken and the two VMs can't reach each other, you have a problem with your upstream switches. If both VMs reside in the same subnet, it's a Layer 2 problem. If they're in different subnets, it's a Layer 2 or 3 problem somewhere north of UCS.
So knowing this - why did manual pinning on the N1K fix your problem? Pinning forces a VM to a particular uplink. What likely happened in your case is that you pinned both VMs to host uplinks that go to the same UCS Fabric Interconnect (avoiding northbound switching). Your original problem still exists, so you're not out of the woods yet.
Ask yourself: why are just these two VMs affected? Are they possibly the only VMs using a particular VLAN or subnet?
An easy way to verify the pinning is to use the command below, where "x" is the module # of the host the VMs are running on.
module vem x execute vemcmd show port-old
I explain the command further in another post here -> https://supportforums.cisco.com/message/3717261#3717261. In your case you'll be looking for the VM1 and VM2 LTLs, finding out which subgroup ID (SGID) each uses, and then which SGID belongs to which VMNIC.
I bet you'll find that the manual pinning "that works" takes the path from each host to the same FI. If that's the case, look northbound for your L2 problem.
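To illustrate what to look for (this output is mocked up, not from your system), the interesting columns are the LTL, the SGID each vEth is pinned to, and which SGID each vmnic owns:

~ # vemcmd show port-old
  LTL  VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port
   17    Eth3/1     UP    UP    FWD     305     0  vmnic0      <- SGID 0 = uplink toward FI-A
   18    Eth3/2     UP    UP    FWD     305     1  vmnic1      <- SGID 1 = uplink toward FI-B
   49     Veth2     UP    UP    FWD       0     0  VM1.eth0    <- pinned to SGID 0
   50     Veth5     UP    UP    FWD       0     1  VM2.eth0    <- pinned to SGID 1 (other fabric!)

In a layout like this, VM1 and VM2 on different hosts would have to be switched north of UCS to reach each other.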
Regards,
Robert -
Nexus 1000v, VMWare ESX and Microsoft SC VMM
Hi,
I'm curious whether anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
Support currently exists for the 1000v with Hyper-V and SCVMM, using the Cisco 1000v software for MS Hyper-V. There is no such support for VMware ESX.
I'm curious what others with VMware, a Nexus 1000v (or equivalent), and SCVMM have done to work around this issue.
Trying to get some ideas.
Thanks
Aaron,
The steps you have above are correct; you will need steps 1 - 4 to get it working correctly. Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
To answer your questions
1) I've seen multiple customers run this configuration.
2) The steps you have are correct
3) You can't enable/disable IGMP snooping on UCS. It's enabled by default and not a configurable option. There's no need to change anything within UCS in regard to MS NLB with the procedure above. FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release, 2.1.
This is the correct method until such time as we have the option of configuring static multicast MAC entries on
the Nexus 1000v. If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.
This will give more "push" to our development team to prioritize this request.
Hopefully some other customers can share their experience.
Regards,
Robert