Migrate physical adapter to a specific Nexus 1000v uplink port group
When I run the script below in VMware PowerCLI, the physical adapters get added to the N1K's "Unused_Or_Quarantine_Uplink" port group. I have a "sys-uplink" port group in my N1K (VSM-DVS-SCALE), and I want the physical adapters to be added to "sys-uplink" instead.
The issue is that Add-VDSwitchPhysicalNetworkAdapter does not have an option to specify which port group the adapter should be added to. Are there any workarounds? It looks like other customers are facing the same issue and moving away from the N1K to VMware's own DVS (see
https://communities.vmware.com/thread/442897?start=0&tstart=0)
$vmhost = Get-Datacenter Dao | Get-VMHost "192.100.12.16"
$myVDSwitch = Get-VDSwitch -Name "VSM-DVS-SCALE" -Location Dao
$hostsPhysicalNic = $vmhost | Get-VMHostNetworkAdapter -Name vmnic2,vmnic1
$myVDPortGroup = Get-VDPortgroup -Name "sys-uplink" -VDSwitch $myVDSwitch   # original had "-name $myVDPortGroup", a self-reference
Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $hostsPhysicalNic -DistributedSwitch $myVDSwitch
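One possible workaround (an untested sketch; it assumes the pNICs were already added to the DVS by the script above, and the object/property names are taken from the vSphere SDK, not from this thread) is to drop to the vSphere API and pin each pNIC to the uplink port group via DistributedVirtualSwitchHostMemberPnicSpec.UplinkPortgroupKey:

```
# Hedged sketch: assign vmnic1/vmnic2 to the "sys-uplink" uplink port group
# through HostNetworkSystem.UpdateNetworkConfig (requires a live vCenter).
$netSys = Get-View $vmhost.ExtensionData.ConfigManager.NetworkSystem
$pg     = Get-VDPortgroup -Name "sys-uplink" -VDSwitch $myVDSwitch

$config = New-Object VMware.Vim.HostNetworkConfig
$proxy  = New-Object VMware.Vim.HostProxySwitchConfig
$proxy.ChangeOperation = "edit"
$proxy.Uuid = $myVDSwitch.ExtensionData.Uuid
$proxy.Spec = New-Object VMware.Vim.HostProxySwitchSpec
$backing = New-Object VMware.Vim.DistributedVirtualSwitchHostMemberPnicBacking
foreach ($nic in @("vmnic1","vmnic2")) {
    $pnicSpec = New-Object VMware.Vim.DistributedVirtualSwitchHostMemberPnicSpec
    $pnicSpec.PnicDevice         = $nic
    $pnicSpec.UplinkPortgroupKey = $pg.Key   # pins the pNIC to sys-uplink
    $backing.PnicSpec += $pnicSpec
}
$proxy.Spec.Backing = $backing
$config.ProxySwitch += $proxy
$netSys.UpdateNetworkConfig($config, "modify")
```

This bypasses the cmdlet's limitation by editing the host's proxy-switch backing directly; verify the resulting uplink assignment in vCenter afterwards.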
Similar Messages
-
Cisco Nexus 1000v stops inheriting
Guys,
I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port profile, which means that not all VLANs get presented down a given trunk port; it's like it gets completely out of sync somehow. An example is below.
THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
show int trunk
Po9 100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
sh run int po9
interface port-channel9
inherit port-profile DATA-UP
switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
show int trunk
Po2 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
sh run int po2
interface port-channel2
inherit port-profile DATA-UP
I have no idea why this keeps happening. When I remove the manual static trunk configuration on po9, everything is fine; a few days later it happens again. And it's not just po9: there are at least 3 port-channels it affects.
My DATA-UP link port-profile configuration looks like this and all port channels should reflect the VLANs allowed but some are way out.
port-profile type ethernet DATA-UP
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
channel-group auto mode on sub-group cdp
no shutdown
state enabled
The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
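A quick way to see exactly which VLANs a broken port-channel has dropped relative to the port profile is to expand and diff the two allowed-VLAN strings. This is a hypothetical helper (not from the thread), using the Po9 output above:

```python
def parse_vlans(spec):
    """Expand a VLAN list like '100,400-401,591' into a set of ints."""
    vlans = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            vlans.update(range(int(lo), int(hi) + 1))
        else:
            vlans.add(int(part))
    return vlans

# Allowed VLANs from the DATA-UP port profile vs. the misbehaving Po9
profile = parse_vlans("100,292,300,313,400-401,405-406,412,429-430,434,"
                      "438-439,446,449-450,582,591,850")
po9 = parse_vlans("100,400-401,405-406,412,430,434,438-439,446,449-450,591,850")

# VLANs the broken port-channel is no longer trunking
missing = sorted(profile - po9)
print(missing)  # [292, 300, 313, 429, 582]
```

Running the same diff against the upstream switch's "show interface trunk" output confirms whether the mismatch is on the 1000v side only.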
The Cisco Nexus version is 4.2.1
Anyone seen this problem?
Cheers
Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure.
If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
There are two main guides. One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade. They're not very long guides, and should be easy to follow.
1000v Upgrade Guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
VEM Upgrade Guides:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
In a nutshell the procedure looks like this:
-Backup of VSM Config
-Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
-Upgrade standby VSM
-Perform switchover
-Upgrade image on old active (current standby)
-Upgrade VEM modules
One decision you'll need to make is whether to use Update Manager for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when. It will allow you to migrate VMs off a host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. It's a non-stop process, so there's a little less control from that perspective. My own preference: in any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
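For the manual route, the per-host loop looks roughly like this (illustrative commands only; the VIB bundle name is a placeholder, not from this thread, so substitute the one matching your VEM/ESXi version):

```
# On each host, one at a time:
#  1. vMotion the VMs off, then enter maintenance mode
vim-cmd hostsvc/maintenance_mode_enter
#  2. Update the VEM VIB (bundle name is an example)
esxcli software vib update -d /tmp/VEM-bundle.zip
#  3. Exit maintenance mode and verify the module on the host and the VSM
vim-cmd hostsvc/maintenance_mode_exit
vem status
```

On the VSM, "show module" should then report the new VEM software version for that slot before you move to the next host.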
Let me know if you have any other questions.
Regards,
Robert -
Nexus 1000V VMotion between 2 different 1KV Switches
Hello Virtual Experts,
I was informed that you cannot vMotion from an N1KV in one domain ID instance to another N1KV in a different domain ID.
As I understand it, every Nexus 1000v switch needs to be in its own domain.
If this is the case, how does Cisco facilitate vMotion between switches? How does Cisco facilitate long-range vMotion?
Any response is much appreciated.
/r
Rob
Robert,
You are correct: just as between any vDS and a standard vSwitch, you can't vMotion VMs between them (at least while the network interfaces are connected). vMotion will fail the network port group validation. The networking is what is tripping you up here, and it's not specific to Cisco; it's a VMware validation requirement.
With long distance vMotion, the VMs are still part of the same DVS so there's no issue here.
You have a couple options here.
1. You can do a cold migration, then re-assign the network binding on the destination switch. This would require VM downtime.
2. If going from a host connected to a vDS to a host using a vSwitch, you can create a temporary vSwitch on the source host, create a Port Group with the same name as the destination host's Port Group, give it an uplink, and then migrate the VM from there. This can be done online without downtime for the VM.
Not sure of any other methods, but if anyone else has an idea, feel free to share!
Regards,
Robert -
Can a Nexus 1000v be configured to NOT do local switching in an ESX host?
Before anyone answers with the obvious "yes, use an external Nexus switch and VN-Tag": the question concerns a setup where a 3120 in a blade chassis connects to the ESX hosts that have a 1000v installed, so the first hop outside the ESX host is not a Nexus box.
Looking for whether this is possible, if so how, and if not, where that might be documented. I have a client whose security policy prohibits switching (yes, even on the same VLAN) within a host (in this case a blade server). Oh, and there is an insistence on using 3120s inside the blade chassis.
Has to be the strangest request I have had in a while.
Any data would be GREATLY appreciated!
Thanks for the follow-up.
So by private VLANs, are you referring to "PVLAN":
"PVLANs: PVLANs are a new feature available with the VMware vDS and the Cisco Nexus 1000V Series. PVLANs provide a simple mechanism for isolating virtual machines in the same VLAN from each other. The VMware vDS implements PVLAN enforcement at the destination host. The Cisco Nexus 1000V Series supports a highly efficient enforcement mechanism that filters packets at the source rather than at the destination, helping ensure that no unwanted traffic traverses the physical network and so increasing the network bandwidth available to other virtual machines" -
VN-Tag with Nexus 1000v and Blades
Hi folks,
A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag. Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000. I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP C-Class blades? My guess is NO, since Cisco just kicked HP to the curb. But if that's the case, what are my options? Pass-through switches? (ugh!)
Previous thread:
https://supportforums.cisco.com/message/469303#469303
wondering the same...
-
Nexus 1000V private-vlan issue
Hello
I need to transmit both the private VLANs (as a promiscuous trunk) and regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
Thank you in advance
Lucas
The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
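Back to the promiscuous-trunk part of the question: on the 1000v, an uplink that carries both regular VLANs and PVLAN traffic is typically configured along these lines (a sketch with illustrative VLAN numbers, not taken from this thread; check the PVLAN chapter of the 1000v configuration guide for your release):

```
port-profile type ethernet pvlan-uplink
  vmware port-group
  switchport mode private-vlan trunk promiscuous
  ! regular VLANs ride the trunk as normal allowed VLANs
  switchport private-vlan trunk allowed vlan 10,20,100
  ! map each primary VLAN to its secondary (isolated/community) VLANs
  switchport private-vlan mapping trunk 100 101,102
  no shutdown
  state enabled
```

The upstream physical switch port needs a matching promiscuous PVLAN trunk configuration with the same primary-to-secondary mappings.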
You only need an interface on the ESX host if you decide to use L3 control. In that instance you would create a new, or use an existing, VMkernel (vmk) interface on the ESX host. -
Hi,
We are planning to install Cisco Nexus 1000v in our environment. Before we install it, we want to explore the Cisco Nexus 1000v a little:
• I know there are 2 elements to the Cisco 1K, the VEM and the VSM. Is the VSM required? Can we configure a VEM individually?
• How does the Nexus 1K integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
• In terms of alarming and reporting, do we need to use SNMP trap/get against each individual VEM, or can we use the VSM for that? Or can we get Nexus 1000v alarming and reporting from VMware vCenter?
• Apart from using a Nexus 1010, what's the recommended hosting location for the VSM (same host as a VEM, a different VM, a different physical server)?
Foyez Ahammed
Hi Foyez,
Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
The Nexus1000v is a Virtual Distributed Switch (software based) from Cisco which integrated with the vSphere environment to provide uniform networking across your vmware environment for the host as well as the VMs. There are two components to the N1K infrastructure 1) VSM 2) VEM.
VSM - Virtual supervisor module is the one which controls the entire N1K setup and is from where the configuration is done for the VEM modules, interfaces, security, monitoring etc. VSM is the one which interacts with the VC.
VEM - Virtual Ethernet Modules are simply the modules, or virtual linecards, which provide the connectivity options and virtual ports for the VMs and other virtual interfaces. Each ESX host today can only have one VEM. These VEMs receive their configuration / programming from the VSM.
If you are aware of any other switching products from Cisco, like the Cat 6k switches, the N1K behaves the same way but in a software / virtual environment, where the VSM is the equivalent of a SUP and the VEMs are similar to line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity as the 6k backplane would for communication between the modules and the SUP (the VSM in this case).
*The N1K configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual / physical ports from the VC.
*You can run the VSM either on the Nexus1010 as a Virtual service blade (VSB) or as a normal VM on any of the ESX/ESXi server. The VSM and the VEM on the same server are fully supported.
You can refer the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
Hope this answers your queries!
./Abhinav -
I am having some problems with VSM/VEM connectivity after an upgrade that I'm hoping someone can help with.
I have a 2-host ESXi cluster that I am upgrading from vSphere 5.0 to 5.5u1, and I'm upgrading a Nexus 1000V from SV2(2.1) to SV2(2.2). I upgraded vCenter without issue (I'm using the vCSA), but when I attempted to upgrade ESXi-1 to 5.5u1 using VUM it complained that a VIB was incompatible. After tracing this VIB to the 1000V VEM, I created an ESXi 5.5u1 installer package containing the SV2(2.2) VEM VIB for ESXi 5.5 and attempted to use VUM again, but was still unsuccessful.
I removed the VEM VIB from the vDS and the host and was able to upgrade the host to 5.5u1. I tried to add it back to the vDS and was given the error below:
vDS operation failed on host esxi1, Received SOAP response fault from [<cs p:00007fa5d778d290, TCP:esxi1.gooch.net:443>]: invokeHostTransactionCall
Received SOAP response fault from [<cs p:1f3cee20, TCP:localhost:8307>]: invokeHostTransactionCall
An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
I installed the VEM VIB manually at the CLI with 'esxcli software vib install -d /tmp/cisco-vem-v164-4.2.1.2.2.2.0-3.2.1.zip' and I'm able to add it to the vDS, but when I connect the uplinks and migrate the L3 Control VMKernel, I get the following error where it complains about the SPROM when the module comes online, then it eventually drops the VEM.
2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esxi1 detected as module 3
2014 Mar 29 15:34:54 n1kv %VDC_MGR-2-VDC_CRITICAL: vdc_mgr has hit a critical error: SPROM data is invalid. Please reprogram your SPROM!
2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2014 Mar 29 15:37:14 n1kv %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 3 (heartbeats lost)
2014 Mar 29 15:37:19 n1kv %STP-2-SET_PORT_STATE_FAIL: Port state change req to PIXM failed, status = 0x41e80001 [failure] vdc 1, tree id 0, num ports 1, ports state BLK, opcode MTS_OPC_PIXM_SET_MULT_CBL_VLAN_BM_FOR_MULT_PORTS, msg id (2274781), rr_token 0x22B5DD
2014 Mar 29 15:37:21 n1kv %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
I have tried gracefully removing ESXi-1 from the vDS and cluster, reformatting it with a fresh install of ESXi 5.5u1, but when I try to join it to the N1KV it throws the same error.
Hi,
The SET_PORT_STATE_FAIL message is usually thrown when there is a communication issue between the VSM and the VEM while the port-channel interface is being programmed.
What is the uplink port profile configuration?
Other hosts are using this uplink port profile successfully?
The upstream configuration on an affected and a working host is the same? (ie control VLAN allowed where necessary)
Per kpate's post, control VLAN needs to be a system VLAN on the uplink port profile.
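That recommendation looks like this in the uplink port profile (a sketch with placeholder VLAN numbers; use the control VLAN from your own svs-domain configuration):

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 51,52
  no shutdown
  ! system VLANs stay forwarding even before the VEM is programmed by the
  ! VSM, so VSM-VEM control traffic survives host reboots and VSM outages
  system vlan 51,52
  state enabled
```

Without the control VLAN in the system vlan list, the VEM can come online briefly and then lose heartbeats exactly as shown in the log above.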
The VDC SPROM message is a cosmetic defect
https://tools.cisco.com/bugsearch/bug/CSCul65853/
HTH,
Joe -
[Nexus 1000v] VEM can't be add into VSM
hi all,
following my lab, i have some problems with Nexus 1000V when VEM can't be add into VSM.
+ on VSM has already installed on ESX 1 (standalone or ha) and you can see:
Cisco_N1KV# show module
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V active *
Mod Sw Hw
1 4.2(1)SV1(4a) 0.0
Mod MAC-Address(es) Serial-Num
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
Mod Server-IP Server-UUID Server-Name
1 10.4.110.123 NA NA
+ on ESX2 that 's installed VEM
[root@esxhoadq ~]# vem status
VEM modules are loaded
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 3 128 1500 vmnic0
VEM Agent (vemdpa) is running
[root@esxhoadq ~]#
any advices for this,
thanks so much
Hi,
I'm having a similar issue: the VEM installed on the ESXi host is not showing up on the VSM.
Can you check the following output and tell me what might be wrong?
This is the VEM status:
~ # vem status -v
Package vssnet-esx5.5.0-00000-release
Version 4.2.1.1.4.1.0-2.0.1
Build 1
Date Wed Jul 27 04:42:14 PDT 2011
Number of PassThru NICs are 0
VEM modules are loaded
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 4 128 1500 vmnic0
DVS Name Num Ports Used Ports Configured Ports MTU Uplinks
VSM11 256 40 256 1500 vmnic2,vmnic1
Number of PassThru NICs are 0
VEM Agent (vemdpa) is running
~ # vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port
18 UP UP F/B* 0 vmnic1
19 DOWN UP BLK 0 vmnic2
* F/B: Port is BLOCKED on some of the vlans.
Please run "vemcmd show port vlans" to see the details.
~ # vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 18 native_vlan 1 CBL 0
vlan(111) cbl 1, vlan(112) cbl 1,
~ # vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port
18 UP UP F/B* 0 vmnic1
19 DOWN UP BLK 0 vmnic2
* F/B: Port is BLOCKED on some of the vlans.
Please run "vemcmd show port vlans" to see the details.
~ # vemcmd show port vlans
Native VLAN Allowed
LTL VSM Port Mode VLAN State Vlans
18 T 1 FWD 111-112
19 A 1 BLK 1
~ # vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port
18 UP UP F/B* 0 vmnic1
19 DOWN UP BLK 0 vmnic2
* F/B: Port is BLOCKED on some of the vlans.
Please run "vemcmd show port vlans" to see the details.
~ # vemcmd show port vlans
Native VLAN Allowed
LTL VSM Port Mode VLAN State Vlans
18 T 1 FWD 111-112
19 A 1 BLK 1
~ # vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 18 native_vlan 1 CBL 0
vlan(111) cbl 1, vlan(112) cbl 1,
~ # vemcmd show card
Card UUID type 2: ebd44e72-456b-11e0-0610-00000000108f
Card name: esx
Switch name: VSM11
Switch alias: DvsPortset-0
Switch uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
Card domain: 1
Card slot: 1
VEM Tunnel Mode: L2 Mode
VEM Control (AIPC) MAC: 00:02:3d:10:01:00
VEM Packet (Inband) MAC: 00:02:3d:20:01:00
VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
VEM SPAN MAC: 00:02:3d:30:01:00
Primary VSM MAC : 00:50:56:ac:00:42
Primary VSM PKT MAC : 00:50:56:ac:00:44
Primary VSM MGMT MAC : 00:50:56:ac:00:43
Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
Management IPv4 address: 10.1.240.30
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 111
Card packet VLAN: 112
Card Headless Mode : Yes
Processors: 8
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16712336
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
~ #
On VSM
VSM11# sh svs conn
connection vcenter:
ip address: 10.1.240.38
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: New Datacenter
admin:
max-ports: 8192
DVS uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 4.1.0 build-345043
VSM11# sh svs ?
connections Show connection information
domain Domain Configuration
neighbors Svs neighbors information
upgrade Svs upgrade information
VSM11# sh svs dom
SVS domain config:
Domain id: 1
Control vlan: 111
Packet vlan: 112
L2/L3 Control mode: L2
L3 control interface: NA
Status: Config push to VC successful.
VSM11# sh port
^
% Invalid command at '^' marker.
VSM11# sh run
!Command: show running-config
!Time: Sun Nov 20 11:35:52 2011
version 4.2(1)SV1(4a)
feature telnet
username admin password 5 $1$QhO77JvX$A8ykNUSxMRgqZ0DUUIn381 role network-admin
banner motd #Nexus 1000v Switch#
ssh key rsa 2048
ip domain-lookup
ip domain-lookup
hostname VSM11
snmp-server user admin network-admin auth md5 0x389a68db6dcbd7f7887542ea6f8effa1
priv 0x389a68db6dcbd7f7887542ea6f8effa1 localizedkey
vrf context management
ip route 0.0.0.0/0 10.1.240.254
vlan 1,111-112
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
vmware port-group
shutdown
description Port-group created for Nexus1000V internal usage. Do not use.
state enabled
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 111-112
no shutdown
system vlan 111-112
description "System profile"
state enabled
port-profile type vethernet servers11
vmware port-group
switchport mode access
switchport access vlan 11
no shutdown
description "Data Profile for VM Traffic"
port-profile type ethernet vm-uplink
vmware port-group
switchport mode access
switchport access vlan 11
no shutdown
description "Uplink profile for VM traffic"
state enabled
vdc VSM11 id 1
limit-resource vlan minimum 16 maximum 2049
limit-resource monitor-session minimum 0 maximum 2
limit-resource vrf minimum 16 maximum 8192
limit-resource port-channel minimum 0 maximum 768
limit-resource u4route-mem minimum 32 maximum 32
limit-resource u6route-mem minimum 16 maximum 16
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
interface mgmt0
ip address 10.1.240.124/24
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
svs-domain
domain id 1
control vlan 111
packet vlan 112
svs mode L2
svs connection vcenter
protocol vmware-vim
remote ip address 10.1.240.38 port 80
vmware dvs uuid "c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78" datacenter-name New Datacenter
max-ports 8192
connect
vsn type vsg global
tcp state-checks
vnm-policy-agent
registration-ip 0.0.0.0
shared-secret **********
log-level
thank you
Michel -
Nexus 1000v 4.2.1 - Interface Ethernet3/5 has been quarantined due to Cmd Failure
Hello,
I get the error message "Interface Ethernet3/5 has been quarantined due to Cmd Failure" when I try to activate the system uplink ports on the Nexus 1000v VSM. The symptom occurs under 4.2.1.SV1.4 (a fresh setup; earlier tests were done with 4.0.4). Unfortunately, the link to the 4.2.1 troubleshooting guide does not work (it seems it hasn't been released yet).
Has anyone an idea what the root cause could be?
The VSM and VEM run on an HP DL3xxG7 with 2 x dual-port 10Gbit CNA adapters.
Nexus 1k config:
vlan 1
vlan 260
name Servers
vlan 340
name NfsA
vlan 357
name vMotion
vlan 920
name Packet_Control
port-profile type ethernet SYSTEM-UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
spanning-tree port type edge trunk
switchport trunk native vlan 1
channel-group auto mode active
no shutdown
system vlan 1,357,920
state enabled
port-profile type ethernet STORAGE-UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 340
channel-group auto mode active
no shutdown
system vlan 340
state enabled
When I do a no shut on the physical ports I get:
switch(config-if)# no shut
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/7 has been quarantined due to Cmd Failure
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/5 has been quarantined due to Cmd Failure
The other etherchannel (Port Profile STORAGE-UPLINK) does work pretty well...
The peer switches are two Nexus 5k with VPC.
config:
port-profile type port-channel VMWare-LAN
switchport mode trunk
switchport trunk allowed vlan 260, 301, 303, 305, 307, 357, 544, 920
spanning-tree port type edge trunk
switchport trunk native vlan 1
state enabled
!
interface port-channel18
inherit port-profile VMWare-LAN
description CHA vshpvm001 LAN
vpc 18
speed 10000
!
interface Ethernet1/18
description CHA vshpvm001 LAN
switchport mode trunk
switchport trunk allowed vlan 260,301,303,305,307,357,544,920
channel-group 18 mode active
switch# show port-profile sync-status
Ethernet3/5
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands:
errors:
cached command failed
recovery steps:
unshut interface
Ethernet3/7
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands:
errors:
cached command failed
recovery steps:
unshut interface
kind regards,
andy
Sean,
thank you !
"show accounting log" helped me - I had the command spanning-tree port type edge trunk in the config, and I somehow hadn't realized that we didn't have this command in the 4.0.4 lab setup... so it was a copy/paste error (I copied the port-profile config from the N5k down to the N1k).
Fri Feb 25 07:20:32 2011:update:ppm.13880:admin:configure terminal ; interface Ethernet3/5 ; spanning-tree port type edge trunk (FAILURE)
Fri Feb 25 07:20:32 2011:update:ppm.13890:admin:configure terminal ; interface Ethernet3/5 ; shutdown (FAILURE)
As the N1k doesn't do STP at all (or does it?), it's no wonder the CLI was complaining...
Maybe this command should get more attention in the tshoot guide as it seems to be a very helpful one.
Cheers & Thanks,
Andy -
Nexus 1000v VEM module bouncing between hosts
I'm receiving these error messages on my N1KV and don't know how to fix them. I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue. How do I debug this?
My setup,
Two physical hosts running esxi 5.1, vcenter appliance, n1kv with two system uplinks and two uplinks for iscsi for each host. Let me know if you need more output from logs or commands, thanks.
N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
N1KV# sh module
Mod Ports Module-Type Model Status
1 0 Virtual Supervisor Module Nexus1000V ha-standby
2 0 Virtual Supervisor Module Nexus1000V active *
3 248 Virtual Ethernet Module NA ok
Mod Sw Hw
1 4.2(1)SV2(1.1a) 0.0
2 4.2(1)SV2(1.1a) 0.0
3 4.2(1)SV2(1.1a) VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
Mod MAC-Address(es) Serial-Num
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
2 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
3 02-00-0c-00-03-00 to 02-00-0c-00-03-80 NA
Mod Server-IP Server-UUID Server-Name
1 192.168.54.2 NA NA
2 192.168.54.2 NA NA
3 192.168.51.100 03000200-0400-0500-0006-000700080009 NA
* this terminal session
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-1
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 51
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:b6:0c:b2
Primary VSM PKT MAC : 00:50:56:b6:35:3f
Primary VSM MGMT MAC : 00:50:56:b6:d5:12
Standby VSM CTRL MAC : 00:50:56:b6:96:f2
Management IPv4 address: 192.168.51.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : No
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669760
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: True
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 52
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:b6:0c:b2
Primary VSM PKT MAC : 00:50:56:b6:35:3f
Primary VSM MGMT MAC : 00:50:56:b6:d5:12
Standby VSM CTRL MAC : 00:50:56:b6:96:f2
Management IPv4 address: 192.168.52.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : Yes
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669764
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
! ports 1-6 connected to physical host A
interface GigabitEthernet1/0/1
description VMWARE ESXi Trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 1 mode active
! ports 7-12 connected to phys host B
interface GigabitEthernet1/0/7
description VMWARE ESXi Trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 2 mode active
ok, after deleting the n1kv VMs and vCenter and then reinstalling everything, I got the error again,
N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
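The repeating detected/online/state-conflict/offline cycle above is easier to see when the events are tallied. The helper below is a hypothetical sketch (not a Cisco tool), assuming only the `%VEM_MGR-2-*` message format shown in the log:

```python
import re

# One detect/online/conflict/offline cycle, sampled from the log above.
SAMPLE = """\
2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict
2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
"""

def flap_counts(log_text):
    """Tally VEM_MGR event types so a repeating flap cycle stands out:
    equal counts of DETECTED/ONLINE/CONFLICT/OFFLINE means every join
    is being torn down again."""
    counts = {}
    for line in log_text.splitlines():
        m = re.search(r"%VEM_MGR-2-(\w+):", line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts
```

Running this over the full log above would show the four event types occurring in lockstep, i.e. a steady flap rather than a one-off error.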
Host A
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
Card domain: 2
Card slot: 1
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 52
VEM Control (AIPC) MAC: 00:02:3d:10:02:00
VEM Packet (Inband) MAC: 00:02:3d:20:02:00
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
VEM SPAN MAC: 00:02:3d:30:02:00
Primary VSM MAC : 00:50:56:b6:96:f2
Primary VSM PKT MAC : 00:50:56:b6:11:b6
Primary VSM MGMT MAC : 00:50:56:b6:48:c6
Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
Management IPv4 address: 192.168.52.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : Yes
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669764
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: No
Host B
~ # vemcmd show card
Card UUID type 2: 03000200-0400-0500-0006-000700080009
Card name:
Switch name: N1KV
Switch alias: DvsPortset-0
Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
Card domain: 2
Card slot: 3
VEM Tunnel Mode: L3 Mode
L3 Ctrl Index: 49
L3 Ctrl VLAN: 51
VEM Control (AIPC) MAC: 00:02:3d:10:02:02
VEM Packet (Inband) MAC: 00:02:3d:20:02:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
VEM SPAN MAC: 00:02:3d:30:02:02
Primary VSM MAC : 00:50:56:a8:f5:f0
Primary VSM PKT MAC : 00:50:56:a8:3c:62
Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
Standby VSM CTRL MAC : 00:50:56:a8:30:d5
Management IPv4 address: 192.168.51.100
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address: 192.168.54.2
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 1
Card packet VLAN: 1
Control type multicast: No
Card Headless Mode : No
Processors: 4
Processor Cores: 4
Processor Sockets: 1
Kernel Memory: 16669760
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: True
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
I used the Nexus 1000v Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
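Notice that both `vemcmd show card` outputs above report the identical, placeholder-looking Card UUID (03000200-0400-0500-0006-000700080009). The VSM distinguishes VEM modules by this UUID, so two hosts presenting the same one will fight over the same module slot, which matches the flapping seen earlier. A quick, hypothetical Python check for the condition (the host names and trimmed outputs below are illustrative):

```python
def card_uuids(show_card_by_host):
    """Extract the 'Card UUID' value from each host's `vemcmd show card`
    output and report any UUID shared by more than one host."""
    uuids = {}
    for host, text in show_card_by_host.items():
        for line in text.splitlines():
            if line.strip().startswith("Card UUID"):
                # "Card UUID type 2: <uuid>" -> take everything after the colon
                uuids[host] = line.split(":", 1)[1].strip()
    seen = list(uuids.values())
    dupes = {u for u in seen if seen.count(u) > 1}
    return uuids, dupes

# Both hosts in the outputs above report the same UUID:
outputs = {
    "hostA": "Card UUID type  2: 03000200-0400-0500-0006-000700080009\nCard slot: 1",
    "hostB": "Card UUID type  2: 03000200-0400-0500-0006-000700080009\nCard slot: 3",
}
```

A duplicate here usually points at the server hardware reporting a default SMBIOS/BIOS UUID rather than a unique one, which is worth checking in the host BIOS before blaming the installer.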
Here is the other output you requested,
N1KV# show vms internal info dvs
DVS INFO:
DVS name: [N1KV]
UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
Description: [(null)]
Config version: [1]
Max ports: [8192]
DC name: [Galaxy]
OPQ data: size [1121], data: [data-version 1.0
switch-domain 2
switch-name N1KV
cp-version 4.2(1)SV2(1.1a)
control-vlan 1
system-primary-mac 00:50:56:a8:f5:f0
active-vsm packet mac 00:50:56:a8:3c:62
active-vsm mgmt mac 00:50:56:a8:b4:a4
standby-vsm ctrl mac 0050-56a8-30d5
inband-vlan 1
svs-mode L3
l3control-ipaddr 192.168.54.2
upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
cntl-type-mcast 0
profile dvportgroup-26 trunk 1,51-57,110
profile dvportgroup-26 mtu 9000
profile dvportgroup-27 access 51
profile dvportgroup-27 mtu 1500
profile dvportgroup-27 capability l3control
profile dvportgroup-28 access 52
profile dvportgroup-28 mtu 1500
profile dvportgroup-28 capability l3control
profile dvportgroup-29 access 53
profile dvportgroup-29 mtu 1500
profile dvportgroup-30 access 54
profile dvportgroup-30 mtu 1500
profile dvportgroup-31 access 55
profile dvportgroup-31 mtu 1500
profile dvportgroup-32 access 56
profile dvportgroup-32 mtu 1500
profile dvportgroup-34 trunk 220
profile dvportgroup-34 mtu 9000
profile dvportgroup-35 access 220
profile dvportgroup-35 mtu 1500
profile dvportgroup-35 capability iscsi-multipath
end-version 1.0
push_opq_data flag: [1]
show svs neighbors
Active Domain ID: 2
AIPC Interface MAC: 0050-56a8-f5f0
Inband Interface MAC: 0050-56a8-3c62
Src MAC Type Domain-id Node-id Last learnt (Sec. ago)
0050-56a8-30d5 VSM 2 0201 1020.45
0002-3d40-0202 VEM 2 0302 1.33
I cannot add Host A to the N1KV; it errors out with:
vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
Host B (192.168.51.100) was added fine; then I moved a vmkernel interface to the N1KV, which brought up the VEM and produced the VEM flapping errors. -
Hello everyone
I am confused about how the integration between the N1K and UCS Manager works:
First question:
If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, is the data flow between them handled by the upstream switch (in this case the UCS Fabric Interconnect)?
I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs run). I also created VLAN 100 (the same as on the N1K).
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
I have tried to read different documents, but I did not understand.
Thanks
Since you have defined switchport mode access vlan 100 on the uplink port-profile of the Nexus 1000V, it sends all Ethernet frames untagged (without an 802.1Q tag).
When you include only VLAN 100 in the vNIC profile (not as the native VLAN), as in the screenshot below, untagged frames are dropped, because UCS expects all frames received on this port to be tagged.
When you change the vNIC template to include only the default VLAN as the native VLAN, as in the screenshot below, you effectively bridge two VLANs (VLAN 100 and VLAN 1): the UCS FI now puts all untagged frames into VLAN 1 and sends them untagged to the other ESXi host, and that host bridges VLAN 1 back to VLAN 100 via switchport mode access vlan 100 on its uplink port profile. -
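The native-VLAN behavior described in that answer can be captured in a toy model. The function below is a simplified, illustrative sketch of how a trunk port classifies an incoming frame; real FI behavior has more cases:

```python
def fi_classify(frame_vlan, allowed, native=None):
    """Toy model of ingress classification on a UCS FI trunk port.
    Returns the VLAN the frame lands in, or None if it is dropped."""
    if frame_vlan is None:        # untagged frame (e.g. from an N1K
        return native             # access-mode uplink); dropped if no
                                  # native VLAN is configured
    return frame_vlan if frame_vlan in allowed else None

# N1K uplink in 'switchport mode access vlan 100' sends frames untagged:
#   allowed={100}, no native      -> frame dropped (pings fail)
#   allowed={1}, native=1         -> frame lands in VLAN 1 (VLANs 100
#                                    and 1 are effectively bridged)
```

This is why adding VLAN 1 as the native VLAN "fixes" the ping: the untagged frames are no longer dropped, at the cost of merging the two VLANs.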
Nexus 1000v UCS Manager and Cisco UCS M81KR
Hello everyone
I am confused about how the integration between the N1K and UCS Manager works:
First question:
If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, is the data flow between them handled by the upstream switch (in this case the UCS Fabric Interconnect)?
I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs run). I also created VLAN 100 (the same as on the N1K).
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
I have tried to read different documents, but I did not understand.
Thanks
This document may help...
Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, is the data flow between them handled by the upstream switch (in this case the UCS Fabric Interconnect)?
-Yes. Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network would need to bridge the VLAN between the two physical NICs.
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
- The N1K port profiles are switchport access, making them untagged. This corresponds to the native VLAN in UCS. If there is no native VLAN in the UCS configuration, the upstream network does not bridge the VLAN.
Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
- All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively (untagged). On the N1K, you will want to leave your vEthernet port profiles as 'switchport mode access'. For your Ethernet profiles, you will want 'switchport mode trunk'. Use an unused VLAN as the native VLAN. All production VLANs will be passed from the N1K to UCS as tagged VLANs.
Thank You,
Dan Laden
PDI Helpdesk
http://www.cisco.com/go/pdihelpdesk -
VM-FEX and Nexus 1000v relation
Hi
I am new to the virtualization world, and I need to know the relation between the Cisco Nexus 1000V and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000V.
Regards
Ahmed,
Sorry for taking this long to get back to you.
The Nexus 1000V is a virtualized switch, so any traffic entering or leaving a VM must first pass through the virtualization layer, introducing a small delay that for some applications (VMs) can be too much.
With VM-FEX you gain the option to bypass the virtualization layer, for example with "Pass-Through" mode, where the vmnics are directly assigned to and managed by the OS. This minimizes the delay and makes the VMs look as if they were directly attached; it also offloads work from the host CPU, improving host/VM performance.
The need for one or the other will be defined as always by the needs your organization/business has.
Benefits of VM-FEX (from cisco.com):
Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
Improved network security: Contains VLAN proliferation
Optimized network utilization: Reduces broadcast domains
Enhanced application performance: Offloads virtual machine switching from host CPU to parent switch application-specific integrated circuits (ASICs)
Benefits of Nexus 1000v here on another post from Rob Burns:
https://supportforums.cisco.com/thread/2087541
https://communities.vmware.com/thread/316542?tstart=0
I hope that helps
-Kenny -
Nexus 1000v port-channels questions
Hi,
I'm running vCenter 4.1 and the Nexus 1000V with about 30 ESX hosts.
I'm using one system uplink port profile for all 30 ESX hosts. On each host I have 2 NICs going to one Catalyst 3750 switch stack (Switch A) and another 2 NICs going to a second Catalyst 3750 switch stack (Switch B).
The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
port-profile type ethernet uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
switchport trunk native vlan 500
mtu 1500
channel-group auto mode on sub-group cdp
no shutdown
system vlan 988-989
description System-Uplink
state enabled
And the port-channels on the Catalyst 3750 are configured as follows:
interface Port-channel11
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
end
interface GigabitEthernet1/0/18
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
channel-group 11 mode on
spanning-tree portfast trunk
spanning-tree guard root
end
interface GigabitEthernet1/0/1
description ESX-10(Virtual Machine)
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 800,802,900,988-991
switchport mode trunk
switchport nonegotiate
channel-group 11 mode on
spanning-tree portfast trunk
spanning-tree guard root
end
Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750s should not be configured in a port-channel as above, but as an individual trunk.
First question: Is that statement correct? Are my uplinks configured wrong, and should they be configured as individual trunks instead of a port-channel?
Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port profile with the MAC pinning configuration and then move one ESX host at a time (with no VMs on it) to the new profile? That way I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
Thanks.
Hello,
From what I understood, you have the following setup:
- Each ESX host has 4 NICS
- 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
- all 4 vmnics on the ESX host use the same Ethernet port-profile
- this has 'channel-group auto mode on sub-group cdp'
- The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP: with CDP loss, the port-channels would go down.
'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can be regular trunk ports. If all your ports are on the same stack, you can also look at LACP. The CDP option will not be supported in future releases; in fact, it was supposed to be removed in 4.2(1)SV1(2.1), but I still see the command available (ignore the 4.2(1)SV1(4) next to it), so I'll follow up on this internally:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
For migrating, the best option would be as you suggested: create a new port-profile with mac-pinning and move one host at a time. You can migrate the VMs off each host before you change its port-profile, and you can remove the upstream port-channel config as well.
Thanks,
Shankar
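To illustrate the mac-pinning recommendation above: each virtual interface is deterministically pinned to a single physical uplink, so the upstream 3750 ports can stay plain trunks with no channel-group. The sketch below is purely illustrative (the hash is not Cisco's actual selection algorithm):

```python
def pin_to_uplink(veth_mac, uplinks):
    """Hypothetical sketch of mac-pinning-style selection: the same
    source MAC always maps to the same uplink, so no frame ever
    appears on two upstream switches at once and no port-channel
    is needed on the 3750s."""
    idx = sum(int(octet, 16) for octet in veth_mac.split(":")) % len(uplinks)
    return uplinks[idx]
```

Because the choice is per-MAC and deterministic, a VM's traffic stays on one uplink until that link fails, at which point the VEM repins it, which is exactly why the upstream stacks can be independent.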