Nexus 1000v 4.2.1 - Interface Ethernet3/5 has been quarantined due to Cmd Failure

Hello,
I get the error message "Interface Ethernet3/5 has been quarantined due to Cmd Failure" when I try to activate the system uplink ports on the Nexus 1000v VSM. The symptom occurs under 4.2.1.SV1.4 (a fresh setup; I did earlier tests with 4.0.4). Unfortunately, the link to the 4.2.1 troubleshooting guide does not work (it seems it hasn't been released yet).
Does anyone have an idea what the root cause could be?
The VSM and VEM run on an HP DL3xx G7 with 2 x dual-port 10 Gbit CNA adapters.
Nexus 1k config:
vlan 1
vlan 260
  name Servers
vlan 340
  name NfsA
vlan 357
  name vMotion
vlan 920
  name Packet_Control
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
  spanning-tree port type edge trunk
  switchport trunk native vlan 1
  channel-group auto mode active
  no shutdown
  system vlan 1,357,920
  state enabled
port-profile type ethernet STORAGE-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 340
  channel-group auto mode active
  no shutdown
  system vlan 340
  state enabled
When I do a no shut on the physical ports I get:
switch(config-if)# no shut
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/7 has been quarantined due to Cmd Failure
2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/5 has been quarantined due to Cmd Failure
The other etherchannel (port profile STORAGE-UPLINK) works fine...
The peer switches are two Nexus 5ks with vPC.
N5k config:
port-profile type port-channel VMWare-LAN
  switchport mode trunk
  switchport trunk allowed vlan 260,301,303,305,307,357,544,920
  spanning-tree port type edge trunk
  switchport trunk native vlan 1
  state enabled
!
interface port-channel18
  inherit port-profile VMWare-LAN
  description CHA vshpvm001 LAN
  vpc 18
  speed 10000
!
interface Ethernet1/18
  description CHA vshpvm001 LAN
  switchport mode trunk
  switchport trunk allowed vlan 260,301,303,305,307,357,544,920
  channel-group 18 mode active
switch# show port-profile sync-status
Ethernet3/5
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands: 
errors:
    cached command failed
recovery steps:
    unshut interface
Ethernet3/7
port-profile: SYSTEM-UPLINK
interface status: quarantine
sync status: out of sync
cached commands: 
errors:
    cached command failed
recovery steps:
    unshut interface
Kind regards,
Andy

Sean,
thank you!
"show accounting log" helped me - I had the command spanning-tree port type edge trunk in the config, and I somehow didn't realize that we didn't have this command in the 4.0.4 lab setup. So it was a copy/paste error (I copied the port-profile config from the N5k down to the N1k).
Fri Feb 25 07:20:32 2011:update:ppm.13880:admin:configure terminal ; interface Ethernet3/5 ; spanning-tree port type edge trunk (FAILURE)
Fri Feb 25 07:20:32 2011:update:ppm.13890:admin:configure terminal ; interface Ethernet3/5 ; shutdown (FAILURE)
As the N1k doesn't do STP at all (or does it?), it's no wonder the CLI was complaining...
Maybe this command should get more attention in the troubleshooting guide, as it seems to be a very helpful one.
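For anyone hitting the same message: the fix is simply the original profile minus the spanning-tree line. A minimal sketch, followed by the standard quarantine recovery (the "unshut interface" step from show port-profile sync-status):
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
  switchport trunk native vlan 1
  channel-group auto mode active
  no shutdown
  system vlan 1,357,920
  state enabled
switch(config)# interface Ethernet3/5
switch(config-if)# shutdown
switch(config-if)# no shutdown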
Cheers & Thanks,
Andy

Similar Messages

  • Renumbering Nexus 1000V External Interfaces?

    Hello All,
    Does anyone know a way to renumber the interfaces on a VEM so they're sequential? My VEMs have 4 interfaces that I'm using as uplinks, and they're all sequential starting at 1. However, my VEM 7's numbering is out of order. Truncated output from show interface status is shown below.
    Eth5/1         --                 up       trunk     full    unknown --
    Eth5/2         --                 up       trunk     full    unknown --
    Eth5/3         --                 up       trunk     full    unknown --
    Eth5/4         --                 up       trunk     full    unknown --
    Eth7/1         --                 up       trunk     full    unknown --
    Eth7/4         --                 up       trunk     full    unknown --
    Eth7/6         --                 up       trunk     full    unknown --
    Eth7/7         --                 up       trunk     full    unknown --
    The disparity in numbering is messing up some automation.
    Regards,
    Tony

    I may have answered my own question.
    The interface numbers in the Nexus 1000V are taken from the vmnic numbers on the ESXi host where the VEM resides. So, going into /etc/vmware/esx.conf (carefully, of course), I was able to renumber the interfaces there. After a reboot, I had to change the port groups on the newly numbered interfaces in the vCenter DVS, and voila - interfaces are back to normal.
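    For illustration only, the vmnic-to-device mappings in esx.conf are entries that look roughly like the following (the PCI addresses here are hypothetical; back the file up before editing):
    /device/000:002:00.0/vmkname = "vmnic0"
    /device/000:003:00.0/vmkname = "vmnic1"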

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
    I'm curious if anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
    Support currently exists for the 1000v and SCVMM using the Cisco 1000v software for MS Hyper-V. There is no such support for VMware ESX.
    I'm curious as to what others with VMware, the Nexus 1000v (or equivalent), and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
    The steps you have above are correct; you will need steps 1 - 4 to get it working correctly. Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions
    1) I've seen multiple customers run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS. It's enabled by default and not a configurable option. There's no need to change anything within UCS in regards to MS NLB with the procedure above. FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release, 2.1.
    This is the correct method until we have the option of configuring static multicast mac entries on the Nexus 1000v. If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.
    This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • SCVMM Kicks out Nexus 1000V Uplink NIC Any ideas?

    SCVMM suddenly kicks out the Nexus 1000V uplink NIC, preventing me from remediating the change.
    I also get an error message.
    We are using Hyper-V as the virtualization platform.

    Hello,
    You can use one Ethernet port-profile with a channel-group command (like 'channel-group auto mode on mac-pinning') and assign it to all the vmnic interfaces that need to carry the same set of VLANs.
    The same port-profile can be used on other hosts too. The N1k will automatically bundle (port-channel) the interfaces that belong to the same ESX host (accomplished through the 'channel-group auto' command).
    If you need the interfaces to carry separate sets of VLANs, then you need a different port-profile.
    A port-profile is just a container for a common set of configuration that you can apply to multiple interfaces across multiple hosts.
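    As a minimal sketch (the profile name and VLAN IDs are placeholders):
    port-profile type ethernet UPLINK-MACPIN
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100,200
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled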
    Thanks,
    Shankar

  • Firewall between Nexus 1000V VSM and vCenter

    Hi,
    Customer has multiple security zones in the environment, and VMware vCenter is located in a Management Security Zone. VSMs in the security zones have a dedicated management interface facing the Management Security Zone, with a firewall in between. What ports do we need to open for the communication between VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentions TCP/80 and TCP/443. Are these outbound from VSM to vCenter? Are there any requirements from vCenter to VSM? What's the best practice for VSM management interface configuration in a multiple-security-zone environment? Thanks.

    Avi -
    You need the connection between vCenter and the VSM anytime you want to add or make changes to the existing port-profiles. This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
    One problem when vCenter is down is what you pointed out - configuration changes cannot be pushed.
    The VEM/VSM relationship is independent of the VSM/vCenter connection. There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
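    You can confirm the control mode, the control/packet VLANs, and which VEMs the VSM currently sees with:
    VSM# show svs domain
    VSM# show svs neighbors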
    Jen

  • Nexus 1000v - port-channel "refresh"

    Hi All,
    My question is, does anyone have any information on this 1000v command:
    Nexus-1000v(config)# port-channel internal device-id table refresh
    I am looking for a way for the port-channel interface to be automatically removed from the 1000v once the VEM has been deleted; currently the port-channel interface does not disappear when the VEM has been removed. This seems to be causing problems once the same VEM is re-added later on. Ports are getting sent into quarantine and ending up in invalid states (e.g. a NoPortProfile state when there is actually a port-profile attached).
    Anyway, if anyone can explain the above command or tell me how to find out more, it would be great; I can't find it documented anywhere, and the context-sensitive help in NX-OS is vague at best.

    Brendan,
    I don't have much information on that command, but I do know it won't remove any unused port channels. They have to be manually deleted if they're no longer needed.
    The Port Channel ID will remain even after a VEM is removed, in case the assigned VEM comes back. When a VEM is decommissioned permanently, I'll do a "no vem x" to also remove the host entry for that VEM from the VSM. This way the module slot # can be re-assigned to the next new VEM inserted. After adding/removing VEMs, just do a "show port-channel summary" to see any unused Port Channel IDs, and delete them. It's a quick & painless task, sketched below.
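    A sketch of that cleanup (the VEM number and port-channel ID are examples):
    switch# show port-channel summary
    switch# configure terminal
    switch(config)# no interface port-channel 9
    switch(config)# no vem 4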
    I would hope this wouldn't be a common issue - how often are you deleting/removing VEMs?
    Regards,
    Robert

  • Nexus 1000V private-vlan issue

    Hello
    I need to transmit both private VLANs (as a promiscuous trunk) and regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
    Thank you in advance
    Lucas

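    For reference, a promiscuous PVLAN trunk uplink port-profile on the 1000v looks roughly like the sketch below - primary VLAN 100 mapped to secondary 101, plus a regular VLAN 200, all hypothetical; verify the exact syntax against the PVLAN configuration guide for your release:
    port-profile type ethernet PVLAN-UPLINK
      vmware port-group
      switchport mode private-vlan trunk promiscuous
      switchport private-vlan trunk allowed vlan 100,200
      switchport private-vlan mapping trunk 100 101
      no shutdown
      state enabled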

  • VSM and Cisco nexus 1000v

    Hi,
    We are planning to install the Cisco Nexus 1000v in our environment. Before we install it, we want to explore the Cisco Nexus 1000v a little.
    • I know there are 2 elements for the Cisco 1k: VEM and VSM. Is the VSM required? Can we configure the VEM individually?
    • How does the Nexus 1k integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
    • In terms of alarming and reporting, do we need to collect SNMP traps and gets from each individual VEM, or can we use the VSM to do that? Or can we get Cisco Nexus 1000v alarming and reporting from VMware vCenter?
    • Apart from using the Nexus 1010, what's the recommended hosting location for the VSM (same host as the VEM, a different VM, or a different physical server)?
    Foyez Ahammed

    Hi Foyez,
    Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
    The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) VSM and 2) VEM.
    VSM - The Virtual Supervisor Module is the component which controls the entire N1K setup and from where the configuration is done for the VEM modules, interfaces, security, monitoring, etc. The VSM is the one which interacts with the VC.
    VEM - Virtual Ethernet Modules are simply the modules or virtual linecards which provide the connectivity options or virtual ports for the VMs and other virtual interfaces. Each ESX host today can only have one VEM. These VEMs receive their configuration / programming from the VSM.
    If you are aware of other switching products from Cisco like the Cat 6k switches, the N1k behaves the same way but in a software / virtual environment, where the VSMs are the equivalent of the SUPs and the VEMs are similar to the line cards. The control and packet VLANs in the N1k provide the same kind of AIPC and inband connectivity as the 6k backplane would for the communication between the modules and the SUP (the VSM in this case).
    * The N1k configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual / physical ports from the VC.
    * You can run the VSM either on the Nexus 1010 as a virtual service blade (VSB) or as a normal VM on any ESX/ESXi server. The VSM and the VEM on the same server is a fully supported configuration.
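    Once deployed, the VSM-to-vCenter connection and the VEM modules can be verified from the VSM CLI, e.g.:
    VSM# show svs connections
    VSM# show module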
    You can refer the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
    Hope this answers your queries!
    ./Abhinav

  • Nexus 1000V VEM Issues

    I am having some problems with VSM/VEM connectivity after an upgrade that I'm hoping someone can help with.
    I have a 2-host ESXi cluster that I am upgrading from vSphere 5.0 to 5.5u1, and I am upgrading the Nexus 1000V from SV2(2.1) to SV2(2.2). I upgraded vCenter without issue (I'm using the vCSA), but when I attempted to upgrade ESXi-1 to 5.5u1 using VUM, it complained that a VIB was incompatible. After tracing this VIB to the 1000V VEM, I created an ESXi 5.5u1 installer package containing the SV2(2.2) VEM VIB for ESXi 5.5 and attempted to use VUM again, but was still unsuccessful.
    I removed the VEM VIB from the vDS and the host and was able to upgrade the host to 5.5u1.  I tried to add it back to the vDS and was given the error below:
    vDS operation failed on host esxi1, Received SOAP response fault from [<cs p:00007fa5d778d290, TCP:esxi1.gooch.net:443>]: invokeHostTransactionCall
    Received SOAP response fault from [<cs p:1f3cee20, TCP:localhost:8307>]: invokeHostTransactionCall
    An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    I installed the VEM VIB manually at the CLI with 'esxcli software vib install -d /tmp/cisco-vem-v164-4.2.1.2.2.2.0-3.2.1.zip' and I'm able to add it to the vDS, but when I connect the uplinks and migrate the L3 control VMkernel, I get the following error where it complains about the SPROM when the module comes online; then it eventually drops the VEM.
    2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esxi1 detected as module 3
    2014 Mar 29 15:34:54 n1kv %VDC_MGR-2-VDC_CRITICAL: vdc_mgr has hit a critical error: SPROM data is invalid. Please reprogram your SPROM!
    2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2014 Mar 29 15:37:14 n1kv %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 3 (heartbeats lost)
    2014 Mar 29 15:37:19 n1kv %STP-2-SET_PORT_STATE_FAIL: Port state change req to PIXM failed, status = 0x41e80001 [failure] vdc 1, tree id 0, num ports 1, ports  state BLK, opcode MTS_OPC_PIXM_SET_MULT_CBL_VLAN_BM_FOR_MULT_PORTS, msg id (2274781), rr_token 0x22B5DD
    2014 Mar 29 15:37:21 n1kv %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    I have tried gracefully removing ESXi-1 from the vDS and cluster, reformatting it with a fresh install of ESXi 5.5u1, but when I try to join it to the N1KV it throws the same error.

    Hi, 
    The SET_PORT_STATE_FAIL message is usually thrown when there is a communication issue between the VSM and the VEM while the port-channel interface is being programmed. 
    What is the uplink port profile configuration?
    Are other hosts using this uplink port profile successfully?
    Is the upstream configuration the same on an affected host and a working host (i.e. is the control VLAN allowed where necessary)?
    Per kpate's post, the control VLAN needs to be a system VLAN on the uplink port profile.
    The VDC SPROM message is a cosmetic defect:
    https://tools.cisco.com/bugsearch/bug/CSCul65853/
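    A hedged sketch of the L3-control piece that the system-VLAN point refers to (VLAN 51 as an example):
    port-profile type vethernet L3-Control
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 51
      no shutdown
      system vlan 51
      state enabled
    The same VLAN must also be allowed, and marked as a system VLAN, on the Ethernet uplink port profile.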
    HTH,
    Joe

  • [Nexus 1000v] VEM can't be add into VSM

    hi all,
    Following my lab, I have some problems with the Nexus 1000V: the VEM can't be added into the VSM.
    + The VSM is already installed on ESX 1 (standalone or HA), and you can see:
    Cisco_N1KV# show module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          active *
    Mod  Sw                Hw
    1    4.2(1)SV1(4a)     0.0
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    10.4.110.123     NA                                    NA
    + On ESX2, the VEM is installed:
    [root@esxhoadq ~]# vem status
    VEM modules are loaded
    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0         128         3           128               1500    vmnic0
    VEM Agent (vemdpa) is running
    [root@esxhoadq ~]#
    Any advice on this?
    Thanks so much

    Hi,
    I'm having a similar issue: the VEM installed on the ESXi host is not showing up on the VSM.
    Please check the following output - what could be wrong?
    This is the VEM status:
    ~ # vem status -v
    Package vssnet-esx5.5.0-00000-release
    Version 4.2.1.1.4.1.0-2.0.1
    Build 1
    Date Wed Jul 27 04:42:14 PDT 2011
    Number of PassThru NICs are 0
    VEM modules are loaded
    Switch Name     Num Ports   Used Ports Configured Ports MTU     Uplinks  
    vSwitch0         128         4           128               1500   vmnic0  
    DVS Name         Num Ports   Used Ports Configured Ports MTU     Uplinks  
    VSM11           256         40         256               1500   vmnic2,vmnic1
    Number of PassThru NICs are 0
    VEM Agent (vemdpa) is running
    ~ # vemcmd show port    
    LTL   VSM Port Admin Link State PC-LTL SGID Vem Port
       18               UP   UP   F/B*     0       vmnic1
       19             DOWN   UP   BLK       0       vmnic2
    * F/B: Port is BLOCKED on some of the vlans.
    Please run "vemcmd show port vlans" to see the details.
    ~ # vemcmd show trunk
    Trunk port 6 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 16 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 18 native_vlan 1 CBL 0
    vlan(111) cbl 1, vlan(112) cbl 1,
    ~ # vemcmd show port vlans
                           Native VLAN   Allowed
    LTL   VSM Port Mode VLAN   State Vlans
       18             T       1   FWD   111-112
       19             A       1   BLK   1
    ~ # vemcmd show card
    Card UUID type 2: ebd44e72-456b-11e0-0610-00000000108f
    Card name: esx
    Switch name: VSM11
    Switch alias: DvsPortset-0
    Switch uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
    Card domain: 1
    Card slot: 1
    VEM Tunnel Mode: L2 Mode
    VEM Control (AIPC) MAC: 00:02:3d:10:01:00
    VEM Packet (Inband) MAC: 00:02:3d:20:01:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
    VEM SPAN MAC: 00:02:3d:30:01:00
    Primary VSM MAC : 00:50:56:ac:00:42
    Primary VSM PKT MAC : 00:50:56:ac:00:44
    Primary VSM MGMT MAC : 00:50:56:ac:00:43
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 10.1.240.30
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 111
    Card packet VLAN: 112
    Card Headless Mode : Yes
           Processors: 8
    Processor Cores: 4
    Processor Sockets: 1
    Kernel Memory:   16712336
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    ~ #
    On VSM
    VSM11# sh svs conn
    connection vcenter:
       ip address: 10.1.240.38
       remote port: 80
       protocol: vmware-vim https
       certificate: default
       datacenter name: New Datacenter
       admin:  
       max-ports: 8192
       DVS uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
       config status: Enabled
       operational status: Connected
       sync status: Complete
       version: VMware vCenter Server 4.1.0 build-345043
    VSM11# sh svs ?
    connections Show connection information
    domain       Domain Configuration
    neighbors   Svs neighbors information
    upgrade     Svs upgrade information
    VSM11# sh svs dom
    SVS domain config:
    Domain id:   1  
    Control vlan: 111
    Packet vlan: 112
    L2/L3 Control mode: L2
    L3 control interface: NA
    Status: Config push to VC successful.
    VSM11# sh port
               ^
    % Invalid command at '^' marker.
    VSM11# sh run
    !Command: show running-config
    !Time: Sun Nov 20 11:35:52 2011
    version 4.2(1)SV1(4a)
    feature telnet
    username admin password 5 $1$QhO77JvX$A8ykNUSxMRgqZ0DUUIn381 role network-admin
    banner motd #Nexus 1000v Switch#
    ssh key rsa 2048
    ip domain-lookup
    ip domain-lookup
    hostname VSM11
    snmp-server user admin network-admin auth md5 0x389a68db6dcbd7f7887542ea6f8effa1
    priv 0x389a68db6dcbd7f7887542ea6f8effa1 localizedkey
    vrf context management
    ip route 0.0.0.0/0 10.1.240.254
    vlan 1,111-112
    port-channel load-balance ethernet source-mac
    port-profile default max-ports 32
    port-profile type ethernet Unused_Or_Quarantine_Uplink
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type ethernet system-uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 111-112
    no shutdown
    system vlan 111-112
    description "System profile"
    state enabled
    port-profile type vethernet servers11
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Data Profile for VM Traffic"
    port-profile type ethernet vm-uplink
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Uplink profile for VM traffic"
    state enabled
    vdc VSM11 id 1
    limit-resource vlan minimum 16 maximum 2049
    limit-resource monitor-session minimum 0 maximum 2
    limit-resource vrf minimum 16 maximum 8192
    limit-resource port-channel minimum 0 maximum 768
    limit-resource u4route-mem minimum 32 maximum 32
    limit-resource u6route-mem minimum 16 maximum 16
    limit-resource m4route-mem minimum 58 maximum 58
    limit-resource m6route-mem minimum 8 maximum 8
    interface mgmt0
    ip address 10.1.240.124/24
    interface control0
    line console
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
    svs-domain
    domain id 1
    control vlan 111
    packet vlan 112
    svs mode L2
    svs connection vcenter
    protocol vmware-vim
    remote ip address 10.1.240.38 port 80
    vmware dvs uuid "c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78" datacenter-name New Datacenter
    max-ports 8192
    connect
    vsn type vsg global
    tcp state-checks
    vnm-policy-agent
    registration-ip 0.0.0.0
    shared-secret **********
    log-level
    Thank you,
    Michel

  • Nexus 1000v VEM module bouncing between hosts

    I'm receiving these error messages on my N1KV and don't know how to fix them. I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue. How do I debug this?
    My setup:
    Two physical hosts running ESXi 5.1, the vCenter appliance, and an N1KV with two system uplinks and two iSCSI uplinks per host. Let me know if you need more output from logs or commands, thanks.
    N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
    N1KV# sh module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V          active *
    3    248    Virtual Ethernet Module           NA                  ok
    Mod  Sw                  Hw     
    1    4.2(1)SV2(1.1a)     0.0                                             
    2    4.2(1)SV2(1.1a)     0.0                                             
    3    4.2(1)SV2(1.1a)     VMware ESXi 5.1.0 Releasebuild-838463 (3.1)     
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    192.168.54.2     NA                                    NA
    2    192.168.54.2     NA                                    NA
    3    192.168.51.100   03000200-0400-0500-0006-000700080009  NA
    * this terminal session
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-1
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ! ports 1-6 connected to physical host A
    interface GigabitEthernet1/0/1
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 1 mode active
    ! ports 7-12 connected to phys host B
    interface GigabitEthernet1/0/7
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 2 mode active

    ok after deleteing the n1kv vms and vcenter and then reinstalling all I got the error again,
    N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    Host A
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 1
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:00
    VEM Packet (Inband) MAC: 00:02:3d:20:02:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
    VEM SPAN MAC: 00:02:3d:30:02:00
    Primary VSM MAC : 00:50:56:b6:96:f2
    Primary VSM PKT MAC : 00:50:56:b6:11:b6
    Primary VSM MGMT MAC : 00:50:56:b6:48:c6
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: No
    Host B
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:a8:f5:f0
    Primary VSM PKT MAC : 00:50:56:a8:3c:62
    Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
    Standby VSM CTRL MAC : 00:50:56:a8:30:d5
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    I used the Nexus 1000v Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
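    Note that both 'vemcmd show card' outputs above report the same Card UUID for the two hosts, which matches the duplicate-UUID suspicion. A quick way to compare on each host (hedged; 'esxcfg-info -u' prints the host UUID on ESXi):
    ~ # vemcmd show card | grep 'Card UUID'
    ~ # esxcfg-info -u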
    Here is the other output you requested,
    N1KV# show vms internal info dvs
      DVS INFO:
    DVS name: [N1KV]
          UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
          Description: [(null)]
          Config version: [1]
          Max ports: [8192]
          DC name: [Galaxy]
         OPQ data: size [1121], data: [data-version 1.0
    switch-domain 2
    switch-name N1KV
    cp-version 4.2(1)SV2(1.1a)
    control-vlan 1
    system-primary-mac 00:50:56:a8:f5:f0
    active-vsm packet mac 00:50:56:a8:3c:62
    active-vsm mgmt mac 00:50:56:a8:b4:a4
    standby-vsm ctrl mac 0050-56a8-30d5
    inband-vlan 1
    svs-mode L3
    l3control-ipaddr 192.168.54.2
    upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
    cntl-type-mcast 0
    profile dvportgroup-26 trunk 1,51-57,110
    profile dvportgroup-26 mtu 9000
    profile dvportgroup-27 access 51
    profile dvportgroup-27 mtu 1500
    profile dvportgroup-27 capability l3control
    profile dvportgroup-28 access 52
    profile dvportgroup-28 mtu 1500
    profile dvportgroup-28 capability l3control
    profile dvportgroup-29 access 53
    profile dvportgroup-29 mtu 1500
    profile dvportgroup-30 access 54
    profile dvportgroup-30 mtu 1500
    profile dvportgroup-31 access 55
    profile dvportgroup-31 mtu 1500
    profile dvportgroup-32 access 56
    profile dvportgroup-32 mtu 1500
    profile dvportgroup-34 trunk 220
    profile dvportgroup-34 mtu 9000
    profile dvportgroup-35 access 220
    profile dvportgroup-35 mtu 1500
    profile dvportgroup-35 capability iscsi-multipath
    end-version 1.0
          push_opq_data flag: [1]
    show svs neighbors
    Active Domain ID: 2
    AIPC Interface MAC: 0050-56a8-f5f0
    Inband Interface MAC: 0050-56a8-3c62
    Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)
    0050-56a8-30d5     VSM         2         0201      1020.45
    0002-3d40-0202     VEM         2         0302         1.33
    I cannot add Host A to the N1KV; it errors out with:
    vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    Host B (192.168.51.100) was added fine; then I moved a vmkernel to the N1KV, which brought up the VEM and produced the VEM flapping errors.

  • Nexus 1000V VMotion between 2 different 1KV Switches

    Hello Virtual Experts,
    I was informed that you cannot vMotion from one N1kv in one domain ID instance to another N1kv in a different domain ID.
    As I understand it, every Nexus 1000v switch needs to be in its own domain.
    If this is the case, how does Cisco facilitate vMotion between switches? How does Cisco facilitate long-range vMotion?
    Any response is much appreciated.
    /r
    Rob

    Robert,
    You are correct: just as with any vDS and a standard vSwitch, you can't vMotion between them (while the network interfaces are connected, anyway). vMotion will fail the network port group validation. The networking is what is tripping you up here, and it's not specific to Cisco; it's a VMware validation requirement.
    With long-distance vMotion, the VMs are still part of the same DVS, so there's no issue there.
    You have a couple of options here.
    1. You can do a cold migration, then re-assign the network binding on the destination switch. This would require VM downtime.
    2. If going from a host connected to a vDS to a host using a vSwitch, you can create a temporary vSwitch on the source host, create a port group with the same name as the destination host's port group, give it an uplink, and then migrate the VM from there. This can be done online without downtime for the VM.
    Not sure of any other methods, but if anyone else has an idea, feel free to share!
    Regards,
    Robert

  • Cisco Nexus 1000v stops inheriting

    Guys,
    I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port profile, which means that not all VLANs get presented down a given trunk port. It's like it gets completely out of sync somehow. An example is below.
    THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
    show int trunk
    Po9        100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
    sh run int po9
    interface port-channel9
      inherit port-profile DATA-UP
      switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
    THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
    show int trunk
    Po2        100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
    sh run int po2
    interface port-channel2
        inherit port-profile DATA-UP
    I have no idea why this keeps happening. When I remove the manual static trunk configuration on po9, everything is fine; a few days later, it happens again. It's not just po9 - there are at least 3 port-channels that it affects.
    My DATA-UP link port-profile configuration looks like this, and all port channels should reflect the VLANs allowed, but some are way out.
    port-profile type ethernet DATA-UP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
      channel-group auto mode on sub-group cdp
      no shutdown
      state enabled
    The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
    The Cisco Nexus version is 4.2.1
    Anyone seen this problem?
    Cheers

    Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure. 
    If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
    There are two main guides.  One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade.  They're not very long guides, and should be easy to follow.
    1000v Upgrade Guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
    VEM Upgrade Guides:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
    In a nutshell the procedure looks like this:
    -Backup of VSM Config
    -Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
    -Upgrade standby VSM
    -Perform switchover
    -Upgrade image on old active (current standby)
    -Upgrade VEM modules
    One decision you'll need to make is whether to use Update Manager for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when. It will allow you to migrate VMs off a host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. This is a non-stop process, so there's a little less control from that perspective. My own preference: for any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
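    A hedged sketch of the VSM side of the upgrade - pointing the boot variables at the new images and switching over (filenames are examples for 4.2(1)SV1(4a); substitute your target release):
    switch(config)# boot kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
    switch(config)# boot system bootflash:nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
    switch(config)# boot kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
    switch(config)# boot system bootflash:nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
    switch# copy running-config startup-config
    switch# system switchover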
    Let me know if you have any other questions.
    Regards,
    Robert

  • Nexus 1000v port-channels questions

    Hi,
    I'm running vCenter 4.1 and the Nexus 1000v with about 30 ESX hosts.
    I'm using one system uplink port profile for all 30 ESX hosts; on each of the ESX hosts I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
    The Nexus is configured with the "sub-group CDP" command on the system uplink port profile, like the following:
    port-profile type ethernet uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
    switchport trunk native vlan 500
    mtu 1500
    channel-group auto mode on sub-group cdp
    no shutdown
    system vlan 988-989
    description System-Uplink
    state enabled
    And the port channel on the Catalyst 3750 are configured like the following:
    interface Port-channel11
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    end
    interface GigabitEthernet1/0/18
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    interface GigabitEthernet1/0/1
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750s should not be configured in a port-channel like above, but should instead be configured as an individual trunk.
    First question: Is the above statement correct - are my uplinks configured wrong? Should they be configured as individual trunks instead of a port-channel?
    Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port profile with the MAC pinning configuration and then move one ESX host (with no VMs on it) at a time to that new profile? That way I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
    Thanks.

    Hello,
    From what I understood, you have the following setup:
         - Each ESX host has 4 NICS
         - 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
         - all 4 vmnics on the ESX host use the same Ethernet port-profile
              - this has 'channel-group auto mode on sub-group cdp'
         - The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
    If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP: with CDP loss, the port-channels would go down.
    'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can be just regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it is supposed to be removed from 4.2(1)SV1(2.1), but I still see the command available (ignore the 4.2(1)SV1(4) next to it) - I'll follow up on this internally:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
    For migrating, the best option would be as you suggested: create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile, and you can remove the upstream port-channel config as well. A sketch of the new profile follows.
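    A hedged sketch of the new profile - the original uplink profile with mac-pinning in place of sub-group cdp (the profile name is a placeholder):
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 988-989
      state enabled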
    Thanks,
    Shankar

  • Nexus 1000v: Control VLAN must be same VLAN as ESX hosts?

    Hello,
    I'm trying to install the Nexus 1000v and came across the prerequisite below.
    The release notes for the Nexus 1000v state:
    VMware and Host Prerequisites
    The VSM VM control interface must be on the same Layer 2 VLAN as the ESX 4.0 host that it manages. If you configure Layer 3, then you do not have this restriction. In each case however, the two VSMs must run in the same IP subnet.
    What I'm trying to do is create 2 VLANs - one for management and the other for control & data (as per the latest deployment guide, we can put control & data in the same VLAN).
    However, I wanted to have all ESX host management in the same VLAN as the VSM management as well as the vCenter management - essentially creating a management network.
    However, given the "VMware and Host Prerequisites" above, does this mean I cannot do this?
    Do I need to have the ESX host management in the same VLAN as the control VLAN?
    Would this mean that my ESX hosts reside in a different VLAN than my management subnet?
    Thanks...

    The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign to the ESX host.
    You only need an interface on the ESX host if you decide to use L3 control. In that instance you would create or use an existing VMK interface on the ESX host.

Maybe you are looking for

  • Imac froze and on re load folder with question mark appeared.  Now will not start up to home screen.

    The other night my Imac froze during updates.  I found it in the morning with the screen and mouse frozen and only the loading icon spinning.  I re stared the computer from the power button. Upon re loading the apple logo did not appear and was repla

  • Artifact Lines in Line Pattern Swatch

    Hi, I am trying to use a basic line pattern swatch as a fill. It all looks fine in Illustrator but when I go and try to save it under "save to web and devices" the fill pattern shows extra lines in the pattern. These lines are darker and run in a dif

  • Albums With Over 50 Tracks Split

    Is it normal for albums with over 50 tracks to split up into different sections? How do I take a screenshot with Mac, so I can show you what I mean.

  • [SOLVED] [WOL] Arch changes "wol g" setting to "wol d" on boot

    I'm trying to get Wake-On-LAN working on my Arch Linux server. The network card uses the 8139too module. I've added a line to modprobe.conf to accomplish this: install 8139too /sbin/modprobe -i 8139too; ethtool -s eth0 wol g This line is correct, bec

  • Canon 6100 Communication Error

    I have a Canon 6100 All-in-one printer/scanner. When I first installed it (under Snow Leopard) it would often throw up a 'communications error' when sending a print. Pausing the print, then resuming would make it retry and the print would go through.