Trunking VLANs in Nexus 1000V

I am looking to design a solution for a customer. They run a very tight hosting environment with Nexus 1000V switches and want to set up private VLANs, as they are running out of VLANs.
I need to find out whether it is possible to trunk a private VLAN between two Nexus switches.
Any other information on private VLANs on the Nexus 1000V would also be appreciated.
Thanks
Roger

Hello Roger,
Yes, pVLANs can be trunked between switches.  A good discussion can be found here.  Have you considered VXLAN as an alternative to pVLANs?  VXLAN allows up to 16M segments to be defined, though it differs slightly from pVLANs in that all VMs in a VXLAN segment can communicate with each other.
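For illustration, trunking a pVLAN pair between switches mainly requires that both the primary and secondary VLANs are allowed on the trunk and that the pVLAN definitions match on both sides. A minimal sketch (VLAN IDs 100/101 and the interface name are hypothetical):

```
! Sketch only - defined identically on both switches
vlan 100
  private-vlan primary
  private-vlan association 101
vlan 101
  private-vlan isolated
!
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 100-101
```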
Matthew

Similar Messages

  • Nexus 1000V private-vlan issue

    Hello
    I need to carry both the private VLANs (as a promiscuous trunk) and the regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
    Thank you in advance
    Lucas
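    For reference, an uplink that carries a pVLAN pair as a promiscuous trunk alongside regular VLANs is generally built as a port-profile along these lines (a sketch only; VLAN IDs 100/101 for the primary/isolated pair, 200 for a regular VLAN, and the profile name are hypothetical):

    ```
    port-profile type ethernet pvlan-uplink
      vmware port-group
      switchport mode private-vlan trunk promiscuous
      switchport private-vlan trunk allowed vlan 100-101,200
      switchport private-vlan mapping trunk 100 101
      no shutdown
      state enabled
    ```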


  • Nexus 1000v / pvlan promiscuous trunk / Cross-host communication.

    Hello,
    We are planning a deployment of the Nexus 1000v with “promiscuous trunk” uplink ports. We want to be sure that cross-host communication in the isolated pVLAN will not be possible.
    Looking at the picture, I was wondering whether communication between VM-A on ESX1 and VM-B on ESX2 (both in the isolated pVLAN) will be impossible, as expected.
    Example: if VM-A on ESX1 sends traffic to VM-B on ESX2, the VLAN 11 tag is remapped to the VLAN 10 tag at the outgoing uplink on ESX1.
    The flow then arrives at ESX2 with a VLAN 10 tag on the promiscuous trunk. Since a promiscuous port can talk to all secondary pVLANs, VM-A could in this case talk to VM-B.
    Is my understanding correct?
    Or does the Nexus 1000v have an enhanced cross-VEM mechanism that checks the source MAC address, recognizes that it comes from an isolated pVLAN, and blocks the communication?
    Best regards.
    Karim  

    Hello Karim,
    The N1k enforces pVLANs across all hosts.  Think of all the N1k VEMs as a single switch.  In your example, VM-A will not be able to talk to VM-B.  We accomplish this isolation by poisoning the VEM MAC address tables with a null destination.  For example, ESX1 would contain a dynamic entry for VM-B's MAC that points to a null LTL value.  If VM-A attempted to send traffic to VM-B's MAC, it would not leave the host.
    Please be aware that the N1k can only enforce pVLANs for traffic behind the VEMs.  If you have other servers in VLAN 10 on the blue switch, they would be seen as promiscuous ports from the N1k's standpoint, and additional configuration would be required to prevent communication.
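    For context, the VM-facing side of the example above (primary VLAN 10, isolated VLAN 11) would be defined on the VSM roughly as follows (a sketch; the port-profile name is hypothetical):

    ```
    vlan 10
      private-vlan primary
      private-vlan association 11
    vlan 11
      private-vlan isolated
    !
    port-profile type vethernet iso-hosts
      vmware port-group
      switchport mode private-vlan host
      switchport private-vlan host-association 10 11
      no shutdown
      state enabled
    ```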

  • Nexus 1000v: Control VLAN must be same VLAN as ESX hosts?

    Hello,
    I'm trying to install the Nexus 1000v and came across the prerequisite below. The release notes for the Nexus 1000v state:
    VMware and Host Prerequisites
    The VSM VM control interface must be on the same Layer 2 VLAN as the ESX 4.0 host that it manages. If you configure Layer 3, then you do not have this restriction. In each case however, the two VSMs must run in the same IP subnet.
    What I'm trying to do is create 2 VLANs: one for management and the other for control & data (as per the latest deployment guide, we can put control & data in the same VLAN).
    However, I wanted to have all ESX host management in the same VLAN as the VSM management as well as the vCenter management; essentially, creating a management network.
    From the above "VMware and Host Prerequisites", does this mean I cannot do this?
    Do I need to have the ESX host management in the same VLAN as the control VLAN?
    That would mean my ESX host resides in a different VLAN than my management subnet?
    Thanks...

    The control VLAN is a totally separate VLAN from your system console (management). The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
    You only need an interface on the ESX host if you decide to use L3 control. In that instance you would create, or use an existing, VMK interface on the ESX host.
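    If L3 control is chosen, the VMK interface mentioned above is attached to a vethernet port-profile carrying the "capability l3control" flag, roughly like this sketch (the profile name and VLAN 100 are hypothetical):

    ```
    port-profile type vethernet l3-control
      vmware port-group
      switchport mode access
      switchport access vlan 100
      capability l3control
      no shutdown
      system vlan 100
      state enabled
    ```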

  • [Nexus 1000v] VEM can't be added to VSM

    Hi all,
    Following up on my lab, I have a problem with the Nexus 1000V: the VEM can't be added to the VSM.
    + The VSM is already installed on ESX1 (standalone or HA); you can see:
    Cisco_N1KV# show module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          active *
    Mod  Sw                Hw
    1    4.2(1)SV1(4a)     0.0
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    10.4.110.123     NA                                    NA
    + on ESX2, where the VEM is installed:
    [root@esxhoadq ~]# vem status
    VEM modules are loaded
    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0         128         3           128               1500    vmnic0
    VEM Agent (vemdpa) is running
    [root@esxhoadq ~]#
    Any advice on this?
    Thanks so much.

    Hi,
    I'm having a similar issue: the VEM installed on the ESXi host is not showing up on the VSM.
    Can you check the following output and see what might be wrong?
    This is the VEM status:
    ~ # vem status -v
    Package vssnet-esx5.5.0-00000-release
    Version 4.2.1.1.4.1.0-2.0.1
    Build 1
    Date Wed Jul 27 04:42:14 PDT 2011
    Number of PassThru NICs are 0
    VEM modules are loaded
    Switch Name     Num Ports   Used Ports Configured Ports MTU     Uplinks  
    vSwitch0         128         4           128               1500   vmnic0  
    DVS Name         Num Ports   Used Ports Configured Ports MTU     Uplinks  
    VSM11           256         40         256               1500   vmnic2,vmnic1
    Number of PassThru NICs are 0
    VEM Agent (vemdpa) is running
    ~ # vemcmd show port    
    LTL   VSM Port Admin Link State PC-LTL SGID Vem Port
       18               UP   UP   F/B*     0       vmnic1
       19             DOWN   UP   BLK       0       vmnic2
    * F/B: Port is BLOCKED on some of the vlans.
    Please run "vemcmd show port vlans" to see the details.
    ~ # vemcmd show trunk
    Trunk port 6 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 16 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 18 native_vlan 1 CBL 0
    vlan(111) cbl 1, vlan(112) cbl 1,
    ~ # vemcmd show port vlans
                           Native VLAN   Allowed
    LTL   VSM Port Mode VLAN   State Vlans
       18             T       1   FWD   111-112
       19             A       1   BLK   1
    ~ # vemcmd show card
    Card UUID type 2: ebd44e72-456b-11e0-0610-00000000108f
    Card name: esx
    Switch name: VSM11
    Switch alias: DvsPortset-0
    Switch uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
    Card domain: 1
    Card slot: 1
    VEM Tunnel Mode: L2 Mode
    VEM Control (AIPC) MAC: 00:02:3d:10:01:00
    VEM Packet (Inband) MAC: 00:02:3d:20:01:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
    VEM SPAN MAC: 00:02:3d:30:01:00
    Primary VSM MAC : 00:50:56:ac:00:42
    Primary VSM PKT MAC : 00:50:56:ac:00:44
    Primary VSM MGMT MAC : 00:50:56:ac:00:43
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 10.1.240.30
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 111
    Card packet VLAN: 112
    Card Headless Mode : Yes
           Processors: 8
    Processor Cores: 4
    Processor Sockets: 1
    Kernel Memory:   16712336
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    ~ #
    On VSM
    VSM11# sh svs conn
    connection vcenter:
       ip address: 10.1.240.38
       remote port: 80
       protocol: vmware-vim https
       certificate: default
       datacenter name: New Datacenter
       admin:  
       max-ports: 8192
       DVS uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
       config status: Enabled
       operational status: Connected
       sync status: Complete
       version: VMware vCenter Server 4.1.0 build-345043
    VSM11# sh svs ?
    connections Show connection information
    domain       Domain Configuration
    neighbors   Svs neighbors information
    upgrade     Svs upgrade information
    VSM11# sh svs dom
    SVS domain config:
    Domain id:   1  
    Control vlan: 111
    Packet vlan: 112
    L2/L3 Control mode: L2
    L3 control interface: NA
    Status: Config push to VC successful.
    VSM11# sh port
               ^
    % Invalid command at '^' marker.
    VSM11# sh run
    !Command: show running-config
    !Time: Sun Nov 20 11:35:52 2011
    version 4.2(1)SV1(4a)
    feature telnet
    username admin password 5 $1$QhO77JvX$A8ykNUSxMRgqZ0DUUIn381 role network-admin
    banner motd #Nexus 1000v Switch#
    ssh key rsa 2048
    ip domain-lookup
    ip domain-lookup
    hostname VSM11
    snmp-server user admin network-admin auth md5 0x389a68db6dcbd7f7887542ea6f8effa1
    priv 0x389a68db6dcbd7f7887542ea6f8effa1 localizedkey
    vrf context management
    ip route 0.0.0.0/0 10.1.240.254
    vlan 1,111-112
    port-channel load-balance ethernet source-mac
    port-profile default max-ports 32
    port-profile type ethernet Unused_Or_Quarantine_Uplink
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type ethernet system-uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 111-112
    no shutdown
    system vlan 111-112
    description "System profile"
    state enabled
    port-profile type vethernet servers11
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Data Profile for VM Traffic"
    port-profile type ethernet vm-uplink
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Uplink profile for VM traffic"
    state enabled
    vdc VSM11 id 1
    limit-resource vlan minimum 16 maximum 2049
    limit-resource monitor-session minimum 0 maximum 2
    limit-resource vrf minimum 16 maximum 8192
    limit-resource port-channel minimum 0 maximum 768
    limit-resource u4route-mem minimum 32 maximum 32
    limit-resource u6route-mem minimum 16 maximum 16
    limit-resource m4route-mem minimum 58 maximum 58
    limit-resource m6route-mem minimum 8 maximum 8
    interface mgmt0
    ip address 10.1.240.124/24
    interface control0
    line console
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
    svs-domain
    domain id 1
    control vlan 111
    packet vlan 112
    svs mode L2
    svs connection vcenter
    protocol vmware-vim
    remote ip address 10.1.240.38 port 80
    vmware dvs uuid "c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78" datacenter-name New Datacenter
    max-ports 8192
    connect
    vsn type vsg global
    tcp state-checks
    vnm-policy-agent
    registration-ip 0.0.0.0
    shared-secret **********
    log-level
    Thank you
    Michel
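    One note on the output above: "Card Headless Mode : Yes" means the VEM is currently running without VSM connectivity. Since this setup uses L2 control mode with control VLAN 111 and packet VLAN 112, a common first check is that the upstream physical switch trunks those VLANs toward the ESXi host's uplinks, e.g. (a sketch; the interface name is hypothetical):

    ```
    ! Upstream physical switch - sketch only
    interface GigabitEthernet1/0/10
      switchport mode trunk
      switchport trunk allowed vlan 111,112
    ```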

  • Nexus 1000v 4.2.1 - Interface Ethernet3/5 has been quarantined due to Cmd Failure

    Hello,
    I get the error message "Interface Ethernet3/5 has been quarantined due to Cmd Failure" when I try to activate the system uplink ports on the Nexus 1000v VSM. The symptom occurs under 4.2.1.SV1.4 (a fresh setup; I ran tests with 4.0.4 before). Unfortunately, the link to the 4.2.1 troubleshooting guide does not work (it seems it hasn't been released yet).
    Does anyone have an idea what the root cause could be?
    The VSM and VEM run on an HP DL3xxG7 with 2 x dual-port 10Gbit CNA adapters.
         Nexus 1k config:
    vlan 1
    vlan 260
      name Servers
    vlan 340
      name NfsA
    vlan 357
      name vMotion
    vlan 920
      name Packet_Control
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
      spanning-tree port type edge trunk
      switchport trunk native vlan 1
      channel-group auto mode active
      no shutdown
      system vlan 1,357,920
      state enabled
    port-profile type ethernet STORAGE-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 340
      channel-group auto mode active
      no shutdown
      system vlan 340
      state enabled
    When I do a no shut on the physical ports I get:
    switch(config-if)# no shut
    2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/7 has been quarantined due to Cmd Failure
    2011 Feb 24 11:43:55 switch %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/5 has been quarantined due to Cmd Failure
    The other etherchannel (Port Profile STORAGE-UPLINK) does work pretty well...
    The peer switches are two Nexus 5k with VPC.
    config:
    port-profile type port-channel VMWare-LAN
      switchport mode trunk
      switchport trunk allowed vlan 260, 301, 303, 305, 307, 357, 544, 920
      spanning-tree port type edge trunk
      switchport trunk native vlan 1
      state enabled
    !
    interface port-channel18
      inherit port-profile VMWare-LAN
      description CHA vshpvm001 LAN
      vpc 18
      speed 10000
    !
    interface Ethernet1/18
      description CHA vshpvm001 LAN
      switchport mode trunk
      switchport trunk allowed vlan 260,301,303,305,307,357,544,920
      channel-group 18 mode active
    switch# show port-profile sync-status
    Ethernet3/5
    port-profile: SYSTEM-UPLINK
    interface status: quarantine
    sync status: out of sync
    cached commands: 
    errors:
        cached command failed
    recovery steps:
        unshut interface
    Ethernet3/7
    port-profile: SYSTEM-UPLINK
    interface status: quarantine
    sync status: out of sync
    cached commands: 
    errors:
        cached command failed
    recovery steps:
        unshut interface
    kind regards,
    andy

    Sean,
    thank you !
    "show accounting log" helped me. I had the command "spanning-tree port type edge trunk" in the config; I somehow didn't realize that we didn't have this command in the 4.0.4 lab setup, so it was a copy/paste error (I copied the port-profile config from the N5k down to the N1k).
    Fri Feb 25 07:20:32 2011:update:ppm.13880:admin:configure terminal ; interface Ethernet3/5 ; spanning-tree port type edge trunk (FAILURE)
    Fri Feb 25 07:20:32 2011:update:ppm.13890:admin:configure terminal ; interface Ethernet3/5 ; shutdown (FAILURE)
    As the N1k doesn't do STP at all (or does it?), it's no wonder that the CLI was complaining...
    Maybe this command should get more attention in the troubleshooting guide, as it seems to be a very helpful one.
    Cheers & Thanks,
    Andy
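    To recap Andy's fix: the Nexus 1000v does not run spanning tree, so the copied "spanning-tree port type edge trunk" command fails and the port-profile manager quarantines the interfaces. The working profile is simply the original without that line:

    ```
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,260,301,303,305,307,357,544,920
      switchport trunk native vlan 1
      channel-group auto mode active
      no shutdown
      system vlan 1,357,920
      state enabled
    ```

    After correcting the profile, the quarantined interfaces can be recovered with a shut/no shut, per the "recovery steps" shown in the sync-status output.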

  • Virtualisation - trunking Vlans

    Hi,
    I am working on a virtualisation requirement involving business-critical applications in multiple data centers. The challenges currently being faced are:
    1. The 3-tier architecture with web servers, app servers, and DB servers is to be virtualised onto common ESX hosts along with multiple other intranet applications, raising issues around security between environments, management of ESX, logging, etc.
    2. Multiple switched environments are to be virtualised, with clashing VLAN IDs and VLANs in excess of 512 to be trunked.
    3. The ultimate goal is a completely virtualised environment with full DR capability and flexibility akin to cloud computing.
    4. Can we consider Q-in-Q support on the Nexus 1000v?
    Any help in untangling this situation will be highly appreciated.
    regds/John

    John,
    With 512 VLANs, just keep in mind that you are at the upper limit of the number of active VLANs the Nexus 1000V supports (512).
    While the Nexus 1000V does not support Q-in-Q, the best place to implement such a feature would be at the physical switch layer anyway.
    Another approach would be to implement your own VPLS cloud to interconnect the various switched environments. The VLAN numbers don't need to be the same at each location; you could, for example, have VLAN 10 at Site A bridged to VLAN 20 at Site B. The advantage of VPLS over plain Q-in-Q is preserving STP isolation and autonomy between sites.
    Also, talk to your Cisco SE about OTV for Nexus 7000 :)
    Cheers,
    Brad
    p.s. please rate if helpful
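    To illustrate Brad's point about implementing Q-in-Q at the physical switch layer: each customer trunk arrives on a tunnel port whose access VLAN becomes the outer tag. A minimal sketch on a Catalyst-style IOS switch (the interface name and outer VLAN 500 are hypothetical):

    ```
    ! Sketch only - outer (service-provider) tag 500 is an example
    vlan 500
    interface GigabitEthernet0/1
      description Q-in-Q tunnel port, outer tag 500
      switchport access vlan 500
      switchport mode dot1q-tunnel
    ```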

  • Nexus 1000v VEM module bouncing between hosts

    I'm receiving these error messages on my N1KV and don't know how to fix them.  I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue.  How do I debug this?
    My setup:
    Two physical hosts running ESXi 5.1, the vCenter appliance, and an N1KV with two system uplinks and two iSCSI uplinks for each host.  Let me know if you need more output from logs or commands. Thanks.
    N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
    N1KV# sh module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V          active *
    3    248    Virtual Ethernet Module           NA                  ok
    Mod  Sw                  Hw     
    1    4.2(1)SV2(1.1a)     0.0                                             
    2    4.2(1)SV2(1.1a)     0.0                                             
    3    4.2(1)SV2(1.1a)     VMware ESXi 5.1.0 Releasebuild-838463 (3.1)     
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    192.168.54.2     NA                                    NA
    2    192.168.54.2     NA                                    NA
    3    192.168.51.100   03000200-0400-0500-0006-000700080009  NA
    * this terminal session
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-1
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ! ports 1-6 connected to physical host A
    interface GigabitEthernet1/0/1
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 1 mode active
    ! ports 7-12 connected to phys host B
    interface GigabitEthernet1/0/7
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 2 mode active

    OK, after deleting the N1KV VMs and vCenter and then reinstalling everything, I got the error again:
    N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    Host A
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 1
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:00
    VEM Packet (Inband) MAC: 00:02:3d:20:02:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
    VEM SPAN MAC: 00:02:3d:30:02:00
    Primary VSM MAC : 00:50:56:b6:96:f2
    Primary VSM PKT MAC : 00:50:56:b6:11:b6
    Primary VSM MGMT MAC : 00:50:56:b6:48:c6
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: No
    Host B
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:a8:f5:f0
    Primary VSM PKT MAC : 00:50:56:a8:3c:62
    Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
    Standby VSM CTRL MAC : 00:50:56:a8:30:d5
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    I used the Nexus 1000v Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
    Here is the other output you requested:
    N1KV# show vms internal info dvs
      DVS INFO:
    DVS name: [N1KV]
          UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
          Description: [(null)]
          Config version: [1]
          Max ports: [8192]
          DC name: [Galaxy]
         OPQ data: size [1121], data: [data-version 1.0
    switch-domain 2
    switch-name N1KV
    cp-version 4.2(1)SV2(1.1a)
    control-vlan 1
    system-primary-mac 00:50:56:a8:f5:f0
    active-vsm packet mac 00:50:56:a8:3c:62
    active-vsm mgmt mac 00:50:56:a8:b4:a4
    standby-vsm ctrl mac 0050-56a8-30d5
    inband-vlan 1
    svs-mode L3
    l3control-ipaddr 192.168.54.2
    upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
    cntl-type-mcast 0
    profile dvportgroup-26 trunk 1,51-57,110
    profile dvportgroup-26 mtu 9000
    profile dvportgroup-27 access 51
    profile dvportgroup-27 mtu 1500
    profile dvportgroup-27 capability l3control
    profile dvportgroup-28 access 52
    profile dvportgroup-28 mtu 1500
    profile dvportgroup-28 capability l3control
    profile dvportgroup-29 access 53
    profile dvportgroup-29 mtu 1500
    profile dvportgroup-30 access 54
    profile dvportgroup-30 mtu 1500
    profile dvportgroup-31 access 55
    profile dvportgroup-31 mtu 1500
    profile dvportgroup-32 access 56
    profile dvportgroup-32 mtu 1500
    profile dvportgroup-34 trunk 220
    profile dvportgroup-34 mtu 9000
    profile dvportgroup-35 access 220
    profile dvportgroup-35 mtu 1500
    profile dvportgroup-35 capability iscsi-multipath
    end-version 1.0
          push_opq_data flag: [1]
    show svs neighbors
    Active Domain ID: 2
    AIPC Interface MAC: 0050-56a8-f5f0
    Inband Interface MAC: 0050-56a8-3c62
    Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)
    0050-56a8-30d5     VSM         2         0201      1020.45
    0002-3d40-0202     VEM         2         0302         1.33
    I cannot add Host A to the N1KV; it errors out with:
    vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    Host B (192.168.51.100) was added fine. Then I moved a VMkernel interface to the N1KV, which brought up the VEM, and I started getting the VEM flapping errors.

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how the integration between the N1K and UCS Manager works.
    First question:
    If two VMs on different ESXi hosts (and therefore different VEMs) but in the same VLAN want to talk to each other, is the data flow between them handled by the upstream switch (in this case the UCS Fabric Interconnect)?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (VLAN 100), and a vEthernet port-profile for the VMs in switchport mode access (VLAN 100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs run). I also created VLAN 100 (the same as on the N1K).
    Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs but in the same VLAN want to talk to each other, is the data flow between them handled by the upstream switch (in this case the UCS Fabric Interconnect)?
    -Yes.  Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network.  These would be your 'type ethernet' port-profiles.  The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (VLAN 1) as the native VLAN, everything works fine. Why?
    -  The N1K port profiles are switchport access, so the traffic leaves the N1K untagged.  Untagged traffic maps to the native VLAN in UCS.  If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
    Third question: How does VLAN tagging work on the Fabric Interconnect and on the N1K?
    -  All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively (untagged).  On the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'.  For your Ethernet profiles, you will want 'switchport mode trunk'.  Use an unused VLAN as the native VLAN.  All production VLANs will then be passed from the N1K to UCS as tagged VLANs.
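    Putting that advice together, a sketch of what the N1K Ethernet uplink profile could look like (the profile name and the unused native VLAN 999 are assumptions for illustration, not from the thread; VLAN 100 is the production VLAN discussed above):

    ```
    port-profile type ethernet ucs-uplink
      vmware port-group
      switchport mode trunk
      ! unused VLAN as native, so production VLANs are always tagged toward UCS
      switchport trunk native vlan 999
      switchport trunk allowed vlan 100
      no shutdown
      state enabled
    ```

    On the UCS side, VLAN 100 would then be added to the vNIC as a tagged VLAN, matching the tagged frames arriving from the N1K.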
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk

  • Nexus 1000v Network State tracking

    Hi,
    I have a pair of 1000v VSM running version 4.2(1)SV1(5.2)
    The ESXi host is configured with 2 uplink connecting to 2 different switch.
    Using port channel mac pinning.
    i configured Network State tracking (NST) with default timer. and repin action.
    But when I put "switchport trunk allowed vlan none" on the upstream switch port facing the host,
    the NST status says the network is split,
    yet the connected VMs are not repinned to the other uplink.
    Is there some other misconfiguration I am missing? Or is my NST verification wrong (though the status does show the network as split)?
    Thanks,
    Ivan

    By default Nexus 1000V chooses the lowest forwarding VLAN ID that is allowed on the uplink port-profile. You can choose and configure a low-id VLAN (say, 4) suitable for tracking. You have to remove all lower-id VLANs from the trunk:
    port-profile type ethernet nexus1000v-uplink
      switchport trunk allowed vlan 4-3967,4048-4093

  • Cisco Nexus 1000v stops inheriting

    Guys,
    I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port-profile, which means that not all VLANs get presented down that given trunk port; it's like it gets completely out of sync somehow. An example is below.
    THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
    show int trunk
    Po9        100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
    sh run int po9
    interface port-channel9
      inherit port-profile DATA-UP
      switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
    THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
    show int trunk
    Po2        100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
    sh run int po2
    interface port-channel2
        inherit port-profile DATA-UP
    I have no idea why this keeps happening. When I remove the manual static trunk configuration on po9, everything is fine; a few days later, it happens again. It's not just po9; at least 3 port-channels are affected.
    My DATA-UP link port-profile configuration looks like this and all port channels should reflect the VLANs allowed but some are way out.
    port-profile type ethernet DATA-UP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
      channel-group auto mode on sub-group cdp
      no shutdown
      state enabled
    The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
    The Cisco Nexus version is 4.2.1
    Anyone seen this problem?
    Cheers

    Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure. 
    If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
    There are two main guides.  One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade.  They're not very long guides, and should be easy to follow.
    1000v Upgrade Guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
    VEM Upgrade Guides:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
    In a nutshell the procedure looks like this:
    -Backup of VSM Config
    -Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
    -Upgrade standby VSM
    -Perform switchover
    -Upgrade image on old active (current standby)
    -Upgrade VEM modules
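    As a rough sketch only of the VSM side of those steps (the SCP paths and image filenames below are placeholders, not the actual files for any given release; always follow the upgrade guide for the exact sequence on your version):

    ```
    switch# copy running-config startup-config
    switch# copy scp://user@server/nexus-1000v-kickstart.<version>.bin bootflash:
    switch# copy scp://user@server/nexus-1000v.<version>.bin bootflash:
    switch# show install all impact kickstart bootflash:nexus-1000v-kickstart.<version>.bin system bootflash:nexus-1000v.<version>.bin
    switch# install all kickstart bootflash:nexus-1000v-kickstart.<version>.bin system bootflash:nexus-1000v.<version>.bin
    ```

    The "show install all impact" step gives you a preview of what the installer will do before you commit to the upgrade.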
    One decision you'll need to make is whether to use Update Manager or not for the VEM upgrades.  If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when.  It will allow you to migrate VMs off the host, upgrade it, and then continue in this manner for all remaining hosts.  The alternative is Update Manager, which can be a little sticky if it runs into issues.  This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one.  This is a non-stop process, so there's a little less control from that perspective.  My own preference: for any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
    Let me know if you have any other questions.
    Regards,
    Robert

  • Nexus 1000v port-channels questions

    Hi,
    I’m running vCenter 4.1 and Nexus 1000v and about 30 ESX Hosts.
    I’m using one system uplink port profile for all 30 ESX Host; On each of the ESX host I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
    The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
    port-profile type ethernet uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
    switchport trunk native vlan 500
    mtu 1500
    channel-group auto mode on sub-group cdp
    no shutdown
    system vlan 988-989
    description System-Uplink
    state enabled
    And the port channel on the Catalyst 3750 are configured like the following:
    interface Port-channel11
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    end
    interface GigabitEthernet1/0/18
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    interface GigabitEthernet1/0/1
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750s should not be configured in a port-channel like above, but should instead be configured as an individual trunk.
    First question: Is the above statement correct, are my uplinks configured wrong?  Should they be configured individually in trunks instead of a port-channel?
    Second questions: If I need to add the MAC pinning configuration on my system uplink port-profile can I create a new system uplink port profile with the MAC pinning configuration and then move one ESX host (with no VM on them) one at a time to that new system uplink port profile? This way, I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC Pinning configuration.
    Thanks.

    Hello,
    From what I understood, you have the following setup:
         - Each ESX host has 4 NICS
         - 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
         - all 4 vmnics on the ESX host use the same Ethernet port-profile
              - this has 'channel-group auto mode on sub-group cdp'
         - The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
    If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP. With CDP loss, the port-channels would go down.
    'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can be just regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it was supposed to be removed in 4.2(1)SV1(2.1), but I still see the command available (ignore 4.2(1)SV1(4) next to it) - I'll follow up on this internally:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
    For migrating, the best option would be as you suggested. Create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile and can remove the upstream port-channel config as well.
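    For reference, a mac-pinning variant of the uplink profile from the original post might look like the following sketch (the profile name is an assumption; the VLANs and system VLANs are copied from the poster's existing profile):

    ```
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      ! mac-pinning pins each vmnic to one uplink; no upstream port-channel needed,
      ! so the 3750 interfaces become plain individual trunk ports
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 988-989
      state enabled
    ```

    With this profile, the "interface Port-channel11" and "channel-group 11 mode on" configuration on the 3750 side would be removed, leaving each GigabitEthernet interface as a standalone trunk.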
    Thanks,
    Shankar

  • Installation error Nexus 1000v

    When I try to install the Nexus 1000v on a standalone ESXi 5.1 installation, I get the following error at Step 5 (Confirmation) (see attachment):
    Unable to retrieve deployed VM. Please check VC connection
    It's an easy setup with 4 VLANs on a trunk, and the management IP of the ESXi server is in VLAN 1, which is allowed on the trunk.
    What am I doing wrong? Please help.

    I am getting the same error when trying to deploy a new VSM. Tried Layer 2 standard and custom, adjusted different things but can't get it to work. Getting that same error:
    Unable to retrieve deployed VM. Please check VC connection.
    I am trying to add the VSM with the name '87457-n1kv01.den03'. I checked the hostd.log file and see a bunch of errors. Any idea how to fix this?
    2014-01-31T23:02:02.872Z [50281B70 info 'vm:Vix: [34498 foundryVMPowerOps.c:980]: FoundryVMPowerStateChangeCallback: /vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx']  vmx/execState/val = poweredOff.
    2014-01-31T23:02:02.872Z [FF9475B0 verbose 'Default' opID=5755438a-77-db-65-43 user=vpxuser] SetVmHandle /vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx 10485766
    2014-01-31T23:02:02.872Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Upgrade is required for virtual machine, version: 7
    2014-01-31T23:02:02.876Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Time to gather config: 3 (msecs)
    2014-01-31T23:02:02.876Z [50FC2B70 verbose 'Hbrsvc'] Replicator: ReconfigListener triggered for config VM 14
    2014-01-31T23:02:02.877Z [50FC2B70 error 'Hbrsvc'] Failed to retrieve VM config (id=14)
    2014-01-31T23:02:02.877Z [50FC2B70 error 'Hbrsvc'] Replicator: VmReconfig failed to retrieve replication config for VM 14, ignoring: vim.fault.ReplicationVmConfigFault
    2014-01-31T23:02:02.877Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Current VM Tracking state: disabled
    2014-01-31T23:02:02.877Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] CheckForOpaqueNetworkImpact: VM instance UUID set in spec: [50197f97-da7f-d145-d8bf-8b41df5732f0]
    2014-01-31T23:02:02.877Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] CheckForOpaqueNetworkImpact: opaque network changes in deviceSpec: { [-105->:] [-104->:] [-103->:]}
    2014-01-31T23:02:02.878Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Storage policy for disk '/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmdk' not specified.
    2014-01-31T23:02:03.265Z [FF9475B0 verbose 'Vmsvc.VmDiskMgr' opID=5755438a-77-db-65-43 user=vpxuser] Disk created successfully.
    2014-01-31T23:02:03.267Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present translated error to vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.267Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.267Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present message: Expected device (ide1:0) does not exist.
    -->
    2014-01-31T23:02:03.268Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present translated error to vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.268Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present failed: vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.268Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Is disk present message: Expected device (scsi0:0) does not exist.
    -->
    2014-01-31T23:02:03.269Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Setting tracking state for disk w. key -102 to false
    2014-01-31T23:02:03.269Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present translated error to vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.269Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present failed: vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.269Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present message: Ethernet adapter 'ethernet0' does not exist.
    -->
    2014-01-31T23:02:03.270Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present translated error to vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.270Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present failed: vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.270Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present message: Ethernet adapter 'ethernet1' does not exist.
    -->
    2014-01-31T23:02:03.270Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present translated error to vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.270Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present failed: vim.fault.GenericVmConfigFault
    2014-01-31T23:02:03.270Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] NIC: is present message: Ethernet adapter 'ethernet2' does not exist.
    -->
    2014-01-31T23:02:03.271Z [FF9475B0 info 'Vimsvc.ha-eventmgr' opID=5755438a-77-db-65-43 user=vpxuser] Event 213 : Assigned new BIOS UUID (42197e34-23f7-c33f-cf97-6edc92e8a120) to 87457-n1kv01.den03-1 on 87457-esx01.den03 in ha-datacenter
    2014-01-31T23:02:03.271Z [FF9475B0 info 'Vimsvc.ha-eventmgr' opID=5755438a-77-db-65-43 user=vpxuser] Event 214 : Assign a new instance UUID (50197f97-da7f-d145-d8bf-8b41df5732f0) to 87457-n1kv01.den03-1
    2014-01-31T23:02:03.272Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Setting VM's tracking state to disabled.
    2014-01-31T23:02:03.272Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] UpdatePortOnVmReconfigure: _vmInstanceUuidAfter is set to inSpec [50197f97-da7f-d145-d8bf-8b41df5732f0]
    2014-01-31T23:02:03.366Z [FF9475B0 info 'vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser]  Reloading config state.
    2014-01-31T23:02:03.387Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser] VMHS: Transitioned vmx/execState/val to poweredOff
    2014-01-31T23:02:03.405Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Commit Vigor batch operation successful
    2014-01-31T23:02:03.405Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Signalling Commit Vigor batch operation
    2014-01-31T23:02:03.405Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Waiting on Commit Vigor batch operation
    2014-01-31T23:02:03.405Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Completed Commit Vigor batch operation
    2014-01-31T23:02:03.405Z [50FC2B70 verbose 'Hostsvc.DatastoreSystem'] Datastore-Vdisk refresh: scheduling thread after:0 usec
    2014-01-31T23:02:03.405Z [4F8E2B70 verbose 'Hostsvc.FSVolumeProvider'] RefreshOneVmfsVolume on /vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994
    2014-01-31T23:02:03.413Z [50281B70 info 'vm:Vix: [34498 foundryVMPowerOps.c:980]: FoundryVMPowerStateChangeCallback: /vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx']  vmx/execState/val = poweredOff.
    2014-01-31T23:02:03.415Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Time to gather Snapshot information ( read from disk,  build tree): 1 msecs. needConsolidate is false.
    2014-01-31T23:02:03.415Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Time to gather snapshot file layout: 0 (msecs)
    2014-01-31T23:02:03.415Z [FF9475B0 warning 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] CannotRetrieveCorefiles: VM is in an invalid state
    2014-01-31T23:02:03.460Z [4F8E2B70 verbose 'Hostsvc.FSVolumeProvider'] RefreshOneVmfsVolume 5101904b-227e85e8-3e4a-78e7d17c7994 calling ProcessVmfs
    2014-01-31T23:02:03.461Z [4F8E2B70 info 'Hostsvc.DatastoreSystem'] RefreshVdiskDatastores: Done refreshing datastores.
    2014-01-31T23:02:03.467Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Time to gather config: 52 (msecs)
    2014-01-31T23:02:03.468Z [50FC2B70 verbose 'Hbrsvc'] Replicator: ReconfigListener triggered for config VM 14
    2014-01-31T23:02:03.470Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser]  VAAI-NAS :: vmfsNasPlugin: SUCCESSES: RsrvSpace [0] Cln-Full [0] Cln-Lazy [0] cln-DRun [0], Ext-stats [97]
    2014-01-31T23:02:03.470Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser]  VAAI-NAS :: vmfsNasPlugin: FAILURES: RsrvSpace [0] Cln-Full [0] Cln-Lazy [0] cln-DRun [0], Ext-stats [0]
    2014-01-31T23:02:03.470Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser]  VAAI-NAS :: SvaNasPlugin: SUCCESSES: RsrvSpace [0] Cln-Full [0] Cln-Lazy [0] cln-DRun [0], Ext-stats [0]
    2014-01-31T23:02:03.470Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser]  VAAI-NAS :: SvaNasPlugin: FAILURES: RsrvSpace [0] Cln-Full [0] Cln-Lazy [0] cln-DRun [0], Ext-stats [0]
    2014-01-31T23:02:03.470Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser]  VAAI-NAS :: NAS Mapping Used successfully for 120 times
    2014-01-31T23:02:03.471Z [FF9475B0 info 'Hostsvc' opID=5755438a-77-db-65-43 user=vpxuser] Lookupvm: World ID not set for VM 14
    2014-01-31T23:02:03.471Z [FF9475B0 verbose 'Hbrsvc' opID=5755438a-77-db-65-43 user=vpxuser] Replicator: VmFileProviderCallback VM (id=14)
    2014-01-31T23:02:03.472Z [50FC2B70 verbose 'Hbrsvc'] Replicator: VmReconfig ignoring VM 14 not configured for replication
    2014-01-31T23:02:03.472Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Fault Tolerance state callback received
    2014-01-31T23:02:03.472Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Record/replay state callback received
    2014-01-31T23:02:03.474Z [FF9475B0 info 'Libs' opID=5755438a-77-db-65-43 user=vpxuser] Failed to find manifest content in extended config xml.
    2014-01-31T23:02:03.474Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Initial tools version: 7:guestToolsNotInstalled
    2014-01-31T23:02:03.475Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] State Transition (VM_STATE_INITIALIZING -> VM_STATE_OFF)
    2014-01-31T23:02:03.475Z [FF9475B0 verbose 'Hostsvc.HaHost' opID=5755438a-77-db-65-43 user=vpxuser] ModeMgr::End: op = normal, current = normal, count = 3
    2014-01-31T23:02:03.475Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Predicted VM overhead: 141684736 bytes
    2014-01-31T23:02:03.476Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Max connection count changed from 0 to 40
    2014-01-31T23:02:03.476Z [FF9475B0 info 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Initialized virtual machine.
    2014-01-31T23:02:03.476Z [FF9475B0 verbose 'Vmsvc.vm:/vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx' opID=5755438a-77-db-65-43 user=vpxuser] Create DONE: /vmfs/volumes/5101904b-227e85e8-3e4a-78e7d17c7994/87457-n1kv01.den03-1/87457-n1kv01.den03-1.vmx

  • Nexus 1000v VSM can't comunicate with the VEM

    This is the configuration I have on my vsm
    !Command: show running-config
    !Time: Thu Dec 20 02:15:30 2012
    version 4.2(1)SV2(1.1)
    svs switch edition essential
    no feature telnet
    banner motd #Nexus 1000v Switch#
    ssh key rsa 2048
    ip domain-lookup
    ip host Nexus-1000v 172.16.0.69
    hostname Nexus-1000v
    errdisable recovery cause failed-port-state
    vem 3
      host vmware id 78201fe5-cc43-e211-0000-00000000000c
    vem 4
      host vmware id e51f2078-43cc-11e2-0000-000000000009
    priv 0xa2cb98ffa3f2bc53380d54d63b6752db localizedkey
    vrf context management
      ip route 0.0.0.0/0 172.16.0.1
    vlan 1-2
    port-channel load-balance ethernet source-mac
    port-profile default max-ports 32
    port-profile type ethernet Unused_Or_Quarantine_Uplink
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type ethernet vmware-uplinks
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1-3967,4048-4093
      channel-group auto mode on
      no shutdown
      system vlan 2
      state enabled
    port-profile type vethernet Management
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    port-profile type vethernet vMotion
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    port-profile type vethernet ServidoresGestion
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    port-profile type vethernet L3-VSM
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      system vlan 2
      state enabled
    port-profile type vethernet VSG-Data
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    port-profile type vethernet VSG-HA
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    vdc Nexus-1000v id 1
      limit-resource vlan minimum 16 maximum 2049
      limit-resource monitor-session minimum 0 maximum 2
      limit-resource vrf minimum 16 maximum 8192
      limit-resource port-channel minimum 0 maximum 768
      limit-resource u4route-mem minimum 1 maximum 1
      limit-resource u6route-mem minimum 1 maximum 1
    interface mgmt0
      ip address 172.16.0.69/25
    interface control0
    line console
    boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-1
    boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.1.1.bin sup-2
    boot system bootflash:/nexus-1000v.4.2.1.SV2.1.1.bin sup-2
    svs-domain
      domain id 1
      control vlan 1
      packet vlan 1
      svs mode L3 interface mgmt0
    svs connection vcenter
      protocol vmware-vim
      remote ip address 172.16.0.66 port 80
      vmware dvs uuid "ae 31 14 50 cf b2 e7 3a-5c 48 65 0f 01 9b b5 b1" datacenter-name DTIC Datacenter
      admin user n1kUser
      max-ports 8192
      connect
    vservice global type vsg
      tcp state-checks invalid-ack
      tcp state-checks seq-past-window
      no tcp state-checks window-variation
      no bypass asa-traffic
    vnm-policy-agent
      registration-ip 172.16.0.70
      shared-secret **********
      policy-agent-image bootflash:/vnmc-vsmpa.2.0.0.38.bin
      log-level
    For some reason my VSM can't see the VEM. It could before, but then my server crashed without a copy run start having been done, and when it booted back up all of my configuration except the uplinks was lost.
    When I tried to configure the connection again, it wasn't working.
    I'm also attaching a screen capture of the vDS and a capture of the regular switch.
    I would very much appreciate any help you can give me, and will provide any configuration details that you might need.
    Thank you so much.

    Carlos,
       Looking at vds.jpg, you do not have any VEM VMkernel interface attached to the port-profile L3-VSM. To fix the VSM-VEM communication problem, either migrate your VEM's management VMkernel interface to the L3-VSM port-profile on the vDS, or create a new VMkernel port on your VEM/host and attach it to the L3-VSM port-profile.
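
    Once a VMkernel interface is attached to L3-VSM, the following checks can confirm the VEM registers with the VSM. This is a hedged sketch: the commands below are standard Nexus 1000V verification commands, but the module numbers and output you see will depend on your environment.

        ! On the VSM: confirm the vCenter connection is up and watch for the
        ! VEM to appear as a module
        show svs connections
        show module

        ! On the ESXi host (SSH shell): confirm the VEM is loaded and check
        ! its card/control configuration
        vem status
        vemcmd show card

    Note that your L3-VSM port-profile already has "capability l3control" and "system vlan 2", which is what allows the VMkernel interface attached to it to carry VSM-VEM control traffic.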

  • Nexus 1000V and strange ping behavior

    Hi,
    I am using a Nexus 1000V and FI 6248s with Nexus 5Ks in a redundant architecture, and I am seeing strange behavior with VMs.
    I use port-profiles without any problems, but in one case I have this issue:
    I have 2 VMs assigned to the same port-profile.
    When the 2 VMs are on the same ESX host, I can ping (from one VM) both the gateway and the other VM. When I move one of the VMs to another ESX host (same chassis or not), each VM can still ping the gateway and remote IPs, but the VMs are unreachable from each other, while a remote PC is able to ping both VMs.
    I checked the MAC tables: from the N5K it's OK, and from the FI 6248 it's OK, but on the N1K I am unable to see the MAC address of either VM.
    What I tried (performing a clear of the MAC table at each step):
        Pinned the VM to another vmnic: it works.
        On UCS, moved it to another vmnic: it works.
        On UCS, changed the QoS policy: it works.
        Reassigned it, and the old behavior returned.
        Checked all trunk links: they are OK.
    So I don't understand why I have this strange behavior, and how can I troubleshoot it more deeply?
    I would like to avoid it if possible, but the next step will be to create a new vmnic with the same policy assigned, then delete the old vmnic and recreate it.
    Regards

    From what you mentioned here's my thoughts.
    When the two VMs are on the same host, they can reach each other. This is because they are switched locally in the VEM, so it doesn't tell us much other than that the VEM is working as expected.
    When you move one of the VMs to a different UCS ESX host, the path changes. Let's assume you've moved one VM to a different host within the UCS system.
    UCS-Blade1(Host-A) - VM1
    UCS-Blade2(Host-B) - VM2
    There are two path options from VM1 -> VM2:
    VM1 -> Blade1 Uplink -> Fabric Interconnect A -> Blade 2 Uplink -> VM2
    or
    VM1-> Blade1 Uplink -> Fabric Interconnect A -> Upstream Switch -> Fabric Interconnect B -> Blade 2 Uplink -> VM2
    Of the two options, I've seen many instances where the FIRST option works fine but the second doesn't. Why? Well, as you can see, option 1 has a path from Host A up to FI-A and back down to Host B. In this path there's no northbound switching outside of UCS. It requires both VMs to be pinned to host uplinks going to the same Fabric Interconnect.
    In the second option, the path goes from Host-A up to FI-A, then northbound to the upstream switch, then back down eventually to FI-B and then Host-B. When this path is taken, if the two VMs can't reach each other then you have a problem with your upstream switches. If both VMs reside in the same subnet, it's a Layer 2 problem. If they're in different subnets, then it's a Layer 2 or 3 problem somewhere north of UCS.
    So knowing this - why did manual pinning on the N1K fix your problem? Pinning forces a VM to a particular uplink. What likely happened in your case is that you pinned both VMs to host uplinks that go to the same UCS Fabric Interconnect (avoiding the northbound switching). Your original problem still exists, so you're not out of the woods yet.
    Ask yourself: why are just these two VMs affected? Are they possibly the only VMs using a particular VLAN or subnet?
    An easy test to verify the pinning is to use the command below, where "x" is the module # for the host the VMs are running on.
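    The upstream Layer 2 check described above can be sketched as follows. This is a hedged example: VLAN 100 is a placeholder for whatever VLAN your two VMs use, and you would run these on each Nexus 5K (and on the Fabric Interconnects via "connect nxos").

        ! Look for both VM MAC addresses on the VLAN in question; a missing
        ! or flapping entry along the inter-switch path points at the L2 fault
        show mac address-table dynamic vlan 100

        ! Confirm the VLAN is actually allowed and forwarding on the trunks
        ! between the FIs and the upstream switches
        show interface trunk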
    module vem x execute vemcmd show port-old
    I explain the command further in another post here -> https://supportforums.cisco.com/message/3717261#3717261. In your case you'll be looking for the VM1 and VM2 LTLs, finding out which SubGroup ID each uses, and then which SG_ID belongs to which VMNIC.
    I bet you'll find that the manual pinning "that works" takes a path from each host to the same FI. If this is the case, look northbound for your L2 problem.
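    For reference, static pinning on the Nexus 1000V can also be expressed in the vethernet port-profile rather than per-port. This is a sketch under assumptions: it applies only when the uplink port-profile uses mac-pinning, and the port-profile name "VM-Data" and subgroup ID 0 are placeholders for your own values.

        ! Uplink port-profile must use mac-pinning for subgroup IDs to exist
        port-profile type ethernet vmware-uplinks
          channel-group auto mode on mac-pinning

        ! Pin all vEths in this profile to subgroup 0 (i.e., one specific vmnic)
        port-profile type vethernet VM-Data
          pinning id 0

    Pinning both VMs' profiles to subgroups that reach the same FI reproduces the workaround above, but as noted, it only masks the upstream L2 issue.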
    Regards,
    Robert
