Nexus 1000V private-vlan issue

Hello
I need to carry both the private VLANs (as a promiscuous trunk) and regular VLANs on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that?
Thank you in advance
Lucas

The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
You only need an interface on the ESX host if you decide to use L3 control. In that case you would create a VMK interface on the ESX host or use an existing one.
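For the original question, a rough sketch of an uplink port-profile carrying both private VLANs (as a promiscuous trunk) and a regular VLAN might look like the following. The VLAN numbers (100/101 as the pVLAN pair, 200 as a regular VLAN) and the profile name are examples only; verify the exact syntax against your 1000V release:

```
! example pVLAN pair plus a regular VLAN (numbers are assumptions)
vlan 100
  private-vlan primary
  private-vlan association 101
vlan 101
  private-vlan isolated
vlan 200

! uplink port-profile: promiscuous pVLAN trunk that also carries VLAN 200
port-profile type ethernet pvlan-uplink
  vmware port-group
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan trunk allowed vlan 100-101,200
  switchport private-vlan mapping trunk 100 101
  no shutdown
  state enabled
```

The key point is that the promiscuous trunk's allowed-VLAN list carries both the pVLAN pair and the normal VLANs, while the mapping ties the secondary VLAN back to its primary.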

Similar Messages

  • Nexus 1010 + 1000v control vlan issue

                       Hi,
    I have the Nexus 1000v installed on a Nexus 1010. The Nexus 1010 is in a cluster and working fine, and I have used network uplink option 3.
    My VSM is configured for L3 mode, so I set the control and packet VLAN to 1 (on the VSM). While creating the VSB I also chose control and packet VLAN 1 (keeping in mind my mode will be L3).
    Now the VSM is not coming up in HA. The redundancy log says degraded mode is true.
    Is it because the control packets coming from the VSM are getting tagged with VLAN 1 after reaching the N1010? Since I have not set any native VLAN on the 1010, control VLAN 1 might be tagged as well. Is this the case?
    Help needed on this issue.
    regards
    Prasad K

    The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
    You only need an interface on the ESX host if you decide to use L3 control. In that case you would create a VMK interface on the ESX host or use an existing one.

  • Nexus 1000v: Control VLAN must be same VLAN as ESX hosts?

    Hello,
    I'm trying to install the Nexus 1000v and came across the prerequisite below.
    The release notes for the Nexus 1000v state:
    VMware and Host Prerequisites
    The VSM VM control interface must be on the same Layer 2 VLAN as the ESX 4.0 host that it manages. If you configure Layer 3, then you do not have this restriction. In each case however, the two VSMs must run in the same IP subnet.
    What I'm trying to do is to create 2 VLANs - one for management and the other for control & Data (as per latest deployment guide, we can put control & data in the same vlan).
    However, I wanted to have all ESX host management same VLAN as the VSM management as well as the vCenter Management. Essentially, creating a management network.
    However, given the "VMware and Host Prerequisites" above, does this mean I cannot do this?
    Do I need to have the ESX host management in the same VLAN as the control VLAN?
    Would that mean my ESX hosts reside in a different VLAN than my management subnet?
    Thanks...

    The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
    You only need an interface on the ESX host if you decide to use L3 control. In that case you would create a VMK interface on the ESX host or use an existing one.
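    As a sketch of the L3 control case the reply describes, the VMkernel interface is typically attached to a vethernet port-profile carrying the l3control capability. The profile name and VLAN 51 here are assumptions, not values from the thread:

    ```
    port-profile type vethernet L3-control
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 51
      no shutdown
      system vlan 51
      state enabled
    ```

    Marking the VLAN as a system VLAN lets the VEM forward control traffic on it even before the VSM has programmed the port.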

  • Nexus 1000V VEM Issues

    I am having some problems with VSM/VEM connectivity after an upgrade that I'm hoping someone can help with.
    I have a 2 ESXi host cluster that I am upgrading from vSphere 5.0 to 5.5u1, and upgrading a Nexus 1000V from SV2(2.1) to SV2(2.2).  I upgraded vCenter without issue (I'm using the vCSA), but when I attempted to upgrade ESXi-1 to 5.5u1 using VUM it complained that a VIB was incompatible.  After tracing this VIB to the 1000V VEM, I created an ESXi 5.5u1 installer package containing the SV2(2.2) VEM VIB for ESXi 5.5 and attempted to use VUM again but was still unsuccessful
    I removed the VEM VIB from the vDS and the host and was able to upgrade the host to 5.5u1.  I tried to add it back to the vDS and was given the error below:
    vDS operation failed on host esxi1, Received SOAP response fault from [<cs p:00007fa5d778d290, TCP:esxi1.gooch.net:443>]: invokeHostTransactionCall
    Received SOAP response fault from [<cs p:1f3cee20, TCP:localhost:8307>]: invokeHostTransactionCall
    An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    I installed the VEM VIB manually at the CLI with 'esxcli software vib install -d /tmp/cisco-vem-v164-4.2.1.2.2.2.0-3.2.1.zip' and I'm able to add it to the vDS, but when I connect the uplinks and migrate the L3 Control VMkernel, I get the following error, where it complains about the SPROM when the module comes online and then eventually drops the VEM.
    2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esxi1 detected as module 3
    2014 Mar 29 15:34:54 n1kv %VDC_MGR-2-VDC_CRITICAL: vdc_mgr has hit a critical error: SPROM data is invalid. Please reprogram your SPROM!
    2014 Mar 29 15:34:54 n1kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2014 Mar 29 15:37:14 n1kv %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 3 (heartbeats lost)
    2014 Mar 29 15:37:19 n1kv %STP-2-SET_PORT_STATE_FAIL: Port state change req to PIXM failed, status = 0x41e80001 [failure] vdc 1, tree id 0, num ports 1, ports  state BLK, opcode MTS_OPC_PIXM_SET_MULT_CBL_VLAN_BM_FOR_MULT_PORTS, msg id (2274781), rr_token 0x22B5DD
    2014 Mar 29 15:37:21 n1kv %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    I have tried gracefully removing ESXi-1 from the vDS and cluster, reformatting it with a fresh install of ESXi 5.5u1, but when I try to join it to the N1KV it throws the same error.

    Hi, 
    The SET_PORT_STATE_FAIL message is usually thrown when there is a communication issue between the VSM and the VEM while the port-channel interface is being programmed. 
    What is the uplink port profile configuration? 
    Are other hosts using this uplink port profile successfully?
    Is the upstream configuration the same on an affected host and a working host (i.e., control VLAN allowed where necessary)?
    Per kpate's post, the control VLAN needs to be a system VLAN on the uplink port profile.
    The VDC SPROM message is a cosmetic defect:
    https://tools.cisco.com/bugsearch/bug/CSCul65853/
    HTH,
    Joe
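    Joe's point about the control VLAN being a system VLAN on the uplink port profile would look roughly like this (mirroring the system-uplink profile style shown elsewhere in this thread; VLANs 111-112 as control/packet are assumptions):

    ```
    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 111-112
      system vlan 111-112
      no shutdown
      state enabled
    ```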

  • Trunking Vlans in Nexus 1000V

    I am designing a solution for a customer running a very tight hosting environment with Nexus 1000V switches. They want to set up private VLANs because they are running out of VLANs.
    I need to find out whether it is possible to trunk a private VLAN between two Nexus switches,
    or any info on private VLANs on the Nexus 1000V.
    Thanks
    Roger

    Hello Roger,
    Yes, pVLANs can be trunked between switches.  A good discussion can be found here.  Have you considered VXLAN as an alternative to pVLANs?  VXLAN allows up to 16M segments to be defined, though it differs slightly from pVLANs in that all VMs in a VXLAN segment can communicate.
    Matthew
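    As a sketch of trunking a pVLAN between two switches: both the primary and the secondary VLAN are defined identically on each switch and simply allowed on the standard trunk connecting them. VLAN and interface numbers below are examples:

    ```
    feature private-vlan
    vlan 500
      private-vlan primary
      private-vlan association 501
    vlan 501
      private-vlan isolated

    ! standard trunk toward the other switch carries both pVLANs
    interface Ethernet1/1
      switchport mode trunk
      switchport trunk allowed vlan 500,501
    ```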

  • Private Vlan, Etherchannel and Isolated Trunk on Nexus 5010

    I'm not sure if I'm missing something basic here, but I thought I'd ask the question. I received a request from a client who is trying to separate traffic out of an IBM P780: one set of VIO servers/clients (Prod) is tagged with VLAN x going out LAG 1, and another set of VIO servers/clients (Test) is tagged with VLANs y and z going out LAG 2. The problem is that the management addresses for these devices are all on one subnet.
    The host device is trunked via an LACP EtherChannel to a Nexus 2148TP (5010), which then connects to the distribution layer, a Catalyst 6504 VSS. I have tried many things today; however, I feel that the correct solution is to use an isolated trunk (as the host device does not have private VLAN functionality), even though there is no requirement for hosts to be segregated. I have configured:
    1. Private vlan mapping on the SVI;
    2. Primary vlan and association, and isolated vlan on Distribution (6504 VSS) and Access Layer (5010/2148)
    3. All Vlans are trunked between switches
    4. Private vlan isolated trunk and host mappings on the port-channel interface to the host (P780).
    I haven't had any luck. What I am seeing is that as soon as I configure the primary VLAN on the Nexus 5010 (v5.2) (vlan y | private-vlan primary), this VLAN (y) does not forward on any trunk on the Nexus 5010 switch, even without any other private VLAN configuration. I believe this may be the cause of most of the issues I am having. Has anyone else experienced this behaviour? Also, I haven't had a lot of experience with private VLANs, so I might be missing some fundamentals with this configuration. Any help would be appreciated.

    Hello Emcmanamy, Bruce,
    Thanks for your feedback.
    Just like you, I have been facing the same problem over recent months with my customer.
    Regarding PVLAN on FEX, as concluded in Bruce's previous posts, I understand:
    You can configure a host interface as an isolated or community access port only.
    You can also configure an "isolated trunk port" on a host interface; maybe this specific point could be updated in the documentation.
    This ability is documented here =>
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_N2_1/b_Cisco_n5k_layer2_config_gd_rel_513_N2_1_chapter_0101.html#task_1170903
    You cannot configure a host interface as a promiscuous  port.
    You cannot configure a host interface as a private  VLAN trunk port.
    Indeed, a PVLAN is not allowed on a trunk defined on a FEX host interface.
    However, since NX-OS 5.1(3)N2(1) the 'PVLAN on FEX trunk' feature is supported, but a command has to be activated first: system private-vlan fex trunk. When it is entered, a warning about the presence of 'FEX isolated trunks' is prompted.
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_N2_1/b_Cisco_n5k_layer2_config_gd_rel_513_N2_1_chapter_0101.html#task_16C0869F1B0C4A68AFC3452721909705
    These restrictions do not apply to a native N5K interface.
    Best regards.
    Karim
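    Karim's steps might be sketched as follows on the 5010. The interface and VLAN numbers are assumptions, and the isolated-trunk syntax can vary slightly by NX-OS release, so treat this as a starting point rather than a verified configuration:

    ```
    ! enable pVLAN over FEX trunks first (NX-OS 5.1(3)N2(1) and later)
    system private-vlan fex trunk

    ! isolated (secondary) trunk toward the host that cannot do pVLANs itself
    interface port-channel10
      switchport mode private-vlan trunk secondary
      switchport private-vlan trunk allowed vlan 10,101
      switchport private-vlan association trunk 100 101
    ```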

  • Private-VLAN using Nexus 7010 and 2248TP FEX

    I have a Nexus 7010 with several 2248TP FEX modules.
    I am trying to configure a Private VLAN on one of the FEX host ports.
    I see in the documentation you can't do promiscuous, but I can't even get the host-only configuration to take.
    Software
      BIOS:      version 3.22.0
      kickstart: version 6.0(2)
      system:    version 6.0(2)
    sho run | inc private
    feature private-vlan
    vlan 11
      name PVLAN_Primary
      private-vlan primary
      private-vlan association 12
    vlan 12
      name PVLAN_Secondary
      private-vlan isolated
    7010(config)# int e101/1/48
    7010(config-if)#
    7010(config-if)# switchport mode ?
      access        Port mode access
      dot1q-tunnel  Port mode dot1q tunnel
      fex-fabric    Port mode FEX fabric
      trunk         Port mode trunk
    Switchport mode private-vlan doesn't even show up!
    If I try this command it says it's not allowed on the FEX port.
    7010(config-if)# switchport private-vlan host-association 11 12
    ERROR: Requested config not allowed on fex port
    What am I doing wrong?
    Todd

    Have you found a solution to this?
    -Jeremy
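    For comparison, on a non-FEX N7K port the host association that fails above would normally take the following form (using the VLAN 11/12 pair from the question; the interface number is an example). FEX host interfaces reject this, as the error shows, and FEX pVLAN support varies by release:

    ```
    interface Ethernet1/10
      switchport
      switchport mode private-vlan host
      switchport private-vlan host-association 11 12
    ```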

  • Heads Up: Private VLAN Sticky-ARP DHCP Issues

    Here is the scenario:
    Private VLANs are configured on a 6500 Sup720 with SVIs routing for the PVLANs.
    DHCP Snooping and IP ARP Inspection are also configured for the PVLAN subnets.
    A DHCP Server is offering 3 day leases.
    A laptop connects to the network and receives a 3-day lease. The user leaves the office and returns 4 days later. The DHCP server offers a new lease with a different IP address. Furthermore, the previous IP address leased to the laptop has been handed out in a new lease to another host. Both systems receive their DHCP lease but have no network connectivity.
    The problem occurs because, by default, PVLAN SVIs use Sticky-ARP and never age out their ARP cache. Since the laptop has a different IP address to MAC address mapping than recorded in the Sticky-ARP cache, a violation occurs and the switch prevents the new IP address from populating the ARP table on the switch.
    Sticky-ARP is a security feature that prevents one system from stealing another system's IP address.
    Log messages show the following:
    %IP-3-STCKYARPOVR: Attempt to overwrite Sticky ARP entry
    The Restrictions and Guidelines section of the 6500 PVLAN configuration guide suggests that Sticky-ARP is fundamental to Private VLANs, and that the only work-around for this problem is to create manual ARP entries for the new IP address. This is clearly not a viable workaround for this scenario.
    http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/122sx/swcg/pvlans.htm#wp1090979
    However, the 6500 Command Reference shows that Sticky ARP can be disabled, but makes no reference to PVLANs
    http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/122sx/cmdref/i1.htm#wp1091738
    There appear to be two sensible solutions to this problem:
    1) Disable Sticky-ARP on the 6500 for the PVLANs. Since DHCP Snooping and IP ARP Inspection are configured, Sticky-ARP can be disabled without relaxing network security. This assumes the 6500 will accept the command and will not break the existing PVLAN functionality.
    2) Extend the DHCP lease, to 45 or 90 days perhaps. This will catch most transient activity and keep the IP-address-to-MAC-address relationships the same wherever possible. The downside is that DHCP address pools could collect stale entries that would take the lease time to flush, reducing the overall available IPs in the pool.
    Has anyone else run into this problem? If so, what was your solution? Did you attempt either option above? I am planning on using solution #1 above, but I wanted to ping the NetPro community with this as I am sure we are not the first customer to run into this. Or are we??
    Regards,
    Brad

    Excellent question.
    Sticky-ARP is NOT intended to be a pain-in-the-butt that should be disabled right away; rather, it is a security mechanism that prevents a system from stealing an active IP address on the subnet and causing a lot of problems. Sticky-ARP works best on subnets that have all static IP addressing, where there is no expectation that a host would frequently change its IP address.
    Yes, I would recommend keeping Sticky-ARP on subnets with all static IP addresses.
    In DHCP subnets with no static IP addressing, DHCP Snooping and IP ARP Inspection provide the same security coverage that Sticky-ARP does, they prevent a system from claiming an illegitimate IP and MAC address. Furthermore, in DHCP subnets, it is reasonable to expect that a host would change its IP address from time to time when its lease expires.
    Sticky-ARP does not provide any additional security benefits when DHCP Snooping and IP ARP Inspection are active, and it only causes problems when a lease expires.
    When Cisco made Sticky-ARP the default behavior for Private VLANs, they certainly did not have DHCP in mind.
    In summary, it should be a best practice that when using Private VLANs on user segments with DHCP, DHCP Snooping and IP ARP Inspection are enabled and Sticky-ARP is disabled.
    Brad
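    Based on the 12.2SX command reference linked above, option 1 would look roughly like the following. This is a sketch only: the exact command and scope should be verified against your IOS release, and the VLAN number is an assumption:

    ```
    ! Sup720 global configuration (per the 12.2SX command reference)
    no ip sticky-arp

    ! DHCP Snooping and IP ARP Inspection stay enabled for the PVLAN subnets
    ip dhcp snooping
    ip dhcp snooping vlan 10
    ip arp inspection vlan 10
    ```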

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
    I'm curious whether anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
    Support currently exists for the 1000v with SCVMM using the Cisco 1000v software for MS Hyper-V; there is no such support for VMware ESX.
    I'm curious what others with VMware, the Nexus 1000v (or equivalent) and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
    The steps you have above are correct; you will need steps 1-4 to get it working correctly.  Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions
    1) I've seen multiple customer run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS.  It's enabled by default and not a configurable option.  There's no need to change anything within UCS in regards to MS NLB with the procedure above.  FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release 2.1.
    This is the correct method until we have the option of configuring static multicast MAC entries on
    the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.
    This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • Can a Nexus 1000v be configured to NOT do local switching in an ESX host?

    Before the big YES of "use an external Nexus switch and VN-Tag": the question is about a 3120 in a blade chassis that connects to the ESX hosts that have a 1000v installed. So the first hop outside the ESX host is not a Nexus box.
    I'm looking for whether this is possible, if so how, and if not, where that might be documented. I have a client whose security policy prohibits switching (yes, even on the same VLAN) within a host (in this case a blade server). Oh, and there is an insistence on using 3120s inside the blade chassis.
    Has to be the strangest request I have had in a while.
    Any data would be GREATLY appreciated!

    Thanks for the follow up.
    So by private VLANs, are you referring to "PVLAN":
    "PVLANs: PVLANs are a new feature available with the VMware vDS and the Cisco Nexus
    1000V Series. PVLANs provide a simple mechanism for isolating virtual machines in the
    same VLAN from each other. The VMware vDS implements PVLAN enforcement at the
    destination host. The Cisco Nexus 1000V Series supports a highly efficient enforcement
    mechanism that filters packets at the source rather than at the destination, helping ensure
    that no unwanted traffic traverses the physical network and so increasing the network
    bandwidth available to other virtual machines"

  • [Nexus 1000v] VEM can't be added to VSM

    hi all,
    Following my lab work, I have some problems with the Nexus 1000V: the VEM can't be added to the VSM.
    + The VSM is already installed on ESX 1 (standalone or HA), and you can see:
    Cisco_N1KV# show module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          active *
    Mod  Sw                Hw
    1    4.2(1)SV1(4a)     0.0
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    10.4.110.123     NA                                    NA
    + On ESX2, where the VEM is installed:
    [root@esxhoadq ~]# vem status
    VEM modules are loaded
    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0         128         3           128               1500    vmnic0
    VEM Agent (vemdpa) is running
    [root@esxhoadq ~]#
    Any advice on this?
    Thanks so much.

    Hi,
    I'm having a similar issue: the VEM installed on the ESXi host is not showing up on the VSM.
    Please check from the following what could be wrong.
    This is the VEM status:
    ~ # vem status -v
    Package vssnet-esx5.5.0-00000-release
    Version 4.2.1.1.4.1.0-2.0.1
    Build 1
    Date Wed Jul 27 04:42:14 PDT 2011
    Number of PassThru NICs are 0
    VEM modules are loaded
    Switch Name     Num Ports   Used Ports Configured Ports MTU     Uplinks  
    vSwitch0         128         4           128               1500   vmnic0  
    DVS Name         Num Ports   Used Ports Configured Ports MTU     Uplinks  
    VSM11           256         40         256               1500   vmnic2,vmnic1
    Number of PassThru NICs are 0
    VEM Agent (vemdpa) is running
    ~ # vemcmd show port    
    LTL   VSM Port Admin Link State PC-LTL SGID Vem Port
       18               UP   UP   F/B*     0       vmnic1
       19             DOWN   UP   BLK       0       vmnic2
    * F/B: Port is BLOCKED on some of the vlans.
    Please run "vemcmd show port vlans" to see the details.
    ~ # vemcmd show trunk
    Trunk port 6 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 16 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 18 native_vlan 1 CBL 0
    vlan(111) cbl 1, vlan(112) cbl 1,
    ~ # vemcmd show port
    LTL   VSM Port Admin Link State PC-LTL SGID Vem Port
       18               UP   UP   F/B*     0       vmnic1
       19            DOWN   UP   BLK       0       vmnic2
    * F/B: Port is BLOCKED on some of the vlans.
    Please run "vemcmd show port vlans" to see the details.
    ~ # vemcmd show port vlans
                           Native VLAN   Allowed
    LTL   VSM Port Mode VLAN   State Vlans
       18             T       1   FWD   111-112
       19             A       1   BLK   1
    ~ # vemcmd show port
    LTL   VSM Port Admin Link State PC-LTL SGID Vem Port
       18               UP   UP   F/B*     0       vmnic1
       19             DOWN   UP   BLK       0       vmnic2
    * F/B: Port is BLOCKED on some of the vlans.
    Please run "vemcmd show port vlans" to see the details.
    ~ # vemcmd show port vlans
                           Native VLAN   Allowed
    LTL   VSM Port Mode VLAN   State Vlans
       18             T       1   FWD   111-112
       19             A       1   BLK   1
    ~ # vemcmd show trunk
    Trunk port 6 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 16 native_vlan 1 CBL 1
    vlan(1) cbl 1, vlan(111) cbl 1, vlan(112) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
    Trunk port 18 native_vlan 1 CBL 0
    vlan(111) cbl 1, vlan(112) cbl 1,
    ~ # vemcmd show card
    Card UUID type 2: ebd44e72-456b-11e0-0610-00000000108f
    Card name: esx
    Switch name: VSM11
    Switch alias: DvsPortset-0
    Switch uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
    Card domain: 1
    Card slot: 1
    VEM Tunnel Mode: L2 Mode
    VEM Control (AIPC) MAC: 00:02:3d:10:01:00
    VEM Packet (Inband) MAC: 00:02:3d:20:01:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
    VEM SPAN MAC: 00:02:3d:30:01:00
    Primary VSM MAC : 00:50:56:ac:00:42
    Primary VSM PKT MAC : 00:50:56:ac:00:44
    Primary VSM MGMT MAC : 00:50:56:ac:00:43
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 10.1.240.30
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 111
    Card packet VLAN: 112
    Card Headless Mode : Yes
           Processors: 8
    Processor Cores: 4
    Processor Sockets: 1
    Kernel Memory:   16712336
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    ~ #
    On VSM
    VSM11# sh svs conn
    connection vcenter:
       ip address: 10.1.240.38
       remote port: 80
       protocol: vmware-vim https
       certificate: default
       datacenter name: New Datacenter
       admin:  
       max-ports: 8192
       DVS uuid: c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78
       config status: Enabled
       operational status: Connected
       sync status: Complete
       version: VMware vCenter Server 4.1.0 build-345043
    VSM11# sh svs ?
    connections Show connection information
    domain       Domain Configuration
    neighbors   Svs neighbors information
    upgrade     Svs upgrade information
    VSM11# sh svs dom
    SVS domain config:
    Domain id:   1  
    Control vlan: 111
    Packet vlan: 112
    L2/L3 Control mode: L2
    L3 control interface: NA
    Status: Config push to VC successful.
    VSM11# sh port
               ^
    % Invalid command at '^' marker.
    VSM11# sh run
    !Command: show running-config
    !Time: Sun Nov 20 11:35:52 2011
    version 4.2(1)SV1(4a)
    feature telnet
    username admin password 5 $1$QhO77JvX$A8ykNUSxMRgqZ0DUUIn381 role network-admin
    banner motd #Nexus 1000v Switch#
    ssh key rsa 2048
    ip domain-lookup
    ip domain-lookup
    hostname VSM11
    snmp-server user admin network-admin auth md5 0x389a68db6dcbd7f7887542ea6f8effa1
    priv 0x389a68db6dcbd7f7887542ea6f8effa1 localizedkey
    vrf context management
    ip route 0.0.0.0/0 10.1.240.254
    vlan 1,111-112
    port-channel load-balance ethernet source-mac
    port-profile default max-ports 32
    port-profile type ethernet Unused_Or_Quarantine_Uplink
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
    vmware port-group
    shutdown
    description Port-group created for Nexus1000V internal usage. Do not use.
    state enabled
    port-profile type ethernet system-uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 111-112
    no shutdown
    system vlan 111-112
    description "System profile"
    state enabled
    port-profile type vethernet servers11
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Data Profile for VM Traffic"
    port-profile type ethernet vm-uplink
    vmware port-group
    switchport mode access
    switchport access vlan 11
    no shutdown
    description "Uplink profile for VM traffic"
    state enabled
    vdc VSM11 id 1
    limit-resource vlan minimum 16 maximum 2049
    limit-resource monitor-session minimum 0 maximum 2
    limit-resource vrf minimum 16 maximum 8192
    limit-resource port-channel minimum 0 maximum 768
    limit-resource u4route-mem minimum 32 maximum 32
    limit-resource u6route-mem minimum 16 maximum 16
    limit-resource m4route-mem minimum 58 maximum 58
    limit-resource m6route-mem minimum 8 maximum 8
    interface mgmt0
    ip address 10.1.240.124/24
    interface control0
    line console
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
    svs-domain
    domain id 1
    control vlan 111
    packet vlan 112
    svs mode L2
    svs connection vcenter
    protocol vmware-vim
    remote ip address 10.1.240.38 port 80
    vmware dvs uuid "c4 be 2c 50 36 c5 71 97-44 41 1f c0 43 8e 45 78" datacenter-name New Datacenter
    max-ports 8192
    connect
    vsn type vsg global
    tcp state-checks
    vnm-policy-agent
    registration-ip 0.0.0.0
    shared-secret **********
    log-level
    thank you
    Michel

  • Nexus 1000v VEM module bouncing between hosts

    I'm receiving these error messages on my N1KV and don't know how to fix them.  I've tried removing, rebooting, and reinstalling host B's VEM, but that did not fix the issue.  How do I debug this?
    My setup,
    Two physical hosts running ESXi 5.1, the vCenter appliance, and the N1KV with two system uplinks and two iSCSI uplinks for each host.  Let me know if you need more output from logs or commands. Thanks.
    N1KV# 2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:07 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:08 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:09 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:13 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:16 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:17 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 17 18:18:21 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_UNEXP_NODEID_REQ: Removing VEM 3 (Unexpected Node Id Request)
    2013 Jun 17 18:18:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 17 18:18:28 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 17 18:18:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 17 18:18:44 N1KV %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable
    N1KV# sh module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V          active *
    3    248    Virtual Ethernet Module           NA                  ok
    Mod  Sw                  Hw     
    1    4.2(1)SV2(1.1a)     0.0                                             
    2    4.2(1)SV2(1.1a)     0.0                                             
    3    4.2(1)SV2(1.1a)     VMware ESXi 5.1.0 Releasebuild-838463 (3.1)     
    Mod  MAC-Address(es)                         Serial-Num
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
    3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
    Mod  Server-IP        Server-UUID                           Server-Name
    1    192.168.54.2     NA                                    NA
    2    192.168.54.2     NA                                    NA
    3    192.168.51.100   03000200-0400-0500-0006-000700080009  NA
    * this terminal session
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-1
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:b6:0c:b2
    Primary VSM PKT MAC : 00:50:56:b6:35:3f
    Primary VSM MGMT MAC : 00:50:56:b6:d5:12
    Standby VSM CTRL MAC : 00:50:56:b6:96:f2
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ! ports 1-6 connected to physical host A
    interface GigabitEthernet1/0/1
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 1 mode active
    ! ports 7-12 connected to phys host B
    interface GigabitEthernet1/0/7
    description VMWARE ESXi Trunk
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree bpdufilter enable
    spanning-tree bpduguard enable
    channel-group 2 mode active
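    On the Nexus 1000V side, the uplink port-profile has to carry the same VLANs the physical trunk does. A minimal sketch, with the caveat that the profile name and VLAN list are assumptions based on the VLANs seen elsewhere in this thread, and `mode active` is chosen to match the LACP channel-groups above:

    ```
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 51-57,110
      channel-group auto mode active
      no shutdown
      system vlan 51,52
      state enabled
    ```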

    OK, after deleting the N1KV VMs and vCenter and then reinstalling everything, I got the error again:
    N1KV# 2013 Jun 18 17:48:12 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:13 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:16 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:22 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:23 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:48:34 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:48:41 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:48:42 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:49:03 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:10 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:11 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:29 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:35 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:49:36 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.51.100 detected as module 3
    2013 Jun 18 17:49:53 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    2013 Jun 18 17:49:59 N1KV %VEM_MGR-2-VEM_MGR_REMOVE_STATE_CONFLICT: Removing VEM 3 due to state conflict VSM(NodeId Processed), VEM(ModIns End Rcvd)
    2013 Jun 18 17:50:00 N1KV %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-VEM_MGR_DETECTED: Host 192.168.52.100 detected as module 3
    2013 Jun 18 17:50:05 N1KV %VEM_MGR-2-MOD_ONLINE: Module 3 is online
    Host A
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: e6 dc 36 50 c0 a9 d9 a5-0b 98 fb 90 e1 fc 99 af
    Card domain: 2
    Card slot: 1
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 52
    VEM Control (AIPC) MAC: 00:02:3d:10:02:00
    VEM Packet (Inband) MAC: 00:02:3d:20:02:00
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:00
    VEM SPAN MAC: 00:02:3d:30:02:00
    Primary VSM MAC : 00:50:56:b6:96:f2
    Primary VSM PKT MAC : 00:50:56:b6:11:b6
    Primary VSM MGMT MAC : 00:50:56:b6:48:c6
    Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
    Management IPv4 address: 192.168.52.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : Yes
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669764
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: False
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: No
    Host B
    ~ # vemcmd show card
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    Card name:
    Switch name: N1KV
    Switch alias: DvsPortset-0
    Switch uuid: bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3
    Card domain: 2
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 51
    VEM Control (AIPC) MAC: 00:02:3d:10:02:02
    VEM Packet (Inband) MAC: 00:02:3d:20:02:02
    VEM Control Agent (DPA) MAC: 00:02:3d:40:02:02
    VEM SPAN MAC: 00:02:3d:30:02:02
    Primary VSM MAC : 00:50:56:a8:f5:f0
    Primary VSM PKT MAC : 00:50:56:a8:3c:62
    Primary VSM MGMT MAC : 00:50:56:a8:b4:a4
    Standby VSM CTRL MAC : 00:50:56:a8:30:d5
    Management IPv4 address: 192.168.51.100
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 192.168.54.2
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 4
      Processor Cores: 4
    Processor Sockets: 1
      Kernel Memory:   16669760
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    I used the Nexus 1000V Java installer, so I don't know why it keeps assigning the same UUID, nor do I know how to change it.
    Here is the other output you requested:
    N1KV# show vms internal info dvs
      DVS INFO:
    DVS name: [N1KV]
          UUID: [bf fb 28 50 1b 26 dd ae-05 bd 4e 48 2e 37 56 f3]
          Description: [(null)]
          Config version: [1]
          Max ports: [8192]
          DC name: [Galaxy]
         OPQ data: size [1121], data: [data-version 1.0
    switch-domain 2
    switch-name N1KV
    cp-version 4.2(1)SV2(1.1a)
    control-vlan 1
    system-primary-mac 00:50:56:a8:f5:f0
    active-vsm packet mac 00:50:56:a8:3c:62
    active-vsm mgmt mac 00:50:56:a8:b4:a4
    standby-vsm ctrl mac 0050-56a8-30d5
    inband-vlan 1
    svs-mode L3
    l3control-ipaddr 192.168.54.2
    upgrade state 0 mac 0050-56a8-30d5 l3control-ipv4 null
    cntl-type-mcast 0
    profile dvportgroup-26 trunk 1,51-57,110
    profile dvportgroup-26 mtu 9000
    profile dvportgroup-27 access 51
    profile dvportgroup-27 mtu 1500
    profile dvportgroup-27 capability l3control
    profile dvportgroup-28 access 52
    profile dvportgroup-28 mtu 1500
    profile dvportgroup-28 capability l3control
    profile dvportgroup-29 access 53
    profile dvportgroup-29 mtu 1500
    profile dvportgroup-30 access 54
    profile dvportgroup-30 mtu 1500
    profile dvportgroup-31 access 55
    profile dvportgroup-31 mtu 1500
    profile dvportgroup-32 access 56
    profile dvportgroup-32 mtu 1500
    profile dvportgroup-34 trunk 220
    profile dvportgroup-34 mtu 9000
    profile dvportgroup-35 access 220
    profile dvportgroup-35 mtu 1500
    profile dvportgroup-35 capability iscsi-multipath
    end-version 1.0
          push_opq_data flag: [1]
    show svs neighbors
    Active Domain ID: 2
    AIPC Interface MAC: 0050-56a8-f5f0
    Inband Interface MAC: 0050-56a8-3c62
    Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)
    0050-56a8-30d5     VSM         2         0201      1020.45
    0002-3d40-0202     VEM         2         0302         1.33
    I cannot add Host A to the N1KV; it errors out with:
    vDS operation failed on host 192.168.52.100, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception
    Host B (192.168.51.100) was added fine; then I moved a VMkernel interface to the N1KV, which brought up the VEM, and I got the VEM flapping errors.
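    Worth noting: both hosts report the same Card UUID (03000200-0400-0500-0006-000700080009, a well-known placeholder value), so the VSM keeps treating them as the same module 3, which would explain the flapping. A quick check, sketched from the commands already used in this thread; the real fix for a duplicate SMBIOS UUID is usually a BIOS/asset-tag correction from the hardware vendor:

    ```
    # run on each ESXi host; identical UUIDs here mean the VSM
    # cannot tell the two VEMs apart
    ~ # vemcmd show card | grep 'Card UUID'
    Card UUID type  2: 03000200-0400-0500-0006-000700080009
    ```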

  • Nexus 1000v repo is not available

    Hi everyone.
    The Cisco Yum repo for the Nexus 1000V is not available at the moment. I am wondering: is this expected because Cisco has finished its experiment with the free Nexus 1000V, or do I need to contact someone (who?) to ask them to fix the problem?
    PS Link to the repo: https://cnsg-yum-server.cisco.com/yumrepo

    Let's set the record straight here - to avoid confusion.
    1. VEMs will continue to forward traffic in the event one or both VSMs are unavailable - this requires the VEM to remain online and not reboot while both VSMs are offline. VSM communication is only required for config changes (and LACP negotiation prior to 1.4).
    2. If there is no VSM reachable and a VEM is rebooted, only the system VLANs will come up into a forwarding state. All other non-system VLANs will remain down. This is to facilitate the chicken-and-egg problem of a VEM needing to communicate with a VSM initially to obtain its programming.
    The ONLY VLANs and vEth profiles that should be set as system VLANs are:
    1000v-Control
    1000v-Packet
    Service Console/VMkernel for Mgmt
    IP Storage (iSCSI or NFS)
    Everything else should not be defined as a system VLAN, including vMotion, which is a common mistake.
    **Remember that for a vEth port profile to behave like a system profile, it must be defined on BOTH the vEth and Eth port profiles. It's a two-factor check. This allows port profiles that may not be critical, yet share the same VLAN ID, to behave differently.
    There are a total of 16 profiles that can include system VLANs. If you exceed this, you can potentially run into issues where the opaque data pushed from vCenter is truncated, causing programming errors on your VEMs. Adhering to the limitations above should never lead to this situation.
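    To illustrate the two-factor check, here is a hedged sketch (profile names and VLAN IDs are examples, not taken from this thread): the same `system vlan` must appear under both the Ethernet uplink profile and the vEthernet profile before the vEth port actually behaves as a system port.

    ```
    port-profile type ethernet SYSTEM-UPLINK
      switchport mode trunk
      switchport trunk allowed vlan 10,20
      system vlan 10,20
      no shutdown
      state enabled
    port-profile type vethernet MGMT-VMK
      switchport mode access
      switchport access vlan 10
      system vlan 10
      no shutdown
      state enabled
    ```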
    Regards,
    Robert

  • Cisco Nexus 1000v stops inheriting

    Guys,
    I have an issue with the Nexus 1000v: the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port profile, which means that not all VLANs get presented down that trunk port. It's like it gets completely out of sync somehow. An example is below:
    THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
    show int trunk
    Po9        100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
    sh run int po9
    interface port-channel9
      inherit port-profile DATA-UP
      switchport trunk allowed vlan add 438-439,446,449-450,591,850 (the system added this, not the user)
    THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
    show int trunk
    Po2        100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
    sh run int po2
    interface port-channel2
        inherit port-profile DATA-UP
    I have no idea why this keeps happening. When I remove the manual static trunk configuration on Po9, everything is fine; a few days later, it happens again. It's not just Po9; there are at least three port-channels that it affects.
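    For reference, "removing the manual static trunk configuration" means clearing the interface-level override so the port-channel falls back to pure inheritance; roughly:

    ```
    interface port-channel9
      no switchport trunk allowed vlan
      ! Po9 should now inherit the allowed-VLAN list from DATA-UP again
    ```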
    My DATA-UP link port-profile configuration looks like this, and all port-channels should reflect the allowed VLANs, but some are way off.
    port-profile type ethernet DATA-UP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
      channel-group auto mode on sub-group cdp
      no shutdown
      state enabled
    The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
    The Cisco Nexus version is 4.2.1
    Anyone seen this problem?
    Cheers

    Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure. 
    If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
    There are two main guides.  One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade.  They're not very long guides, and should be easy to follow.
    1000v Upgrade Guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
    VEM Upgrade Guides:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
    In a nutshell the procedure looks like this:
    -Backup of VSM Config
    -Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
    -Upgrade standby VSM
    -Perform switchover
    -Upgrade image on old active (current standby)
    -Upgrade VEM modules
    One decision you'll need to make is whether to use Update Manager for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when. It will allow you to migrate VMs off the host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. It is a non-stop process, so there's a little less control from that perspective. My own preference: in any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
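    Condensed to commands, the VSM half of that procedure looks roughly like this (image filenames are placeholders; always follow the upgrade guide for your exact versions):

    ```
    copy running-config startup-config
    ! on the standby VSM first:
    install all kickstart bootflash:n1000v-kickstart.bin system bootflash:n1000v-system.bin
    ! once the standby is back up on the new code:
    system switchover
    ! then run the same install on the new standby (the old active)
    ```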
    Let me know if you have any other questions.
    Regards,
    Robert

  • Port-channel with Private VLANs on Nexus1000v

    Hi all,
    The Nexus 1000V L2 Switching Guide says that private VLANs are not supported on port-channel ports.
    AFAIK, if you have two ports between an ESX VEM and a physical switch, and both ports are configured as 802.1Q trunks carrying the same VLANs, then when the port currently carrying the traffic fails, the other port does not fail over automatically. This is mentioned in the "Nexus 1000v Deployment Guide version 2" as:
    "Individual Uplinks : A standard uplink is an uplink that is not a member of a PortChannel from the VEM to a physical switch. It provides
    no capability to load balance across multiple standard uplink links and no high-availability characteristics. When a  standard uplink fails, no secondary link exists to take over. Defining two standard uplinks to carry the same VLAN involves the risk of creating loops within the environment and is an unsupported configuration. Cisco NX-OS will post warnings when such a condition occurs. "
    Does anyone have any idea how to make the attached topology work? Do I have to forward each VLAN on a different port? If I do that, how am I going to manage the different VLANs and still have the hosts in the same primary VLAN with the same IP subnet?
    Thanks in advance.
    Dumlu

    Hi,
    You can't have M and F ports in a single port-channel, irrespective of what code version you are running; it will throw an error at you.
    Nor can you have an M1 port-channel on one side and an F port-channel on the other side.
