System VLANs on n1000v

Hi all,
When deploying the VSM on a Standard Switch, is it a requirement to include system VLANs for my packet and control VLANs in the port-profiles on the DVS?
Many thanks

Correct. You only want system VLANs on the VLANs that are needed to bootstrap the 1000v.
The normal port-profiles used with your VMs typically don't require a system VLAN.
In fact, if you enable a system VLAN on an interface that is going to implement access lists or QoS and there is a problem programming the interface, the port will still forward on that system VLAN, but none of the access-list or QoS settings will be in place.
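For reference, here is a minimal sketch of what that bootstrap piece typically looks like in L2 mode. The domain ID and the VLAN numbers (900 for control, 901 for packet) are examples only; substitute your own:

svs-domain
  domain id 100
  control vlan 900
  packet vlan 901
  svs mode L2
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 900-901
  no shutdown
  system vlan 900-901
  state enabled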

Similar Messages

  • Only system vlans forward traffic on 1000v

    I am trying to migrate to a Nexus 1000v vDS, but only VMs in the system VLAN can forward traffic. I do not want to make my voice VLAN a system VLAN, but that is the only way I can get a VM in that VLAN to work properly. I have a host with its vmk in the L3Control port group. From the VSM, "show module" shows VEM 3 with an "ok" status. I currently have only 1 NIC under the vDS's control. My VMs using the VM_Network port group work fine and can forward traffic normally. When I put a VM in the Voice_Network port group I lose communication with it. If I add VLAN 5 as a system VLAN to my Uplink port profile, then the VMs in Voice_Network work properly. I thought you shouldn't create system VLANs for every VLAN and should use them only for critical management functions, so I would rather not make it a system VLAN. Below is my n1k config. The upstream switch is a 2960X with the "switchport mode trunk" command. Am I missing something that is preventing VLAN 5 from communicating over the Uplink port profile?
    port-profile type ethernet Unused_Or_Quarantine_Uplink
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet VM_Network
      vmware port-group
      switchport mode access
      switchport access vlan 1
      no shutdown
      system vlan 1
      max-ports 256
      description VLAN 1
      state enabled
    port-profile type vethernet L3-control-vlan1
      capability l3control
      vmware port-group L3Control
      switchport mode access
      switchport access vlan 1
      no shutdown
      system vlan 1
      state enabled
    port-profile type ethernet iSCSI-50
      vmware port-group "iSCSI Uplink"
      switchport mode trunk
      switchport trunk allowed vlan 50
      switchport trunk native vlan 50
      mtu 9000
      channel-group auto mode active
      no shutdown
      system vlan 50
      state enabled
    port-profile type vethernet iSCSI-A
      vmware port-group
      switchport access vlan 50
      switchport mode access
      capability iscsi-multipath
      no shutdown
      system vlan 50
      state enabled
    port-profile type vethernet iSCSI-B
      vmware port-group
      switchport access vlan 50
      switchport mode access
      capability iscsi-multipath
      no shutdown
      system vlan 50
      state enabled
    port-profile type ethernet Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,5
      no shutdown
      system vlan 1
      state enabled
    port-profile type vethernet Voice_Network
      vmware port-group
      switchport mode access
      switchport access vlan 5
      no shutdown
      max-ports 256
      description VLAN 5
      state enabled

    Below is the output you requested. Thank you.
    ~ # vemcmd show card
    Card UUID type  2: 4c4c4544-004c-5110-804a-b9c04f564831
    Card name: synergvm5
    Switch name: synergVSM
    Switch alias: DvsPortset-0
    Switch uuid: 7d e9 0d 50 b3 3b 25 47-64 14 61 c0 3f c0 7b d9
    Card domain: 4094
    Card slot: 3
    VEM Tunnel Mode: L3 Mode
    L3 Ctrl Index: 49
    L3 Ctrl VLAN: 1
    VEM Control (AIPC) MAC: 00:02:3d:1f:fe:02
    VEM Packet (Inband) MAC: 00:02:3d:2f:fe:02
    VEM Control Agent (DPA) MAC: 00:02:3d:4f:fe:02
    VEM SPAN MAC: 00:02:3d:3f:fe:02
    Primary VSM MAC : 00:50:56:aa:70:b9
    Primary VSM PKT MAC : 00:50:56:aa:70:bb
    Primary VSM MGMT MAC : 00:50:56:aa:70:ba
    Standby VSM CTRL MAC : 00:50:56:aa:70:b6
    Management IPv4 address: 172.30.2.64
    Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
    Primary L3 Control IPv4 address: 172.30.100.1
    Secondary VSM MAC : 00:00:00:00:00:00
    Secondary L3 Control IPv4 address: 0.0.0.0
    Upgrade : Default
    Max physical ports: 32
    Max virtual ports: 216
    Card control VLAN: 1
    Card packet VLAN: 1
    Control type multicast: No
    Card Headless Mode : No
           Processors: 16
      Processor Cores: 8
    Processor Sockets: 2
      Kernel Memory:   62904468
    Port link-up delay: 5s
    Global UUFB: DISABLED
    Heartbeat Set: True
    PC LB Algo: source-mac
    Datapath portset event in progress : no
    Licensed: Yes
    ~ # vemcmd show port
      LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
       24     Eth3/8     UP   UP    FWD       0          vmnic7
       49      Veth1     UP   UP    FWD       0            vmk1
       50      Veth2     UP   UP    FWD       0        XP-Voice.eth0
       51      Veth3     UP   UP    FWD       0        synergPresence.eth0
    ~ # vemcmd show port vlans
                              Native  VLAN   Allowed
      LTL   VSM Port  Mode    VLAN    State* Vlans
       24     Eth3/8   T          1   FWD    1
       49      Veth1   A          1   FWD    1
       50      Veth2   A          1   FWD    1
       51      Veth3   A          5   FWD    5
    * VLAN State: VLAN State represents the state of allowed vlans.
    ~ # vemcmd show bd
    Number of valid BDS: 10
    BD 1, vdc 1, vlan 1, swbd 1, 5 ports, ""
    Portlist:
    BD 2, vdc 1, vlan 3972, swbd 3972, 0 ports, ""
    Portlist:
    BD 3, vdc 1, vlan 3970, swbd 3970, 0 ports, ""
    Portlist:
    BD 4, vdc 1, vlan 3969, swbd 3969, 2 ports, ""
    Portlist:
          8
          9
    BD 5, vdc 1, vlan 3968, swbd 3968, 3 ports, ""
    Portlist:
          1  inban
          5  inband port securit
         11
    BD 6, vdc 1, vlan 3971, swbd 3971, 2 ports, ""
    Portlist:
         14
         15
    BD 7, vdc 1, vlan 5, swbd 5, 1 ports, ""
    Portlist:
         51  synergPresence.eth0
    BD 8, vdc 1, vlan 50, swbd 50, 0 ports, ""
    Portlist:
    BD 9, vdc 1, vlan 77, swbd 77, 0 ports, ""
    Portlist:
    BD 10, vdc 1, vlan 199, swbd 199, 0 ports, ""
    Portlist:
    ~ #

  • System VLAN and port-profile

    I have an uplink profile which includes system VLANs 50, 60, and 220.
    Then I also have a port profile for VLANs 50 and 60,
    but when I connect a VM to this port group, I do not get any connection.
    However, other VLANs that are not set as system VLANs on the uplink work fine on their own port groups.
    Any idea why?

    Here is an example from the configs I use.
    port-profile type ethernet system-uplink-03
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 1034
      switchport trunk allowed vlan 1031-1034
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 1031-1033
      description Development system profile for critical ports and vm traffic
      state enabled
    VLANs 1031-1034 are VMware management, IP storage, and vMotion; in this instance vCenter was in a different environment. I have, I think, about 12 different system uplink port profiles.
    Here is a port-profile:
    port-profile type vethernet 03-development-vmsc
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 1031
      no shutdown
      system vlan 1031
      max-ports 32
      description 03 Development ESXi Management
      state enabled
    Hope this helps.

  • System VLANs for AD, vCentre...

    Hi All,
    In the event that my entire data centre were to shut down, is it recommended that the VLANs for AD, vCentre, and the vCentre DB be configured as system VLANs, so that when everything powers up the VEM modules can actually communicate with these systems in order to get their configs? I am aware that system VLANs pretty much negate any security applied to them; however, I was looking for the best practice.
    thanks,

    Yeah, it wouldn't be a bad idea. Just make sure to add the system VLAN to both the eth and veth port-profiles.
    And remember you can only have 32 port-profiles with the system vlan command in them.
    Also understand that when the VSM is not available to program the VEMs and a system VLAN is present on the port-profiles, only basic connectivity is allowed. No higher-level features like ACLs or QoS will be working.
    Let us know if you need more clarification. You can also play with the concept by building a small lab environment. The great thing about the N1KV is that it works in a nested ESXi environment, so you can build an entire lab on one host.
    louis
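    To make that concrete, marking a hypothetical vCenter VLAN 100 as a system VLAN would mean adding it to both profile types, something like the sketch below. Names and numbers are made up, and note that the system vlan command takes the complete list, so re-enter any existing system VLANs along with the new one:
    port-profile type vethernet vCenter-Critical
      vmware port-group
      switchport mode access
      switchport access vlan 100
      system vlan 100
      no shutdown
      state enabled
    port-profile type ethernet system-uplink
      switchport trunk allowed vlan add 100
      system vlan 100,900-901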

  • Unix system VLAN frame tagging?

    I realize this isn't a Unix-related forum, but I suspect only networking professionals will understand my quandary...
    A number of Unix variants (all the BSDs, Linux, ...) offer 802.1Q frame tagging through the vlan(4) interface.
    Was the motivation for these Unix camps to implement their own frame tagging that bridges at the time were not able to assign VLANs to frame traffic?
    Any insight which can be shared would be appreciated.

    Remember that the core of the popular VMware products is also a Linux system. You certainly want VLAN support in a virtual environment. In my opinion, they are simply following the trends & standards development in networking. It is quite normal to expect VLAN support on Unix boxes.
    Leo

  • Nexus 1000v guidance

    OK, I need help. Is there anyone here who can help me get my head wrapped around the 1000v?
    All the pictures and words in the webosphere don't seem to be getting it through my thick skull.
    If I can install the n1000v and use all the same vmnetwork, where does the logic kick in to start assigning VLANs to packet, control, and management? What about the system VLAN and uplink settings after you start hacking up the VLAN assignments? Let's say I build a 4-host cluster with one uplink for VM networking and the other for NFS. What would the uplink profile look like after VLAN assignments on packet and control for each of those NICs?
    I think if I can at least get the uplink configured and running correctly, the vethernet stuff will fall into place.

    Hi,
    See if the following deployment guide answers a few questions for you:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
    ./Abhinav

  • Nexus 1K VEM module shutdown (with DELL BLADE server)

    Hello, this is Vince.
    I am doing a PoC with an important customer.
    Can anyone help me explain what the problem is?
    I have found a couple of strange situations with a Nexus 1000V on Dell blade servers.
    The network diagram is as below.
    I installed ESXi on each of the two Dell blade servers.
    As the diagram shows, each server is connected to a Cisco N5K via an M8024 Dell blade switch.
    - Two N1KV VMs are installed on the ESXi hosts (as primary and secondary, of course).
    - The N5K is connected to the M8024 in vPC.
    - The VSM and VEM check each other via the Layer 3 control interface.
    - The uplink port-profile's port-channel load balancing is mac-pinning.
    interface control0
      ip address 10.10.100.10/24
    svs-domain
      domain id 1
      control vlan 1
      packet vlan 1
      svs mode L3 interface control0
    port-profile type ethernet Up-Link
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1-2,10,16,30,77-78,88,100,110,120-121,130
      switchport trunk allowed vlan add 140-141,150,160-161,166,266,366
      service-policy type queuing output N1KV_SVC_Uplink
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 1,10,30,100
      state enabled
    n1000v# show module
    Mod  Ports  Module-Type                       Model               Status
    1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V          active *
    3    332    Virtual Ethernet Module           NA                  ok
    4    332    Virtual Ethernet Module           NA                  ok
    Mod  Sw                  Hw     
    1    4.2(1)SV2(2.1a)     0.0                                             
    2    4.2(1)SV2(2.1a)     0.0                                             
    3    4.2(1)SV2(2.1a)     VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)    
    4    4.2(1)SV2(2.1a)     VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)    
    Mod  Server-IP        Server-UUID                           Server-Name
    1    10.10.10.10      NA                                    NA
    2    10.10.10.10      NA                                    NA
    3    10.10.10.101     4c4c4544-0038-4210-8053-b5c04f485931  10.10.10.101
    4    10.10.10.102     4c4c4544-0043-5710-8053-b4c04f335731  10.10.10.102
    Let me explain the strange things that happened.
    If I move the primary N1KV from the ESXi host of module 3 to the ESXi host of module 4, the VEM suddenly shuts down.
    Here are the syslogs.
    2013 Dec 20 15:45:22 n1000v %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 4 (heartbeats lost)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Ethernet4/7 is detached (module removed)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Ethernet4/8 is detached (module removed)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet1 is detached (module removed)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet17 is detached (module removed)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet9 is detached (module removed)
    2013 Dec 20 15:45:22 n1000v %VIM-5-IF_DETACHED_MODULE_REMOVED: Interface Vethernet37 is detached (module removed)
    2013 Dec 20 15:46:53 n1000v %VEM_MGR-2-MOD_OFFLINE: Module 4 is offline
    If I want to make it work again, I have to do two things.
    First, the vSwitch's load balancing must be set to source MAC hash (Port ID is the default).
    Second, the order of the switch failover is very important.
    If I change this order, the VEM goes offline very soon.
    Here is the screen capture of these options (you may not understand the Korean text).
    In my opinion, the main problem is the link between the ESXi hosts and the M8024.
    As you saw, each ESXi host is connected to the two M8024 Dell blade switches separately.
    I read the manual on the N1K's uplink load balancing. Even though there are 16 different port-channel load-balancing methods, only src-mac should be used if the upstream switches have no port-channel support.
    But I don't know exactly why this situation happens.
    Can anyone help me make it work better?
    Thanks in advance.
    Best Regards,
    Vince

    There's not enough information to determine the reason from those two outputs alone.  All they tell us is that the VSM is removing/attaching the VEM.
    The normal cause for the VEM to flap is a problem with the Control VLAN communication.  The loss of 6 consecutive heartbeats will cause the VEM to detach from the VSM.  We need to isolate the reason why.
    -Which version of 1000v & ESX?
    -Are multiple VEMs affected or just one?
    -Are the VSM's interfaces hosted on the DVS or a vSwitch?
    -What is the network topology between the VEM and VSM (primarily the control VLAN)?
    -Do you have a Cisco SR #? I can take a look into it.  TAC is your best course of action for an issue like this.  There will likely need to be live troubleshooting in your network environment to determine the cause.
    Regards,
    Robert
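    As a side note, a few commands commonly used to isolate VSM-VEM control connectivity (command names as in the 4.2(1)SV2 releases; verify against your version):
    show svs domain          (VSM: domain ID, L2/L3 mode, status)
    show svs connections     (VSM: connection state to vCenter)
    show module vem mapping  (VSM: which hosts map to which VEM slots)
    vemcmd show card         (ESXi host: the VEM's view of the domain and VSM MACs)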

  • Email in Inbox keeps dropping off.


    hi Robert,
    thanks a lot for your reply.
    here is the uplink port profile config:
    port-profile type ethernet HostUplink
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 1
      switchport trunk allowed vlan 1-3967,4048-4093
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 803,832,900-901
      state enabled
    VLAN 803 is management for both ESX and VSM, 900 is control, and 901 is packet.
    My NC553i driver version is different from what you suggested; it is 2.102.518.0. Any reason why you suggest the version you specified?
    I have been trying to contact HP support; to be honest, not saying overall, but from my experience they are much less helpful than Cisco support. I will keep trying.
    thanks
    ming

  • Port quarantined due to Cmd Failure, Failure applying command channel-group mac-pinning

    Hi,
    we run a UCS domain with several servers and two FIs. Each blade has 18 vNICs, 9 to Fabric-A and 9 to Fabric-B. Now for some weird reason we get the error for our management VLAN (5), which has two dedicated vNICs: interface x/y has been quarantined... --> failure when enabling port-channel mac-pinning
    sh logging logfile | grep INTER:
    2014 May 21 07:28:14 be-egt-sw-p8 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED: Assigning port channel number 81 for member ports Ethernet3/1
    2014 May 21 07:28:14 be-egt-sw-p8 %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/1 has been quarantined due to Cmd Failure
    2014 May 21 07:28:15 be-egt-sw-p8 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED: Assigning port channel number 81 for member ports Ethernet3/10
    2014 May 21 07:28:15 be-egt-sw-p8 %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface Ethernet3/10 has been quarantined due to Cmd Failure
    sh accounting log:
    Wed May 21 07:28:14 2014:update:ppm.14356:admin:configure terminal ; interface Ethernet3/1 (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14356:admin:configure terminal ; interface Ethernet3/1 ; no switchport trunk allowed vlan (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14356:admin:configure terminal ; interface Ethernet3/1 ; no switchport mode trunk (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14364:admin:configure terminal ; interface Ethernet3/1 (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14364:admin:configure terminal ; interface Ethernet3/1 ; switchport mode trunk (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14364:admin:configure terminal ; interface Ethernet3/1 ; switchport trunk allowed vlan 5 (SUCCESS)
    Wed May 21 07:28:14 2014:update:ppm.14364:admin:configure terminal ; interface Ethernet3/1 ; channel-group auto mode on mac-pinning (FAILURE)
    Wed May 21 07:28:14 2014:update:ppm.14379:admin:configure terminal ; interface Ethernet3/10 (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14379:admin:configure terminal ; interface Ethernet3/10 ; no switchport trunk allowed vlan (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14379:admin:configure terminal ; interface Ethernet3/10 ; no switchport mode trunk (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14393:admin:configure terminal ; interface Ethernet3/10 (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14393:admin:configure terminal ; interface Ethernet3/10 ; switchport mode trunk (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14393:admin:configure terminal ; interface Ethernet3/10 ; switchport trunk allowed vlan 5 (SUCCESS)
    Wed May 21 07:28:15 2014:update:ppm.14393:admin:configure terminal ; interface Ethernet3/10 ; channel-group auto mode on mac-pinning (FAILURE)
    We tried a port-channel based on CDP information, but the same errors occurred. The weird thing is that it only applies to the mgmt port-profile...
    Can someone tell me what I'm doing wrong here? Thanks a lot :)
    Uplink Config:
    port-profile type ethernet ESX_MGMT
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 5
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 5
      state enabled
    vEth Config:
    port-profile type vethernet VLANname
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 5
      no shutdown
      system vlan 5
      max-ports 64
      state enabled
    Kind regards,
    Yan

  • VXLAN requirement

    I am planning to implement VXLAN. I have changed the port-channel load-balancing hashing to include src-dst-port on the LAN switch, a Nexus 5500 pair in vPC; the UCS FIs are connected with a single uplink in virtual port-channel to both 5500s. I would like to know if I need to change the port-channel hashing on UCS and the Nexus 1000v to make sure all uplinks are properly utilized based on src-dst-port. At the same time, how is pinning done on UCS and the Nexus 1000v based on src-mac?
    Following is my config:
    sh run port-profile DATA-UPLINK
    !Command: show running-config port-profile DATA-UPLINK
    !Time: Mon Apr  1 10:17:28 2013
    version 4.2(1)SV2(1.1a)
    port-profile type ethernet DATA-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 105-106,109,115-116,121,200-210
      pinning control-vlan 0
      pinning packet-vlan 0
      system mtu 9000
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 105-106,109,115-116
      state enabled
    VSM01# show port-channel load-balance
    Port Channel Load-Balancing Configuration:
    System: source-mac
    Port Channel Load-Balancing Addresses Used Per-Protocol:
    Non-IP: source-mac
    IP: source-mac
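    On the hashing question: VXLAN carries its flow entropy in the outer UDP source port, so a 5-tuple hash spreads encapsulated traffic across uplinks far better than source-mac. If your release supports it, the VSM change would look roughly like this (the method name source-dest-ip-port is assumed from the SV2 command set; verify on your version):
    VSM01(config)# port-channel load-balance ethernet source-dest-ip-port
    Note that with mac-pinning the hash matters less on the 1000v side: each vEth is pinned to a single member link, and all of its traffic follows that link regardless of the configured hash.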


  • I get "INTERFACE_QUARANTINED due to Cmd Failure" error when enabling PortChannel in n1kv

    I have added a new host to my n1kv, but the interface PortChannel3 appears to be shut down and I can't bring it up, even though it is working.
    SW-OUT-N1KV# sh int status
    Po3            --                 up       trunk     full    1000    --
    SW-OUT-N1KV# sh port-channel sum
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
    Group Port-       Type     Protocol  Member Ports
          Channel
    1     Po1(SD)     Eth      NONE      Eth3/2(r)
    2     Po2(SU)     Eth      LACP      Eth4/5(P)    Eth4/6(P)
    3     Po3(SU)     Eth      LACP      Eth5/5(P)    Eth5/6(P)
    4     Po4(SU)     Eth      LACP      Eth4/1(P)    Eth4/2(P)    Eth4/3(P)
                                         Eth4/4(P)
    5     Po5(SD)     Eth      LACP      Eth5/1(D)    Eth5/2(D)    Eth5/3(D)
                                         Eth5/4(D)
    6     Po6(SD)     Eth      NONE      --
    SW-OUT-N1KV# sh run int po3
    !Command: show running-config interface port-channel3
    !Time: Thu Jan  5 18:00:08 2012
    version 4.2(1)SV1(4)
    interface port-channel3
      inherit port-profile SYSTEM-UPLINK
      shutdown
    When I try to enable it, I get the following error:
    SW-OUT-N1KV(config)# int po3
    SW-OUT-N1KV(config-if)# no shut
    2012 Jan  5 18:00:51 SW-OUT-N1KV %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface port-channel3 has been quarantined due to Cmd Failure
    The "show logging logfile" and "show accounting log" shows nothing.
    Any idea why is it happening????
    Other outputs,
    SW-OUT-N1KV(config-if)# do sh run port-profile SYSTEM-UPLINK
    !Command: show running-config port-profile SYSTEM-UPLINK
    !Time: Thu Jan  5 18:02:46 2012
    version 4.2(1)SV1(4)
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      duplex auto
      speed auto
      switchport trunk native vlan 666
      switchport trunk allowed vlan 2,43-44
      cdp enable
      channel-group auto mode active
      no shutdown
      system vlan 2,43-44
      state enabled
    SW-OUT-N1KV(config-if)# do sh port-prof virt usag
    Port Profile               Port        Adapter        Owner
    SYSTEM-UPLINK              Po1
                               Po2
                               Po3
                               Eth4/5      vmnic4         10.100.100.63
                               Eth4/6      vmnic5         10.100.100.63
                               Eth5/5      vmnic4         10.100.100.64
                               Eth5/6      vmnic5         10.100.100.64
    IESE_CONTENIDO             Veth9       Net Adapter 1  IESECMPRO
                               Veth14      Net Adapter 1  IESEAUTOPRO
    IESE_BACKEND               Veth10      Net Adapter 1  IESEDBINTPRE
                               Veth11      Net Adapter 1  IESEDBPRO
    IESE_PRE_INT               Veth3       Net Adapter 1  IESEWEBPRE
                               Veth4       Net Adapter 1  IESEAPPINT
                               Veth5       Net Adapter 1  IESEAPPPRE1
                               Veth6       Net Adapter 1  IESEAPPPRE2
                               Veth7       Net Adapter 1  IESEAUTOPRE
                               Veth8       Net Adapter 1  IESECMINTPRE
    DATA-UPLINK-100m           Po6
    SERVICE-CONSOLE            Veth1       vmk0           Module 4
                               Veth2       vmk1           Module 4
                               Veth12      vmk0           Module 5
                               Veth13      vmk1           Module 5
    DATA-UPLINK-IESE           Po4
                               Po5
                               Eth4/1      vmnic0         10.100.100.63
                               Eth4/2      vmnic1         10.100.100.63
                               Eth4/3      vmnic2         10.100.100.63
                               Eth4/4      vmnic3         10.100.100.63
                               Eth5/1      vmnic0         10.100.100.64
                               Eth5/2      vmnic1         10.100.100.64
                               Eth5/3      vmnic2         10.100.100.64
                               Eth5/4      vmnic3         10.100.100.64

    Debugging port-profile errors and trying again, I get the following output:
    SW-OUT-N1KV(config-port-prof)# no shut
    2012 Jan 9 14:43:02.102017 port-profile: (ERR) ppm_fq_cmd_intf_range_get(1753): Failed to find intf tlv ^Yex^Yconf^Yport-profile DATA-UPLINK-IESE-SA^Yno shutdown error: Command Parsing Failed
    2012 Jan 9 14:43:02.102577 port-profile: (ERR) ppm_profile_fsm_session_get(71): RID before session start request is 1
    2012 Jan 9 14:43:02.106573 port-profile: (ERR) ppm_req_dbmgr_send(426): Changing session status: 0x0
    2012 Jan 9 14:43:02.443092 port-profile: (ERR) ppm_vppm_profile_acfg_gen(353): VPPM Profile Level -1 acfg generation failed Error: no such pss key (0x40480003)
    2012 Jan 9 14:43:03.048365 port-profile: (ERR) ppm_pending_req_queue_check(1035): msg_ref_p->state == 0xbba6e3cc
    SW-OUT-N1KV(config-port-prof)# 2012 Jan 9 14:43:03.373461 port-profile: (ERR) ppm_pending_req_queue_check(1035): msg_ref_p->state == 0xbba6e3cc
    SW-OUT-N1KV(config-port-prof)# channel-group auto mode active
    2012 Jan 9 15:37:32.338832 port-profile: (ERR) ppm_fq_cmd_intf_range_get(1753): Failed to find intf tlv ^Yex^Yconf^Yport-profile type ethernet DATA-UPLINK-IESE-SA^Ychannel-group auto mode active error: Command Parsing Failed
    2012 Jan 9 15:37:32.339399 port-profile: (ERR) ppm_profile_fsm_session_get(71): RID before session start request is 1
    2012 Jan 9 15:37:32.358546 port-profile: (ERR) ppm_db_port_channel_check(7068): Not a channel grp command.
    2012 Jan 9 15:37:32.360130 port-profile: (ERR) ppm_req_dbmgr_send(426): Changing session status: 0x0
    2012 Jan 9 15:37:32.651020 port-profile: (ERR) ppm_vppm_profile_acfg_gen(353): VPPM Profile Level -1 acfg generation failed Error: no such pss key (0x40480003)
    2012 Jan 9 15:37:33.181682 port-profile: (ERR) ppm_pending_req_queue_check(1035): msg_ref_p->state == 0xbba6e3cc
    2012 Jan 9 15:37:33.387996 port-profile: (ERR) ppm_find_intf_in_cache(917): No pending plan present
    2012 Jan 9 15:37:34.772375 port-profile: (ERR) ppm_pending_req_queue_check(1035): msg_ref_p->state == 0xbba6e3cc
    2012 Jan 9 15:37:34.774586 port-profile: (ERR) ppm_profile_fsm_session_get(71): RID before session start request is 1
    SW-OUT-N1KV(config-port-prof)# 2012 Jan 9 15:37:35.102895 port-profile: (ERR) ppm_vppm_profile_acfg_gen(353): VPPM Profile Level -1 acfg generation failed Error: no such pss key (0x40480003)
    2012 Jan 9 15:37:35.469793 port-profile: (ERR) ppm_config_merge_handler(773): Show run merge not performed with reason PPM show run merge is not required
    2012 Jan 9 15:37:35.502211 port-profile: (ERR) ppm_cmd_desc_create(1176): command to be ignored parsed mode:/exec/configuremode: /exec/configure cmd:version 4.2(1)SV1(4) error: Command is not a port-profile command
    2012 Jan 9 15:37:35.503417 port-profile: (ERR) ppm_cmd_desc_create(1265): interface command detected ^Yex^Yconf^Yinterface port-channel5, error: Interface command is given for cmd desciptor
    2012 Jan 9 15:37:35.510826 port-profile: (ERR) ppm_req_dbmgr_send(426): Changing session status: 0x0
    2012 Jan 9 15:37:59.800813 port-profile: (ERR) ppm_dispatcher_invoker(542): Invoker parent failed with Err[0x1400]
    2012 Jan 9 15:37:59.832496 port-profile: (ERR) procjobcb_job_done(686): status: 0x1400, (null)
    2012 Jan 9 15:37:59.837319 port-profile: (ERR) ppm_profile_fsm_sma_apply_acc(1250): Changing session status: 0x420c007c
    2012 Jan 9 15:37:59.837732 port-profile: (ERR) ppm_profile_fsm_sma_apply_acc(1252): apply status: 0x1400
    2012 Jan 9 15:37:59.838435 port-profile: (ERR) ppm_profile_fsm_sma_apply_acc(1277): Could not open the file /dev/shm/ppm_DATA-UPLINK-IESE-SA_output.txt
    2012 Jan 9 15:37:59 SW-OUT-N1KV %PORT-PROFILE-2-INTERFACE_QUARANTINED: Interface port-channel5 has been quarantined due to Cmd Failure
    Any idea? I'm lost...

  • UCS 1.4 support for PVLAN

    Hi all,
    Cisco advises that UCS 1.4 supports PVLAN, but I see the following comments about PVLAN in UCS 1.4:
    "UCS extends PVLAN support for non virtualised deployments (without vSwitch)."
    "UCS release 1.4(1) provides support for isolated PVLAN support for physical server access ports or for Palo CNA vNIC ports."
    Does this mean PVLAN won't work for virtual machines if the VMs are connected to UCS by a Nexus 1000v or vDS, even though I am using the Palo (M81KR) card?
    Could anybody confirm that?
    Thank you very much!

    Have not got that working so far... how would that traffic flow work?
    1000v -> 6120 -> 5020 -> 6500s
    Two 10GbE interfaces, one on each fabric to the blades. All VLANs (including the PVLAN parent and child VLAN IDs) are defined and added to the server templates, so they are propagated to each ESX host.
    At this point, nothing can do Layer 3 except the 6500s. Let's say my primary VLAN ID for one PVLAN is 160 and the isolated VLAN ID is 161...
    On the Nexus 1000v:
    vlan 160
      name TEN1-Maint-PVLAN
      private-vlan primary
      private-vlan association 161
    vlan 161
      name TEN1-Maint-Iso
      private-vlan isolated
    port-profile type vethernet TEN1-Maint-PVLAN-Isolated
      description TEN1-Maint-PVLAN-Isolated
      vmware port-group
      switchport mode private-vlan host
      switchport private-vlan host-association 160 161
      no shutdown
      state enabled
    port-profile type vethernet TEN1-Maint-PVLAN-Promiscuous
      description TEN1-Maint-PVLAN-Promiscuous
      vmware port-group
      switchport mode private-vlan promiscuous
      switchport private-vlan mapping 160 161
      no shutdown
      state enabled
    port-profile type ethernet system-uplink
      description Physical uplink from N1Kv to physical switch
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan all
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 20,116-119,202,1408
      state enabled
    This works fine to and from VMs on the same ESX host (PVLAN port-profiles work as expected)... If I move a VM over to another host, nothing works; pretty sure not even within the same promiscuous port-profile. How does the 6120 handle this traffic? What do the frames get tagged with when they leave the 1000v?
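    One thing worth checking on the tagging question: frames from a host in an isolated PVLAN leave the 1000v tagged with the secondary VLAN ID (161 here), so every device in the path has to carry both 160 and 161, and whatever terminates the PVLAN (the 6500s in this topology) needs the same association defined, along the lines of:
    vlan 160
      private-vlan primary
      private-vlan association 161
    vlan 161
      private-vlan isolated
    If an intermediate device (such as the 6120 in this path) is not PVLAN-aware for that traffic, inter-host isolated traffic can break in exactly the way described above.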

  • Nexus1000v - ? communication issue vcenter - N1Kv ???

    At irregular intervals we get this kind of alarm in our vCenter, although no changes were made on the Nexus or VM side:
    "vSphere HA detected that host HOSTNAME is in a different network partition than the master MASTERNAME"
    They clear after a certain time without any manual action. VMware assumes a communication issue between vCenter and the Nexus 1000v.
    Anyone with similar experiences (or even a solution)?

    Hi Sachin,
    Thanks for your response.
    Please find my reply below:
    Is the module showing as up in the VSM when you execute 'show module'?
    > Yes, the module is up and showing active when I execute the given command.
    Is your ESX management VLAN allowed on the access port profile and on the uplink? Is it created on the switch?
    > Yes, it is created on the vSwitch as well as the upstream switches, and it is allowed on the uplink too.
    Do you have system VLANs for your control and packet connectivity? For your ESX mgmt connectivity?
    > Yes, I have defined the control, packet, and management VLANs as system VLANs.
    I feel that the port profile is not allowing the traffic to go in and out of the DVS.
    When I change the VSM's uplink from the vSwitch to the DVS, the VSM can't even reach its gateway.
    Thanks,

  • Nexus 1010/1000v L3 Mode Through ASA

    Hi,
    A question regarding the subject line.  When deploying redundant Nexus 1010 hardware appliances (VSMs) on the "inside" and Nexus 1000vs on your ESX hosts in the "DMZ" in Layer 3 mode, separated by an ASA, what VLANs are actually needed, from both the inside and the DMZ perspective?  Specifically, do you actually need control and data/packet VLANs configured when using L3 mode?  When you configure the SVS domain for L3 transport you explicitly negate both the control and data/packet VLANs.
    Also, when configuring the 1000v in L3 mode, is it best practice to have the system VLAN be the same as your management VLAN, and also use the same VLAN for the VMkernel NIC?  When setting up the VMkernel NIC on the ESX host, the only option available was to use the management VLAN.

    Hello Aaron,
    The Nexus 1010s only communicate in L2 mode so you'll still need control, management & packet vlans between the two appliances. VSMs deployed in L3 mode collapse the control & packet vlans into the management network.  Traffic between the VSM and ESX host will be tunneled over IP.  Therefore you need to ensure IP connectivity between the VSM mgmt0 interface and the ESX host management vmk.
    Yes, you will want to define the ESX vmk vlan as a system vlan on BOTH the vethernet & ethernet port-profiles.
    Matthew
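    To make that concrete, here is a minimal sketch of an L3-mode setup where the ESX management VLAN doubles as the system VLAN on both profile types (the domain ID and VLAN 10 are examples only):
    svs-domain
      domain id 10
      svs mode L3 interface mgmt0
    port-profile type vethernet L3-Control
      capability l3control
      vmware port-group
      switchport mode access
      switchport access vlan 10
      no shutdown
      system vlan 10
      state enabled
    port-profile type ethernet Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10
      no shutdown
      system vlan 10
      state enabled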

  • Nexus 1000V - what is a DVS really?

    Hello 1000V Experts,
    I'm hoping I can get some clarification on the functions of a 1000V and what happens if they are shut down.  I've had a few 1KVs in existence for some time, and after running them on active servers I've come under the impression that a 1KV is basically a pre-configurator.  It has a predefined configuration that is uploaded to vCenter.  The vCenter guys can then use that config to modify their servers' uplinks.
    It does not do anything for the servers in real time, i.e. no inspection, no ACLs (that are not uploaded to vCenter), etc. (other than CDP).
    If I shut down a 1KV live, nothing will happen and services will go on as normal, since the config was uploaded to vCenter?
    I have a feeling I may be missing something and would really appreciate any clarification.
    /r
    Rob

    Let's set the record straight here - to avoid confusion.
    1. VEMs will continue to forward traffic in the event one or both VSMs are unavailable; this requires the VEM to remain online and not reboot while both VSMs are offline. VSM communication is only required for config changes (and LACP negotiation prior to 1.4).
    2.  If there is no VSM reachable and a VEM is rebooted, only the system VLANs will go into a forwarding state.  All other, non-system VLANs will remain down. This resolves the chicken-and-egg problem of a VEM needing initial connectivity to a VSM in order to obtain its programming.
    The ONLY VLANs & vEth profiles that should be set as system VLANs are:
    1000v-Control
    1000v-Packet
    Service Console/VMkernel for Mgmt
    IP Storage (iSCSI or NFS)
    Everything else should not be defined as a system VLAN, including VMotion, which is a common mistake.
    **Remember that for a vEth port profile to behave like a system profile, the system VLAN must be defined on BOTH the vEth and Eth port profiles.  Two-factor check.  This allows port profiles that are perhaps not critical, yet share the same VLAN ID, to behave differently.
    There are a total of 16 profiles that can include system VLANs.  If you exceed this, you can potentially run into issues where the opaque data pushed from vCenter is truncated, causing programming errors on your VEMs.  Adhering to the limitations above should never lead to this situation.
    Regards,
    Robert
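    To illustrate the two-factor point, two vEth profiles can share a VLAN ID and still behave differently after a VEM reboots with no VSM reachable; only the profile that carries the system vlan line (matched by the same VLAN in the uplink's system vlan list) comes up forwarding. A sketch with a made-up VLAN 10:
    port-profile type vethernet Mgmt-vmk
      switchport mode access
      switchport access vlan 10
      system vlan 10
      no shutdown
      state enabled
    port-profile type vethernet Mgmt-Tools
      switchport mode access
      switchport access vlan 10
      no shutdown
      state enabled
    Mgmt-vmk forwards immediately on boot; Mgmt-Tools, on the same VLAN, waits for the VSM to program it.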
