Uplinks to Nexus 7K as backup uplinks

I have my pair of Fabric Interconnects connected to two Nexus 5000s. There is a flaw in this design: if both Nexus 5000s are down, my UCS is cut off from the network.
Is there a way I can run another pair of connections to the core Nexus 7000 switches, but set it up so that traffic only flows through the 5000s, and only if both 5000s are down does it flow to the core Nexus 7000s?
Thank you. 

The design I mention has no single point of failure and is bandwidth and hop count optimized. Why are you so concerned about both N5Ks failing? There are hundreds or thousands of such installations in the field working without a problem. In principle I could be paranoid and raise the same issue for the N7Ks, although they have two Sups.
The UCS-to-N5K vPC will only be used for intra-VLAN traffic, e.g. server 1, vNIC 1, connected to fabric A talking to server 2, vNIC 2, connected to fabric B, where vNIC 1 and vNIC 2 are in the same VLAN.
Traffic will be sent to the N7K for south-north traffic to the WAN/campus and for UCS inter-VLAN traffic.
See e.g.
http://www.cisco.com/cdc_content_elements/flash/dcap/6/#/enterprise/system-level-designs/virtualized-multi-tenant-data-center
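For illustration, a rough sketch of the N5K side of such a design (all interface, VLAN and channel numbers below are placeholders, not taken from this thread):

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! vPC member toward Fabric Interconnect A (mirror the config on the vPC peer N5K)
interface port-channel 101
  switchport mode trunk
  vpc 101
interface Ethernet1/1
  switchport mode trunk
  channel-group 101 mode active
! separate vPC toward the N7K core for north-south and inter-VLAN traffic
interface port-channel 201
  switchport mode trunk
  vpc 201
interface Ethernet1/31
  switchport mode trunk
  channel-group 201 mode active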

Similar Messages

  • SCVMM Kicks out Nexus 1000V Uplink NIC. Any ideas?

    The SCVMM suddenly kicks out the Nexus 1000V Uplink NIC,
    thus preventing me from remediating the change.
    Also, I get this error message.
    I am using Hyper-V as the virtualization platform.

    Hello,
    You can use one Ethernet port-profile with a channel-group command (like 'channel-group auto mode on mac-pinning') and assign it to all the vmnic interfaces that need to carry the same set of VLANs.
    The same port-profile can be used on other hosts too. The N1K would automatically bundle (port-channel) the interfaces that belong to the same ESX host (accomplished through the 'channel-group auto' command).
    If you need the interfaces to carry separate sets of VLANs, then you need a different port-profile.
    A port-profile is just a container for a common set of configuration that you can apply to multiple interfaces across multiple hosts.
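    For illustration, a minimal sketch of such an Ethernet port-profile (the profile name and VLAN list are placeholders):
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30
      ! member vmnics on the same ESX host are bundled into one port-channel automatically
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled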
    Thanks,
    Shankar

  • Uplink-ID 0 in Nexus 5500 with FEX 2248.

    Hi all,
    I've got two Nexus 5548s and a FEX 2248 connected to both of them via a vPC. One uplink port goes to N5K-01 port e1/10 and a second uplink port goes to N5K-02 port e1/10.
    On the log of the first Nexus 5500 I see this:
    N5K-01 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 103 that is connected with Ethernet1/10 changed its status from Configured to Fabric Up
    N5K-01 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 1 of Fex 103 that is connected with Ethernet1/10 changed its status from Fabric Up to Connecting
    N5K-01 1 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 1 of Fex 103 that is connected with Ethernet1/10 changed its status from Connecting to Active
    On the log of the second nexus 5500, I see this:
    N5K-02 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 103 that is connected with Ethernet1/10 changed its status from Configured to Fabric Up
    N5K-02 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 2 of Fex 103 that is connected with Ethernet1/10 changed its status from Fabric Up to Connecting
    N5K-02 %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 2 of Fex 103 that is connected with Ethernet1/10 changed its status from Connecting to Active
    What is the difference between Uplink-ID 0, 1 and 2? If I only have 2 uplink ports, why is there a third one?
    Thanks in advance for your assistance.

    Hi Sonu,
    I see a similar problem on my Nexus 5Ks. 'show cfs peer' does not display the peer information. I have been waiting for hours now.
    sw01# sh cfs peer
    CFS Discovery is in Progress .. Please wait
    Could not get response. The network topology may be undergoing change.
    Please try after about 30 seconds
    Did you find the fix/workaround for this problem?
    Regards,
    Umair

  • Fabric Interconnect Uplinks to Nexus 7710

    Can someone please suggest how many Fabric Interconnect switches I can uplink to a pair of Nexus 7710s?

    In most deployments, Fabric Interconnects are deployed as a pair for redundancy. They are either vPCed to the Nexus switches or connected straight through to non-Nexus switches.
    I guess we would like to see more background on your question in terms of traffic requirements, what kind of line cards are used on the N7710, etc.
    Regards,
    Michael

  • Uplink Nexus 5010 to 6509 and management

    I have looked around online on Cisco's site and scanned over the Nexus 5000 document, and I can't seem to find the answers I'm looking for. The document I am referring to can be found at http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/CLIConfigurationGuide.html
    I have two questions:
    1) Are there any guides to connecting the 5000s to the 6509s, which would be serving as the core? If not, can someone point me in the right direction?
    2) I have configured the management port on the 5000. In order to access the CLI of the switch without a console cable, do I need to have the management port connected to my infrastructure, or can I ssh/telnet to the switch by just having it uplinked via fiber back to the 6509s?
    Thank you kindly

    Mod Ports Card Type                              Model              Serial No.
      1   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     SAL12330BKM
      4    8  CEF720 8 port 10GE with DFC            WS-X6708-10GE      SAL123418MN
      5    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAL11370JSY
      6    2  Supervisor Engine 720 (Active)         WS-SUP720-BASE     SAL1201BZUC
      9   16  SFM-capable 16 port 1000mb GBIC        WS-X6516A-GBIC     SAL08196XHD
    Mod MAC addresses                       Hw    Fw           Sw           Status
      1  0022.55ec.69a8 to 0022.55ec.69d7   3.0   12.2(18r)S1  12.2(18)SXF1 Ok
      4  0023.045e.fbe8 to 0023.045e.fbef   1.6   12.2(18r)S1  12.2(18)SXF1 Ok
      5  001d.4542.17b0 to 001d.4542.17b3   2.6   12.2(14r)S5  12.2(18)SXF1 Ok
      6  0019.e7d4.3e5c to 0019.e7d4.3e5f   4.0   8.4(2)       12.2(18)SXF1 Ok
      9  000f.f780.d2bc to 000f.f780.d2cb   4.1   7.2(1)       8.5(0.46)RFW Ok
    Mod  Sub-Module                  Model              Serial       Hw     Status
      1  Centralized Forwarding Card WS-F6700-CFC       SAL1230Y8RY  4.1    Ok
      4  Distributed Forwarding Card WS-F6700-DFC3C     SAL123304FZ  1.0    Ok
      5  Distributed Forwarding Card WS-F6700-DFC3B     SAL1115LPP0  4.6    Ok
      6  Policy Feature Card 3       WS-F6K-PFC3A       SAL1201C3KH  2.6    Ok
      6  MSFC3 Daughterboard         WS-SUP720          SAL1201C1TQ  3.1    Ok
    Mod  Online Diag Status
      1  Pass
      4  Pass
      5  Pass
      6  Pass
      9  Pass
    #sh ver
    Cisco Internetwork Operating System Software
    IOS (tm) s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(18)SXF15a, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2008 by cisco Systems, Inc.
    Compiled Tue 21-Oct-08 00:04 by kellythw
    Image text-base: 0x40101040, data-base: 0x42DDBE30
    ROM: System Bootstrap, Version 12.2(17r)S4, RELEASE SOFTWARE (fc1)
    BOOTLDR: s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(18)SXF15a, RELEASE SOFTWARE (fc1)
    pa-core-6509 uptime is 1 year, 11 weeks, 2 days, 23 hours, 49 minutes
    Time since pa-core-6509 switched to active is 1 year, 11 weeks, 2 days, 23 hours, 48 minutes
    System returned to ROM by s/w reset at 08:16:27 UTC Tue Oct 28 2008 (SP by bus error at PC 0x401A4578, address 0x0)
    System image file is "disk0:s72033-advipservicesk9_wan-mz.122-18.SXF15a.bin"

  • Nexus 2232TM uplink ports: are they only for uplink use?

    We have upgraded our N5K to support the new 2232TM, but now we face the issue that the 8 SFP+ ports are not visible in our config:
    Nexus5010_1# sh  fex 103 det
    FEX: 103 Description: 2KSRV02   state: Online
      FEX version: 5.0(3)N2(2a) [Switch version: 5.0(3)N2(2a)]
      FEX Interim version: 5.0(3)N2(2a)
      Switch Interim version: 5.0(3)N2(2a)
      Extender Model: N2K-C2232TM-10GE,  Extender Serial: FOC15262P7Q
      Part No: 73-13626-03
      Card Id: 164, Mac Addr: 30:e4:db:66:12:02, Num Macs: 64
      Module Sw Gen: 21  [Switch Sw Gen: 21]
      post level: complete
    pinning-mode: static    Max-links: 1
      Fabric port for control traffic: Eth1/9
      Fabric interface state:
        Po103 - Interface Up. State: Active
        Eth1/9 - Interface Up. State: Active
        Eth1/10 - Interface Up. State: Active
      Fex Port        State  Fabric Port
           Eth103/1/1  Down       Po103
           Eth103/1/2  Down       Po103
           Eth103/1/3  Down       Po103
           Eth103/1/4  Down       Po103
           Eth103/1/5  Down       Po103
           Eth103/1/6  Down       Po103
           Eth103/1/7  Down       Po103
           Eth103/1/8  Down       Po103
           Eth103/1/9  Down       Po103
          Eth103/1/10  Down       Po103
          Eth103/1/11  Down       Po103
          Eth103/1/12  Down       Po103
          Eth103/1/13  Down       Po103
          Eth103/1/14  Down       Po103
          Eth103/1/15  Down       Po103
          Eth103/1/16  Down       Po103
          Eth103/1/17  Down       Po103
          Eth103/1/18  Down       Po103
          Eth103/1/19  Down       Po103
          Eth103/1/20  Down       Po103
          Eth103/1/21  Down       Po103
          Eth103/1/22  Down       Po103
          Eth103/1/23  Down       Po103
          Eth103/1/24  Down       Po103
          Eth103/1/25  Down       Po103
          Eth103/1/26  Down       Po103
          Eth103/1/27  Down       Po103
          Eth103/1/28  Down       Po103
          Eth103/1/29  Down       Po103
          Eth103/1/30  Down       Po103
          Eth103/1/31  Down       Po103
          Eth103/1/32  Down       Po103
    They are not visible with either "show fex" or "show int status", and we cannot configure them.
    Do you think it is possible to access these ports some other way and configure them to allow some hosts to connect to them, or is their purpose exclusively uplink?
    best regards,
    Eric Gillis

    Do you mean the fabric interfaces? If yes, you cannot use those for any host connectivity.
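    For context, a rough sketch of how the fabric (uplink) ports differ from FEX host ports on the parent N5K, reusing the FEX number from the output above (the VLAN is a placeholder):
    ! fabric ports: reserved for connecting the FEX to the parent switch
    interface Ethernet1/9-10
      switchport mode fex-fabric
      fex associate 103
      channel-group 103
    ! host-facing FEX ports appear as Eth103/1/x once the FEX is online
    interface Ethernet103/1/1
      switchport access vlan 10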
    HTH,
    jerry

  • Nexus 5548UP uplink to Catalyst 4510R

    I am designing my fully 10G datacenter access layer with a pair of 5548UP switches in an EvPC and a pair of dual-homed 2232PP FEXs. I need to be able to uplink my pair of 5500s to my pair of 4510R classic aggregation switches. Each 4510R has two Sup V supervisors, each with two 10G X2 ports. I have a dual 10G trunk in a port channel between the two 4510Rs.
    What is the ideal method for uplinking from the 5500s to the 4510R switches? As far as I understand, only one of the two 10G ports on each Sup V can be active. Should I dual-home each 5548 to each 4510R and let STP do its thing? Am I able to do that with my Sup Vs? And if so, will I still be able to trunk dual 10G ports in a port channel between the two 4510R switches? I need to maintain a large pipe between the aggregation switches for my regular campus traffic that does not traverse to the datacenter.
    Thanks!

    Well, since you cannot cluster the 45K as a virtual switch (Cisco will start supporting VSS on the 4500 with a new sup; check exactly which sup you have and whether you can upgrade, as this would make a significant improvement to your design),
    the only method you can use currently is the traditional one, which depends on STP (use Rapid PVST+).
    From each N5K, run one separate link to each 45K, and STP will put one of the links in blocking mode.
    However, you might do some STP and VLAN design for load sharing, where you send VLAN x over link 1 and VLAN y over link 2 to the 45K by adjusting STP cost, as in the sketch below.
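    A minimal sketch of that kind of per-VLAN cost tuning on one N5K (the VLAN and interface numbers are placeholders):
    spanning-tree mode rapid-pvst
    interface Ethernet1/1
      description Uplink to 4510R-1
      switchport mode trunk
      ! raise the cost for VLAN 20 so VLAN 20 prefers the link to 4510R-2
      spanning-tree vlan 20 cost 2000
    interface Ethernet1/2
      description Uplink to 4510R-2
      switchport mode trunk
      ! raise the cost for VLAN 10 so VLAN 10 prefers the link to 4510R-1
      spanning-tree vlan 10 cost 2000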
    HTH

  • Migrate physical adapter to Nexus 1000v's specific uplink port Group

    When I run the below script in VMware PowerCLI, the physical adapters get added to the N1K's "Unused_Or_Quarantine_Uplink" port group. I have a "sys-uplink" port group in my N1K (VSM-DVS-SCALE), and I want the physical adapters to get added to this "sys-uplink".
    The issue is that Add-VDSwitchPhysicalNetworkAdapter does not have an option to specify which port group the adapter should be added to. Any workarounds to solve this issue? It looks like customers are facing a similar issue and moving away from the N1K to VMware's DVS (see
    https://communities.vmware.com/thread/442897?start=0&tstart=0)
    # Values below are from the original post; the target port group is "sys-uplink".
    $vmhost = Get-Datacenter Dao | Get-VMHost "192.100.12.16"
    $myVDSwitch = Get-VDSwitch -Name "VSM-DVS-SCALE" -Location Dao
    $hostsPhysicalNic = $vmhost | Get-VMHostNetworkAdapter -Name vmnic2,vmnic1
    # Fixed: the original looked the port group up via an undefined variable. The object is
    # still unused below, because Add-VDSwitchPhysicalNetworkAdapter has no port-group
    # parameter - which is exactly the limitation being asked about.
    $myVDPortGroup = Get-VDPortgroup -Name "sys-uplink" -VDSwitch $myVDSwitch
    Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $hostsPhysicalNic -DistributedSwitch $myVDSwitch


  • Nexus 5010 uplink to VSS-1440

    We are deploying a new set of VMware server farms and would like to use the Nexus line for these 40 servers. Does it make sense to connect these via layer 2 to our VSS on a WS-X6708-10G-3C? Or should we wait a few years until we have a budget for a pair of Nexus 7Ks and run everything to the VSS? Thanks

    hi Jim,
    The Nexus 5000 is an L2 switch, so pretty much whatever you do, it's going to be an L2 port-channel 'northbound' from the N5K up to the C6K VSS pair.
    Your VSS pair would be the L2/L3 boundary, so you'd have an SVI configured on it that is the L3 default gateway (for your servers).
    Best practice would be to distribute the port-channel members across the pair of physical switches in the VSS pair.
    I'm involved in the development of the Nexus range within Cisco, so I'd be all for recommending you deploy Nexus 7000 too. :) But the reality is that you can probably achieve what you want today with C6K VSS, and in the future, if you did deploy Nexus 7000, you could make use of virtual Port Channel (vPC) to allow for full bisectional bandwidth from 'access' to 'agg/core' without any blocked links in STP.
    vPC is available today on the N7K and provides L2 multichassis EtherChannel roughly similar to what VSS enables today on the C6K.
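    For illustration, a rough sketch of that northbound L2 port-channel (interface, VLAN and channel numbers are examples only):
    ! N5K side
    interface port-channel 10
      description Uplink to C6K VSS
      switchport mode trunk
    interface Ethernet1/19-20
      switchport mode trunk
      channel-group 10 mode active
    ! C6K VSS side - one member link on each physical chassis of the VSS pair
    interface TenGigabitEthernet1/4/1
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      channel-group 10 mode active
    interface TenGigabitEthernet2/4/1
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      channel-group 10 mode active
    ! SVI on the VSS is the L3 default gateway for the server VLAN
    interface Vlan100
      ip address 10.1.100.1 255.255.255.0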
    hope that helps.
    cheers,
    lincoln.

  • Various questions on uplink profiles, CoS, native VLAN, downlink trunking

    I will be using vPC End Host Mode with MAC pinning. I see I can further configure MAC pinning. Is this required, or will it automatically forward packets by just turning it on? Is it also best not to enable failover for the vNICs in this configuration? See this text from the Cisco 1000V Deployment Guide:
    Fabric Fail-Over Mode
    Within the Cisco UCS M71KR-E, M71KR-Q and M81KR adapter types, the Cisco Unified Computing System can
    enable a fabric failover capability in which loss of connectivity on a path in use will cause remapping of traffic
    through a redundant path within the Cisco Unified Computing System. It is recommended to allow the Cisco Nexus
    1000V redundancy mechanism to provide the redundancy and not to enable fabric fail-over when creating the
    network interfaces within the UCS Service Profiles. Figure 3 shows the dialog box. Make sure the Enable Failover
    checkbox is not checked."
    What is the 1000V redundancy? I didn't know it had redundancy. Is it the MAC pinning set up in the 1000V? Is it Network State Tracking?
    The 1000V has redundancy and we can even pin VLANs to whatever vNIC we want. See Cisco's Best Practices for Nexus 1000V and UCS.
    Nexus 1000V management VLAN: can I use the same VLAN for this, for ESX management and for switch management? E.g. VLAN 3 for everything.
    According to the below text (1000V Deployment Guide), I can have them all in the same vlan:
    There are no best practices that specify whether the VSM
    and the VMware ESX management interface should be on the same VLAN. If the management VLAN for
    network devices is a different VLAN than that used for server management, the VSM management
    interface should be on the management VLAN used for the network devices. Otherwise, the VSM and the
    VMware ESX management interfaces should share the same VLAN.
    I will also be using CoS and QoS to prioritize the traffic. The CoS can either be set in the 1000V (host control full) or per virtual adapter (host control none) in UCS. Since I don't know how to configure CoS on the 1000V, I wonder if I can just set it in UCS (per adapter) as before when using the 1000V, i.e. we have two choices.
    Yes, you can still manage CoS using QoS on the vnics when using 1000V:
    The recommended action in the Cisco Nexus 1000V Series is to assign a class of service (CoS) of 6 to the VMware service console and VMkernel flows and to honor these QoS markings on the data center switch to which the Cisco UCS 6100 Series Fabric Interconnect connects. Marking of QoS values can be performed on the Cisco Nexus 1000V Series Switch in all cases, or it can be performed on a per-VIF basis on the Cisco UCS M81KR or P81E within the Cisco Unified Computing System with or without the Cisco Nexus 1000V Series Switch.
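    As a rough illustration of marking on the 1000V side, a CoS 6 policy might look something like this (the policy and profile names and the VLAN are placeholders, not taken from the deployment guide):
    policy-map type qos mark-mgmt-cos6
      class class-default
        set cos 6
    port-profile type vethernet VMK-MGMT
      vmware port-group
      switchport access vlan 3
      service-policy type qos input mark-mgmt-cos6
      no shutdown
      state enabled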
    Something else: Native VLANs
    Is it important to have the same native VLAN on the UCS and the Cisco switch? And not to use the default native VLAN 1?   I read somewhere that the native VLAN is used for communication between the switches and CDP amongst others. I know the native VLAN is for all untagged traffic. I see many people set the ESXi management VLAN as native also, and in the above article the native VLAN (default 1) is setup. Why? I have been advised to leave out the native VLAN.
    Example: Will I be able to access a VM set with VLAN 0 (native) if the native VLAN is the same in UCS and the Cisco switch (e.g. VLAN 2)? Can I just configure an access port with the same VLAN ID as the native VLAN, i.e. 2, and connect to it with a PC using the same IP network address?
    And is it important to trunk this native VLAN? I see in a Netapp Flexpod config they state this: "This configuration also leverages the native VLAN on the trunk ports to discard untagged packets, by setting the native VLAN on the port channel, but not including this VLAN in the allowed VLANs on the port channel". But I don't understand it...
    What about the downlinks from the FI to the chassis. Do you configure this as a port channel also in UCS? Or is this not possible with the setup described here with 1000V and MAC-pinning.
    No, port channel should not be configured when MAC-pinning is configured.
    [Robert] The VSM doesn't participate in STP so it will never send BPDUs.  However, since VMs can act like bridges & routers these days, we advise to add two commands to your upstream VEM uplinks - PortFast and BPDUFilter.  PortFast so the interface is FWD faster (since there's no STP on the VSM anyway) and BPDUFilter to ignore any received BPDUs from VMs.  I prefer ignoring them rather than using BPDU Guard, which will shut down the interface if BPDUs are received.
    -Are you thinking of the upstream switch here (Nexus, Catalyst) or the N1kV uplink profile config?
    Edit: 26 July 14:23. Found answers to many of my many questions...

    Answers inline.
    Atle Dale wrote:
    Something else: Native VLANs. Is it important to have the same native VLAN on the UCS and the Cisco switch? And not to use the default native VLAN 1? I read somewhere that the native VLAN is used for communication between the switches and CDP amongst others. I know the native VLAN is for all untagged traffic. I see many people set the ESXi management VLAN as native also, and in the above article the native VLAN (default 1) is set up. Why? I have been advised to leave out the native VLAN.
    [Robert] The native VLAN is assigned per hop. This means between the 1000v uplinks port profile and your UCS vNIC definition, the native VLAN should be the same. If you're not using a native VLAN, the "default" VLAN will be used for control traffic communication. The native VLAN and default VLAN are not necessarily the same. Native refers to VLAN traffic without an 802.1q header and can be assigned or not. A default VLAN is mandatory. This happens to start as VLAN 1 in UCS but can be changed. The default VLAN will be used for control traffic communication. If you look at any switch (including the 1000v or Fabric Interconnects) and do a "show int trunk" from the NX-OS CLI, you'll see there's always one VLAN allowed on every interface (by default VLAN 1) - this is your default VLAN.
    Example: Will I be able to access a VM set with VLAN 0 (native) if the native VLAN is the same in UCS and the Cisco switch (e.g. VLAN 2)? Can I just configure an access port with the same VLAN ID as the native VLAN, i.e. 2, and connect to it with a PC using the same IP network address?
    [Robert] There's no VLAN 0. An access port doesn't use a native VLAN - it is assigned to only a single VLAN. A trunk, on the other hand, carries multiple VLANs and can have a native VLAN assigned. Remember your native VLAN usage must be matched between each hop. Most network admins set up the native VLAN to be the same throughout their network for simplicity. In your example, you wouldn't set your VM's port profile to be in VLAN 0 (it doesn't exist), but rather in VLAN 2 as an access port. If VLAN 2 also happens to be your native VLAN northbound of UCS, then you would configure VLAN 2 as the native VLAN on your UCS Ethernet uplinks. On the switch northbound of the UCS Interconnects you'll want to ensure VLAN 2 is set as the native VLAN on the receiving trunk interface also. Summary:
    1000v - VM vEthernet port profile set as access port, VLAN 2
    1000v - Ethernet uplink port profile set as trunk with native VLAN 2
    UCS - vNIC in Service Profile allowing all required VLANs, and VLAN 2 set as native
    UCS - Uplink interface(s) or port channel set as trunk with VLAN 2 as native VLAN
    Upstream switch from UCS - set as trunk interface with native VLAN 2
    From this example, your VM will be reachable on VLAN 2 from any device - assuming you have L3/routing configured correctly also.
    And is it important to trunk this native VLAN? I see in a NetApp Flexpod config they state this: "This configuration also leverages the native VLAN on the trunk ports to discard untagged packets, by setting the native VLAN on the port channel, but not including this VLAN in the allowed VLANs on the port channel". But I don't understand it...
    [Robert] This statement recommends "not" to use a native VLAN. This is a practice by some people. Rather than using a native VLAN throughout their network, they tag everything. This doesn't change the operation or reachability of any VLAN or device - it's simply a design decision. The reason some people opt not to use a native VLAN is that almost all switches use VLAN 1 as the native by default. So if you're using the native VLAN 1 for management access to all your devices, and someone connects another switch (without your knowledge) and simply plugs into it, they'd land on the same VLAN as your management devices and could potentially do harm.
    What about the downlinks from the FI to the chassis? Do you configure this as a port channel also in UCS? Or is this not possible with the setup described here with 1000V and MAC pinning?
    [Robert] On the first-generation hardware (6100 FI and 2104 IOM) port channeling is not possible. With the latest HW (6200 and 2200) you can create port channels with all the IOM - FI server links. This is not configurable per link: you either tell the system to use Port Channel or Individual Links. The major bonus of using a port channel is that losing a link doesn't impact any pinned interfaces, as it would with individual server interfaces. To fix a failed link when configured as "Individual" you must re-ack the chassis to re-pin the virtual interfaces to the remaining server uplinks. In regards to 1000v uplinks, the only supported port channeling method is MAC pinning. This is because you can't port channel physical interfaces going to separate fabrics (one to A and one to B). MAC pinning gets around this by using pinning so all uplinks can be utilized at the same time.
    [Robert] The VSM doesn't participate in STP so it will never send BPDUs. However, since VMs can act like bridges and routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDUFilter. PortFast so the interface is forwarding faster (since there's no STP on the VSM anyway) and BPDUFilter to ignore any received BPDUs from VMs. I prefer to ignore them rather than using BPDU Guard, which will shut down the interface if BPDUs are received.
    Are you thinking of the upstream switch here (Nexus, Catalyst) or the N1kV uplink profile config?
    [Robert] The two STP commands would be used only when the VEM (ESX host) is directly connected to an upstream switch. For UCS these two commands do NOT apply.
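    To make the summary above concrete, a hedged sketch of the 1000v profiles and an upstream switch port (VLAN 2 and the names are placeholders; per the note above, the edge/BPDU filter commands are for a directly attached VEM, not for UCS):
    port-profile type vethernet VM-VLAN2
      vmware port-group
      switchport mode access
      switchport access vlan 2
      no shutdown
      state enabled
    port-profile type ethernet VEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 2
      switchport trunk allowed vlan 2,10,20
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled
    ! upstream NX-OS switch port facing a directly attached VEM host
    interface Ethernet1/5
      switchport mode trunk
      switchport trunk native vlan 2
      spanning-tree port type edge trunk
      spanning-tree bpdufilter enable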

  • Best Practice for vDS, Uplinks and TF ?

    Hi,
    Three questions about vNetwork Distributed Switch:
    My environment:
    - Datacenters: 2
    - Hosts: 50 (25 in each datacenter)
    - Cluster: 10 (5 in each datacenter) (2 to 5 nodes per cluster)
    - Nics Hosts:
         - 1 nic for management
         - 1 nic for redundant management and VMotion
         - 2, 4 or 8 nics with trunk internal networks (15 VLANs)
         - 2, 4 or 8 nics with trunk to external networks (10 VLANs)
         - 1 nic for backup
         - Over 1500 vms
    Question 1 - Speaking only of internal, external and backup, how should I lay out the vDS switches?
    - 6 vDS (2 internal, 2 external and 2 backup, half in each datacenter), right?
    Question 2 - I have clusters of different sizes: some clusters' hosts have two NICs for internal networks while other clusters' hosts have 4 or 8 NICs. So how many dvUplinks should I have in each vDS? What is the impact of having uplinks without a NIC attached in some clusters?
    - vDS_Internal = 8 uplinks?
    Question 3 - What is good practice for Teaming and Failover in this case: "Route based on originating virtual port" or "Route based on physical NIC load", knowing that I'm not using Network I/O Control?
    thanks,
    Reis

    Q1. With VLANs you can also use a single vDS with several portgroups (and for some, like FT, vMotion, ..., use an explicit uplink order).
    Q2. This could be a little problem. With a vDS you can map uplinks to pNICs... but DRS (for example) cannot know that a host has fewer uplinks. If the difference is minimal, go with a single cluster; otherwise consider using two clusters.
    Q3. The virtual port option is simple and good in most cases.
    Andre

  • Uplink problems from UCS 6128 FI

    Hi,
    We have one demo UCS system. It was working pretty smoothly; we took it to a few presentations, etc.
    A few days ago we cleaned the system and decided to reconfigure and redeploy.
    Overview:
    Cisco UCS blade chassis, 2 B200 M2 servers with VIC adapters, 2 x 6128 FIs.
    Uplinks are only 1 Gb, and GLC-T modules are used for the uplinks.
    The problem now is that as soon as I configure a port as an uplink and set it to 1 Gbps, the port changes to admin down, and no connections to the upstream switches are available.
    We decided to upgrade to the latest software, now 2.0(4a). The connection from the VIC, through the IOM, to the server ports is OK, but the uplinks are still not working.
    The upstream switch, a C2950, has the ports connecting to the UCS configured as trunks, everything as usual.
    Just to clarify, this exact setup was active and working before.
    Errors from UCS are :
    F0479    2012-07-25T13:04:26.635    112798 Virtual interface 704 link state is down
    F0283    enm source pinning failed
    Output from nxos for uplink port :
    ======================================================
    Hardware: 1000/10000 Ethernet, address: 0005.73cc.4b48 (bia 0005.73cc.4b48)
    Description: U: Uplink
    MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
    Encapsulation ARPA
    Port mode is trunk
    auto-duplex, 1000 Mb/s, media type is 10G
    Beacon is turned off
    Input flow-control is off, output flow-control is off
    Rate mode is dedicated
    Switchport monitor is off
    EtherType is 0x8100
    Last link flapped never
    Last clearing of "show interface" counters never
    30 seconds input rate 0 bits/sec, 0 bytes/sec, 0 packets/sec
    30 seconds output rate 0 bits/sec, 0 bytes/sec, 0 packets/sec
    Load-Interval #2: 5 minute (300 seconds)
    input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
    RX
    0 unicast packets  0 multicast packets  0 broadcast packets
    0 input packets  0 bytes
    0 jumbo packets  0 storm suppression packets
    0 giants      0 input error  0 short frame  0 overrun   0 underrun      0 watchdog  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
    TX
    0 unicast packets  0 multicast packets  0 broadcast packets
    0 output packets  0 bytes
    0 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble
    0 Tx pause
    0 interface resets
    =====================================================
    Output from UCS cli for vif paths
    Server: 1/1
    Fabric ID: A
    VIF        vNIC            Link State  Overall Status Prot State    Prot Role   Admin Pin  Oper Pin   Transport
    41                 Unknown     Unknown                                  0/0        0/0        Unknown
    689 eth1            Error       Error          No Protection Backup      0/0        0/0        Ether
    690 eth0            Error       Error          No Protection Primary     0/0        0/0        Ether
    Fabric ID: B
    VIF        vNIC            Link State  Overall Status Prot State    Prot Role   Admin Pin  Oper Pin   Transport
    42                 Unknown     Unknown                                  0/0        0/0        Unknown
    688 eth1            Error       Error          No Protection Primary     0/0        0/0        Ether
    691 eth0            Error       Error          No Protection Backup      0/0        0/0        Ether
    Any idea? Everything is connected as before, it used to work flawless.

    Well, the server ports are up. I know this is an uplink port problem, but I really cannot see where the problem is :). The ports on the northbound switch that the UCS connects to are trunk ports, and the VLANs are defined on that switch.
    I also suspect the SFPs (Cisco GLC-T), but it seems pretty strange that two SFP modules would be malfunctioning. They used to work fine.
    Just to mention, we have the same UCS in production and it is working; the config is the same, the ports are the same, and the NX-OS outputs for the Ethernet uplinks are the same.
    Just to check something, I created two vHBAs (we do have an FC expansion card, but the UCS is not connected to FC switches), and the VIF paths for FC are up... it seems this is related solely to the Ethernet side.

  • VShield Manager, VXLAN virtual wires, VDS uplink changes

    Need some input on how to fix this situation without rebuilding the whole damn cluster...
    I have vSM providing VXLAN virtual wires/networks for vCloud Director. As a part of some host updates--migrating from all 1Gbps to 1/10Gbps mixed--I was able to rearrange the NICs on the hosts and add additional uplinks to the total available on the DVS switch that VXLAN is "riding on." In the course of making these updates, I also renamed the uplinks, which resulted in errors and discovery of the root problem.
    The errors were popping up when trying to instantiate a new VXLAN network. The vCloud error was pretty inscrutable (as usual), but trying to manually create a new network in vSM provided useful information: the error was a failure to set the teaming mode for the new port group, and it referenced one of the former uplink names.
    After I added additional "dummy" uplinks to the DVS and renamed them so that the old names were included, the virtual wires could be built just fine. However, in reviewing the new portgroups, it was clear that vShield was building them using the original, pre-reconfiguration uplink names, ignoring all the new uplinks: the new uplinks were set as "unused" in the teaming properties, and the "active" were the dummy uplinks that had no physical adapters associated with them!
    I've restarted vSM, re-entered the vCenter credentials to try and get it to re-sync with the network configuration, but to no avail.
    I need some way to force vSM to re-enumerate that DVS that VXLAN is using so that it'll end up with the correct uplinks. To date, my Google-fu has failed me, so I'm hoping someone on the forums might have a clue. Heck, for all I know, this is a defect that I've just uncovered...

    Assuming you are using "failover order" teaming configured for your VXLAN, you can try using a REST API call to change the uplink names in the vCNS Manager (similar to what you can see in KB 2093324).
    Note: I recommend backing up or snapshotting the Manager first, just in case ...
    Headers required as:
    Accept: application/xml
    Content-Type : application/xml
    Basic Auth
    1./
    Use GET on the following to retrieve the prepared vDS entries:
    http://<manager>/api/2.0/vdn/switches
    ( obviously replace the <manager> part with the name or IP of your vShield/vCNS manager )
    You will get something like this (this is just a part of it if you have more than one vDS) :
    <vdsContext>
    <switch>
    <objectId>dvs-18</objectId>
    <type>
    <typeName>VmwareDistributedVirtualSwitch</typeName>
    </type>
    <name>DSwitch</name>
    <revision>16</revision>
    <objectTypeName>VmwareDistributedVirtualSwitch</objectTypeName>
    <scope>
    <id>datacenter-2</id>
    <objectTypeName>Datacenter</objectTypeName>
    <name>cloud</name>
    </scope>
    <extendedAttributes/>
    </switch>
    <mtu>1600</mtu>
    <teaming>FAILOVER_ORDER</teaming>
    <uplinkPortName>Uplink 2</uplinkPortName>
    <uplinkPortName>Uplink 1</uplinkPortName>
    <promiscuousMode>false</promiscuousMode>
    </vdsContext>
    2./
    Modify the <uplinkPortName> parts as you need. For example:
    <uplinkPortName>NewLink1</uplinkPortName>
    <uplinkPortName>NewLink2</uplinkPortName>
    Leave the rest as it is.
    3./
    Then execute a "PUT" to the URL below in the REST client, with the body modified as in step 2./ above (and again: assuming you are using failover order).
    http://<manager>/api/2.0/vdn/switches/dvs-18
    Note: Replace the "dvs-18" with the id between <objectId> and </objectId> that you got in the GET query.
    You should get an HTTP 200 code if all is OK. See the vShield API Guide around page 154, though personally I think the "Edit Teaming Policy" part is not correct.
    This will not change any existing port-group setting in vCenter. You will need to edit them manually. This change is only for any further VXLAN v-wire creation.
    HTH
    Roland
    P.s: I did my best to test and try the above example, but no guarantee and no support provided. For support please open a service request with VMware.

  • Uplink failover scenarios - The correct behavior

    Hello Dears,
    I'm somehow confused about the failover scenarios related to the uplinks and the Fabric Interconnect (FI) switches, as we have a lot of failover points, whether in the vNIC, FEX, FI or uplinks.
    I have some questions and I hope that someone can clear this confusion:
    A - Fabric Interconnect failover
    1 - As I understand it, when I create a vNIC it can be configured to use FI failover, which means that if FI A is down, or the uplink from the FEX to the FI is down, then using the same vNIC it will fail over to the other FI via the second FEX (is that correct, and is that the first stage of the failover?).
    2 - This vNIC will be seen by the OS as one NIC, and it will not detect anything about the failover that was done, is that correct?
    3 - Assume that I have 2 vNICs for the same server (a bare-metal blade with no ESX or VMware), and I have configured the 2 vNICs to work as a team (by the OS). Does that mean that if the primary FI or FEX is down, then using vNIC 1 it will fail over to the 2nd FI, and if for any reason that vNIC is down (for example if the uplink is down), it will go to the 2nd vNIC using the teaming?
    B - FEX failover
    1 - As I understand it, the blade server uses an uplink from the FEX to the FI based on its location in the chassis. So if this link is down, does that mean FI failover will trigger, or will it be assigned to another uplink (from the FEX to the FI)?
    C - Fabric Interconnect uplink failover
    1 - Using a static pin LAN group, the vNIC is associated with an uplink. What is the action if this uplink is down? Will the vNIC be:
    a. Brought down, as per the Network Control policy applied, in which case the OS will go for the second vNIC
    b. Failed over by the FI to the second FI, so the OS will not detect anything
    c. Re-pinned by FI A to another uplink on the same FI, with no failover
    I found all these 3 scenarios in different documents and posts. I have not had the chance to test it yet, so it would be great if anyone who has tested it can explain.
    Finally, I need to know whether the scenarios above also apply to the vHBA, or whether it has another methodology.
    Thanks in advance for your support.
    Moamen

    Moamen
    A few things about Fabric Failover (FF)  to keep in mind before I try to address your questions.
    FF is only supported on the M71KR and the M81KR.
    FF is only applicable/supported in End Host mode of operation and applies only to Ethernet traffic. For FC traffic one has to use multipathing software (the way FC failover has always worked). In End Host mode, if anything along the path (adapter port, FEX-IOM link, uplinks) fails, FF is initiated for Ethernet traffic *by the adapter*.
    FF is an event triggered by a vNIC going down, i.e. a vNIC is brought down and the adapter initiates the failover: it sends a message to the other fabric to activate the backup vEth (switchport), and the FI sends out gARPs for the MAC as part of it. As it is adapter driven, FF is only available on a few adapters, i.e. for now those whose firmware is done by Cisco.
    For the M71KR (Menlo), the firmware on the Menlo chip is made by Cisco; Intel/Emulex/Qlogic control the Oplin and FC parts of the card.
    The M81KR is made by Cisco exclusively for UCS and hence the firmware on that is done by us.
    Now to your questions -
    >1-      As I understand when I create a vNIC , it can be configured to use FI failover , which means if FI A is down , or the uplink from the FEX to the >FI is down , so using the same vNIC it will failover to the other FI via the second FEX ( is that correct , and is that the first stage of the failover ?).
    Yes
    > 2-      This vNIC will be seen by the OS as 1 NIC and it will not feel or detect anything about the failover done , is that correct ?
    Yes
    >3-      Assume that I have 2 vNICs for the same server (metal blade with no ESX or vmware), and I have configured 2 vNICs to work as team (by the >OS), does that mean that if primary FI or FEX is down , so using the vNIC1 it will failover to the 2nd FI, and for any reason the 2nd vNIC is down (for >example if uplink is down), so it will go to the 2nd vNIC using the teaming ?
    Instead of FF vNICs you can use NIC teaming. You bond the two vNICs, which creates a bond interface, and you specify an IP on it.
    With NIC teaming you will not have the vNICs (in the Service Profile) configured for FF. So FF will not kick in, and on a fabric failure etc. the vNIC will be seen as down by the teaming software, allowing the teaming driver to come into effect.
    > B-      FEX failover
    > 1-      As I understand the blade server uses the uplink from the FEX to the FI based on their location in the chassis, so what if this link is down, > >does that mean FI failover will trigger, or it will be assigned to another uplink ( from the FEX to the FI)
    Yes, we use static pinning between the adapters and the IOM uplinks which depends on the number of links.
    For example, if you have 2 links between IOM-FI.
    Link 1 - Blades 1,3,5,7
    Link 2 - Blades 2,4,6,8
    If Link 1 fails, Blades 1,3,5,7 move to the other IOM.
    That is, they will not fail over to the other links on the same IOM-FI pair, i.e. it is not a port-channel.
    The vNIC down event will be triggered. Whether FF is initiated depends on the setting (see the explanation above).
    > C-      Fabric Interconnect Uplink failover
    > 1-      Using static pin LAN group, the vNIC is associated with an uplink, what is the action if this uplink is down ? will the vNIC:
    > a.       Brought down , as per the Network Control policy applied , and in this case the OS will go for the second vNIC
    If you are using a static pin group, yes.
    If you are not using static pin groups, the same FI will map it to another available uplink.
    Why? Because by defining static pinning you are purposely defining the uplink/subscription ratio etc., and you don't want that vNIC to go to any other uplink. Both fabrics are active at any given time.
    > b.      FI failover to the second FI , the OS will not detect anything.
    Yes.
    > c.       The FI A will re-pin the vNIC to another uplink on the same FI with no failover
    For dynamic pinning, yes. For static pinning, no, as above.
    >I found all theses 3 scenarios in a different documents and posts, I did not have the chance it to test it yet, so it will be great if anyone tested it and >can explain.
    I would still highly recommend testing it. Maybe it's me, but I don't believe anything until I have tried it.
    > Finally I need to know if the correct scenarios from the above will be applied to the vHBA or it has another methodology.
    Multipathing driver as I mentioned before.
    FF *only* applies to ethernet.
    Thanks
    --Manish

  • 2100 WLC Redundent LAN Uplinks?

    Hi,
    In the data sheet for the 2100 it states that "Provides eight 10/100 Ethernet ports, intended to support a combination of access points and redundant LAN uplinks"
    http://www.cisco.com/en/US/prod/collateral/wireless/ps6302/ps8322/ps7206/ps7221/product_data_sheet0900aecd805aaab9.html
    My question is: how do you configure the redundant uplinks, as apparently the 2100 doesn't support LAG (link aggregation)?
    Any ideas?
    Many Thanks

    Excellent, thanks.
    So I can uplink the primary and backup ports to separate switches?
    I'm thinking that I would home all interfaces to the same primary and backup ports and then uplink each port to a different switch. There isn't really a need to try and load balance, as the traffic volumes are quite low.
    What do you think?
