VM-FEX and Nexus 1000v relation

Hi
I am new to the virtualization world and I need to know: what is the relationship between the Cisco Nexus 1000v and Cisco VM-FEX, and when should I use VM-FEX versus the Nexus 1000v?
Regards

Ahmed,
Sorry for taking this long to get back to you.
The Nexus 1000v is a virtual switch, so any traffic entering or leaving a VM must first pass through the virtualization layer. This adds a small amount of delay that, for some latency-sensitive applications (VMs), can be too much.
With VM-FEX you gain the option to bypass the virtualization layer, for example with "Pass-Through" mode, where the vNICs are assigned to and managed by the guest OS. This minimizes the delay and makes the VMs look as if they were directly attached; it also offloads switching work from the host CPU, improving host/VM performance.
Which one you need will, as always, be determined by your organization's/business's requirements.
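To make the contrast concrete, here is a minimal sketch of the kind of virtual networking configuration the Nexus 1000v adds, a vEthernet port-profile that VMs attach to (the name and VLAN are illustrative); with VM-FEX the equivalent port profile is defined in UCS Manager and the VM's vNIC maps straight to a virtual interface on the Fabric Interconnect instead:
port-profile type vethernet VM-Data
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled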
Benefits of VM-FEX (from cisco.com):
Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
Improved network security: Contains VLAN proliferation
Optimized network utilization: Reduces broadcast domains
Enhanced application performance: Offloads virtual machine switching from host CPU to parent switch application-specific integrated circuits (ASICs)
The benefits of the Nexus 1000v are covered in another post from Rob Burns:
https://supportforums.cisco.com/thread/2087541 
https://communities.vmware.com/thread/316542?tstart=0
I hope that helps 
-Kenny

Similar Messages

  • Latency between FEX and Nexus 9300

    Has anyone measured, or does anyone know, the port-to-port latency on a Nexus FEX? We are seeing some strange results with MTU above 1500. Everything has been configured for jumbo frames (9216). The FEX is a N2K-C2232PP-10GE and it's attached to a Nexus 9000 C9396PX 1/10G module.
    Port to Port Layer 2 on the FEX we are seeing around 76us
    Traffic from the traffic generator connected to a FEX, to the traffic generator on the Nexus 9300, we are seeing around 74us
    Traffic from the traffic generator connected to the Nexus 9300, to the traffic generator on the FEX, we are seeing around 2us.
    Any ideas why the latency seems high in one direction? The same test on the 9300 alone is fine as well.


  • VWLC and Nexus-1000V

    Hi Experts!
    Has anybody tried to install a vWLC on ESX with the Nexus 1000V as the switch?
    All the deployment guides are based on the standard VMware vSwitch and I cannot find any information on these questions:
    1. Is the vWLC compatible with the Nexus 1000V?
    2. What configuration should be done on the Nexus 1000V for the vWLC to work properly?

    Hi Dave,
    You can access the below URL for the Nexus 1000v 4.0(4)SV1(3b) docs:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3_b/roadmap/guide/n1000v_roadmap.html
    And for the Nexus 5000:
    http://www.cisco.com/en/US/products/ps9670/tsd_products_support_series_home.html
    BR,
    John Meng

  • Port-security and Nexus 1000v

    Is there really any true need for port-security on Nexus 1000v for vethernet ports? Can a VM be assigned a previously used vethernet port that would trigger a port-security action?

    If you want to prevent admins or malicious users from being able to change the MAC address of a VM, then port-security is a useful feature, especially in VDI environments where users might have full admin control of the VM and can change the MAC of the vNIC.
    Now, about vEth ports. A vEth gets assigned to a VM and stays with that VM. A vEth is only released when the NIC on the VM is deleted, or when the NIC is assigned to another port-profile on the N1KV or to a port-group on a vSwitch or VMware DVS. When the vEth is released it does not retain any of the prior information; it is freed up and added to a pool of available vEths. When a vEth is needed for a VM, in either the same port-profile or a different one, a free vEth is grabbed and initialized. It does not retain any of the previous settings.
    So assigning a VM to a previously used vEth port should not trigger a violation. The MAC should get learned and traffic should be able to flow.
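    If you do decide to use it, a minimal sketch of port-security on a vEthernet port-profile could look like the following (the profile name and VLAN are illustrative; check the Security Configuration Guide for the options available in your release):
    port-profile type vethernet VDI-Desktops
      vmware port-group
      switchport mode access
      switchport access vlan 100
      switchport port-security
      switchport port-security violation shutdown
      no shutdown
      state enabled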

  • How to change Nexus 1010 and Nexus 1000v IP address

    Hi Experts,
    We run two VSMs and one NAM on the Nexus 1010. The Nexus 1010 version is 4.2.1.SP1.4 and the Nexus 1000v version is 4.0.4.SV1.3c. Now we must change the management IP address to a different one. Where can I find an SOP or config sample? And is there anything I must keep in mind?

    If it's only the mgmt0 IP address that you are changing, then you can just enter the new address under the mgmt0 interface. It will automatically sync with the VC.
    I guess you are trying to change the IP address of the VC plus the management VLAN as well. One way of doing this is:
    - From the Nexus 1000v, disconnect the connection to the VC (svs connection -> no connect)
    - Change the VC IP address and connect to it (svs connection -> remote ip address)
    - Change the Nexus 1000v mgmt0 address
    - Change the mgmt VLAN on the N1010
    - Change the mgmt address of the N1010
    - Reconnect the Nexus 1000v back to the VC (svs connection -> connect)
    Correspondingly, change the VLAN configuration on the upstream switch plus the connection to the VC as well.
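    As a rough CLI sketch of the Nexus 1000v side of those steps (the IP addresses are placeholders and the connection name "VC" is whatever yours is called; the N1010 changes are made analogously from its own CLI):
    ! 1. break the VC connection, then repoint it at the VC's new address
    svs connection VC
      no connect
      remote ip address 192.0.2.50
    ! 2. change the VSM's own mgmt0 address
    interface mgmt0
      ip address 192.0.2.10/24
    ! 3. once the N1010 and upstream VLAN changes are done, re-establish the VC connection
    svs connection VC
      connect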
    Thanks,
    Shankar

  • Nexus 1000v port-channels questions

    Hi,
    I’m running vCenter 4.1 and Nexus 1000v and about 30 ESX Hosts.
    I’m using one system uplink port profile for all 30 ESX Host; On each of the ESX host I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
    The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
    port-profile type ethernet uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
    switchport trunk native vlan 500
    mtu 1500
    channel-group auto mode on sub-group cdp
    no shutdown
    system vlan 988-989
    description System-Uplink
    state enabled
    And the port channel on the Catalyst 3750 are configured like the following:
    interface Port-channel11
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    end
    interface GigabitEthernet1/0/18
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    interface GigabitEthernet1/0/1
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750 should not be configured in a port-channel like above, but should be configured as an individual trunk.
    First question: Is the above statement correct? Are my uplinks configured wrong? Should they be configured as individual trunks instead of a port-channel?
    Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port-profile with the MAC pinning configuration and then move one ESX host (with no VMs on it) at a time to that new system uplink port-profile? This way I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
    Thanks.

    Hello,
    From what I understood, you have the following setup:
         - Each ESX host has 4 NICS
         - 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
         - all 4 vmnics on the ESX host use the same Ethernet port-profile
              - this has 'channel-group auto mode on sub-group cdp'
         - The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
    If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP; with CDP loss, the port-channels would go down.
    'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this, and they can be just regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it was supposed to be removed in 4.2(1)SV1(2.1), but I still see the command available (ignore the 4.2(1)SV1(4) next to it); I'll follow up on this internally:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
    For migrating, the best option would be as you suggested. Create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile and can remove the upstream port-channel config as well.
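    A minimal sketch of such a mac-pinning uplink port-profile, reusing the VLANs from your existing uplink profile (the name is illustrative), could look like this; the corresponding 3750 interfaces would then be plain trunk ports with no 'channel-group 11 mode on' line:
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 988-989
      description System-Uplink-MacPinning
      state enabled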
    Thanks,
    Shankar

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
    I'm curious if anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
    Support currently exists for the 1000v with SCVMM using the Cisco 1000v software for MS Hyper-V; there is no such support for VMware ESX.
    I'm curious as to what others running VMware, the Nexus 1000v (or equivalent), and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
    The steps you have above are correct; you will need steps 1 - 4 to get it working correctly. Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions
    1) I've seen multiple customer run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS.  It's enabled by default and not a configurable option.  There's no need to change anything within UCS in regards to MS NLB with the procedure above.  FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release 2.1.
    This is the correct method until we have the option of configuring static multicast MAC entries on
    the Nexus 1000v. If this is a feature you'd like, please open a TAC case and request for bug CSCtb93725 to be linked to your SR.
    This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • Connecting a Dell B22 FEX to Nexus 6001 and storage

    Hi Folks,
    We have a setup where we are running a Dell B22 FEX in a blade environment and want to connect the B22 FEX to a Cisco Nexus 6001 switch. As per the NX-OS release notes, B22 FEX to Nexus 6001 connectivity is supported.
    Now, after connecting to the Nexus 6001, how do I get access to the storage pool or SAN fabric? As per another discussion thread, the Nexus 6001 does not support direct fabric attachment at this point in time. So how do we bridge these two elements to a storage fabric?
    As per 6.02 release note:
    "Support for DELL FEX
    Added support for the Cisco Nexus B22 Dell Fabric Extender for Cisco Nexus 6000 Series switches starting with the 6.0(2)N1(2) release."
    This is the exact reason we bought it. We have an environment where we are running the Dell B22 FEX. The idea is to connect the B22 FEX into the Nexus 6001. We are confused at this point: after connecting the Dell B22 FEX to the Nexus 6001, how do we access the storage network or storage fabric?
    Thanks,

    Hi Rays,
    The function should be the same for all FEXs regardless of what parent switch they connect to.
    if you are referring to this comment:
    You can connect any edge switch that leverages a link redundancy mechanism not dependent on spanning tree such as Cisco FlexLink or vPC
    FlexLink is a different technology that does not use STP, but not every switch platform supports FlexLink. FlexLink is not used very often, as other technologies like VSS, vPC and stacking have emerged.
    HTH

  • Firewall between Nexus 1000V VSM and vCenter

    Hi,
    Customer has multiple security zones in the environment, and VMware vCenter is located in a Management Security Zone. The VSMs in the security zones have a dedicated management interface facing the Management Security Zone, with a firewall in between. What ports do we need to open for communication between the VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentions TCP/80 and TCP/443. Are these outbound from the VSM to vCenter? Are there any requirements from vCenter to the VSM? What's the best practice for VSM management interface configuration in a multiple-security-zone environment? Thanks.

    Avi -
    You need the connection between vCenter and the VSM anytime you want to add or make any changes to the existing port-profiles.  This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
    One problem when vCenter is down is what you pointed out: configuration changes cannot be pushed.
    The VEM/VSM relationship is independent of the VSM/vCenter connection.  There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
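    As a rough sketch of the firewall rules this implies (the addresses are placeholders; verify the full port list against the troubleshooting guide for your release), the opening is essentially VSM mgmt0 outbound to vCenter on TCP/80 and TCP/443:
    ip access-list extended VSM-TO-VCENTER
     permit tcp host 192.0.2.10 host 192.0.2.50 eq 80
     permit tcp host 192.0.2.10 host 192.0.2.50 eq 443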
    Jen

  • VN-Tag with Nexus 1000v and Blades

    Hi folks,
    A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag. Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000. I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP C-Class blades? My guess is NO, since Cisco just kicked HP to the curb. But if that's the case, what are my options? Pass-through switches? (ugh!)
    Previous thread:
    https://supportforums.cisco.com/message/469303#469303

    wondering the same...

  • Nexus 1000v and vcenter domain admin account

    I changed the domain admin account that the vCenter services run as on our domain, and now it is using a different service account. I am wondering if I need to update anything on the Nexus 1000v switch side, between the 1000v and vCenter.

    Hi Dan,
    You are on the right track. However, you can perform some of these functions "online".
    First, you want to ensure that you are running, at a minimum, Nexus 1000v SV1(4a), as ESXi 5.0 support only began with this release. SV1(4a) provides support for both ESXi 5.0 and ESX/ESXi 4.1.
    Then you can follow the procedure documented here:
    Upgrading from VMware Release 4.0/4.1 to VMware Release 5.0.0
    This document walks you through upgrading your ESX infrastructure to VMware Release 5.0.0 when Cisco Nexus 1000V is installed. It is required to be completed in the following order:
    1. Upgrade the VSMs and VEMs to Release 4.2(1)SV1(4a).
    2. Upgrade the VMware vCenter Server to VMware Release 5.0.0.
    3. Upgrade the VMware Update Manager to VMware Release 5.0.0.
    4. Upgrade your ESX hosts to VMware Release 5.0.0 with a custom ESXi image that includes the VEM bits.
    Upgrading the ESX/ESXi hosts consists of the following procedures:
    –Upgrading the vCenter Server
    –Upgrading the vCenter Update Manager
    –Augmenting the Customized ISO
    –Upgrading the ESXi Hosts
    There is also a 3-part video highlighting the procedure to perform the last two steps above (customized ISO and upgrading ESXi hosts).
    Video: Upgrading the VEM to VMware ESXi Release 5.0.0
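    A quick way to sanity-check the version matching after steps 1 and 4 is from the VSM CLI (a sketch; exact output varies by release):
    n1000v# show version                ! VSM software version
    n1000v# show module                 ! lists the VSMs and each VEM with its software version
    n1000v# show module vem mapping     ! maps each VEM to its ESX/ESXi host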
    Hope that helps you with your upgrade.
    Thanks,
    Michael

  • Vmware Tools for Nexus 1000v, VNMC and VSG

    Hi everyone, a customer is asking me how to install VMware Tools in the virtual machines for the N1Kv, VSG and VNMC.
    Does someone know the procedure, or whether that is possible at all?

    @Robert
    I wanted to know/understand which hardware version would be compatible with the Nexus 1000V. Is there any dependency on the hardware version?
    Regards,
    Amit Vyas

  • VSM and Cisco nexus 1000v

    Hi,
    We are planning to install the Cisco Nexus 1000v in our environment. Before we install it, we want to explore the Cisco Nexus 1000v a little bit.
    •  I know there are two elements to the Cisco 1k, the VEM and the VSM. Is the VSM required? Can we configure a VEM individually?
    •  How does the Nexus 1k integrate with vCenter? Can we do all of the Nexus 1000v configuration from vCenter without going to the VEM or VSM?
    •  In terms of alarming and reporting, do we need to collect SNMP traps and gets from the individual VEMs, or can we use the VSM for that? Or can we get Cisco Nexus 1000v alarming and reporting from VMware vCenter?
    •  Apart from using the Nexus 1010, what is the recommended hosting location for the VSM (same host as a VEM, a different VM, a different physical server)?
    Foyez Ahammed

    Hi Foyez,
    Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
    The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) the VSM and 2) the VEM.
    VSM - The Virtual Supervisor Module is what controls the entire N1K setup and is where the configuration is done for the VEM modules, interfaces, security, monitoring, etc. The VSM is the component which interacts with the VC.
    VEM - The Virtual Ethernet Modules are simply the modules, or virtual linecards, which provide the connectivity options and virtual ports for the VMs and other virtual interfaces. Each ESX host today can only have one VEM. These VEMs receive their configuration / programming from the VSM.
    If you are aware of other switching products from Cisco like the Cat 6k switches, the N1K behaves the same way but in a software / virtual environment, where the VSM is the equivalent of the SUP and the VEMs are similar to the line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity as the 6k backplane would for the communication between the modules and the SUP (the VSM in this case).
    *The N1K configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual / physical ports from the VC.
    *You can run the VSM either on the Nexus 1010 as a Virtual Service Blade (VSB) or as a normal VM on any ESX/ESXi server. Running the VSM and the VEM on the same server is fully supported.
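    To illustrate the control/packet VLAN point above, here is a minimal sketch of the svs-domain configuration on a Layer 2 mode VSM (the domain ID and VLAN numbers are placeholders):
    svs-domain
      domain id 100
      control vlan 900
      packet vlan 901
      svs mode L2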
    You can refer the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
    Hope this answers your queries!
    ./Abhinav

  • Nexus 1000V, 4K and 5K

    I'm looking over the deployment guide for the 1000V and am not clear on the design. If I have a Nexus 4k connecting to a Nexus 5k, how does the Nexus 1000V fit? What I'm seeing is that typically a vPC is built between the Nexus 1k and a clustered upstream switch, such as Nexus 5ks or a VSS with 6500s. However, if I already have a vPC between a Nexus 4k and a pair of 5ks, what effect does adding 1ks to the configuration have? Or is the idea to move the vPC back to the 1000Vs instead of between the 4k and 5ks? Or perhaps is using a 1000V more suited to when you have blades with pass-through modules, where each blade has its own NIC, or when there are blade switches (non-Nexus 4k) in the chassis?
    thank you,
    Bill

    Hi Bill,
    Mainly there are two options.
    The first option is to use the N1K with clustered upstream switches, as you mentioned, using vPC or VSS.
    In this case all you need from the N1K/ESXi host is a normal port-channel, with the port-channel links multihomed to both of these switches (this is the recommended solution if applicable).
    The second option is to use non-clustered switches, like the two 4Ks in your case, as the upstream switches with the N1K.
    In this case you can use vPC host mode, where the N1K (in newer releases) uses mac-pinning to choose an uplink sub-group within the port-channel.
    The key difference between the two options is the channel-group line in the Ethernet uplink port-profile; see the sketch below:
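    A rough sketch (the profile names are illustrative, and the trunk/VLAN settings would follow your environment):
    ! Option 1: clustered upstream (vPC/VSS) - standard LACP port-channel
    port-profile type ethernet uplink-lacp
      vmware port-group
      switchport mode trunk
      channel-group auto mode active
      no shutdown
      state enabled
    ! Option 2: non-clustered upstream switches - vPC host mode with mac-pinning
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled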

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how the integration between the N1K and UCS Manager works.
    First question:
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs are). I also created VLAN 100 (the same as on the N1K).
    Second question: With the configuration above, if I include only VLAN 100 (not as the native VLAN) in the vNIC profile, the two VMs cannot ping each other. If instead I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: How does VLAN tagging work on the Fabric Interconnect and also on the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    - Yes. Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: With the configuration above, if I include only VLAN 100 (not as the native VLAN) in the vNIC profile, the two VMs cannot ping each other. If instead I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    - The N1K port-profiles are switchport access, which makes the traffic untagged. That untagged traffic maps to the native VLAN in UCS. If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
    Third question: How does VLAN tagging work on the Fabric Interconnect and also on the N1K?
    - All ports on UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively (untagged). On the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'. For your Ethernet profiles, you will want 'switchport mode trunk'. Use an unused VLAN as the native VLAN; all production VLANs will then be passed from the N1K to UCS as tagged VLANs.
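    Putting that together, a minimal sketch for your VLAN 100 setup on the N1K might look like this (VLAN 999 stands in for the unused native VLAN; the UCS vNIC would then carry VLAN 100 as a tagged VLAN plus the same native VLAN):
    port-profile type vethernet vm-vlan100
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled
    port-profile type ethernet uplink-to-ucs
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 999
      switchport trunk allowed vlan 100,999
      no shutdown
      state enabled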
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk
