Nexus 1000V, 4K and 5K

I'm looking over the deployment guide for the 1000V and am not clear on the design.  If I have a Nexus 4K connecting to a Nexus 5K, how does the Nexus 1000V fit?  What I'm seeing is that typically a vPC is built between the Nexus 1K and a clustered upstream switch pair, such as Nexus 5Ks or a 6500 VSS.  However, if I already have a vPC between a Nexus 4K and a pair of 5Ks, what effect does adding 1Ks to the configuration have?  Or is the idea to move the vPC back to the 1000Vs instead of between the 4K and 5Ks?  Or is using a 1000V perhaps better suited when the blades use pass-through modules, so each blade has its own NIC, or when there are blade switches (non Nexus 4K) in the chassis?
thank you,
Bill

Hi Bill,
There are mainly two options.
Option one is to use the N1K with clustered upstream switches, as you mentioned, vPC or VSS.
In this case, all you need from the N1K/ESXi host is a normal port channel, with the port-channel links multihomed to both upstream switches (this is the recommended solution, if applicable).
Option two is to use non-clustered switches, as in your case the two 4K switches, as the upstream switches with the N1K.
In this case you can use vPC host mode (vPC-HM), where the N1K in newer releases uses MAC pinning to choose an uplink subgroup within the port channel.
See below:
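As a rough sketch only (the profile names, VLAN ranges, and channel-group settings are placeholders, not taken from the deployment guide), the two uplink styles on the N1K would look roughly like this:

    ! Option 1 - clustered upstream (vPC on the 5Ks or VSS): a single LACP port channel
    port-profile type ethernet UPLINK-LACP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode active
      no shutdown
      state enabled

    ! Option 2 - non-clustered upstream switches: vPC host mode with MAC pinning,
    ! where each subgroup maps to the uplinks toward one upstream switch
    port-profile type ethernet UPLINK-MAC-PIN
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled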

Similar Messages

  • Firewall between Nexus 1000V VSM and vCenter

    Hi,
    Customer has multiple security zones in the environment, and VMware vCenter is located in a Management Security Zone. VSMs in the security zones have a dedicated management interface facing the Management Security Zone, with a firewall in between. What ports do we need to open for the communication between the VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentions TCP/80 and TCP/443. Are these outbound from the VSM to vCenter? Are there any requirements from vCenter to the VSM? What's the best practice for VSM management interface configuration in a multiple-security-zone environment? Thanks.

    Avi -
    You need the connection between vCenter and the VSM any time you want to add or make changes to the existing port-profiles.  This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
    One problem when vCenter is down is what you pointed out: configuration changes cannot be pushed.
    The VEM/VSM relationship is independent of the VSM/vCenter connection.  There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
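    As a rough illustration only (the IP address, datacenter name, domain ID, and the choice of L3 mode are assumptions, not from this thread), the VSM-to-vCenter connection and the VSM-to-VEM control path are configured separately on the VSM:

        svs connection vcenter
          protocol vmware-vim
          remote ip address 192.0.2.50 port 80
          vmware dvs datacenter-name DC1
          connect

        svs-domain
          domain id 100
          ! L3 control over mgmt0; the alternative is L2 mode with dedicated control/packet VLANs
          svs mode L3 interface mgmt0

    The TCP/80 and TCP/443 openings would be outbound from the VSM's management interface toward vCenter for this connection.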
    Jen

  • Vmware Tools for Nexus 1000v, VNMC and VSG

    Hi everyone, a customer is asking me how to install the VMware Tools in the virtual machines for the N1Kv, VSG and VNMC.
    Does anyone know the procedure, or whether that is even possible?

    @Robert
    Wanted to know / understand which hardware version would be compatible with the Nexus 1000V. Is there any dependency on the hardware version?
    Regards,
    Amit Vyas

  • VSM and Cisco nexus 1000v

    Hi,
    We are planning to install Cisco Nexus 1000v in our environment. Before we install it, we want to explore Cisco Nexus 1000v a little:
    •  I know there are two elements to the Cisco 1K, the VEM and the VSM. Is the VSM required? Can we configure VEMs individually?
    •  How does the Nexus 1K integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
    •  In terms of alarming and reporting, do we need to collect SNMP traps and gets from each individual VEM, or can we use the VSM for that? Or can we get Cisco Nexus 1000v alarming and reporting from VMware vCenter?
    •  Apart from using the Nexus 1010, what is the recommended hosting location for the VSM (the same host as a VEM, a different VM, a different physical server)?
    Foyez Ahammed

    Hi Foyez,
    Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
    The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) VSM 2) VEM.
    VSM - The Virtual Supervisor Module is the one which controls the entire N1K setup and from where the configuration is done for the VEM modules, interfaces, security, monitoring etc. The VSM is the one which interacts with the VC.
    VEM - Virtual Ethernet Modules are simply the modules or virtual linecards which provide the connectivity options or virtual ports for the VMs and other virtual interfaces. Each ESX host today can only have one VEM. These VEMs receive their configuration / programming from the VSM.
    If you are aware of other switching products from Cisco like the Cat 6k switches, the N1K behaves the same way but in a software / virtual environment, where the VSM is the equivalent of the supervisors and the VEMs are similar to the line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity that the 6k backplane would for the communication between the modules and the SUP (the VSM in this case).
    * The N1K configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual / physical ports from the VC (see the sketch below).
    * You can run the VSM either on the Nexus 1010 as a Virtual Service Blade (VSB) or as a normal VM on any ESX/ESXi server. Running the VSM and the VEM on the same server is fully supported.
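    For example, here is a minimal vEthernet port-profile as it might be defined on the VSM (the name and VLAN are purely illustrative); once enabled, it shows up in the VC as a port group that you assign to VM vNICs:

        port-profile type vethernet VM-DATA
          vmware port-group
          switchport mode access
          switchport access vlan 100
          no shutdown
          state enabled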
    You can refer to the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
    Hope this answers your queries!
    ./Abhinav

  • Trunking Vlans in Nexus 1000V

    I am looking to design a solution for a customer. They run a very tight hosting environment with Nexus 1000V switches and want to set up private VLANs, as they are running out of VLANs.
    I need to find some information on whether it is possible to trunk a private VLAN between two Nexus switches,
    or any other information on private VLANs on the Nexus 1000V.
    Thanks
    Roger

    Hello Roger,
    Yes, pVLANs can be trunked between switches.  A good discussion can be found here.  Have you considered VXLAN as an alternative to pVLANs?  VXLAN allows up to 16M segments to be defined, though it differs slightly from pVLANs in that all VMs in a VXLAN segment can communicate.
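    As a rough sketch only (the VLAN numbers and profile name are assumptions), a primary/isolated pVLAN pair on the 1000V could look like this; the Ethernet uplink then simply trunks both VLANs toward the upstream switch:

        vlan 100
          private-vlan primary
          private-vlan association 101
        vlan 101
          private-vlan isolated

        port-profile type vethernet PVLAN-ISOLATED
          vmware port-group
          switchport mode private-vlan host
          switchport private-vlan host-association 100 101
          no shutdown
          state enabled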
    Matthew

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
    I'm curious if anybody has worked up any solution for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
    There is currently support for the 1000v with SCVMM using the Cisco 1000v software for MS Hyper-V, but there is no such support for VMware ESX.
    I'm curious as to what others with VMware, the Nexus 1000v (or equivalent) and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
    The steps you have above are correct; you will need steps 1 - 4 to get it working correctly.  Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions:
    1) I've seen multiple customers run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS.  It's enabled by default and not a configurable option.  There's no need to change anything within UCS in regards to MS NLB with the procedure above.  FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release, 2.1.
    This is the correct method until the time we have the option of configuring static multicast mac entries on
    the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.
    This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • VN-Tag with Nexus 1000v and Blades

    Hi folks,
    A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag.  Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000.  I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP C-Class blades?  My guess is NO, since Cisco just kicked HP to the curb.  But if that's the case, what are my options?  Pass-through switches?  (ugh!)
    Previous thread:
    https://supportforums.cisco.com/message/469303#469303

    wondering the same...

  • Nexus 1000v and vcenter domain admin account

    I changed the domain admin account that the vCenter services run as on our domain, and they are now using a different service account. I am wondering if I need to update anything on the Nexus 1000v switch side, between the 1000v and vCenter.

    Hi Dan,
    You are on the right track. However, you can perform some of these functions "online".
    First you want to ensure that you are running, at a minimum, Nexus 1000v SV1(4a), as ESXi 5.0 support only began with this release. SV1(4a) provides support for both ESXi 5.0 and ESX/ESXi 4.1.
    Then you can follow the procedure documented here:
    Upgrading from VMware Release 4.0/4.1 to VMware Release 5.0.0
    This document walks you through upgrading your ESX infrastructure to VMware Release 5.0.0 when Cisco Nexus 1000V is installed. It must be completed in the following order:
    1. Upgrade the VSMs and VEMs to Release 4.2(1)SV1(4a).
    2. Upgrade the VMware vCenter Server to VMware Release 5.0.0.
    3. Upgrade the VMware Update Manager to VMware Release 5.0.0.
    4. Upgrade your ESX hosts to VMware Release 5.0.0 with a custom ESXi image that includes the VEM bits.
    Upgrading the ESX/ESXi hosts consists of the following procedures:
    –Upgrading the vCenter Server
    –Upgrading the vCenter Update Manager
    –Augmenting the Customized ISO
    –Upgrading the ESXi Hosts
    There is also a 3-part video highlighting the procedure to perform the last two steps above (customized ISO and upgrading the ESXi hosts):
    Video: Upgrading the VEM to VMware ESXi Release 5.0.0
    Hope that helps you with your upgrade.
    Thanks,
    Michael

  • VWLC and Nexus-1000V

    Hi Experts!
    Has anybody tried to install the vWLC on ESX with the Nexus-1000V as the switch?
    All deployment guides are based on the standard VMware vSwitch, and I cannot find any information on the following questions:
    1. Is the vWLC compatible with the Nexus-1000V?
    2. What configuration should be done on the Nexus-1000V for the vWLC to work properly?

    Hi Dave,
    You can access the URL below for the Nexus 1000v 4.0(4)SV1(3b) docs:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3_b/roadmap/guide/n1000v_roadmap.html
    And
    Nexus5000
    http://www.cisco.com/en/US/products/ps9670/tsd_products_support_series_home.html
    BR,
    John Meng

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how the integration between the N1K and UCS Manager works.
    First question:
    If two VMs on different ESXi hosts and different VEMs but in the same VLAN would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs are). I also created VLAN 100 (the same as on the N1K).
    Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. Instead, if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: How does VLAN tagging work on the Fabric Interconnect and also on the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs but in the same VLAN would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    -  Yes.  Each ESX host with the VEM will have one or more dedicated NICs for the VEMs to communicate with the upstream network.  These would be your 'type ethernet' port-profiles.  The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. Instead, if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    -  The N1K port-profiles are switchport access, making the traffic untagged.  This would be the native VLAN in UCS.  If there is no native VLAN in the UCS configuration, we do not have the upstream network bridging the VLAN.
    Third question: How does VLAN tagging work on the Fabric Interconnect and also on the N1K?
    -  All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively, or untagged.  In the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'.  For your Ethernet profiles, you will want them to be 'switchport mode trunk'.  Use an unused VLAN as the native VLAN.  All production VLANs will be passed from the N1K to UCS as tagged VLANs.
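    A minimal sketch of that recommendation (the VLAN numbers, profile names, and the use of MAC pinning are placeholders, not taken from this thread): the vEthernet profile stays access, while the Ethernet uplink profile trunks the production VLAN tagged and uses an otherwise unused VLAN as native:

        port-profile type vethernet VM-VLAN100
          vmware port-group
          switchport mode access
          switchport access vlan 100
          no shutdown
          state enabled

        port-profile type ethernet UCS-UPLINK
          vmware port-group
          switchport mode trunk
          switchport trunk native vlan 999
          switchport trunk allowed vlan 100,999
          channel-group auto mode on mac-pinning
          no shutdown
          state enabled

    The corresponding UCS vNIC would then need VLAN 100 allowed (tagged) so the upstream network can bridge it between the blades.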
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk

  • VM-FEX and Nexus 1000v relation

    Hi
    I am new to the virtualization world, and I need to know what the relationship is between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.
    Regards

    Ahmed,
    Sorry for taking this long to get back to you.
    The Nexus 1000v is a virtualized switch, and as such any traffic entering or leaving a VM must first pass through the virtualization layer. This introduces some delay, which for certain applications (VMs) can simply be too much.
    With VM-FEX you gain the option to bypass the virtualization layer, for example with "Pass-Through" mode, where the vmnics are assigned to and managed by the OS directly. This minimizes the delay and makes the VMs look as if they were directly attached; it also offloads CPU work from the host, optimizing host/VM performance.
    The need for one or the other will be defined as always by the needs your organization/business has.
    Benefits of VM-FEX (from cisco.com):
    Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
    Improved network security: Contains VLAN proliferation
    Optimized network utilization: Reduces broadcast domains
    Enhanced application performance: Offloads virtual machine switching from the host CPU to parent switch application-specific integrated circuits (ASICs)
    Benefits of Nexus 1000v here on another post from Rob Burns:
    https://supportforums.cisco.com/thread/2087541 
    https://communities.vmware.com/thread/316542?tstart=0
    I hope that helps 
    -Kenny

  • Port-security and Nexus 1000v

    Is there really any true need for port-security on Nexus 1000v for vethernet ports? Can a VM be assigned a previously used vethernet port that would trigger a port-security action?

    If you want to prevent admins or malicious users from being able to change the MAC address of a VM, then port-security is a useful feature, especially in VDI environments where users might have full admin control of the VM and can change the MAC of the vNIC.
    Now, about veth ports. A veth gets assigned to a VM and stays with that VM. A veth is only released when either the NIC on the VM is deleted, or the NIC is assigned to another port-profile on the N1KV or to a port-group on a vSwitch or VMware DVS. When the veth is released it does not retain any of its prior information: it is freed up and added to a pool of available veths. When a veth is needed for a VM, in either the same port-profile or a different port-profile, a free veth is grabbed and initialized. It does not retain any of the previous settings.
    So assigning a VM to a previously used veth port should not trigger a violation. The MAC should get learned and traffic should be able to flow.
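    If you do decide to enable it, a rough sketch of port security applied through a vEthernet port-profile (the profile name, VLAN, and maximum are assumptions, not from this thread):

        port-profile type vethernet VDI-DESKTOPS
          vmware port-group
          switchport mode access
          switchport access vlan 200
          switchport port-security
          switchport port-security maximum 1
          no shutdown
          state enabled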

  • How to change Nexus 1010 and Nexus 1000v IP address

    Hi Experts,
    We run two VSMs and one NAM on a Nexus 1010. The Nexus 1010 version is 4.2.1.SP1.4, and the Nexus 1000v version is 4.0.4.SV1.3c. Now we must change the management IP address to another one. Where can I find the SOP or a configuration sample? And is there anything I need to keep in mind?

    If it's only the mgmt0 IP address that you are changing, then you can just enter the new address under the mgmt0 interface. It will automatically sync with the VC.
    I guess you are trying to change the IP address of the VC plus the management VLAN as well. One way of doing this is:
    - From the Nexus 1000v, disconnect the connection to the VC (svs connection -> no connect)
    - Change the VC IP address and connect to it (svs connection -> remote ip address)
    - Change the Nexus 1000v mgmt0 address
    - Change the mgmt VLAN on the N1010
    - Change the mgmt address of the N1010
    - Reconnect the Nexus 1000v back to the VC (svs connection -> connect)
    Correspondingly, change the VLAN configuration on the upstream switch plus the connection to the VC as well.
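    Roughly, on the Nexus 1000v side the commands would look like this (the addresses are placeholders; adjust to your own new management subnet):

        n1000v# configure terminal
        n1000v(config)# svs connection vcenter
        n1000v(config-svs-conn)# no connect
        n1000v(config-svs-conn)# remote ip address 192.0.2.60 port 80
        n1000v(config-svs-conn)# exit
        n1000v(config)# interface mgmt0
        n1000v(config-if)# ip address 192.0.2.11/24
        n1000v(config-if)# exit
        n1000v(config)# svs connection vcenter
        n1000v(config-svs-conn)# connect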
    Thanks,
    Shankar

  • Install and Configure Nexus 1000V

    Hi all !! Hope everyone is well !!
    I just purchased 2 Nexus 1000Vs. Could someone give me some guidance on how to go about installing / configuring a Nexus 1000V switch? This is my first time working with Nexus series switches. Thanks in advance !!!
    D.

    Hi,
    Here are some document and video links that will help you proceed further with the Nexus 1000v installation:
    http://www.cciemachine.com/en/US/products/ps9902/prod_installation_guides_list.html
    http://www.google.co.in/search?q=nexus+1000v+installation&hl=en&client=firefox-a&hs=Hnt&rls=org.mozilla:en-US:official&prmd=v&source=univ&tbs=vid:1&tbo=u&ei=WtloTI6hAc-rcbb98I8F&sa=X&oi=video_result_group&ct=title&resnum=4&ved=0CCwQqwQwAw

  • Nexus 1000V and strange ping behavior

    Hi ,
    I am using a Nexus 1000v and an FI 6248 with a Nexus 5K in a redundant architecture, and I have strange behavior with some VMs.
    I am using port-profiles without any problems, but in one case I have this issue.
    I have 2 VMs assigned to the same port-profile.
    When the 2 VMs are on the same ESX host, I can ping (from a VM) the gateway and the other VM. The problem appears when I move one of the VMs to another ESX host (same chassis or not):
    from both VMs I can ping the gateway and a remote IP, but the VMs are unreachable from each other,
    and a remote PC is able to ping both VMs.
    I checked the MAC table: from the N5K it's OK, from the FI it's OK, but on the N1K I am unable to see the MAC address of either VM.
    What I tried (performing a clear of the MAC table at each step):
        Assigned it to another vmnic: it works.
        On UCS I moved it to another vmnic: it works.
        On UCS I changed the QoS policy: it works.
        I reassigned it, and I had the old behavior again.
        I checked all trunk links: they are OK.
    So I don't understand why I have this strange behavior, and how can I troubleshoot it more deeply?
    If possible I would like to avoid it, but the next step will be to create a new vmnic, assign the same policy, then remove the old vmnic and recreate it.
    Regards

    From what you mentioned here's my thoughts.
    When the two VMs are on the same host, they can reach each other.  This is because they're locally switching in the VEM so this doesn't tell us much other than the VEM is working as expected.
    When you move one of the VMs to a different UCS ESX host, the path changes.    Let's assume you've moved one VM to a different host, within the UCS system.
    UCS-Blade1(Host-A) - VM1
    UCS-Blade2(Host-B) - VM2
    There are two path options from VM1 -> VM2:
    VM1 -> Blade1 Uplink -> Fabric Interconnect A -> Blade 2 Uplink -> VM2
    or
    VM1-> Blade1 Uplink -> Fabric Interconnect A -> Upstream Switch -> Fabric Interconnect B -> Blade 2 Uplink -> VM2
    For the two options, I've seen many instances where the FIRST option works fine but the second doesn't.  Why?  Well, as you can see, option 1 has a path from Host A to FI-A and back down to Host B.  In this path there's no northbound switching outside of UCS.  This would require both VMs to be pinned to host uplinks going to the same Fabric Interconnect.
    In the second option, the path goes from Host-A up to FI-A, then northbound to the upstream switch, then back down eventually to FI-B and then Host-B. When this path is taken and the two VMs can't reach each other, you have some problem with your upstream switches.  If both VMs reside in the same subnet, it's a Layer 2 problem.  If they're in different subnets, then it's a Layer 2 or 3 problem somewhere north of UCS.
    So knowing this - why did manual pinning on the N1K fix your problem?  What pinning does is force a VM to a particular uplink.  What likely happened in your case is that you pinned both VMs to host uplinks that both go to the same UCS Fabric Interconnect (avoiding having to be switched northbound).  Your original problem still exists, so you're not out of the woods yet.
    Ask yourself: why are just these two VMs affected?  Are they possibly the only VMs using a particular VLAN or subnet?
    An easy test to verify the pinning is to use the command below, where "x" is the module # of the host the VMs are running on:
    module vem x execute vemcmd show port-old
    I explain the command further in another post here -> https://supportforums.cisco.com/message/3717261#3717261.  In your case you'll be looking for the VM1 and VM2 LTLs, finding out which subgroup ID they use, and then which SG_ID belongs to which vmnic.
    I bet you'll find that the manual pinning "that works" takes the path from each host to the same FI. If that is the case, look northbound for your L2 problem.
    Regards,
    Robert
