Cisco Nexus 1000v VXLAN doesn't work

Hi to all,
I configured VXLAN by the book (Cisco Nexus 1000V VXLAN Configuration Guide, Release 4.2(1)SV1(5.1)), but there is a problem.
There are two ESXs with four VMs (two VMs on each ESX). Each VM has one NIC, and that NIC is assigned to a port-profile configured for access to the same VXLAN bridge-domain. There is connectivity between VMs on the same ESX, but no connectivity between VMs hosted on different ESXs. In other words, L2 connectivity works between VMs on the same ESX host but not between VMs on different hosts.
The Nexus 1000V VSM is installed on a Nexus 1010 appliance and manages two VEMs through L3 control interfaces.
VSM version is 4.2(1)SV1(5.1) and VEM feature level is 4.2(1)SV1(5.1).
The bridge-domain is VXLAN-5001 with segment ID 5001 and group address 239.1.1.1.
The port-profile for the VMkernel VXLAN interface is properly configured for access to VLAN 588 (the "transport" VLAN for VXLAN) and for capability vxlan.
VLAN 588 is allowed on all uplinks on both sides (Nexus and physical switch).
The port-profile for the VMs is properly configured for access to the bridge-domain.
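For reference, the relevant pieces of the configuration look roughly like this (profile names are shortened placeholders; everything else is as described above):
feature segmentation
bridge-domain VXLAN-5001
  segment id 5001
  group 239.1.1.1
port-profile type vethernet VMK-VXLAN
  vmware port-group
  switchport mode access
  switchport access vlan 588
  capability vxlan
  no shutdown
  state enabled
port-profile type vethernet VM-VXLAN-5001
  vmware port-group
  switchport mode access
  switchport access bridge-domain VXLAN-5001
  no shutdown
  state enabled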
I created a monitor session for VLAN 588 on the upstream switch (Cisco 6513 running IOS 12.2(18)SXF14) and didn't see any multicast, unicast, or other traffic. According to the documentation, I should first see an IGMP join, then multicast, and then unicast traffic between the two VMK interfaces.
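The monitor session and the checks on the Cat were along these lines (the destination interface is just an example):
monitor session 1 source vlan 588 both
monitor session 1 destination interface GigabitEthernet1/1
Cat6513# show ip igmp snooping groups vlan 588
Cat6513# show mac-address-table dynamic vlan 588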
Here is the MAC address table for bridge-domain VXLAN-5001:
Nexus1000V-VSM-1# sh mac address-table bridge-domain VXLAN-5001
Bridge-domain: VXLAN-5001
          MAC Address       Type    Age       Port            IP Address     Mod
--------------------------+-------+---------+---------------+---------------+---
          0050.56a3.0009    static  0         Veth6           0.0.0.0         3 
          0050.56a3.000a    static  0         Veth7           0.0.0.0         3 
          0050.56a3.0007    static  0         Veth4           0.0.0.0         4 
          0050.56a3.0008    static  0         Veth5           0.0.0.0         4 
Total MAC Addresses: 4
As you can see, there are no proper destination IP addresses (the IP Address column shows 0.0.0.0 for every entry).
Can somebody help me?

Good hint, but it seems that is not the problem...
Cat ports connecting VEMs support jumbo frames and their MTU is set to 9216B.
I saw that the MTU on the VEM Ethernet interfaces was set to 1500B, so I changed the uplink port-profile and set the MTU first to 1550B and then to 9000B (the maximum), but it still isn't working.
I'm not using vCloud Director, just VMware vSphere 4.1 (vCenter Server with VUM, vSphere Client, and two ESX hosts).
After a little research I found something strange... I set up an SVI on the Cat in VLAN 588 (the "transport" VLAN for VXLAN), and when I ping the VMkernel interface (the one with capability vxlan) with a packet size greater than 1500B and the DF bit set, I get no reply. My Cat ports and uplink port-profiles are configured for jumbo frames. Is it possible to change the MTU of the VMkernel interface?
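From what I can tell, the vmknic MTU on ESX 4.x can only be set when the interface is created, so the VXLAN vmknic would have to be removed and re-added with a larger MTU, something like this from the host console (a sketch; the IP, mask, and port-group name are examples):
esxcfg-vmknic -d "VMK-VXLAN"
esxcfg-vmknic -a -i 172.16.88.11 -n 255.255.255.0 -m 1600 "VMK-VXLAN"
VXLAN encapsulation adds roughly 50 bytes per frame, so the vmknic and everything upstream need at least 1550; 1600 leaves some headroom.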

Similar Messages

  • VSM and Cisco nexus 1000v

    Hi,
    We are planning to install the Cisco Nexus 1000v in our environment. Before we install it, we want to explore the Cisco Nexus 1000v a little bit.
    •  I know there are two elements to the Cisco 1k, the VEM and the VSM. Is the VSM required? Can we configure a VEM individually?
    •  How does the Nexus 1k integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
    •  In terms of alarming and reporting, do we need to get SNMP traps and gets from each individual VEM, or can we use the VSM for that? Or can we get Cisco Nexus 1000v alarming and reporting from VMware vCenter?
    •  Apart from using the Nexus 1010, what is the recommended hosting location for the VSM (same host as a VEM, a different VM, or a different physical server)?
    Foyez Ahammed

    Hi Foyez,
    Here is a brief on the Nexus1000v and I'll answer some of your questions in that:
    The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) the VSM and 2) the VEM.
    VSM - The Virtual Supervisor Module controls the entire N1K setup; it is where the configuration is done for the VEM modules, interfaces, security, monitoring, etc. The VSM is the component that interacts with the VC (vCenter).
    VEM - The Virtual Ethernet Modules are simply the modules, or virtual line cards, which provide the connectivity options and virtual ports for the VMs and other virtual interfaces. Each ESX host today can have only one VEM. The VEMs receive their configuration/programming from the VSM.
    If you are aware of other Cisco switching products like the Cat 6k switches, the N1K behaves the same way but in a software/virtual environment, where the VSM is the equivalent of a SUP and the VEMs are similar to the line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity as the 6k backplane would for communication between the modules and the SUP (the VSM in this case).
    *The N1K configuration is done only from the VSM and is visible in the VC. The port-profiles created on the VSM are pushed to the VC and have to be assigned to the virtual/physical ports from the VC.
    *You can run the VSM either on the Nexus 1010 as a Virtual Service Blade (VSB) or as a normal VM on any ESX/ESXi server. Running the VSM and a VEM on the same server is fully supported.
    You can refer to the following deployment guide for more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
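    For example, the VSM-to-vCenter connection mentioned above is configured on the VSM along these lines (the IP address and datacenter name are placeholders):
    svs connection vcenter
      protocol vmware-vim
      remote ip address 192.168.10.20
      vmware dvs datacenter-name DC1
      connect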
    Hope this answers your queries!
    ./Abhinav

  • Cisco Nexus 1000V InterCloud

    Need download link for Cisco Nexus 1000V InterCloud

    We had a similar issue with 5.2(1)SV3(1.3) and found this in the release notes:
    ERSPAN
    If the ERSPAN source and destination are in different subnets, and if the ERSPAN source is an L3 control VM kernel NIC attached to a Cisco Nexus 1000V VEM, you must enable proxy-ARP on the upstream switch.
    If you do not enable proxy-ARP on the upstream switch (or router, if there is no default gateway), ERSPAN packets are not sent to the destination.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/5_x/release_notes/b_Cisco_N1KV_VMware_521SV313_ReleaseNotes.html#concept_652D9BADC4B04C0997E7F6C29A2C8B1F
    After enabling 'ip proxy-arp' on the upstream SVI it started working properly.
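    For anyone hitting the same thing, the change on the upstream IOS SVI was just this (VLAN and addressing are examples):
    interface Vlan200
      ip address 192.0.2.1 255.255.255.0
      ip proxy-arp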

  • Cisco Nexus 1000v Virtual Switch for Hyper-V Availability

    Hi,
    Does anyone have any information on the availability of the Cisco Nexus 1000v virtual switch for Hyper-V? Is it available to download from Cisco yet? If not, when will it be released? Are there any beta programs, etc.?
    I can download the 1000v for VmWare but cannot find any downloads for the Hyper-V version.
    Microsoft Partner

    Any updates on the Cisco Nexus 1000v virtual switch for Hyper-V? I just checked the Cisco site; however, there is still only the download for VMware and no trace of any beta version. I also posted the same question at:
    http://blogs.technet.com/b/schadinio/archive/2012/06/09/windows-server-2012-hyper-v-extensible-switch-cisco-nexus-1000v.aspx
    "Hyper-V support isn't out yet. We are looking at a beta for Hyper-V starting at the end of February or the begining of March. "
    -Ian @ Cisco Community
    || MCITP: EA, VA, EMA, Lync SA, makes a killer sandwich. ||

  • Cisco Nexus 1000v on Hyper-v 2012 R2

    Dears;
    I have deployed the Cisco Nexus 1000v on Hyper-V 2012 R2 hosts and I'm in the phase of testing and exploring features. While doing this I removed the Nexus virtual switch (VEM) from a host; it disappeared from the host, but I couldn't use the uplink previously attached to the switch, as the host still saw it as attached to the Nexus 1000v. I tried to remove it several ways; finally the host became unusable and I had to set the host up again.
    The question here: there is no mention in the Cisco documents of how to uninstall or remove the VEM attached to a host. Can anyone help with this?
    Thanks
    Regards

    Zoning is generally a term used with fibre channel, but I think I understand what you mean.
    Microsoft Failover Clusters rely on shared storage.  So you would configure your storage so that it is accessible from all three nodes of the cluster.  Any LUN you want to be part of the cluster should be presented to all nodes.  With iSCSI,
    it is recommended to use two different IP subnets and configure MPIO.  The LUNs have to be formatted as NTFS volumes.  Run the cluster validation wizard once you think you have things configured correctly.  It will help you find any potential
    configuration issues.
    After you have run a cluster validation and there aren't any warnings left that you can't resolve, build the cluster.  The cluster will form with the available LUNs as storage to the cluster.  Configure the storage to be Cluster Shared Volumes
    for the VMs, and leave the witness as the witness. By default, the cluster will take the smallest LUN to be the witness disk. If you are just using the cluster for Hyper-V (recommended) you do not need to assign drive letters to any of the disks.
    You do not need, nor is it recommended to use, pass-through disks.  There are many downsides to using pass through disks, and maybe one benefit, and that one is very iffy.
    . : | : . : | : . tim

  • Cisco Nexus 1000v stops inheriting

    Guys,
    I have an issue with the Nexus 1000v: basically the trunk ports on the ESXi hosts stop inheriting from the main DATA-UP link port-profile, which means that not all VLANs get presented down a given trunk port; it's like it gets completely out of sync somehow. An example is below.
    THIS IS A PC CONFIG THAT'S NOT WORKING CORRECTLY
    show int trunk
    Po9        100,400-401,405-406,412,430,434,438-439,446,449-450,591,850
    sh run int po9
    interface port-channel9
      inherit port-profile DATA-UP
      switchport trunk allowed vlan add 438-439,446,449-450,591,850  (the system added this, not the user)
    THIS IS A PC CONFIG THAT IS WORKING CORRECTLY
    show int trunk
    Po2        100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
    sh run int po2
    interface port-channel2
        inherit port-profile DATA-UP
    I have no idea why this keeps happening. When I remove the manual static trunk configuration on Po9, everything is fine; a few days later it happens again. And it's not just Po9, there are at least 3 port-channels it affects.
    My DATA-UP link port-profile configuration looks like this, and all port-channels should reflect the allowed VLANs, but some are way out:
    port-profile type ethernet DATA-UP
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100,292,300,313,400-401,405-406,412,429-430,434,438-439,446,449-450,582,591,850
      channel-group auto mode on sub-group cdp
      no shutdown
      state enabled
    The upstream switches match the same VLANs allowed and the VLAN database is a mirror image between Nexus and Upstream switches.
    The Cisco Nexus version is 4.2.1
    Anyone seen this problem?
    Cheers

    Using vMotion you can perform the entire upgrade with no disruption to your virtual infrastructure. 
    If this is your first upgrade, I highly recommend you go through the upgrade guides in detail.
    There are two main guides.  One details the VSM and overall process, the other covers the VEM (ESX) side of the upgrade.  They're not very long guides, and should be easy to follow.
    1000v Upgrade Guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/upgrade/software/guide/n1000v_upgrade_software.html
    VEM Upgrade Guides:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/install/vem/guide/n1000v_vem_install.html
    In a nutshell the procedure looks like this:
    -Backup of VSM Config
    -Run pre-upgrade check script (which will identify any config issues & ensures validation of new version with old config)
    -Upgrade standby VSM
    -Perform switchover
    -Upgrade image on old active (current standby)
    -Upgrade VEM modules
    One decision you'll need to make is whether to use Update Manager for the VEM upgrades. If you don't have many hosts, the manual method is a nice way to maintain control over exactly what's being upgraded and when. It will allow you to migrate VMs off a host, upgrade it, and then continue in this manner for all remaining hosts. The alternative is Update Manager, which can be a little sticky if it runs into issues. This method will automatically put hosts in Maintenance Mode, migrate VMs off, and then upgrade each VEM one by one. It's a non-stop process, so there's a little less control from that perspective. My own preference: for any environment with 10 or fewer hosts I use the manual method; for more than that, let VUM do the work.
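    As a rough sketch, the VSM side of that procedure boils down to commands like these (filenames are examples; follow the upgrade guide for the exact sequence):
    n1000v# copy scp://user@server/n1000v-dk9.4.2.1.SV1.4a.bin bootflash:
    n1000v(config)# boot kickstart bootflash:n1000v-dk9-kickstart.4.2.1.SV1.4a.bin sup-2
    n1000v(config)# boot system bootflash:n1000v-dk9.4.2.1.SV1.4a.bin sup-2
    n1000v# copy running-config startup-config
    n1000v# reload module 2      (reloads the standby VSM on the new image)
    n1000v# system switchover    (then repeat the boot statements for sup-1)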
    Let me know if you have any other questions.
    Regards,
    Robert

  • Install Cisco Nexus 1000V without vCenter?

    Hi guys,
    Is it possible to install the Nexus 1000v without using vCenter?
    I have a VMware Enterprise Plus license, but I don't want to install vCenter.
    Regards,

    No. One of the features that the Enterprise Plus license provides is VMware and 3rd-party Distributed Virtual Switch functionality, and this requires vCenter to manage it.
    Regards,
    Robert

  • Cisco Nexus 1000V - DMZ - ARP

    Hi there,
    Thanks for reading.
    I have a VM (VM1) connected to a Nexus 1000V distributed switch. The 1000V has a connection to our DMZ (physically, an interface on our Cisco ASA 5520), which has 3 other VMs that are successfully serving up in the DMZ. The problem is that a SHOW ARP run on the ASA shows the other VMs' MAC addresses but not VM1's.
    The vSphere properties for all VMs (including VM1) participating in the DMZ are the same:
    Network label
    VLAN ID
    Port Group
    State - Link Up
    DirectPath I/O - Inactive "Direct Path I/O has been explicitly disabled for this port"
    The one major difference between VM1 and the others is that the others are multihomed and have a foot in our private network space. I don't think the absence of a private IP on VM1 is the source of the problem, though. All the VMs are recognized as directly connected to the ASA except VM1.
    Have you ever seen this kind of thing before?
    Thanks again for reading!
    Bob

    FYI: we solved this problem on the VM side. We removed the network object within VMware and recreated it. Once that delete/recreate was complete, I saw the VM1 MAC in the firewall.

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how works the integration between N1K and UCS Manager:
    First question:
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (VLAN 100), and I created a vEthernet port-profile for the VMs in switchport mode access (VLAN 100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs are). I also created VLAN 100 (the same as in the N1K).
    Second question: with the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: how does VLAN tagging work on the Fabric Interconnect and on the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    - Yes. Each ESX host with a VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network needs to bridge the VLAN between the two physical NICs.
    Second question: with the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    - The N1K port-profiles are switchport access, making the traffic untagged. That corresponds to the native VLAN in UCS. If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
    Third question: how does VLAN tagging work on the Fabric Interconnect and on the N1K?
    - All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively (untagged). On the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'. For your Ethernet profiles, you will want 'switchport mode trunk'. Use an unused VLAN as the native VLAN. All production VLANs will be passed from the N1K to UCS as tagged VLANs.
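    The last point about the native VLAN translates to an uplink profile roughly like this (VLAN 999 stands in for an unused VLAN; on the UCS vNIC, mark the same VLAN native and carry VLAN 100 tagged):
    port-profile type ethernet UCS-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 999
      switchport trunk allowed vlan 100,999
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled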
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk

  • Ask the Expert: Configuration, Design, and Troubleshooting of Cisco Nexus 1000

    With Louis Watta
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about design, configuration, and troubleshooting of Cisco Nexus 1000V Series Switches operating inside VMware ESXi and Hyper-V with Cisco expert Louis Watta. Cisco Nexus 1000V Series Switches deliver highly secure, multitenant services by adding virtualization intelligence to the data center network. With Cisco Nexus 1000V Series Switches, you can have a consistent networking feature set and provisioning process all the way from the virtual machine access layer to the core of the data center network infrastructure.
    This is a continuation of the live Webcast.
    Louis Watta is a technical leader in the services organization for Cisco. Watta's primary background is in data center technologies: servers (UNIX, Windows, Linux), switches (MDS, Brocade), storage arrays (EMC, NetApp, HP), network switches (Cisco Catalyst and Cisco Nexus), and enterprise service hypervisors (VMware ESX, Hyper-V, KVM, XEN). As a technical leader in Technical Services, Louis currently supports beta and early field trials (EFTs) on new Cisco software and hardware. He has more than 15 years of experience in a wide variety of data center applications and is interested in data center technologies oriented toward data center virtualization and orchestration. Prior to Cisco, Louis was a system administrator for GTE Government Systems. He has a bachelor of science degree in computer science from North Carolina State University.
    Remember to use the rating system to let Louis know if you have received an adequate response.
    Louis might not be able to answer each question because of the volume expected during this event. Remember that you can continue the conversation on the Data Center community Unified Computing shortly after the event.
    This event lasts through Friday, June 14, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
    Webcast related links:
    Slides
    FAQ
    Webcast Video Recording

    Right now there are only a few features that are not supported in the N1Kv on Hyper-V.
    They are VXLAN and QoS Fair Weighted Queuing. We are currently demoing VXLAN functionality at the Microsoft TechEd Conference this week in New Orleans, so VXLAN support should be coming soon. I can't give you a specific timeline.
    For Fair Weighted Queuing I'm not sure. In the VMware world we take advantage of the NETIOC infrastructure. In the MS world there is no NETIOC equivalent that we can use to create a similar feature.
    Code base parity (as in VMware and Hyper-V VSMs running NXOS 5.x) will happen with the next major N1KV release for ESX.
    Let me know if that doesn't answer your question.
    thanks
    louis

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with primary focus on data center technologies, such as Cisco UCS, the Cisco Nexus 1000v Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications, and he is working on a Cisco data center CCIE® certification while also pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology as specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all 3 routers and adjacencies will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have any forwarding issues, but because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hashing chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer-link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when the traffic could be routed directly.
    For that topology, multicast traffic might see complete traffic loss, because when a PIM router is connected to Cisco Nexus 5500 platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently between the Nexus 5000 and Nexus 7000.
    Similarity: for both products, loop avoidance is possible due to the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something set in the Ethernet packet that can be identified externally. The VSL bit is set on the port ASIC for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This bit does not traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer has already forwarded this packet out its vPC-enabled port-channels towards the end device, so the egress vPC interface's port ASIC will filter the packet on egress.
    Differences: in the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, as compared to the Nexus 7000 and Nexus 3000.
    It still does loop prevention, but the L3-to-L2 lookup is different between the Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000, then separation of the L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000 with a pair of Nexus 7000 over DCI, or two pairs of Nexus 7000 over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as described in the presentation above.
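    For readers following along, the baseline vPC pieces discussed above look roughly like this on each Nexus 5000 peer (domain, addresses, and port-channel numbers are examples):
    feature vpc
    vpc domain 10
      peer-keepalive destination 10.1.1.2 source 10.1.1.1
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    interface port-channel20
      switchport mode trunk
      vpc 20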
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Macvtap over Cisco Nexus ...

    Hi everybody!
    First of all, my apologies for my English!
    I have a standard server running Linux (CentOS 6.5) with KVM to install and configure virtual machines. I tried to use the macvtap driver to set up the virtual network interfaces. I am not using Cisco UCS servers or Nexus 1000v switches.
    The physical Ethernet is connected to a Cisco Nexus 2000 (behind a Cisco Nexus 5548UP). By default a standard switch does not work with the macvtap configuration, because the switch has to support "reflective relay" (also known as "hairpinning") for this to work.
    Is it possible to configure the Cisco Nexus 5548UP/2000 to do this? Can capabilities such as Adapter-FEX, vEthernet, or VM-FEX help me?
    Do you know of any docs about this?
    Thank you all.

    Hi Tony,
    You do not have to worry about PFC. Priority Flow Control is related to FCoE and the DCBX protocol. Since iSCSI is TCP traffic, you have to make sure that you have proper QoS for the IP subnet or VLAN that carries your iSCSI traffic. As iSCSI rides over TCP, the main issue I have seen is not having a valid QoS config for this traffic. There is no single best practice, but make sure you put it in a class other than best effort or scavenger.
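    As a rough example of that approach on a Nexus 5000, you could classify the iSCSI subnet into its own QoS group (the subnet, names, and group number are examples):
    ip access-list iSCSI-ACL
      permit ip 192.168.50.0/24 any
    class-map type qos match-any iSCSI-CLASS
      match access-group name iSCSI-ACL
    policy-map type qos iSCSI-IN
      class iSCSI-CLASS
        set qos-group 2
    system qos
      service-policy type qos input iSCSI-IN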
    Hope this helps.
    Cheers,
    -amit singh

  • Can a Nexus 1000v be configured to NOT do local switching in an ESX host?

    Before the big YES of "use an external Nexus switch and use VN-Tag": the question is about a 3120 in a blade chassis that connects to ESX hosts with the 1000v installed. So the first hop outside the ESX host is not a Nexus box.
    I'm looking for whether this is possible, if so how, and if not, where that might be documented. I have a client whose security policy prohibits switching (yes, even on the same VLAN) within a host (in this case a blade server). Oh, and there is an insistence on using 3120s inside the blade chassis.
    This has to be the strangest request I have had in a while.
    Any data would be GREATLY appreciated!

    Thanks for the follow up.
    So by private VLANs, are you referring to "PVLAN":
    "PVLANs: PVLANs are a new feature available with the VMware vDS and the Cisco Nexus
    1000V Series. PVLANs provide a simple mechanism for isolating virtual machines in the
    same VLAN from each other. The VMware vDS implements PVLAN enforcement at the
    destination host. The Cisco Nexus 1000V Series supports a highly efficient enforcement
    mechanism that filters packets at the source rather than at the destination, helping ensure
    that no unwanted traffic traverses the physical network and so increasing the network
    bandwidth available to other virtual machines"
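    In practice, PVLAN isolation on the 1000V looks something like this (VLAN numbers are examples):
    feature private-vlan
    vlan 100
      private-vlan primary
      private-vlan association 101
    vlan 101
      private-vlan isolated
    port-profile type vethernet VM-ISOLATED
      vmware port-group
      switchport mode private-vlan host
      switchport private-vlan host-association 100 101
      no shutdown
      state enabled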

  • Nexus 1000v and vcenter domain admin account

    I changed out the domain admin account on our domain that the vCenter service runs as, and now it's using a different service account. I am wondering if I need to update anything on the Nexus 1000v switch side, between the 1000v and vCenter.

    Hi Dan,
    You are on the right track. However, you can perform some of these functions "online".
    First you want to ensure that you are running, at a minimum, Nexus 1000v SV1(4a), as ESXi 5.0 support only began with this release. SV1(4a) provides support for both ESXi 5.0 and ESX/ESXi 4.1.
    Then you can follow the procedure documented here:
    Upgrading from VMware Release 4.0/4.1 to VMware Release 5.0.0
    This document walks you through upgrading your ESX infrastructure to VMware Release 5.0.0 when Cisco Nexus 1000V is installed. It is required to be completed in the following order:
    1. Upgrade the VSMs and VEMs to Release 4.2(1)SV1(4a).
    2. Upgrade the VMware vCenter Server to VMware Release 5.0.0.
    3. Upgrade the VMware Update Manager to VMware Release 5.0.0.
    4. Upgrade your ESX hosts to VMware Release 5.0.0 with a custom ESXi image that includes the VEM bits.
    Upgrading the ESX/ESXi hosts consists of the following procedures:
    –Upgrading the vCenter Server
    –Upgrading the vCenter Update Manager
    –Augmenting the Customized ISO
    –Upgrading the ESXi Hosts
    There is also a 3-part video highlighting the procedure to perform the last two steps above (customizing the ISO and upgrading the ESXi hosts):
    Video: Upgrading the VEM to VMware ESXi Release 5.0.0
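    After each host comes back up, you can verify the VEM from the host console and the module version from the VSM, for example:
    ~ # vem status -v
    ~ # vemcmd show card
    n1000v# show module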
    Hope that helps you with your upgrade.
    Thanks,
    Michael

  • VM-FEX and Nexus 1000v relation

    Hi
    I am new to the virtualization world and I need to know the relation between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.
    Regards

    Ahmed,
    Sorry for taking this long to get back to you.
    The Nexus 1000v is a virtualized switch, and as such any traffic entering or leaving a VM first has to pass through the virtualization layer. This introduces a small delay that for some applications (VMs) can be too much.
    With VM-FEX you gain the option to bypass the virtualization layer, for example with "pass-through" mode, where the vmnics are actually assigned to and managed by the OS, minimizing the delay and making the VMs look as if they were directly attached. This also offloads CPU work from the host, optimizing host/VM performance.
    The need for one or the other will be defined, as always, by the needs of your organization/business.
    Benefits of VM-FEX (from cisco.com):
    Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
    Improved network security: Contains VLAN proliferation
    Optimized network utilization: Reduces broadcast domains
    Enhanced application performance: Offloads virtual machine switching from host CPU to parent switch application-specific integrated circuits (ASICs)
    Benefits of Nexus 1000v here on another post from Rob Burns:
    https://supportforums.cisco.com/thread/2087541 
    https://communities.vmware.com/thread/316542?tstart=0
    I hope that helps 
    -Kenny

Maybe you are looking for

  • Using CopyValue function with error

    Hi All, I have a requirement to send the three values coming in the one source field to three different target fields. E.g: Source field is "src1" and having the values 1,2 ,3 These values should be sent to the target fields such as first target fiel

  • How do I rename a Mail box

    Since I downloaded Mountain Lion 9.8.2 my Apple Mail box has been renamed with my personal password for all to see on my desktop.  Not a good idea you might think but "rename mailbox is greyed out".  Why and HELP!!!!!!!!

  • Reg : Doubts on Knowledge Modules

    hi all, Please give brief idea about knowledge modules. please dont give links . KNOWLEDGE MODULES REVERSE ENGINEERING LKM CKM IKM JKM SKM Thanks a lot in advance , -Chinnu.

  • IDOC INVOIC02 Error

    Hello Friends, Inbound error code is 51. error details as in application log. Error segment  :  E1EDP01 Error desc        : Field format is incorrect, Required field MENGE is missing in segment E1EDP01 When i checked this field in IDOC segment it as

  • Confusion on trojan/virus download

    I was going over to Hotmail and a pop up came up on my iMac stating that a possible trojan was detected. Having my guard down -- being on an iMac -- I hit "download," which when finished immediately prompted five more downloads to start. I immediatel