FCoE and multiple Nexus 5000s and a 7000 core

Hi
I have a customer who is looking at four Nexus 5020s to start with and more in the future uplinked to a Nexus 7000 core.
Am I right in thinking that whilst data traffic will be able to reach hosts connected to a different Nexus 5020 via the 7000 core, FCoE traffic will not?
If so, what is the recommended way of rolling out pods of 5020s for VMware servers with converged network adapters so that they can all access the SAN?
Regards
Pat

Don't use corecenter for fan speed control.  Instead, use the BIOS:
1) Open Corecenter from the administrator account or a user account that has admin privileges.  Click on the top center logo, which should open the fan speed control window.  There should be two items, CoolnQuiet and User/Manual or something like that.  Put it in manual and move the slider all the way to the right.  Ignore the CoolnQuiet; it's misnamed here.  MSI's Corecenter does not control CoolnQuiet; what MSI is calling CoolnQuiet is just fan speed control.  Anyway, put it in manual.  Close Corecenter and you should never have to open it again unless you want to monitor fan RPMs.
2) Next, reboot and enter the BIOS.  Go to the H/W Monitor settings.  Turn on Smart CPU Fan Speed.  Set to 40 C +/- 1.  This will allow the motherboard to control the fan speed.  On the other hand, if you want your fan always at maximum, then disable Smart Fan Speed control.  Don't worry about the Smart NB Fan Speed; leave it disabled.
Also, keep in mind that when you first open Corecenter, the fan RPMs reported immediately are not correct.  It takes a few seconds to get the readings.  So wait until Corecenter minimizes itself to the system tray, then open it from there, and you will see the correct fan RPMs.

Similar Messages

  • SAN Port-Channel between Nexus 5000 and Brocade 5100

    I have a Nexus 5000 running in NPV mode connected to a Brocade 5100 FC switch using two FC ports on a native FC module in the Nexus 5000. I would like to configure these two physical links as one logical link using a SAN Port-Channel/ISL-Trunk. An ISL trunking license is already installed on the Brocade 5100. The Nexus 5000 is running NX-OS 4.2(1), the Brocade 5100 Fabric OS 6.20. Does anybody know if this is a supported configuration? If so, how can this be configured on the Nexus 5000 and the Brocade 5100? Thank you in advance for any comments.
    Best regards,
    Florian
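    Assuming the feature combination is supported by the NX-OS and Fabric OS releases in use (something the release notes should confirm), the Nexus 5000 side of such a SAN port-channel might be sketched roughly as follows. All interface and channel numbers here are hypothetical:

    ```
    ! Sketch only: NPV-mode Nexus 5000 bundling two NP uplinks toward the Brocade.
    ! Verify F-port trunking/channeling support for your exact releases first.
    feature npv
    interface san-port-channel 1
      channel mode active
    interface fc2/1
      switchport mode NP
      channel-group 1 force
      no shutdown
    interface fc2/2
      switchport mode NP
      channel-group 1 force
      no shutdown
    ```

    The Brocade side would need the matching F-port trunking configuration per the Fabric OS documentation.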

    I tried that and I could see the status light on the ports come on, but it still showed not connected.
    I configured another switch (a 3560) with the same config and the same fiber layout, and I got the connection up on it. I just can't seem to get it up on the 4506. Could it be something with the supervisor? Could it be wanting to use the 10Gb port instead of the 1Gb ports?

  • New 13" Macbook Pro trying to connect to a Panasonic VIERA TC-L42U30 as second monitor. I've used multiple hdmi-thunderbolt/mini displayport adapters and multiple hdmi cables and still no success. The Macbook does not sense the second monitor (TV). Help?!

    I've been a Mac since 2008, so I know my way around the system pretty well. This issue, however, has me stumped. I had an iMac until now and only now am I experiencing some difficulty with my new MacBook Pro. The model I have is the newest 13" Macbook Pro model and I'm trying to connect a Panasonic VIERA TC-L42U30 42" HDTV as a second monitor via the Thunderbolt port.
    It worked the first two times and hasn't worked since, after 10-15 attempts with different configurations: turning things on and off, restarting the Mac, unplugging the cables, adapters, and TV, resetting the PRAM, etc. I've used multiple HDMI-to-Thunderbolt/Mini DisplayPort adapters and multiple HDMI cables and still no success. No matter what I do, the MacBook does not sense the TV as a second monitor anymore.
    I took the MacBook Pro to the Apple Store, and their "genius" there had it working fine with a DVI connection to a regular monitor. The Panasonic TV I have has HDMI connections and one VGA connection which does not support HD, but no DVI option. I want an HD connection to mirror or extend my MacBook Pro screen. At the Apple Store, they didn't have a Thunderbolt/Mini DisplayPort to HDMI adapter, so he could not try that out for me.
    Anyone else have this configuration or another similar one with a Panasonic HDTV?
    Ideas? Suggestions? Anything?! Help!!
    P.S. I'm running Mountain Lion, if that wasn't already obvious. Everything is up to date in my App Store as well.
    Thanks!

    Hi There,
    I have had the exact same issue but with a projector.
    The issue lies with Mountain Lion 10.8.2.
    I tried many combinations with no luck getting HDMI working.
    Took my Mac into the Apple Store and came to the conclusion it was the software, so I asked them to install 10.8 onto it (this is destructive, so a backup is a must).
    Brought my MacBook home and voila, now displaying through my projector.
    There is a small graphics update after 10.8.1 which seems to be the cause.
    Hope this helps.
    Thanks.

  • Nexus 5000 and MDS 9509 FC

    Greetings Everyone,
    I have been beating my head on the desk for the last few days trying to get an FC connection established between an NX5010 and an MDS9509.  The 9509 is running the legacy storage systems, but we are moving to FCoE on the Nexus.  I am not getting link on the fiber between the two devices.  I have checked and verified the MM cable, and both SFP units have been swapped out.
    I keep getting the following on the MDS when I bring up the interface.
    2011 Apr 22 20:39:42 MDS9509_01 %PORT-5-IF_DOWN_INACTIVE: %$VSAN 1%$ Interface fc1/9 is down (Inactive)
    fc1/9 is down (Inactive)
        Port description is Connection to Nexus 5kSW01 [FC2/1]
        Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
        Port WWN is xxxxxxxxxxxxxxx
        Admin port mode is auto, trunk mode is on
    On the NX 5010 I have
    fc2/1 is down (Link failure or not-connected)
        Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
        Port WWN is xxxxxxxxxxxxxxxxx
        Admin port mode is auto, trunk mode is on
        snmp link state traps are enabled
        Port vsan is 1
        Receive data field Size is 2112
    I don't have anything really configured; I am just trying to get a link to establish.  If someone has a sample configuration for this or a suggestion, I would certainly appreciate it.
    Cheers
    Mike

    Yes, the trunk interfaces must be assigned into active VSAN, even though they are trunk (TE) ports and might not even be trunking that VSAN.
    There is no concept of Spanning Tree in Fibre Channel. Don't confuse Ethernet with FC; a lot of things are different. VSANs are similar to VLANs in that they allow you to segregate traffic, and VSAN trunking also has the same goal as VLAN trunking, but that's where the similarity with Ethernet ends.
    The loop prevention is achieved through the FSPF (FC Shortest Path First) algorithm. Think of it as OSPF. Each VSAN runs its own FSPF process. Each FC switch has a unique Domain ID (a value from 1 to 254) assigned by the FC Domain process in each VSAN. The FSPF process is automatically on; it detects other switches and determines a best path between domain IDs based on FSPF interface costs (per VSAN). So it's actually more like IP routing than Ethernet STP. FC addresses are 3-byte FCID addresses where the first byte is the domain ID of the switch that the node (storage or HBA) is connected to. You are probably familiar with WWNs, but WWNs are not actually used in fibre channel frame headers; instead they are mapped to FCIDs.
    FC/SANOS is a fun topic, consider becoming CCIE Storage expert (http://www.iementor.com/ccie-storage-workbook-walkthrough-bundle-p-338.html)
    I'm not sure why it took 2 minutes for the interface to come up; it should have been instantaneous. Are you saying the actual interfaces didn't go up for two minutes after the changes were made?
    There's no vPC concept in fibre channel. There is a regular port channel that Cisco has on MDS/NX-OS FC switches. Yes, you can configure a port channel between NX-OS on the N5K and SAN-OS/NX-OS on the MDS9K. It must be configured between two switches and can't span multiple switches.
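    As a rough illustration of the regular FC port channel described above (interface and channel numbers are hypothetical; check the configuration guide for your exact NX-OS/SAN-OS releases), each side would look something like:

    ```
    ! Nexus 5000 side (native FC ports): sketch only
    interface san-port-channel 10
      channel mode active
    interface fc2/1
      switchport mode E
      switchport trunk mode on
      channel-group 10 force
      no shutdown
    ! MDS 9500 side: sketch only
    interface port-channel 10
      channel mode active
    interface fc1/9
      switchport mode E
      switchport trunk mode on
      channel-group 10 force
      no shutdown
    ```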

  • How many Nexus 7000, Nexus 5000 and 2000 can a DCNM deployment support?

    Hi Everyone,
    Good Day! I would like to inquire how many Nexus boxes can a single DCNM deployment support given that we have the recommended server specifications?
    Thanks and Regards,
    Albert

    Hi Lucien,
    I have 2 pairs of Nexus switches in my setup, as follows:
    The first pair connection as below
    ==========================
    Nexus 1 (configured with vpc 1)----- 2 connections------ 6500 catalyst sw1(po 9)
    Nexus 2 (configured with vpc 1) ----- 2 connections------ 6500 catalyst sw2 (po9)
    po2 on nexus pair for vpc peer link
    Spanning tree on Nexus 1
    po1     root fwd1      p2p peer stp
    po2    desg fwd 1   vpc peer link
    Spanning tree on Nexus 2
    po1      altn blk1     p2p peer stp
    po2      root fwd1   vpc peer link
    The second pair connection
    =====================
    Nexus3 (configured with vpc 20 ) ------ 1 connection ------- 6500 catalyst sw1 (po20)
                  (configured with vpc 30) ------- 1 connection ------- 6500 catalyst sw2 (po30)
    Nexus4 (configured with vpc 20) ----- 1 connection ---- 6500 catalyst sw1 (po20)(stp guard root)
                  (configured with vpc 30) ----- 1 connection ----6500 catalyst sw2 (po30)(stp guard root)
    po1 on nexus pair for vpc peer link
    Spanning tree on Nexus 3
    po1      desg fwd1        vpc peer link
    po20    root fwd1           p2p peer stp
    po30    altn blk1            p2p peer stp
    Spanning tree on Nexus 4
    po1      root fwd1           vpc peer link
    po20    root fwd 1          p2p peer stp
    po30    altn blk1               p2p peer stp
    Problem Observed :  High Ping response
    Source server on 1st pair of switches  ; Destination server on 2nd pair of switches
    Ping response from 1st pair of switches to destination server : normal (between 1 to 3 ms)
    Ping response from 2nd pair of switches to source server : jumping from 3 ms to 100+ ms.
    There are no errors or packet drops on any of the above ports; I cannot understand why the ping response is high for connections from the second pair.

  • VPC / Cisco ACE and the Nexus 2K and 5K

    Hi all,
    So we have a test environment that looks like the following. We have two 5Ks, switch 1 and switch 2. Switch 1 has two 10Gb connections downstream to a 2K, and switch 2 has two 10Gb connections downstream to the other 2K. We have a few servers that are multi-homed with LACP and vPC via the 2Ks and it works a treat.
    We have our Cisco ACE01 with ports 1 and 2 going to one of the 2Ks and ports 3 and 4 going to the other 2K; ACE02 has ports 1 and 2 going to one of the 2Ks and ports 3 and 4 going to the other 2K. If I enable vPC and non-LACP-based EtherChannel, I cannot get the ACEs talking to each other, but looking at the vPC status it's all healthy and up.
    Has anyone managed to multi-home the ACE between two 2Ks with vPC successfully?
    If I disable the links so each ACE only has links upstream in a traditional port channel and not cross-connected, the ACEs can see each other with no issues.
    Cheers

    Doh... so we had a cable patching issue in the end. Let this be a lesson to all networking chaps: always check the basics first! Now that we have patched the cables as per the design, the vPC has been established and works.
    Now that vPC is working, we are simulating link failures. When we restore a shut-down physical port within the port-channel/vPC that sits between the 2K and the ACE (simulating a port failure), the ACEs lose sight of each other for about 10 seconds, causing a short outage until the port is up/up. The logs on the ACE show 'the Peer x.x.x.x is not reachable. Error: Heartbeat stopped. No alternate interface configured', but the VLAN for the FT interface is carried over all four ACE NICs that are multi-homed to the two 2Ks... very strange; I would not expect this. It's as if the MAC addresses for the FT interface are waiting to be timed out on the 2K until they are switched onto another interface within the port channel and vPC.
    Anyone seen this before?

  • 10.6.4 and Nikon Coolscan 5000 and Fuji ScanSnapS1500M

    Both scanners stopped working after the update. Attempted to re-install the software but no luck.
    Fuji scanner: Software starts but scanner is not recognized by the software
    Nikon scanner: Nikon Scan crashes repeatedly on start (i.e. Won't open).
    Hopefully ScanSnap will update their software soon; however, I doubt Nikon will. If anyone has a way to make them work again, it would be much appreciated.
    Thank you

    Take a look at the Nikon Coolscan User's Group Discussion
    http://www.flickr.com/groups/74221125@N00/discuss/72157624196874277
    Two options that you might find useful are Silverfast http://www.silverfast.com/show/scanners-nikon/en.html and VueScan http://www.hamrick.com/
    Silverfast will handle dust removal.
    Silverfast is somewhat expensive, and VueScan is under $100.

  • Mail and Multiple User Accounts and Email Accounts

    Hi there. I want to be able to have two separate POP email accounts, one for each of the two user accounts on my Mac mini, but what is actually happening is that the primary mail account opens up within both user accounts, and if you email the secondary account it arrives in the inbox for the primary one. The two POP accounts have the same ISP (NTL) and incoming mail server but different user names. I'd be very grateful for any suggested solutions!
    Thanks
    Mac Mini   Mac OS X (10.3.9)  

    When you check Mail preferences and select accounts, you should show only one account for each user. If two accounts show for both, the upper account is considered the primary, but all mail from all listed accounts will be available.
    If you show two accounts, delete the account not belonging to the logged in person.

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
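    The description above maps onto a minimal vPC configuration, sketched here for one of the two peers with hypothetical addresses and port numbers (mirror it on the other peer):

    ```
    ! Minimal vPC sketch (hypothetical values)
    feature lacp
    feature vpc
    vpc domain 1
      peer-keepalive destination 192.0.2.2 source 192.0.2.1
    interface port-channel 10
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link
    interface port-channel 20
      switchport mode trunk
      vpc 20
    ```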
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. Nimit is also working on a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: To establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology, as specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all 3 routers, and adjacency will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for the data plane we have two types of traffic: unicast and multicast.
    The unicast traffic will not have any forwarding issues, but because the Layer 3 ECMP and the port channel run independent hash calculations, there is a possibility that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port channel hashing chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer-link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when the traffic could be routed directly.
    For that topology, multicast traffic might see complete traffic loss due to the fact that when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 and Nexus 7000.
    Similarity: For both products, loop avoidance is possible due to the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something that is set in the ethernet packet that can be identified. The VSL bit is set on the port asic for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that would traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea being that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer would have already forwarded this packet out the vPC-enabled port channels toward the end device, so the egress vPC interface's port ASIC will filter the packet on egress.
    Differences:  In Nexus 5000 when it has to do L3-to-L2 lookup for forwarding traffic, the VSL bit is cleared and so the traffic is not dropped as compared to Nexus 7000 and Nexus 3000.
    It still does loop prevention but the L3-to-L2 lookup is different in Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: If both pairs are Nexus 5000s, then separation of L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000s with a pair of Nexus 7000s over DCI, or two pairs of Nexus 7000s over DCI. If Nexus 7000s are used, then separate L3 and L2 links are required for sure, as mentioned in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Reg Nexus 5000 + 7000 software (Shellshock -bug )

    Hi people.
    Regarding this bug: Shellshock. What is the recommended software upgrade for the Nexus 5000 & 7000?
    It is important that vPC and FCoE still work after an upgrade.
    Need recommendation for following devices :
    Nexus 5000
    https://tools.cisco.com/bugsearch/bug/CSCur05017
    All current versions of NX-OS on this platform are affected unless
    otherwise stated. This bug will be updated with detailed affected and
    fixed software versions once fixed software is available.
    Exposure is not configuration dependent.
    Authentication is required to exploit this vulnerability. 
    Nexus 7000
    https://tools.cisco.com/bugsearch/bug/CSCuq98748
    All current versions of NX-OS on this platform are affected unless
    otherwise stated.
    Exposure is not configuration dependent.
    Authentication is required to exploit this vulnerability.
    This bug is fixed in NX-OS versions specified below:
    5.2(9a)
    6.1(5a)
    6.2(8b)
    6.2(10) and above
    Is there anyone that has some information on this ?
    Many thanx in advance,

    Hello.
    Let me look into this for you. Do you have an existing support contract or SmartNet for these Nexus 5000 and 7000 switches, by the way?
    Let me know if you have other concerns as well or e-mail ([email protected]) me directly. 
    Kind regards. 

  • VN-Tag with Nexus 1000v and Blades

    Hi folks,
    A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag.  Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000.  I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP c-Class blades?  My guess is NO, since Cisco just kicked HP to the curb.  But if that's the case, what are my options?  Pass-through switches?  (ugh!)
    Previous thread:
    https://supportforums.cisco.com/message/469303#469303

    wondering the same...

  • Nexus 5000 as NTP client

    We run 6509 core routers as NTP servers to other IOS routers/switches & servers of several OS flavours.
    All good.
    Recently added some Nexus 5000s and cannot get them to lock.
    No firewalls or ACLs in the path
    6509 (1 of 4) state:
    LNPSQ01CORR01>sh ntp ass
          address         ref clock     st  when  poll reach  delay  offset    disp
    + 10.0.1.2         131.188.3.220     2   223  1024  377     0.5   -6.23     0.7
    +~130.149.17.21    .PPS.             1   885  1024  377    33.7   -0.26     0.8
    *~138.96.64.10     .GPS.             1   680  1024  377    22.7   -2.15     1.0
    +~129.6.15.29      .ACTS.            1   720  1024  377    84.9   -3.37     0.6
    +~129.6.15.28      .ACTS.            1   855  1024  377    84.8   -3.30     2.3
    * master (synced), # master (unsynced), + selected, - candidate, ~ configured
    Nexus state:
    BL01R01B10SRVS01# sh ntp peer-status
    Total peers : 4
    * - selected for sync, + -  peer mode(active),
    - - peer mode(passive), = - polled in client mode
        remote               local              st  poll  reach   delay
    =10.0.1.1               10.0.201.11            16   64       0   0.00000
    =10.0.1.2               10.0.201.11            16   64       0   0.00000
    =10.0.1.3               10.0.201.11            16   64       0   0.00000
    =10.0.1.4               10.0.201.11            16   64       0   0.00000
    Nexus config:
    ntp distribute
    ntp server 10.0.1.1
    ntp server 10.0.1.2
    ntp server 10.0.1.3
    ntp server 10.0.1.4
    ntp source 10.0.201.11
    ntp commit
    interface mgmt0
      ip address 10.0.201.11/24
    vrf context management
      ip route 0.0.0.0/0 10.0.201.254
    Reachability to the NTP source...
    BL01R01B10SRVS01# ping 10.0.1.1 vrf management source 10.0.201.11
    PING 10.0.1.1 (10.0.1.1) from 10.0.201.11: 56 data bytes
    64 bytes from 10.0.1.1: icmp_seq=0 ttl=253 time=3.487 ms
    64 bytes from 10.0.1.1: icmp_seq=1 ttl=253 time=4.02 ms
    64 bytes from 10.0.1.1: icmp_seq=2 ttl=253 time=3.959 ms
    64 bytes from 10.0.1.1: icmp_seq=3 ttl=253 time=4.053 ms
    64 bytes from 10.0.1.1: icmp_seq=4 ttl=253 time=4.093 ms
    --- 10.0.1.1 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 3.487/3.922/4.093 ms
    BL01R01B10SRVS01#
    Are we missing some NTP or management VRF setup in the Nexus 5Ks?
    Thanks
    Rob Spain
    UK

    I have multiple 5020s, 5548s, and 5596s, and they all experience this same problem. Mind you, I run strictly Layer 2; I don't even have feature interface-vlan enabled. I tried "ntp server X.X.X.X use-vrf management" as well as "clock protocol ntp". These didn't help.
    I was told by TAC that there is a bug (sorry I do not have the ID), but basically NTP will not work over the management VRF. The only way I got NTP to work, was by enabling the feature interface-vlan, and adding a vlan interface with an IP and retrieving NTP through this interface. 
    I upgraded to 5.2(1) in hopes that this would fix the issue, but it did not.
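    For reference, the SVI-based workaround described above amounts to something like this sketch (VLAN number, addresses, and NTP server IP are all hypothetical):

    ```
    ! Sketch of the in-band NTP workaround (hypothetical values)
    feature interface-vlan
    vlan 10
    interface Vlan10
      ip address 10.0.10.11/24
      no shutdown
    ntp server 10.0.1.1
    ```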

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions; I hope you can help in getting some answers.
    The diagram below is the configuration we are looking to deploy. It is designed that way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
    Our blades inserted on the UCS chassis  have INTEL dual port cards, so they do not support full failover.
    Questions I have are.
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for the keepalive peer monitoring, so what is going to happen if the vPC breaks due to:
         - one of the 6500 goes down
              - STP?
              - What is going to happen with the EtherChannels on the remaining 6500?
         - the Management interface goes down for any other reason
              - which one is going to be the primary NEXUS?
    Below is the list of devices involved and the configuration for the Nexus 5000 and 6500.
    Any help is appreciated.
    Devices
    ·         2 Cisco Catalyst 6506s with two WS-SUP720-3B each (no VSS)
    ·         2 Cisco Nexus 5010
    ·         2 Cisco UCS 6120xp
    ·         2 UCS Chassis
         -    4  Cisco  B200-M1 blades (2 each chassis)
              - Dual 10Gb Intel card (1 per blade)
    vPC Configuration on Nexus 5000
    TACSWN01
    TACSWN02
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.10
    role priority 10
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    int ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts interfaces on Po50
    int port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as peer link for vPC
    int ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates interfaces to Po51 connected to UCS6120xp-A
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    int ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates interfaces to Po52 connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    int ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates interfaces to Po61 connected to Cat6506-01
    int port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 to Po61
    int ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates interfaces to Po62 connected to Cat6506-02
    int port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 to Po62
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.9
    role priority 20
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    int ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts interfaces on Po50
    int port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as peer link for vPC
    int ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates interfaces to Po51 connected to UCS6120xp-A
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    int ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates interfaces to Po52 connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    int ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates interfaces to Po61 connected to Cat6506-01
    int port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 to Po61
    int ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates interfaces to Po62 connected to Cat6506-02
    int port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 to Po62
    vPC Verification
    show vpc consistency-parameters
    !--- show compatibility parameters
    show feature
    !--- Use it to verify that vpc and lacp features are enabled.
    show vpc brief
    !--- Displays information about vPC Domain
    Etherchannel configuration on TAC 6500s
    TACSWC01
    TACSWC02
    interface range GigabitEthernet2/38 - 43
    description TACSWN01 (Po61 vPC61)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 61 mode active
    interface range GigabitEthernet2/38 - 43
    description TACSWN02 (Po62 vPC62)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 62 mode active

    ihernandez81,
    Between the c1-r1 & c1-r2 there are no L2 links, ditto with d6-s1 & d6-s2.  We did have a routed link just to allow orphan traffic.
    All the c1r1 & c1-r2 HSRP communications ( we use GLBP as well ) go from c1-r1 to c1-r2 via the hosp-n5k-s1 & hosp-n5k-s2.  Port channels 203 & 204 carry the exact same vlans.
    The same is the case on the d6-s1 & d6-s2 sides except we converted them to a VSS cluster so we only have po203 with  4 *10 Gb links going to the 5Ks ( 2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    If you got on any 5K you would see 2 port channels - 203 & 204  - going to each 6500, again when one pair went to VSS po204 went away.
    I know, I know they are not the same things .... but if you view the 5Ks like a 3750 stack .... how would you hook up a 3750 stack from 2 6500s and if you did why would you run an L2 link between the 6500s ?
    For us using 4 10G ports between 6509s took ports that were too expensive - we had 6704s - so use the 5Ks.
    Our blocking link was on one of the links between site1 & site2.  If we did not have wan connectivty there would have been no blocking or loops.
    Caution .... if you go with 7Ks beware of the inability to do L2/L3 via VPCs.
    better ?
    one of the nice things about working with some of this stuff is as long as you maintain l2 connectivity if you are migrating things they tend to work, unless they really break

  • Trunking on Nexus 5000 to Catalyst 4500

    I have two devices, one on each end of a point-to-point link: a Nexus 5000 on one side and a Catalyst 4500 on the other. We want a trunk port on both sides allowing a single VLAN for the moment. I have not worked with Nexus before. Could someone look at the port configurations and let me know if they look OK?
    Nexus 5000:
    interface Ethernet1/17
      description
      switchport mode trunk
      switchport trunk allowed vlan 141
      spanning-tree guard root
      spanning-tree bpdufilter enable
      speed 1000
    Catalyst 4500:
    interface GigabitEthernet3/39
    description
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 141
    switchport mode trunk
    speed 1000
    spanning-tree bpdufilter enable
    spanning-tree guard root
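    As a quick sanity check on a link like this, the trunk state can be compared from both ends. A sketch, using the interface numbers from the configs above:

    ```
    !--- On the Nexus 5000
    show interface ethernet 1/17 trunk
    !--- On the Catalyst 4500
    show interfaces gigabitEthernet 3/39 trunk
    !--- Both sides should list VLAN 141 as allowed and forwarding
    ```

    Note that with `spanning-tree bpdufilter enable` on both ends, neither switch sees the other's BPDUs, so a misconfiguration here can go undetected; `spanning-tree guard root` on both sides of the same link is also worth a second look.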

    Thanks guys, we found the issue. The Catalyst is on my side and the Nexus is at the hosting center. The hosting center moved the connection to a different Nexus 5000 and it came right up. We dropped spanning-tree guard root.
    It was working on the previous Nexus when we set the native VLAN to 141, so we thought the point-to-point link was dropping the tags.
    The hosting center engineer thinks it might have had to do with the vPC peer-link loop prevention on the previous Nexus.
    Anyway, it is working the way we need it to.

  • Comandos Nexus 5000

    Hi, could someone tell me the command on a Nexus 5000 to return an interface to its default configuration?

    Hi,
    Apologies, I don't speak Spanish, but I think the answer you're looking for is as follows:
    From the Resolved Caveats in Cisco NX-OS Release 5.2(1)N1(4) section of the Release Notes:
    CSCth06584 is the enhancement request that was filed for the "default interface" capability in Cisco Nexus 5000 and 5500 Series switches.
    Regards
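    On releases where that enhancement is available (NX-OS 5.2(1)N1(4) and later, per the release notes cited above), usage would look something like the following; the interface number is just an example:

    ```
    switch# configure terminal
    switch(config)# default interface ethernet 1/17
    !--- Returns Ethernet1/17 to its default (unconfigured) state
    ```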
