LACP port channel between 6509 and Nexus 7K

We are in the process of migrating from dual 6509s to dual 7010s.  We have moved our 5K/2Ks behind the 7K and have Layer 2 up between the 6509 and 7K.  This link is configured as a port channel with two 1-Gig links using LACP.  The port channel is up and passing traffic, but the load does not appear to be equally distributed between the links.  Both the 7K and 6K are set up for src-dst-ip load balancing.  The links have been in place for over 12 hours and I would have expected them to "equal out."  Has anyone had this issue, or is this to be expected?  For clarification, there is no vPC involved in this configuration; it is simply a port channel between one 6509 and a 7010.
Thanks,
Joe
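For what it's worth, this behavior is expected with src-dst-ip hashing: the hash is deterministic per source/destination IP pair, so a few heavy conversations between the same hosts will all ride one member link, and the links only "equal out" when traffic is spread across many IP pairs. A sketch of commands to verify (the port-channel number and IP addresses are hypothetical, and exact syntax varies by platform and release):

```
! On the 7010 (NX-OS): confirm the configured hash
show port-channel load-balance
! Ask which member link a given flow would use
! (port-channel number and IP addresses here are hypothetical)
show port-channel load-balance forwarding-path interface port-channel 10 src-ip 10.1.1.10 dst-ip 10.2.2.20
! On the 6509 (IOS): confirm the configured hash
show etherchannel load-balance
```

If most of the traffic is between a small set of IP pairs, a hash that includes L4 ports (where the platform supports it) can spread flows more evenly.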

Similar Messages

  • FC Port Channel between UCS and MDS.

    Hi All,
    I am new to Cisco Fabric concepts. In my environment I have an F port channel (8-port group) created on an MDS 9513 switch, and this F port
    channel is connected to a Cisco UCS 6296 FI. The Cisco UCS blade servers are connected to the Fabric Interconnect.
    On MDS NPIV is enabled.
    Can anyone explain the below questions.
    1. Why do we create an F port channel group and connect it to the UCS FI? Is this something similar to Brocade Edge-to-AG switch connectivity?
    2. How do we configure an F port channel group on the MDS? Can anyone explain with an example?
    3. Do we need to make any configuration on UCS FI ports for server connectivity and channel port connectivity? If yes, what are the steps required
    to do the same? Does the WWPN show up in the FLOGI database if the connectivity and configuration look good on the UCS FI and MDS?
    4. What happens when a VSAN on the MDS switch is added to the port channel?
    Thanks and Regards,
    Santosh surya

    Look at my remarks in
    https://supportforums.cisco.com/discussion/12468266/fc-port-channels-between-mds-and-ucs-fi-best-practice
    1. Why do we create an F port Channel Group and connect it to the UCS FI? Is this something similar to Brocade Edge to AG Switch Connectivity.
    The F port channel is Cisco proprietary; therefore any such F port channel between a UCS FI and Brocade doesn't work.
    2. How to configure F port Channel Group in MDS . Can anyone explain with an example.
    see eg.
    https://supportforums.cisco.com/sites/default/files/legacy/9/9/2/53299-UCS_1-4-1_F-port_channel-trunk-v1.pdf
    3. Do we need to make any configuration on UCS FI ports for server connectivity and channel port connectivity? If yes, what are the steps required to do the same?
    see eg.
    https://supportforums.cisco.com/sites/default/files/legacy/9/9/2/53299-UCS_1-4-1_F-port_channel-trunk-v1.pdf
    Does the WWPN show up in the FLOGI database if the connectivity and configuration look good on the UCS FI and MDS?
    The FLOGI database is on the MDS, not the FI; there are, however, UCS CLI commands, like "show npv ...."
    4. What happens when a VSAN on MDS switch is added to the Port Channel.
    If it is not created on UCS, it will simply never reach the "up" status.
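    For reference, a minimal sketch of the MDS-side F port-channel configuration implied above (interface and channel numbers are hypothetical; the linked PDF covers the UCS FI side):

    ```
    ! On the MDS: NPIV is required; fport-channel-trunk enables trunking F port channels
    feature npiv
    feature fport-channel-trunk
    interface port-channel 1
      switchport mode F
      channel mode active
    interface fc1/1
      switchport mode F
      channel-group 1 force
      no shutdown
    interface fc1/2
      switchport mode F
      channel-group 1 force
      no shutdown
    ```

    The VSANs carried on the channel must also exist on the UCS side, per point 4 above, or they will not come up.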

  • SAN Port Channel between 6120 and Brocade

    Has anyone set up, or does anyone know how to set up, a SAN port channel between 6120s and a Brocade FC switch? I know that it requires UCS firmware 1.4, but I don't know what the Brocade requirements are.

    The SAN port channel is Cisco proprietary. You cannot set up a SAN port channel with Brocade. Sorry.

  • FC port channels between MDS and UCS FI best practice?

    Hi,
    We would like to create FC port channels between our UCS FIs and MDS9250 switches.
    At the moment we have 2 separate 8-Gbps links to the FIs.
    Are there any disadvantages or reasons to NOT do this?
    Is it a best practice?
    Thanks.

    As Walter said, having port-channels is best practice.  Here is a little more information on why.
    Let's take your example of two 8-Gbps links, not in a port-channel (and no static pinning), for Fibre Channel connectivity:
    Hosts on the UCS get automatically assigned (pinned) to the individual uplinks in a round-robin fashion.
    (1) If you have some hosts that are transferring a lot of data to and from storage, these hosts can end up pinned to the same uplink, which could hurt their performance.
    In a port-channel, the hosts are pinned to the port-channel and not individual links.
    (2)Since hosts are assigned to an individual link, if that link goes down, the hosts now have to log back into the fabric over the existing working link.   Now you would have all hosts sharing a single link. The hosts will not get re-pinned to a link until they leave and rejoin the fabric.  To get them load balanced again would require taking them out of the fabric and adding them back, again via log out, power off, reload, etc...
    If the links are in a port-channel, the loss of one link will reduce the bandwidth of course, but when the link is restored, no hosts have to be logged out to regain the bandwidth.
    Best regards,
    Jim

  • Port-channel between ACE4710 and switch

                   Hi all,
    I have configured port-channels on the ACE and the switch. Full duplex and a speed of 1000 Mbps are set on both sides. The port-channel configuration is the same on both sides, but there are output drops on the switch. See:
      Last clearing of "show interface" counters 00:31:24
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 536
    Why are there output drops on the switch? There is no half-duplex mismatch and both sides have the same speed.
    Thank you
    Roman

    Hi Roman,
    Please paste the configuration of the ACE and the switch.
    Cesar R
    ANS Team

  • Port channel between Nexus and Server is initializing

    Hi all, 
    I need your help.
    I have a Nexus 5000 with a port channel between the Nexus and a server, but one port is always stuck in initializing.
    sh interface Ethernet101/1/27
    Ethernet101/1/27 is down (initializing)
      Hardware: 1000/10000 Ethernet, address: c8f9.f920.5b5c (bia c8f9.f920.5b5c)
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is trunk
      auto-duplex, 10 Gb/s, media type is 10G
      Beacon is turned off
      Input flow-control is off, output flow-control is on
      Rate mode is dedicated
      Switchport monitor is off
    Any idea?
    Regards

    How many ports are you using in this port channel?
    Can you paste the configuration of the channel ports and the interface?
    Check "show logging" to see whether you are getting any messages related to the port channeling.

  • Create port channel between UCS-FI and MDS 9124 (F Mode)

    Dear Team,
    We were trying to create a port channel between a UCS FI and an MDS 9124,
    but the port channel does not become active in F mode on the MDS 9124.
    FI is in FC End Host Mode
    We have enabled FC uplink trunking on FI
    We have enabled NPIV on MDS
    We have enabled trunk on MDS
    FI and MDS in default VSAN
    To check, we changed the FI mode to FC Switching mode and the port channels became active, but in E mode.
    When we enabled FC uplink trunking on the FI in FC Switching mode, the port channels became active in TE mode.
    But in both the above cases, "show flogi database" shows the WWPNs of the SAN alone, not any from the FI.
    How do we achieve this?
    I have read that there is no need to change the switching mode to FC Switching mode, and to keep it as FC End Host mode.
    So how do we achieve a port channel with F mode on the MDS and the FI (mode showing as NProxy)?
    Does it have anything to do with the MDS NX-OS version? (https://supportforums.cisco.com/thread/2179129)
    If yes, how do we upgrade, given that the license for the ports came along with the device and we do not have any PAC/PAK or license file?
    Also, we have seen 2 files available for download (m9100-s2ek9-kickstart-mz.5.2.8b.bin and m9100-s2ek9-mz.5.2.8b.bin); which should we use?
    Thanks and Regards
    Jose

    Hi Jo Bo,
    What version of software is your MDS running?
    On your UCS, do "connect nxos" and "show interface brief" and look at the MAC address.
    It is possible that you might be hitting the bug below. If this is the case, you might need to upgrade the firmware on your MDS.
    Add MAC OUI "002a6a", "8c604f", "00defb" for 5k/UCS-FI
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCty04686
    Symptom:
    A Nexus switch is unable to connect to any other Nexus or Cisco switch in NPV mode with an F port-channel. The issue might be seen in earlier 5.1 releases like 5.1.3.N1.1a but not the latest 5.1.3.N2.1c release. The issue is also seen in 5.2(1)N1(1) and 6.0(2)N1(1) and later releases.
    Conditions:
    Nexus configured for SAN port channels or NPIV trunking mode; Nexus connected to UCS via a regular F port channel where UCS is in NPV mode. NPV edge switch: port WWN OUI from a UCS FI or other Cisco-manufactured switch: xx:xx:00:2a:6a:xx:xx:xx OR xx:xx:8c:60:4f:xx:xx:xx
    Workaround:
    Turn off trunking mode on the Nexus 5k TF-port (the issue does not happen with a standard F-port), or remove the SAN port-channel config.
    Further Problem Description:
    To verify the issue, please collect "show flogi internal event-history errors". Each time the port is attempted, the OLS, NOS, and LRR counters will increment. This can be determined via the following output: "show port internal info all" and "show port internal event-history errors".

  • Configure port channel between IO Module and FI

                       Hi,
    I have the current setup
    UCS chassis (4 uplinks) --> FI --> (Port channel) --> N5K --> (port channel) --> VSS 6500
    I configured the port channel between the IO module and the FI by changing the policy to "Port Channel" and setting the links to 4.
    The FI has created a port channel under "Internal" containing all the FI interfaces that are connected to the IO module.
    I have installed ESXi on a blade, but I was unable to reach it; even the ESXi host was unable to ping the gateway.
    VLAN tagging is enabled from the ESX server.
    I have issued the command "show mac address-table | inc <mac address of the vnic assigned from the service profile>" on both the N5K and the 6500, and the MAC is there.
    I have allowed all the VLANs on the vNIC from the service profile.
    Am I missing anything?
    thanks

    Hello,
    Can you please check whether your ESXi vmkernel interface IP address is learned on the right VLAN on the FI / upstream switch.
    connect nxos
    show mac-address-table | inc 
    Padma

  • Interface port-channel btw N2K and Brocade VDX

    Dear all,
    I tried to configure a port-channel between a Nexus 2K (2 FEXes) and a Brocade VDX (VDX6710).
    As you can see below, my configuration:
    N2K
    Po10
    switchport access vlan 200
    Eth101/1/6
    switchport access vlan 200
    channel-group 10 mode active (used LACP)
    Eth102/1/6
    switchport access vlan 200
    channel-group 10 mode active (used LACP)
    VDX
    interface Port-channel 27
    vlag ignore-split
    speed 1000
    switchport
    switchport mode access
    switchport access vlan 200
    spanning-tree shutdown
    no shutdown
    interface GigabitEthernet 21/0/31
    channel-group 27 mode active type standard
    lacp timeout long
    no shutdown
    interface GigabitEthernet 22/0/31
    channel-group 27 mode active type standard
    lacp timeout long
    no shutdown
    Unfortunately, we can't bring the interfaces up, and so the port-channel stays down.
    We get the following messages from the N5K for the FEX:
    %ETHPORT-5-IF_ADMIN_UP: Interface Ethernet101/1/6 is admin up .
    %ETHPORT-5-SPEED: Interface Ethernet101/1/6, operational speed changed to 1 Gbps
    %ETHPORT-5-IF_DUPLEX: Interface Ethernet101/1/6, operational duplex mode changed to Full
    %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet101/1/6, operational Receive Flow Control state changed to off
    %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet101/1/6, operational Transmit Flow Control state changed to on
    %ETHPORT-5-IF_UP: Interface Ethernet101/1/6 is up in mode access
    %ETHPORT-5-IF_DOWN_NONE: Interface Ethernet101/1/6 is down (None)
    %ETHPORT-5-IF_DOWN_ERROR_DISABLED: Interface Ethernet101/1/6 is down (Error disabled. Reason:BPDUGuard)
    Any idea about the configuration ?
    Thanks for your help.
    Matthieu

    FEXes are really made to connect hosts, not switches, so the ports should not see BPDUs; that is why BPDU guard is err-disabling the ports.
    Spanning Tree Protocol
    HIFs go down with the BPDUGuard errDisable message
    HIFs go down accompanied with the message, BPDUGuard errDisable.
    Possible Cause
    By default, the HIFs are in STP edge mode with the BPDU guard enabled. This means that the HIFs are supposed to be connected to hosts or non-switching devices. If they are connected to a non-host device/switch that is sending BPDUs, the HIFs become error-disabled upon receiving a BPDU.
    Solution
    Enable the BPDU filter on the HIF and on the peer connecting device. With the filter enabled, the HIFs do not send or receive any BPDUs. Use the following commands to confirm the details of the STP port state for the port:
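    The command list was cut off in the original post; a minimal sketch of the workaround it describes, using the HIFs from the logs above (note that hanging a switch off a FEX is generally not recommended):

    ```
    ! On the N5K, for each FEX host interface in the channel
    interface Ethernet101/1/6
      spanning-tree bpdufilter enable
    interface Ethernet102/1/6
      spanning-tree bpdufilter enable
    ! Verify the STP state afterwards
    show spanning-tree interface Ethernet101/1/6 detail
    ```

    The VDX side already has "spanning-tree shutdown", so with the filter on the HIFs neither end should send or receive BPDUs.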

  • LACP port-channel configs

    Hi,
    I currently have Nexus 1000V and vSphere 5.1 and want to convert my port-channel uplink (ESX to a Catalyst 3750 stack) from static to LACP. The configs below are what I have now, and I noted inline what I think I need to change on both my VSM and the Catalyst 3750 stack to make LACP work. Can someone confirm whether the configs below are right?
    Thank you for your help!
    conf t
    feature lacp
    port-profile type ethernet uplink
    vmware port-group
    feature lacp
    mtu 1500
    switchport mode trunk
    switchport trunk allowed vlan 10,988-989
    switchport trunk native vlan 900
    channel-group auto mode on
    no shutdown
    system vlan 10,988-989
    description uplink
    state enabled
    And here is what I have configured on my Cisco 3750 at the other end;
    interface Port-channel8
    description Etherchannel connection to ESX
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 900
    switchport trunk allowed vlan 10,988,989
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    interface GigabitEthernet3/0/24
    description 22 - nic2
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 900
    switchport trunk allowed vlan 10,988,989
    switchport mode trunk
    switchport nonegotiate
    channel-group 8 mode on (need to change to active)
    spanning-tree portfast trunk
    end
    interface GigabitEthernet2/0/24
    description 22 - nic3
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 900
    switchport trunk allowed vlan 10,988,989
    switchport mode trunk
    switchport nonegotiate
    channel-group 8 mode on (need to change to active)
    spanning-tree portfast trunk
    end

    I would like to use an LACP port channel instead of a static port channel, because LACP will shut down a link in case of a faulty NIC (dead, driver issue, etc.) instead of trying to send traffic through that link. With a static port channel, if one side (the ESX NIC) goes down, the other side (the 3750) does not see the link go down and still tries to send traffic, instead of hashing the traffic to another link.
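    Putting this together with the configs in the question, the change reduces to running LACP in active mode on both ends; a sketch using the names from the question:

    ```
    ! Nexus 1000V VSM
    feature lacp
    port-profile type ethernet uplink
      channel-group auto mode active
    ! Catalyst 3750 stack, on each member port
    ! (the Port-channel8 interface itself needs no change)
    interface GigabitEthernet3/0/24
      channel-group 8 mode active
    interface GigabitEthernet2/0/24
      channel-group 8 mode active
    ```

    Flipping the channel mode on a live uplink will briefly take the links down, so it is safer to stage the change one member at a time.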

  • What can be the max difference in cable lengths that we can have between the ISLs in a port-channel between MDS switches?

    Hello All
    What can be the max difference in cable lengths that we can have between the ISLs in a port-channel between MDS switches? Do we have any documentation?
    Thanks
    Chetan

    A competitive solution instead recommends a distance variance of 30 meters or less among ISLs within a trunk; if the distance variance is greater than 30 meters, undesired and degraded performance will occur. For example, if a trunk has a distance of 100 kilometers, the competitive trunking solution allows a cable-length variance of only 0.03 percent!
    ref;
    http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9500-series-multilayer-directors/white_paper_c11-534878.html
    hth
    regards
    inayath
    Please don't forget to rate if this info is helpful.

  • Lacp port channel shows down on one 5k

    One side of my LACP port channel is down.
    The topology is shown below, but the left side is showing down:
    20    Po20(SD)    Eth      LACP      Eth1/5(s)    Eth1/6(s) 
    # sh int port-channel 20
    port-channel20 is down (No operational members)
      Hardware: Port-Channel, address: 547f.eebb.644d (bia 547f.eebb.644d)
      Description: **To-VA-7004**
      MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is trunk
      auto-duplex, 10 Gb/s
      Input flow-control is off, output flow-control is off
      Switchport monitor is off 
      EtherType is 0x8100 
      Members in this channel: Eth1/5, Eth1/6
      Last clearing of "show interface" counters never
      30 seconds input rate 80 bits/sec, 0 packets/sec
      30 seconds output rate 176 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 112 bps, 0 pps; output rate 288 bps, 0 pps
      RX
        4286 unicast packets  785765 multicast packets  1493093 broadcast packets
        2283144 input packets  248607161 bytes
        13 jumbo packets  0 storm suppression bytes
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        0 unicast packets  3397636 multicast packets  0 broadcast packets
        3397636 output packets  399463036 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble 0 output discard
        0 Tx pause
      2 interface resets
    sh run interface port-channel 20 membership 
    !Command: show running-config interface port-channel20 membership
    !Time: Mon Feb  2 23:04:37 2015
    version 5.1(3)N2(1b)
    interface port-channel20
      description **To-VA-7004**
      switchport mode trunk
      switchport trunk allowed vlan 1,200-202,251
    interface Ethernet1/5
      description **TO-VA-7004-ETH3/45**
      switchport mode trunk
      switchport trunk allowed vlan 1,200-202,251
      channel-group 20 mode active
    interface Ethernet1/6
      description **To-VA-7004-ETH4/46**
      switchport mode trunk
      switchport trunk allowed vlan 1,200-202,251
      channel-group 20 mode active
    But on the right side everything is up:
    20    Po20(SU)    Eth      LACP      Eth1/5(P)    Eth1/6(P) 

    It seems there is a problem on the member interfaces => 20    Po20(SD)    Eth      LACP      Eth1/5(s)    Eth1/6(s) — the (s) means the members are suspended.
    Can you share the status of interfaces Eth1/5-6 on the 5k and Eth3/45 and Eth4/46 on the 7k?
    Did you configure the settings per Ethernet interface or on the Po?
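    For suspended (s) members like the above, a few commands usually narrow it down (a sketch, run on the 5k side; exact output varies by release):

    ```
    ! Overall channel state and the reason flag per member
    show port-channel summary
    ! Are LACP PDUs being exchanged with the 7k at all?
    show lacp neighbor
    show lacp counters interface port-channel 20
    ! Recent log messages often name the suspend reason directly
    show logging last 50
    ```

    Suspended members typically mean no LACP partner is responding (wrong cabling, the 7k ports not in mode active, or a VLAN/speed mismatch between the member configs).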

  • SAN Port-Channel between Nexus 5000 and Brocade 5100

    I have a Nexus 5000 running in NPV mode connected to a Brocade 5100 FC switch using two FC ports on a native FC module in the Nexus 5000. I would like to configure these two physical links as one logical link using a SAN Port-Channel/ISL-Trunk. An ISL trunking license is already installed on the Brocade 5100. The Nexus 5000 is running NX-OS 4.2(1), the Brocade 5100 Fabric OS 6.20. Does anybody know if this is a supported configuration? If so, how can this be configured on the Nexus 5000 and the Brocade 5100? Thank you in advance for any comments.
    Best regards,
    Florian

    I tried that and I could see the status light on the ports come on, but it still showed not connected.
    I configured another switch (a 3560) with the same config and the same fiber layout, and I got the connection up on it. I just can't seem to get it up on the 4506. Would it be something with the supervisor? Could it be wanting to use the 10-Gb port instead of the 1-Gb ports?

  • 8Gig Port Channel between two 6509s

    Hey all,
    I have two 6509s between which I'm trying to configure an 8-Gig trunk/port channel. I have an 8-port fiber module in slot 3 on both switches. When I use the command "set port channel 3/1-8 on" it seems to take the command, but if I do "show port channel" it shows two groups:
    3/1-4
    3/5-8
    Is there a limit as to how many gigs a port channel can be? If not, why does it split it like this?
    I should also note I'm using dot1q for the trunks using Auto mode on one switch and Desirable on the other.
    Thanks,
    Scott

    I did a "show port cap" on the interfaces and I didn't see any sort of restriction. I decided to run the command again, "set port channel 3/1-8 on", and for some reason it seemed to work this time. Not sure what changed, but it's working now.
    Thanks for your help!

  • Gigabit EtherChannel between 6509 and Intel Pro/1000 MTx

    I am trying to configure gigabit EtherChannel to connect our backup server with a 6509. The Intel Pro/1000 MT NIC in the server has two gigabit ports and supports teaming using LACP. I have configured two gig ports on the 6509 as Layer 2 EtherChannel ports ("channel-group 2 mode active"). The switch ports always show up as disabled. I can't seem to find any documentation on connecting a server to a switch using EtherChannel (although the Intel Pro MT NIC documentation does mention it). Does anyone have experience doing this?

    We always just use load balancing on the Intel NICs and do nothing with the switch but make sure that the ports are in the correct vlan.
    We have been using this for over 3 years flawlessly.
    Our switches are a Cat4006 and Cat4506.
