Timers on vPC peer-keepalive link

Hello,
I am confused about what the two timer parameters (Keepalive Hold Timeout and Keepalive Timeout) are used for.
Below are the quotes, which are truly quite confusing, from the Cisco official document (Design and Configuration Guide:
Best Practices for Virtual Port Channels (vPC) on Cisco Nexus 7000 Series Switches):
Keepalive Hold Timeout
This timer gets started once the vPC peer-link goes to down state. During this time period, the secondary vPC peer
device will ignore any peer-keepalive hello messages (or the lack of). This is to assure that network convergence
can happen before any action is taken.
Q1: Why does the vPC secondary device ignore incoming keepalive messages? As far as I know, the secondary device does need
these keepalive messages to determine subsequent actions (shut down all its vPC member ports or enter the split-brain scenario).
Q2: What kind of network convergence will happen here?
Keepalive Timeout
During this time period, the secondary vPC peer device will look for vPC peer-keepalive hello messages from the
primary vPC peer device. If a single hello is received, the secondary vPC peer concludes that there must be a dual
active scenario and therefore will disable all its vPC member ports (that is, all port-channels that carry the keyword
vpc).
Q1: When is this timer triggered?
Q2: If a single hello is received, why is a dual-active scenario (also termed a split-brain scenario) determined?
Q3: Why are all vPC member ports on the secondary switch disabled when a dual-active scenario is determined?
Thanks in advance for your help.

A: Keepalive hold-timeout vs. timeout
The difference between the hold-timeout and the timeout parameters is as follows:
During the hold-timeout, the vPC secondary device does not take any action based on keepalive messages received (or missed). This prevents the system from acting on a keepalive that might be lost only temporarily, such as when a supervisor fails a few seconds after the peer link goes down.
During the timeout, the vPC secondary device takes action to become the vPC primary device if no keepalive message is received by the end of the configured interval.
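Both timers are set as options of the peer-keepalive command under the vPC domain. A minimal sketch for illustration only (the domain ID and IP addresses are placeholders, and the values shown are what NX-OS typically uses as defaults, so verify the keywords and ranges on your release):

```
vpc domain 1
  ! hello sent every 1000 ms; peer declared down after 5 s with no hellos;
  ! after a peer-link failure, hellos are ignored for the 3 s hold-timeout
  ! before the timeout logic runs, so network convergence can settle first
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management interval 1000 timeout 5 hold-timeout 3
```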

Similar Messages

  • Nexus 5k Peer Keepalive Link

Can you move the peer-keepalive link after it has already been implemented? Is Cisco's recommendation to use the management ports on the 5Ks rather than burning a 10G port for the keepalive?

    Hello,
Yes, you can move the PKL with no issue once the vPC is established; doing so has zero impact.
In regards to where to connect it: yes, as Steve mentioned, using the mgmt port is the second choice on the Nexus 7000. Some customers choose the mgmt port because they only have 10-gig ports in the chassis and don't want to burn them on the PKL. The thing to remember is that if you have redundant sups, connect both mgmt ports (one will show down to the switch it is connected to). This allows the PKL to stay up when a sup switchover occurs.
On the Nexus 5000 we typically see the mgmt port used for the PKL. The reason is that if you used an SVI and created a separate link, it would break ISSU (there were some ways to make it work, but they are not recommended). If you have a Layer 3 module in the 5500s, then you can burn a port in the Nexus 5548 and make it a routed port.
A majority of customers I work with that have both N7K and N5K use the common denominator and use the mgmt port for the PKL on both, so that their templates and cabling are standardized.
With all that said, it is an unsupported design to have the PKL plugged into a FEX that is dual-connected to a 5K (the FEX is vPC). If you think about it, it boils down to a chicken-and-egg issue: you need the FEX to come online for the PKL to come up, but to get the FEX online you need vPC to come up, and to get vPC to come up you need the PKL. I tell customers to never connect the PKL to a FEX off the same pair. I have seen customers use Nexus for their OOB network...
    Hope this helps to clarify.
    Dave
    Sent from Cisco Technical Support iPhone App

  • Migrating a peer-keepalive link to dedicated mode ports.

    Hi
    Currently our two Nexus 7010 switches use shared mode ports in the port channel that forms the peer-keepalive link between them.
    As we are replacing the modules that these peer-keepalive links are on with non blocking modules we want to migrate the ports to dedicated mode.
    The Cisco documentation suggests you cannot have both shared mode ports and dedicated mode ports in the one L2 port channel.
In this case, has anyone tested whether it would work to just remove the channel-group (associated with the peer-keepalive link) from all of the old shared-mode ports and add it to the new dedicated-mode ports? Or will the port channel complain because it was originally created with shared-mode ports?
    thanks

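One hedged way to approach the migration (untested here; the port numbers and channel-group ID are hypothetical): remove the channel-group from the old shared-mode members first, set the new ports to dedicated rate-mode, then add them to the same channel-group. The port channel itself keeps its configuration; only its membership changes.

```
! old shared-mode members
interface Ethernet1/17 - 18
  no channel-group 5

! new ports on the non-blocking module
interface Ethernet2/1 - 2
  rate-mode dedicated
  switchport
  channel-group 5 mode active
```

A maintenance window is still advisable, since the keepalive will drop briefly while no member ports are in the channel.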

  • VPC Peer-Link Failure

    Hello,
    In the case I have two N5k acting as a vPC peers and I lose the vPC peer-link between two of them, but I do not lose the vPC peer-keepalive link, what would happen when the vPC peer-link comes back again?
As I understand it, in the case of a vPC peer-link failure all vPC member ports on the secondary N5K are shut down. What happens when the vPC peer-link comes back again?
I have read that in that case the vPC member ports will not come back automatically but will remain disabled until you perform manual recovery. Is that really so?
    Is there some way that we can automate the process upon recovery?
    Thanks

    The reload restore command has been removed/replaced and the new feature is
    now called auto recovery. Auto recovery covers the use case that reload
    restore addressed, plus more.
    If both switches reload, and only one switch boots up, auto-recovery allows
    that switch to assume the role of the primary switch. The vPC links come up
    after a configurable period of time if the vPC peer-link and the
    peer-keepalive fail to become operational within that time. If the peer-link
    comes up but the peer-keepalive does not come up, both peer switches keep
    the vPC links down. This feature is similar to the reload restore feature in
    Cisco NX-OS Release 5.0(2)N1(1) and earlier releases. The reload delay
    period can range from 240 to 3600 seconds.
    When you disable vPCs on a secondary vPC switch because of a peer-link
    failure and then the primary vPC switch fails, the secondary switch
    reenables the vPCs. In this scenario, the vPC waits for three consecutive
    keepalive failures before recovering the vPC links.
    The vPC consistency check cannot be performed when the peer link is lost.
    When the vPC peer link is lost, the operational secondary switch suspends
    all of its vPC member ports while the vPC member ports remain on the
operational primary switch. If the vPC member ports on the primary switch
flap afterwards (for example, when the switch or server that connects to
the vPC primary switch is reloaded), the ports remain down due to the vPC
consistency check, and you cannot add or bring up more vPCs.
As a best practice, auto-recovery should be enabled in vPC; for more
information, please refer to the vPC Operations Guide.
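Auto recovery is enabled under the vPC domain; a minimal sketch (the domain ID and delay value are placeholders, with 240 seconds being the lower end of the configurable range mentioned above):

```
vpc domain 1
  ! allow a lone booting switch to assume the primary role and bring
  ! up its vPCs after the reload delay expires (240-3600 seconds)
  auto-recovery reload-delay 240
```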
    HTH,
    Alex

  • VPC Keep-alive link in F1 series Linecards

    Hi.
Can we use N7K F1 line cards for the vPC keepalive link?
That is, configure a Layer 2 port channel and a point-to-point VLAN interface?
    thanks

    Hi,
    This is supported, but as per page 28 of the Best Practices for Virtual Port Channels (vPC) on Cisco Nexus 7000 Series Switches:
    Note: If you are using a pure Cisco Nexus F1 Series system or VDC (that is, only F1 line cards used in the chassis or only F1 ports in the VDC), the peer-keepalive link can be formed with mgmt0 interface or 10-Gigabit Ethernet front panel port. In the latter case, use the management command under the SVI to enable it for inband management (otherwise, the SVI is brought down because no M1 modules exist in the system or VDC).
    It's probably worth noting the recommendations as to how the peer-keepalive link should be configured:
    Strong Recommendations:
    When building a vPC peer-keepalive link, use the following in descending order of preference:
1. Dedicated link(s) (a 1-Gigabit Ethernet port is enough) configured as L3. A port channel with 2 x 1G ports is even better.
    2. Mgmt0 interface (along with management traffic)
    3. As a last resort, route the peer-keepalive link over the Layer 3 infrastructure
    Regards
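For the F1-only case described in the note above, a hedged configuration sketch (the VLAN, interface, and addresses are hypothetical) of a peer-keepalive over a front-panel port, with the SVI enabled for inband management:

```
vlan 99

! front-panel 10G port dedicated to the keepalive
interface Ethernet1/1
  switchport
  switchport access vlan 99
  no shutdown

! the management keyword keeps the SVI usable in an F1-only VDC
interface Vlan99
  management
  ip address 10.99.99.1/30
  no shutdown

vpc domain 1
  peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf default
```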

  • Nexus5k peer-keepalive design/configuration

    So I am looking for thoughts on the following implementation.
    I have three sets of Nexus5ks (PODS) that I want to setup the peer-keepalive links for.
For each POD I have configured the mgmt0 ports and connected them to an L2 switch. This L2 switch is being used for each POD's peer-keepalive along with some other management services for our DC. My concern is that all PODs' peer-keepalives are traversing this single switch, and I want to make sure that I fully understand what will happen if this switch goes down. We'll work diligently to restore service to this switch, as other critical management services run on it, but the single point of failure for 3 PODs' peer-keepalives has me concerned.
So if the keepalive link goes down, it is my understanding that all the vPCs will remain active and data forwarding will continue. That's good to know. But are there any other risks or caveats I should be aware of? What if another system failure occurs while this keepalive link is down? A switch reboots or a vPC drops?
Also, is there any failure scenario where all 3 PODs would lose data forwarding if this L2 switch that all the keepalives traverse fails?
I feel it would be overkill to set up a separate L2 switch for each POD for just this use, so I am leveraging an existing L2 switch we use for other network management functions.
Any advice is appreciated. Thanks in advance.
    Chucky

    Hi Chucky,
As you already know, once vPC is operational, if the peer-keepalive link fails then everything carries on as before. Both switches will continue to forward traffic on their vPC member ports.
    If you were then unfortunate enough to have a failure of the vPC peer link while the peer-keepalive is down, then you get into the scenario where the vPC member ports on the operational secondary device are taken down. You still have connectivity to downstream devices from the operational primary though, and so unless you have single attached devices on the secondary, you're still OK.
    "What if another system failure occurs when this keepalive link is down?  A switch reboots or a vPC drops?"
If one of the Nexus 5K switches reboots while the peer-keepalive is down, the remaining N5K will remain or become operational primary and continue to forward traffic on its vPC member ports. If you lost both Nexus 5Ks of the same pod while the Layer 2 switch was down, then depending on your code version and configuration, you could run into issues when they came back up. In the early days of vPC the peer-keepalive was required to initially establish vPC, but Cisco addressed this from 5.0(2)N2(1) with the auto-recovery feature.
If a vPC drops on one or both of the peers, e.g. due to a single link failure or the entire downstream device rebooting, then the ports and the vPC become operational on both peer devices once the downstream device is operational again. This is irrespective of the state of the peer-keepalive link.
The Virtual Port Channel Operations guide discusses these failure scenarios (and more besides) and the use of auto-recovery, and is worth a read to ensure you fully understand the recovery options for every scenario.
    In short, I believe that what you're planning is an acceptable risk.
    Regards

  • Using 40GE ports for VPC Peer Link

    Hi,
    Is it possible to use the native 40GE ports on the N7K-M206FQ-23L module for the VPC Peer Link, or do you have to break these ports out into 10GE ? I have read that 10GE ports must be used for the VPC peer link.
    Thanks in advance.

    You can use 40GE ports for VPC peer-link. No need to break those to 10G.

  • VPC, VPC Peer-links and VDC

    I have 2 7Ks and will run VPC and multiple VDCs.
Should there be a separate vPC peer-link and keepalive link per VDC?
I am not sure, but I guess yes, since a physical interface must be allocated to a single VDC.
    I just need confirmation.
    thanks

Yes, if you have 4 VDCs running vPC, you will need 4 separate vPC peer-links. VDCs are physically separated (even though they share the same box), and vPC cannot communicate across VDCs.
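Each VDC pair therefore gets its own, fully independent vPC configuration. A sketch for one VDC pair (the domain ID, port channel, and addresses are hypothetical; repeat the pattern with different physical ports in every other VDC):

```
vpc domain 10
  peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf management

interface port-channel10
  switchport
  switchport mode trunk
  vpc peer-link
```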
    HTH,
    jerry

  • VPC Peer-Link In Different VPC/Portchannel

    Hi all,
Can we make 2 different port-channels the vpc peer-link?
    Example:
    interface port-channel 10
    vpc peer-link
    interface port-channel 20
    vpc peer-link
Will this work as the vpc peer-link?

I have an issue with the MAC table being full on an F1 line card. I got the idea to do what the title describes; the situation is as follows:
VLANs: 1 through 4000
Interfaces: 4 x 10G
I want to separate the VLANs into 2 groups:
1) 1-2000
2) 2001-4000
These links are for the vPC peer-link.
I will create 2 port-channels and combine them as 1 peer-link.
Can this be done?

  • VPC Peer Link

What is the function of the vPC peer-link? Should it be the composite of all vPC links that are dual-homed between switches?
In this diagram, is it necessary to have 8 x 10G links as shown above? The links connecting the 7Ks to the 5Ks are vPC links.

OK, so as I read your reply I would like to confirm the following:
Hosts which are not connected to the FEX via a normal trunk or vPC, and which need to communicate with hosts which are on a vPC, need their VLANs trunked on the vPC peer link.
For VLANs which communicate between devices which are not on the vPC, a separate link is recommended.
I now have an issue where I have a Nexus 1000v deployed in VMware using L3. The control VLAN (which has the same requirements as the vMotion VLAN) needs to be L2 and is trunked via the physical uplinks, which also carry VLANs that have HSRP on the 5Ks.
As a port channel from each host terminates on each FEX as part of a vPC, each will carry VLANs which only require L2 communication and some which have a gateway (HSRP).
VLANs which carry only L2 traffic, i.e. the control VLAN or vMotion VLAN, need to communicate with other hosts. If a source packet arrives on FEX 1, which is connected to N5K1, and needs to reach a destination on FEX 2, which is linked to N5K2, it must transit the two Nexus 5Ks. Could this be achieved via the peer link, or would I need a separate link carrying these VLANs in addition to them being carried over the vPC peer link?

  • VPC peer-link on N7k's 1Gig link?

We are in the process of setting up a vPC peer between 2 N7Ks over a 1-Gig link; has anybody done this before? I couldn't find any documents on the Cisco site which talk about this. All of the documents point to setting it up using 10G links.
    Cheers
    Raja

    Hello Raja,
The vPC peer link must be 10Gb Ethernet, otherwise it will not form. It is also mentioned here:
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/4_2/nx-os/interfaces/configuration/guide/if_nxos/if_vPC.html
    https://books.google.co.uk/books?id=o3jeY1SwOYcC&pg=PA114&lpg=PA114&dq=peer+link+must+be+10&source=bl&ots=cZSAvLRMto&sig=YviMepi0thKtqUA2P2n3r2JkWnc&hl=en&sa=X&ei=-GauVNXwIs_waMzcgvAG&ved=0CFQQ6AEwCQ#v=onepage&q=peer%20link%20must%20be%2010&f=false
The vPC peer-keepalive link, on the other hand, can certainly be 1Gb.
    HTH
    Bilal

  • (*) - local vPC is down, forwarding via vPC peer-link

    Hello 
The local vPC status is down; what is the issue?
Status:
     show vpc
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                     : 1
    Peer status                       : peer adjacency formed ok
    vPC keep-alive status             : peer is alive
    Configuration consistency status  : success
    Per-vlan consistency status       : success
    Type-2 consistency status         : success
    vPC role                          : secondary
    Number of vPCs configured         : 2
    Peer Gateway                      : Disabled
    Dual-active excluded VLANs        : -
    Graceful Consistency Check        : Enabled
    Auto-recovery status              : Enabled (timeout = 240 seconds)
    vPC Peer-link status
    id   Port   Status Active vlans
    1    Po1    up     1,150
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    10     Po10        down*  success     success                    -
    20     Po20        down*  success     success                    -
    # show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    Group Port-       Type     Protocol  Member Ports
          Channel
    1     Po1(SU)     Eth      LACP      Eth1/1(P)    Eth1/2(P)
    10    Po10(SD)    Eth      LACP      Eth1/47(I)
    20    Po20(SD)    Eth      LACP      Eth1/48(I)

    Hi,
    What is Portchannel 10 and 20 for?  They are both down.
    Can you post the config from both switches?
    HTH

  • Vpc peer-link forwarding behavior

    Hey,
    In this cisco doc (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572835-00_NX-OS_vPC_DG.pdf ) I come across this statement:
    One of the most important forwarding rules of vPC is the fact that a frame that entered the vPC peer switch from the peer link cannot exit the switch out of a vPC member port (except if this is coming from an orphaned port).
    This makes perfect sense up to the "except if this is coming from an orphaned port". I can't seem to figure out why traffic sourced from an orphaned port (ie, "from" an orphaned port) and ulimately destined to a vPC member port is allowed -- since it should be sent out the local vPC member port and not across the peer link.
    Would make more sense to me if it said "destined to an orphaned port", so of course it would have to cross the peer-link.
    Can anyone shed some light on this exception to the rule?
    Thanks!

    Thanks Chad!
I kept racking my brain on that one, and the only time it would make any sense (i.e., I was trying to fit a square peg into a round hole) is if you have IGP peering to each 7K from an orphan port (e.g., a firewall): the IGP ECMP hashes a packet to the far-end 7K, and then the traffic sent to the directly attached 7K must be sent across the vPC peer-link, and in theory shouldn't be dropped. This is, of course, until you add the peer-gateway command, which confuses matters a bit, especially from an IGP control-plane perspective, but also in this loop-prevention rule, since the local 7K will handle packets destined to the other 7K's MAC.
To complicate matters further, the latest 5K release notes say to exclude the backup router VLAN from peer-gateway... I still have to dive into that one.

  • Nexus 7K Core Layer VDC, does it require a VPC Peer Link

    We are going to be using a pair of Cisco Nexus 7010s to act as both our data center aggregation layer and the core layer. We will accomplish this via two VDCs, one for the core layer and one for the aggregation layer.
I know that if we are doing vPCs between the access and aggregation layers, we need a vPC peer link (and peer-keepalive link) between the two aggregation-layer contexts. But if the connection between the aggregation and the core is purely Layer 3 (OSPF), then I don't think we need a vPC peer link between the two core VDCs. Am I correct?

You are on the right track.
You will use vPC if your design includes an L2 trunk infrastructure. Since you are aggregating with an L3 core, there is no need to add vPC, I think.
    http://www.cisco.ws/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/DC-3_0_IPInfra.html
    Thx,
    Eric

  • Duplicate address across VPC peer-link on Nexus 7010

    Just set up a VPC peer-link between two 7010 switches.  The peer-link is a port-channel of two 10Gb connections.  On both sides I'm seeing this in the log:
    2010 Jan  5 04:27:34 CRMCN7K-1 %ARP-2-DUP_SRC_IP:  arp [3069]  Source address of packet received from 0024.f716.b341 on Vlan401(port-channel10) is duplicate of local, 10.180.0.17
    and on the other
    2010 Jan  5 04:23:39 CRMCN7K-2 %ARP-2-DUP_SRC_IP:  arp [3052]  Source address of packet received from 0024.f71f.a7c1 on Vlan401(port-channel10) is duplicate of local, 10.180.0.18
VLAN 401 is the only VLAN on them right now with a Layer 3 address. What am I missing? Everything looks correct. Port-channel 10 is up and running fine... or so it seems.

    Hey Nashwj,
    What version of NX-OS are you running?
    Are the 7K in a stand alone environment (lab or similar) or connected to other production network devices?
    Are both of the VLANs carried across the vPC peer link port-channel?
    Are both of the VLANs carried across any vPC port-channel?
Do you have HSRP set up on the VLAN 401 interfaces on each of the 7Ks? If so, what are the real and VIP addresses?
    If you can either provide answers to the above or configuration snapshots of the vPC and SVI interfaces for your VLANs on each of the 7Ks a solution should be reachable.
