HSRP convergence time

Hello!
We have implemented an HSRP network using two (2) 3750s (hsrp1 & hsrp2). I have 6 VLANs in my LAN; 3 VLANs are in the active state on hsrp1 and the others are active on hsrp2. Our setup is a typical example of HSRP best practices as described by Cisco. Now, to test whether our HSRP works, we used a laptop patched into switch_3 (vlan3) and checked connectivity with a continuous ping. We pinged the other site (WAN) because most of the servers are there. For the test, I shut the interface on hsrp1 that connects to switch_3. The ping got only one timeout. But when I brought the interface back up, convergence was very slow; I think we got more than 10 timeouts before things went back to normal.
Any ideas on the convergence time? Is it normal for convergence to be slow when the active router comes back?
By the way, for spanning tree, all of my switches are at the default bridge priority, and it seems that not all of the per-VLAN root bridges sit on the two 3750s (hsrp1 & hsrp2). Do we need to force one of them to be the root bridge for all VLANs?
Pls. advise... thanks...
-john-

The spanning tree root must be on the node that is HSRP active, and the secondary root should be the second HSRP switch. You must configure this explicitly to get HSRP working as desired.
You also have to set up your network so that it recovers quickly from spanning-tree topology changes. To achieve this you can use Rapid Spanning Tree (spanning-tree mode rapid-pvst) or UplinkFast/BackboneFast with 802.1D. I believe there are still some unresolved issues with RSTP, but you may still give it a try; if it works for your network, this is the easiest way to configure your spanning tree. The two links below point to the corresponding configuration guides.
http://www.cisco.com/en/US/products/hw/switches/ps5023/products_configuration_guide_chapter09186a008047627f.html#wp1166029
http://www.cisco.com/en/US/products/hw/switches/ps5023/products_configuration_guide_chapter09186a0080476271.html
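For illustration, forcing the roots to follow the HSRP roles might look like this (the VLAN numbers are examples; adjust them to match which of your six VLANs is active on each switch):
on hsrp1:
 spanning-tree mode rapid-pvst
 spanning-tree vlan 1-3 root primary
 spanning-tree vlan 4-6 root secondary
on hsrp2:
 spanning-tree mode rapid-pvst
 spanning-tree vlan 4-6 root primary
 spanning-tree vlan 1-3 root secondary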
Regards,
Leo

Similar Messages

  • How can I change the real server convergence timer in LD?

    I have an LD416 (3.1.4) and configured 1 VIP and 2 real servers. It looks like it takes about 30 seconds to switch to the other real server when one of them fails.
    Q) How can I reduce the convergence time?
    Thanks,

    I am not sure, but try configuring the DELAY command and see if that helps resolve this. For related information on timers, refer to the URL below:
    http://www.cisco.com/en/US/products/sw/iosswrel/ps1835/products_configuration_guide_chapter09186a00800ca75d.html

  • BGP convergence time

    Hello,
    Recently I have been experiencing some problems with BGP convergence taking longer than expected.
    I have 3 upstream peers, and for a while I had the best one local-pref'ed down so that things were more balanced.
    I took away those local-preference configurations, so my iBGP routers (6) pretty much learn 250,000 routes from the one peer and a very small number of routes from the other 2; that one peer is just that well connected.
    If I soft shut down the peer that has all of the routes as the best path, convergence can take 3-4 minutes and causes some destinations to be unreachable during those times. Prior to the change, when my routes were balanced because of filtering, where I might learn 100,000 routes from each peer, things converged much quicker.
    My question is: can Cisco routers just not withdraw 250,000 routes from 5-6 downstream iBGP peers quickly?
    I am running 6500s and 7600s with Sup720, so I would not think the reason is hardware that cannot handle it; I would just expect these routers to handle full tables a bit better when it comes to sending the withdraw messages to their downstream iBGP peers.
    Is there a way to make that go smoother, maybe with route reflectors? I have not used them before, so I am not too familiar with how they work at a low level; just looking for ideas.
    Thanks.

    Hello Jason,
    >> If I soft shut down the peer that has all of the routes as the best path, convergence can take 3-4 minutes and causes some destinations to be unreachable during those times. Prior to the change, when my routes were balanced because of filtering, where I might learn 100,000 routes from each peer, things converged much quicker.
    It takes time to install the new routes: when the preferred peer for 250,000 routes fails, all 250,000 routes have to be withdrawn, and for each prefix a new BGP best path has to be chosen and propagated.
    Each BGP update takes space in a BGP update packet, and a BGP update packet has a maximum size.
    The protocol has its own dynamics and cannot converge in zero seconds: the more BGP best paths change, the more work the protocol has to do.
    So what you see is the result of having preferred all the routes of a single peer: this can be a reasonable choice, but it is the worst case from the redundancy point of view compared to having one third of the prefixes best via peer1, one third via peer2, and one third via peer3.
    BGP route reflectors are a good tool for scalability, but they are of limited help in reducing convergence time.
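    For reference, a minimal route-reflector sketch on one of the routers (the AS number and neighbor addresses are made up for illustration):
    router bgp 65000
     neighbor 10.0.0.2 remote-as 65000
     neighbor 10.0.0.2 route-reflector-client
     neighbor 10.0.0.3 remote-as 65000
     neighbor 10.0.0.3 route-reflector-client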
    Hope to help
    Giuseppe

  • Port Channel Downtime/Convergence time when adding interfaces

    I am looking to add a 3rd and maybe a 4th physical port to an existing port channel as we are running out of capacity; at peak we are pushing around 1.3 Gb over the 2 Gb channel, and we are expanding services out of this location. My question is this: what will the downtime and convergence time be on the port channel when I add the extra interfaces? Will I drop packets? Will my customers notice any service disruption?
    Currently configured as follows:
    interface Port-channel2
     description DN Agg Switch 2
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan add <very large vlan list>
     switchport mode trunk
     mtu 9216
     mls qos trust dscp
     spanning-tree bpdufilter enable
    interface GigabitEthernet1/47
     description Po2 AGG#1
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan add <very large vlan list>
     switchport mode trunk
     mtu 9216
     mls qos trust dscp
     no cdp enable
     channel-group 2 mode on
    interface GigabitEthernet2/47
     description Po2 AGG#2
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan add <very large vlan list>
     switchport mode trunk
     mtu 9216
     mls qos trust dscp
     no cdp enable
     channel-group 2 mode on
    Thank you

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    I wouldn't expect any service impact when adding links to the bundle.
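    For illustration, a third member would be configured to match the existing links exactly, since channel-group 2 mode on does no LACP/PAgP negotiation (the interface name here is hypothetical):
    interface GigabitEthernet3/47
     description Po2 AGG#3
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan add <very large vlan list>
     switchport mode trunk
     mtu 9216
     mls qos trust dscp
     no cdp enable
     channel-group 2 mode on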

  • HSRP Standby Time

    Hi,
    I have two 3550s and one 2950T. I created 3 VLANs (VLAN 2-4); below are the configurations for the Main and Standby switches.
    spanning-tree vlan 2-4 hello-time 1
    spanning-tree vlan 2-4 forward-time 4
    spanning-tree vlan 2-4 max-age 6
    Vlan 2 (Main)
    ip address 192.168.10.253 255.255.255.0
    standby 2 ip 192.168.10.252
    standby 2 priority 109
    standby 2 timers 5 15
    standby 2 preempt
    Vlan 3 (Main)
    ip address 192.168.11.253 255.255.255.0
    standby 3 ip 192.168.11.252
    standby 3 priority 109
    standby 3 timers 5 15
    standby 3 preempt
    Vlan 4 (Main)
    ip address 192.168.12.253 255.255.255.0
    standby 4 ip 192.168.12.252
    standby 4 priority 109
    standby 4 timers 5 15
    standby 4 preempt
    spanning-tree vlan 2-4 hello-time 1
    spanning-tree vlan 2-4 forward-time 4
    spanning-tree vlan 2-4 max-age 6
    Vlan 2 (Standby)
    ip address 192.168.10.254 255.255.255.0
    standby 2 ip 192.168.10.252
    standby 2 priority 110
    standby 2 timers 5 15
    standby 2 preempt
    Vlan 3 (Standby)
    ip address 192.168.11.254 255.255.255.0
    standby 3 ip 192.168.11.252
    standby 3 priority 110
    standby 3 timers 5 15
    standby 3 preempt
    Vlan 4 (Standby)
    ip address 192.168.12.254 255.255.255.0
    standby 4 ip 192.168.12.252
    standby 4 priority 110
    standby 4 timers 5 15
    standby 4 preempt
    From the 2950T switch, one port is connected to the Main switch and another port is connected to the Standby switch, both configured as trunks. The cable between the Main and Standby switches is also configured as a trunk. Two client PCs in two different VLANs are connected to the 2950T. Now if I disconnect or switch off the Main switch, it takes less than 10 seconds to reach the PC in the other VLAN, which is normal. But when I do the reverse, i.e. disconnect or switch off the Standby switch, there is no effect; the ping keeps going continuously (there are no request timeouts). Why is that? And why doesn't the same thing happen with the Main switch?

    I am not sure that I accurately understand the description given by Anand, but I think he is describing a situation where he has PC(s) in one VLAN and PC(s) in another VLAN and is pinging between them as he disables and enables the primary and backup switch.
    If my understanding is correct, then the behavior he describes is the correct behavior as I understand it. The first thing to remember is that traffic from one VLAN to the other VLAN must go to the default gateway, and the default gateway should be the HSRP address. Therefore if he is pinging with both switches active and switches off the main switch, there should be some interruption in the ping while HSRP recognizes the failure and moves the HSRP address to the backup switch. However, if he is pinging with both switches active and switches off the backup switch, there should be no interruption to the ping, since the primary switch already has the HSRP address and does not have to change anything.
    If my understanding of the situation is faulty please clarify what is going on.
    HTH
    Rick
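    As a quick check during such tests, the following exec command on each 3550 shows, per group, which router currently holds the active role for the virtual IP, which should make it obvious which switch-off will actually move the HSRP address:
    show standby brief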

  • HSRP Issues on VLAN interfaces

    We are experiencing an issue with HSRP on VLAN interfaces. We have HSRP on the VLANs tracking the physical interfaces, with the default decrement value of 10.
    When we physically fail the fiber circuit (pull the fiber transmit), the physical port reports a down condition, but the VLAN interface reports that it is still up. BOTH routers then report that they are the active router, and connectivity is lost.
    When the physical port is shut down, the failover takes place and the routers report their state as predicted.
    Any help would be greatly appreciated.
    These routers are 4506s running 12.1(19)EW code on a WS-X4515 module.

    If there are still active ports in the VLAN, then I would expect the VLAN interface to stay UP on both routers. However, I would not normally expect both routers to be ACTIVE. Could it be that when you take down these physical links, the routers lose sight of each other as far as the hellos are concerned?
    About the "if there are still active ports" bit: don't forget that a trunk can also constitute an active port in this sense. So if you have got any access switches uplinked to these 4506s, the trunks will be enough to keep the VLAN interface alive.
    Remember also that HSRP has a hold time of only 10 seconds by default, whereas 802.1D spanning tree has a convergence time of up to 50 seconds by default. So it is possible that if the link you are disconnecting is the active root port of a switch, the two HSRP routers will lose sight of each other. In that case, they can both become active for a few seconds. Effectively, during the STP convergence the VLAN can be partitioned. It all depends on your topology.
    You are pulling only the transmit fiber. I wonder if enabling UDLD would help here.
    As Georg says, it would be useful to know a bit more about the topology and the configuration.
    Kevin Dorrell
    Luxembourg
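    Following up on the UDLD idea, a minimal sketch (the interface name is hypothetical; aggressive mode err-disables the port when a unidirectional link is detected):
    udld aggressive
    interface GigabitEthernet3/1
     udld port aggressive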

  • HSRP-Spanning tree

    Here is my setup:
    - Each 4500 connects to both 6509s, and the two 6509s connect to each other with a 6 Gb EtherChannel.
    4500 ------ 6509a (vlan61: STP root, HSRP active)
      |            ||  (6 Gb EtherChannel)
      +--------- 6509b (vlan11: STP root, HSRP standby)
    The 6509s run HSRP.
    The 4500 has two VLANs: 11 and 61.
    6509a is HSRP active for all VLANs, whereas 6509b is HSRP standby for all VLANs.
    6509a is the STP root for vlan61.
    6509b is the STP root for vlan11.
    Question: performance-wise, is it better to have 6509b as HSRP active for VLAN 11?
    I think so, since 6509b is the first hop for vlan11 traffic. Am I on the right track?

    Hi Friend,
    The reason why I said it would not be a good design is that the convergence time will be longer even when you are running HSRP.
    To explain:
    Let's take an example: 2 Cat6k switches with an EtherChannel between them, and single trunk links from the 4k switch to each of the 2 Cat6k switches.
    Now STP will block one link between the 4k switch and one Cat6k switch, as you confirm.
    If the other link, the one in the forwarding state, goes down, HSRP hello packets will still flow between the 2 Cat6k switches via the EtherChannel, so the second Cat6k switch will never learn that the main link went down and will remain in HSRP standby state. STP convergence will take up to 50 seconds to bring the blocked link into the forwarding state, so the overall convergence time is governed by STP rather than by HSRP, which alone would take only about 10 seconds.
    If the Cat6k switch that is in the HSRP active state goes down completely, then the hello packets will not reach the standby switch, which will become the HSRP active switch; but STP will still take its own time to converge, so convergence will be approximately 50 seconds even though you are running HSRP.
    So my design rule for HSRP is: if I am using HSRP I should avoid STP blocking, so there is no need to run an EtherChannel between the 2 Cat6k switches. If you still want that, then tune STP as well: deploy UplinkFast or deploy RSTP.
    HTH
    Ankur
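    To align the STP root and HSRP active roles per VLAN as discussed, something like this might do (group numbers and priorities are illustrative, not from the original post):
    on 6509b:
     spanning-tree vlan 11 root primary
     interface Vlan11
      standby 11 priority 110
      standby 11 preempt
    on 6509a:
     spanning-tree vlan 61 root primary
     interface Vlan61
      standby 61 priority 110
      standby 61 preempt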

  • HSRP "Split Brain" on the STP Topology

    Hello. I'm a network administrator at my company.
    I have a question about an HSRP "split brain" on the STP topology.
    I am verifying the HSRP downtime on the attached network topology.
    I am trying an ICMP ping from a PC to the HSRP virtual IP.
    I have found an unexpected scenario: the ping goes down when I reboot L2 Core SW 1 and Router 1's HSRP goes active from the Init state.
    Why does the ping go down?
    Shouldn't Router 1's gratuitous ARP be forwarded to L2 Core SW 2?
    Sorry to trouble you; could you please explain?

    Thank you for your reply, rejeevh. I retried the "L2 Core SW 1 down test".
    From the result, my verification scenario was wrong.
    It was not shown in the figure, but there is another link from both routers to other routers beyond the core switches, running OSPF with "redistribute connected" in practice.
    "Ping from PC to HSRP virtual IP" was the wrong test; "ping from PC to another OSPF router's interface" is the correct scenario.
    I verified the correct scenario by rebooting L2 Core SW 1. Again, the ping went down.
    But this result was simple: the return packet from the other router was dropped at SW 1's EtherChannel interface while it was in the STP listening/learning state. (SW 1's other interfaces had PortFast or PortFast trunk enabled.)
    I confirmed the result was improved by enabling PortFast trunk on the EtherChannel interfaces of SW 2 and SW 1.
    Thank you very much for your reply.
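    For reference, the fix described above looks roughly like this (the port-channel number is hypothetical; only use it where no loop can form through that link):
    interface Port-channel1
     spanning-tree portfast trunk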

  • Failover time using BFD

    Hi Champs,
    we have configured BFD in a multihoming scenario with BGP as the routing protocol. The timer configuration is bfd interval 100 min_rx 100 multiplier 5.
    Failover from the first ISP to the second takes 30 seconds, while failover in the other direction takes more than 1 minute. Can you suggest a reason for the different failover times, and how can I get equal failover times for both ISPs? How is convergence time calculated in a BGP + BFD scenario?
    Regards
    V

    Vicky,
    A simple topology would help us better understand the scenario. Do you have both ISPs terminated on the same router or on different routers?
    How many prefixes are you learning: the full Internet table or just a few prefixes?
    Accordingly, you can consider BGP PIC or best-external to speed up convergence.
    -Nagendra
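    For context, tying BFD to an eBGP session usually looks like this (addresses and AS numbers are placeholders):
    interface GigabitEthernet0/0
     bfd interval 100 min_rx 100 multiplier 5
    router bgp 65000
     neighbor 192.0.2.1 remote-as 64500
     neighbor 192.0.2.1 fall-over bfd
    With a 100 ms interval and multiplier 5, BFD declares the peer down after roughly 500 ms; the 30 seconds and 1 minute you observe are mostly BGP best-path recomputation and FIB updates, not failure detection.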

  • MPLS TE fast-reroute switch time?

    Hi,
    I have always heard that we can achieve a 50 ms switch time with FRR, and that when applied to VPLS, a Metro Ethernet built on VPLS without STP also gets its convergence time down to 50 ms. But I wonder if that's true? The 50 ms switch time can be achieved when SDH is used, because FRR can detect alarms from SDH, but how can FRR detect an Ethernet link failure that fast?
    Your comments are appreciated!

    Hi,
    there are mainly two contributions to FRR time, as far as I know.
    First you have to detect the failure; second, the router switches over to the predefined backup LSP.
    The second portion is only a local rewrite of the LFIB and should be practically instantaneous.
    The first one is the major contribution. In SONET/SDH, a failure condition in the optical network is propagated in the SONET/SDH frames, so it is rather quick to detect failures.
    With Ethernet that might be different. Assume two routers connected through a LAN switch. In this case a link-down event in one router will only be detected by means of keepalives. Those might be your routing hellos or, in MPLS TE, also RSVP messages. With standard timers that means several seconds. However, e.g. in IS-IS we can lower the hello timers to less than a second and the hold timer to one second. This should be designed carefully so as not to introduce unwanted instability into your routing.
    In any case you cannot reach 50 ms in this scenario.
    The question should really be which convergence time is acceptable in your environment, i.e. which applications require much less than a second of outage, and whether that justifies the effort.
    Regards
    Martin
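    A sketch of the sub-second IS-IS hello tuning mentioned above (the interface name is hypothetical, and this assumes IS-IS already runs on the link):
    interface GigabitEthernet0/0
     isis hello-interval minimal
     isis hello-multiplier 5
    With "minimal", the advertised hold time is one second and hellos are sent every 1/multiplier seconds, i.e. every 200 ms here.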

  • MSTP / RSTP convergence

    Hi All
    I am looking at convergence issues on a ring of 3550s / 3750s.
    What is the limit on the number of switches that can be put in a ring topology? I am not sure, but my issue seems to be related to the number of trunks: each time I add a trunk, the convergence time increases.
    Any thoughts... before I segment the ring.
    As a side note, looking after this issue has cost me the nickname of Frodo Baggins.
    TIA
    Cisco Lad

    Hi Sam,
    Ok, so at least there are things to do. The ports that are in "Bound PVST" are not running MST but plain old slow legacy PVST. That would not be a big deal if it were some access switches connected to your ring, but in your particular case, this is the root port. Even when doing interaction between PVST and MST, we suggest that the root be located on the MST-region side (for performance reasons). Here, the root is obviously on the PVST side.
    Don't be too concerned about the "Pre-STD-Rx" flag. We recently released a new version of MST that is fully IEEE compliant (our initial release was proprietary because it was delivered before the MST standard was out). The switch that is displaying this "Pre-STD-Rx" flag is running this IEEE standard version and has detected a pre-standard neighbor. As a result, it automatically starts sending pre-standard BPDUs on this port. The detection is not 100% reliable, which is why we display this flag: to encourage the network administrator to hardcode the neighbor type as pre-standard on the interface. In short, to get rid of the message: -1- configure "spanning-tree mst pre-standard" on the interface, or -2- upgrade the neighbor to the latest release. -1- is meant as a temporary solution until -2- is possible.
    So in the end, please make sure that all the switches in the ring are running MST. You will not get fast convergence if your network is only partly running MST.
    Some ports may also be running PVST because they used to be connected to a PVST neighbor that was later converted to MST. You can do a shut/no shut on the link or a "clear spanning-tree detected-protocols" on both sides to fix that.
    Let me know what you find out!
    Regards,
    Francois
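    The two fixes mentioned above, for reference (the interface name is hypothetical):
    interface GigabitEthernet0/1
     spanning-tree mst pre-standard
    and from exec mode on both sides:
    clear spanning-tree detected-protocols interface GigabitEthernet0/1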

  • BGP Convergence Time

    Guys,
    Can anyone tell me the expected convergence time for BGP? Basically, I have 2 CEs set up for redundancy. However, I find that failing over from the primary to the secondary link took 22 seconds. Is this a normal convergence time over MPLS with BGP?

    Hi,
    the time could even be worse. eBGP takes up to 30 seconds to announce an update, iBGP up to 5 seconds. So you could get a convergence time of over a minute (2x eBGP + iBGP).
    To reduce this use the command "neighbor advertisement-interval"
    http://www.cisco.com/en/US/products/sw/iosswrel/ps1835/products_command_reference_chapter09186a00800ca79d.html#wp1183687
    You can also improve convergence time using BFD: "Decreasing BGP Convergence Time Using BFD"
    http://www.cisco.com/en/US/products/ps6441/products_configuration_guide_chapter09186a00805387d7.html#wp1118014
    Or by adjusting timers with "neighbor timers"
    http://www.cisco.com/en/US/products/sw/iosswrel/ps1835/products_command_reference_chapter09186a00800ca79e.html#wp1020454
    You might want to investigate the convergence-time improvement from adjusting the scan interval. This might, however, be somewhat risky in that it could lead to less stability if the interval is chosen too small and the CPU load generated is too high. So use with care.
    Hope this helps! Please rate all posts.
    Regards, Martin
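    Putting those suggestions together, a hedged sketch (the AS numbers, neighbor address, and timer values are placeholders to adapt to your stability requirements):
    router bgp 65000
     neighbor 192.0.2.1 remote-as 64500
     neighbor 192.0.2.1 advertisement-interval 1
     neighbor 192.0.2.1 timers 10 30
     neighbor 192.0.2.1 fall-over bfd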

  • vPC - peer gateway and peer switch

    I understand that we need to use peer-gateway on a vPC pair when HSRP is running, but why do we use peer-switch if the vPC pair is not the root or secondary root of the network? Does it matter that they send out different BIDs? What would be the worst-case scenario when not using peer-switch?

    If you read the vPC Best Practices Design Guide, the peer-switch feature reduces the convergence time after a spanning-tree failure from about 3 seconds to sub-second.
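    For reference, both features are vPC-domain subcommands on NX-OS (the domain number is a placeholder; with peer-switch, both peers present a single bridge ID, so configure identical spanning-tree priorities on both):
    vpc domain 10
     peer-switch
     peer-gateway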

  • Routed access design to support up to 2500 multicast groups

    Hi all,
    I'm doing routed access design with following:
    - Routing protocol is OSPF.
    - 3 layers with: Core/Distribution/Access design.
    - Core: Catalyst 6500E Sup 2T.
    - Distribution: ME 3800X / Catalyst 4500X.
    - Access: ME 3400E.
    We enable PIM from Core to Access layer.
    At the moment, there are around 1000 multicast groups in the network.
    Cisco hardware multicast limits:
    - Cisco ME 3400E can only support up to 1000 IGMP groups and multicast routes.
    - Cisco ME 3800X can only support up to 2000 IGMP groups and multicast routes with the Metro IP Services license.
    - Cisco Catalyst 4500X supports up to 32K multicast routes.
    - Cisco Catalyst 6500E Sup 2T supports up to 128K multicast routes.
    Requirement:
    Design the network to support up to 2500 multicast groups.
    So, is there any fine-tuning of the design that would meet the above requirement?
    Or do I have to change the access-layer switch to a higher model (e.g. 4500E)?
    Thanks,

    hi there
    Replacing the hardware is one option.
    But the question here is why you are going with a routed access layer when, in your case, you have a large mcast routing table that is going to be everywhere!
    Have you considered using a Cat65k or 45k pair in the distribution in VSS and using L2 from access to distribution?
    In this case you only need to be concerned about the distribution routing-table size; VSS also simplifies the topology and the manageability of the network, rather than having a complicated routing design.
    Routed access is good, and recommended as well, for quicker convergence time and no reliance on STP or HSRP/VRRP timers;
    however, it can complicate the routing design!
    VSS, meanwhile, also eliminates the reliance on HSRP/VRRP and STP while adding simplicity to the topology and design.
    So you may go with 45k VSS in the distribution and 65k VSS in the core, and in the access use L2 uplinks to the distribution; the uplinks everywhere can use multi-chassis EtherChannel (MEC) for increased bandwidth and quicker convergence in the case of a link failure. Plus, you will be able to support the desired mcast routing table in the distribution and core!
    Hope this helps

  • Trying to understand RSTP - Please can someone explain this?

    Hi Group
    I am still confused about how RSTP is implemented. From what I understand, the major difference is that STP used timers for loop prevention, whereas RSTP coordinates between neighbors via messages (proposal/agreement) to turn links on more quickly after topology changes, and is "timer free".
    However, I have not noticed any difference between the legacy STP configuration and the RSTP configuration on Cisco devices. Or are there differences?
    I have read in the documentation that RSTP natively includes features like UplinkFast, BackboneFast, and PortFast. So are these features now obsolete and not needed if you are running RSTP? (Although I have seen PortFast still configured along with RSTP on many switches.)
    Also, can someone explain the points below from the Cisco documentation?
    1) Should STP be disabled on edge ports altogether, as suggested below?
    "STP edge ports are bridge ports that do not need STP enabled, where loop protection is not needed out of that port or an STP neighbor does not exist out of that port. For RSTP, it is important to disable STP on edge ports, which are typically front-side Ethernet ports, using the command bridge bridge-group-number spanning-disabled on the appropriate interface. If RSTP is not disabled on edge ports, convergence times will be excessive for packets traversing those ports."
    2) It seems RSTP relies on the duplex setting to determine inter-switch links. What is the configuration to explicitly configure RSTP link types? (I couldn't find this in the documentation.)
    "RSTP can only achieve rapid transition to the forwarding state on edge ports and on point-to-point links. The link type is automatically derived from the duplex mode of a port. A port that operates in full-duplex is assumed to be point-to-point, while a half-duplex port is considered as a shared port by default. This automatic link type setting can be overridden by explicit configuration. In switched networks today, most links operate in full-duplex mode and are treated as point-to-point links by RSTP. This makes them candidates for rapid transition to the forwarding state."
    Also, I am a bit rough on my RSTP knowledge even after skimming a few Cisco documents. Can someone please explain this in a simple way?
    Thanks in advance

    To configure it on a device:
    spanning-tree mode rapid-pvst
    PortFast, UplinkFast, and BackboneFast were Cisco "enhancements" to 802.1D STP; RSTP just incorporates them. If you want to configure PortFast, the command is still "spanning-tree portfast".
    OK:
    1) That is your choice. I have bitter experience of users/IT admins just plugging hubs/switches in whenever they can, or cabling the switch back to itself, creating a cabled loop. So my advice to you is to leave STP enabled on all switch ports, BUT enable BPDU Guard; this is a life saver if you have configured PortFast.
    2) duplex auto! Or duplex full (overriding).
    I really suggest that you read the 802.1D standard; once you understand normal spanning tree, RSTP will come to you.
    http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_protocol_home.html
    http://en.wikipedia.org/wiki/Spanning_tree_protocol
    HTH.
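    Pulling the reply's commands together, plus the explicit link-type override asked about in question 2 (interface numbers are examples):
    spanning-tree mode rapid-pvst
    interface GigabitEthernet0/1
     spanning-tree portfast
     spanning-tree bpduguard enable
    interface GigabitEthernet0/24
     spanning-tree link-type point-to-point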
