MPLS TE Fast ReRoute

Hi Experts,
I'm just getting started with MPLS TE and I'm wondering how fast the "fast reroute" feature can actually be.
I'm planning to create two tunnels for specific traffic in my network, and it looks like MPLS TE with FRR is the most reliable option if we are aiming for a network with essentially zero packet loss.
I saw in some documentation that with MPLS TE it is possible to reroute traffic within about 50 ms and with no packet loss at all, assuming the backup tunnel is as reliable as the primary.
Is this true? I'm new to this subject, so I would like to know more about what I could achieve in terms of high availability.
Regards
Paulo Varanda

Hi,
Yes, MPLS-TE with FRR gives fast convergence in the range of 50 ms (50 ms is the traditional protection-switch target inherited from SDH/SONET networks). But there are some prerequisites for MPLS-TE FRR to provide that fast convergence.
Tunnel Headend -- Router 1 --- Router 2 ---- Router 3 --- Tunnel Tailend
                           \-- Router 4 ---- Router 5 --/
MPLS-TE FRR protects a particular link or a particular node.
For link protection, the concept is to have a primary tunnel protected by a backup tunnel. The backup tunnel must follow a completely different, fault-tolerant physical path so that it survives when the primary path fails; in other words, the two tunnels should not share any SRLG links. In the case above, if a link along Router 1 - Router 2 - Router 3 fails, the traffic should fall back over Router 4 and Router 5.
Detecting that the link or node has gone down requires a failure-detection mechanism; usually RSVP hellos are used to detect the failure.
Node protection provides link protection by default, so when Router 2 goes down the traffic falls back over the backup path.
MPLS-TE FRR works by pre-signalling the backup LSP before the failure occurs, so the protection path is ready in advance. This is different from plain multiple path-options, where the LSP is signalled over the secondary path-option only after the primary LSP on the primary path goes down.
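A minimal link-protection sketch for the topology above (a hedged example; interface names, the explicit-path name, and the placeholder addresses are assumptions, not from this thread):

```
! On the point of local repair (e.g. the headend or Router 1):
! primary tunnel, flagged as wanting FRR protection
interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination <tailend-loopback>
 tunnel mpls traffic-eng path-option 1 dynamic
 tunnel mpls traffic-eng fast-reroute
!
! backup tunnel, explicitly routed around the protected link (via Router 4/5)
interface Tunnel2
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination <next-hop-loopback>
 tunnel mpls traffic-eng path-option 1 explicit name AVOID-PRIMARY-LINK
!
! tie the backup tunnel to the interface it protects
interface GigabitEthernet0/0
 mpls traffic-eng backup-path Tunnel2
```

With this in place the pre-signalled Tunnel2 can be spliced in locally when the protected interface fails, instead of waiting for end-to-end re-signalling.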
HTH
Arun

Similar Messages

  • MPLS TE Fast-Reroute question?

    Hi:
    I am trying to configure the mpls te fast-reroute command but the router complains!!
    I am running 12.4 Enterprise on a 3640.
    Does this only work on a 7200 and up?
    Thanks.

    Niraj,
    The 3640 doesn't support FRR. It does support traffic engineering, but it cannot be used as the Point of Local Repair (PLR), which is the router where the backup tunnel is configured.
    Hope this helps,

  • MPLS TE fast-reroute switch time?

    Hi,
    I have always heard that we can achieve a 50 ms switch time with FRR, and that applying it to VPLS lets a Metro Ethernet built on VPLS without STP reduce its convergence time to 50 ms as well. But I wonder if that's true? The 50 ms switch time can be achieved when SDH is used, because FRR can react to the SDH alarm, but how can FRR detect an Ethernet link failure that fast?
    Your comments are appreciated!

    Hi,
    there are mainly two contributions to FRR time as far as I know.
    First you have to detect the failure, second the router switches over to the predefined backup LSP.
    The second portion is only a local rewrite of the LFIB and should be practically instantaneous.
    The first one is the major contribution. In SONET/SDH a failure condition in the optical network is propagated in the SONET/SDH frames, thus it is rather quick to detect failures.
    With Ethernet that might be different. Assume two routers connected through a LAN switch: a link-down event on one router's port will only be detected by the other router by means of keepalives. Those might be your routing hellos or, in MPLS TE, RSVP messages. With standard timers that means several seconds. However, e.g. in IS-IS, we can lower the hello timer to less than a second and the hold timer to one second. This should be designed carefully so as not to introduce unwanted instability into your routing.
    In any case you cannot reach 50ms in this scenario.
    The question should be really which convergence time is acceptable in your environment, i.e. which applications require much less than a second outage and does it justify the efforts.
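    As a rough illustration of the subsecond IS-IS timers mentioned above (a sketch only; the interface is a placeholder and such aggressive timers should be validated for stability before deployment):

```
interface GigabitEthernet0/0
 ! with "minimal", the hold time is 1 second and hellos are sent at
 ! holdtime / multiplier, i.e. roughly every 333 ms here
 isis hello-interval minimal
 isis hello-multiplier 3
```

    This bounds failure detection to about one second, still well above the 50 ms achievable with SONET/SDH alarms.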
    Regards
    Martin

  • IP-Fast Reroute with MPLS remote LFA tunnels

    I have a simple ring network of four 3600Xs with an IP/MPLS 10 Gig backbone between all units (OSPF running in the core). Per the 3600 design guide, I turned on IPFRR under OSPF for fast reroute of traffic around faults. I have an L3VPN on the 3600s that I'm using to test. FRR works quite well when the repair route is an ECMP (equal-cost multipath) route; I don't even notice an interruption in pings between L3VPN sites when an 'active' link goes down.
    The issue arises when the repair route is a remote-LFA (loop-free alternate) MPLS tunnel. I've done a few tests, and the failover time when the repair route is a remote LFA tunnel is the same as when FRR isn't turned on at all: it's just the normal route convergence time, and there is a significant traffic interruption (compared to FRR with an ECMP repair route).
    The thing is, I'm not quite sure how to even diagnose this. I was thinking that maybe the remote LFA tunnel was using the link that failed, so it was in essence 'down' as well, hence the traffic interruption while routing fully converged. But I looked at the remote-LFA interfaces and, as far as I understand them, they are taking the right path out of the router (that is, away from the link that would fail in order to activate the remote-LFA route).
    Are there any resources or tips to help troubleshoot why these remote-LFA tunnel repair routes don't seem to be working well?
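    For reference, remote LFA under OSPF is typically enabled along these lines (a sketch only; exact syntax varies by platform and release, and the process number is a placeholder):

```
router ospf 1
 ! compute per-prefix repair paths (LFA)
 fast-reroute per-prefix enable prefix-priority low
 ! allow targeted-LDP tunnels to remote LFAs when no directly
 ! connected loop-free alternate exists
 fast-reroute per-prefix remote-lfa tunnel mpls-ldp
```

    When troubleshooting, it is worth confirming that the targeted LDP session to the remote LFA node is up and that labels are exchanged over it, since the repair path is unusable without them.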

    Thanks for the reply Nagendra. When you ask if I've seen the backup path installed in the RIB/FIB, I'm not exactly sure what you mean. I do see repair paths referencing remote LFAs on both the 3600 that would be the source of the test traffic and the one that would be the destination. Like this:
      * 172.16.0.3, from 10.10.10.3, 01:55:50 ago, via TenGigabitEthernet0/2
          Route metric is 2, traffic share count is 1
          Repair Path: 10.10.10.4, via MPLS-Remote-Lfa40
    and on the other router:
      * 172.16.0.2, from 10.10.10.1, 01:56:34 ago, via TenGigabitEthernet0/1
          Route metric is 2, traffic share count is 1
          Repair Path: 10.10.10.2, via MPLS-Remote-Lfa32
    If you're looking for some specific command output, let me know.

  • MPLS: changing mtu-size on a fast reroute tunnel

    Hi,
    please can someone tell me how to change the MTU of a fast reroute tunnel interface?
    Best regards

    You should be able to change the MTU on the tunnel interface itself, using the "ip mtu xxx" command.
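    A minimal sketch of what that looks like (the tunnel number and MTU value here are illustrative placeholders, not from this thread):

```
interface Tunnel1
 ! IP MTU for packets entering the tunnel; choose a value that leaves
 ! room for the MPLS label stack on the physical links along the path
 ip mtu 1496
```

    Alternatively, raising "mpls mtu" on the physical core links lets labeled packets through without lowering the tunnel's IP MTU.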
    Regards,
    Niranjan

  • Fast-Reroute on a ring with Gig Interface

    Hi,
    I'm trying to set up a fast-reroute option on a four-router ring connected through GigE, running OSPF.
    The main idea is to be able to use an xconnect between A and B as the normal route, with a backup through C and D in case the A-B Gig link fails.
    I created two tunnels as follows on A:
    interface Tunnel50
    ip unnumbered Loopback0
    tunnel mode mpls traffic-eng
    tunnel destination b.b.b.b
    tunnel mpls traffic-eng autoroute announce
    tunnel mpls traffic-eng priority 2 2
    tunnel mpls traffic-eng path-option 1 explicit name b-fast
    tunnel mpls traffic-eng fast-reroute node-protect
    interface Tunnel51
    ip unnumbered Loopback0
    tunnel mode mpls traffic-eng
    tunnel destination b.b.b.b
    tunnel mpls traffic-eng autoroute announce
    tunnel mpls traffic-eng priority 5 5
    tunnel mpls traffic-eng path-option 10 explicit name b-low
    and configure A to B interfaces with
    mpls traffic-eng backup-path Tunnel51
    ip rsvp bandwidth
    router ospf
    mpls traffic-eng router-id Loopback0
    mpls traffic-eng area 0
    Same kind of conf on B side...
    Well, if I shut down the A-to-B interface, the fast-reroute doesn't seem to operate, and the xconnect only recovers with normal OSPF convergence latency.
    Should I also create tunnels A-C, A-D, B-C, B-D, ... like a full mesh? Or point-to-point around the ring: A-B, B-C, C-D, D-A?
    Thanks for your help.
    Laurent

    I tried an xconnect between br01 and br04.
    Routers are on a ring: br01(ge3/2)--(ge3/1)br03--br02--br04(ge3/2)--(ge3/1)br01
    Tunnel50 is the direct route and Tunnel51 is the low-priority route.
    *Here the output from br04
    br04-7600-mtp02#show mpls traffic-eng tunnels brief
    Signalling Summary:
        LSP Tunnels Process:            running
        Passive LSP Listener:           running
        RSVP Process:                   running
        Forwarding:                     enabled
        Periodic reoptimization:        every 3600 seconds, next in 2803 seconds
        Periodic FRR Promotion:         Not Running
        Periodic auto-bw collection:    every 300 seconds, next in 103 seconds
    P2P TUNNELS/LSPs:
    TUNNEL NAME                      DESTINATION      UP IF      DOWN IF    STATE/PROT
    br04-7600-mtp02_t50              94.103.128.56    -         Gi3/2     up/up
    br04-7600-mtp02_t51              94.103.128.56    -         Gi3/1     up/up
    br01-7600-par01_t50              94.103.128.59    Gi3/2      -          up/up
    br01-7600-par01_t51              94.103.128.59    Gi3/1      -          up/up
    Displayed 2 (of 2) heads, 0 (of 0) midpoints, 2 (of 2) tails
    P2MP TUNNELS:
    Displayed 0 (of 0) P2MP heads
    P2MP SUB-LSPS:
    Displayed 0 P2MP sub-LSPs:
              0 (of 0) heads, 0 (of 0) midpoints, 0 (of 0) tails
    br04-7600-mtp02#show mpls traffic-eng fast-reroute database
    P2P Headend FRR information:
    Protected tunnel               In-label Out intf/label   FRR intf/label   Status
    Tunnel50                       Tun hd   Gi3/2:implicit-n Tu51:implicit-nu Ready
    P2P LSP midpoint frr information:
    LSP identifier                 In-label Out intf/label   FRR intf/label   Status
    P2MP Sub-LSP FRR information:
    *Sub-LSP identifier
    src_lspid[subid]->dst_tunid    In-label Out intf/label   FRR intf/label   Status
    * Sub-LSP identifier format: _[SubgroupID]->_
      Note: Sub-LSP identifier may be truncated.
      Use 'detail' display for the complete key.
    br04-7600-mtp02#show mpls traffic-eng tunnels backup
    br04-7600-mtp02_t51
      LSP Head, Admin: up, Oper: up
      Tun ID: 51, LSP ID: 35, Source: 94.103.128.59
      Destination: 94.103.128.56
      Fast Reroute Backup Provided:
        Protected i/fs: Gi3/2
        Protected LSPs/Sub-LSPs: 1, Active: 0
        Backup BW: any pool unlimited; inuse: 0 kbps
        Backup flags: 0x0
    *Here the output from br01
    br01-7600-par01#show mpls traffic-eng tunnels brief
    Signalling Summary:
        LSP Tunnels Process:            running
        Passive LSP Listener:           running
        RSVP Process:                   running
        Forwarding:                     enabled
        Periodic reoptimization:        every 3600 seconds, next in 2489 seconds
        Periodic FRR Promotion:         Not Running
        Periodic auto-bw collection:    every 300 seconds, next in 89 seconds
    P2P TUNNELS/LSPs:
    TUNNEL NAME                      DESTINATION      UP IF      DOWN IF    STATE/PROT
    br01-7600-par01_t50              94.103.128.59    -         Gi3/1     up/up
    br01-7600-par01_t51              94.103.128.59    -         Gi3/2     up/up
    br04-7600-mtp02_t50              94.103.128.56    Gi3/1      -          up/up
    br04-7600-mtp02_t51              94.103.128.56    Gi3/2      -          up/up
    Displayed 2 (of 2) heads, 0 (of 0) midpoints, 2 (of 2) tails
    P2MP TUNNELS:
    Displayed 0 (of 0) P2MP heads
    P2MP SUB-LSPS:
    Displayed 0 P2MP sub-LSPs:
              0 (of 0) heads, 0 (of 0) midpoints, 0 (of 0) tails
    br01-7600-par01#show mpls traffic-eng fast-reroute database
    P2P Headend FRR information:
    Protected tunnel               In-label Out intf/label   FRR intf/label   Status
    Tunnel50                       Tun hd   Gi3/1:implicit-n Tu51:implicit-nu Ready
    P2P LSP midpoint frr information:
    LSP identifier                 In-label Out intf/label   FRR intf/label   Status
    P2MP Sub-LSP FRR information:
    *Sub-LSP identifier
    src_lspid[subid]->dst_tunid    In-label Out intf/label   FRR intf/label   Status
    * Sub-LSP identifier format: _[SubgroupID]->_
      Note: Sub-LSP identifier may be truncated.
      Use 'detail' display for the complete key.
    br01-7600-par01#show mpls traffic-eng tunnels backup
    br01-7600-par01_t51
      LSP Head, Admin: up, Oper: up
      Tun ID: 51, LSP ID: 30, Source: 94.103.128.56
      Destination: 94.103.128.59
      Fast Reroute Backup Provided:
        Protected i/fs: Gi3/1
        Protected LSPs/Sub-LSPs: 1, Active: 0
        Backup BW: any pool unlimited; inuse: 0 kbps
        Backup flags: 0x0
    Thx,
    Laurent

  • RSVP Hello for Fast Reroute

    Hi,
    I am trying to set up TE tunnels with Fast Reroute protection. As I am using FastEthernet and GigabitEthernet links, I need to use RSVP hellos for link or node failure detection.
    I have a little problem understanding and properly configuring the RSVP hello feature. The topology looks as follows:
    PE1 ---- P1 ---- PE2
     |                |
     P3 ------------ P4
    I am trying to set up fast reroute to protect the connection PE1-P1-PE2 with the backup path PE1-P3-P4-PE2. Both primary and backup tunnels are set up and working. I configured RSVP hellos on the link between PE1 and P1 with the following commands on PE1 and P1:
    PE1#configure terminal
    PE1(config)#interface FastEthernet 0/0
    PE1(config-if)#ip rsvp signalling hello
    PE1(config-if)#ip rsvp signalling hello refresh interval 50
    P1#configure terminal
    P1(config)#interface FastEthernet 0/0
    P1(config-if)#ip rsvp signalling hello
    P1(config-if)#ip rsvp signalling hello refresh interval 50
    After issuing the sh ip rsvp hello instance detail command on PE1, I can see that the RSVP hello session is active:
    Neighbor 195.10.3.1 (router ID: 3.3.3.3)  Source  195.10.3.254
        Type: Active    (sending requests)
        I/F:  FastEthernet0/0
        State:   Up        (Since: 2012 January Thursday 12 00:14:22 )
        Clients: Fast Reroute
        LSPs protecting: 1
        Missed acks: 4, IP DSCP: 0x30
        Refresh Interval (msec)
          Configured: 50
    However on the P1 router the output shows different values:
    Neighbor 195.10.3.254 (router ID: 33.33.33.33)  Source  195.10.3.1
        Type: Active    (sending requests)
        I/F:  FastEthernet0/0
        State:   Up        (Since: 2012 January Wednesday 11 23:43:18 )
        Clients: ReRoute
        LSPs protecting: 3
        Missed acks: 4, IP DSCP: 0x30
        Refresh Interval (msec)
          Configured: 2000
    The configuration guide for IOS 12.2SR refers to active instances:
    If a neighbor is unreachable when an LSP is ready  to be fast  rerouted, an active Hello instance is needed. Create an  active Hello  instance for each neighbor with at least one LSP in this  state.
    and to passive instances:
    Passive  Hello instances respond to Hello Request  messages (sending Ack  messages), but do not initiate Hello Request  messages and do not cause  LSPs to be fast rerouted.
    At this point I am not sure if I configured the RSVP hellos properly. After shutting down interface FastEthernet 0/0 on P1, the backup tunnel does become active and traffic is rerouted, but the convergence time is too slow. I would expect the detection time to be 4 x the RSVP hello interval, i.e. 4 x 50 = 200 ms. However, testing revealed the convergence time to be around 2.5 seconds. My goal is to get the convergence time under 300 ms.
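    The worst-case detection time implied by the outputs above can be sketched as a quick calculation (an illustration only; it assumes the neighbor is declared down after the shown "Missed acks" count of unanswered hellos):

```python
def rsvp_hello_detection_ms(refresh_interval_ms, missed_acks=4):
    """Worst-case failure detection time: the neighbor is declared down
    after `missed_acks` consecutive hellos go unacknowledged."""
    return refresh_interval_ms * missed_acks

# PE1 side, configured at 50 ms: 4 x 50 = 200 ms
print(rsvp_hello_detection_ms(50))
# P1 side, still showing 2000 ms in its output: 4 x 2000 = 8000 ms
print(rsvp_hello_detection_ms(2000))
```

    If detection really runs at each side's own configured rate, the mismatched interval shown on P1 alone could explain multi-second convergence, which is one reason to verify that both ends report the same configured interval.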
    The question is: what is the actual difference between an active and a passive RSVP hello session? What does the Clients field in the sh ip rsvp hello instance detail output mean, and should I see the same interval on both ends of the link?
    If you need any other specification, i will provide any other show command outputs necessary.
    Thank you for any help or clarification.
    Adrian

    Adrian,
    Would it be possible to post the relevant tunnel configurations from your PE routers? From what you described, I am not sure whether you want to achieve MPLS path protection, MPLS link protection (NHOP), or MPLS node protection (NNHOP) here.
    Best regards,
    Peter

  • Fast Reroute on 7600 platform

    Hi,
    One quick enquiry, does 7600 platform support MPLS Fast Reroute.
    Practical inputs would be of immense help.
    Thanks
    Cheers
    ~sultan

    The MPLS Traffic Engineering Fast Reroute MIB provides Simple Network Management Protocol (SNMP)-based network management of the Multiprotocol Label Switching (MPLS) Fast Reroute (FRR) feature in Cisco IOS software.
    The Fast Reroute MIB has the following features:
    - Notifications can be created and queued.
    - Command-line interface (CLI) commands enable notifications and specify the IP address to which the notifications will be sent.
    - The configuration of the notifications can be written into nonvolatile memory.
    Refer to the MPLS Traffic Engineering Fast Reroute MIB section for more information:
    http://www.cisco.com/en/US/docs/ios/mpls/configuration/guide/mp_te_fast_rr_mib_ps6922_TSD_Products_Configuration_Guide_Chapter.html#wp1101191

  • Fast-reroute: failure detection on ATM

    Hi,
    I have read in the Designing MPLS TE Networks book that, in essence, fast-reroute failure detection of under 50 ms is possible on pure POS interfaces but not on ATM interfaces, which might themselves be running over POS.
    I have noticed a command that has been available since 12.1T: "oam ais-rdi". According to the command description, it is possible to bring down an ATM PVC after receiving a single AIS cell.
    Isn't that exactly how it works on POS?
    Is the Cisco Press book that outdated, or am I missing something?
    Thanks,
    David

    Hello,
    there is a major difference (on a millisecond time scale) between POS and ATM AIS. With POS interfaces one utilizes the error indication in the SONET/SDH frame. This means the first frame after failure detection, by end OR intermediate systems (e.g. a mux), will contain the error bits needed to bring down the interface and trigger fast reroute.
    If you look at an ATM solution, only ATM equipment is able to insert ATM cells for failure indication. This means a mux will not be able to indicate anything. Only ATM switches or end devices might "help" out, and even for them failure detection may itself depend on keepalives.
    So you MIGHT get very low recovery times, but you could also hit failure conditions detectable only by keepalive mechanisms, which are slower than in POS.
    Once you are relying on keepalives you could also use subsecond keepalives with OSPF or ISIS to achieve pretty fast recovery times.
    Hope this helps! Please rate all posts.
    Regards, Martin

  • Does OSPF support IP LDP Fast Reroute Loop Free Alternate?

    I have only seen examples with the IS-IS protocol. I have an ASR 9010 running IOS XR 4.0.1. OSPF is the IGP in my MPLS core and I need failover below 1 second. The core will consist of 5 ASRs in a ring.

    Yep, it's supported in IOS XE, XR and 15S.  Check the feature navigator to see if it's supported on your platform & release, but basically it's in almost all XE & XR boxes plus 7600 & ME3600/3800, and probably other stuff running 15S that the FN doesn't mention.
    HtH

  • Configure IP LDP Fast Reroute Loop Free Alternate - OSPF

    Hi,
    The link below gives an example of configuring IP LDP FRR LFA using IS-IS as the IGP.
    http://www.cisco.com/en/US/docs/routers/asr9000/software/asr9k_r4.3/mpls/configuration/guide/b_mpls_cg43asr9k_chapter_01.html#reference_063CBD50AC624F28B69D6B2173B53A75
    Is the same possible with OSPF as the IGP?
    Br,
    Anand

    Yep, it's supported in IOS XE, XR and 15S.  Check the feature navigator to see if it's supported on your platform & release, but basically it's in almost all XE & XR boxes plus 7600 & ME3600/3800, and probably other stuff running 15S that the FN doesn't mention.
    HtH

  • BFD-triggered Fast Reroute (FRR) in IOS XR

    I'm migrating the configuration from a Cisco 12416 router running Cisco IOS to a CRS-1 router running IOS XR.
    In IOS I have configured the following command for FRR: "ip rsvp signalling hello bfd". I cannot find the way to configure this parameter using IOS XR.
    I don't know if anyone can help me with this request.
    Thanks.
    Jose.

    Replied to your other posting at:
    https://supportforums.cisco.com/message/3161126#3161126
    Atif

  • MPLS TE load-balancing --- CEF Problem

    Dears
    I would like your assistance please regarding the issue below.
    We have 5 TE tunnels going to the same destination and we are load-balancing across these 5 TE tunnel LSPs.
    The command "mls ip cef load-sharing full simple" is configured so that CEF will use L4 ports in its algorithm.
    The problem is that, due to CEF behavior, 2 links are very highly utilized while utilization of the other 3 is below average.
    What I am thinking of, though I am not sure it will help, is to have 2 TE tunnels instead of 5:
    one TE tunnel load-balancing over 3 links (this can be done by using static routes to the tail loopback pointing at the 3 links) and another TE tunnel load-balancing over the other 2 links.
    By doing this, I think CEF would be used twice: first to determine which TE tunnel to use, and then to determine which link within the tunnel.
    Will this help?
    For example
    interface Tunnel1
    ip unnumbered Loopback0
    mpls ip
    tunnel destination 10.0.0.1
    tunnel mode mpls traffic-eng
    tunnel mpls traffic-eng autoroute announce
    tunnel mpls traffic-eng path-option 1 dynamic
    tunnel mpls traffic-eng fast-reroute
    ip route 10.0.0.1 255.255.255.255 link-1
    ip route 10.0.0.1 255.255.255.255 link-2
    ip route 10.0.0.1 255.255.255.255 link-3
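    The uneven utilization described above is consistent with hash-bucket distribution: if the load-sharing table has 16 buckets (a common CEF arrangement, assumed here purely for illustration), they cannot divide evenly among 5 paths:

```python
def bucket_shares(n_paths, n_buckets=16):
    """Distribute hash buckets across paths as evenly as possible;
    the first `extra` paths get one bucket more than the rest."""
    base, extra = divmod(n_buckets, n_paths)
    return [base + (1 if i < extra else 0) for i in range(n_paths)]

# 5 paths: one path owns 4/16 of the hash space, the others 3/16 each
print(bucket_shares(5))
# 2 and 3 paths at least divide the buckets without large remainders
print(bucket_shares(2), bucket_shares(3))
```

    Even with an even bucket split, the actual traffic per bucket depends on the flow mix, so bucket counts are only part of the imbalance story.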

    Hello Sherif,
    traffic of a single TE tunnel will not be load-balanced over multiple physical links, because the TE tunnel is set up using a reservation and its path uses only one link at each router hop.
    So moving to two TE tunnels will not give you the per-link balancing inside a tunnel that you are hoping for.
    Hope to help
    Giuseppe

  • 7600 as Backbone router for MPLS core

    I have 7600s and 7500s in my backbone and 7200s on the edge. When I look at the Feature Navigator, the 7600 with Sup720 appears to be missing a lot of basic features required of a core router in an MPLS backbone: features like traffic engineering fast reroute and MPLS-enabled NetFlow are not listed for this platform. Is this platform not a good candidate as a backbone router for a service provider offering MPLS services? The 7500, on the other hand, seems to have support for the MPLS-related features.

    Not sure what version you were looking at, but...
    Cisco Internetwork Operating System Software
    IOS (tm) s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(18)SXF, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2005 by cisco Systems, Inc.
    Compiled Sat 10-Sep-05 01:18 by ccai
    Image text-base: 0x40101040, data-base: 0x42D60000
    ROM: System Bootstrap, Version 12.2(17r)S2, RELEASE SOFTWARE (fc1)
    BOOTLDR: s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(18)SXF, RELEASE SOFTWARE (fc1)
    CASAN_Core1 uptime is 1 week, 4 hours, 9 minutes
    Time since CASAN_Core1 switched to active is 1 week, 4 hours, 8 minutes
    System returned to ROM by power cycle (SP by power on)
    System image file is "disk0:s72033-advipservicesk9_wan-mz.122-18.SXF.bin"
    cisco CISCO7609 (R7000) processor (revision 1.1) with 983008K/65536K bytes of memory.
    Processor board ID FOX092307Q5
    SR71000 CPU at 600Mhz, Implementation 0x504, Rev 1.2, 512KB L2 Cache
    Last reset from power-on
    SuperLAT software (copyright 1990 by Meridian Technology Corp).
    X.25 software, Version 3.0.0.
    Bridging software.
    TN3270 Emulation software.
    1 SIP-200 controller.
    1 Virtual Ethernet/IEEE 802.3 interface
    74 Gigabit Ethernet/IEEE 802.3 interfaces
    1917K bytes of non-volatile configuration memory.
    8192K bytes of packet buffer memory.
    65536K bytes of Flash internal SIMM (Sector size 512K).
    Configuration register is 0x2102
    CASAN_Core1#
    CASAN_Core1(config)#mpls traffic-eng ?
    auto-bw auto-bw parameters
    fast-reroute fast-reroute parameters
    link-management Link Management configuration
    logging Trap logging configuration
    path-selection Path Selection Configuration
    reoptimize Reoptimization parameters
    signalling Traffic Engineering Signalling Parameters
    topology Topology Database Configuration
    tunnels Traffic Engineering tunnels

  • L3-MPLS VPN Convergence

    Perhaps someone in this group can identify the missing timers/processing delays in end-to-end client route convergence.
    Scenarios:
    a) BGP new route advertised by client (CPE1)
    b) BGP route withdrawn by client (CPE1)
                 PE-to-RR i-M-BGP (logical)
            =========----- RR -----=========
    CPE1---->PE1------->P1-------->P2---->PE2----->CPE2
              |                            |
              +-------->P3-------->P4------+
    Routing:
    - eBGP between CPE and PE (any routing protocol within the customer site)
    - OSPF and LDP in the core
    Timers/Steps I'm aware of:
    - Advertisement of routes from CE to PE and placement into VRF
    - Propagation of routes across the MPLS VPN backbone
    - Import process of these routes into relevant VRFs
    - Advertisement of VRF routes to attached VPN sites
    - BGP advertisement-interval: Default = 5 seconds for iBGP, 30 for eBGP
    - BGP Import Process: Default = 15 seconds
    - BGP Scanner Process Default = 60 seconds
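    A back-of-the-envelope worst case from the default timers listed above (a sketch only; it simply sums the listed defaults along one possible delay path and ignores the RR-specific and scanner-process interactions the question asks about):

```python
# Default timers from the list above, in seconds
delays = {
    "eBGP advertisement-interval, CPE1 -> PE1": 30,
    "iBGP advertisement-interval, PE1 -> RR": 5,
    "iBGP advertisement-interval, RR -> PE2": 5,
    "BGP import process into the VRF on PE2": 15,
    "eBGP advertisement-interval, PE2 -> CPE2": 30,
}
# Worst case if every stage waits out its full interval
print(sum(delays.values()))
```

    Each stage can of course fire sooner, but stacking defaults like this shows why untuned L3VPN convergence is measured in tens of seconds.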
    I would appreciate it if someone could identify any missing process delays or timers, especially w.r.t. the RR.
    Thanks
    SH

    Check the LDP/TDP timers in the core. Remember that if a link fails in the core and a reroute occurs, the LDP/TDP bindings need to be renewed; labels are bound to routes present in the (IGP) routing table. So there is a possible delay from the core's perspective:
    mpls ldp holdtime
    mpls ldp discovery hello [holdtime | interval]
    In case you are using TE check these:
    mpls traffic-eng topology holddown
    mpls traffic-eng signalling forwarding sync
    mpls traffic-eng fast-reroute timers promotion
    I believe the latter one only applies to SDH, where the signal-loss indication is used.
    Regards,
    Frank
