ASR1k default-route load balancing

Hi Cisco community team,
I would like to load-balance outgoing traffic forwarded to standalone NAT servers.
Has anyone tried balancing with default routes of the same metric on an ASR1k?
For example:
0.0.0.0 0.0.0.0 x.x.x.x
0.0.0.0 0.0.0.0 y.y.y.y
0.0.0.0 0.0.0.0 z.z.z.z
Regards,
Konstantin

There is a difference between load balancing and load sharing, which we need to understand.
Load sharing means you have 2 users, user A and user B: user A wants to use ISP1 and user B wants to use ISP2. This is called load sharing, and it can be achieved via PBR (policy-based routing); a rough sketch follows below.
We should not try to use load balancing for Internet traffic with 2 different ISPs.
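As a rough illustration of the load-sharing approach above, here is a minimal PBR sketch; the LAN subnets (192.168.1.0/24, 192.168.2.0/24), the ISP next hops (203.0.113.1, 198.51.100.1) and the inside interface name are all hypothetical:
! classify the two user groups
access-list 10 permit 192.168.1.0 0.0.0.255
access-list 20 permit 192.168.2.0 0.0.0.255
! send group 1 towards ISP1 and group 2 towards ISP2
route-map SHARE permit 10
 match ip address 10
 set ip next-hop 203.0.113.1
route-map SHARE permit 20
 match ip address 20
 set ip next-hop 198.51.100.1
! apply the policy on the inside (LAN-facing) interface
interface GigabitEthernet0/0/0
 ip policy route-map SHARE
Traffic that matches neither ACL simply follows the normal routing table.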

Similar Messages

  • ACE Routing Load-Balance problem

    I'm trying to configure routed load balancing with a Cisco ACE module based on the following scenario:
    Local users have a router (R1) as their default gateway. This router (R1) has a default route to the VIP that represents the serverfarm with two Linux servers used for traffic shaping over the WAN. I need to balance the traffic over the two Linux servers, not necessarily over the WAN.
    The problem is that when I point the local network router's default route at the VIP, routing simply stops working! If I change the route to the real server IP address, everything starts working again without any problem.
    Follow the configs:
    Local network Router - Static route
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ip route 0.0.0.0 255.255.255.0 10.0.0.1 (VIP address)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Follow the ACE configs:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    access-list 100 line 8 extended permit ip any any
    rserver host rout001
    ip address 10.0.0.32
    inservice
    rserver host rout002
    ip address 10.0.0.31
    inservice
    serverfarm host BLC_ROUTING
    predictor leastconns
    rserver rout001
    inservice
    rserver rout002
    inservice
    class-map match-any VIP
    2 match virtual-address 10.0.0.1 any
    class-map type management match-any mgmt
    2 match protocol icmp any
    3 match protocol telnet any
    4 match protocol ssh any
    policy-map type management first-match access
    class mgmt
    permit
    policy-map type loadbalance first-match INT_router
    class class-default
    serverfarm BLC_ROUTING
    policy-map multi-match VIP
    class VIP
    loadbalance vip inservice
    loadbalance policy INT_router
    loadbalance vip icmp-reply
    interface vlan 6
    bridge-group 10
    access-group input 100
    service-policy input access
    service-policy input VIP
    no shutdown
    interface vlan 8
    bridge-group 10
    access-group input 100
    service-policy input access
    service-policy input VIP
    no shutdown
    interface bvi 10
    ip address 10.0.0.5 255.255.255.0
    no shutdown
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    I tried changing some parameters, like "transparent" in the serverfarm config, and changed the "predictor" method to "hash address source", but there were no good results at all.
    Does anyone have any idea why this is not working?
    Is there any special configuration for this scenario?
    Regards,
    Ricardo

    Ricardo,
    What is this route?
    ip route 0.0.0.0 255.255.255.0 10.0.0.1 (VIP address)
    You can't have 0.0.0.0/24.
    You must be missing something.
    Also, since the VIP is part of a VLAN with subnet 10.0.0.0/24, you don't need to add a static route to reach that VIP.
    It should normally be directly connected to your router.
    With the static route, do you see traffic coming to the ACE module?
    Does it load balance to the servers?
    Check the packet counters with 'show service-policy detail'.
    Gilles.

  • Router can perform static route load balance

    Dear All
    I am not sure about something and need your ideas and help. The question is whether a router can perform static-route load balancing. I tested it, and the result showed no. If you have any experience with it, could you share it with me? I also post my result here. Thank you

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    Normally they can, but you generally need different next hops. How did you "test"?

  • Load Balancing 2 routers

    Hi all
    Can someone tell me the best way to load balance between 2 routers? Could someone post a typical config?
    thanks a million
    Carl

    Hi Carl,
    You can use the following load-balancing methods between the routers.
    1. Static routes or default route load-balancing
    2. Multilink PPP
    3. Equal/unequal cost path load-balancing with Dynamic Routing protocol.
    4. CEF load-balancing.
    Please see the DOC below.
    http://www.cisco.com/en/US/products/hw/modules/ps2033/products_white_paper09186a0080091d4b.shtml
    I have multilink PPP and static route load-balancing config handy with me. Please see below :
    interface Multilink1
    ip address 193.193.193.1 255.255.255.252
    ppp multilink
    multilink-group 1
    interface Serial0/2
    no ip address
    encapsulation ppp
    ppp multilink
    multilink-group 1
    interface Serial0/3
    no ip address
    encapsulation ppp
    ppp multilink
    multilink-group 1
    interface Serial1/1:0
    ip address 192.168.170.1 255.255.255.0
    encapsulation ppp
    no ip route-cache
    interface Serial1/2:0
    ip address 192.168.180.1 255.255.255.0
    encapsulation ppp
    no ip route-cache
    ip route 172.28.20.0 255.255.255.0 192.168.170.2
    ip route 172.28.20.0 255.255.255.0 192.168.180.2
    HTH,
    -amit singh

  • Load balancing by equal cost Static Routes

    Hello All,
    I have 2 WAN links for Internet connectivity and I want to load balance IP traffic on both links. If I use 2 default routes like this,
    ip route 0.0.0.0 0.0.0.0 serial 0
    ip route 0.0.0.0 0.0.0.0 serial 1
    is that enough to achieve load balancing, or do I also have to configure the following interface configuration command?
    (config-int)# ip load-sharing per-packet
    Kindly advise.
    Regards,
    Mujeeb

    Hi ankurbhasin, I have one doubt pertaining to per-packet load sharing. To connect my two remote sites, A and B, each site has two WAN links: one from ISP1 (a 30 Mbps link) and the other from ISP2 (a 50 Mbps link). I am doing static-route load balancing using the same AD values for both ISPs. I have configured "ip load-sharing per-packet" on both outgoing interfaces.
    The load is distributed equally across both links, but total bandwidth utilization across both links does not go beyond 30 Mbps. The combined bandwidth of both links is 80 Mbps (50+30), yet the links are not fully utilized even under heavy load. Can you please tell me how to make full use of both WAN links at both ends?

  • Load Balance reference to be default server in connection wizard

    We have a multi server environment (3 web servers - load balanced) and database and OLAP on separate servers.
    BPC 7.5 Microsoft SP6
    Servers 2008 R2
    I am able to use the fully qualified load-balance reference successfully; however, I would prefer to always default to the load-balance reference, and not to server 1, 2 or 3, as the default value in the connection wizard for my user community. What are the correct changes to ServerInfo to make the load balance the default reference?
    I have attempted to update the server info in 6 places on a server to reference the load-balance reference. It works, but I want to default this value for all users. Is there another reference besides the items below that needs updating?
    1) Reporting Services instance name,
    2) Reporting Services external server name
    3) Application Server external server name
    4) Application Server virtual server name
    5) Web Server name
    6) Web Server Virtual Name
    Thanks for the insight

    Hi William
    You can remove the references to the other servers. For example AppServer2, AppServer3
    Back up the AppServer database first before making any changes; in the table AppServer.dbo.tblServerInfo you can remove the entries AppServer2, AppServer3. You will need to ensure that the Virtual IP (load-balanced FQDN) is valid for AppServer1, etc.
    The client reads the configuration values from multiple tables, but in the table AppServer.dbo.tblAppSetInfo it can only make reference to one value in the column. For Example: AppServer1, ReportServer1, etc
    So in the end, you would only need values for AppServer1, ReportServer1, OLAPServer1, WebServer1,SQLServer1, etc
    That is how you would always default to the load balanced configuration values.
    Hope this helps
    Kind regards
    Daniel

  • MPLS/VPN network load balancing in the core

    Hi,
    I have a question about CEF-based load balancing in the MPLS core in an MPLS/VPN environment. With flow-based load balancing, the path (outgoing interface) is chosen based on the source-destination IP address pair. What about in an MPLS/VPN environment? Is the hash based on the PE routers' src-dst loopback addresses, or on the VRF packet's src-dst addresses, at the P and PE routers? The topology would be:
    CE---PE===P===PE---CE
    I'm interested in load balancing efficiency if I duplicate the link between P and PE routers.
    Thank you for your help!
    Gabor

    Hi,
    On the PE router you can configure different types and two levels of load balancing.
    For instance, in the case of a dual-homed site, the subnet A prefix for VPN A could be advertised into the VPN by PE1 or PE2.
    PE1 receives this prefix via an eBGP session from CE1 and keeps this route as best because it is external.
    PE2 receives this prefix via an eBGP session from CE2 and keeps this route as best because it is external.
                      eBGP
              PE1 ---------- CE1 \
    PE3 --- P1 <                   Subnet A
              PE2 ---------- CE2 /
                      eBGP
    Therefore, from PE3's point of view, 2 routes are available, assuming the IGP metric from PE3 to PE1 equals the metric from PE3 to PE2.
    A first level of load sharing can be achieved thanks to the maximum-paths ibgp <number> command.
    2 MP-BGP routes are received on PE3:
    PE3->PE1->CE1->subnet A
    PE3->PE2->CE2->subnet A
    To use both routes you must set the number to at least 2: maximum-paths ibgp 2
    But guess what: in the real world an MPLS backbone hardly ever guarantees an equal IGP cost towards 2 egress PEs for a given prefix.
    So it is often necessary to ignore the IGP metric by adding the "unequal-cost" keyword: maximum-paths unequal-cost ibgp 2
    By default the load balancing is "per-session": source and destination addresses are considered to choose the path and the outgoing interface, which avoids reordering packets at the target site. Otherwise it is possible to use "per-packet" load balancing.
    Then a 2nd load-sharing level can occur.
    For instance:
           __ P1 __ PE1 __ CE1 \
    PE3 <         X              Subnet A
           \__ P2 __ PE2 __ CE2 /
    There are still 2 MP-BGP paths:
    PE3->PE1->CE1->subnet A
    PE3->PE2->CE2->subnet A
    But this time, for those 2 MP-BGP paths, 4 IGP paths are available:
    PE3->P1->PE1->CE1->subnet A
    PE3->P1->PE2->CE2->subnet A
    PE3->P2->PE1->CE1->subnet A
    PE3->P2->PE2->CE2->subnet A
    For load balancing to be active between those 4 paths, they must exist in the routing table, thanks to the "maximum-paths 4" command in the IGP (e.g. OSPF) process.
    Therefore, if those 4 paths are equal-cost IGP paths, a second level of load balancing is achieved. The default behavior is the same source-destination mechanism to select the "per-session" path as mentioned before.
    On an LSP each LSR could use this feature.
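    As a minimal sketch of both levels on PE3 (the AS number, OSPF process ID and VRF name are hypothetical, and the exact keyword order of the BGP multipath command can vary by IOS release):
    router bgp 65000
     address-family ipv4 vrf VPN_A
      maximum-paths unequal-cost ibgp 2
      ! install 2 MP-BGP paths even if the IGP cost towards PE1 and PE2 differs
    router ospf 1
     maximum-paths 4
     ! keep up to 4 equal-cost IGP paths towards the egress PE loopbacks in the routing table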
    BR

  • ACE 4710 Load Balancer

    Hello,
    I have a requirement to load balance between real servers on different subnets, but I need to preserve the original source IP address through the ACE.  I know the ACE can do Asymmetric server normalization but that appears to require the servers to be on the same subnet.  The traffic is just generic TCP and I don't want the ACE to take any action on the traffic other than to do basic balancing and allow me to direct all traffic to one server or the other for maintenance.  Is there any way to accomplish routed load balancing that preserves the original source IP?

    Hi B-Cunningham,
    Very simple !!
    When you need the same user to always be sent to the same server, you need some sort of stickiness.
    There are many different ways to achieve this.
    Some predictor algorithms will by definition always select the same server for a given client.  This is the case with the source ip hashing predictor.
    But very often you will need to configure a sticky method in combination with your predictor algorithm.
    Is the source IP hash predictor a sticky method?
    Actually, it is not a sticky method. But since the hash algorithm always gives the same result for a given source IP address, it guarantees that a client using the same IP address will always be sent to the same server.
    The advantage is that it does not require configuring a specific sticky method. It also works without the need for a sticky table, so it preserves resources.
    But the hash function will give different results when you add or remove a server. Therefore, when your rserver list is modified, your clients might be sent to different servers, breaking stickiness.
    Is sticky source IP a good solution?
    Because of the changing hash results mentioned above, most people prefer to use a standard predictor (roundrobin, leastconns, ...) and add a sticky source IP option.
    The idea is to also use the source IP address to identify the client and select the corresponding server.
    Unlike the hash method, the sticky source IP solution needs sticky resources to save the information necessary for the ACE to remember which client uses which server.
    The advantage of the sticky option is that the sticky table is not affected when the rserver list is modified.
    Why not use sticky source IP?
    Very often this solution is enough to guarantee stickiness.
    But because many clients do not have a static IP address, this method does not always work.
    There is also the problem of proxy servers hiding many clients behind a single IP address, resulting in rserver overload when using sticky source IP.
    For HTTP the solution is to use information contained in the client HTTP request and server HTTP response.
    An HTTP Cookie is an object used by a server to identify HTTP clients.  A loadbalancer can therefore also use this information to map a client to a server.
    One drawback of the hash predictor is that the hash predictor methods do not recognize the weight value you configure for real servers. The ACE uses the weight that you assign to real servers only in the round-robin and least-connections predictor methods.
    Here is the hash algorithm:
    hash = (_key) + (_key >> 8) + (_key >> 16) + (_key >> 24)
    The _key in this case is the source IP address as an unsigned 32-bit number. You then compute rserver_index = hash % number_of_rservers.
    Session persistence (stickiness) based on client source IP address or HTTP cookies is recommended on the Cisco ACE for this flow.
    IP Address Stickiness
    You can use the source IP address, the destination IP address, or both to uniquely identify individual clients and their requests for stickiness purposes based on their IP netmask. However, if an enterprise or a service provider uses a megaproxy to establish client connections to the Internet, the source IP address no longer is a reliable indicator of the true source of the request. In this case, you can use cookies or one of the other sticky methods to ensure session persistence.
    Here can be the sample configuration:
    resource-class websrv
    limit-resource all minimum 0.00 maximum unlimited
    limit-resource sticky minimum 20.00 maximum equal-to-min
    rserver host webserver1
    ip address 10.10.10.1
    inservice
    rserver host webserver2
    ip address 10.10.10.2
    inservice
    rserver host webserver3
    ip address 10.10.10.3
    inservice
    serverfarm host werbsrv1only
    probe websrv
    rserver webserver1 1000
    inservice
    serverfarm host werbsrv123
    probe websrv
    rserver webserver1 1000
    inservice
    rserver webserver2 1000
    inservice
    rserver webserver3 1000
    inservice
    ACE receives requests to the VIP on port 80 and translates them to port 1000 using the server farm configuration shown above.
    The link to the websrv home page is http://websrv:1000/index.html. A probe to this link is configured on ACE as follows:
    probe http websrv
    port 1000
    interval 2
    faildetect 2
    passdetect interval 2
    request method get url /index.html
    expect status 200 200
    Session persistence can be established by tying the session to an IP address that uniquely identifies the client.
    Create a sticky-group
    sticky ip-netmask 255.255.255.255 address source Client_subnet_1
    timeout 10
    serverfarm werbsrv1only
    Change the server farm to the sticky-group:
    policy-map type loadbalance first-match basic-slb
    class class-default
    sticky-serverfarm werbsrv1only
    sticky ip-netmask 255.255.255.255 address source Client_subnet_2
    timeout 10
    serverfarm werbsrv123
    sticky ip-netmask 255.255.255.255 address source Client_subnet_3
    timeout 10
    serverfarm werbsrv123
    Here you can find the details in the below url :
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/sticky.html#wp1004411
    I have also attached a jpeg for your reference.
    Hope you will get the idea how to use the sticky based on IP address.
    Here you can find sample config of similar type:
    http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps6906/prod_white_paper0900aecd804edab0.html
    HTH .
    Please rate if you find it useful.
    Thanks and regards,
    Sachin Garg
    Senior Specialist Security
    HCL Comnet Ltd.
    http://www.hclcomnet.co.in
    A-10, Sector 3, Noida- 201301
    INDIA

  • Load Balancing per packet not working properly

    Hi,
    I am attaching the configs for this issue. There are two E1 links from Karac-1 (Serial0/0/0:0 and 0:1) and one from Karac-2 (Tunnel10), which are connected to Khask-1w.
    The issue is that per-packet load balancing is not working successfully; the NMS snapshot is already attached.
    Load balancing is only configured on Karac-1 and Karac-2.
    What is the resolution of this problem? Traffic only uses two of the links, while the third link is not utilized.
    Kind regards, Salman Ahmed

    Hi Paolo!
    I have one doubt pertaining to per-packet load sharing. To connect my two data centres, A and B, each site has two WAN links: one from ISP1 (a 30 Mbps link) and the other from ISP2 (a 50 Mbps link). I am doing static-route load balancing using the same AD values for both ISPs. I have configured "ip load-sharing per-packet" on both outgoing interfaces.
    The load is distributed equally across both links, but total bandwidth utilization across both links does not go beyond 30 Mbps. The combined bandwidth of both links is 80 Mbps (50+30), yet the links are not fully utilized even under heavy load. Can you please tell me how to make full use of both WAN links at both ends? Or can you tell me how to distribute the traffic across both links with full utilization without using per-packet load sharing? Moreover, my links can only be configured statically at both ends.

  • ACE 4710 one-arm L4 load balancing removes accept-encoding?

    We have built a simple one-arm PAT config to round-robin load balance two Varnish servers. In the "Default L7 load-balancing action" we have left compression set to "N/A". It looks like the ACE removes "Accept-Encoding: gzip, deflate" from the client header.
    Is this normal behaviour? We would like Varnish to do the compression. Do we need to modify the headers to get this through the ACE?

    Hi,
    Yes this does seem to be the behavior. Please read below:
    HTTP compression is a capability built into web servers and web browsers to improve site performance by reducing the amount of time required to transfer data between the server and the client. Performing compression on the ACE offloads that work from the server, thereby freeing up the server to provide other services to clients and helping to maintain fast server response times.
    When you enable HTTP compression on the ACE, the appliance overwrites the client request with "Accept-Encoding identity" and turns off compression on the server-side connection. HTTP compression reduces the bandwidth associated with a web content transfer from the ACE to the client.
    So ACE rewrites the ACCEPT-ENCODING header to IDENTITY to indicate to the server that it should not compress the return data. That would be done by ACE.
    For compression to work, a client must send a request with an ACCEPT-ENCODING method of gzip or deflate. If a client sends both methods, the ACE uses the configured (default) method.
    Also, you can check whether the ACE is compressing packets in "show service-policy detail":
    switch/Admin#
    show service-policy L7_COMP_SLB_POLICY detail
    Status     : ACTIVE
    Description: -----------------------------------------
    Interface: vlan 1 108
      service-policy: L7_COMP_SLB_POLICY
        class: vip
         VIP Address:    Protocol:  Port:
         2.0.5.1         tcp        eq    80
          loadbalance:
            L7 loadbalance policy: pm
            VIP ICMP Reply       : ENABLED
            VIP state: OUTOFSERVICE
            Persistence Rebalance: ENABLED
            curr conns       : 0         , hit count        : 0
            dropped conns    : 0
            client pkt count : 0         , client byte count: 0
            server pkt count : 0         , server byte count: 0
            conn-rate-limit      : 0         , drop-count : 0
            bandwidth-rate-limit : 0         , drop-count : 0
            L7 Loadbalance policy : pm
              class/match : h
                ssl-proxy client : c
                LB action :
                   primary serverfarm: sf1
                        state: DOWN
                    backup serverfarm : -
                hit count        : 0
                dropped conns    : 0
                compression      : on  <------------------------------ Compression is enabled if the value is "on"
    compression  bytes_in  : 0       bytes_out : 0  <--- Number of bytes transmitted after compressing the server response
    Compression ratio : 0.00%  <------------------------------ Percentage of data compressed
    Gzip: 0               Deflate: 0  <--------------- Number of times the method is used
    compression errors:                                     _
    User-Agent  : 0               Accept-Encoding    : 0   |
    Content size: 0               Content type       : 0   |
    Not HTTP 1.1: 0               HTTP response error: 0   |-- Check these error counters to see if they are increasing
    Let me know if you have any questions.
    Regards,
    Kanwal

  • Load balance between DLSw and CIP routers

    Take a look on this environment:
    - 4 routers receiving all DLSw peers and circuits
    - 4 routers with CIP boards connected to 2 mainframes
    All CIP routers are configured with the same MAC address. All routers (DLSw and CIP) are connected to an Ethernet LAN switch, so this traffic is pure LLC2.
    How can I balance the traffic between the DLSw and CIP routers?
    Thanks in advance.

    I am not sure if I totally understand the topology. Let me rephrase it; please correct me if I misunderstand. In a data centre, there are 4 DLSw routers terminating DLSw peer connections from the remote sites. In the same data centre, there are 4 CIP routers which connect to 2 mainframes. CSNA is configured on all CIP routers, which use the same MAC. You configure transparent bridging on the DLSw routers, which connect to the same Ethernet switches as the CIP routers. You configure SR/TLB on the CIP routers, so that all LLC2 circuits coming from the DLSw routers connect through the Ethernet interfaces of the CIP routers.
    Do you want the LLC2 circuits from a DLSw router to load balance across the 4 CIP routers? As duplicate MAC addresses are not allowed, there is no way to connect all 4 DLSw routers and all 4 CIP routers on the same VLAN.
    I can think of a couple of workarounds.
    1. Enable SNASw on the 4 DLSw routers. Create a VDLC port on all 4 DLSw routers. The MAC address of the VDLC interface is the same. The VDLC MAC address is pointed by the remote SNA stations. Each DLSw router uses one of the CIP routers as DLUS.
    2. If this is the case, create 4 VLANs on the ethernet switches. Connect a pair of DLSw router and CIP router to each VLAN.

  • Load balancing using multiple default routes

    Hi Guys,
    I just want to ask: does creating multiple default routes on my router provide load balancing on my WAN side? As far as I know, for example, if I have two default routes on my router and, let's say, two users connecting to the Internet, the first one might go out the first WAN link while the second user might go out the second WAN link.
    Thank you so much
    Rex

    There is a difference between load balancing and load sharing, which we need to understand.
    Load sharing means you have 2 users, user A and user B: user A wants to use ISP1 and user B wants to use ISP2. This is called load sharing, and it can be achieved via PBR (policy-based routing).
    We should not try to use load balancing for Internet traffic with 2 different ISPs.

  • How to use default routes to achieve load balancing?

    show me an example
    thanks

    Hi Friend,
    If you have 2 default routes with the same admin distance you can have equal load balancing.
    Something like this:
    ip route 0.0.0.0 0.0.0.0
    ip route 0.0.0.0 0.0.0.0
    Because these 2 default routes have equal admin distance, both will be preferred, which results in equal load balancing (see the sketch below).
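    For completeness, a minimal sketch with hypothetical next hops (CEF shares traffic per destination by default):
    ip route 0.0.0.0 0.0.0.0 203.0.113.1
    ip route 0.0.0.0 0.0.0.0 198.51.100.1
    ! 203.0.113.1 and 198.51.100.1 are hypothetical ISP next hops
    ! verify both paths are installed and see which one a given flow would take:
    ! show ip route 0.0.0.0
    ! show ip cef exact-route <source-ip> <destination-ip>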
    HTH, if yes please rate the post.
    Ankur

  • Load balancing weirdness using NAT and same-metric route

    Hi.
    I'm trying to set up a double-WAN load-balancing scenario:
    I decided to attempt the "multiple same-metric routes with NAT" approach so I went for the example used in the IOS NAT Load-Balancing for Two ISP Connections Configuration Guide [1].
    I decided to use an upside-down Cisco 871-SEC/K9: use Vlan1 and Vlan2 for the routers and Fa4 for the LAN. I am hoping this is not an issue.
    There is this weirdness with some connections, particularly FTP. I pinpointed the problem to the following scenario: if I do a couple of pings to 100.1.1.1 using the FastEthernet4 as the source address, this is what I get in the logs:
    === PING 1 ECHO REQUEST ===
    *Mar 3 04:38:43.521: IP: tableid=0, s=192.168.60.4 (FastEthernet4), d=100.1.1.1 (Vlan1), routed via RIB
    *Mar 3 04:38:43.521: NAT: s=192.168.60.4->10.129.124.2, d=100.1.1.1 [14152]
    *Mar 3 04:38:43.521: IP: s=10.129.124.2 (FastEthernet4), d=100.1.1.1 (Vlan1), g=10.129.124.1, len 60, forward
    *Mar 3 04:38:43.521: ICMP type=8, code=0
    === PING 1 ECHO REPLY ===
    *Mar 3 04:38:45.589: NAT*: s=100.1.1.1, d=10.129.124.2->192.168.60.4 [19824]
    *Mar 3 04:38:45.589: IP: tableid=0, s=100.1.1.1 (Vlan1), d=192.168.60.4 (FastEthernet4), routed via RIB
    *Mar 3 04:38:45.589: IP: s=100.1.1.1 (Vlan1), d=192.168.60.4 (FastEthernet4), g=192.168.60.4, len 60, forward
    *Mar 3 04:38:45.589: ICMP type=0, code=0
    === (something else) ===
    *Mar 3 04:38:52.353: RT: SET_LAST_RDB for 0.0.0.0/0
    OLD rdb: via 10.129.124.33, Vlan2
    NEW rdb: via 10.129.124.1, Vlan1
    === PING 2 ECHO REQUEST ===
    *Mar 3 04:38:52.353: IP: tableid=0, s=192.168.60.4 (FastEthernet4), d=100.1.1.1 (Vlan2), routed via RIB
    *Mar 3 04:38:52.353: NAT: s=192.168.60.4->10.129.124.2, d=100.1.1.1 [14159]
    *Mar 3 04:38:52.353: IP: s=10.129.124.2 (FastEthernet4), d=100.1.1.1 (Vlan2), g=10.129.124.33, len 60, forward
    *Mar 3 04:38:52.353: ICMP type=8, code=0
    === PING 2 ECHO REPLY ===
    *Mar 3 04:38:53.029: NAT*: s=100.1.1.1, d=10.129.124.2->192.168.60.4 [19825]
    *Mar 3 04:38:53.029: IP: tableid=0, s=100.1.1.1 (Vlan1), d=192.168.60.4 (FastEthernet4), routed via RIB
    *Mar 3 04:38:53.033: IP: s=100.1.1.1 (Vlan1), d=192.168.60.4 (FastEthernet4), g=192.168.60.4, len 60, forward
    *Mar 3 04:38:53.033: ICMP type=0, code=0
    In the section "Ping 2 Echo Request" line 2 shows the NAT translating the packet to the address for the first provider but line 3 shows it routing it through the second one.
    In this case, the ICMP packet goes through but it is problematic if the ISP restricts the service by source-address (like RPF) or there is some acceleration mechanism inside the provider cloud, other than just plain routing.
    What am I missing? Here is the relevant part of the configuration. I deliberately disabled CEF to be able to debug the messages, but I *think* this may be altering the actual router behavior. This router does not have a "debug ip cef packet" command.
    no ip cef
    ip dhcp pool lan-side
    import all
    network 192.168.60.0 255.255.255.0
    default-router 192.168.60.1
    domain-name doublewan.local
    dns-server 8.8.8.8 8.8.4.4
    lease infinite
    ip domain name doublewan
    interface FastEthernet0
    !doesn't appear on running-config: vlan 1 is the default access vlan
    !switchport access vlan 1
    interface FastEthernet1
    switchport access vlan 2
    interface FastEthernet2
    shutdown
    interface FastEthernet3
    shutdown
    interface FastEthernet4
    ip address 192.168.60.1 255.255.255.0
    ip nat inside
    ip virtual-reassembly
    no ip route-cache
    duplex auto
    speed auto
    interface Vlan1
    ip address 10.129.124.2 255.255.255.224
    ip nat outside
    ip virtual-reassembly
    no ip route-cache
    interface Vlan2
    ip address 10.129.124.35 255.255.255.224
    ip nat outside
    ip virtual-reassembly
    no ip route-cache
    ip route 0.0.0.0 0.0.0.0 Vlan1 10.129.124.1
    ip route 0.0.0.0 0.0.0.0 Vlan2 10.129.124.33
    ip nat inside source route-map nat1 interface Vlan1 overload
    ip nat inside source route-map nat2 interface Vlan2 overload
    ip access-list standard acl4-nexthop-vlan1
    permit 10.129.124.1
    ip access-list standard acl4-nexthop-vlan2
    permit 10.129.124.33
    route-map nat2 permit 10
    match ip address 102
    match ip next-hop acl4-nexthop-vlan2
    match interface Vlan2
    route-map nat1 permit 10
    match ip address 101
    match ip next-hop acl4-nexthop-vlan1
    match interface Vlan1
    control-plane
    Of course, there is some configuration pending for redundancy and stuff.
    Thanks a lot in advance.
    [1] http://www.cisco.com/c/en/us/support/docs/ip/network-address-translation-nat/100658-ios-nat-load-balancing-2isp.html

    Hello.
    This might be a bug in the debug output or in the IOS path used without ip cef, as routing is done before NAT (inside to outside).
    To make sure it works fine with ip cef, just enable strict uRPF (or simply an ACL) on the .1 and .33 interfaces and see whether any packet is sent over the wrong interface; a rough sketch follows below.
    PS: please check "sh ip cef 100.1.1.1"; I guess ip cef would tell you "per-destination sharing".
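    A rough sketch of that check, applied on the upstream routers that own 10.129.124.1 and 10.129.124.33 (the interface name and ACL number are hypothetical):
    interface GigabitEthernet0/0
     ! strict uRPF: drop packets whose source is not reachable back through this interface
     ip verify unicast source reachable-via rx
    Or the simpler ACL variant on the .1 side, since traffic leaving Vlan1 should always be NATed to 10.129.124.2:
    access-list 110 permit ip host 10.129.124.2 any
    access-list 110 deny ip any any log
    ! log hits would indicate packets that left the 871 on the wrong link
    interface GigabitEthernet0/0
     ip access-group 110 in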

  • Cisco 2811 Router with 3 ADSL card and load balancing

    Dear All,
    I have few queries:
    1. Does the Cisco 2811 router support 3 ADSL cards?
    2. We are an ISP. I want to do load balancing with 3 DSL lines on a Cisco 2811 router.
    Please send me the link for this configuration.
    Thanks/Regards
    Atul

    Hi,
    The 2811 has 4 HWIC slots and 1 NME slot; you can install a 1-port ADSL WAN interface card in each of the HWIC slots.
    Also, just configure 3 default (equal-cost) routes towards the interfaces, which will take care of the load balancing; see the sketch below.
    If you need more info and input, do post your requirements along with the network topology currently in place.
    Regards
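    A rough sketch of those three equal-cost default routes, assuming hypothetical Dialer1-Dialer3 interfaces terminating the three ADSL (PPPoE/PPPoA) sessions; where the peer's next-hop IP is known, pointing the routes at that address instead of the interface is generally preferable:
    ip route 0.0.0.0 0.0.0.0 Dialer1
    ip route 0.0.0.0 0.0.0.0 Dialer2
    ip route 0.0.0.0 0.0.0.0 Dialer3
    ! CEF then shares flows across the three paths per destination by default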
