ACE 4710 one-arm L4 load balancing removes accept-encoding?

We have built a simple one-arm PAT config to round-robin load balance two Varnish servers. In the "Default L7 load-balancing action" we have left compression set to "N/A". It looks like the ACE removes "Accept-Encoding: gzip, deflate" from the client header.
Is this normal behaviour? We would like Varnish to do the compression. Do we need to modify the headers to get this through the ACE?

Hi,
Yes this does seem to be the behavior. Please read below:
HTTP compression is a capability built into web servers and web browsers to improve site performance by reducing the amount of time required to transfer data between the server and the client. Performing compression on the ACE offloads that work from the server, thereby freeing up the server to provide other services to clients and helping to maintain fast server response times.
When you enable HTTP compression on the ACE, the appliance rewrites the client request header to "Accept-Encoding: identity" and turns off compression on the server-side connection. HTTP compression reduces the bandwidth associated with a web content transfer from the ACE to the client.
So the ACE rewrites the Accept-Encoding header to "identity" to indicate to the server that it should not compress the return data; the compression is then done by the ACE itself.
For compression to work, a client must send a request with an Accept-Encoding method of gzip or deflate. If a client sends both methods, the ACE uses the configured default method.
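For reference, compression on the ACE is enabled per L7 class with the "compress" action (the same command that appears in the cluster-deployment config further down this page). A minimal sketch, with hypothetical policy and serverfarm names:
policy-map type loadbalance first-match L7-SLB-POLICY
  class class-default
    serverfarm VARNISH-FARM
    compress default-method gzip  <--- with this line present the ACE compresses and rewrites Accept-Encoding to identity
Per the documentation quoted above, the rewrite to "identity" is tied to enabling compression on the ACE, so with no compress action configured the Accept-Encoding header would be expected to reach Varnish unchanged. If it is still being stripped, one workaround to test is re-inserting the header with an action-list (the header-insert syntax is shown in the one-armed-mode thread below).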
Also, you can check whether the ACE is compressing packets with "show service-policy detail":
switch/Admin# show service-policy L7_COMP_SLB_POLICY detail
Status     : ACTIVE
Description: -----------------------------------------
Interface: vlan 1 108
  service-policy: L7_COMP_SLB_POLICY
    class: vip
     VIP Address:    Protocol:  Port:
     2.0.5.1         tcp        eq    80
      loadbalance:
        L7 loadbalance policy: pm
        VIP ICMP Reply       : ENABLED
        VIP state: OUTOFSERVICE
        Persistence Rebalance: ENABLED
        curr conns       : 0         , hit count        : 0
        dropped conns    : 0
        client pkt count : 0         , client byte count: 0
        server pkt count : 0         , server byte count: 0
        conn-rate-limit      : 0         , drop-count : 0
        bandwidth-rate-limit : 0         , drop-count : 0
        L7 Loadbalance policy : pm
          class/match : h
            ssl-proxy client : c
            LB action :
               primary serverfarm: sf1
                    state: DOWN
                backup serverfarm : -
            hit count        : 0
            dropped conns    : 0
            compression      : on  <------------------------------ Compression is enabled if the value is "on"
compression  bytes_in  : 0       bytes_out : 0  <--- Number of bytes transmitted after compressing the server response
Compression ratio : 0.00%  <------------------------------ Percentage of data compressed
Gzip: 0               Deflate: 0  <--------------- Number of times the method is used
compression errors:                                     _
User-Agent  : 0               Accept-Encoding    : 0   |
Content size: 0               Content type       : 0   |
Not HTTP 1.1: 0               HTTP response error: 0   |-- Check these error counters to see if they are increasing
Let me know if you have any questions.
Regards,
Kanwal

Similar Messages

  • ACE in one-arm model. VIP on Client Side, servers in other vlan

    Hello All
I have a LAN with many servers, but only 2 need to be balanced, so I am thinking of the one-arm model so that the high volume of other traffic does not have to pass through the ACE.
I have a vlan 900 where the client side and the VIP are (10.0.9.64/26).
The servers are in vlan 503 (10.12.3.0/24).
This is my first design with one-arm, but I think something is missing, because it doesn't work.
The configuration is the following:
    MSFC:
    svclc module 1 vlan-group 1,2,
    svclc vlan-group 1 503,900-902
    svclc vlan-group 2 511
    interface Vlan503
    description OSS_&_Otros
    ip address 10.12.3.253 255.255.255.0
    standby 10 ip 10.12.3.254
    standby 10 priority 150
    standby 10 preempt delay minimum 305
    interface Vlan900
    description MSF_<->_ACE
    ip address 10.0.9.126 255.255.255.192
    end
    access-list 101 permit ip 10.12.3.0 0.0.0.255 10.0.9.64 0.0.0.63
    access-list 101 deny ip any any
    route-map From_Server_OSS_to_ACE permit 10
    match ip address 101
    set ip next-hop 10.0.9.125
    ACE_1/admin#
    ip route 0.0.0.0 0.0.0.0 10.0.9.126
    context OSS
    allocate-interface vlan 511
    allocate-interface vlan 900
    allocate-interface vlan 902
    member Max20
    ACE_1/OSS# sh run
    Generating configuration....
    access-list EVERYONE line 10 extended permit ip any any
    access-list EVERYONE line 20 extended permit icmp any any
    rserver host OSS_FES_1
    description OSS_Front_End_Server_1
    ip address 10.12.3.140
    inservice
    rserver host OSS_FES_2
    description OSS_Front_End_Server_2
    ip address 10.12.3.150
    inservice
    serverfarm host SERVER_farm_OSS
    rserver OSS_FES_1
    inservice
    rserver OSS_FES_2
    inservice
    class-map match-all VIP-OSS
    2 match virtual-address 10.0.9.66 any
    policy-map type loadbalance first-match OSS-LB-POLICY
    class class-default
    serverfarm SERVER_farm_OSS
    policy-map multi-match OSS-POLICY-MAP
    class VIP-OSS
    loadbalance vip inservice
    loadbalance policy OSS-LB-POLICY
    loadbalance vip icmp-reply
    interface vlan 900
    description Clients-side
    ip address 10.0.9.125 255.255.255.192
    access-group input EVERYONE
    access-group output EVERYONE
    service-policy input OSS-POLICY-MAP
    no shutdown
    ip route 0.0.0.0 0.0.0.0 10.0.9.126
Maybe I need to allocate vlan 503 in the OSS context, any advice?
Thanks in advance,
Gianni from Chile

Since your servers are not behind the ACE in either bridged or routed mode, add the following to your config and use NAT to get the traffic back to the ACE.
This is how one-armed mode works.
    ACE_1/OSS# sh run
    Generating configuration....
    access-list EVERYONE line 10 extended permit ip any any
    access-list EVERYONE line 20 extended permit icmp any any
    rserver host OSS_FES_1
    description OSS_Front_End_Server_1
    ip address 10.12.3.140
    inservice
    rserver host OSS_FES_2
    description OSS_Front_End_Server_2
    ip address 10.12.3.150
    inservice
    serverfarm host SERVER_farm_OSS
    rserver OSS_FES_1
    inservice
    rserver OSS_FES_2
    inservice
    class-map match-all VIP-OSS
    2 match virtual-address 10.0.9.66 any
    policy-map type loadbalance first-match OSS-LB-POLICY
    class class-default
    serverfarm SERVER_farm_OSS
    policy-map multi-match OSS-POLICY-MAP
    class VIP-OSS
    loadbalance vip inservice
    loadbalance policy OSS-LB-POLICY
    loadbalance vip icmp-reply
    nat dynamic 10 vlan 900
    interface vlan 900
    description Clients-side
    ip address 10.0.9.125 255.255.255.192
nat-pool 10 10.0.9.126 10.0.9.126 netmask 255.255.255.192 pat
    access-group input EVERYONE
    access-group output EVERYONE
    service-policy input OSS-POLICY-MAP
    no shutdown

  • Multiple WAN connections all through one router with load balancing?

I am setting up a network in my dormitory for myself and about 20 friends. About half of us have DSL connections at the moment. Is there a way to have all the DSL connections (possibly run through cheap home DSL routers) connect into a Cisco router that then acts as the gateway for our entire network? Would it be possible for each internet request to go out over the connection that has the least load, and also to use some sort of load balancing so one user can't use all of the outgoing/incoming bandwidth?
    If you have any ideas please let me know

    Hi Ian,
To get this working, you would either need to use something like PPP to bundle your links together or use a dynamic routing protocol.
In bundling the links, you could make them appear as one link, with a single IP address at each end, and the router takes care of distributing the load. To implement this, though, you would need control of both sides of the link, or be terminating with one carrier who is happy to implement this for you.
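Purely as an illustration of what "bundling" looks like on the router side (multilink PPP, with hypothetical interfaces and addressing; it only works if the far end is configured to match):
interface Multilink1
 ip address 203.0.113.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
With consumer DSL from different providers you cannot do this, which is why the other options below (or a single bigger link) are usually the realistic answer.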
The second option is to use a dynamic routing protocol (such as EIGRP, OSPF, etc.), which can build up a map of the network to route from point A to point B. For this you also need control of the link.
I can't think of another method, unless you can control the link from both sides. Your other option is to pool your money and buy a larger link or a leased line. If you bought a leased line or two, your carrier would be more than happy to talk to you about routing over that, but generally you're looking at mega bucks for that.
    HTH,
    Mark

  • ACE 4700 one-arm design with SSL termination

    Hi,
    We are evaluating the one-arm design for the ACE 4700 and need some clarifications:
    1. Are there any limitations in the one-arm design and the SSL offloading
    2. Can the ACE be configured with an IN and an OUT vlan to the router
    CLIENT -> Router -> ACE IN -> ACE OUT -> Router -> Server Vlan
    so that the SSL and the clear text traffic is in a separate Vlan?
3. In some sample configurations I saw SNAT configured on the ACE to modify the client IP. This, I assume, is to force the return traffic from the server to go through the ACE? Using SNAT, do we eliminate the requirement for NAT or PBR on the router? Will I still be able to insert the client IP address after the SSL offload?
I would appreciate it if you could share some sample configs.
    Regards,
    George Georgiou

There are two ways to implement a one-arm topology:
1. One-arm with PBR, and 2. One-arm with SRC NAT.
PBR/source NAT is needed to ensure that the return traffic from the real servers does not bypass the ACE.
    1. Are there any limitations in the one-arm design and the SSL offloading
The limitations/config issues I can think of are the following:
One-arm with PBR:
Direct access to the servers requires enabling asymmetric routing (by turning off normalization). If direct server access is not required, then you don't need to enable asymmetric routing. For these asymmetric connections (direct-server-access return traffic) you will want to purge idle connections more frequently (the default being one hour).
One-arm with SRC NAT:
You will lose the client information. Server logs will show the connections as initiated from the NAT IP pool configured on the ACE.
    2. Can the ACE be configured with an IN and an OUT vlan to the router
    CLIENT -> Router -> ACE IN -> ACE OUT -> Router -> Server Vlan
    so that the SSL and the clear text traffic is in a separate Vlan?
Yes, you can do that, but wouldn't that make it a routed-mode topology?
    3. In some sample configuration i saw SNAT configuration on the ACE to modify the client IP. This i assume is for instructing the return traffic from the server to go through ACE? Using SNAT we eliminate the requirement for NAT or PBR on the router? Will i still be able to insert the client IP address after the SSL offload?
As I said earlier, you lose the source IP address with SRC NAT. But with the ACE you have the option to use header insert and insert this source IP as an HTTP header, for example as sketched below.
    Details at
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A1/configuration/slb/guide/classlb.html#wp1040008
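A minimal sketch of that header-insert option (hypothetical names; the X-Forwarded-For header is just an example choice, and %is is the variable the ACE expands to the client source IP):
action-list type modify http INSERT-CLIENT-IP
  header insert request X-Forwarded-For header-value %is
policy-map type loadbalance first-match SLB-L7-POLICY
  class class-default
    serverfarm WEB-FARM
    action INSERT-CLIENT-IP
Because the action-list is applied under the L7 load-balance class, the header is added after SSL has been terminated on the ACE and before the request is sent to the real server.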
    HTH
    Syed Iftekhar Ahmed

  • Probe fail on Standby ACE in One-armed mode

    Hi there
    I'm Kilsoo.
I set up one-armed mode using the ACE.
The real servers are in a different VLAN from the ACE.
So I configured PBR with the ACE alias IP address as the next hop on the real servers' gateway interface.
The probe from the active ACE works well, but the probe from the standby ACE fails.
At this point, my first question:
Is it a normal situation that the probe fails from the standby ACE?
So I made the route-map for PBR like below as a temporary solution.
route-map PBR deny 5
 match ip address Probe_ACL
route-map PBR permit 10
 match ip address L4_ACL
 set ip next-hop <Alias IP address>
ip access-list extended Probe_ACL
 permit ip any host <Standby ACE's IP address>
ip access-list extended L4_ACL
 permit tcp <Real server's IP address> eq 80 any
    Second question...
    Do you have any other good solutions???
    Thanks

Hi Cesar,
Thanks for your reply.
But I think I was confused when I wrote the message.
I used both ACEs' VLAN IP address for the next-hop IP address, as you advised.
Do you know whether the standby ACE is unable to run probes without the route-map in one-armed mode, as in the diagram below?
    Backbone Router
             |
             |
             |
    Supervisor --------------------ACE(vserver: 172.19.100.100)
             |         (vlan 200)
             |
             |
             |(vlan 110)
             |
             |
    Real servers
    (172.19.110.111)

  • ACE: if one server is loaded and it want to use the server not loaded? how?

    Hello,
I have 2 real servers (10.24.8.200 and 10.24.8.201) load balanced (HTTP and HTTPS) behind VIP 10.24.16.10, and the load-balancing type is round robin. But when server 10.24.8.200 is under high load (for example memory or hard disk) and users are directed to it, responses are slower. If this server is too loaded, how can the ACE switch to the other real server, within 10 seconds for example?
    Best Regards
    My configuration is:
    ACE-MOD6/integracion1# sh runn
    Generating configuration....
    access-list anyone line 8 extended permit ip any any
    probe http get-index
    interval 4
    open 2
receive 2
    faildetect 2
    passdetect interval 10
    expect status 200 200
    rserver host Srv1
    ip address 10.24.8.200
    probe get-index
    inservice
    rserver host Srv2
    ip address 10.24.8.201
    probe get-index
    inservice
    serverfarm host servers
    rserver Srv1
    inservice
    rserver Srv2
    inservice
    class-map type management match-any ADM-CONTEX-SERV1
    2 match protocol telnet any
    3 match protocol ssh any
    4 match protocol icmp any
    class-map type http loadbalance match-all Check-Headers
    2 match http url .*
    3 match http header Host header-value "10.24.16.*"
    4 match http header User-Agent header-value ".*MSIE.*"
    class-map match-all VIP-10-HTTP
    2 match virtual-address 10.24.16.10 tcp eq www
    class-map type http loadbalance match-all other-HTTP
    2 match http url .*
    policy-map type management first-match ADM-CTX-SERV1
    class ADM-CONTEX-SERV1
    permit
    policy-map type loadbalance first-match L7-logic
    class Check-Headers
    serverfarm servers
    class other-HTTP
    serverfarm servers
    policy-map type loadbalance first-match lb-logic
    class class-default
    serverfarm servers
    policy-map multi-match client-vips
    class VIP-10-HTTP
    loadbalance vip inservice
    loadbalance policy L7-logic
    loadbalance vip icmp-reply active
    interface vlan 60
    description inside
    ip address 10.24.8.5 255.255.255.0
    access-group input anyone
    access-group output anyone
    service-policy input ADM-CTX-SERV1
    no shutdown
    interface vlan 233
    description outside
    ip address 10.24.16.5 255.255.255.0
    access-group input anyone
    access-group output anyone
    service-policy input ADM-CTX-SERV1
    service-policy input client-vips
    no shutdown
    ip route 0.0.0.0 0.0.0.0 10.24.16.1

    If your server is running an SNMP agent, the ACE can use SNMP to pull stats from the server. You'll just need the correct OID. For instance, if you were using Linux, you might use something like the following as a probe:
    probe snmp linux-stats
    interval 10
    community public
    oid .1.3.6.1.4.1.2021.10.1.5.1
    threshold 75
.1.3.6.1.4.1.2021.10.1.5.1 is the OID for CPU load average (for Linux; Windows would have a different OID). If it goes above 75, the server is marked as out. When used with the least-loaded predictor, the ACE will also divert more traffic to the least-loaded server, as defined by that OID. You can use multiple OIDs in conjunction and give them different weights.
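To actually steer traffic by that OID, you tie the probe to the serverfarm through the least-loaded predictor; a minimal sketch reusing the serverfarm and rserver names from your config (verify the exact predictor syntax on your ACE release):
serverfarm host servers
  predictor least-loaded probe linux-stats
  probe linux-stats
  rserver Srv1
    inservice
  rserver Srv2
    inservice
The separate "probe linux-stats" association is what marks a server out of service when the threshold is crossed (as described above); the predictor line is what weights new connections by the polled value.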
However, judging from the timeout value of your HTTP GET health check, I would check whether the issue isn't that your servers are flapping because of a too-low receive threshold. Each server has 2 seconds to respond to the ACE, which may not be enough time given that the servers may be getting a lot of traffic and you're doing these checks every 4 seconds.
If one fails, the other gets all the traffic until it is overloaded, and it fails. By this time, your other server has calmed down, gets all the traffic, and the cycle repeats itself. Check SNMP traps or syslog to see if this is the case.
Either way, you might want to change the timeout to 5 or 10 seconds, to give them more breathing room.

  • Need help to Configure Cisco ACE 4710 Cluster Deployment

    Dear Experts,
I'm a newbie with the Cisco ACE 4710 and still in the learning stage. Meanwhile I got a chance at my workplace to deploy a Cisco ACE 4710 cluster which should load balance HTTP and HTTPS traffic between two application servers. So I was looking for a good deployment guide in the Cisco SBA knowledge base and finally found this guide.
    http://www.cisco.com/en/US/docs/solutions/SBA/February2013/Cisco_SBA_DC_AdvancedServer-LoadBalancingDeploymentGuide-Feb2013.pdf
This guide matches my required deployment model. I have the same deployment environment as the guide, with an ACE cluster that connects to two Cisco 3750X (stacked) switches. But some places in this guide confuse me.
The guide uses "one-armed mode" as the deployment method. But when I go through it further, I notice that they have configured the server VLAN as 10.4.49.0/24 (all servers reside in it) and the client-side VIP in the same VLAN, 10.4.49.100/24 (even the NAT pool).
My confusion is this: as I understood the Cisco ACE 4710 one-armed deployment method, it should have two VLAN segments, one for the client side where client requests come in and hit the VIP, and a second one for the server side, which basically means two VLANs. So please be kind enough to go through the above document and tell me where I am wrong and what I should do. This is urgent, so I need your help quickly.
    Thanks....!
    -Amal-

    Dear Kanwal,
I need your quick help. Following are the application LB requirements which I received from my client.
The following details are required for configuring the Oracle EBS Apps tier in HA:
LBR IP and name required to configure the EBS APPS tier (i.e., the ap1ebs & ap2ebs nodes)
Suggested IP and name for the LBR:
IP : 172.25.45.x [should be on the same 172.25.45 subnet as the ap1ebs & ap2ebs nodes]
ebiz.xxxx.lk [on port 80 for http protocol accessibility]
This LBR IP & name must be resolvable and respond on the DNS network
Server farm detail for the LBR setup
The following details will be used for configuring the LBR:
LBR IP and name:
IP : 172.25.45.x [should be on the same 172.25.45 subnet as the ap1ebs & ap2ebs nodes]
ebiz.xxxx.lk [on port 80 for http protocol accessibility]
This LBR IP & name must be resolvable and respond on the DNS network
    Server Farm Detail for LBR setup:
    Server 1 (EBS App1 Node, ap1ebs):
    IP : 172.25.45.19
Server Name: ap1ebs.xxxx.lk [ap1ebs hostname is an example, the actual hostname will be used]
    Protocol: http
    Port: 8000
    Server 2 (EBS App2 Node, ap2ebs):
    IP : 172.25.45.20
Server Name: ap2ebs.xxxx.lk [ap2ebs hostname is an example, the actual hostname will be used]
    Protocol: http
    Port: 8000
My client needs to access the URL ebiz.xxxx.lk, which should resolve to IP 172.25.45.21 (the virtual IP) via http (80). Before they deployed the app on the two servers, I just ran a web service on both servers (Linux) and tried to access http://172.25.45.21; it worked fine and gave me the index.html page. Now that my client has deployed the application, when he tries to access http://172.25.45.21 he cannot see his main login page. My test web servers are still there on both servers, and when I type http://172.25.45.21 I still get the index.html page, but not my client's web login page. What can I do about this?
Following is my latest config:
    probe http Get-Method
      description Check to url access /OA_HTML/OAInfo.jsp
      interval 10
      faildetect 2
      passdetect interval 30
      request method get url /OA_HTML/OAInfo.jsp
      expect status 200 200
    probe udp http-8000-iRDMI
      description IRDMI (HTTP - 8000)
      port 8000
    probe http http-probe
      description HTTP Probes
      interval 10
      faildetect 2
      passdetect interval 30
      passdetect count 2
      request method get url /index.html
      expect status 200 200
    probe https https-probe
      description HTTPS traffic
      interval 10
      faildetect 2
      passdetect interval 30
      passdetect count 2
      ssl version all
      request method get url /index.html
    probe icmp icmp-probe
      description ICMP PROBE FOR TO CHECK ICMP SERVICE
    rserver host ebsapp1
      description ebsapp1.xxxx.lk
      ip address 172.25.45.19
      conn-limit max 4000000 min 4000000
      probe icmp-probe
      probe http-probe
      inservice
    rserver host ebsapp2
      description ebsapp2.xxxx.lk
      ip address 172.25.45.20
      conn-limit max 4000000 min 4000000
      probe icmp-probe
      probe http-probe
      inservice
    serverfarm host ebsppsvrfarm
      description ebsapp server farm
      failaction purge
      predictor response app-req-to-resp samples 4
      probe http-probe
      probe icmp-probe
      inband-health check log 5 reset 500
      retcode 404 404 check log 1 reset 3
      rserver ebsapp1 80
        conn-limit max 4000000 min 4000000
        probe icmp-probe
        inservice
      rserver ebsapp2 80
        conn-limit max 4000000 min 4000000
        probe icmp-probe
        inservice
    sticky http-cookie jsessionid HTTP-COOKIE
      cookie insert browser-expire
      replicate sticky
      serverfarm ebsppsvrfarm
    class-map type http loadbalance match-any default-compression-exclusion-mime-type
      description DM generated classmap for default LB compression exclusion mime types.
      2 match http url .*gif
      3 match http url .*css
      4 match http url .*js
      5 match http url .*class
      6 match http url .*jar
      7 match http url .*cab
      8 match http url .*txt
      9 match http url .*ps
      10 match http url .*vbs
      11 match http url .*xsl
      12 match http url .*xml
      13 match http url .*pdf
      14 match http url .*swf
      15 match http url .*jpg
      16 match http url .*jpeg
      17 match http url .*jpe
      18 match http url .*png
    class-map match-all ebsapp-vip
      2 match virtual-address 172.25.45.21 tcp eq www
    class-map type management match-any remote_access
      2 match protocol xml-https any
      3 match protocol icmp any
      4 match protocol telnet any
      5 match protocol ssh any
      6 match protocol http any
      7 match protocol https any
      8 match protocol snmp any
    policy-map type management first-match remote_mgmt_allow_policy
      class remote_access
        permit
    policy-map type loadbalance first-match ebsapp-vip-l7slb
      class default-compression-exclusion-mime-type
        serverfarm ebsppsvrfarm
      class class-default
        compress default-method deflate
        sticky-serverfarm HTTP-COOKIE
    policy-map multi-match int455
      class ebsapp-vip
        loadbalance vip inservice
        loadbalance policy ebsapp-vip-l7slb
        loadbalance vip icmp-reply active
        nat dynamic 1 vlan 455
    interface vlan 455
      ip address 172.25.45.36 255.255.255.0
      peer ip address 172.25.45.35 255.255.255.0
      access-group input ALL
      nat-pool 1 172.25.45.22 172.25.45.22 netmask 255.255.255.0 pat
      service-policy input remote_mgmt_allow_policy
      service-policy input int455
      no shutdown
    ft interface vlan 999
      ip address 10.1.1.1 255.255.255.0
      peer ip address 10.1.1.2 255.255.255.0
      no shutdown
    ft peer 1
      heartbeat interval 300
      heartbeat count 10
      ft-interface vlan 999
    ft group 1
      peer 1
      no preempt
      priority 110
      associate-context Admin
      inservice
    ip route 0.0.0.0 0.0.0.0 172.25.45.1
Hope you will reply soon.
    Thanks....!
    -Amal-

  • 4710 in one-armed mode

Is it possible to preserve the client's originating IP address somewhere while using the 4710 in one-armed mode? I have a situation where the client source IP is needed, and I am deciding between one-armed mode and inline. I'd like to use one-armed mode, so that only load-balanced traffic traverses the load balancer, but I haven't seen an example where that can be done without losing the client's source address.

The only thing I can think of is HTTP header insertion. Create an action-list that inserts the original client source IP/port into the HTTP header. The configuration is quite simple:
    action-list type modify http name
      header insert both Host header-value %is:%ps
    Then apply the action-list to your loadbalance policy-map.
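Something along these lines (a sketch with hypothetical policy, class and serverfarm names; "name" is whatever you called the action-list above):
policy-map type loadbalance first-match SLB-L7-POLICY
  class class-default
    serverfarm WEB-FARM
    action name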
    Take a look at the url below for futher information:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/classlb.html#wp1131842
But that depends on your situation. If the original client source IP/port is expected in the L3/L4 header, this won't cut it. Is this for logging purposes or some form of packet filtering?
If you intend to run your ACE in one-arm mode then, in my opinion, source NAT plus header insertion is your only option.
    hth
    /Ulrich

  • Load balancing imbalance in ACE

We are facing slowness in an HTTP application, which is due to connection imbalance. This setup has one load balancer and a proxy in the DMZ where connections from the users are terminated, and a load balancer inside the LAN which load balances between the end-point servers. All user connections terminate on the DMZ load balancer / proxy, and the proxy connects back to the internal load balancer VIP (collating a large number of connections into very few, the default proxy behavior). The internal load balancer VIP does load balancing based on the number of connections in a least-loaded manner; this load balancer doesn't see how many sessions are carried inside each connection, and it distributes each connection to a server underneath. Thus one connection may carry around 100 sessions while another has only a few, and each gets forwarded to an end server, causing the imbalance.
Is there a way this imbalance can be tackled in this setup?
Users --> Proxy --> Load balancer (Cisco ACE) --> Server 1
                                                  Server 2
                                                  Server 3
    Least Connections predictor
    HTTP Cookie insert sticky

Hi,
Persistence rebalance should solve the issue for you.
The persistence-rebalance function is required if you have proxy users and the proxy shares one TCP connection between multiple users.
With this behavior, inside a single connection you will see different cookies. Therefore, for each cookie, the ACE needs to first detect the new cookie and then load balance to the appropriate server.
This is from the Admin Guide:
    The following example specifies the parameter-map type http command to enable HTTP persistence after it has been disabled:
    host1/Admin(config)# parameter-map type http http_parameter_map
    Host1/Admin(config-parammap-http)# persistence-rebalance
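For the parameter map to take effect it also has to be attached to the VIP class in the multi-match policy; a short sketch with hypothetical policy and class names, reusing the parameter-map name from the example above:
policy-map multi-match CLIENT-VIPS
  class VIP-HTTP
    loadbalance vip inservice
    loadbalance policy L7-POLICY
    appl-parameter http advanced-options http_parameter_map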
Please refer to the following link for more info:
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA4_2_0/configuration/slb/guide/classlb.html#wp1062907
    hope that helps,
    Ajay Kumar

  • Timeouts on non load balanced traffic thru ACE

I have a backend server creating a connection to a DB server outside the ACE environment. This traffic is using the L3 function of the ACE and is not being load balanced. The connection is timing out after 1 hour. I have normalization disabled on the backend server VLAN but not on the front-side VLAN of the ACE.
    2 Questions:
- With normalization disabled, do I still need to change the TCP inactivity timeout for this traffic? Or with normalization disabled, shouldn't the non-load-balanced traffic be L3 routed and not affected by the TCP timeout value?
- Also, do I need to disable normalization on the front-side VLAN of the ACE?
    thanks,
    kurt

    As per
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A1/configuration/security/guide/tcpipnrm.html#wp1075741
    "Disabling TCP normalization affects only Layer 4 traffic. TCP normalization is always enabled for Layer 7 traffic."
By disabling TCP normalization, the following Layer 4 connection parameters are ignored:
exceed-mss - Configure behavior if a packet exceeds MSS
random-seq-num-disable - Disable TCP sequence number randomization
reserved-bits - Configure reserved bits in the TCP header
syn-data - Configure behavior for a SYN packet containing data
tcp-options - Configure TCP header options
urgent-flag - Allow/clear the Urgent flag
I think you will still need the "set timeout inactivity xxxx" command even if the "no normalization" command is configured, for example via a connection parameter map as sketched below.
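A rough sketch of how that timeout is usually applied (hypothetical names; the class-map would match your DB traffic, and the timeout value is in seconds):
parameter-map type connection LONG-IDLE
  set timeout inactivity 7200
policy-map multi-match L4-POLICY
  class DB-TRAFFIC
    connection advanced-options LONG-IDLE
The policy is then applied with "service-policy input" on the VLAN interface where the backend-to-DB traffic enters the ACE.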
    Syed Iftekhar Ahmed

  • FTPS with ACE 4710

    Hi,
I need to configure the ACE for load balancing FTPS, and simply deploying L4 policies is not helping. I configured the FTPS servers and both of them work fine when accessed via their physical IPs, but they do not work when accessed via the VIP.
If possible, a reference URL would really be a great help.

    Hi Rajiv,
    Do you want to loadbalance SFTP ?
    Or just have it pass through ??
    Loadbalancing SFTP is difficult because it starts as regular FTP and switches over to SSL which ACE can't do and fails to understand.
    you don't need anything to have it passthrough.
    As long as you don't ask ACE to inspect the traffic, and assuming this traffic is permitted in your access-group, then there is nothing to do to have it go through.
    I think your goal is to distribute inbound file deposits evenly across SFTP servers.
    High-level Overview
    Clients -> Internet -> Tier-1 Firewall -> ACE Load-balancer -> SFTP Servers
I would like to tell you that SFTP is nothing but SSH. It uses a single connection. There are no issues load balancing it with traditional Layer 4 load balancing.
So you are good; a bare-bones L4 sketch is below.
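A minimal L4 VIP for SFTP might look like this (hypothetical names and addresses; TCP/22 is simply handed to the serverfarm with no inspection):
rserver host SFTP1
  ip address 192.0.2.11
  inservice
rserver host SFTP2
  ip address 192.0.2.12
  inservice
serverfarm host SFTP-FARM
  rserver SFTP1
    inservice
  rserver SFTP2
    inservice
class-map match-all SFTP-VIP
  2 match virtual-address 192.0.2.10 tcp eq 22
policy-map type loadbalance first-match SFTP-LB
  class class-default
    serverfarm SFTP-FARM
policy-map multi-match CLIENT-VIPS
  class SFTP-VIP
    loadbalance vip inservice
    loadbalance policy SFTP-LB
Depending on your topology you would also apply the multi-match policy with "service-policy input" on the client-facing VLAN and, in one-arm mode, add source NAT so the return traffic comes back through the ACE.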
On the other hand, FTP over SSL (FTPS) can neither be offloaded nor load balanced by the ACE.
FTPS uses multiple channels, and since the control channel is encrypted, the ACE is not able to learn the port numbers for the data connections.
Kindly find these examples of FTP load balancing on the Cisco ACE:
    1. FTP serverfarm on Cisco ACE
    http://snippets101.blogspot.com/2007/06/ftp-serverfarm-on-cisco-ace.html
    2. FTP Load Balancing on ACE in Routed Mode Configuration Example
    http://docwiki.cisco.com/wiki/FTP_Load_Balancing_on_ACE_in_Routed_Mode_Configuration_Example
    3. FTP Load Balancing on ACE in One-Arm Mode Configuration Example
    http://docwiki.cisco.com/wiki/FTP_Load_Balancing_on_ACE_in_One-Arm_Mode_Configuration_Example
Kindly refer to the following URLs for Layer 4 policies:
    http://cisco.com/en/US/products/hw/modules/ps2706/products_configuration_example09186a00809c3048.shtml
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/classlb.html
    http://docwiki.cisco.com/wiki/Cisco_Application_Control_Engine_(ACE)_Module_Troubleshooting_Guide,_Release_A2(x)_--_Troubleshooting_Layer_4_Load_Balancing
    http://snippets101.blogspot.com/2008/08/cisco-ace-and-private-vlans-in-switch.html
    http://snippets101.blogspot.com/2008/08/asymmetric-server-normalization-on.html
    http://docwiki.cisco.com/wiki/Cisco_ACE_4700_Series_Appliance_Quick_Start_Guide,_Release_A3(1.0)_--_Configuring_Server_Load_Balancing
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_7_/configuration/security/guide/tcpipnrm.html#wpmkr1116809
Hope this helps you further in configuring ACE Layer 4 load-balancing policies.
Kindly rate.
    Sachin Garg

  • Load balancing ssl that terminates on servers

    hi,
Right now I have a very simple clear-text HTTP + HTTPS setup. Initially my load balancer was terminating SSL, but because of the way our application works, we moved away from that and installed an SSL server on the servers themselves, which we know works fine when we access the servers directly.
On the CSS I have a very simple SSL balance rule:
    content srv.443
    add service srv1.ssl
    add service srv2.ssl
    advanced-balance sticky-srcip
    protocol tcp
    port 443
    url "/*"
    vip address 10.72.39.17
    active
    service srv1.ssl
    ip address 10.72.39.71
    protocol tcp
    keepalive port 51001
    port 51001
    active
    service srv2.ssl
    ip address 10.72.39.72
    protocol tcp
    port 51001
    keepalive port 51001
    active
The problem I'm seeing right now is that even though I deleted all config regarding SSL termination on the CSS, every time I hit the 'ssl-vip' I still get the locally generated certificate instead of the valid one I get when hitting the web servers directly.
It's weird that the CSS keeps trying to use its own certificate when all related config has been deleted.
Now I have a question. I assumed there was no problem load balancing SSL traffic when the traffic is terminated on the servers themselves; now I'm not so sure. So an initial question is: can this be done?
Regards,
c.

Yes, SSL can be terminated on the servers and load balanced by the CSS.
You should remove the "url" line from your config, because the traffic is now encrypted and the CSS can't see the URL; the rule would then look like the sketch below.
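For reference, the content rule from above with the url line dropped (everything else unchanged):
content srv.443
  add service srv1.ssl
  add service srv2.ssl
  advanced-balance sticky-srcip
  protocol tcp
  port 443
  vip address 10.72.39.17
  active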
    If the config is what you indicated, there is no way the CSS can send its own certificate.
    Absolutely no way :-)
    Are you sure your server is sending the correct certificate ?
    Gilles.

  • How to properly load balance between diffrent server farms.

    Hi experts,
We are using an ACE 4710. We chose to load balance our server farms using the least-connections predictor. It seems to work fine inside the same server farm, but does it work properly across server farms? It doesn't seem to, because some of my real servers appear to be more loaded than others. Each server farm is using the same real servers.
Any idea what the problem is, or any suggestion on the best load-balancing predictor to use with this kind of configuration?
Thanks to all.

The ACE uses load-balancing algorithms, or predictors, to determine how to balance the traffic among the devices configured in a server farm, independent of the device type. For FWLB, we recommend that you use only the hash address source and hash address destination predictors. Using any other predictor with FWLB may fail and block traffic, especially for applications that have separate control and data channels.
    Here is the configuration guide for the Cisco ACE 4700 Series Appliance Server Load-Balancing.
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_7_/configuration/slb/guide/fwldbal.html
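If you want to experiment with the predictor that recommendation describes, it is set per serverfarm; a small sketch with hypothetical names (hash address source keeps all connections from one client on the same device):
serverfarm host FW-FARM
  predictor hash address source 255.255.255.255
  rserver FW1
    inservice
  rserver FW2
    inservice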

  • A question about the SharePoint services load balancer

    Let's consider a farm with one WFE and two app servers, A and B. Both app servers are running the Managed Metadata Service (MMS). 
    User requests a page from the WFE, which talks to the database server. The operation needs information from the MMS, so the WFE requests information from the round robin load balancer for SharePoint web services. Let's say server A is down. 
    Here's my question - what happens next?
    a) The round robin load balancer tells the WFE the MMS is on servers A & B. The WFE tries server A, fails, and returns a failure. 
    b) The round robin returns servers A & B. The WFE tries server A, which fails. The WFE then tries server B.
    c) The round robin returns either A or B, depending on which is next in rotation. The WFE tries the server returned. If the server returned is A, the WFE returns a failure. 
    d) The round robin returns either A or B, depending on which is next in rotation. The WFE tries the server returned. If the server returned is A, the WFE queries the round robin service again.
    e) The round robin knows server A is down, returns only server B to the WFE. 
Philo Janus, MCP. Bridging business & technology: http://www.saintchad.org/ Telecommuter? http://www.homeofficesurvival.com/ Author: Pro InfoPath 2007 & Pro InfoPath 2010, Pro PerformancePoint 2007, Pro SQL Server Analysis Services 2008, Building Integrated Business Intelligence Solutions

When a Service Application is down, the application load balancer removes that endpoint from the load balancer. When it becomes available again, it adds it back. This way the WFE would just contact the MMS endpoint that was available, and not try and time out against an unavailable endpoint.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Load balancing across multiple application servers not working with JCo RFC

    We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
Our system comprises the following:
    Central Instance (2 X server nodes)
    Database Instance
    2 X Dialog Instances (with 2 X server nodes each)
    The 1st application server that starts is usually the one that is used for inbound messaging.
We have looked at the SAP gateway configuration and have tried various options without much luck,
i.e. local gateways vs. one central gateway, and the load-balancing type by changing parameter gw/reg_lb_level; see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
    Here are our release levels:
    SAP_ABA     700     0012     SAPKA70012
    SAP_BASIS     700     0012     SAPKB70012
    PI_BASIS     2005_1_700     0012     SAPKIPYJ7C
    ST-PI     2005_1_700     0005     SAPKITLQI5
    SAP_BW     700     0013     SAPKW70013
    ST-A/PI     01J_BCO700     0000          -
    Any help would be greatly appreciated.
    Many thanks

    Tim
    Did you follow the guide here:
    How to Scale Up SAP Exchange Infrastructure 3.0  
    Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
We followed this guide for XI 3.0 and PI 7.0 and it works successfully!
