Load Balancing proxy-based firewalls

I need to load balance HTTP and SSL traffic through proxy-based firewalls (Gauntlet) to a server farm. I've been told I can't use the usual firewall load-balancing paths; instead I need to load balance the firewalls as if they were servers, which would then proxy the sessions to the internal content switch, which in turn load balances to the servers.
Any ideas whether this will work, or how to do it? I also need to keep the SSL sessions sticky.

Could you clarify what you mean by proxy firewall?
Is it just a proxy server with some filtering features?
If so, what was suggested to you is correct.
You define your proxy servers as services and then simply configure a content rule for port 8080 or 80 (whatever your proxies listen on) and another content rule for port 443 SSL (or whatever port your proxies are set up for).
If each proxy is set up to use its own IP address when requesting content, the response always comes back to the right proxy, so there is no need for the firewall load-balancing feature.
An example:
service proxyfw1
  ip address x.x.x.x1
  active
service proxyfw2
  ip address x.x.x.x2
  active
owner mycompany
  content HTTPproxy
    vip address x.x.x.x
    add service proxyfw1
    add service proxyfw2
    protocol tcp
    port 8080
    active
  content SSLproxy
    vip address x.x.x.x
    add service proxyfw1
    add service proxyfw2
    protocol tcp
    port 443
    application ssl
    advanced-balance ssl
    active
Then you set up your browsers to point to the proxy address x.x.x.x, port 8080 for HTTP and 443 for SSL.
Gilles.

Similar Messages

  • Query about Load-Balancer 'proxy'

    Hi,
    When using load-balancer 'proxy', with multiple remote addresses defined, does the client randomly select the initial connection from the list of remote connections in the config file?
    I know the proxy will redirect a client to a less loaded proxy; however, I want to distribute the initial connections randomly. In our configuration we will have a lot of Extend clients, and if they all connect to the first proxy in the list, that proxy will run hot (and possibly fall over).
    Hopefully I've explained that OK? It's quite a tongue-twister of technical terms. Anyhow, if someone knows the answer I'd be grateful, as I can't find any clarification in the documentation.
    Cheers
    Rich

    Rich,
    When multiple remote addresses are defined, Coherence randomizes the address list defined in the configuration file and connects to the next address in the (randomized) list.
    -Luk

  • Load Balance https based on url

    I am trying to configure an ACE 4710 to load balance based on the URL: if the request matches a specific URL ( /456/ ), the traffic should be sent to server farm 456; otherwise it should be sent to server farm 123.
    I attached an image of the topology.
    Ace Config:
    rserver host SRV01_123
      ip address 192.168.1.101
      inservice
    rserver host SRV02_123
      ip address 192.168.1.102
      inservice
    rserver host SRV01_456
      ip address 192.168.1.111
      inservice
    serverfarm host farm_123
      rserver SRV01_123
        inservice
      rserver SRV02_123
        inservice
    serverfarm host farm_456
      rserver SRV01_456
        inservice
    class-map match-all VIP_Application
      2 match virtual-address 192.168.1.10 tcp eq https
    class-map type http loadbalance match-all L7_server_456
      2 match http url /456/
    policy-map type loadbalance http first-match LB_Application
      class L7_server_456
        serverfarm farm_456
      class class-default
        serverfarm farm_123
    policy-map multi-match ServerGroup1_PM
      class VIP_Application
        loadbalance vip inservice
        loadbalance policy LB_Application
        loadbalance vip icmp-reply
    interface vlan 70
      bridge-group 1
      no shutdown
    interface vlan 700
      bridge-group 1
      service-policy input ServerGroup1_PM
      no shutdown
    Thanks

    Hi John,
    If you want to do the offload on the ACE, also called SSL termination, it is a two-step process:
    1- Upload your certificate and key to the ACE using FTP or one of the other available import methods.
    2- Create the SSL proxy service that references those two files, and finally add that service under the multi-match policy for the VIP in question (see the sketch at the end of this reply).
    You also need to decide whether you want to keep your servers listening on the encrypted port (a two-way encryption process called end-to-end SSL) or change the backend port to 80 and leave all the decryption to the ACE (this is transparent to the client; the site still shows up as HTTPS).
    Here you can take a look at the SSL termination process (using a clear-text port on the backend servers).
    Official Configuration Example
    http://www.cisco.com/en/US/partner/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA4_1_0/configuration/ssl/guide/terminat.html
    Cisco Wiki Example
    http://docwiki.cisco.com/wiki/SSL_Termination_on_the_Cisco_Application_Control_Engine_Without_an_Existing_Chained_Certificate_and_Key_in_Routed_Mode_Configuration_Example
    HTH
    Pablo
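    For reference, a minimal sketch of the termination piece described above (the ssl-proxy service name is a placeholder, and mycert.pem / mykey.pem stand for whatever files you imported in step 1):
    ssl-proxy service SSL_TERMINATION
      key mykey.pem
      cert mycert.pem
    policy-map multi-match ServerGroup1_PM
      class VIP_Application
        loadbalance vip inservice
        loadbalance policy LB_Application
        loadbalance vip icmp-reply
        ssl-proxy server SSL_TERMINATION
    With SSL terminated on the ACE, the L7 URL match on /456/ is performed on the decrypted traffic and the backend servers can listen on port 80.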

  • ACE to load balance proxy servers

    Hi,
    I have a set of 4 proxy servers that are already load balanced, but they are using an incorrectly configured health probe on the ACE. I need a good configuration for a health probe that will send an HTTP request over port 80, wait for the response, and read it. I searched the forum and the Cisco pages but could not find a proper answer.
    the current probe is as follows:
    probe http HTTPGET
      description Tests that www.gmail.com returns 302 redirect
      interval 10
      request method get url http://www.gmail.com
      expect status 302 302
    -Gordon

    Hi Gordon,
    This is what you want to achieve:
    "I need a good configuration for a health probe that will send an HTTP request over port 80, wait for the response, and read it."
    So essentially you have to choose what content you want to request and what you expect as the response.
    Any HTTP probe assumes the request is going to a web server, or at least to a device that understands HTTP and responds accordingly.
    If you ask me, the probe you are already using makes sense:
    if it fails, that means the proxy is unable to reach "www.gmail.com", which is almost as good as saying the proxy is not working.
    Let me know what you think.
    regards,
    Ajay Kumar
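    As a rough sketch (not the only valid probe), this is how such a probe can be attached to the proxy serverfarm; PROXY-FARM, PROXY1 and PROXY2 are placeholder names, and the expect range is widened to accept either a 200 page or a redirect:
    probe http HTTPGET
      interval 10
      passdetect interval 30
      request method get url http://www.gmail.com
      expect status 200 399
    serverfarm host PROXY-FARM
      probe HTTPGET
      rserver PROXY1
        inservice
      rserver PROXY2
        inservice
    If you prefer the stricter check, keep the original "expect status 302 302"; the important part is that the probe is applied under the serverfarm so a failing proxy is taken out of rotation.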

  • Load balancing of PIX firewalls with multiple DMZs

    I need a suggestion about how to balance the traffic through two PIX firewalls, with 4 interfaces (IN,OUT,DMZ1,DMZ2)
    In all the documentation related to the subject, I see always the firewalls with only two interfaces:
    http://www.cisco.com/warp/customer/117/fw_load_balancing.html
    http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/advcfggd/firewall.htm
    What if I need to balance on more than 2 interfaces?
    Do I have to add more content switches, one for each interface ?
    Or could I use VLANs inside the same content switches, and assign the ports to DMZs appropriately ?
    Thank you in advance for any help.

    We just had some internal discussions about this at my work, and the suggestion from a local Cisco specialist was: if you want to leverage load balancing over multiple DMZs, get the CSS blades for the 65xx. Right now we have multiple CSS and LD failover pairs (one pair for each DMZ) and it is starting to become expensive, while we aren't really utilizing their full capacity. If you get the blades, they have Gigabit traces to the backplane of the switch, and you can use them for as many ports as you have on the 6500.
    Then again, it depends on whether physical security is essential to you and whether you are concerned about L2 attacks (VLAN hopping, etc.). There are tradeoffs and benefits to using a consolidated infrastructure.

  • Load balancing proxy chain with LD

    Next week we have to do some consulting at a customer's site; the customer owns 4 LD 416s and wants full HA balancing of his web proxy chain, consisting of 2 proxy servers, 2 viruswalls and 2 applet traps.
    In the current configuration he routes HTTP requests from internal clients through a firewall and LD1 into the DMZ1 proxy, then through the firewall and LD2 to the DMZ2 viruswall, then through the firewall and LD1 back to the DMZ1 applet trap, and finally towards the Internet. This results in a tremendous load on the firewall box.
    Our suggestion to overcome this is to set up two VLANs on interfaces 2 and 3 of LD1. The proxy servers will reside in VLAN2, the viruswall in VLAN3, and the applet trap in VLAN2 again, so the LD can bridge all the VLANs and balance the complete proxy chain.
    Will this work? Anything we overlooked? Is there somebody out there who has done something similar before? What configuration specialties have to be taken into account?
    Thanks in advance,
    Oliver


  • ACE30 Load balancing based on IP and using x-forward-for header

    Hi Guys,
    We currently have a load-balancing policy set up to direct traffic to, say, FARM-A based on a particular range of source (client) IP addresses, and to the default FARM-B for all other traffic.
    We are now looking to introduce a web application firewall (WAF) in front of the ACE. The WAF will insert the client IP address into the X-Forwarded-For HTTP header. I was wondering how best we can achieve load balancing based on source IP, given that we'll now have to parse the HTTP header for this X-Forwarded-For field. Are there any examples that anyone can point me to?
    let me know if you have any questions.
    thanks
    Sheldon

    Hi Sheldon,
    You might try creating a class map that matches on the XFF header and using that as the L7 load-balancing criterion, and/or distributing based on the hash of the XFF header value using the predictor hash header.
    -Alex
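    A rough sketch of both options Alex mentions (the header pattern, farm and rserver names are placeholders, and FARM-B is assumed to be defined elsewhere):
    class-map type http loadbalance match-all L7-XFF-RANGE
      2 match http header X-Forwarded-For header-value "10\.1\.1\..*"
    serverfarm host FARM-A
      predictor hash header X-Forwarded-For
      rserver SRV-A1
        inservice
    policy-map type loadbalance http first-match LB-XFF
      class L7-XFF-RANGE
        serverfarm FARM-A
      class class-default
        serverfarm FARM-B
    The class map keeps the existing "steer this source range to FARM-A" behaviour using the XFF value, while predictor hash header keeps a given client on the same server within a farm even though all connections now arrive from the WAF's address.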

  • Load Balance method for proxy - ISA or BlueCoat

    Hi,
    I would like to know which load-balancing method (src-ip, cookie, etc.) is most suitable for load balancing proxy servers such as ISA or BlueCoat. The proxies will listen for many services - HTTP, HTTPS, FTP, and so on. Thanks for the help.

    The methods you mention are not load-balancing techniques but stickiness features.
    Stickiness is not always necessary.
    For caching devices, it is good to always send users requesting the same object to the same proxy, so that the object is not cached on every proxy.
    Therefore, the solution in this case is load balancing with URL hashing.
    For HTTPS, if you terminate SSL on the load balancer, you can use the same solution.
    For all other traffic, I would suggest starting with round robin and seeing after a while whether it needs adjusting.
    Gilles.
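    A minimal CSS sketch of the URL-hash approach for the plain HTTP proxy traffic (addresses and names are placeholders; HTTPS and FTP would get their own content rules):
    service proxy1
      ip address 10.1.1.11
      active
    service proxy2
      ip address 10.1.1.12
      active
    owner proxies
      content proxy-http
        vip address 10.1.1.100
        protocol tcp
        port 8080
        url "/*"
        balance urlhash
        add service proxy1
        add service proxy2
        active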

  • Load Balancing E-Business Suite 11i using BIG-IP

    "Has anyone deployed an Oracle E-Business Suite 11i solution in a load balanced environment based on the F5 BIG-IP 2400 device?"
    Background:
    When load balanced, Oracle Forms requires a form of persistence to be in place, presumably to maintain state information.
    If using simple persistence based on the client source IP address, there is no problem.
    However, in our environment thousands of clients are hidden behind the single IP address of a proxy server, so simple persistence will not provide true load balancing.
    The alternative is cookie-based persistence, which would allow true load balancing even with clients hidden behind a proxy. The challenge here is that Oracle Forms is Java rather than HTTP based, which means BIG-IP cannot insert an HTTP cookie into the Java traffic the Oracle server sends to the client.
    If anyone has come across this issue and found a way round it, could you please describe how this is achieved? Either by configuration of the BIG-IP switch or at the Oracle Application side.

    Metalink doc ID 290807.1 says that Internet Explorer 8 is now certified with Sun JRE 1.6.0_03 and higher. I have JRE 1.6.0_07 with Internet Explorer 8 for my Oracle 11i, and the windows freeze up consistently; everything works fine with IE 7, but I have users on both IE 7 and IE 8. Could anyone help me with this issue? My full version is Oracle 11.5.10.2 and my desktop is Windows XP.
    Thanks in advance

  • 3rd party distributed SW load balancing with In-Memory Replication

    Hi,
    Could someone please comment on the feasibility of the following setup?
    I've started testing replication with a software load-balancing product. This product lets all nodes receive all packets and uses a kernel-level filter so that only one node at a time accepts each packet. Since there is at least one heartbeat link between the nodes, there are several NICs in each node.
    At the moment it seems like it doesn't work:
    - I use the SessionServlet.
    - With a 2-node cluster I first have the 2 nodes up and access it with a single client: the LB is configured to be sticky with respect to source IP address, so the same node gets all the traffic.
    - When I stop the node receiving the traffic, the other node takes over (I changed the colours of SessionServlet); however, the counter restarts at zero.
    From what I read of the in-memory replication documentation I thought that it might also work with a distributed software load-balancing cluster. Any comments on the feasibility of this?
    Is there a way to debug replication (in WLS6SP1)? I don't see any replication messages in the logs, so I'm not even sure that it works at all.
    - I do get a message about "Clustering Services starting" when I start the examples server on each node.
    - Is there anything to look for in the console to make sure that things are working?
    - The evaluation license for WLS6SP1 on NT seems to support In-Memory Replication and Cluster. However, I've also seen a Cluster-II somewhere: is that needed?
    Thanks for your attention!
    Regards, Frank Olsen

    We are considering Resonate as one of the software load balancers. We haven't certified them yet, and I have no idea how long that is going to take.
    As a base rule, if the SWLB can do the load balancing and maintain stickiness, that is fine with us, as long as it doesn't modify the cookie, or the URL if URL rewriting is enabled.
    Having said that, if you run into problems we won't be able to support you, since it is not certified.
    -- Prasad
    Frank Olsen wrote:
    > Prasad Peddada <[email protected]> wrote:
    > > Frank Olsen wrote:
    > > > Hi,
    > > We don't support any 3rd party software load balancers.
    >
    > Does that mean that there are technical reasons why it won't work, or just that
    > you haven't tested it?
    >
    > > As I said before, I am thinking your configuration is incorrect if in-memory
    > > replication is not working. I would strongly suggest you look at the webapp
    > > deployment descriptor and then the config.xml file.
    >
    > OK.
    >
    > > Also, doing sticky based on source IP address is not good. You should do it
    > > based on passive cookie persistence or active cookie persistence (with cookie
    > > insert, a new one).
    >
    > I agree that the various source-based sticky options (IP, port, network) are not
    > the best solution. In our current implementation we can't do this because the SW
    > load balancer is based on filtering IP packets at the driver level.
    >
    > Currently I'm more interested in understanding whether our SW load balancer can
    > work with your replication at all.
    >
    > What makes me think that it could work is that in WLS 6.0 a session that fails
    > over to any cluster node can recover the replicated session.
    >
    > Can there be a problem with the cookies?
    > - Are the P/S for replication put in the cookie by the node itself, or by the
    >   proxy/HW load balancer?
    >
    > > The options are -Dweblogic.debug.DebugReplication=true and
    > > -Dweblogic.debug.DebugReplicationDetails=true
    >
    > Great, thanks!
    >
    > Regards,
    > Frank Olsen

  • Load balancing imbalance in ACE

    We are facing slowness in an HTTP application due to connection imbalance. The setup has a load balancer and a proxy in the DMZ, where the user connections terminate, and a load balancer inside the LAN which balances across the end servers. All user connections terminate on the DMZ load balancer / proxy, and the proxy connects back to the internal load balancer VIP, collapsing a large number of client connections into very few (default proxy behaviour). The internal load balancer VIP balances on a least-connections basis, but it does not see how many sessions are carried inside each connection; it simply distributes each connection to a server underneath. So one connection may carry around 100 sessions while another carries only a few, and each connection is forwarded to an end server as a unit, causing the imbalance.
    Is there a way this imbalance can be tackled in this setup?
    Users --> Proxy --> Load balancer (Cisco ACE) --> Server 1
                                                      Server 2
                                                      Server 3
    Least Connections predictor
    HTTP Cookie insert sticky

    Hi,
    Persistence rebalance should solve the issue for you.
    The persistence-rebalance function is required when you have proxy users and the proxy shares one TCP connection between multiple users.
    With this behaviour, you will see different cookies inside a single connection; for each new cookie, the ACE needs to detect it and then load balance to the appropriate server.
    This is from the admin guide:
    The following example specifies the parameter-map type http command to enable HTTP persistence after it has been disabled:
    host1/Admin(config)# parameter-map type http http_parameter_map
    host1/Admin(config-parammap-http)# persistence-rebalance
    Please refer to the following link for more info:
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA4_2_0/configuration/slb/guide/classlb.html#wp1062907
    hope that helps,
    Ajay Kumar
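    Note that the parameter map only takes effect once it is attached to the VIP; a minimal sketch, where CLIENT-VIPS, WEB-VIP and WEB-LB are placeholder names for your existing multi-match policy, VIP class and load-balance policy:
    parameter-map type http http_parameter_map
      persistence-rebalance
    policy-map multi-match CLIENT-VIPS
      class WEB-VIP
        loadbalance vip inservice
        loadbalance policy WEB-LB
        appl-parameter http advanced-options http_parameter_map
    The existing least-connections predictor and cookie-insert sticky group stay as they are; persistence rebalance just makes the ACE re-evaluate the sticky decision for each new cookie it sees inside a shared proxy connection.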

  • Load balancing across remote systems

    Hi everyone,
    When a request is routed to a remote machine (assuming that the service is not
    available on the local machine) load balancing is based on the load factors (plus
    NETLOAD, if set) of requests previously sent to the remote systems from the local
    system. My question is: is the load accumulation ever reset? As I see it, a
    reset could probably happen in one or more of the following ways:
    1. A rolling time period that causes earlier load values to be progressively discarded.
    2. The load accumulations are reset every time a system is booted.
    3. The load accumulations are reset whenever the tuxconfig is modified.
    4. The accumulations are reset when the tuxconfig file is removed and recreated
    (I guess this is obvious, assuming this is where the accumulations would be stored
    between boots).
    The question is prompted by the following scenario. Machines A, B and C have
    been configured to offer the same service, but the service has only been enabled
    on A and B. Machine D does not offer this service, and therefore requests from
    D will be balanced between A and B based on the accumulated load recorded on D.
    This situation has been in operation for a couple of months without rebooting
    (quite possible - after all, this is Tuxedo we are talking about!), and it has
    now been decided to activate the service on machine C (without rebooting). The
    big question is: Will all the requests be routed to C until such time as it catches
    up to the load processed by A and B?
    Thanks to anyone who can shed some light on this.
    Regards,
    Malcolm.

    Where are you looking for the load accumulation?
    "load done" with psr command?
    In my case, with Tux 6.5, these stats are only reset with a reboot.
    Christian
    "Malcolm Freeman" <[email protected]> wrote:
    Thanks, Scott - I found confirmation that stats are reset every SANITYSCAN interval
    in some old 4.2.1 documentation.
    Regards,
    Malcolm.
    Scott Orshan <[email protected]> wrote:
    Malcolm,
    This was an issue a number of years ago, back in one of the early 4.x releases.
    But now the stats are reset, I believe every SANITYSCAN interval. It might even
    happen as soon as the new service comes into operation. I'm too lazy to look, but
    it's easy to test.
         Scott

  • Using Web Cache to Load balance Forms Server application.

    Hello,
    I apologize for cross posting this question in the Forms and Caching Services forum. But I thought my question will have a better chance.
    I have read that it's possible to use Oracle Web Cache as a software load balancer between multiple Application Servers.
    We are running Oracle9iAS R1.0.2.2.2a with Forms/Reports 6i servers on 2 Win2k boxes, i.e. our Forms6i application is deployed on two separate boxes in two distinct locations. Users at each location use their respective App Server URL.
    Since the application is the same, i.e. the Forms6i code/fmx is identical at both locations, I am looking into the load-balancing and failover capability that Web Cache might be able to provide.
    I AM ONLY LOOKING AT THE LOADBALANCING & FAILOVER capabilities and NOT caching.
    So basically all users from both locations will point their browsers to this Web Cache, and the Web Cache will direct each connection to either of the two boxes. If either box dies, Web Cache will divert the requests to the other box.
    My concern is whether Web Cache supports this for the Forms requests it will receive from the users. We are using the servlet deployment of Forms, so technically all communication goes through the HTTPD.
    Has anyone done this, or does anyone have any idea whether it's going to work? Oracle's FAQ insists that Forms is not supported, but I want to make sure that even load balancing is not supported. And if it is not supported, is there any other solution?
    Any comments appreciated.
    Thanks,
    Manish

    Using Web Cache to load balance servlet-based Forms (6i and 9i) is unofficially supported. I say "unofficially" because we have actual customers doing it and getting support, but the 2 development teams (Forms and Web Cache) haven't actually done any integration testing of this sort of configuration yet. For your case, please contact your Support rep and ask what was done to use Web Cache as a load balancer for Forms6i at METRO in Germany. The Forms product management team is writing up a white paper to describe how to do it, but until then, you'll need to go through Support. Please contact me if you want more information.

  • Load balancing across 4 web servers in same datacentre - advice please

    Hi all,
    I'm looking for some advice please.
    The apps team have asked me about load balancing across some servers, but I'm not that well up on it for applications.
    Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections across all these servers, and they all need to share the same DNS name, etc.
    What load-balancing methods would I need for this? I believe the servers run on Linux.
    Would I need some sort of device, or can the servers run software that does this? How would it work, and how would load balancing be achieved here?
    Cheers

    Carl,
    What you have described sounds very straightforward, so everything should go well.
    The ACE is a load balancer that takes load-balancing decisions based on different matching methods (virtual address, URL, source address, etc.). Once the decision has been taken, the ACE balances the traffic using the load-balancing method you have configured (if you configure nothing, it uses the default, which is round robin), sends the traffic to one of the available servers, and the client gets the content.
    If you want to get some details about the load balancing methods here you have them:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
    For ACE deployments the most common designs are the following:
    Bridge Mode
    One Arm Mode
    Routed Mode
    Here you have a link for Bridge Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for One Arm Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for Routed Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Then as you could see in all those links you may end up having a configuration like this:
    interface vlan 40
      description "Default gateway of real servers"
      ip address 192.168.1.1 255.255.255.0
      service-policy input remote-access
      no shutdown
    ip route 0.0.0.0 0.0.0.0 172.16.1.1
    class-map match-all slb-vip
      2 match virtual-address 172.16.1.100 any
    policy-map multi-match client-vips
      class slb-vip
        loadbalance vip inservice
        loadbalance policy slb
    policy-map type loadbalance http first-match slb
      class class-default
        serverfarm web
    serverfarm host web
      rserver lnx1
        inservice
      rserver lnx2
        inservice
      rserver lnx3
        inservice
    rserver host lnx1
      ip address 192.168.1.11
      inservice
    rserver host lnx2
      ip address 192.168.1.12
      inservice
    rserver host lnx3
      ip address 192.168.1.13
      inservice
    Please mark the question as answered if this solved it, so other users can use it as a reference in the future.
    Hope this helps!
    Jorge

  • Question on how does load balancing work on Firewall Services Module (FWSM)

    Hi everyone,
    I have a question about the algorithm of load balancing on Firewall Services Module (FWSM).
    I understand that the FWSM supports up to three equal cost routes on the same interface for load balancing.
    Please see a lower simple figure.
    outside inside
    --- L3 SW --+
    |
    MHSRP +--- FWSM ----
    |
    --- L3 SW --+
    I am going to configure the following default routes on FWSM point to each MHSRP VIP (192.168.13.29 and 192.168.13.30) for load balancing.
    route outside_1 0.0.0.0 0.0.0.0 192.168.13.29 1
    route outside_1 0.0.0.0 0.0.0.0 192.168.13.30 1      
    However, I don't know how load balancing works on the FWSM.
    Does the FWSM load balance based on:
    Per-destination?
    Per-source?
    Per-packet?
    or some other criteria?
    Your information would be greatly appreciated.
    Best Regards,

    Configuring "tunnel default gateway" on the concentrator allowed traffic to flow as desired through the FWSM.
    The FWSM is not capable of performing policy-based routing, so the additional static routes for the VPN load balancing caused half of the packets to be lost. As a result, it appears that the VPN concentrators will not be able to load balance.
