Load balance based on OS

Is it possible to load balance incoming requests based on client's operating system on ACE?
For example, we have different web pages specifically for Blackberry or iPhones.
Instead of having multiple URLs and VIPs, we'd like to have a single VIP but load balance traffic to different serverfarms based on the client's OS.

You can load balance on the User-Agent header. First you need to identify what the iPhone and BlackBerry send as their User-Agent. For instance, from a regular desktop browser you might see:
User-Agent=Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727)
From an iPhone you will typically see:
User-Agent=Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en)
AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3
You can go to http://www.user-agents.org to find out what strings are used.
Based on those strings, you can build class maps that match on the header and use them for load-balancing decisions:
class-map type http loadbalance match-any mobile
  2 match http header User-Agent header-value ".*iPhone.*"
  4 match http header User-Agent header-value ".*BlackBerry.*"
Then, in the load-balancing policy, send mobile clients to farmA and PCs to farmB:
policy-map type loadbalance first-match L7POLICY
   class mobile
     serverfarm farmA
   class class-default
     serverfarm farmB
see:
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A2/configuration/slb/guide/classlb.html#wp1021388
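For completeness, the L7 policy above still has to be referenced from a multi-match policy applied to the client-facing interface. This is only a rough sketch; the VIP 192.0.2.10, VLAN 100, and the class/policy names are placeholders, not values from this thread:
class-map match-all VIP-HTTP-80
  2 match virtual-address 192.0.2.10 tcp eq www
policy-map multi-match CLIENT-VIPS
  class VIP-HTTP-80
    loadbalance vip inservice
    loadbalance policy L7POLICY
interface vlan 100
  service-policy input CLIENT-VIPS
  no shutdown
An IP address on the VLAN interface and an access-list permitting the client traffic are also required, but omitted here.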

Similar Messages

  • ACE30 Load balancing based on IP and using x-forward-for header

    Hi Guys,
    We currently have a load-balancing policy set up to direct traffic to, say, FARM-A based on a particular range of source (client) IP addresses, and to the default FARM-B for all other traffic.
    We are now looking to introduce a web application firewall (WAF) before the ACE. The WAF will be inserting the client IP address into the X-Forwarded-For HTTP header. Now I was wondering how best we can achieve load balancing based on source IP, given that we'll have to parse the HTTP header for this X-Forwarded-For field. Are there any examples that anyone can point me to?
    let me know if you have any questions.
    thanks
    Sheldon

    Hi Sheldon,
    You might try creating a class map that matches on the XFF header, then use that as the L7 load-balancing criterion and hash on the value of the XFF header by configuring predictor hash header on the serverfarm (rough sketch below).
    -Alex
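    A rough sketch of the above, assuming the WAF inserts the header as X-Forwarded-For; the farm names, addresses, and regex are placeholders. The class map steers a given client range to FARM-A, while predictor hash header keeps requests carrying the same XFF value on the same server:
    serverfarm host FARM-A
      predictor hash header X-Forwarded-For
      rserver SRV1
        inservice
      rserver SRV2
        inservice
    class-map type http loadbalance match-any XFF-CLIENTS
      2 match http header X-Forwarded-For header-value "10\.1\.1\..*"
    policy-map type loadbalance first-match L7-XFF
      class XFF-CLIENTS
        serverfarm FARM-A
      class class-default
        serverfarm FARM-B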

  • ACE load balance based on Source IP Address

    Hi Cisco Support,
    I have a question related to Cisco ACE behavior in terms of making a load-balancing decision based on source address.
    I currently have two servers sitting behind the ACE as part of one serverfarm; these servers are load balanced via one VIP on the ACE module and everything looks fine.
    Now the service owners want to replace these old servers with new hardware, so before the migration we need to make sure the new servers work to the required standard, and therefore we need a test scenario that runs the new servers alongside the old ones. The problem is that a number of third-party partners access the existing servers by hitting the VIP on the ACE, and we can't engage all our partners in this test, so we decided to engage only one partner to carry out the test with us.
    For that reason, can we somehow configure the ACE so that when packets arrive from the test partner mentioned above, the ACE sends only that partner's traffic, based on its source address (defined via a class/policy map if possible), towards the new servers and not to the old servers in the same serverfarm?
    Thanks for your support

    Hi,
    Here is a config sample that might help you get this done.
    First create the new rservers and include them under a new serverfarm (New-APP):
    serverfarm host Webfarm
      rserver SVR1
        inservice
      rserver SVR2
        inservice
    serverfarm host New-APP
      rserver New-1
        inservice
      rserver New-2
        inservice
    - Same VIP already working.
    class-map match-all VIP-HTTP
      2 match virtual-address 10.10.10.10 tcp eq www
    - Create a new class that will include your partner's IP(s).
    class-map type http loadbalance match-any 3rd-Party
      2 match source-address 200.200.200.1 255.255.255.255 
      3 match source-address 200.200.200.10 255.255.255.255 
    Modify your current first-match policy to put the new class on top, so that all the traffic matched by the source-address statements above is redirected to the new serverfarm with the new app; any other traffic that does not match the rule is sent to the old serverfarm with the old app.
    policy-map type loadbalance first-match L7-SLB
      class 3rd-Party
        serverfarm New-APP
      class class-default
        serverfarm Webfarm
    Since you already have load balancing working, this is it; nothing needs to be added under the multi-match policy or the interface.
    HTH
    Pablo

  • ACE load balancing based on URL

    I am trying to send traffic to one server or another based on the URL. I want traffic to foo.com/selfserv to be directed to server A and traffic to foo.com/webui to be directed to server B. I found URL inspection etc., but I am not sure how to apply it to this scenario, as I do not want the ACE to inspect all inbound HTTP requests.

    The ACE performs regular expression matching against the received packet data from a particular connection based on the HTTP URL string. To configure a class map to make Layer 7 SLB decisions based on the URL name and, optionally, the HTTP method, use the match http url command in class-map HTTP load balance configuration mode.
    The ACE performs regular expression matching against the received packet data from a particular connection based on the RTSP URL string. You can configure a class map to make Layer 7 SLB decisions based on the URL name and optionally, the RTSP method, by using the match rtsp url command in class-map RTSP load balance configuration mode.
    Configuring Traffic Policies for Server Load Balancing:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/classlb.html
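    As a rough sketch of how this could look for the /selfserv and /webui split (the class, policy, and serverfarm names are placeholders): only traffic classified to the VIP that references this L7 policy is parsed, so other traffic is not inspected.
    class-map type http loadbalance match-any SELFSERV-URL
      2 match http url /selfserv.*
    class-map type http loadbalance match-any WEBUI-URL
      2 match http url /webui.*
    policy-map type loadbalance first-match URL-POLICY
      class SELFSERV-URL
        serverfarm FARM-SELFSERV
      class WEBUI-URL
        serverfarm FARM-WEBUI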

  • HOWTO: load balance based on source subnet

    Hi Guys,
    We are currently working out if there is a way to load balance specific subnets to a specific rserver within a server farm behind the one VIP.
    For example (all Rservers within one serverfarm Serv_farm001):
    Subnet 10.10.10.0/24  load balance to Rserver A ( with Rserver B as backup )
    Subnet 20.20.20.0/24  load balance to Rserver B ( with Rserver A as backup )
    I can see from the configuration guide that you could maybe use sticky src IP to do this, but I haven't seen anything to confirm this.
    Any takers on this? I'm sure it's a fairly common thing that others are doing out there.
    Looking fwd to the responses!
    Cheers
    R

    Hi Rob,
    You can either do this with an ACL on the incoming interface or, for easier management, do the following:
    class-map type http loadbalance match-any Subnet-A
      2 match source-address 10.10.10.0 255.255.255.0
    class-map type http loadbalance match-any Subnet-B
      2 match source-address 20.20.20.0 255.255.255.0
    policy-map type loadbalance first-match SLB
      class Subnet-A
        serverfarm A
      class Subnet-B
        serverfarm B
    HTH
    Pablo
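    To cover the "(with Rserver B as backup)" part of the question, one approach is to put each rserver in its own serverfarm and reference a backup farm per class; a rough sketch, with the farm names assumed:
    serverfarm host FARM-A-ONLY
      rserver RserverA
        inservice
    serverfarm host FARM-B-ONLY
      rserver RserverB
        inservice
    policy-map type loadbalance first-match SLB
      class Subnet-A
        serverfarm FARM-A-ONLY backup FARM-B-ONLY
      class Subnet-B
        serverfarm FARM-B-ONLY backup FARM-A-ONLY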

  • Load Balancing based on website not on Interface

    LocalDirector 416
    The LocalDirector is load balancing 2 IIS web servers, ServerA and ServerB, in round robin. If a client requests a web page and is sent to ServerA while the site is not servicing requests but the interface is still up, I want the LocalDirector to fail over to ServerB. Is this possible?
    Thanks

    You need an HTTP probe.
    Check the following URL:
    http://www.cisco.com/en/US/products/hw/contnetw/ps1894/products_configuration_example09186a0080093df4.shtml
    Gilles.
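    LocalDirector probe syntax differs (the linked example covers it), but for comparison, on the ACE platform discussed elsewhere in this thread an HTTP probe that pulls a server out of rotation when the site stops answering looks roughly like this; the probe name, URL, and timers are placeholders:
    probe http WEB-CHECK
      interval 10
      faildetect 3
      request method get url /index.html
      expect status 200 200
    serverfarm host WEBFARM
      probe WEB-CHECK
      rserver ServerA
        inservice
      rserver ServerB
        inservice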

  • Load balancing based on source IP address

    Hi,
    I configured a CSS to balance the load depending on the source IP address, to support an application feature on the servers.
    We have two firewalls, and behind them we have different users. We also have two servers behind the CSS.
    Each firewall performs NAT with a unique outside IP address. So, for example, in these conditions the CSS balances requests coming from FW 1 to server 1 and requests coming from FW 2 to server 2. Is this scenario correct?
    Is it possible that requests coming from FW 1 could be forwarded to server 2 and vice versa?
    Could anyone answer me?
    Thanks in advance.
    Best regards.
    Giuseppe.

    Giuseppe,
    it all depends on how you configured your CSS.
    Did you use an ACL to force traffic from SRC1 to server1 and traffic from SRC2 to server2?
    Or did you simply configure stickiness based on source IP, or source-IP-hash load balancing?
    Apart from the ACL, none of the other methods guarantees that the traffic will be split in two.
    Gilles.
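    For illustration, a minimal sketch of the sticky source-IP option Gilles mentions (the content rule name, VIP, and services are placeholders); remember that only the ACL approach strictly guarantees the FW-to-server mapping:
    content SRC-STICKY
      vip address 172.16.1.1
      protocol tcp
      port 80
      add service server1
      add service server2
      advanced-balance sticky-srcip
      active
    The hash alternative would use balance srcip on the content rule instead of the advanced-balance line.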

  • CSS Load Balancing with Cookies

    We are trying to load balance 2 backend servers hosted on WebSphere with the advanced-balance cookies method.
    Restrictions
    ServerA is unable to accept cookies generated from ServerB.
    ServerA and ServerB are generating random cookies
    Unable to modify cookie string with a constant.
    How can we load balance based on cookies considering the above restrictions?
    We have attempted to do hash based load balancing with cookies but the problem we run into is the servers do not accept cookies generated from another server.
    The configuration we tried is written below:
    service ServerA
      ip address 192.168.10.2
      keepalive type tcp
      keepalive port 80
      active
    service ServerB
      ip address 192.168.20.2
      keepalive type tcp
      keepalive port 80
      active
    content ABC
      url "/*"
      add service ServerA
      string prefix "JSESSIONID="
      advanced-balance cookies
      port 80
      add service ServerB
      string skip-length 5
      string process-length 16
      string operation hash-xor
      protocol tcp
      vip address 172.16.32.1
      active
    Can we change the string prefix to JSESSION instead of JSESSIONID= ?
    The only place the app guys can add a constant string to match on is before the = sign.
    Is it possible for the CSS to match on a constant string before the = sign, e.g. as below:
    service ServerA
      ip address 192.168.10.2
      keepalive type tcp
      keepalive port 80
      string id567=
      active
    service ServerB
      ip address 192.168.20.2
      keepalive type tcp
      keepalive port 80
      string id123=
      active
    content ABC
      url "/*"
      add service ServerA
      string prefix "JSESSION"
      advanced-balance cookies
      port 80
      add service ServerB
      string skip-length 0
      string process-length 6
      protocol tcp
      vip address 172.16.32.1
      active

    It should work.
    There is no reason for it not to work...
    This is the best method you can have on the CSS for stickiness.
    Get a sniffer trace on the client and server with arrowpoint cookie configured on the CSS and capture a failure so we can see what is going on.
    also send me the config so I can verify everything is ok.
    If you have a service request open with the TAC, you can also give the SR # so I can review what has been done.
    Gilles.
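    For reference, a minimal sketch of the arrowpoint-cookie approach mentioned above, reusing the services and VIP from the question; with this method the CSS generates and tracks its own cookie, so the servers' random JSESSIONID values no longer matter:
    content ABC
      vip address 172.16.32.1
      protocol tcp
      port 80
      url "/*"
      add service ServerA
      add service ServerB
      advanced-balance arrowpoint-cookie
      active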

  • Mod_oc4j for load-balancing

    Guys,
    Looking for some feedback on DMSMetricCollector when using "metric" as the selectmethod for load balancing. What I want to know is whether the DMSMetricCollector can be configured to use multiple metrics, such as jvm-heapsize, servlet-processRequestTime, connection-pool FreePoolSize.value, etc. The documentation doesn't explicitly mention whether this is possible (apparently it isn't), but I want to check with the experts here. Steve, you might have something on this...
    Or is there any other implementation available that supports load balancing based on multiple performance metrics?
    Thanks and Regards

    Can I use this loadbalancer.jar or not?
    How do I set up mod_oc4j with a standalone app server?

  • MPLS Load Balancing/Sharing with TE or CEF or Both?

    So I am just playing around in GNS3, trying to set up multiple ECMP links between two P routers, like this:
    CE1 -- PE1 -- P1 == P2 -- PE2 -- CE2
    (There are actually four links between P1 & P2!)
    I have set up a pseudowire xconnect from PE1 to PE2 so CE1 & CE2 can ping each other on the same local subnet range. That works just fine.
    My question is this:
    I have configured "ip load-sharing per-packet" on each of the four interfaces on P1 and P2 that are facing each other (I know per-packet balancing is frowned upon but lets not talk about that right now!) and this works, traffic is distributed across all links (I can see with packet captures in GNS3).
    Where does "ip load-sharing per-packet" fit in to the chain of events with regards to MPLS and CEF etc?; So, with MPLS enabled everywhere the two P routers are forwarding based on labels and not IP address. With MPLS enabled, does this command force the P routers to load-balance each MPLS frame as it comes in, round-robbin'ing the ingress frames across all links, the same as it would if it were a plain IP packet? So the command is ignorate of the kind of traffic being used? Or is the P router looking down into the MPLS frame for the IP in the IP packet?
    Also, in order to get the same sort of performance boost you get from per-packet load balancing, seeing as I am using MPLS here, should I be using some francy MPLE TE to do this instead of that interface sub-command?
    If I remove that command, I seem to always use link 2 for sending traffic towards P2 from P1, and link 3 for receiving the return traffic from P2 to P1. This is presumably because the ICMP packets have nothing to hash on except the source and destination IP addresses, so they always hash to the same physical links. Without using that command how else can I make use of the four links?

    Hello Jwbensley,
    first of all,
    "ip load-sharing per-packet" is not a viable option as it causes out  of order issues.
    Real world devices perform load balancing based on the second (more internal ) label value so to achieve some load balancing for example multiple pseudowires must be defined between the same pair of PE nodes.
    L3 VPN use different internal labels for different customer prefixes of the same VRF site ( unless some special command is used to say use one label per VRF site)
    >> f I remove that command, I seem to always use link 2 for sending traffic towards P2 from P1, and link 3 for receiving the return traffic from P2 to P1
    This is the expected behaviour in this scenario.
    With MPLS TE you can achieve results similar to the use of multiple pseudowires /LSPs : forms of load sharing not true load balancing. In all cases in MPLS world flow based and not per packet
    Hope to help
    Giuseppe
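    For illustration, a rough IOS sketch of the "multiple pseudowires between the same pair of PEs" idea described above (the peer loopback 10.0.0.2, VC IDs, and subinterfaces are made up); each pseudowire gets its own VC label, so the P routers can hash the two label-switched flows onto different links:
    interface GigabitEthernet0/0.10
     encapsulation dot1Q 10
     xconnect 10.0.0.2 100 encapsulation mpls
    !
    interface GigabitEthernet0/0.20
     encapsulation dot1Q 20
     xconnect 10.0.0.2 200 encapsulation mpls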

  • RX load balancing on SG200-18

    Hi guys,
    I put this question on Spiceworks and someone chimed in and said it wasn't possible due to the nature of how EtherChannel balances, but I wanted to double check. Here is my question:
    I have a Cisco SG200-18 managed switch configured with a LAG using LACP, and a new Supermicro X9SCM-F motherboard that uses two Intel NICs (82579LM & 82574L). The server is running Server 2012 R2 Standard and I'm teaming the NICs via Intel's driver. The team type is set to IEEE 802.3ad Dynamic Link Aggregation. From my understanding, that means inbound and outbound packets should be able to utilize the increased bandwidth (thus the dynamic part). So far in my testing, copying files to and from the server from multiple PCs at the same time, only files being copied from the server utilize the increased bandwidth. I can see in Task Manager on the server that the Ethernet is using over 1 Gbps. However, files going TO the server from multiple computers at the same time max out at 1 Gbps.
    Any insight on why this would be?
    Edit: Also want to note that the switch is running the most recent version of the firmware.
    Attached you'll find some screenshots of the different windows on the server & the switch. Thanks!

    Hello,
    This is a common question with LACP and LAGs in general.
    It all comes down to this.  Any single connection will only ever be able to use a single member of the LAG.  Meaning that whatever the maximum speed (1Gbps) of one physical link is, that is the limit of the transfer.
    It is because of how the load balancing algorithm works.  When a packet comes in, the switch hashes either the IP or MAC address of the source and destination, and comes up with a number.  If your LAG has 4 links, it is a number from 1-4.  That determines which link in the LAG gets used in that connection.  That connection will only ever use that LAG member, and cannot spill over, even if the link it is using gets full.
    The load balancing algorithm can be changed to better utilize the links, however the test of a single computer transferring to another computer will always give the results you saw.
    There are several enterprise level Cisco switches which can load balance based on source and destination port number, which could enable two computers to utilize multiple links, if they were transferring data on different TCP ports.  However the small business switches are only able to load balance by MAC/IP.  You can experiment with the load balancing setting to see which setting optimizes your link usage. You may also be able to tweak this setting on the server side, but that one is up to you.
    Hope that helps a bit, you've done some nice testing already, so I'm really just confirming what you've already seen.
    Thank you for choosing Cisco,
    Christopher Ebert
    Network Support Engineer - Cisco Small Business Support Center
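    For comparison, on the enterprise platforms Christopher mentions the hash input can be changed from the CLI; a hypothetical example for a Catalyst that supports L4 hashing (not available on the SG200, which only exposes MAC/IP-based balancing):
    Switch(config)# port-channel load-balance src-dst-port
    Switch# show etherchannel load-balance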

  • Load Balancing and WLS primary server offset

    I've got a load balancer in front of my WLS cluster, and I'm trying to set up load balancing based on WLS clustering. What I need to know to do this is the offset within the cookie that's responsible for determining which machine within the cluster to direct to.
    Any idea how I can get this information?
    thanks,
    cfraser

    Chris Fraser wrote:
    > The proxy/plug-in solution sounds pretty cool, but I've got a high speed
    > Alteon Load Balancer already set up. I would prefer to use that as the load
    > balancer to the WL cluster rather than pay to bring another WLS online to do
    > pretty much what the load balancer, that I already own, can do. I know that
    > going this route means that we're probably not going to be able to do things
    > like failover to the secondary when the primary dies, but we will be able to
    > load balance and also have the ability to dynamically add/delete servers
    > from the list of available servers as they are brought up/down.
    In-memory session replication doesn't work without our plugins. I will have to do a little bit of investigation to figure out whether the other persistence mechanisms would work without our plugins, if you are interested in them. I have to remind you, though, that the other types of persistence mechanisms we support are slower compared to in-memory session replication.
    > Are there any plans to work with an Alteon or a Foundry to have their Load
    > Balancers act as the front end to a WLS cluster?
    Currently none. We are taking steps to make the plugins and cluster more robust; we currently don't have any plans to work with other 3rd-party vendors.
    > For us it would be ideal, because we wouldn't have to support another piece of
    > software, we would just have to support the hardware based Alteon, which can
    > handle thousands of transactions per second.
    > I understand that the primary and secondary server information is available
    > in the sessionID, I'm just not quite sure how to extract it.
    This information is saved in the cookie. But I wouldn't count on that, as we have plans to change this. I cannot give you more details.
    > Is there a particular offset within the session ID where it can always be
    > found?
    I don't quite get what you mean here.
    Hope this helps.
    - Prasad
    > thanks for the help,
    > cfraser
    > ----------
    > C h r i s t o p h e r A . F r a s e r
    > Director, Technology
    > macroplay.com, Inc.
    > [email protected]
    >
    > Viresh Garg wrote:
    >
    > > You should be using
    > > -- NES + NSAPI Plugin
    > > -- IIS + ISAPI Plugin
    > > -- WebLogic server acting as proxy
    > > -- Apache + Apache Plugin (only in Denali)
    > >
    > > front-ending your WebLogic cluster.
    > >
    > > These proxies/plug-ins are smart enough to do a lot of things like:
    > >
    > > -- Load balancing in the WebLogic cluster
    > > -- Adding/deleting servers dynamically in the cluster when the servers
    > > join/leave the WebLogic cluster
    > > -- Failover to the secondary when the primary dies.
    > >
    > > As far as the information about primary and secondary is concerned, it is
    > > available in the session ID.
    > >
    > > --Viresh Garg
    > >
    > > Chris Fraser wrote:
    > >
    > > > I've got a load balancer in front of my WLS cluster, and I'm trying to
    > > > set up load balancing based on WLS clustering. What I need to know to
    > > > do this is the offset within the cookie that's responsible for
    > > > determining which machine within the cluster to direct to.
    > > >
    > > > Any idea how I can get this information?
    > > >
    > > > thanks,
    > > > cfraser

  • Load balancing imbalance in ACE

    We are facing slowness in an HTTP application, which is due to connection imbalance. The setup has a load balancer and a proxy in the DMZ, where connections from the users are terminated, and a load balancer inside the LAN which load balances between the end-point servers. All user connections terminate on the DMZ load balancer / proxy, and the proxy connects back to the internal load balancer VIP, collapsing many client connections into very few (default proxy behavior). The internal load balancer VIP balances on the number of connections in a least-loaded manner; it does not see how many sessions are carried inside each connection and simply distributes each connection to a server underneath. Thus one connection may carry around 100 sessions while another carries only a few, and each connection gets forwarded as a whole to an end server, causing the imbalance.
    Is there a way that this imbalance can be tackled in this setup.
    Users --> Proxy --> Load balancer (Cisco ACE) --> Server 1
                                                      Server 2
                                                      Server 3
    Least Connections predictor
    HTTP Cookie insert sticky

    Hi,
    Persistence rebalance should solve the issue for you.
    The persistence-rebalance function is required if you have proxy users and the proxy shares one TCP connection between multiple users.
    With this behavior, inside a single connection you will see different cookies. Therefore, for each cookie, the ACE needs to first detect the new cookie and then load balance to the appropriate server.
    this is from the admin Guide :
    The following example specifies the parameter-map type http command to enable HTTP persistence after it has been disabled:
    host1/Admin(config)# parameter-map type http http_parameter_map
    host1/Admin(config-parammap-http)# persistence-rebalance
    Please refer the following link for more info :
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA4_2_0/configuration/slb/guide/classlb.html#wp1062907
    hope that helps,
    Ajay Kumar
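    Note that the parameter map only takes effect once it is applied under the multi-match policy that carries the VIP; a rough sketch, with the VIP class-map and policy names assumed:
    policy-map multi-match CLIENT-VIPS
      class VIP-HTTP
        loadbalance vip inservice
        loadbalance policy L7-SLB
        appl-parameter http advanced-options http_parameter_map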

  • WSA Load Balancing with WCCP

    Hi,
    We have 2 x WSA S670s that we wish to load balance across. The WSAs are running 7.5.1 and can only be in transparent mode. They are connected through WCCP to a pair of Nexus 7Ks running 6.1(3). We are seeing active/standby behaviour, but we are expecting active/active. If we shut the port on the active WSA, the second WSA begins proxying traffic. When we remove the shut command, the traffic goes back to the first WSA. Is this expected behaviour? We were expecting both WSAs to handle traffic.
    Thanks

    This may be more of a Nexus question than a WSA question, but check this:    
         Go to Network>Transparent Redirection> Click on your Service Profile name
         Check "Load balance based on client address"
         Click on Advanced near the bottom.
         Set the Load-Balancing Select "Allow Mask Only" and try a custom mask of 0x1
    That should make it switch between WSAs based on whether the last bit in the client's IP is 1 or 0...
    There are some good comments in this thread:
    https://supportforums.cisco.com/thread/2109988
    Nexus want's "mask"
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_2/nx-os/unicast/configuration/guide/wccp.html#wp1278718

  • Client load-balancing

    Hi all,
    a short question.
    Is there a feature in Cisco WLC like load-balancing based on bandwidth utilization?
    What I mean is, one AP (channel6) has a channel utilization of 40%, the neighbor AP (channel 11) has a channel utilization of 10%.
    So I would like push new clients automatically to the AP in channel 11.
    many thanks
    Martin

    It doesn't work worth a crap anyway... Most clients don't adhere to code 17, so what's the point...
    Aggressive load-balancing works at the association phase. If enabled and the conditions to load-balance are met, when a wireless client attempts to associate to a LAP, association response frames are sent to the client with an 802.11 response packet that includes status code 17. This code indicates that the AP is too busy to accept any more associations.
    It is the responsibility of the client to honor, process or discard that association response frame with reason code 17. Some clients ignore it, even though it is part of the 802.11 specification. The standard dictates that the client driver must look for another AP to connect to since it receives a "busy" message from the first AP it tries. Many clients do not do this and send the association request again. The client in question is allowed on to the wireless network upon subsequent attempts to associate.
