Server Load Balancing Across Two Data Centers on Layer 3

Hi,
I have a customer who would like to load balance two Microsoft Exchange 2010 CAS servers residing in two different data centers.
Which is the best solution for this: Cisco ACE, Cisco ACE GSS, or both?

I would go with source-NATting the clients' IP addresses, so that return traffic from the servers is routed back correctly.
It also saves you the trouble of maintaining PBR.
Source NAT can be done on the ACE by applying the NAT configuration either to the load-balancing policy or to the class-map entries in the multi-match policy.
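As a rough sketch (the VLAN number, NAT pool ID, class-map/policy names and addresses below are placeholders, not taken from your setup), the relevant pieces would look something like this:
! NAT pool defined on the server-side interface
interface vlan 20
  description "Server-side VLAN towards the CAS servers"
  ip address 10.10.20.2 255.255.255.0
  nat-pool 1 10.10.20.100 10.10.20.100 netmask 255.255.255.255 pat
  no shutdown
! Source NAT applied per class in the multi-match policy
policy-map multi-match client-vips
  class cas-vip
    loadbalance vip inservice
    loadbalance policy cas-slb
    nat dynamic 1 vlan 20
With this in place the servers see the NAT pool address as the client source, so their replies always return through the ACE.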
Cheers,
Søren

Similar Messages

  • ACE30 load balancing across two slightly different rservers

    Hi,
    is there a possibility to get load balancing across two rservers such that:
    when a client sends http://vip/ and it goes to rserver1, the URL is sent without change
    when a client sends http://vip/ and it goes to rserver2, the URL is modified to http://vip/xyz/
    Or maybe the load balancing can be done across two serverfarms?
    thanks

    Ryszard,
    I hope you are doing great.
    I do not think that's possible, since the ACE just load balances the traffic to the servers; once the load-balancing decision has been taken, it passes the "ball" to the chosen server.
    Think about this: let's say user A needs to go to Server1 but, based on the load-balancing decision, is sent to Server2, which unfortunately does not have what the user was looking for. OK, fine, user A closes the connection and tries again, but now Server1 is down, so the only available server is Server2, and the ACE sends the user to Server2 again; user A just decides to leave. You can see how bad that can be.
    A better approach would be to have either two VIPs (different IP addresses) or two VIPs sharing the same IP address but listening on different ports, perhaps one port per server.
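    For illustration only, a hypothetical layout with one VIP address and one port per server could look like this (the address, ports and names are assumed):
    class-map match-all vip-srv1
      2 match virtual-address 10.10.10.100 tcp eq 8081
    class-map match-all vip-srv2
      2 match virtual-address 10.10.10.100 tcp eq 8082
    policy-map multi-match client-vips
      class vip-srv1
        loadbalance vip inservice
        loadbalance policy to-srv1
      class vip-srv2
        loadbalance vip inservice
        loadbalance policy to-srv2
    Here to-srv1 and to-srv2 would be first-match load-balancing policies, each pointing to a serverfarm that contains only the corresponding rserver.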
    Hope this helps!
    Jorge

  • Guest N+1 redundancy & load balancing in separate data centers

    I need assistance in acquiring documentation to set up N+1 redundancy & load balancing between two separate guest anchor controllers installed in separate data centers. Can you explain how it should be set up, or point me in the right direction for documentation? If you can't point me in the right direction to acquire documentation, can you answer the following questions?
    1) How do I set up my mobility groups on my guest anchor controllers installed in the DMZ? Should both guest anchors be in the same mobility group?
    2) Do both guest anchors share the same virtual IP, or do they need to be separate (DMZ01 - 1.1.1.1 / DMZ02 - 2.2.2.2)? I think separate!
    3) Are there any configuration parameters on the guest anchors for load balancing?
    4) Does either one of the guest anchors need to be set up as a master controller? I'm not sure.
    5) Are there any configuration parameters on the foreign controllers for load balancing?
    6) How do I set up my foreign controllers? Should both guest controllers be added to the mobility group on the foreign controller? I would think both of them would be added to the foreign controller mobility group.
    7) Should both guest anchors be added as an anchor on the WLAN? I would think both controllers would need to be added as anchors under the WLAN!
    8) Am I missing anything here? This is how I think it should logically work.
    Thanks,
    Gordon

    I need to elaborate on my questions:
    1) Do both of my guest DMZ anchors need to be in a separate mobility group of their own, or can the guest anchors be in completely separate mobility groups? All 100+ foreign controllers are in separate mobility groups.
    I) Example #1: Guest anchor number 1 (Mobility group: DMZ) / Guest anchor number 2 (Mobility group: DMZ)
    II) Example #2: Guest anchor number 1 (Mobility group: DMZ01) / Guest anchor number 2 (Mobility group: DMZ02)
    2) Do both guest anchor controllers have to be configured with separate virtual IPs, or do they share the same address?
    I) Follow-up to this question: I want to register the DMZ controllers with our DNS servers so that my clients receive a name when authenticating through my customized webauth. I am currently using 1.1.1.1 as the virtual address, and I'm pretty sure this is the address I need to register with my external DNS server. My question is this: does the address I use for the virtual interface matter? 1.1.1.1 is not a valid address within my network. Do I need to assign a valid address registered with my network if I'm going to add this address to my external DNS servers?
    3) No change to my original question.
    4) No change to my original question.
    5) No change to my original question. I have run into Cisco documentation that mentions guest anchor load balancing, but the documentation is very vague. I'd love to be able to load balance, as the network group wants to limit my guest traffic to the Internet. I could double my pipe if I could load balance the guest anchors.
    6) No change to my original question, but the answer to question one is key to the setup of my foreign controllers.
    7) Elaboration: Should both guest controllers be added as an anchor under the WLAN on the foreign controllers? I would think both of them would be added.
    8) No change.
    9) Should my secondary guest controller be added as an anchor on the WLAN of the primary guest DMZ controller, and vice versa?
    Can my Cisco expert answer this or do I need to open a TAC case?
    Thanks,
    Gordon Shelhon
    SR. Wireless Services Engineer

  • ACE module not load balancing across two servers

    We are seeing an issue in a context on one of our load balancers where an application doesn't appear to be load balancing correctly across the two real servers.  At various times the application team is seeing active connections on only one real server.  They see no connection attempts on the other server.  The ACE sees both servers as up and active within the serverfarm.  However, a show serverfarm confirms that the load balancer sees current connections only going to one of the servers.  The issue is fixed by restarting the application on the server that is not receiving any connections.  However, it reappears again.  And which server experiences the issue moves back and forth between the two real servers, so it is not limited to just one of the servers.
    The application vendor wants to know why the load balancer is periodically not sending traffic to one of the servers.  I'm kind of curious myself.  Does anyone have some tips on where we can look next to isolate the cause?
    We're running A2(3.3).  The ACE module was upgraded to that version of code on a Friday, and this issue started the following Monday.  The ACE has 28 contexts configured, and this one context is the only one reporting any issues since the upgrade.
    Here are the show serverfarm statistics as of today:
    ACE# show serverfarm farma-8000
    serverfarm     : farma-8000, type: HOST
    total rservers : 2
                                                    ----------connections-----------
           real                  weight state        current    total      failures
       ---+---------------------+------+------------+----------+----------+---------
       rserver: server#1
           x.x.x.20:8000      8      OPERATIONAL  0          186617     3839
       rserver: server#2
           x.x.x.21:8000      8      OPERATIONAL  67         83513      1754

    Are you enabling the sticky feature? What kind of predictor are you using?
    If the sticky feature is enabled and one rserver goes down, traffic will lean to one side.
    Even after the rserver returns to up, traffic may continue to lean because of stickiness.
    The behavior depends on the configuration.
    So, could you share the relevant part of your configuration?
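    For reference, a source-IP sticky configuration that would cause this kind of leaning looks roughly like this (just a sketch; the group and policy names are assumed):
    sticky ip-netmask 255.255.255.255 address source STICKY-8000
      timeout 30
      serverfarm farma-8000
    policy-map type loadbalance first-match slb-8000
      class class-default
        sticky-serverfarm STICKY-8000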
    Regards,
    Yuji

  • DAG across 2 Data Centers

    Looking for pros and cons of 2 potential Exchange 2013 implementations.
    ADSite1: 400 users
    ADSite2: 100 users
    ADSite3: 50 users
    Implementation 1: (DAG across two Data Centers without DAC implemented)
    ADSite1: ExchSrv1 (MBX/CAS) --- DAG (2 DB) --- ADSite2: ExchSrv2 (MBX/CAS)
    ADSite3: FSW
    Implementation 2: (DAG across two Data Centers with DAC implemented)
    ADSite1: ExchSrv1 (MBX/CAS) ExchSrv2 (MBX/CAS)--- DAG (2 DB) --- ADSite2: ExchSrv3 (MBX/CAS)
    Site3: FSW
    1. Am I gaining any true benefit from Implementation 2 (additional server in the primary site)? i.e. Implementation 1 covers me for HA and DR. Would it make sense to consolidate ADSite1 and ADSite2 into a single AD site for Implementation 1?
    2. In either case, is it OK to configure NLB for all the servers (for the CAS role)? So, if a user in ADSite2 hits the CAS in ADSite1, they could then be proxied to the ExchSrv in either AD site depending on where their MBX is.
    3. If all the MBX DBs in ADSite2 are replicas and not active, and a user hits the CAS in ADSite2, is this increasing network traffic by making ExchSrv3 (CAS) proxy to ExchSrv1 (MBX)? If so, does it not make sense to have the ADSite2 server only hosting replicas?

    Hello,
    1. I recommend you use Implementation 2. When you enable DAC mode, it will prevent split brain from occurring by including a protocol called Datacenter Activation Coordination Protocol (DACP). After a catastrophic failure, when the DAG recovers, it won't automatically mount databases even though the DAG has quorum. Instead, DACP is used to determine the current state of the DAG and whether Active Manager should attempt to mount the databases.
    2. You can deploy CAS NLB, but you can't deploy DAG+WNLB on the same servers. If you deploy NLB, the CAS will proxy traffic to the Mailbox servers hosting the active copies.
    3. If you deploy NLB, the ExchSrv3 (CAS) may proxy requests to the ExchSrv1 (MBX).
    Additional article for your reference.
    http://blogs.technet.com/b/exchange/archive/2013/01/25/exchange-2013-client-access-server-role.aspx
    Cara Chen
    TechNet Community Support

  • Using ACE for proxy server load balancing

    Hello groups,
    I wanted to know your experiences of using ACE for proxy server load balancing.
    I want to load balance to a pool of proxy servers. Note: the load balancing should be based on the HTTP URL (I can't use the source or destination IP address), so that a certain domain always gets "cached/forwarded" to the same proxy server. I don't really want to put matching criteria in the configuration (such as /a* to S1, /b* to S2, /c* to S3, etc.), but have this hash calculated automatically.
    Can the ACE compute its own hash based on the number of "online" proxy servers? i.e. when 4 servers are online, distribute domains between 1, 2, 3 and 4 evenly.
    Should server 4 fail, recalculate the hash so that the load of S4 gets distributed across the other 3 evenly. Also, the domains balanced to S1, S2 and S3 should not change if S4 fails.
    regards,
    Geert

    This is done with the following predictor command:
    Scimitar1/Admin# conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    Scimitar1/Admin(config)# serverfarm Proxy
    Scimitar1/Admin(config-sfarm-host)# predictor hash ?
      address         Configure 'hash address' Predictor algorithms
      content         Configure 'hash http content' Predictor algorithms
      cookie          Configure 'hash cookie' Predictor algorithms
      header          Configure 'hash header' Predictor algorithm
      layer4-payload  Configure 'hash layer4-payload' Predictor algorithms
      url             Configure 'hash url' Predictor algorithm
    Scimitar1/Admin(config-sfarm-host)# predictor hash url
    It hashes the URL, and the result dynamically takes into account the number of active proxies.
    This command was designed for exactly the kind of scenario you describe.
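    For completeness, a minimal serverfarm using this predictor might look like the following sketch (the rserver names and count are assumed):
    serverfarm host Proxy
      predictor hash url
      rserver proxy1
        inservice
      rserver proxy2
        inservice
      rserver proxy3
        inservice
    If one rserver goes out of service, the hash is recomputed over the remaining in-service proxies.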
    Gilles.

  • App.server load balancing for SAP System with 1 PS

    Hi,
    In SAP CPS 7.0 (Build M26.12) I have a SAP system with Central Instance + 10 App.servers, but all instances are managed by 1 ProcessServer.
    After activating the "App.server load balancing" setting in SAP system definition the application servers are becoming visible in CPS with their load factors (number of BGD wp's on app.servers) and load numbers (number of active jobs on app.servers).
    This is so far fine, but the additional functionality is not working as I would expect. I have issues with 2 functionalities:
    1. Based on the documentation, after also activating the XAL connection, CPS should submit the job on the app.server with the best performance, based on XAL monitoring data, by filling the TARGET_SERVER parameter.
    This functionality is not working for me at all.
    2. A useful functionality after activating the "App.server load balancing" setting is that the ProcessServer goes to "Overloaded" status when all BGD wp's of the SAP system are occupied, thus restricting submission of new jobs during an overload situation. But I had an issue with this functionality as well: after the SAP system recovered from the overload situation, CPS still remained in Overloaded status (so no new jobs were submitted).
    As a workaround I increased the threshold values for the loads on all app.servers for this SAP system, which was fine for several days, but after a while I believe this was the reason for unexpected performance issues in CPS, so I have deactivated the "App.server load balancing" setting altogether for this ProcessServer.
    I would appreciate your feedbacks with this functionality.
    Thanks and Regards,
    Ernest Liczki

    Hi Preetish,
    This connect string option is to load balance RFC connections. These are balanced upon login; once you are connected to a particular application server (AS) you stay on that server until you reconnect.
    Since CPS uses multiple RFC connections, this will result in the connections being distributed over the available AS resources, which is fine as long as they are generally evenly loaded. If you have certain AS hosts that are continuously more loaded than the rest, then you probably don't want the CPS RFC connections to end up on these servers.
    The original question is about load balancing of batch jobs over the available AS resources, and this is done independently of the RFC connection load balancing. Even if all CPS RFC connections are pinned to the DB/CI host, you can still load balance jobs over the available SAP AS hosts, either by using SAP's built-in balancing or the CPS algorithm, by activating the checkbox as indicated in the first entry in this thread.
    Finally, to reply to Ernest's question: I believe there are some fixes to the app load balancing in the latest release; M26.17 should be available on the SWDC now.
    Regards,
    Anton.

  • Load balancing across DMZs - Revisited

    I know this question has been asked before, and the answer is to have separate content switches per DMZ in order to maintain the security policy. There is also the option to place the content switch in front of the firewall and then use only one content switch to load balance across multiple DMZs. Is this an acceptable design, or is the recommendation to have a separate content switch behind the firewall for each DMZ of the firewall?
    Can a Cisco 6500 with CSM be configured for multiple Layer 2 load-balanced VLANs, thus achieving a multiple-DMZ load-balancing scenario with only one switch/CSM?

    How do you connect the router to the firewall?
    The problem is the response from the server to a client on the Internet.
    Traffic needs to get back to the CSS; if the firewall's default gateway is the router, the response will not go through the CSS and the CSS will reset the connection.
    If you configure the default gateway of the firewall to be the CSS, then all traffic from your network to the outside will go through the CSS.
    This could be a concern as well.
    If you don't need to know the IP address of the client for your reporting, you can enable client NAT on the CSS to guarantee that the server response is sent back to the CSS without having the firewall default gateway pointing at the CSS.
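    As an illustration, client NAT on the CSS is done with a source group; a minimal sketch (the service name, group name and addresses are assumed) could be:
    service server1
      ip address 192.168.10.10
      active
    group client-nat
      vip address 192.168.10.100
      add destination service server1
      active
    The group NATs the client source address to the group VIP for traffic sent to the listed services, so the servers reply to the CSS rather than straight to the firewall.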
    Gilles.

  • Load balancing across multiple machines

    I am looking for assistance in configuring Tuxedo to perform load balancing across multiple machines. I have successfully performed load balancing for a service across different servers hosted on one machine but not to another server that's hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

    Hello, Christina.
    Load balancing across multiple machines is a little bit different than on the same machine. One of the important resources in this kind of application is network bandwidth, so Tuxedo tries to keep the traffic among the machines as low as possible. So it only balances the load (calls services on another machine) when all the services are busy on the machine where they are called.
    I mean, if you have workstation clients attached to only one machine, then Tuxedo will call services on this machine until all servers are busy.
    If you want load balancing, try to put one WSL on each machine, and the corresponding configuration in your WSC (with the | to make Tuxedo randomly choose one or the other), or spread your native clients among all the machines.
    And be careful with the routing!
    Ramón Gordillo
    "Christina" <[email protected]> wrote:
    >
    > I am looking for assistance in configuring Tuxedo to perform load balancing across multiple machines. I have successfully performed load balancing for a service across different servers hosted on one machine but not to another server that's hosted on a different machine.
    > Any assistance in this matter is greatly appreciated.

  • Load balancing across multiple paths to Internet

    Hello,
    I have a 2821 router. Currently, I have two bonded T-1 circuits to the Internet.
    I would like to add a DSL circuit to augment the T1s. I would also like to load balance across all of the circuits. Currently, IOS performs inherent load balancing for the T1 circuits. The DSL circuit is from a different provider than the T1s.
    The T1s are coming from a local ISP that runs no routing protocols within their infrastructure. (They run static routes and rely on the upstream provider for BGP.) The DSL provider is a national telecom carrier.
    What is the best way to perform load balancing for this scenario?

    Here is the answer (sort of) for anyone reading this post with the same question:
    No matter which way I choose to do it, the trick is to have the local ISP subnet advertised via BGP through both pipes. The national telecom DSL provider will not advertise a /20 subnet down a DSL pipe. (Ahh, why not? =:)
    Had the secondary pipe been a T-1, T-3, or other traditional pipe, I could have used a load balancer like a BigIP or FatPipe device, or possibly CEF within IOS.
    Case closed. Thanks to everyone that took a look.
    Doug.

  • Dev6 Server Load Balancing

    Hi
    I am trying to install load balancing with Dev6/Patch2 and OAS 4.0.7.1 on 4 machines running WinNT Server 4 SP5. I tried to do it as described in the documentation, but I did not succeed. It seems that the documentation is incomplete or wrong. Could somebody give an example of how to set up the LB servers and clients as NT services?
    Thanks in advance
    Charly

    Hi Steven,
    No, LACP and SLB are different.
    LACP is the Link Aggregation Control Protocol, the protocol used within the IEEE 802.3ad (now 802.1AX) Link Aggregation mechanism to control the bundling and unbundling of physical links into an aggregate link.
    Server Load Balancing is a feature in IOS to load balance traffic destined to a virtual IP across a group of real IPs. From Configuring Server Load Balancing:
    The SLB feature is a Cisco IOS-based solution that provides IP server load balancing. Using the IOS SLB feature, the network administrator defines a virtual server that represents a group of real servers in a cluster of network servers known as a server farm.
    Server Load Balancing is effectively what the Cisco Application Control Engine (ACE) etc., does but in IOS.
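    For reference, a minimal IOS SLB configuration looks roughly like this (the addresses and names are assumed):
    ip slb serverfarm WEBFARM
     real 10.1.1.10
      inservice
     real 10.1.1.11
      inservice
    ip slb vserver WEBVIP
     virtual 10.1.1.100 tcp www
     serverfarm WEBFARM
     inservice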
    Regards

  • Server Load balancing issues

    The servers are load balancing between switches '3' and '2', and the link between the two switches is blocked.
    This link was forwarding before, so any traffic going to the servers was sent to them correctly no matter which switch they were active on.
    However, after the addition of another link between the switches 'root' and '1', the path cost to the root has decreased, and thus the link between '2' and '3' is blocking and the other link between '2' and '1' is forwarding, as it ideally should be. But this creates an issue, because traffic coming from outside, i.e. through switch '1', will only be sent to the server correctly if the server NIC is active on '3', because the virtual MAC addresses are bound accordingly. If the server fails over to the other NIC, which is on '2', the traffic won't be able to pass, because the MAC address is not bound on the trunk connecting switches '1' and '2'. This binding cannot be done, because the same MAC address is being learned on the other trunk on '1', which connects to 'root'. So if we bind the same virtual MAC on two trunks on the same switch ('1'), this will cause MAC address flapping on the switch and hence cannot be done.
    In another case, we are able to bind the virtual MAC on two trunks on the same switch ('1') and it works fine.
    The servers load balance in round-robin fashion. Each server has 2 NICs that work in Active-Passive mode. The servers load balance each other when all their active links are connected to switch '3', but when two of the active NICs of two servers are connected to '3' and the other two active NICs from the other two servers are connected to switch '2', then only the first 2 servers load balance and the other 2 servers do not.
    Please help.
    Thanks in advance.

    In my experience, server load balancing is one of the most difficult things to get going properly in a switched LAN environment. Switched LANs are designed so that one MAC address can be bound to only one switch port. Therefore, if you have two NICs with the same MAC address (real or virtual), then you will get flapping somewhere.
    I have seen various ways that manufacturers try to get around this limitation of switched LANs. For example, one technique I have seen, practiced by ISA Server, is to use a multicast MAC address for the service so that frames go to both exit ports. But that does not always work well unless you tweak the network to accommodate it.
    What sort of servers are they, and what system is used for the load balancing?
    Kevin Dorrell
    Luxembourg

  • IPTV load balancing across broadcast servers.

    I know that across Archive servers in the same cluster the IPTV Control Server will load balance; is there a similar function with Broadcast servers? I know Broadcast servers use a different delivery mechanism (multicast). We have multiple Broadcast servers that take in an identical live stream, but the only way to advertise it through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?

    No. There is no way to load balance across multiple broadcast servers for live streams. Since this is going to be multicast, there should not be any additional load on the servers when the number of users is higher.

  • Load balancing across database connection

    Do you provide load balancing across database connections and allow RDBMS load
    balancing for read only access?
    Thanks in advance.


  • Load balancing across 4 web servers in same datacentre - advice please

    Hi All
    I'm looking for some advice please.
    The apps team have asked me about load balancing across some servers, but I'm not that well up on it for applications.
    Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections to all these servers, and they all need the same DNS name etc.
    What load balancing methods would I need for this? I believe they run on Linux.
    Would I need some sort of device, or can the servers run some software that can do this? How would it work, and how would load balancing be achieved here?
    cheers

    Carl,
    What you have mentioned sounds very straightforward, so everything should go well.
    The ACE is a load balancer which makes load-balancing decisions based on different matching methods, like matching the virtual address, URL, source address, etc. Once the load-balancing decision has been taken, the ACE will load balance the traffic based on the load-balancing method you have configured (if you do not configure anything it will use the default, which is round robin), send the traffic to one of the servers it has available, and finally the client should get the content.
    If you want to get some details about the load balancing methods here you have them:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
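    For instance, changing the predictor from the default round robin to least connections on a serverfarm is a one-line command (the serverfarm name here is assumed, matching the sample further down):
    serverfarm host web
      predictor leastconns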
    For ACE deployments the most common designs are the following.
    Bridge Mode
    One Arm Mode
    Routed Mode
    Here you have a link for Bridge Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for One Arm Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for Routed Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    As you can see in all those links, you may end up with a configuration like this:
    interface vlan 40
      description "Default gateway of real servers"
      ip address 192.168.1.1 255.255.255.0
      service-policy input remote-access
      no shutdown
    ip route 0.0.0.0 0.0.0.0 172.16.1.1
    class-map match-all slb-vip
      2 match virtual-address 172.16.1.100 any
    policy-map multi-match client-vips
      class slb-vip
        loadbalance vip inservice
        loadbalance policy slb
    policy-map type loadbalance http first-match slb
      class class-default
        serverfarm web
    serverfarm host web
      rserver lnx1
        inservice
      rserver lnx2
        inservice
      rserver lnx3
        inservice
    rserver host lnx1
      ip address 192.168.1.11
      inservice
    rserver host lnx2
      ip address 192.168.1.12
      inservice
    rserver host lnx3
      ip address 192.168.1.13
      inservice
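    Once this is applied, a few standard ACE show commands can be used to verify it (the names match the sample above):
    show serverfarm web
    show rserver lnx1 detail
    show service-policy client-vips detail
    These confirm that the rservers are OPERATIONAL, that the VIP is in service, and how connections are being distributed.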
    Please mark this as answered if it resolved your question, so other users can use it as a reference in the future.
    Hope this helps!
    Jorge
