Additional Ways to Load Balance?

We are currently using the following configuration for a number of situations on our 11501 (No SSL module).
content webapps443
protocol tcp
port 443
balance leastconn
vip address 192.168.2.106
add service webapps1
add service webapps2
add service webapps3
redundant-index 136
advanced-balance sticky-srcip-dstport
active
Are there any additional options we could use to further load balance this traffic? We have multiple clients whose source addresses are NATed to the same address, so those clients all end up on the same server while other servers are less busy.
If I can't find any additional config options, we may have to look at getting a new box.

This is not a problem of a missing feature; it is a general issue.
With SSL traffic, if you do not have an SSL module, the only information available to the CSS [or any other load balancer, from any vendor] is what is contained in the IP, TCP, and SSL headers.
There is nothing useful in the TCP header.
The only option in the IP header is the source IP.
For the SSL header, you could use the SSL session ID (SSLID).
However, that option is not very reliable, as Internet Explorer is well known for changing this ID frequently within the same session.
Therefore, the only practical choice is the source IP, unless you can get an SSL module; then you can decrypt the traffic and use cookie-based stickiness.
Regards,
Gilles.
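For reference, here is a rough sketch of what SSL-session-ID stickiness could look like on a CSS 11500-series content rule, reusing the VIP and services from the configuration above (the rule name is a placeholder and this is illustrative rather than a verified configuration). As Gilles notes, it tends to behave poorly with browsers such as Internet Explorer that renegotiate the SSL session ID mid-session, so source-IP stickiness usually remains the safer choice without an SSL module.
content webapps443-sslid
! same VIP and services as the existing rule
protocol tcp
port 443
vip address 192.168.2.106
add service webapps1
add service webapps2
add service webapps3
! stick on the SSL session ID instead of source IP and destination port
application ssl
advanced-balance ssl
active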

Similar Messages

  • What's the best way to load balance multiple protocols on one vserver?

    Hi,
    We have a CSM blade on a 6513, in bridge mode. I'm just wondering what is the best way to serve HTTP and HTTPS (or any two or more ports) from the same group of servers. As I see it, we have two options:
    1. Don't set a port on the vserver, so it is load balancing "any" or "tcp". This is easy but I want to be sure there isn't a downside to this, other than the obvious security issue.
    2. Create multiple vservers and point them at the same serverfarm. I tried this and I got some odd results with the health checks.
    Any ideas? Thanks a lot.

    You listed the only two options available.
    The advantage of solution #2 is that you can apply protocol-specific configuration to each vserver; for HTTP, for example, you can turn on 'persistent rebalance' if needed.
    If you want to use specific probes [not just ICMP], it is also good practice to create a different serverfarm for each protocol, as sketched below.
    That way, if the HTTP service goes down but the server itself stays up, the other protocols can still be load balanced.
    Regards,
    Gilles.
    Thanks for rating this answer.
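    As a rough illustration of option #2 with a separate serverfarm and probe per protocol, here is a minimal CSM-style sketch (server IPs, names, the VIP, and the probe intervals are placeholders/assumptions, not recommendations):
    probe HTTP_80 http
      interval 5
    probe TCP_GENERIC tcp
      interval 5
    ! HTTP farm checked by the HTTP probe
    serverfarm HTTP_FARM
      nat server
      no nat client
      probe HTTP_80
      real 10.1.1.10
        inservice
      real 10.1.1.11
        inservice
    ! HTTPS farm checked by a generic TCP probe
    serverfarm HTTPS_FARM
      nat server
      no nat client
      probe TCP_GENERIC
      real 10.1.1.10
        inservice
      real 10.1.1.11
        inservice
    vserver WEB_HTTP
      virtual 10.1.1.100 tcp 80
      serverfarm HTTP_FARM
      persistent rebalance
      inservice
    vserver WEB_HTTPS
      virtual 10.1.1.100 tcp 443
      serverfarm HTTPS_FARM
      inservice
    With this layout, if the HTTP probe marks the reals in HTTP_FARM down, the HTTPS vserver keeps forwarding because its serverfarm is probed independently.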

  • Best way to load balance VPNs

    I have two ASA 5540s that I would like to configure for VPN load balancing. I had been looking at active/standby configurations, but I am curious whether that truly gives VPN load balancing, or whether it means all VPNs run on the active unit and then, when a failure happens, all VPNs move over to the standby unit. That isn't what I want.
    I have found some documents that talk about setting up a cluster, but I think those documents are telling me not to configure the two ASAs as an active/standby failover pair. Does that make sense?
    Anyway - what is the best way to accomplish VPN load balancing? In our setup these ASAs will only be handling VPNs (no firewalling will be done here).

    An active/standby failover pair configuration will provide for resiliency in the event of a hardware or software failure. One ASA is "Active" while the other is in a "Standby" mode. Config and state information is synchronized between the two devices. Only one ASA services client connections at any given time.
    Load balancing, on the other hand, allows you to configure a "cluster" with multiple participants. Each participating ASA can service client connections thus sharing the load. The following doc gives a good overview of load balancing and provides sample configurations.
    http://www.cisco.com/en/US/docs/security/asa/asa80/configuration/guide/vpnsysop.html#wp1048959

  • Oracle Weblogic Load Balancing/Clustering

    Can anyone tell me what the recommendation from Oracle is on how to best set up load balancing?
    We currently use the configuration.properties file to identify the 2 servers we load balance.  We will be implementing additional servers in the near future and we were wondering if this is the best way to load balance 4 or more weblogic servers or if there is some other way.
    We set up one cluster address to utilize these servers and use a separate hardware load balancer device.
    HRDEV\webserv\hrdev\applications\peoplesoft\PORTAL.war\WEB-INF\psftdocs\hrdev\configuration.properties
    # To enable jolt failover and load balancing, provide a list of application server
    # domains in the format of;  psserver=AppSrvr:JSLport,...
    # For example:  psserver=SERVER1:9000,SERVER2:9010,SERVER3:9020

    As this is PeopleSoft-specific configuration, please try posting here:
    https://community.oracle.com/community/oracle-applications/peoplesoft_enterprise/peoplesoft_general_discussion?customTheme=otn
    Best Regards
    Luz

  • Load balancing with only one Real Server on CSM

    Other than creating a vserver with only one real server in it, is there a way of load balancing when you have only one real server now and additional servers may be added later?

    The only way is to use a vserver (a minimal sketch follows below).
    Gilles.
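    For what it's worth, a minimal sketch of a single-real vserver on the CSM (names and addresses are placeholders); additional reals can simply be added to the serverfarm later:
    serverfarm APP_FARM
      nat server
      no nat client
      ! a single real for now; add more 'real ... inservice' entries as servers are deployed
      real 10.1.1.10
        inservice
    vserver APP_VIP
      virtual 10.1.1.100 tcp 80
      serverfarm APP_FARM
      inservice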

  • IPTV load balancing across broadcast servers.

    I know that across Archive servers in the same cluster the IPTV Control Server will load balance; is there a similar function for Broadcast servers? I know Broadcast servers use a different delivery mechanism (multicast). We have multiple Broadcast servers that take in an identical live stream, but the only way to advertise through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?

    No. There is no way to load balance across multiple Broadcast servers for live streams. Since this is going to be multicast, there should not be any additional load on the servers as the number of users grows.

  • Load balancing host named site collection

    I am jumping into the realm of host-named site collections. While the learning experience has been good, some questions remain unanswered. Please bear with me, since my questions are long.
    - I have a non-host-header site on port 80 with an HTTPS certificate added in IIS to support the app store in HTTPS mode.
    - I tried to create the host-named site collection using HTTPS in this default port 80 non-host-header web application and was greeted with an error. I then extended the web app to a different zone on port 443 and created the host-named site collection over HTTPS, passing the web application name of the extended 443 zone. Creation went fine.
    - I tried to use IPs on the now-extended IIS site and bind certificates on that one. The site does not load. I did the same on the default zone IIS site, bound IPs on that one, and the site loads. The question is: even though the host-named site collection was created using the extended web application URL, why did the binding have to be done on the default zone IIS site?
    - Second test: I changed the authentication mode for the extended zone, with no effect on the host-named site collection, but as soon as I changed it in the default zone it was reflected in the host-named site collection. I am confused why it needs the extended zone URL to create the HTTPS site when every change done in the default zone is reflected on this host-named site collection.
    Now for load balancing: it works fine with IPs, but how do I load balance these host-named site collections by URL? I talked with the F5 team and they said I need to send some reply string from each site. Where do I do that? Or is it even needed?
    According to this link: https://devcentral.f5.com/articles/name-based-virtual-hosting-with-ltm
    "If the site hosts an application, though, the monitor should request a dynamic page on each webserver which forces a transaction with the application to verify its health and returns a specific phrase upon success.
    For application monitoring, the recommended best practice is to create such a script specific to your application, configure the monitor Send string to call that script, and set the Receive string to match that phrase."
    Has anyone done this before? I tried to search for resources on this for IIS or SharePoint but was not able to find anything.
    Thank you for your patience in reading such a long question.
    Adit

    First part of the question:
    Default web application on port 80: creating the HTTPS host-named site collection fails.
    Extended default web application on port 443: the HTTPS host-named site collection is created when the web application name passed is that of the extended web application on port 443. This means the site collection is associated with the extended web application, correct? But all the changes made in IIS are only reflected if they are made to the port 80 web application. Also, when changing the authentication scheme from Central Admin, only changes in the default zone are reflected on the site collection, not those in the extended web application. Why, if the site was created with the extended web application as a parameter, are changes in the default zone reflected on it but not changes from the extended one?
    Second part of the question:
    Each host-named site collection, when load balanced through F5 using IPs for 3 WFEs, uses 3 IPs each. This way we will run out of IPs pretty soon. I want to know if there is a way to load balance these sites by hostname or any other parameter through F5, and whether anybody has done it.
    The https://devcentral.f5.com/articles/name-based-virtual-hosting-with-ltm link talks about sending a reply string from the application, but I do not know where to set that up or how to do it. There are no resources on the net; just asking if anyone else has done it.
    Adit

  • Load Balancing Windows File Share? Doable ??

    So I have a storage design question. The goal of this question is to find out how to best design a high-availability storage solution without a single point of failure (SPOF).
    We need a storage location that can be accessed via an SMB share in the format \\servername\sharename. We need an extremely highly available solution, as this share will host remote IIS files and other important files used by multiple web servers. The ideal solution would be a way to load balance an SMB file share so that it can be accessed as \\loadbalanced_name\sharename.
    I am not sure if such technology exists for Windows 2008, Windows Storage Server, or via a 3rd-party load balancing / file server / iSCSI solution.
    The current setup of our highly available file share uses Windows Clustering with 2 file servers attached to a single Direct Attached Storage (DAS) unit. Although we never really have an issue with this design, we would like to entirely eliminate the single point of failure of the DAS.
    So, any ideas / Input from anyone? 
    Thanks in advance!
    Ed

    In message <[email protected]>, ed at wowrack.com wrote:
    hmm, after thinking about this a little bit more, I guess I can use DFS.
    However, I used to run a corporate file server for a mid-size company of 400 people using Windows 2003 and I hated the replication to its guts. It doesn't seem to work well for large volumes...
    FRS had issues, especially with large volumes.  DFS-R solves that.
    More importantly, DFS and DFS-R can be used independently, so if you
    can't figure out how to get DFS-R working for you, you can always look
    into your own replication scheme and just use DFS to get clients to the
    appropriate server.

  • HOWTO: load balance based on source subnet

    Hi Guys,
    We are currently working out if there is a way to load balance specific subnets to a specific rserver within a server farm behind the one VIP.
    For example (all Rservers within one serverfarm Serv_farm001):
    Subnet 10.10.10.0/24  load balance to Rserver A ( with Rserver B as backup )
    Subnet 20.20.20.0/24  load balance to Rserver B ( with Rserver A as backup )
    I can see from the configuration guide that you could maybe use sticky src IP to do this, but I haven't seen anything to confirm this.
    Any takers on this? I'm sure it's a fairly common thing that others are doing out there.
    Looking fwd to the responses!
    Cheers
    R

    Hi Rob,
    You can either do this with an ACL on the incoming interface or, for easier management, you can do the following:
    class-map type http loadbalance match-any Subnet-A
      2 match source-address 10.10.10.0 255.255.255.0
    class-map type http loadbalance match-any Subnet-B
      2 match source-address 20.20.20.0 255.255.255.0
    policy-map type loadbalance first-match SLB
      class Subnet-A
        serverfarm A
      class Subnet-B
        serverfarm B
    HTH
    Pablo
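    To cover the backup requirement from the original question, the serverfarm line under each class can also name a backup farm, if I recall the ACE syntax correctly; a sketch, assuming serverfarms A and B are already defined:
    policy-map type loadbalance first-match SLB
      class Subnet-A
        serverfarm A backup B
      class Subnet-B
        serverfarm B backup A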

  • Load balancing UDP application in ACE

    Hi all,
    What's the proper way to load balance a UDP application (NTP) using the ACE? We used to do it on our CSS using a content rule to load balance and a source group to source-NAT the UDP replies from the servers to the VIP. I guess this should be implemented using NAT in the ACE, but I can't find any example.
    According to the manual, src-natting to VIPs is supported only in A1(8) and it is supposed to be used "when there is a limited number of real-world IP addresses on the client-side network".
    This is not our case; we just need to ensure that the client receives the UDP replies as coming from the VIP, not from the real IP address of the server. This is not a problem in TCP-based applications, because the NAT from the rserver IP to the VIP is automatic. What is the proper way to obtain this behaviour for UDP applications? Thanks a lot!
    Regards,
    Pedro

    Pedro,
    Reverse NATing is not required in the ACE world; it is done automatically.
    The server response will be automatically NATed to the VIP address on its way back to the client.
    If you have an appliance and are just deploying now, I would recommend version A3(2.1).
    If you have a module go for A2(1.3).
    Gilles
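    For anyone looking for a starting point, a minimal sketch of an NTP VIP on the ACE (the class-map/policy names, VIP address, VLAN, and serverfarm are placeholders; this is an illustration, not a verified configuration):
    class-map match-all NTP-VIP
      2 match virtual-address 192.168.1.100 udp eq 123
    policy-map type loadbalance first-match NTP-LB
      class class-default
        serverfarm NTP-FARM
    policy-map multi-match CLIENT-VIPS
      class NTP-VIP
        loadbalance vip inservice
        loadbalance policy NTP-LB
    interface vlan 100
      service-policy input CLIENT-VIPS
    As described above, the server replies should be translated back to the VIP automatically, so no explicit source group is needed as on the CSS.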

  • Load balancing for webservices

    I have a webservice I'd like to run from a cluster for scaling purposes. I don't see a way to load balance a SOAP-based webservice on the server. What (among many things) am I missing? I'm using WLS 7.0 SP4. Any ideas would be appreciated.
    Thanks

    The webservice is a web application, so you should be able to cluster it like any other web application. If you've implemented your service as a stateless session bean, then you should be able to cluster it just like any other stateless session bean. That is, there's really no difference when it comes to the physical implementation of a web service.
    ~Ryan Upton

  • Load balancing / fail over

    Dear forum,
    Is there another, simpler / "cheaper" way to do load balancing / failover than using RAC? E.g. starting more instances against the (same) database?
    Thanks in advance,
    Michel
    Edited by: Michel77 on Dec 9, 2008 12:06 PM

    The answer would be highly platform and version dependent.
    On several platforms you have software implementing a 'shared nothing cluster'. This means you failover complete services to a second server, which is usually dormant.
    There is HP Service Guard, Suncluster, IBM HACMP, and Microsoft cluster, which are or can be configured as shared nothing clusters.
    For Microsoft there was or is, depending on version, Oracle Failsafe, on top of a MS Cluster.
    RAC can be configured in a shared nothing mode too. RAC itself is essentially active/active.
    On the other side, you can of course buy a second server, and use Dataguard to set up a standby database.
    For a physical standby database that is not activated for more than 10 days, you don't pay a license.
    Setting up multiple instances on a single server is obviously not going to work. Those instances need to coordinate what they are doing. This is essentially the task of RAC.
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • OSPF load balancing across multiple port channels

    I have googled/searched for this everywhere but haven't been able to find a solution. Forgive me if I leave something out but I will try to convey all relevant information. Hopefully someone can provide some insight and many thanks in advance.
    I have three switches (A, B, and C) that are all running OSPF and LACP port channelling among themselves on a production network. Each port channel interface contains two physical interfaces and trunks a single vlan (so a vlan connecting each switch over a port channel). OSPF is running on each vlan interface.
    Switch A - ME3600
    Switch B - 3550
    Switch C - 3560G
    This is just a small part of a much larger topology. This part forms a triangle, if you will, where A is the source and C is the destination. A and C connect directly via a port channel and are OSPF neighbors, A and B connect directly via a port channel and are OSPF neighbors, and B and C connect directly via a port channel and are OSPF neighbors. Currently, all traffic from A to C traverses B.
    I would like to load balance traffic sourced from A with a destination of C across the direct link and the links through B. If all traffic is passed through B, traffic is evenly split on the two interfaces of that port channel. If all traffic is pushed onto the direct A-C link, traffic is evenly balanced on the two interfaces of that port channel.
    However, if OSPF load balancing is configured on the two VLANs from A (so A-C and A-B), the traffic is divided between the two port channels, but only one port in each port channel is utilized while the other passes nothing, so half of each port channel remains unused. The port channel on B-C continues to load balance, evenly splitting the traffic received from half of the port channel from A.
    A and C port channel load balancing is configured for src-dst-ip. B is a 3550 and does not have this option, so it is set to src-mac.
    Relevant configuration:
    Switch A:
    interface Port-channel1
    description Link to B
     port-type nni
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.134 255.255.255.254
    interface Port-channel3
    description Link to C
     port-type nni
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.152 255.255.255.254
    Switch B:
    interface Port-channel1
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.135 255.255.255.254
    interface Port-channel2
     description Link to C
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.186 255.255.255.254
    Switch C:
    interface Port-channel1
     description Link to B
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.187 255.255.255.254
    interface Port-channel3
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.153 255.255.255.254

    This is more of an FYI. 10.82.4.0/24 is a subnet on switch C. The path to it is split across VLANs 10 and 11, but once traffic hits the port-channel interfaces only one member of each is chosen. I'd like to avoid creating more VLAN interfaces, but right now that appears to be the only way to load balance equally across the four interfaces out of switch A.
    ME3600#sh ip route 10.82.4.0
    Routing entry for 10.82.4.0/24
      Known via "ospf 1", distance 110, metric 154, type extern 1
      Last update from x.x.x.153 on Vlan10, 01:20:46 ago
      Routing Descriptor Blocks:
        x.x.x.153, from 10.82.15.1, 01:20:46 ago, via Vlan10
          Route metric is 154, traffic share count is 1
      * x.x.x.135, from 10.82.15.1, 01:20:46 ago, via Vlan11
          Route metric is 154, traffic share count is 1
    ME3600#sh ip cef 10.82.4.0
    10.82.4.0/24
      nexthop x.x.x.135 Vlan11
      nexthop x.x.x.153 Vlan10
    ME3600#sh ip cef 10.82.4.0 internal       
    10.82.4.0/24, epoch 0, RIB[I], refcount 5, per-destination sharing
    sources: RIB 
    ifnums:
    Vlan10(1157): x.x.x.153
    Vlan11(1192): x.x.x.135
    path 093DBC20, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.135 Vlan11, adjacency IP adj out of Vlan11, addr x.x.x.135 08EE7560
    path 093DC204, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.153 Vlan10, adjacency IP adj out of Vlan10, addr x.x.x.153 093A4E60
    output chain:
    loadinfo 088225C0, per-session, 2 choices, flags 0003, 88 locks
    flags: Per-session, for-rx-IPv4
    16 hash buckets             
    < 0 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 1 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 2 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 3 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 4 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 5 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 6 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 7 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 8 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 9 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <10 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <11 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <12 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <13 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <14 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <15 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    Subblocks:                                                                                  
    None

  • Load balancing with failover questions

    If we install 2 multi-role Exchange servers in our building and a 3rd multi-role server in our remote data center, what is the best way to load balance them? Do we need two load balancers, or is there some way to span a single load balancer across the WAN?
    What about using Windows NLB as an alternative to using round robin internally?
    Can a load balancer keep our interoffice Exchange CAS traffic from leaving our LAN and only fail over to using the 3rd CAS/mailbox server for internal users if both internal Exchange servers are offline?
    We would also like remote users to "prefer" to use the data center CAS unless it is down. Right now we point our smart host directly to a CAS, but if we had a load balancer there, we could point the smart host to the IP of the load balancer, and the load balancer could normally send it to the data center CAS if it's up and forward it to one of the servers in the office otherwise.
    Is it possible to do all this without a very complicated and expensive solution?

    It depends: what is the connectivity speed between the two sites, and is it good enough?
    You can use a load balancer in front of all 3 CAS servers if your inter-site connectivity is very good.
    What about using Windows NLB as an alternative to using round robin internally? WNLB and round robin are different things. You can use DNS round robin if you want, or WNLB for all three CAS servers, or a hardware load balancer for all three CAS servers.
    Can a load balancer keep our interoffice Exchange CAS traffic from leaving our LAN and only fail over to using the 3rd CAS/mailbox server for internal users if both internal Exchange servers are offline? If you use a load balancer, you don't need to fail them over one by one; alternatively, you can use DNS round robin so requests go to each CAS server in turn, or use a hardware load balancer.
    We would also like remote users to "prefer" to use the data center CAS unless it is down. Right now we point our smart host directly to a CAS, but if we had a load balancer there, we could point the smart host to the IP of the load balancer, and the load balancer could normally send it to the data center CAS if it's up and forward it to one of the servers in the office otherwise.
    Use DNS and point the A record to the primary data center load-balanced CAS servers instead of using an IP or a hosts file.
    Hope that helps
    Where Technology Meets Talent

  • Network Load Balancing for AFP Sharing

    Dear all,
    Can anyone kindly teach me how to configure network load balancing with 2 Xserves?
    Currently I have succeeded in bonding 6 Ethernet ports into a virtual IP on a single machine, and I have link aggregation set up on my switch. It works fine.
    How do I configure 2 Xserves with 6 Ethernet ports each to share a single virtual IP?
    My switch does not support link aggregation to a virtual IP for load balancing, so I can only consider doing it at the software level.
    Does anyone know whether Leopard/Snow Leopard Server can do it? Or are there any suggestions for 3rd-party software that can do this?
    Thanks, experts!
    Karl

    It sounds like you need a load balancer. There's nothing built-in to Mac OS X Server that's going to support one virtual IP address shared across multiple physical servers.
    Your problem, though, is likely to be one of throughput - I don't know any cheap load balancers that will sustain that kind of throughput, so you may be looking at some serious $$$$s.
    There are some software-based load balancers that might be able to handle the load balancing side of things but many of them are designed around HTTP so might not work so well for other protocols.
    In addition, the software load balancer is going to suffer the same bottleneck as your AFP server, but doubly so: with two servers with 6 x 1 Gbps links each, you have a theoretical limit of 12 Gbps.
    To run that through a load balancer, the load balancer will need double that: 12 Gbps for the client side, plus 12 Gbps for the server side. In reality you're probably looking at needing 10 Gbps interfaces and switches if you're really pulling that much bandwidth.
