Load Balancing across Domains

We have been carrying out some tests with the WebLogic Tuxedo Connector (WTC),
and have observed some strange behaviour. Can anyone out there explain it?
We are using the 8.1 version of both Tux and WLS/WTC, and have set up a Tux domain
consisting of a master and slave machine (actually, both are set up on a single
Windows 2000 system using PMID). Each machine has a local domain (TDOM1 and TDOM2),
each going to a separate instance of a WebLogic Server (WDOM1 and WDOM2). The
domain config on the Tux side is pretty standard, and the only service listed
is as follows (TOLOWER is provided by each of the WLS instances via WTC):
*DM_REMOTE_SERVICES
TOLOWER RDOM="WDOM1"
TOLOWER RDOM="WDOM2"
We have modified the client calling TOLOWER to use a tpacall with TPNOREPLY (so
that we can fire off 1000 calls one after the other). Load Balancing is turned
on, and we have a NETLOAD value of 0. When we run the client on the slave machine
it load balances beautifully - 500 each through TDOM1 and TDOM2; but when we run
the same client on the master machine it sends the first request via the slave
(TDOM2) and the other 999 through the master (TDOM1). If we run the client on
the slave again immediately afterwards, it load balances perfectly as before,
so there does not seem to be any bottleneck.
One thing we noticed when doing a psr is that the count for the number of messages
is correct for each GWTDOMAIN server, but the load is always zero irrespective
of how many messages have been processed (we have not specified any load factors
- the *SERVICES section is empty).
Any ideas?
Thanks for any feedback,
Malcolm.
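
For reference, the load-related parameters in play sit in UBBCONFIG roughly as follows - a minimal sketch with illustrative names and values, not the actual test configuration:
*RESOURCES
LDBAL     Y
*MACHINES
# one entry per machine, plus the usual TUXCONFIG/TUXDIR/APPDIR settings
MASTERPC  LMID=SITE1  NETLOAD=0
SLAVEPC   LMID=SITE2  NETLOAD=0
*SERVICES
TOLOWER   LOAD=50
In the test above *SERVICES is empty, so TOLOWER should pick up Tuxedo's default LOAD of 50, which makes the zero load reported by psr against GWTDOMAIN all the more puzzling.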

We're looking into another load-balancing issue that might be related to
this one. I'll pass your message on to the person who is investigating it.

Similar Messages

  • Load balancing across DMZs - Revisited

    I know this question has been asked before, and the answer is to have separate content switches per DMZ in order to maintain the security policy. There is an option to place the content switch in front of the firewall and then use only one content switch to load balance across multiple DMZs. Is this an acceptable design, or is the recommendation to have a separate content switch behind the firewall for each DMZ?
    Can a Cisco 6500 with CSM be configured for multiple layer 2 load-balanced VLANs, thus achieving a multiple-DMZ load-balancing scenario with only one switch/CSM?

    How do you connect the router to the firewall?
    The problem is the response from the server to a client on the internet.
    Traffic needs to get back to the CSS; if the firewall's default gateway is the router, the response will not go through the CSS and the CSS will reset the connection.
    If you configure the default gateway of the firewall to be the CSS, then all traffic from your network to the outside will go through the CSS.
    This could be a concern as well.
    If you don't need to know the IP address of the client for your reporting, you can enable client NAT on the CSS to guarantee that the server response is sent to the CSS without having the firewall's default gateway pointing at the CSS.
    Gilles.
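
    A rough sketch of the client-NAT (source group) approach Gilles mentions - the group name, VIP address, and service names are made up:
    group dmz-client-nat
      vip address 10.10.10.10
      add destination service web1
      add destination service web2
      active
    With this in place the servers see the group VIP as the client address, so replies return via the CSS, at the cost of losing the real client IP in the server logs.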

  • Load balancing across multiple machines

    I am looking for assistance in configuring Tuxedo to perform load balancing across
    multiple machines. I have successfully performed load balancing for a service
    across different servers hosted on one machine but not to another server that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

    Hello, Christina.
    Load balancing across multiple machines is a little different from load
    balancing within a single machine. One of the important resources in this
    kind of application is network bandwidth, so Tuxedo tries to keep the
    traffic between machines as low as possible. It only balances the load
    (calls services on another machine) when all the servers offering the
    service are busy on the machine where the call is made.
    In other words, if you have workstation clients attached to only one
    machine, Tuxedo will call services on that machine until all of its
    servers are busy.
    If you want load balancing, try putting one WSL on each machine, with
    the corresponding configuration in your WSC (using the | so that Tuxedo
    randomly chooses one or the other), or spread your native clients across
    all the machines.
    And be careful with the routing!
    Ramón Gordillo
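
    A rough sketch of the pipe-separated WSNADDR Ramón describes, so that a workstation client picks one of the two WSLs at random (host names and ports are made up):
    # environment of the workstation client
    WSNADDR="//machine1:5000|//machine2:5000"
    export WSNADDR
    Each machine would also need its own WSL entry in the *SERVERS section of the UBBCONFIG.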
    "Christina" <[email protected]> wrote:
    >
    I am looking for assistance in configuring Tuxedo to perform load balancing
    across
    multiple machines. I have successfully performed load balancing for a
    service
    across different servers hosted on one machine but not to another server
    that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

  • Load balancing across multiple paths to Internet

    Hello,
    I have a 2821 router. Currently, I have two bonded T-1 circuits to the Internet.
    I would like to add a DSL circuit to augment the T1s. I would also like to load balance across all of the circuits. Currently, IOS performs inherent load balancing for the T1 circuits. The DSL circuit is from a different provider than the T1s.
    The T1s are coming from a local ISP that runs no routing protocols within their infrastructure. (They run static routes and rely on the upstream provider for BGP.) The DSL provider is a national telecom carrier.
    What is the best way to perform load balancing for this scenario?

    Here is the answer (sort of) for anyone reading this post with the same question:
    No matter which way I choose to do it, the trick is to have the local ISP subnet advertised via BGP through both pipes. The national telecom DSL provider will not advertise a /20 subnet down a DSL pipe. (Ahh, why not? =:)
    Had the secondary pipe been a T-1, T-3, or other traditional circuit, I could have used a load balancer like a BigIP or FatPipe device, or possibly CEF within IOS.
    Case closed. Thanks to everyone that took a look.
    Doug.

  • Load balancing across 4 web servers in same datacentre - advice please

    Hi All
    I'm looking for some advice, please.
    The apps team have asked me about load balancing across some servers, but I'm not that well up on it for applications.
    Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections across all these servers, and they all need the same DNS name etc.
    What load balancing methods would I need for this? I believe they run on Linux.
    Would I need some sort of device, or can the servers run some software that can do this? How would it work, and how would load balancing be achieved here?
    cheers

    Carl,
    What you have mentioned sounds very straightforward, so everything should go well.
    The ACE is a load balancer that takes its load-balancing decisions based on different matching methods, such as matching the virtual address, URL, or source address. Once the load-balancing decision has been taken, the ACE distributes the traffic according to the load-balance method you have configured (if you do not configure anything, it uses the default, which is round robin), sends the traffic to the servers it has available, and the client finally gets the content.
    If you want to get some details about the load balancing methods here you have them:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
    For ACE deployments the most common designs are the following.
    Bridge Mode
    One Arm Mode
    Routed Mode
    Here you have a link for Bridge Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for One Arm Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for Routed Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    As you can see in all of those links, you may end up with a configuration like this:
    interface vlan 40
      description "Default gateway of real servers"
      ip address 192.168.1.1 255.255.255.0
      service-policy input remote-access
      no shutdown
    ip route 0.0.0.0 0.0.0.0 172.16.1.1
    class-map match-all slb-vip
      2 match virtual-address 172.16.1.100 any
    policy-map multi-match client-vips
      class slb-vip
        loadbalance vip inservice
        loadbalance policy slb
    policy-map type loadbalance http first-match slb
      class class-default
        serverfarm web
    serverfarm host web
      rserver lnx1
        inservice
      rserver lnx2
        inservice
      rserver lnx3
        inservice
    rserver host lnx1
      ip address 192.168.1.11
      inservice
    rserver host lnx2
      ip address 192.168.1.12
      inservice
    rserver host lnx3
      ip address 192.168.1.13
      inservice
    Please mark it if it answered your question, so other users can use it as a reference in the future.
    Hope this helps!
    Jorge
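
    Once a configuration like that is applied, the farm and VIP state can be checked from the ACE context with commands along these lines (names as in the example above):
    show serverfarm web
    show rserver lnx1 detail
    show service-policy client-vips detail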

  • IPTV load balancing across broadcast servers.

    I know that the IPTV Control Server will load balance across Archive servers in the same cluster; is there a similar function for Broadcast servers? I know broadcast servers use a different delivery mechanism (multicast). We have multiple broadcast servers that take in an identical live stream, but the only way to advertise them is a separate URL per server. Is there some way to hide the multiple URLs from the client population?

    No. There is no way to load balance across multiple broadcast servers for live streams. Since this is going to be multicast, there should not be any additional load on the servers as the number of users grows.

  • ACE30 load balancing across two slightly different rservers

    Hi,
    is there a way to get load balancing across two rservers such that:
    when a client sends http://vip/ and it goes to rserver1, the URL is passed on unchanged
    when a client sends http://vip/ and it goes to rserver2, the URL is rewritten to http://vip/xyz/
    Or maybe load balancing can be done across two serverfarms?
    thanks

    Ryszard,
    I hope you are doing great.
    I do not think that's possible, since the ACE just load balances the traffic to the servers; once the load-balancing decision has been taken, it passes the "ball" to the chosen server.
    Think about it: let's say user A needs to go to Server1 but, based on the load-balancing decision, is sent to Server2, which unfortunately does not have what the user was looking for. Fine, user A closes the connection and tries again, but now Server1 is down, so the only server available is Server2 and the ACE sends the request to Server2 again; at that point user A just decides to leave. You can see how bad that can be.
    A better approach would be to have either 2 VIPs (different IP addresses) or 2 VIPs with the same IP address but listening on different ports, perhaps one port per server.
    Hope this helps!
    Jorge
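
    A rough sketch of the second option Jorge mentions (one VIP address, a different port per server); the names, address, and ports are assumptions, and each loadbalance policy would point at a serverfarm containing a single rserver:
    class-map match-all vip-app-a
      2 match virtual-address 172.16.1.100 tcp eq 8080
    class-map match-all vip-app-b
      2 match virtual-address 172.16.1.100 tcp eq 8081
    policy-map multi-match client-vips
      class vip-app-a
        loadbalance vip inservice
        loadbalance policy to-rserver1
      class vip-app-b
        loadbalance vip inservice
        loadbalance policy to-rserver2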

  • Load balancing across database connection

    Do you provide load balancing across database connections and allow RDBMS load
    balancing for read only access?
    Thanks in advance.


  • Load balancing across multiple application servers not working with JCo RFC

    We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
    Our system comprises the following:
    Central Instance (2 X server nodes)
    Database Instance
    2 X Dialog Instances (with 2 X server nodes each)
    The 1st application server that starts is usually the one that is used for inbound messaging.
    We have looked at the SAP gateway configuration and have tried various options without much luck,
    e.g. local gateways vs. one central gateway, and changing the load-balancing type via the parameter gw/reg_lb_level; see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
    Here are our release levels:
    SAP_ABA     700     0012     SAPKA70012
    SAP_BASIS     700     0012     SAPKB70012
    PI_BASIS     2005_1_700     0012     SAPKIPYJ7C
    ST-PI     2005_1_700     0005     SAPKITLQI5
    SAP_BW     700     0013     SAPKW70013
    ST-A/PI     01J_BCO700     0000          -
    Any help would be greatly appreciated.
    Many thanks
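
    For reference, the gateway balancing parameter mentioned above is set in the instance profile of each application server, along these lines (the value shown is only an example):
    # instance profile of the dialog instance
    gw/reg_lb_level = 1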

    Tim
    Did you follow the guide here:
    How to Scale Up SAP Exchange Infrastructure 3.0  
    Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
    We followed this guide for XI 3.0 and PI 7.0 and it worked successfully!

  • Server Load-balancing Across Two Data centers on Layer 3

    Hi,
    I have a customer who would like to load balance two Microsoft Exchange 2010 CAS servers residing in two different data centers.
    Which is the best solution for this? Cisco ACE or Cisco ACE GSS or both?

    I would go with source-NATting the clients' IP addresses, so that return traffic from the servers is routed correctly.
    It saves you the trouble of maintaining PBR as well.
    Source NAT can be done on the ACE, by applying the configuration to either the load balancing policy, or adding the configuration to the class-map entries in the multi-match policy.
    Cheers,
    Søren
    Sent from Cisco Technical Support iPad App
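
    A sketch of what that source-NAT setup typically looks like on the ACE - the VLAN number, pool ID, addresses, and class/policy names are assumptions:
    interface vlan 100
      nat-pool 1 10.10.100.50 10.10.100.50 netmask 255.255.255.255 pat
    policy-map multi-match client-vips
      class cas-vip
        loadbalance vip inservice
        loadbalance policy cas-lb
        nat dynamic 1 vlan 100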

  • OSPF load balancing across multiple port channels

    I have googled/searched for this everywhere but haven't been able to find a solution. Forgive me if I leave something out but I will try to convey all relevant information. Hopefully someone can provide some insight and many thanks in advance.
    I have three switches (A, B, and C) that are all running OSPF and LACP port channelling among themselves on a production network. Each port channel interface contains two physical interfaces and trunks a single vlan (so a vlan connecting each switch over a port channel). OSPF is running on each vlan interface.
    Switch A - ME3600
    Switch B - 3550
    Switch C - 3560G
    This is just a small part of a much larger topology. This part forms a triangle, if you will, where A is the source and C is the destination. A and C connect directly via a port channel and are OSPF neighbors. A and B connect directly via a port channel and are OSPF neighbors. B and C connect directly via a port channel and are OSPF neighbors. Currently, all traffic from A to C traverses B. I would like to load balance traffic sourced from A with a destination of C on the direct link and on the links through B. If all traffic is passed through B, traffic is evenly split on the two interfaces on the port channel. If all traffic is pushed onto the direct A-C link, traffic is evenly balanced on the two interfaces on that port channel. If OSPF load balancing is configured on the two vlans from A (so A-C and A-B), the traffic is divided to each port channel but only one port on each port channel is utilized while the other one passes nothing. So half of each port channel remains unused. The port channel on B-C continues to load balance, evenly splitting the traffic received from half of the port channel from A.
    A and C port channel load balancing is configured for src-dst-ip. B is a 3550 and does not have this option, so it is set to src-mac.
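
    For reference, those hashing methods are set with the global port-channel load-balance command, roughly as below; the options available vary by platform:
    ! switches A (ME3600) and C (3560G)
    port-channel load-balance src-dst-ip
    ! switch B (3550)
    port-channel load-balance src-mac
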
    Relevant configuration:
    Switch A:
    interface Port-channel1
     description Link to B
     port-type nni
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.134 255.255.255.254
    interface Port-channel3
     description Link to C
     port-type nni
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.152 255.255.255.254
    Switch B:
    interface Port-channel1
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.135 255.255.255.254
    interface Port-channel2
     description Link to C
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.186 255.255.255.254
    Switch C:
    interface Port-channel1
     description Link to B
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.187 255.255.255.254
    interface Port-channel3
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.153 255.255.255.254

    This is more of an FYI. 10.82.4.0/24 is a subnet on switch C. The path to it is split across VLANs 10 and 11, but once it hits the port-channel interfaces only one side of each is chosen. I'd like to avoid creating more VLAN interfaces, but right now that appears to be the only way to load balance equally across the four interfaces out of switch A.
    ME3600#sh ip route 10.82.4.0
    Routing entry for 10.82.4.0/24
      Known via "ospf 1", distance 110, metric 154, type extern 1
      Last update from x.x.x.153 on Vlan10, 01:20:46 ago
      Routing Descriptor Blocks:
        x.x.x.153, from 10.82.15.1, 01:20:46 ago, via Vlan10
          Route metric is 154, traffic share count is 1
      * x.x.x.135, from 10.82.15.1, 01:20:46 ago, via Vlan11
          Route metric is 154, traffic share count is 1
    ME3600#sh ip cef 10.82.4.0
    10.82.4.0/24
      nexthop x.x.x.135 Vlan11
      nexthop x.x.x.153 Vlan10
    ME3600#sh ip cef 10.82.4.0 internal       
    10.82.4.0/24, epoch 0, RIB[I], refcount 5, per-destination sharing
    sources: RIB 
    ifnums:
    Vlan10(1157): x.x.x.153
    Vlan11(1192): x.x.x.135
    path 093DBC20, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.135 Vlan11, adjacency IP adj out of Vlan11, addr x.x.x.135 08EE7560
    path 093DC204, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.153 Vlan10, adjacency IP adj out of Vlan10, addr x.x.x.153 093A4E60
    output chain:
    loadinfo 088225C0, per-session, 2 choices, flags 0003, 88 locks
    flags: Per-session, for-rx-IPv4
    16 hash buckets             
    < 0 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 1 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 2 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 3 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 4 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 5 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 6 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 7 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 8 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 9 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <10 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <11 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <12 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <13 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <14 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <15 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    Subblocks:                                                                                  
    None

  • Load Balancing across Multiple DMZ's

    Can you split one CSS 11503 across two separate DMZs securely? I have a group of servers that are currently being load balanced in one DMZ. I now have a requirement to load balance another set of servers in another DMZ. Is it possible to split the CSS across two DMZs and still maintain a high level of security?

    You need a separate CSS for each interface of the firewall.
    If you use the same CSS for 2 DMZs, inter-DMZ traffic will be routed by the CSS and will bypass the firewall.
    Gilles.

  • Sticky load balancing across 2 ports with cookies

    Hi,
    I have a server configuration with 1 top-level Apache server that deals with SSL termination (and handles static content) and proxy-passes dynamic content onto 2 Tomcat servers on 2 ports: one for HTTP requests (9001) and one for requests that were secure but have now been decrypted by Apache (9002). My 2 Tomcat servers are load balanced using a CSS, and I need this load balancing to stick to the Tomcat servers regardless of port, so that the user is stuck to the same Tomcat server for their entire session.
    I would like to use ArrowPoint cookies to provide this stickiness, but the documentation suggests that ArrowPoint cookie load balancing (in fact, any cookie-based load balancing) requires the port to be specified in the content rule. Is this correct? Is my only option to use the source IP for stickiness? I don't understand why the port should be required if the stickiness is via a cookie. Can I not simply configure my 2 Tomcat servers as services with no port and add a single content rule that load balances these services using arrowpoint-cookie advanced balancing?
    service tomcat1
      ip address x.x.x.x
      active
    service tomcat2
      ip address x.x.x.x
      active
    owner me
       content sticky
         vip address x.x.x.x
         protocol tcp
         url "/*"
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active

    Angela-
    The issue with the port is that cookies are very specifically HTTP-only, and the CSS has no way of knowing what protocol will hit a VIP before it tries to treat the traffic as HTTP. Your situation is actually a bit simpler than it first appears - you can still use 2 different rules by using the configuration below.
    However, you might be headed for a headache if you don't implicitly control the client's actions. By default, browsers don't generally send cookies cross-protocol and definitely not cross-domain. Use something like HttpWatch or IEWatch to check the headers your client sends to your site. Make sure that when the 200 OK arrives with the Set-Cookie, the client then sends that cookie in all subsequent requests, both HTTP and HTTPS.
    service tomcat1
      string "tomcat1"
      ip address x.x.x.x
      active
    service tomcat2
      string "tomcat2"
      ip address x.x.x.x
      active
    owner me
       content sticky9001
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9001
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
       content sticky9002
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9002
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
    With this configuration, the CSS will use the "string" as the cookie value. So if the client were to receive Set-Cookie: ArrowpointCookie=tomcat1, it should send it for either rule, and end up on tomcat1 when accessing either VIP.
    Regards,
    Chris

  • Tuxedo load balancing across system

    Hi there,
    does BEA Tuxedo version 8 or above have a feature, like WebLogic's, to configure load balancing/failover across multiple systems?
    Thanks,
    Simon

    Simon,
    Tuxedo does offer load balancing. A high-level description of this feature
    for Tuxedo 8.0 is at http://edocs.bea.com/tuxedo/tux80/atmi/intatm24.htm
    A description of the configuration file parameters used to set up load
    balancing is at http://edocs.bea.com/tuxedo/tux80/atmi/rf537.htm ; look for
    the LDBAL, NETLOAD, and LOAD parameters.
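
    A minimal sketch of where those three parameters sit in a UBBCONFIG file (names and values are illustrative):
    *RESOURCES
    LDBAL     Y
    *MACHINES
    # one entry per machine, plus the usual TUXCONFIG/TUXDIR/APPDIR settings
    "host1"   LMID=SITE1  NETLOAD=100
    "host2"   LMID=SITE2  NETLOAD=100
    *SERVICES
    TOUPPER   LOAD=50
    A non-zero NETLOAD is added to the load of services offered on a remote machine, so local servers are favoured until they become comparatively busy.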

  • ACE module not load balancing across two servers

    We are seeing an issue in a context on one of our load balancers where an application doesn't appear to be load balancing correctly across the two real servers.  At various times the application team is seeing active connections on only one real server.  They see no connection attempts on the other server.  The ACE sees both servers as up and active within the serverfarm.  However, a show serverfarm confirms that the load balancer sees current connections only going to one of the servers.  The issue is fixed by restarting the application on the server that is not receiving any connections.  However, it reappears again.  And which server experiences the issue moves back and forth between the two real servers, so it is not limited to just one of the servers.
    The application vendor wants to know why the load balancer is periodically not sending traffic to one of the servers.  I'm kind of curious myself.  Does anyone have some tips on where we can look next to isolate the cause?
    We're running A2(3.3).  The ACE module was upgraded to that version of code on a Friday, and this issue started the following Monday.  The ACE has 28 contexts configured, and this one context is the only one reporting any issues since the upgrade.
    Here are the show serverfarm statistics as of today:
    ACE# show serverfarm farma-8000
    serverfarm     : farma-8000, type: HOST
    total rservers : 2
                                                    ----------connections-----------
           real                  weight state        current    total      failures
       ---+---------------------+------+------------+----------+----------+---------
       rserver: server#1
           x.x.x.20:8000      8      OPERATIONAL  0          186617     3839
       rserver: server#2
           x.x.x.21:8000      8      OPERATIONAL  67         83513      1754

    Are you enabling the sticky feature? What kind of predictor are you using?
    If the sticky feature is enabled and one rserver goes down, traffic will lean to one side.
    Even after the rserver returns to an up state, traffic may continue to lean because of the sticky feature.
    The behavior seems to depend on the configuration.
    So, could you share the relevant part of the configuration?
    Regards,
    Yuji
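
    For reference, the sticky and predictor settings being asked about would look something like this in the context (names and values assumed):
    serverfarm host farma-8000
      predictor leastconns
      rserver server1 8000
        inservice
      rserver server2 8000
        inservice
    sticky ip-netmask 255.255.255.255 address source farma-sticky
      timeout 30
      serverfarm farma-8000
    When a sticky group is used, the loadbalance policy references it with "sticky-serverfarm farma-sticky" instead of pointing at the serverfarm directly.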
