Load balancing across DMZs - Revisited
I know this question has been asked before, and the answer is to have separate content switches per DMZ in order to maintain the security policy. There is also an option to place the content switch in front of the firewall and use a single content switch to load balance across multiple DMZs. Is this an acceptable design, or is the recommendation to have a separate content switch behind the firewall for each DMZ of the firewall?
Can a Cisco 6500 with a CSM be configured for multiple Layer 2 load-balanced VLANs, thus achieving a multiple-DMZ load balancing scenario with only one switch/CSM?
How do you connect the router to the firewall?
The problem is the response from the server to a client on the internet. Traffic needs to get back to the CSS; if the firewall's default gateway is the router, the response will not go through the CSS, and the CSS will reset the connection.
If you configure the default gateway of the firewall to be the CSS, then all traffic from your network to the outside will go through the CSS. This could be a concern as well.
If you don't need to know the IP address of the client for your reporting, you can enable client NAT on the CSS to guarantee that the server response is sent back to the CSS without having the firewall's default gateway point at the CSS.
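For reference, client (source) NAT on the CSS is configured with a source group; a minimal sketch, assuming hypothetical service names and addresses:

group client_nat
  vip address 10.1.1.100
  add destination service web1
  add destination service web2
  active

With "add destination service", the source address of client traffic load balanced to those services is translated to the group VIP, so the servers reply to the CSS rather than routing the response directly toward the client.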
Gilles.
Similar Messages
-
Load balancing across multiple machines
I am looking for assistance in configuring Tuxedo to perform load balancing across
multiple machines. I have successfully performed load balancing for a service
across different servers hosted on one machine but not to another server that's
hosted on a different machine.
Any assistance in this matter is greatly appreciated.
Hello, Christina.
Load balancing across multiple machines works a little differently than load balancing within a single machine. One of the important resources in this kind of application is network bandwidth, so Tuxedo tries to keep the traffic among the machines as low as possible. It only balances the load (calls services on another machine) when all the services are busy on the machine where they are called.
In other words, if you have workstation clients attached to only one machine, then Tuxedo will call services on this machine until all servers are busy.
If you want load balancing, try to put one WSL on each machine, and the corresponding configuration in your WSC (with the "|" to make Tuxedo randomly choose one or the other), or spread your native clients among all the machines.
And be careful with the routing!
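The "|" goes in the workstation client's WSNADDR setting; a sketch, assuming two hypothetical WSL listening addresses:

WSNADDR=//hostA:5000|//hostB:5000

With "|" as the separator, the workstation client picks one of the addresses at random for each connection, which spreads the clients (and therefore the load) across the machines.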
Ramón Gordillo
"Christina" <[email protected]> wrote:
>
I am looking for assistance in configuring Tuxedo to perform load balancing
across
multiple machines. I have successfully performed load balancing for a
service
across different servers hosted on one machine but not to another server
that's
hosted on a different machine.
Any assistance in this matter is greatly appreciated. -
Load balancing across multiple paths to Internet
Hello,
I have a 2821 router. Currently, I have two bonded T-1 circuits to the Internet.
I would like to add a DSL circuit to augment the T1s. I would also like to load balance across all of the circuits. Currently, IOS performs inherent load balancing for the T1 circuits. The DSL circuit is from a different provider than the T1s.
The T1s are coming from a local ISP that runs no routing protocols within their infrastructure. (They run static routes and rely on the upstream provider for BGP.) The DSL provider is a national telecom carrier.
What is the best way to perform load balancing for this scenario?
Here is the answer (sort of) for anyone reading this post with the same question:
No matter which way I choose to do it, the trick is to have the local ISP subnet advertised via BGP through both pipes. The national telecom DSL provider will not advertise a /20 subnet down a DSL pipe. (Ahh, why not? =:)
Had the secondary pipe been a T-1, T-3, or other traditional circuit, I could have used a load balancer like a BIG-IP or FatPipe device, or possibly CEF within IOS.
Case closed. Thanks to everyone that took a look.
Doug. -
Load balancing across 4 web servers in same datacentre - advice please
Hi All
I'm looking for some advice, please.
The apps team has asked me about load balancing across some servers, but I'm not that well up on it for applications.
Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections across all these servers, and they all need the same DNS name etc.
What load balancing methods would I need for this? I believe they run on Linux.
Would I need some sort of device, or can the servers run software that does this? How would it work, and how would load balancing be achieved here?
Cheers
Carl,
What you have described sounds very straightforward, so everything should go well.
The ACE is a load balancer which makes its load balancing decisions based on different matching methods, such as matching the virtual address, URL, or source address. Once the decision has been taken, the ACE load balances the traffic based on the load balancing method you have configured (if you do not configure anything, it uses the default, which is round robin), sends the traffic to an available server, and the client gets the content.
If you want to get some details about the load balancing methods here you have them:
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
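For example, the load balancing method (predictor) is set under the serverfarm; a minimal sketch, with hypothetical serverfarm and rserver names:

serverfarm host web
  predictor leastconns
  rserver lnx1
    inservice
  rserver lnx2
    inservice

If no predictor line is present, the ACE falls back to the default round-robin behaviour described above.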
For ACE deployments the most common designs are the following.
Bridge Mode
One Arm Mode
Routed Mode
Here you have a link for Bridge Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
Here you have a link for One Arm Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
Here you have a link for Routed Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
As you can see in those links, you may end up with a configuration like this:
interface vlan 40
  description "Default gateway of real servers"
  ip address 192.168.1.1 255.255.255.0
  service-policy input remote-access
  no shutdown
ip route 0.0.0.0 0.0.0.0 172.16.1.1
class-map match-all slb-vip
  2 match virtual-address 172.16.1.100 any
policy-map multi-match client-vips
  class slb-vip
    loadbalance vip inservice
    loadbalance policy slb
policy-map type loadbalance http first-match slb
  class class-default
    serverfarm web
serverfarm host web
  rserver lnx1
    inservice
  rserver lnx2
    inservice
  rserver lnx3
    inservice
rserver host lnx1
  ip address 192.168.1.11
  inservice
rserver host lnx2
  ip address 192.168.1.12
  inservice
rserver host lnx3
  ip address 192.168.1.13
  inservice
Please mark this post if it answered your question so other users can use it as a reference in the future.
Hope this helps!
Jorge -
IPTV load balancing across broadcast servers.
I know that the IPTV Control Server will load balance across Archive Servers in the same cluster; is there a similar function for Broadcast Servers? I know Broadcast Servers use a different delivery mechanism (multicast). We have multiple Broadcast Servers that take in an identical live stream, but the only way to advertise them through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?
No. There is no way to load balance across multiple Broadcast Servers for live streams. Since this is multicast, there should not be any additional load on the servers as the number of users grows.
-
ACE30 load balancing across two slightly different rservers
Hi,
is there a possibility to get load balancing across two rservers such that:
when a client sends http://vip/ and it goes to rserver1, the URL is sent without change
when a client sends http://vip/ and it goes to rserver2, the URL is modified to http://vip/xyz/
Or maybe load balancing can be done across two serverfarms?
Thanks
Ryszard,
I hope you are doing great.
I do not think that's possible, since the ACE just load balances the traffic to the servers; once the load balancing decision has been taken, it passes the "ball" to the chosen server.
Think about it: let's say user A needs to go to Server1, but based on the load balancing decision the request is sent to Server2, which unfortunately does not have what the user was looking for. Fine, user A closes the connection and tries again, but now Server1 is down, so the only available server is Server2 and the ACE sends the request there again, at which point user A just decides to leave. You can see how bad that can be.
A better approach would be to have either two VIPs (different IP addresses), or two VIPs with the same IP address but listening on different ports, perhaps one port per server.
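That second approach could be sketched on the ACE like this (addresses and ports are hypothetical); each class would then be tied to a load balancing policy whose serverfarm contains a single server:

class-map match-all vip-server1
  2 match virtual-address 172.16.1.100 tcp eq 8001
class-map match-all vip-server2
  2 match virtual-address 172.16.1.100 tcp eq 8002

Clients wanting the rserver1 content would use port 8001, and those wanting the rserver2 content would use port 8002.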
Hope this helps!
Jorge -
Load balancing across database connection
Do you provide load balancing across database connections and allow RDBMS load
balancing for read only access?
Thanks in advance. -
Load Balancing across Multiple DMZ's
Can you split one CSS 11503 across two separate DMZs securely? I have a group of servers that are currently being load balanced in one DMZ. I now have a requirement to load balance another set of servers in another DMZ. Is it possible to split the CSS across two DMZs and still maintain a high level of security?
You need a separate CSS for each interface of the firewall.
If you use the same CSS for two DMZs, inter-DMZ traffic will be routed by the CSS and will bypass the firewall.
Gilles. -
Load balancing across multiple application servers not working with JCo RFC
We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
Our system comprises of the following:
Central Instance (2 X server nodes)
Database Instance
2 X Dialog Instances (with 2 X server nodes each)
The 1st application server that starts is usually the one that is used for inbound messaging.
We have looked at the SAP gateway configuration and have tried various options without much luck,
i.e.: local gateways vs. one central gateway, and changing the load balancing type via parameter gw/reg_lb_level, see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
Here are our release levels:
SAP_ABA 700 0012 SAPKA70012
SAP_BASIS 700 0012 SAPKB70012
PI_BASIS 2005_1_700 0012 SAPKIPYJ7C
ST-PI 2005_1_700 0005 SAPKITLQI5
SAP_BW 700 0013 SAPKW70013
ST-A/PI 01J_BCO700 0000 -
Any help would be greatly appreciated.
Many thanks
Tim
Did you follow the guide here:
How to Scale Up SAP Exchange Infrastructure 3.0
Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
We followed this guide for XI 3.0 and PI 7.0 and it works successfully! -
Server Load-balancing Across Two Data centers on Layer 3
Hi,
I have a customer who would like to load balance two Microsoft Exchange 2010 CAS Servers which are residing across two data centers.
Which is the best solution for this? Cisco ACE, Cisco ACE GSS, or both?
I would go with source NATting the clients' IP addresses, so that return traffic from the servers is routed correctly.
It saves you the trouble with maintaining PBR as well.
Source NAT can be done on the ACE, by applying the configuration to either the load balancing policy, or adding the configuration to the class-map entries in the multi-match policy.
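A sketch of what that looks like on the ACE, assuming a hypothetical NAT pool number and address on the server-facing VLAN:

interface vlan 40
  nat-pool 1 192.168.1.200 192.168.1.200 netmask 255.255.255.255 pat
policy-map multi-match client-vips
  class slb-vip
    loadbalance vip inservice
    loadbalance policy slb
    nat dynamic 1 vlan 40

With "nat dynamic", client source addresses are translated to the pool address on VLAN 40, so the servers' return traffic always comes back through the ACE.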
Cheers,
Søren
Sent from Cisco Technical Support iPad App -
OSPF load balancing across multiple port channels
I have googled/searched for this everywhere but haven't been able to find a solution. Forgive me if I leave something out but I will try to convey all relevant information. Hopefully someone can provide some insight and many thanks in advance.
I have three switches (A, B, and C) that are all running OSPF and LACP port channelling among themselves on a production network. Each port channel interface contains two physical interfaces and trunks a single vlan (so a vlan connecting each switch over a port channel). OSPF is running on each vlan interface.
Switch A - ME3600
Switch B - 3550
Switch C - 3560G
This is just a small part of a much larger topology. This part forms a triangle, if you will, where A is the source and C is the destination. A and C connect directly via a port channel and are OSPF neighbors. A and B connect directly via a port channel and are OSPF neighbors. B and C connect directly via a port channel and are OSPF neighbors. Currently, all traffic from A to C traverses B.
I would like to load balance traffic sourced from A with a destination of C on the direct link and on the links through B. If all traffic is passed through B, traffic is evenly split on the two interfaces on the port channel. If all traffic is pushed onto the direct A-C link, traffic is evenly balanced on the two interfaces on that port channel.
If OSPF load balancing is configured on the two vlans from A (so A-C and A-B), the traffic is divided to each port channel but only one port on each port channel is utilized while the other one passes nothing. So half of each port channel remains unused. The port channel on B-C continues to load balance, evenly splitting the traffic received from half of the port channel from A.
A and C port channel load balancing is configured for src-dst-ip. B is a 3550 and does not have this option, so it is set to src-mac.
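If I follow correctly, the hashing happens twice: CEF on A picks one of the two OSPF next hops per source/destination pair, and then the port-channel hash picks a member link from the same header fields, so a single src/dst pair can only ever land on one member of one channel. One thing worth checking (commands from IOS; availability varies by platform) is whether including Layer 4 ports in the CEF hash spreads real multi-flow traffic better:

ip cef load-sharing algorithm include-ports source destination
port-channel load-balance src-dst-ip

With only a single src/dst IP pair in a test, though, both hashes will still pin the traffic to one link.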
Relevant configuration:
Switch A:
interface Port-channel1
description Link to B
port-type nni
switchport trunk allowed vlan 11
switchport mode trunk
interface Vlan11
ip address x.x.x.134 255.255.255.254
interface Port-channel3
description Link to C
port-type nni
switchport trunk allowed vlan 10
switchport mode trunk
interface Vlan10
ip address x.x.x.152 255.255.255.254
Switch B:
interface Port-channel1
description Link to A
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 11
switchport mode trunk
interface Vlan11
ip address x.x.x.135 255.255.255.254
interface Port-channel2
description Link to C
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 12
switchport mode trunk
interface Vlan12
ip address x.x.x.186 255.255.255.254
Switch C:
interface Port-channel1
description Link to B
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 12
switchport mode trunk
interface Vlan12
ip address x.x.x.187 255.255.255.254
interface Port-channel3
description Link to A
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 10
switchport mode trunk
interface Vlan10
ip address x.x.x.153 255.255.255.254
This is more FYI: 10.82.4.0/24 is a subnet on switch C. The path to it is split across VLANs 10 and 11, but once it hits the port-channel interfaces only one side of each is chosen. I'd like to avoid creating more VLAN interfaces, but right now that appears to be the only way to load balance equally across the four interfaces out of switch A.
ME3600#sh ip route 10.82.4.0
Routing entry for 10.82.4.0/24
Known via "ospf 1", distance 110, metric 154, type extern 1
Last update from x.x.x.153 on Vlan10, 01:20:46 ago
Routing Descriptor Blocks:
x.x.x.153, from 10.82.15.1, 01:20:46 ago, via Vlan10
Route metric is 154, traffic share count is 1
* x.x.x.135, from 10.82.15.1, 01:20:46 ago, via Vlan11
Route metric is 154, traffic share count is 1
ME3600#sh ip cef 10.82.4.0
10.82.4.0/24
nexthop x.x.x.135 Vlan11
nexthop x.x.x.153 Vlan10
ME3600#sh ip cef 10.82.4.0 internal
10.82.4.0/24, epoch 0, RIB[I], refcount 5, per-destination sharing
sources: RIB
ifnums:
Vlan10(1157): x.x.x.153
Vlan11(1192): x.x.x.135
path 093DBC20, path list 0937412C, share 1/1, type attached nexthop, for IPv4
nexthop x.x.x.135 Vlan11, adjacency IP adj out of Vlan11, addr x.x.x.135 08EE7560
path 093DC204, path list 0937412C, share 1/1, type attached nexthop, for IPv4
nexthop x.x.x.153 Vlan10, adjacency IP adj out of Vlan10, addr x.x.x.153 093A4E60
output chain:
loadinfo 088225C0, per-session, 2 choices, flags 0003, 88 locks
flags: Per-session, for-rx-IPv4
16 hash buckets
< 0 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
< 1 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
< 2 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
< 3 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
< 4 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
< 5 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
< 6 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
< 7 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
< 8 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
< 9 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
<10 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
<11 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
<12 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
<13 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
<14 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
<15 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
Subblocks:
None -
Tuxedo load balancing across system
Hi there,
does BEA Tuxedo version 8 or above have a feature like WebLogic to configure load balancing/failover across multiple systems?
Thanks,
Simon
Simon,
Tuxedo does offer load balancing. A high-level description of this feature
for Tuxedo 8.0 is at http://edocs.bea.com/tuxedo/tux80/atmi/intatm24.htm
A description of the configuration file parameters used to set up load
balancing is at http://edocs.bea.com/tuxedo/tux80/atmi/rf537.htm ; look for
the LDBAL, NETLOAD, and LOAD parameters.
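For reference, a minimal UBBCONFIG excerpt that turns those parameters on (machine and service names are made up, and a real *MACHINES entry needs further parameters such as TUXDIR and APPDIR):

*RESOURCES
LDBAL      Y
*MACHINES
hostA      LMID=SITE1  NETLOAD=100
hostB      LMID=SITE2  NETLOAD=100
*SERVICES
TOUPPER    LOAD=50

LDBAL enables load balancing, LOAD is the per-request cost of a service, and NETLOAD is an extra cost added to remote requests so that Tuxedo prefers local servers.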
-
ACE module not load balancing across two servers
We are seeing an issue in a context on one of our load balancers where an application doesn't appear to be load balancing correctly across the two real servers. At various times the application team is seeing active connections on only one real server. They see no connection attempts on the other server. The ACE sees both servers as up and active within the serverfarm. However, a show serverfarm confirms that the load balancer sees current connections only going to one of the servers. The issue is fixed by restarting the application on the server that is not receiving any connections. However, it reappears again. And which server experiences the issue moves back and forth between the two real servers, so it is not limited to just one of the servers.
The application vendor wants to know why the load balancer is periodically not sending traffic to one of the servers. I'm kind of curious myself. Does anyone have some tips on where we can look next to isolate the cause?
We're running A2(3.3). The ACE module was upgraded to that version of code on a Friday, and this issue started the following Monday. The ACE has 28 contexts configured, and this one context is the only one reporting any issues since the upgrade.
Here are the show serverfarm statistics as of today:
ACE# show serverfarm farma-8000
serverfarm : farma-8000, type: HOST
total rservers : 2
----------connections-----------
real weight state current total failures
---+---------------------+------+------------+----------+----------+---------
rserver: server#1
x.x.x.20:8000 8 OPERATIONAL 0 186617 3839
rserver: server#2
x.x.x.21:8000 8 OPERATIONAL 67 83513 1754
Are you enabling the sticky feature? What kind of predictor are you using?
If the sticky feature is enabled and one rserver goes down, traffic will lean to one side.
Even after the rserver returns to an up state, traffic may continue to lean due to stickiness.
The behavior seems to depend on the configuration, so could you share the relevant part of it?
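For reference, a typical ACE sticky configuration looks like this (group and policy names hypothetical); if something along these lines is present, it would explain the leaning:

sticky ip-netmask 255.255.255.255 address source app-sticky
  serverfarm farma-8000
  timeout 30
policy-map type loadbalance first-match lb-policy
  class class-default
    sticky-serverfarm app-sticky

The sticky group keeps each client source IP pinned to the rserver it first landed on until the entry times out.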
Regards,
Yuji -
Load balancing across remote systems
Hi everyone,
When a request is routed to a remote machine (assuming that the service is not
available on the local machine) load balancing is based on the load factors (plus
NETLOAD, if set) of requests previously sent to the remote systems from the local
system. My question is: is the load accumulation ever reset? As I see it, a
reset could probably happen in one or more of the following ways:
1. A rolling time period that will cause earlier load values to be progressively
discarded.
2. The load accumulations are reset every time a system is booted.
3. The load accumulations are reset whenever the tuxconfig is modified.
4. The accumulations are reset when the tuxconfig file is removed and recreated
(I guess this is obvious, assuming this is where the accumulations would be stored
between boots).
The question is prompted by the following scenario. Machines A, B and C have
been configured to offer the same service, but the service has only been enabled
on A and B. Machine D does not offer this service, and therefore requests from
D will be balanced between A and B based on the accumulated load recorded on D.
This situation has been in operation for a couple of months without rebooting
(quite possible - after all, this is Tuxedo we are talking about!), and it has
now been decided to activate the service on machine C (without rebooting). The
big question is: Will all the requests be routed to C until such time as it catches
up to the load processed by A and B?
Thanks to anyone who can shed some light on this.
Regards,
Malcolm.
Where are you looking for the load accumulation?
"load done" with psr command?
In my case, with Tux 6.5, these stats are only reset with a reboot.
Christian
"Malcolm Freeman" <[email protected]> wrote:
>
Thanks, Scott - I found confirmation that stats are reset every SANITYSCAN interval in some old 4.2.1 documentation.
Regards,
Malcolm.
Scott Orshan <[email protected]> wrote:
Malcolm,
This was an issue a number of years ago, back in one of the early 4.x releases. But now the stats are reset, I believe every SANITYSCAN interval. It might even happen as soon as the new service comes into operation. I'm too lazy to look, but it's easy to test.
Scott
-
We have been carrying out some tests with the WebLogic Tuxedo Connector (WTC),
and have observed some strange behaviour. Can anyone out there explain it?
We are using the 8.1 version of both Tux and WLS/WTC, and have set up a Tux domain
consisting of a master and slave machine (actually, both are set up on a single
Windows 2000 system using PMID). Each machine has a local domain (TDOM1 and TDOM2),
each going to a separate instance of a WebLogic Server (WDOM1 and WDOM2). The
domain config on the Tux side is pretty standard, and the only service listed
is as follows (TOLOWER is provided by each of the WLS instances via WTC):
*DM_REMOTE_SERVICES
TOLOWER RDOM="WDOM1"
TOLOWER RDOM="WDOM2"
We have modified the client calling TOLOWER to use a tpacall with TPNOREPLY (so
that we can fire off 1000 calls one after the other). Load Balancing is turned
on, and we have a NETLOAD value of 0. When we run the client on the slave machine
it load balances beautifully - 500 each through TDOM1 and TDOM2; but when we run
the same client on the master machine it sends the first request via the slave
(TDOM2) and the other 999 through the master (TDOM1). If we run the client on
the slave again immediately afterwards, it load balances perfectly as before,
so there does not seem to be any bottleneck.
One thing we noticed when doing a psr is that the count for the number of messages
is correct for each GWTDOMAIN server, but the load is always zero irrespective
of how many messages have been processed (we have not specified any load factors
- the *SERVICES section is empty).
Any ideas?
Thanks for any feedback,
Malcolm.
We're looking into another load balancing issue that might be related to this. I'll pass this message on to the person who is looking into this.