Tuxedo load balancing across systems
Hi there,
does BEA Tuxedo version 8 or above have a feature, like WebLogic, to configure load balancing/failover across multiple systems?
Thanks,
Simon
Simon,
Tuxedo does offer load balancing. A high-level description of this feature
for Tuxedo 8.0 is at http://edocs.bea.com/tuxedo/tux80/atmi/intatm24.htm
A description of the configuration file parameters used to set up load
balancing is at http://edocs.bea.com/tuxedo/tux80/atmi/rf537.htm ; look for
the LDBAL, NETLOAD, and LOAD parameters.
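As a quick illustration, those three parameters live in different sections of the UBBCONFIG file; the fragment below is only a sketch, with made-up machine, service, and load values:

```text
*RESOURCES
LDBAL    Y            # enable load balancing

*MACHINES
mach1    LMID=SITE1
         NETLOAD=100  # extra "cost" added to requests sent to a remote machine

*SERVICES
TOUPPER  LOAD=50      # relative cost of one request to this service (default 50)
```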
Similar Messages
-
Load balancing across multiple machines
I am looking for assistance in configuring Tuxedo to perform load balancing across
multiple machines. I have successfully performed load balancing for a service
across different servers hosted on one machine but not to another server that's
hosted on a different machine.
Any assistance in this matter is greatly appreciated.
Hello, Christina.
Load balancing across multiple machines is a little different from load balancing within the same machine. One of the important resources in this kind of application is network bandwidth, so Tuxedo tries to keep the traffic among the machines as low as possible. It therefore only balances the load (calls services on another machine) when all instances of the service are busy on the machine where it is called.
In other words, if you have workstation clients attached to only one machine, Tuxedo will call services on that machine until all its servers are busy.
If you want load balancing, try putting one WSL on each machine, with the corresponding configuration in your WSCs (using the | separator so Tuxedo randomly chooses one or the other), or spread your native clients among all the machines.
And be careful with the routing!
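For example, with one WSL per machine, a workstation client's WSNADDR can list both addresses separated by | so the client picks one at random; the hostnames and ports below are placeholders:

```shell
# Two WSLs, one per machine; "|" makes the client choose randomly between them.
export WSNADDR="(//machineA:3050|//machineB:3050)"
echo "$WSNADDR"
```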
Ramón Gordillo
"Christina" <[email protected]> wrote:
>
I am looking for assistance in configuring Tuxedo to perform load balancing
across
multiple machines. I have successfully performed load balancing for a
service
across different servers hosted on one machine but not to another server
that's
hosted on a different machine.
Any assistance in this matter is greatly appreciated. -
Load balancing across database connection
Do you provide load balancing across database connections and allow RDBMS load
balancing for read only access?
Thanks in advance. -
Load balancing across DMZs - Revisited
I know this question has been asked before and the answer is to have separate content switches per DMZ in order to maintain the security policy. There is an option to have the content switch in front of the firewall and then use only one content switch to load balance across multiple DMZs. Is this an acceptable design or the recommendation is to have a separate content switch behind the firewall for each DMZ of the firewall?
Can a Cisco 6500 with CSM be configured for multiple layer 2 load-balanced VLANs, thus achieving a multiple-DMZ load balancing scenario with only one switch/CSM?
How do you connect the router to the firewall?
The problem is the response from the server to a client on the internet.
Traffic needs to get back to the CSS, and if the firewall's default gateway is the router, the response will not go through the CSS and the CSS will reset the connection.
If you configure the default gateway of the firewall to be the CSS, then all traffic from your network to the outside will go through the CSS.
This could be a concern as well.
If you don't need to know the IP address of the client for your reporting, you can enable client NAT on the CSS to guarantee that the server response is sent to the CSS without having the firewall default gateway pointing at the CSS.
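On the CSS, client NAT is configured with a source group tied to the services; a minimal sketch, where the service name, addresses, and group name are placeholders and not from the original thread:

```text
service web1
  ip address 192.168.1.11
  active

! Client traffic destined to service web1 is source-NATted to the group VIP,
! so server replies return through the CSS.
group client-nat
  vip address 10.1.1.100
  add destination service web1
  active
```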
Gilles. -
Load balancing across multiple paths to Internet
Hello,
I have a 2821 router. Currently, I have two bonded T-1 circuits to the Internet.
I would like to add a DSL circuit to augment the T1s. I would also like to load balance across all of the circuits. Currently, IOS performs inherent load balancing for the T1 circuits. The DSL circuit is from a different provider than the T1s.
The T1s are coming from a local ISP that runs no routing protocols within their infrastructure. (They run static routes and rely on the upstream provider for BGP.) The DSL provider is a national telecom carrier.
What is the best way to perform load balancing for this scenario?
Here is the answer (sort of) for anyone reading this post with the same question:
No matter which way I choose to do it, the trick is to have the local ISP subnet advertised via BGP through both pipes. The national telecom DSL provider will not advertise a /20 subnet down a DSL pipe. (Ahh, why not? =:)
Had the secondary pipe been a T-1, T-3, or other traditional pipe, I could have used a load balancer like a BigIP or FatPipe device, or possibly CEF within the IOS.
Case closed. Thanks to everyone that took a look.
Doug. -
Load balancing across 4 web servers in same datacentre - advice please
Hi All
I'm looking for some advice, please.
The apps team have asked me about load balancing across some servers, but I'm not that well up on it for applications.
Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections to all these servers, and they all need the same DNS name etc.
What load balancing methods would I need for this? I believe they run on Linux.
Would I need some sort of device, or can the servers run some software that can do this? How would it work, and how would load balancing be achieved here?
cheers
Carl,
What you have described sounds very straightforward, so everything should go well.
The ACE is a load balancer. It makes a load balancing decision based on different matching methods (matching the virtual address, URL, source address, etc.). Once the decision has been made, the ACE balances the traffic using the load balancing method you have configured (if you do not configure anything, it uses the default, which is round robin), sends the traffic to one of the available servers, and the client gets the content.
If you want to get some details about the load balancing methods here you have them:
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
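For instance, changing the method from the default round robin to least-connections is a single `predictor` line on the serverfarm; the names below just follow the sample configurations linked here:

```text
serverfarm host web
  predictor leastconns   # default is round robin when no predictor is set
  rserver lnx1
    inservice
  rserver lnx2
    inservice
```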
For ACE deployments the most common designs are the following.
Bridge Mode
One Arm Mode
Routed Mode
Here you have a link for Bridge Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
Here you have a link for One Arm Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
Here you have a link for Routed Mode and a sample for that:
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
As you can see in all those links, you may end up with a configuration like this:
interface vlan 40
  description "Default gateway of real servers"
  ip address 192.168.1.1 255.255.255.0
  service-policy input remote-access
  no shutdown

ip route 0.0.0.0 0.0.0.0 172.16.1.1

class-map match-all slb-vip
  2 match virtual-address 172.16.1.100 any

policy-map multi-match client-vips
  class slb-vip
    loadbalance vip inservice
    loadbalance policy slb

policy-map type loadbalance http first-match slb
  class class-default
    serverfarm web

serverfarm host web
  rserver lnx1
    inservice
  rserver lnx2
    inservice
  rserver lnx3
    inservice

rserver host lnx1
  ip address 192.168.1.11
  inservice
rserver host lnx2
  ip address 192.168.1.12
  inservice
rserver host lnx3
  ip address 192.168.1.13
  inservice
Please mark it if it answered your question, so other users can use it as a reference in the future.
Hope this helps!
Jorge -
IPTV load balancing across broadcast servers.
I know that the IPTV Control Server will load balance across Archive Servers in the same cluster; is there a similar function for Broadcast Servers? I know Broadcast Servers use a different delivery mechanism (multicast). We have multiple Broadcast Servers that take in an identical live stream, but the only way to advertise through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?
No. There is no way to load balance across multiple Broadcast Servers for live streams. Since this is going to be multicast, there should not be any additional load on the servers as the number of users grows.
-
ACE30 load balancing across two slightly different rservers
Hi,
is there a possibility to get load balancing across two rservers, so that:
when a client sends http://vip/ and it goes to rserver1, the URL is sent unchanged
when a client sends http://vip/ and it goes to rserver2, the URL is modified to http://vip/xyz/
Or maybe load balancing can be done across two serverfarms?
thanks
Ryszard,
I hope you are doing great.
I do not think that's possible, since the ACE just load balances the traffic to the servers, and once the load balancing decision has been made it passes the "ball" to the chosen server.
Think about it this way: let's say user A needs to go to Server1, but based on the load balancing decision the request is sent to Server2, which unfortunately does not have what the customer was looking for. Fine, user A closes the connection and tries again, but now Server1 is down, so the only server available is Server2 and the ACE sends the request there again; user A just decides to leave. You can see how bad that can be.
A better approach would be to have either 2 VIPs (different IP addresses), or 2 VIPs with the same IP address but listening on different ports, perhaps one port per server.
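A sketch of that second option (one VIP address with a port per server; class-map names, address, and ports are placeholders):

```text
class-map match-all vip-server1
  2 match virtual-address 172.16.1.100 tcp eq 8080
class-map match-all vip-server2
  2 match virtual-address 172.16.1.100 tcp eq 8081
```

Each class-map would then point to a single-rserver serverfarm in the multi-match policy.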
Hope this helps!
Jorge -
Load balancing across remote systems
Hi everyone,
When a request is routed to a remote machine (assuming that the service is not
available on the local machine) load balancing is based on the load factors (plus
NETLOAD, if set) of requests previously sent to the remote systems from the local
system. My question is: is the load accumulation ever reset? As I see it, a
reset could probably happen in one or more of the following ways:
1. A rolling time period that will cause earlier load values to be progressively discarded.
2. The load accumulations are reset every time a system is booted.
3. The load accumulations are reset whenever the tuxconfig is modified.
4. The accumulations are reset when the tuxconfig file is removed and recreated
(I guess this is obvious, assuming this is where the accumulations would be stored
between boots).
The question is prompted by the following scenario. Machines A, B and C have
been configured to offer the same service, but the service has only been enabled
on A and B. Machine D does not offer this service, and therefore requests from
D will be balanced between A and B based on the accumulated load recorded on D.
This situation has been in operation for a couple of months without rebooting
(quite possible - after all, this is Tuxedo we are talking about!), and it has
now been decided to activate the service on machine C (without rebooting). The
big question is: Will all the requests be routed to C until such time as it catches
up to the load processed by A and B?
Thanks to anyone who can shed some light on this.
Regards,
Malcolm.
Where are you looking for the load accumulation?
"load done" with psr command?
In my case, with Tux 6.5, these stats are only reset with a reboot.
Christian
"Malcolm Freeman" <[email protected]> wrote:
>
Thanks, Scott - I found confirmation that stats are reset every SANITYSCAN interval in some old 4.2.1 documentation.
Regards,
Malcolm.
Scott Orshan <[email protected]> wrote:
Malcolm,
This was an issue a number of years ago, back in one of the early 4.x
releases.
But now the stats are reset, I believe every SANITYSCAN interval. It might even happen as soon as the new service comes into operation. I'm too lazy to look, but it's easy to test.
Scott
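For reference, the scan interval mentioned here is controlled in the *RESOURCES section of the UBBCONFIG file; the values below illustrate the commonly cited defaults (an effective interval of about 120 seconds):

```text
*RESOURCES
SCANUNIT    10     # basic scan unit, in seconds
SANITYSCAN  12     # sanity scans every SANITYSCAN * SCANUNIT = 120 seconds
```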
-
How does Tuxedo 10.0 load balance across the BRIDGE?
Is the load balancing round robin, or is work done factored in?
Yes and no. In all honesty, see if this white paper helps answer your question: http://www.oracle.com/technetwork/middleware/tuxedo/overview/ld-balc-in-oracle-tux-atmi-apps-1721269.pdf
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
I have set LDBAL to YES in my tuxconfig and have booted 10 servers.
But calls from my client are still not getting evenly distributed; they hit the same server about 80% of the time.
Any suggestions on how to achieve proper load balancing among the servers?
Hi,
This is a common misunderstanding of how Tuxedo performs load balancing. Instead of using a simple round robin algorithm which would spread requests across all available servers, Tuxedo looks for the first server that is not busy. If all servers are busy, then Tuxedo places the request on the server's queue that has the least amount of work queued to it. The only way you will see all servers getting requests in your scenario is if you have more than 10 concurrent requests outstanding. Also Tuxedo always starts its scan for a free server with the same server and in the same order, thus the first server will take most of the requests on a lightly loaded system, the next taking fewer requests, and so on. Part of the reason for doing this is that you get better memory coherence, i.e., as long as the first server is available, it's likely to have its pages in memory and probably in cache. Round robin scheduling would force all servers to have their pages in memory and flush any memory caches pretty quickly.
Hope that makes sense!
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
Load balancing across multiple application servers not working with JCo RFC
We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
Our system comprises the following:
Central Instance (2 X server nodes)
Database Instance
2 X Dialog Instances (with 2 X server nodes each)
The 1st application server that starts is usually the one that is used for inbound messaging.
We have looked at the sap gateway configuration and have tried various options without much luck:
i.e.: local gateways vs. one central gateway, load balancing type by changing parameter gw/reg_lb_level, see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
Here are our release levels:
SAP_ABA 700 0012 SAPKA70012
SAP_BASIS 700 0012 SAPKB70012
PI_BASIS 2005_1_700 0012 SAPKIPYJ7C
ST-PI 2005_1_700 0005 SAPKITLQI5
SAP_BW 700 0013 SAPKW70013
ST-A/PI 01J_BCO700 0000 -
Any help would be greatly appreciated.
Many thanks
Tim,
Did you follow the guide here:
How to Scale Up SAP Exchange Infrastructure 3.0
Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
We followed this guide for XI 3.0 and PI 7.0 and it works successfully! -
We have been carrying out some tests with the WebLogic Tuxedo Connector (WTC),
and have observed some strange behaviour. Can anyone out there explain it?
We are using the 8.1 version of both Tux and WLS/WTC, and have set up a Tux domain
consisting of a master and slave machine (actually, both are set up on a single
Windows 2000 system using PMID). Each machine has a local domain (TDOM1 and TDOM2),
each going to a separate instance of a WebLogic Server (WDOM1 and WDOM2). The
domain config on the Tux side is pretty standard, and the only service listed
is as follows (TOLOWER is provided by each of the WLS instances via WTC):
*DM_REMOTE_SERVICES
TOLOWER RDOM="WDOM1"
TOLOWER RDOM="WDOM2"
We have modified the client calling TOLOWER to use a tpacall with TPNOREPLY (so
that we can fire off 1000 calls one after the other). Load Balancing is turned
on, and we have a NETLOAD value of 0. When we run the client on the slave machine
it load balances beautifully - 500 each through TDOM1 and TDOM2; but when we run
the same client on the master machine it sends the first request via the slave
(TDOM2) and the other 999 through the master (TDOM1). If we run the client on
the slave again immediately afterwards, it load balances perfectly as before,
so there does not seem to be any bottleneck.
One thing we noticed when doing a psr is that the count for the number of messages
is correct for each GWTDOMAIN server, but the load is always zero irrespective
of how many messages have been processed (we have not specified any load factors
- the *SERVICES section is empty).
Any ideas?
Thanks for any feedback,
Malcolm.
We're looking into another load balancing issue that might be related to this. I'll pass this message on to the person who is looking into it.
-
Hi,
We have a performance situation where Tuxedo is favouring one MSSQ out of the
8 available across 2 boxes.
Our configuration is that we have 8 Tuxedo Servers each publishing 4 instances
of a particular E2A adapter service (a total of 32 instances of a particular service).
Each of the 8 Tuxedo Servers has its own MSSQ and there are 4 Tuxedo Servers on
each of the physical boxes. ie. 2 physical boxes.
When the aforementioned service is called, it favours one particular Tuxedo server in a very odd manner.
TMIB output shows the Q depths as (typically):
Mon Sep 15 13:42:59 EST 2003
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 10
TA_MSG_QNUM 0
TA_MSG_QNUM 93
Mon Sep 15 13:43:04 EST 2003
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 11
TA_MSG_QNUM 0
TA_MSG_QNUM 92
Mon Sep 15 13:43:10 EST 2003
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 0
TA_MSG_QNUM 11
TA_MSG_QNUM 0
TA_MSG_QNUM 92
Does anyone know what might cause Tuxedo to bias like this?
Significant ubb entries:
$ tmunloadcf |grep -e LDBAL -e MODEL
MODEL MP
LDBAL Y
$
Master is HP-UX. Slaves running the E2A's and MSSQ are NT.
Nigel
Scott,
You are right, the assignment of an MSSQ per server makes no sense; we will change it to have a single MSSQ per physical server. Hopefully we won't get message queue blocking as a result (since we will now have only 25% of the previous queue space).
We are (by design) using uncontrolled tpacall. Yes, yes, I know...
For the record, we are on Tuxedo 6.5 (patch 389).
Thanks for your assistance. I'll let you know what happens when we reconfigure
the Q's.
Nigel
Scott Orshan <[email protected]> wrote:
I don't completely understand your configuration. MSSQ means
multi-server, single-queue, yet it sounds like each of your servers has
its own queue.
I think we need to see more about your service configuration. Do you
have the latest patches applied? You didn't say what release you're
running, but there could have been a load balancing problem at some point.
Remember that Tuxedo cannot "see" remote queue lengths. For services
with equal load factor, it round-robins among the available queues. If
one of those adapter services takes longer, then its queue will build
up. Make sure that you are not using uncontrolled tpacall(TPNOREPLY)
calls, as that can easily flood the remote queues. You've got over 100
queued messages. Does that mean that you have 100 tpcalls waiting for
replies, or were 100 tpacalls done?
What might help is if you did use MSSQ. Set RQADDR the same for all of the servers on a machine (so long as they offer the identical set of services). Then all requests will go to one queue, and be taken off as quickly as they can be processed by the services. If one process gets hung up, the rest will be able to continue processing.
For further control, you might have to use Routing or service naming to better direct the service calls.
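A minimal UBBCONFIG sketch of that MSSQ setup (server, group, and queue names are placeholders): four copies of one server share a single request queue via RQADDR, and REPLYQ=Y gives each its own reply queue, which is generally recommended with MSSQ.

```text
*SERVERS
E2ASRV  SRVGRP=GRP1  SRVID=1  RQADDR="E2AQ"  REPLYQ=Y
E2ASRV  SRVGRP=GRP1  SRVID=2  RQADDR="E2AQ"  REPLYQ=Y
E2ASRV  SRVGRP=GRP1  SRVID=3  RQADDR="E2AQ"  REPLYQ=Y
E2ASRV  SRVGRP=GRP1  SRVID=4  RQADDR="E2AQ"  REPLYQ=Y
```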
Scott Orshan
Nigel wrote:
The attached file should be easier to read than the output in the original message. -
Server Load-balancing Across Two Data centers on Layer 3
Hi,
I have a customer who would like to load balance two Microsoft Exchange 2010 CAS Servers which are residing across two data centers.
Which is the best solution for this? Cisco ACE, Cisco ACE GSS, or both?
I would go with source-NATting the client IP addresses, so that return traffic from the servers is routed correctly.
It saves you the trouble with maintaining PBR as well.
Source NAT can be done on the ACE by applying the configuration either to the load balancing policy or to the class-map entries in the multi-match policy.
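A sketch of the second variant on the ACE (pool number, VLAN, and addresses are placeholders): a NAT pool is defined on the server-side VLAN interface, and `nat dynamic` is added under the class in the multi-match policy.

```text
interface vlan 40
  nat-pool 1 192.168.1.200 192.168.1.200 netmask 255.255.255.255 pat

policy-map multi-match client-vips
  class slb-vip
    loadbalance vip inservice
    loadbalance policy slb
    nat dynamic 1 vlan 40
```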
Cheers,
Søren
Sent from Cisco Technical Support iPad App