Dynamic Load Balancing

We have developed client-server communication using a Tuxedo server and a Workstation client. This setup should be able to handle 500 concurrent requests.
We want to configure our Tuxedo server so that it can handle all of the requests in parallel.
Our current UBB configuration is below:
WSL SRVGRP="WSLGRP" SRVID=1 RESTART=Y MAXGEN=255 GRACE=3600
##CLOPT="-A -- -n //10.223.26.28:12791 -m 5 -M 10 -x 15 -p 16914 -P 16924"
##CLOPT="-A -- -n //10.223.26.28:12791 -m 2 -M 4 -x 45" ##Minimum 90 ports needs to be available
CLOPT="-A -- -n //10.223.26.28:12791 -m 2 -M 9 -x 60" ##Minimum 120 ports needs to be available
"REQ_HANDLER" SRVGRP="DATA" SRVID=22280
CLOPT="-A -r -e /log/timings/stderr -o /log/tuxlog/req_handler_stdout"
RQPERM=0666 RPPERM=0666 MIN=1 MAX=200 CONV=N
SYSTEM_ACCESS=FASTPATH MAXGEN=255 GRACE=3600 RESTART=Y
The problem with the above configuration is that it does not boot another instance of the REQ_HANDLER server when multiple requests are received.
As an alternative, we have set both MIN and MAX to 10 to make sure it can handle concurrent requests.
But what we want is a configuration where, if multiple requests come in, another instance of the server boots automatically and is shut down again (down to the MIN value) when there are no requests.
Please advise.
Thanks in advance

Hi Vikas,
It's not really possible to give you an answer just by looking at the config; you will need to observe the queue behaviour to get an idea of what is happening; but perhaps the following will help.
You mention that two servers have booted, so since MIN=1 that does indicate that automatic spawning has taken place. What I think is happening is that the two servers are keeping up with the workload. When you say you "push 20 requests", how quickly did they arrive, and were the servers able to cope with the load? You can use the pq command in tmadmin to view the "REQ_HANDLER" queue - based on your config the queue would need to exceed the 50 load for at least 5 seconds to cause another server to start, and you may find that by the time the second server has started up the 20 messages have already been processed (I think I am correct in saying that automatic spawning will only be tested again once the second server has completed its startup and is able to process messages). Also remember that if the queue depth drops below the level you specified, even briefly, the timer starts again from zero.
Do you know how long the service takes to execute? (You can use the -r CLOPT option and run "txrpt" to determine the execution time.) You can also use the psc and psr commands of tmadmin to see which servers are actually processing the messages.
I notice you have used the "L" option in -p, and are working with service loads rather than number of messages. If the messages all have the same load value then you can simply use the number of messages as an alternative in the -p option.
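For illustration, something along these lines in the *SERVERS section would give you the spawn/decay behaviour you describe. The thresholds and times below are only placeholders to show the -p syntax (and they assume all copies share one request queue via RQADDR), so check servopts(5) on your release before using them:
"REQ_HANDLER" SRVGRP="DATA" SRVID=22280 RQADDR="REQ_HANDLER"
CLOPT="-A -r -p 1,60:10,1 -e /log/timings/stderr -o /log/tuxlog/req_handler_stdout"
RQPERM=0666 RPPERM=0666 MIN=1 MAX=10 CONV=N
SYSTEM_ACCESS=FASTPATH MAXGEN=255 GRACE=3600 RESTART=Y
Here "-p 1,60:10,1" (no "L" prefix, so it works on queued message counts rather than loads) means: start another copy when the queue has been above the high-water mark of 10 for at least 1 second, up to MAX, and shut an idle copy down again once it has been below the low-water mark of 1 for 60 seconds, never going below MIN.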
Regards,
Malcolm.

Similar Messages

  • Nic teaming - what is dynamic load balancing

    When setting up NIC teaming in Windows Server 2012 I have the option of selecting "Address Hash", "Hyper-V Port", or "Dynamic" for the load balancing mode. The TechNet documentation explains "Address Hash" and "Hyper-V Port" but there is nothing about "Dynamic". Is there anywhere I can find a description of what the "Dynamic" option provides?

    Microsoft's official recommendation is to use Dynamic load balancing in most configurations.
    Section 3.3 of the NIC Teaming Deployment Guide explains what Dynamic is. Section 3.4 suggests when to use Dynamic load balancing, and when to use other modes.
    I suggest reading the Guide from start to finish.  I learn new things every time I look at it.

  • Dynamic Load Balancing vs Static Load Balancing

    What do you guys recommend, Dynamic or Static Load Balancing + Logon Groups? This is for a two-instance environment: 1 central instance and 1 dialog instance.
    Thank you,
    Samer

    I would say (as so often) - it depends.
    If your CI also carries the batch and update processes and it's not overloaded, you could add it to the logon group. If you have a heavy batch load (as we do) and the CI is already loaded, just use the DI for the interactive work.
    Markus

  • Coherence Extend Address-provider for dynamic load balancing

    Hi,
    I want to have load balancing across my proxies, so I am trying to use the <address-provider> feature.
    Here is my setup:
    I have two proxies and many clients. I am configuring both the proxies and the clients to use a load-sharing scheme.
    There are classes that implement the AddressProvider interface, so I am going to use one of those.
    Here is proxy server settings,
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <thread-count>10</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <address-provider>
    <class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
    </address-provider>
    </tcp-acceptor>
    </acceptor-config>
    <autostart>true</autostart>
    </proxy-scheme>
    I know there is some silly mistake in there; I am getting the following error in the log:
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.emc.srm.common.daemon.SrmDaemon.start(SrmDaemon.java:59)
    ... 5 more
    Caused by: (Wrapped: Missing or inaccessible constructor "com.tangosol.net.RefreshableAddressProvider()"
    <address-provider>
    <class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
    </address-provider>) java.lang.InstantiationException
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2542)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2426)
    at com.tangosol.net.ConfigurableAddressProvider.createAddressProvider(ConfigurableAddressProvider.java:277)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.acceptor.TcpAcceptor.configure(TcpAcceptor.CDB:78)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ProxyService.configure(ProxyService.CDB:67)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:6)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1057)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:892)
    at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.java:81)
    at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheServer.java:250)
    at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:55)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
    ... 10 more
    Caused by: java.lang.InstantiationException
    at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(InstantiationExceptionConstructorAccessorImpl.java:30)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at java.lang.Class.newInstance0(Class.java:355)
    at java.lang.Class.newInstance(Class.java:308)
    at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:587)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2501)
    ... 23 more
    Cannot start daemon
    Two questions:
    1) Without an address-provider section in the proxy server configuration XML, will it use a default AddressProvider implementation?
    2) What is wrong with the proxy server settings XML that causes the above error?
    Thanks
    Prab

    Hi LuK,
    Sounds like no load balancing is happening in my environment.
    I didn't use the address-provider tag, since the AddressProvider interface API mentions that the ConfigurableAddressProvider implementation is used by default.
    Here is an XML snippet from the remote client cache configuration:
    <tcp-initiator>
    <remote-addresses>
    <socket-address>
    <address>PROXY-A</address>
    <port>9099</port>
    </socket-address>
    <socket-address>
    <address>PROXY-B</address>
    <port>9099</port>
    </socket-address>
    </remote-addresses>
    <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    What I mean by load balancing is this: I will have a lot of web service queries that run very fast, and I would like to see both proxies serving the web requests.
    What I am observing is that one connection is made to one proxy at startup and the client holds on to it (no matter whether the other proxy is free or not).
    Is this a bug, or am I making a mistake?
    Thanks
    Prab
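    One thing that may be worth checking (hedged, because it depends on your Coherence release): from 3.7 onwards a <proxy-scheme> can declare a proxy-based load balancer, which lets the proxy tier itself redirect new Extend connections towards the least-loaded proxy. A minimal sketch, with PROXY-A as a placeholder listen address:
    <proxy-scheme>
    <service-name>ExtendTcpProxyService</service-name>
    <thread-count>10</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address>PROXY-A</address>
    <port>9099</port>
    </local-address>
    </tcp-acceptor>
    </acceptor-config>
    <load-balancer>proxy</load-balancer>
    <autostart>true</autostart>
    </proxy-scheme>
    Without something like this, the behaviour you describe is expected: an Extend client opens one TCP connection to one address from <remote-addresses> and keeps using it, so there is no per-request balancing across proxies.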

  • Round robin DNS for load balancing between multiple network adapters (Xserve)

    I'm attempting to use 'round robin' DNS to load balance between the two ethernet adapters of an Xserve.
    Both ethernet adapters are connected to the same LAN and have static IP addresses of 192.168.2.250 and 192.168.2.251.
    The DNS zone for the server's local domain/host (macserver.private) has a machine record with both IP addresses (set up in the Lion Server UI).
    Having read up on round robin DNS, I would have expected DNS requests for 'macserver.private' to be answered with the two IP addresses ordered at random, achieving my aim of requests being served at random via each ethernet adapter.
    However this doesn't seem to be the case. Doing an 'nslookup' from any of the network clients lists the two IP addresses in the same order every time, and pinging 'macserver.private' only ever results in a response from the same address.
    Does anyone know why this is the case? Does Lion Server use a non-standard DNS configuration? Are there any additional settings I need to configure in Lion's DNS server to make adopt a round robin approach to responding to requests?
    Thanks in advance for any help!

    Be careful what you wish for
    Round Robin DNS is rarely the best option for 'load balancing'. At the very least it's subject to caching at various points on the network - even on the client side: once the client looks up the address it will cache the response, which means subsequent lookups may be served from the client's cache and never refer back to the server. Therefore any given client will always see the same address until its cache expires.
    I suspect this is what you're seeing.
    You can minimize this by setting a lower TTL on the records. This should result in the response being cached for a shorter period, meaning the client will make more requests to the server, with a higher chance of using the 'other' address.
    However, you're also going to run into issues with the server having two interfaces/addresses in the same LAN. This isn't recommended.
    As Jonathon mentioned, you may be better off just bonding the two interfaces. This provides an automatic level of dynamic load balancing without the latency of DNS caches, as well as automatic failover should one link fail (as opposed to round robin DNS, where 50% of requests will fail until the client cache expires and a new lookup is performed - and, even then, there's still a chance the client will try to use the failed link).
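    If you do want to experiment with round robin anyway, the only DNS-side change is a low TTL on the two A records. A BIND-style sketch (the addresses are the ones from your post; the 30-second TTL is arbitrary):
    macserver    30    IN    A    192.168.2.250
    macserver    30    IN    A    192.168.2.251
    Clients will then re-query more often, but everything said above about caching and about dual-homing on one LAN still applies.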

  • Load Balancing and Failover in Dual Ethernet

    I have a Cisco 2911/K9 router with two 4 Mbps leased-line connections from two different ISPs to my remote office. The remote office has a Cisco 2811 router.
    The main office has an MPLS connection with static IP routing, apart from the two leased lines.
    All handoffs are Ethernet.
    Is it possible to do load sharing as well as failover between the two ISPs? If so, how can I achieve that?
    Kindly help me.

    If your MPLS vendor supports no dynamic routing, then why do you ask about BGP? Or do they only support dynamic routing with BGP?
    You can do equal cost multi-path with BGP (may require a hidden command to fully utilize).
    You could do GRE tunnels across the MPLS cloud and dynamically route between them using your choice of a dynamic routing protocol.
    Both your devices should support OER/PfR (may require a feature upgrade).  OER/PfR will actually dynamically load balance.
    SLA features should also be available on both your routers; those too might require an IOS feature upgrade.
    Configuration examples might be found on Cisco's main web site.
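    As a minimal illustration of combining the SLA tracking mentioned above with simple load sharing, two tracked equal-cost default routes would look like this (the next-hop addresses are placeholders for your two ISP gateways):
    ip sla 1
     icmp-echo 203.0.113.1
     frequency 10
    ip sla schedule 1 life forever start-time now
    ip sla 2
     icmp-echo 198.51.100.1
     frequency 10
    ip sla schedule 2 life forever start-time now
    track 1 ip sla 1 reachability
    track 2 ip sla 2 reachability
    ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 1
    ip route 0.0.0.0 0.0.0.0 198.51.100.1 track 2
    CEF then shares flows across both next hops, and a route is withdrawn while its probe is down. This gives per-destination load sharing plus failover, though it is cruder than what OER/PfR can do.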

  • Help choose the appropriate etherchannel load balance method

    Hi
    I have 2 network architectures: case A and case B (architecture below).
    Case A: one server connected to the switch at each site.
    Case B: 3 servers connected to the switch behind a router at each site.
    The 2 sites are connected by 2 wireless links; each wireless link has 105 Mbps of bandwidth (I absolutely need the aggregate 210 Mbps).
    The headquarters site is the principal site, and the backup site is used to back up data located on the principal site.
    I use Gigabit Cisco 2960 switches.
    I use EtherChannel to aggregate the 2 switch ports (port 1 and port 2) where the 2 wireless links are connected.
    I configured src-mac load balancing for case A, but all traffic is sent over only one wireless link.
    Please help me choose the most appropriate load-balancing method to balance traffic across the 2 links for case A and for case B.
    Please advise
    Thanks in advance

    Your Case A might be handled by port hashing, but unfortunately most Cisco platforms don't support it.
    Your Case B isn't much better, as you only have 3 hosts on each side, and according to your drawing, they are behind routers, so you don't want to use MAC hashes.  If you don't have port hashing, next best choice might be src-dest-IP hashing.  Again, though, with just 3 hosts, your distribution will likely not be very balanced, especially over shorter time intervals.
    To obtain best utilization of your links, you need some kind of better link bonding, such as MLPPP (unfortunately, usually won't scale to FE rates) or a hardware MUX.  Next best option, if you could route across the links, would be something like Cisco's OER/PfR which can dynamically load balance.
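    If your 2960 image does expose an IP-based hash, the change is a single global command, and you can confirm which hash is active afterwards (keyword as on recent 2960 IOS releases; older images may only offer the MAC-based options):
    port-channel load-balance src-dst-ip
    show etherchannel load-balance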

  • Dynamic LBFO Load Balancing mode causing issues

    Hi,
    We're running a couple of virtual machines with the BIG-IP Virtual Edition in a Windows Server 2012 R2 Hyper-V cluster.
    These virtual machines have had problems where traffic sent through the virtual machines doesn't get through, due to the MAC addresses of the physical team NICs being replaced with the MAC address of the team member actually used to transmit the packet.
    Reference:
    Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management
    Blog post - Server 2012 Hyper-V / NIC Team Oddity
    One of the comments in the blog-post states what we are seeing:
    The reason for the MAC Address switching you’re seeing is that Server 2012 in some cases will replace the source MAC address on Ethernet frames with the MAC Address from the team member actually used to transmit the packet. The reason for this is that
    if it always kept the MAC Address intact, the switch would throw alarms for “MAC flapping”, i.e. seeing a given MAC Address bouncing back and forth between switch ports.
    When we changed the Load Balancing mode from Dynamic to Hyper-V port, the problem was resolved.
    Is it possible to solve this problem while still using Dynamic as the Load Balancing mode? Would LACP instead of Switch independent teaming mode solve the problem?

    @Rob Thanks, that's useful information. Did they suggest any other solutions/workarounds (such as LACP)?
    @Alex I understand that I need to configure my switches if I'm going to use LACP, but will LACP cause a different behaviour regarding the replacement of the source MAC address on Ethernet frames? In other words: will LACP be an alternate solution/workaround to using Hyper-V Port in Switch Independent mode?
    I can't answer this from experience because I've never had this problem.
    But, the basic issue with the switch-independent mode is that the physical switch is completely unaware that there is any team situation at all. It can only operate within the base rules of Ethernet, which say that a MAC address can only appear on one endpoint
    at a time. So, if you have built a switch-independent team that crosses 4 physical adapters and a Hyper-V virtual switch on top of that, what the physical switch "sees" is four distinct endpoints that are hosting multiple MAC addresses. When one
    of the virtual adapters transmits on a virtual switch, it could, depending on the load-balancing mode, use any of the four physical lines. If it uses the same source MAC address while communicating across all four lines, the switch might panic. It wants to
    know where the MAC really is for purposes of knowing where to deliver its inbound packets, and depending on security configuration, to be sure that there's not an unauthorized spoofing attempt in progress. That's why the dynamic mode uses MAC substitution.
    The Hyper-V port mode gets around this by locking each virtual adapter on to a single physical channel so that its MAC address doesn't move. This has a cost of not allowing traffic on any given virtual adapter to be load-balanced.
    In a LACP connection, the physical switch is fully aware of the team, and furthermore, it knows that it's not an endpoint. All the MAC addresses of the virtual adapters are associated with this single aggregated tunnel, not the individual physical adapters.
    When it comes down to deciding which of the physical adapters to use to carry any given transmission, that can be negotiated by the switches without the need to lock a MAC to a specific adapter. There isn't, or at least there shouldn't be, any need for the
    dynamic mode to perform MAC substitution.
    Again, I'm speaking from theory, not direct experience with what you're asking about. I do make regular use of the dynamic mode on LACP trunks, but I don't run any applications that would have this MAC sensitivity issue. For all I know, dynamic still performs
    this substitution and I just don't understand why. Also, there's a chance your symptoms just happen to point to this substitution as being a problem. But, I would say there's a good chance that using Dynamic/LACP will solve your issue.
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
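    For reference, a hedged PowerShell sketch of building the LACP/Dynamic team discussed above (the adapter and switch names are placeholders, and the physical switch ports must already be configured as an LACP port channel):
    # create the team in LACP mode, keeping the Dynamic load-balancing algorithm
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
    # bind the Hyper-V virtual switch to the resulting team interface
    New-VMSwitch -Name "vSwitch1" -NetAdapterName "Team1" -AllowManagementOS $false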

  • Load balancing host named site collection

    I am jumping into the realm of host-named site collections. While the learning experience has been good, there are still some unanswered questions. Please bear with me, since my questions are long.
    - I have a non-host-header site on port 80 that has an HTTPS certificate added in IIS to support the app store in HTTPS mode.
    - I tried to create a host-named site collection using HTTPS in this default port 80 non-host-header web application and was greeted with an error. I then extended the web app to a different zone on port 443 and created the host-named site collection over HTTPS, passing the web application name of the extended 443 one. Creation went through fine.
    - I tried to use IPs on the now-extended IIS site and bind certificates to it. The site does not load. When I do the same on the default zone IIS site, binding the IPs there, the site loads. So the question is: even though the host-named site collection was created using the extended web application URL, why did the binding have to be done on the default zone IIS site?
    - Second test: I changed the authentication mode for the extended zone, with no effect on the host-named site collection, but as soon as I changed it in the default zone it was reflected in the host-named site collection. I am confused why it needs the extended zone URL to create the HTTPS site, yet every change made in the default zone is reflected on this host-named site collection.
    Now for load balancing: it works fine with an IP, but how do I load balance these host-named site collections using the URL? I talked with the F5 team and they said I need to send some reply query string from each site. Where do I do that? Or is it even needed?
    Accoring to this link : https://devcentral.f5.com/articles/name-based-virtual-hosting-with-ltm
    . If the site hosts an application, though, the monitor should request a dynamic page on each webserver which forces a transaction with the application to verify its health and returns a specific phrase upon success.
    For application monitoring, the recommended best practice is to create such a script specific to your application, configure the monitor Send string to call that script, and set the Receive string to match that phrase. 
    Has anyone done this before? I tried to search for resources on this for IIS or SharePoint but was not able to find anything.
    Thank you for your patience in reading such a long question.
    Adit

    First part of the question:
    Default web application on port 80: creating an HTTPS host-named site collection fails.
    Extend the default web application to port 443: the HTTPS host-named site collection is created when the web application name passed in is the extended one on port 443. That means this site collection is associated with the extended web application, correct? Yet all the changes made in IIS only take effect when made on the port 80 web application, and changing the authentication scheme from Central Admin only affects the site collection when done on the default zone, not the extended one. Why, if the site collection was created with the extended web application parameter, do changes on the default zone affect it while changes on the extended zone do not?
    Second part of the question:
    Each host-named site collection, when load balanced through F5 using IPs for 3 WFEs, uses 3 IPs. This way we will run out of IPs pretty soon. I want to know if there is a way to load balance these sites through F5 using the hostname or another parameter, and whether anybody has done it.
    The https://devcentral.f5.com/articles/name-based-virtual-hosting-with-ltm link talks about sending a reply string from the application, but I do not know where or how to set that up. There are no resources on the net; just asking if anyone else has done it.
    Adit
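    To make the second part concrete: on the F5 side the monitor is simply an HTTP Send string that carries the Host header of the host-named site collection, plus a Receive string that a health page of your own returns. A hedged sketch of such a monitor as it would appear in the LTM configuration (the hostname, path and response phrase are placeholders for things you would create yourself):
    ltm monitor http hnsc_portal_monitor {
    defaults-from http
    send "GET /health.aspx HTTP/1.1\r\nHost: portal.contoso.com\r\nConnection: Close\r\n\r\n"
    recv "HEALTHY"
    }
    The idea from the linked article is that one VIP can then front all the host-named site collections, with each monitor checking a specific site by its Host header instead of needing a dedicated IP per site.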

  • ACE load balancing servers on different subnets...

    Hello,
    I have the following issue: I need to load balance traffic between two servers already working in two different subnets (VLANs), and at this point it is highly desirable to avoid changing IP addresses. Is it possible to accomplish this with ACE? In routed or bridged mode? Is it strictly necessary for all servers belonging to a serverfarm to be in the same subnet?
    Thanks in advance for your support.

    Hi,
    You can do this, but you have to use client-NAT (Source-NAT) to force the return traffic to pass back through the ACE. You also then need static routes in the ACE context to point at each server. PBR is an alternative approach but I have not implemented that in a live network. The important thing is that the ACE sees both sides of the conversation.
    The following extract from a configuration shows the basic principle:
    rserver host master
    ip address 10.199.95.2
    inservice
    rserver host slave
    ip address 10.199.38.68
    inservice
    serverfarm host FARM-web2-Master
    description Serverfarm Master
    probe PROBE-web2
    rserver master
    inservice
    serverfarm host FARM-web2-Slave
    description Serverfarm Slave
    probe PROBE-web2
    rserver slave
    inservice
    class-map match-any L4VIPCLASS
    2 match virtual-address 10.199.80.12 tcp eq www
    3 match virtual-address 10.199.80.12 tcp eq https
    policy-map type management first-match REMOTE-MGMT-ALLOW-POLICY
    class REMOTE-ACCESS
    permit
    policy-map type loadbalance first-match LB-POLICY
    class class-default
    serverfarm FARM-web2-Master backup FARM-web2-Slave
    policy-map multi-match L4POLICY
    class L4VIPCLASS
    loadbalance vip inservice
    loadbalance policy LB-POLICY
    loadbalance vip icmp-reply active
    loadbalance vip advertise
    nat dynamic 1 vlan 384
    service-policy input L4POLICY
    interface vlan 383
    description ACE-web2-Clientside
    ip address 10.199.80.13 255.255.255.248
    alias 10.199.80.12 255.255.255.248
    peer ip address 10.199.80.14 255.255.255.248
    access-group input ACL-IN
    access-group output PERMIT-ALL
    no shutdown
    interface vlan 384
    description ACE-web2-Serverside
    ip address 10.199.80.18 255.255.255.240
    alias 10.199.80.17 255.255.255.240
    peer ip address 10.199.80.19 255.255.255.240
    access-group input PERMIT-ALL
    access-group output PERMIT-ALL
    nat-pool 1 10.199.80.20 10.199.80.20 netmask 255.255.255.240 pat
    no shutdown
    ip route 0.0.0.0 0.0.0.0 10.199.80.9
    ip route 10.199.95.2 255.255.255.255 10.199.80.21
    ip route 10.199.38.68 255.255.255.255 10.199.80.21
    HTH
    Cathy

  • How do I load balance TFTP between two servers and a client on the same subnet?

    Hi,
    I have trawled through several documents and tried umpteen different configs, all to no avail. I have a PXE boot client trying to access a boot file via TFTP from a couple of TFTP servers on the same VLAN/subnet. For HA purposes I want to load balance the two TFTP servers.
    Config is currently;
    =====
    probe icmp ICMP_PROBE
      description icmp probe for default gateway tracking
      interval 5
      passdetect interval 15
    rserver host server1
      description Server1
      ip address 10.0.0.1
      inservice
    rserver host server2
      description Server 2
      ip address 10.0.0.2
      inservice
    serverfarm host serverfarm_01
      description servers used
      probe ICMP_PROBE
      rserver server1
        inservice
      rserver server2
        inservice
    class-map match-all L4_VIP_TFTP
      10 match virtual-address 10.0.0.10 udp eq 69
    policy-map type loadbalance first-match L7_TFTP
      class class-default
        serverfarm serverfarm_01
    policy-map multi-match L4_LB_VIP_POLICY
      class L4_VIP_TFTP
        loadbalance vip inservice
        loadbalance policy L7_TFTP
        loadbalance vip icmp-reply active
    nat dynamic 1 vlan 200
    interface vlan 200
      ip address 10.0.0.250 255.255.255.0
      nat-pool 1 10.0.0.241 10.0.0.243 netmask 255.255.255.255 pat
      service-policy input L4_LB_VIP_POLICY
      no shutdown
    ip route 0.0.0.0 0.0.0.0 10.0.0.254
    =====
    I have read the documentation by Ivan Kovacevic amongst many others, but as my clients and servers are on the same subnet, the config doesn't work.
    Can anybody point me in the right direction please? The devices are ACE 4710s running A3(2.3).
    Thanks

    Try using the following configuration:
    Note: please make sure to also configure a UDP probe on port 69, in case the application is down.
    You need to configure a management policy on the interface when using a UDP probe.
    That is because, when port 69 on the server is unreachable, the server will send an ICMP unreachable.
    ACE will consider a UDP probe "failed" only when it sees the ICMP unreachable.
    Without a management policy-map, the ICMP unreachable message will be dropped.
    Also, add an ICMP probe to the rserver, because the UDP probe will not be enough when the physical interface is down.
    That is because UDP is a connectionless protocol. To consider a UDP probe successful, ACE needs to see NO answer from the server in response to the probe.
    The ACE will not see any answer from the server when the interface is down and will therefore consider the probe "successful".
    With an ICMP probe attached to the rserver, you also test the reachability of the server and not only the UDP port.
    Here is the configuration (of course, you can change the object names to the ones you are using if you want):
    access-list ALL line 10 extended permit ip any any
    probe udp TFTP
      port 69
      interval 5
      passdetect interval 15
    probe icmp ICMP_PROBE
      interval 5
      passdetect interval 15
    rserver host TFTP_1
      ip address 10.0.0.1
      probe TFTP
      probe ICMP_PROBE
      inservice
    rserver host TFTP_2
      ip address 10.0.0.2
      probe TFTP
      probe ICMP_PROBE
      inservice
    serverfarm host TFTP-SFARM
      rserver TFTP_1
        inservice
      rserver TFTP_2
        inservice
    sticky ip-netmask 255.255.255.255 address source TFTP-STICKY
      timeout 10
      replicate sticky
      serverfarm TFTP-SFARM
    class-map type management match-any MANAGE
      2 match protocol icmp any
    class-map match-all NAT
      2 match virtual-address 0.0.0.0 0.0.0.0 udp any
    class-map match-all TFTP
      2 match virtual-address 10.0.0.10 udp eq 69
    policy-map type management first-match MANAGE
      class MANAGE
        permit
    policy-map type loadbalance first-match ROUTE
      class class-default
        forward
    policy-map type loadbalance first-match TFTP-POL
      class class-default
        sticky-serverfarm TFTP-STICKY
    policy-map multi-match TFTP-MULTI
      class TFTP
        loadbalance vip inservice
        loadbalance policy TFTP-POL
        nat dynamic 1 vlan 212
      class NAT
        loadbalance vip inservice
        loadbalance policy ROUTE
        nat dynamic 2 vlan 212
    interface vlan 212
      ip address 10.0.0.250 255.255.255.0
      no normalization
      access-group input ALL
      nat-pool 1 10.0.0.241 10.0.0.243 netmask 255.255.255.0 pat
      nat-pool 2 10.0.0.10 10.0.0.10 netmask 255.255.255.0 pat
      service-policy input TFTP-MULTI
      service-policy input MANAGE
      no shutdown
    Let me know how it goes.
    Good luck!

  • ACE 4700 load balancing Issue

    Hi,
    I am new to the ACE 4700. I have configured it to load balance our fax servers. The probe, serverfarm, real servers, virtual server and VIP state are all up and in service, but I am not able to access the real servers using the VIP address.
    Below is the running configuration. Please help me troubleshoot the problem.
    HOB-ACE-1/Admin# sh run
    Generating configuration....
    no ft auto-sync startup-config
    boot system image:c4710ace-mz.A3_2_0.bin
    hostname HOB-ACE-1
    interface gigabitEthernet 1/1
      description Man_HOB_1
      switchport access vlan 1000
      no shutdown
    interface gigabitEthernet 1/2
      description VIP_HOB_1
      switchport access vlan 24
      no shutdown
    interface gigabitEthernet 1/3
      description HA_HOB_1
      switchport access vlan 180
      no shutdown
    interface gigabitEthernet 1/4
      shutdown
    access-list ALL line 8 extended permit ip any any
    probe icmp ICMP_PROBE1
      interval 15
      faildetect 4
      passdetect interval 60
      passdetect count 5
      receive 5
    rserver host MFREFSAS497
      description MAAFAXSERVER
      ip address 10.16.12.148
      conn-limit max 4000000 min 4000000
      inservice
    rserver host MSHOFCFS489
      description HOBFAXSERVER
      ip address 10.26.12.130
      conn-limit max 4000000 min 4000000
      inservice
    serverfarm host SFHOBACE-1
      description SFHOBACE-1
      predictor hash header Accept
      probe ICMP_PROBE1
      rserver MFREFSAS497 80
        conn-limit max 4000000 min 4000000
        inservice
      rserver MSHOFCFS489 80
        conn-limit max 4000000 min 4000000
        inservice
    class-map match-all VSHOBACE-1
      2 match virtual-address 10.26.24.242 any
    class-map type management match-any remote_access
      201 match protocol xml-https any
      202 match protocol icmp any
      203 match protocol telnet any
      204 match protocol ssh any
      205 match protocol http any
      206 match protocol https any
      207 match protocol snmp any
    policy-map type management first-match remote_mgmt_allow_policy
      class remote_access
        permit
    policy-map type loadbalance first-match VSHOBACE-1-l7slb
      class class-default
        serverfarm SFHOBACE-1
    policy-map multi-match global
      class VSHOBACE-1
        loadbalance vip inservice
        loadbalance policy VSHOBACE-1-l7slb
        loadbalance vip icmp-reply
        nat dynamic 1 vlan 24
        nat dynamic 1 vlan 1000
    service-policy input global
    interface vlan 24
      description "Client VLAN"
      ip address 10.26.24.243 255.255.255.0
      access-group input ALL
      no shutdown
    interface vlan 1000
      ip address 10.26.12.132 255.255.255.0
      peer ip address 10.26.12.133 255.255.255.0
      access-group input ALL
      service-policy input remote_mgmt_allow_policy
      no shutdown
    ft interface vlan 180
      ip address 192.168.180.2 255.255.255.248
      peer ip address 192.168.180.3 255.255.255.248
      no shutdown
    ft peer 1
      heartbeat interval 300
      heartbeat count 10
      ft-interface vlan 180
    ft group 1
      peer 1
      priority 140
      associate-context Admin
      inservice
    ip route 0.0.0.0 0.0.0.0 10.26.12.1
    snmp-server contact "HOB_ACE"
    snmp-server location "HOB"
    snmp-server community FAXSERVER group Network-Monitor
    snmp-server user administrator Network-Monitor
    snmp-server trap-source vlan 1000
    username admin password 5 $1$GtO1e504$eGuyxxDcXck7SkxqBfRkI.  role Admin domain default-domain
    username www password 5 $1$N5ClX7jy$kDhGgN.uukWQKvQMd3pY.1  role Admin domain default-domain
    ssh key rsa 1024 force
    Thanks and Regards,
    Ashfaque

    Hello Hossain,
    Applying the policy globally on the box is usually not the preferred way to go; you can instead use a single multi-match policy per SVI for easier management. This will also help narrow problems down to a specific policy and VIP while troubleshooting.
    Use the following:
    ACE/Admin(config)# no service-policy input global
    ACE/Admin(config)# interface vlan 24
    ACE/Admin(config-if)# service-policy input global
    Also, you want to remove the NAT from the multi-match policy; you're running in routed mode, so NAT should not be required. If it were required, then either you don't have any nat-pool configured or, as Ahmad mentioned, it was truncated from the configuration.
    Something that caught my attention is that your default route points to the server VLAN, which also happens to be your management VLAN. I'll have to lab it up, but my first impression is that either the traffic arriving at the VIP on vlan 24 must always be NAT'd to an IP in 10.26.24.x/24 before it reaches the ACE, or there will be a routing loop that prevents the flow from completing correctly.
    Do you happen to have a quick logical diagram of this piece of the network?
    Thnx
    Pablo

  • Disabling load balancing in WebSphere

    Hello,
    We've come across this problem just after deploying our application to a clustered environment (probably an oversight on our part during design). This is our situation:
    Environment: WebSphere 6.1, EJB 2.1
    Problem: We use the EJB Timer service to execute some business logic periodically within our stateless session bean application. We have multiple timers within the same EJB that do things dynamically with the same code based on the parameters. However, we don't want the timers to come up automatically when the EJB application starts (as would happen with contextInitialized), because we want to bring the timers up and down in a more controlled fashion (more of a business requirement). So we expose startTimer and stopTimer methods in our EJB and invoke them from scripts outside of the WebSphere context as and when needed. This model has worked perfectly in a stand-alone environment. When we switched to our clustered UAT environment and started testing, that's when we got this reality check.
    Our cluster consists of 2 nodes with 4 clones per node, and our middleware team handled the horizontal scaling in this environment. So when we try to invoke startTimer on each of these node clones, the call automatically goes to a random clone, not necessarily the one we are trying to invoke it on. The same happens when we try to stop a timer: it tries to stop it on a random clone, and the timer might not even be running on the clone it tries to stop it on.
    So my short question: is there any way to force the EJB invocation to go to a particular clone? In other words, can we disable this whole horizontal scaling and just have the call go to the clone we want (not letting WebSphere come in between with its "smart" load balancing)? Something like a magic parameter that can be passed to the java command when invoking the EJB?
    This might sound like the wrong way to do it, and it is probably better to look at other approaches, but we are just looking for something that will not significantly change our architecture at this point in the game.
    Thanks in advance!

    Answer to my question: http://ieoc.com/forums/p/26385/218976.aspx#218976

  • FTP Load-Balancing in DSR mode

    Hello experts,
    I need some clarity on FTP load balancing in DSR mode. DSR is working fine for normal HTTP traffic, but I am facing issues with FTP on the same setup. Please find the configs below.
    Topology 
    Client ( 10.20.10.101)   -----> CAT6k  ( 10.20.10.110 & 10.10.15.2)  --> ACE --- > Server 
    VLAN 149                                  VLAN 149 & VLAN 150
    access-list access line 8 extended permit icmp any any
    access-list access line 16 extended permit tcp any any
    access-list acl line 8 extended permit ip any any
    rserver host real2
      ip address 10.10.15.101
      inservice
    serverfarm host ftp
      transparent
      rserver real2
        inservice
    class-map match-all ftp-vip
      2 match virtual-address 192.168.5.5 tcp eq ftp
    class-map match-any ftp_1
      2 match access-list access
    policy-map type management first-match mgmt
      class class-default
        permit
    policy-map type loadbalance first-match ftp
      class class-default
        serverfarm ftp
    policy-map multi-match LBPOL
      class vip
        loadbalance vip inservice
        loadbalance policy lbpol
        loadbalance vip icmp-reply active
      class ftp-vip
        loadbalance vip inservice
        loadbalance policy ftp
        inspect ftp
      class ftp_1
        nat dynamic 5 vlan 150
    interface vlan 61
      ip address 61.202.200.200 255.0.0.0
      access-group input acl
      service-policy input mgmt
      no shutdown
    interface vlan 150
      description server-side
      ip address 10.10.15.1 255.255.255.0
      no normalization
      access-group input acl
      nat-pool 5 10.10.15.209 10.10.15.209 netmask 255.255.255.255 pat
      service-policy input LBPOL
      service-policy input mgmt
      no shutdown
    ip route 0.0.0.0 0.0.0.0 10.10.15.2
    Client
    ======
    root@TLS_SRV ~]# ifconfig eth1.149
    eth1.149  Link encap:Ethernet  HWaddr 00:1C:23:E2:50:C4
              inet addr:10.20.10.101  Bcast:10.20.10.255  Mask:255.255.255.0
              inet6 addr: fe80::21c:23ff:fee2:50c4/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:203 errors:0 dropped:0 overruns:0 frame:0
              TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:10444 (10.1 KiB)  TX bytes:8408 (8.2 KiB)
    route
     192.168.5.0     10.20.10.110    255.255.255.0   UG    0      0        0 eth1.149
    CAT6k
    =======
    interface Vlan149
     ip address 10.20.10.110 255.255.255.0
    end
    interface Vlan150
     ip address 10.10.15.2 255.255.255.0
    end
    ip route 192.168.5.5 255.255.255.255 10.10.15.1    
    Server
    =======
    eth1.150  Link encap:Ethernet  HWaddr 00:1C:23:E2:50:C4
              inet addr:10.10.15.101  Bcast:10.10.15.255  Mask:255.255.255.0
              inet6 addr: fe80::21c:23ff:fee2:50c4/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:9194 errors:0 dropped:0 overruns:0 frame:0
              TX packets:408 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:503104 (491.3 KiB)  TX bytes:71884 (70.1 KiB)
    eth1.150:1 Link encap:Ethernet  HWaddr 00:1C:23:E2:50:C4
              inet addr:192.168.5.5  Bcast:192.168.5.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    route
    10.20.0.0       10.10.15.2      255.255.0.0     UG    0      0        0 eth1.150
    When I FTP from the client 10.20.10.101, my connection is refused, but when I connect to the server directly, bypassing the ACE, I get authenticated.
    As per DSR, I made the rserver and the ACE L2-adjacent, so when the ACE receives the packet it does not change the destination IP - the VIP stays as the destination - and only the MAC is rewritten to the rserver's MAC address. As I said before, all of this works fine for HTTP DSR.
    I know NAT doesn't work on the ACE when it is configured for DSR, but for FTP I added the NAT config; even if I remove it, it still doesn't work. Is my FTP config correct?
    Could someone please look into this and reply?
    Thanks
    Charles

    If you need to route / provide load balancing between 2 hosts, then you will need the Route SAF. You can use the Web Server 7 reverse proxy CLI or GUI to get this. However, you might want to start from a fresh configuration so that the reverse-map / map settings you have experimented with do not overlap with the 'Route' functionality that you seem to need here.
    Here is some reference content:
    http://blogs.sun.com/amit/entry/setting_up_a_reverse_proxy
    http://blogs.sun.com/meena/entry/configuring_reverse_proxy_in_sun
    http://www.sun.com/bigadmin/features/articles/web_server_zones.jsp

  • Load balancing and Failover

    Hello,
    We are wondering how load-balancing and failover of tpcall() work with
    WTC:
    The scenario:
    We have one WLS Domain and two Tuxedo Domains. The Tuxedo Domains offer
    the same set of services.
    In the bdmconfig.xml, we specify connection_policy as 'ON_STARTUP' for
    both Remote Tuxedo Domains. We also Import (T_DM_IMPORT) the same
    Tuxedo Service from both Tuxedo Domains.
    Questions:
    1. Is there any load-balancing of the tpcall between the two Domains? If
    so, is it round-robin? If round-robin, what determines the order?
    2. If it is ONLY Failover, what determines the order of the tpcall? And,
    is the Failover automatic? Or do we need to code for retry on failure?
    3. ON_DEMAND vs ON_STARTUP: Does ON_DEMAND drop the connection to the
    remote domain upon tpterm? And does ON_STARTUP use a pool of
    TuxedoConnection objects?
    4. Are there any configuration parameters for
    'max_number-of_connections? What determines how many simultaneous
    connections can be made?
    Thanks,
    Suresh Mohan.

    Hi Suresh,
    The following are my answers to your questions.
    1. Is there any load-balancing of the tpcall between the two Domains? If so, is it round-robin? If round-robin, what determines the order?
    Yes, there is load balancing between the two remote Tuxedo TDomain gateways. The algorithm is random, not round-robin; over time this should give equal opportunity to both remote TDomains.
    2. If it is ONLY failover, what determines the order of the tpcall? And is the failover automatic, or do we need to code for retry on failure?
    The load balancing is always there, and the failover is automatic. When a connection to a remote TDomain encounters a problem (e.g. network), that remote domain is put on retry-open-connection (in ON_STARTUP) and the load balancing will not select it until the connection is re-established. However, the tpcall() that encountered the error will not be resent to a different destination; it is up to the application to decide whether it wants to resend. Any requests made after the error will not select the failed remote TDomain.
    3. ON_DEMAND vs ON_STARTUP: does ON_DEMAND drop the connection to the remote domain upon tpterm? And does ON_STARTUP use a pool of TuxedoConnection objects?
    TPTERM() only terminates your application session to WTC; WTC still maintains a secured T-session to the remote Tuxedo TDomain. WTC does not use a pool of TuxedoConnection objects - the object stored in JNDI refers to WTC.
    4. Are there any configuration parameters for max_number_of_connections? What determines how many simultaneous connections can be made?
    No. As described in #3, there is no need for a connection pool in WTC. WTC uses the same session and virtual-circuit design concept as Tuxedo TDOMAIN; the logical pool is created and destroyed dynamically. That is why you can have a lot of TPACALL()s outstanding at the same time (the limit is the available system resources).
    Regards,
    Hong-Hsi :-)
