How to do Failover/Load Balancing on a CORBA C++ Client

I have a CORBA C++ client using the BEA Tuxedo 8.0 ORB talking to stateless session beans
on WebLogic Server 6.1/sp2.
Question:
If the WLS servers form a cluster, how do I use this feature from my C++ client to
get failover/load balancing?
Will rmic generate IDL that is cluster-aware if the EJBs are clustered?
Any help is appreciated.
Thanks,
steve

"steve" <[email protected]> writes:
> I have a CORBA C++ client using the BEA Tuxedo 8.0 ORB talking to stateless session beans
> on WebLogic Server 6.1/sp2.
> Question:
> If the WLS servers form a cluster, how do I use this feature from my C++ client to
> get failover/load balancing?

Currently, although the information is available to the client, it does
not use it. This will probably change in a future release of Tuxedo,
but you should talk to your sales rep for details.
For now you can get some degree of failover by catching the
COMM_FAILURE exception and re-looking up CosNaming and your bean.

> Will rmic generate IDL that is cluster-aware if the EJBs are clustered?

The information is provided dynamically at runtime, so it's independent
of the IDL.
andy
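
To make the suggestion above concrete, here is a minimal sketch of the catch-and-re-resolve pattern for the C++ client. It is only an illustration: the Cart interface, its checkout() operation, and the resolve_cart() helper (which would walk CosNaming and narrow a fresh bean reference) are hypothetical stand-ins rather than anything from a BEA sample, and error handling is reduced to a bare retry loop.

    // Minimal failover sketch, assuming the ORB-specific header and the
    // IDL-generated stubs for a hypothetical "Cart" bean are included by
    // the surrounding client code. Not from any BEA sample.
    #include <iostream>

    Cart_var resolve_cart(CORBA::ORB_ptr orb);   // hypothetical CosNaming lookup helper

    void checkout_with_failover(CORBA::ORB_ptr orb, int max_retries)
    {
        for (int attempt = 0; attempt <= max_retries; ++attempt) {
            try {
                // Re-resolve on every attempt so that, after a failure,
                // CosNaming can hand back a surviving cluster member.
                Cart_var cart = resolve_cart(orb);
                cart->checkout();                // the actual business call
                return;
            }
            catch (const CORBA::COMM_FAILURE&) {
                std::cerr << "COMM_FAILURE, retrying with a fresh CosNaming lookup"
                          << std::endl;
            }
        }
        throw CORBA::TRANSIENT();                // give up after max_retries
    }

The important point is that the reference is looked up again on every retry, so after a COMM_FAILURE the naming service can return whichever cluster member is still alive.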

Similar Messages

  • Load balancing for CORBA servers not happening

    We have configured 10 instances of a CORBA server, 5 each in two separate server groups. In the ubbconfig file, I have set "LDBAL Y", expecting that the load will be spread equally among the 10 servers. What is happening is that load is spread between two servers, one in each server group (the last one specified for each group). Other servers in each group got very few requests, and a few servers got zero load.
    What do I need to do in order to spread the load almost equally among the 10 servers, similar to what we get when we use MSSQ with non-CORBA Tuxedo servers?
    This is a single domain single machine environment.
    James

    Hi James,
    As I believe the whitepaper Ed pointed you to probably explained, what you are likely seeing is normal behavior for load balancing. Assuming that you aren't dealing with issues associated with active objects (requests to active objects are always sent to the server where the object is active), you are seeing normal behavior.
    Roughly speaking, in determining which server to give a request to, Tuxedo scans the list of available servers and places the request on the server with the least amount of work queued. If no work is queued for a server, it will place the request on that server's queue. Also, the scan of servers is always done in the same order, so unless your servers are quite busy, the first server will handle most of the requests. Only when that server is busy will Tuxedo go to the next server, and so on. So the only way the 5th server is going to get a request queued to it is if the previous 4 servers are busy (see the sketch after this reply).
    Note that unless you are using parallel objects (user controlled concurrency), Tuxedo will always send the request to a server in the same group as the server that created the object. So in your example, if the factory that created the object was in group 20, all requests to that object are going to go to a server in group 20.
    Regards,
    Todd Little
    BEA Tuxedo Chief Architect
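
    For readers who prefer to see the queue-selection rule in code, here is the sketch referred to above. It is a rough illustration of the behavior Todd describes (scan in a fixed order, take the first idle server, otherwise the least-loaded queue), not Tuxedo's actual implementation; the ServerQueue structure and its field are invented for the example.

    // Rough illustration only (not Tuxedo source) of the dispatch rule
    // described above: scan the candidate queues in a fixed order, use
    // the first idle one, otherwise fall back to the least-loaded queue.
    #include <vector>
    #include <cstddef>

    struct ServerQueue {
        int queued_work;   // work currently waiting on this server's queue
    };

    // Returns the index of the queue that should receive the next request.
    // Assumes 'queues' is non-empty and is always scanned in the same order.
    std::size_t pick_queue(const std::vector<ServerQueue>& queues)
    {
        std::size_t best = 0;
        for (std::size_t i = 0; i < queues.size(); ++i) {
            if (queues[i].queued_work == 0)
                return i;                               // first idle server wins
            if (queues[i].queued_work < queues[best].queued_work)
                best = i;                               // remember least-loaded so far
        }
        return best;                                    // everyone busy: least-loaded
    }

    Under light traffic the scan keeps landing on the same idle queue, which is why only one or two servers per group end up seeing significant load.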

  • IP failover, load balancing and notification...

    Pretend I have the following setup/hardware:
    Two Intel Xserves running 10.4: one is for HTTP traffic, the other HTTPS. The HTTP server contains a static HTML website, while the other server has a large, dynamic, database-driven website whose pages all require SSL encryption. I'll refer to the first as server 1 and the second as server 2.
    Now I want to implement a solution for providing high availability and performance.
    If I wanted IP failover I would need two additional servers, one for the first webserver and the second for the other. Likewise if I wanted to address load balancing I would also need two additional servers, one for server 1 the other for server 2. Now my questions:
    1) It seems that implementing load balancing as described on page 32 of Apple's High Availability PDF would also provide high availability, like IP failover does. If two additional servers were purchased to provide high availability via a load balancing strategy, would there be any need to implement IP failover? Does load balancing provide the same benefits as IP failover when talking about high availability? When, if ever, would one need to implement both strategies?
    2) Can you somehow provide IP failover with only one server as the backup using the setup above (a third server to provide IP failover for both servers 1 and 2)? Assume the third server has all the data of both server 1 and server 2.
    3) Is it possible to have Server Admin or Raid admin notify you of a problem via calling your cell phone or sending you a text message as opposed to only email, maybe via a third party solution? I think (not 100% sure) APC offers this when the power left in their batteries reaches a certain level.
    Thanks.

    1) There's generally no need to implement IP Failover at the server level if you're already using a separate load balancing solution. The load balancer should be able to take care of dealing with a failed server.
    2) Good question - it's not clear whether IPFailover will failover for one machine or more than one.
    3) Most cellphone providers offer an email-to-SMS gateway, allowing you to send an email to an email address that's forwarded to your phone as a text message. Check your cellphone provider for details on what that email address might be (e.g. Cingular uses <phonenumber>@cingularme.com, Verizon uses <phonenumber>@msg.myvzw.com, etc.).

  • Is there any failover/load balancing among kjs engines on a single ias server?

    I guess the question pretty much says it all. To be more explicit, if a kjs engine crashes, would all requests in progress be re-attempted on a different kjs?
    Also, at what level is the user session information stored (kjs, kxs)?
    Thanks a lot

    hi Mihai,
    Yes, we do have a load balancing mechanism between the KJSs in a single instance of the appserver. Requests from the kxs engine will be load balanced in a round robin fashion to the KJS engines.
    Note that if a KJS engine crashes, then the requests will be directed to the other KJS engines, but if the KJS engine is 'hanging', then this redirection does not happen.
    If the sessions are distributed, then they are stored in the KXS, but if they are lite, then they are stored in the KJS engine.
    Hope that helps,
    Vasanth
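
    As a toy illustration of the dispatch behavior described above, the sketch below round-robins across engines and skips the ones known to have crashed; a hung engine still looks alive to a check like this, which is why hangs are not redirected automatically. The Engine structure and its liveness flag are invented for the example and are not iAS internals.

    // Toy round-robin dispatcher (illustration only, not iAS source):
    // skip engines marked as crashed; a hung engine is indistinguishable
    // from a healthy one here.
    #include <vector>
    #include <cstddef>

    struct Engine {
        bool alive;        // false once the process is known to have crashed
    };

    // Returns the index of the next engine to use, or -1 if none are alive.
    int next_engine(const std::vector<Engine>& engines, std::size_t& rr_cursor)
    {
        for (std::size_t tried = 0; tried < engines.size(); ++tried) {
            std::size_t i = rr_cursor % engines.size();
            ++rr_cursor;                       // advance the round-robin position
            if (engines[i].alive)
                return static_cast<int>(i);
        }
        return -1;                             // every engine is down
    }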

  • Cisco RV016 Failover & Load Balance Multi WAN Issue

    Hi,
    I think the RV016 is the correct device to buy for our small building but I am a little confused from the manual whether my intended configuration is possible, so if you could confirm if this is possible I would appreciate it.
    We have a leased line as our primary connection (let's call it WAN1). If this connection is available I don't want to load balance on any other WAN.
    We have 2 identical netgear 4G devices (call them WAN 2 and WAN 3). If the leased line is unavailable, I would like to then load balance these two WAN connections.
    I then have a final connection, WAN4, as a slow ADSL backup line. I do not know at present whether I want to load balance this in a WAN1 failure or just have it as a backup to WAN2 and WAN3. (WAN2 and WAN3 each have a 20 GB monthly data limit; if the leased line is down for more than a couple of days, which has unfortunately happened before, we will hit this limit and then face either extremely expensive data charges or using the ADSL alone.)
    Anyway, bottom line is under normal conditions I don't want to load balance. I only want to load balance WAN2 and WAN3 in the case of WAN1 failure.
    Does anyone know if this is possible? If not, is there any other similar device which would suit?
    Thank you
    Ben

    Hi Bencarroll01,
    With the RV016 you can achieve what you need.
    The RV016 supports up to 7 WAN connections, and there are two working modes:
    Intelligent Balancer (Auto Mode): Select this option to balance traffic between all interfaces to increase the available bandwidth. The router balances the traffic between the interfaces in a weighted round robin fashion.
    IP Group (By Users): Select this option to group traffic on each WAN interface by priority levels or classes of service (CoS). With this feature, you can ensure bandwidth and higher priority for the specified services and users. All traffic that is not added to the IP Group uses Intelligent Balancer mode. To specify the services and users, click the Edit icon for the WAN interface and then add protocol binding entries for each service, IP address, or range of IP addresses.
    For your case, configure the RV016 in IP Group (By Users) mode and add protocol binding rules that force all traffic from any IP address on the local network out through WAN1; the other WAN connections stay up, but no traffic passes through them.
    If WAN1 goes down, the rule redirecting traffic to WAN1 is immediately disabled and all traffic passes through the remaining WAN connections.
    Once WAN1 is up again, the protocol binding rule becomes active again and all traffic goes through WAN1 once more.
    Please let me know if you have any other questions.
    Please rate this post or mark it as answered to help other Cisco customers.
    Greetings
    Mehdi

  • Load Balancing, Server and / or Client ?

    Hi
    I am experiencing a problem with connection pooling in ODP.NET. I have a simple test app that creates a connection, executes a query, populates an object, then closes the connection. I have found that when I turn client-side load balancing on via the ODP.NET connection string property, many connections are created unnecessarily (sometimes the actual number created reaches the max pool size, but the numbers differ randomly). It appears that rather than a free connection in the pool being reused, more connections are being created, which defeats the point of having a connection pool. I also have server-side load balancing configured correctly. Given this finding, can someone possibly answer the following questions?
    a) Do I need both server side and client side load balancing set?
    b) If I do, why is the above behaviour being seen? If not, could you give me a short explanation as to why not?
    The current setup is 11g (patched to 6, awaiting 7 to be applied) RAC, 2 nodes.
    Below is the C# code used while testing this. The table queried is a simple person table containing 16000 rows of data.
    OcConnection = "User Id=XXX; Password=XXX; Connection Lifetime = 60; Data Source=(DESCRIPTION=(ADDRESS_LIST=(FAILOVER=on)(LOAD_BALANCE=off)(ADDRESS=(PROTOCOL=tcp)(HOST=XXX)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=XXX)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=MyFirstTest))); Pooling=true; HA Events = true; Load Balancing = true";
    Code:
    // Requires: using System; using System.Collections.Generic; using Oracle.DataAccess.Client;
    Oracle.DataAccess.Client.OracleConnection con;
    con = new Oracle.DataAccess.Client.OracleConnection();
    con.ConnectionString = OcConnection;
    con.Open();
    // the command object to use for this test
    OracleCommand cmd = con.CreateCommand();
    cmd.CommandText = "select * from PERSON";
    OracleDataReader rdr = cmd.ExecuteReader();
    List<test> listTest = new List<test>();
    while (rdr.Read())
    {
        // map each column into a test object, skipping NULL columns
        test dc = new test();
        if (!rdr.IsDBNull(0))
            dc.id = Convert.ToInt32(rdr.GetValue(0));
        if (!rdr.IsDBNull(1))
            dc.forename = rdr.GetString(1);
        if (!rdr.IsDBNull(2))
            dc.surname = rdr.GetString(2);
        if (!rdr.IsDBNull(3))
            dc.street = rdr.GetString(3);
        if (!rdr.IsDBNull(4))
            dc.city = rdr.GetString(4);
        if (!rdr.IsDBNull(5))
            dc.postcode = rdr.GetString(5);
        if (!rdr.IsDBNull(6))
            dc.country = rdr.GetString(6);
        if (!rdr.IsDBNull(7))
            dc.email = rdr.GetString(7);
        if (!rdr.IsDBNull(8))
            dc.dateadded = rdr.GetDateTime(8);
        if (!rdr.IsDBNull(9))
            dc.randWords = rdr.GetString(9);
        if (!rdr.IsDBNull(10))
            dc.uniqueNumber = Convert.ToInt32(rdr.GetValue(10));
        listTest.Add(dc);
    }
    rdr.Close();
    con.Close();
    rdr.Dispose();
    cmd.Dispose();
    con.Dispose();
    Thanks for your time
    Victoria

    Here are the HTTP headers as monitored on the client side. Notice the good.txt file includes a GET as its initial request. All works fine in this case. However, the initial request in the bad.txt is a POST. This is odd since the URL was opened using the same shortcut in both incidents and the browser was closed between each trace that was taken. I've also reviewed the shortcut with Notepad to verify it does not include unwanted data such as the JSESSIONID info, etc.
    Once you have reviewed the HTTP headers, I have these questions.
    1. IIS is sending the 100 Continue messages as you mention, but why is the CSS injecting the cookie in a 100 response that is not typically processed by the client? The bad.txt file shows the client receiving two ARPT cookies because the first cookie in the 100 continue response was ignored.
    2. I know Cisco is not really in the business of troubleshooting browser behaviour. But do you know why the browser would behave differently....GET in one request and a POST in the next? We do not wish to get into modifying the browser, so I'm hoping we can provide a solution on the server side that will allow the browser to function this way if it chooses to do so. Do you think it would make sense to push the state management up a level to the cookie handed out by JRun? This way, the cookie would not be handed back in a 100 response from IIS, and we could tell the CSS to monitor the JRun cookie. Of course this would require we determine how to manage this cookie, either by modifying the cookie to have static data for each server, or by using the right method of hashing...etc.
    Chris

  • Failover, load balancing, band sharing - problem

    I would like to accomplish the following task:
    SW-A and SW-B are two separate subnets, with no contact between them.
    1. Router A would act as the gateway for computers connected to SW-A.
    2. Router B would act as the gateway for computers connected to SW-B.
    3. Router A "gives" all of its unused bandwidth to Router B (the office runs only periodically and is sometimes unused for weeks or even months; we do not want that bandwidth to be wasted).
    4. Router A has a bandwidth of 8/8.
    5. Router B has a bandwidth of 8/8 plus the unused bandwidth of Router A.
    6. In case of failure of Router A, Router B takes over as the gateway for SW-A, giving it 50% of the total bandwidth.
    7. In case of failure of Router B, Router A takes over as the gateway for SW-B, giving it 75% of the total bandwidth.
    Is this possible? The problem is points 3 and 5: can Cisco equipment manage and share unused bandwidth, and on which devices can this be done (routers, firewalls)?
    sorry for my english
    Thanks
    dk

    It might be either that the partners you've spoken to are incompetent to do what you're requesting, or that they and/or I don't fully understand what you're asking. I will say I've done load sharing (outbound) using OER/PfR with HSRP/GLBP. It works very nicely, and although there's some learning required, it's not really too difficult. PfR supports ingress sharing too, but ingress sharing can be very, very difficult to deal with. Much depends on what's on the "outside".
    As to sizing a router, number of users often means little.  What's important is how much (and sometimes kind of) traffic will be passing through the router.  I've attached a Cisco document that will explain much about ISR performance for different traffic loads and kinds.

  • JMS Failover & Load balancing.

    Hi,
    I have 4 managed servers A, B, C, D on 4 physical boxes. We have one JMS server, on Box D; all the other managed servers use this single JMS server, and if it goes down we lose all messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers and 4 file stores, one for each managed server. My question is: is WebLogic intelligent enough that if a client connects to the Box B JMS server and that server goes down, the message will be sent to another JMS server?

    ravi tiwari wrote:
    > Hi,
    > I have 4 managed servers A, B, C, D on 4 physical boxes. We have one JMS server, on Box D;
    > all the other managed servers use this single JMS server, and if it goes down we lose all
    > messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers
    > and 4 file stores, one for each managed server. My question is: is WebLogic intelligent
    > enough that if a client connects to the Box B JMS server and that server goes down, the
    > message will be sent to another JMS server?

    You don't mention if you're running in a clustered environment or what
    version of WLS you're using, so I've assumed a cluster and WLS 8.1.
    For resiliency, you should really have 4 JMS servers, one on each
    managed server. Then each JMS server has its own filestore on the
    physical managed server machine.
    So, you have JMSA, JMSB, JMSC, JMSD with FileStoreA, FileStoreB,
    FileStoreC & FileStoreD.
    You should also look at using JMS distributed destinations as described
    in the documentation.
    In your current environment, if server D goes down, you not only lose
    your messages; your application also loses access to your JMS queues.
    If you use distributed destinations, and have 4 JMS servers, your JMS
    queues will still be available if a single server goes down.
    If a server does go down however, you have to follow the JMS migration
    procedures to migrate the JMS service from the failed server to a
    running one.
    There are conditions to this process, which are best found out from the
    migration documentation to be honest, rather than describe it here.
    We use this setup, and it works fine for us. We've never had to use JMS
    migration, as so far we haven't had anything serious to cause us to need
    to migrate. Our servers also boot from a SAN which makes our resilience
    processes simpler.
    Hope that helps,
    Pete

  • CSM for enabling hot spare instead of load balancing

    We have a server at a remote site that we want the remote clients to use. We only want them to use the central server if the remote one fails. Is there a way I can use "weight" or some other method to accomplish this? Thanks.

    What device are you talking about?
    There is a solution, but it is different depending on the machine you are using.
    Gilles.

  • How to achive failover & load balancing

    Hi ,
    I have installed WebLogic server 5.1,
    I followed some documentation on clustering,
    but I don't know how to set up the cluster configuration in the weblogic.properties file.
    If you have a sample properties file, please
    send it to me.
    Thanks

    I think the best way is to follow the documentation.
    Attached is my startup script.
    Cheers - Wei
    Sanjay <[email protected]> wrote in message
    news:399959ba$[email protected]..
    Hi ,
    I have installed WebLogic server 5.1,
    I followed some documentation on clustering,
    but I don't know how to set up the cluster configuration in the weblogic.properties file.
    If you have a sample properties file, please
    send it to me.
    Thanks
    [swc1.cmd]

  • Connection string in listener log file for loading balance/failover

    Hi Experts,
    I have a 4-node RAC for Oracle 10gR2 on Red Hat 5.0.
    We created a service dbsale (sale1/2 as primary and sale3/4 as available) with load balancing/failover.
    The remote user created a local TNS as
    localmarket =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 155.206.xxx.xx)(PORT = 1521))
    (LOAD_BALANCE = OFF)
    (CONNECT_DATA = (SERVICE_NAME = dbsale))
    From the server side, I saw that the user sent two connection request strings; one failed and the other was OK.
    It seems that the failed connection comes from failover/load balancing to dbsale3?
    Why do we get two connection strings in the listener log file?
    What is the difference between the two connection strings?
    Where does the system change these connection strings?
    Thanks for your explaining.
    Jim
    ==============listener.log message
    [oracle@sale log]$ cat listener_sale.log|grep pmason
    15-SEP-2009 13:52:24 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54326)) * establish * dbsale * 0
    15-SEP-2009 13:52:25 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))(SERVER=dedicated)(INSTANCE_NAME=sale3)) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54327)) * establish * dbsale * 12520
    15-SEP-2009 13:52:30 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54329)) * establish * dbsale* 0
    15-SEP-2009 13:52:47 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54332)) * establish * dbsale * 0
    15-SEP-2009 13:52:47 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))(SERVER=dedicated)(INSTANCE_NAME=sale3)) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54333)) * establish dbsale 12520
    15-SEP-2009 13:52:49 * (CONNECT_DATA=(SERVICE_NAME=dbsale)(CID=(PROGRAM=oracle)(HOST=rock)(USER=test ))) * (ADDRESS=(PROTOCOL=tcp)(HOST=161.55.xxx.xx)(PORT=54334)) * establish * dbsale * 0
    Edited by: user589812 on Sep 16, 2009 7:21 AM

    Hi Jim,
    I think the best approach in this case is to create one service with one instance as primary and the other three as available.
    Or use a connect string with two VIP addresses, since the service has two primary instances and the tnsnames.ora entry has only one address.
    Cheers,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • Load balancing v/s Clustering with  BOXI enterprise premium

    We are planning to install BusinessObjects Enterprise Premium on Windows Server 2008 (64-bit), and we are going to use an Oracle database. My question is:
    "Can we set up both Crystal Reports and BusinessObjects (Web Intelligence) either in a clustered environment or behind a load balancer?"
    If not, can you please let me know what the best option is?

    Oh. All BOE (this includes Crystal) servers support clustering (and software load balancing via CORBA). Only the input and output FRS do not support load balancing, i.e. while you can have multiple input/output FRS, only one of each is active at a time. The others are passive and will only be used if the active FRS is unavailable.
    As an aside, if I remember correctly, a BOE Premium license is required for clustering.
    So, in essence, you do not need a hardware load balancer to support load balancing for both Crystal and Webi.

  • CSS Load Balancing for MS Winsock Proxy Client

    Has anyone load balanced Microsoft Winsock Proxy client? I am trying to load balance internal users using the Winsock client to two MS ISA Servers running Winsock proxy for application access to the internet.

    Thanks for the post, I got this from Microsoft:
    I wanted to update you on the information I investigated on the firewall client. I found that the actual port connection used to control the connection through ISA is by default UDP. This UDP session runs over port 1745 to the ISA server. This initial connection then allows a connection over an ephemeral port to the ISA server for the actual data transfer. The data transfer is done via a TCP connection. The connection control is UDP based by default. This can be changed in the Wspcfg.ini file. By adding the ControlChannel value to the WSP_client_app section of this file, you can use WSP.TCP so the connections are TCP based. In your situation, this may be the best scenario given that the connections are being load balanced.
    TCP is used by default when checking the Firewall configuration. This is why the traces showed the connection with TCP.
    Information on this can be found in the ISA help files. In the search panel of the ISA help, type in "ControlChannel" without the quotes and it will show information on this feature.
    I will re-test with TCP only setup, and see if this helps. I also have some sniffer traces I need to review to see if maybe NAT is killing me, not UDP traffic.
    I'll post back my findings next week.

  • ConnectionFactory - who does the load balancing

    Consider creating a connection factory (with server affinity unticked, load balancing
    ticked, and a message delivery policy of round robin); we then go on to create a
    distributed destination targeted at the cluster of two managed servers (managed1
    and managed2).
    If I create a simple Java app that puts messages to that distributed destination,
    using the connection factory above, who's responsible for doing the load balancing?
    Does the client create the session knowing that the connection factory requires
    load balancing and thus take responsibility for it, or does the client just
    put a constant stream of JMS messages to WLS and the connection factory class
    takes responsibility for the load balancing?
    Who maintains the delivery state, the client application or WLS (i.e. whose job
    is it to look up the last message's queue destination)?

    Hi Barry,
    A JMS client's produced messages are first delivered to the WL server
    that hosts the client's JMS connection. The JMS connection
    host remains unchanged for the life of the connection.
    Once produced messages arrive on the connection host,
    they are load balanced to their JMS destination.
    For more information I suggest reading the clustering
    sections of the JMS Performance Guide white-paper. You can find
    the white-paper here:
    http://dev2dev.bea.com/technologies/jms/index.jsp
    Tom Barnes
    Barry Myles wrote:
    > Consider creating a connection factory (with server affinity unticked, load balancing
    > ticked, and a message delivery policy of round robin); we then go on to create a
    > distributed destination targeted at the cluster of two managed servers (managed1
    > and managed2).
    >
    > If I create a simple Java app that puts messages to that distributed destination,
    > using the connection factory above, who's responsible for doing the load balancing?
    >
    > Does the client create the session knowing that the connection factory requires
    > load balancing and thus take responsibility for it, or does the client just
    > put a constant stream of JMS messages to WLS and the connection factory class
    > takes responsibility for the load balancing?
    >
    > Who maintains the delivery state, the client application or WLS (i.e. whose job
    > is it to look up the last message's queue destination)?

  • Where Load Balancing Takes Place

    Hi guys:
    I've seen a post by Todd Little.
    http://www.oracle.com/technetwork/middleware/tuxedo/overview/ld-balc-in-oracle-tux-atmi-apps-1721269.pdf
    In section "Where Load Balancing Takes Place"
    It said
    Whereas for /WS clients, the tpcall/tpacall/tpconnect just send the service request to WSH and *do not*
    *perform load balancing in /WS clients*. WSH calls native client routine to achieve the load balancing
    task on behalf of /WS clients. *To achieve the load balancing between /WS clients and WSL servers,*
    *multiple WSL access points can be configured by WSNADDR.* This feature can assign the /WS clients
    evenly to different WSL servers to balance the work load between WSL/WSH in the system.
    I'm very confused by this description. In MP (clustered) mode, where does load balancing take place: the client side or the server side? I mean, do all clients send requests to the master node, and the master node dispatches the requests to the other slave nodes?

    Hi,
    All load balancing (or perhaps better called load optimization) occurs in the native client code where all request routing occurs. So in an MP configuration, the native client (or a handler in the case of workstation, Jolt, or IIOP clients) makes the routing decision which includes load balancing. So the client looks at the load on all the servers across the cluster and makes a routing decision taking into consideration such things as NETLOAD. Note however that before Tuxedo 12c the load information held locally for remote servers (actually queues) was never updated in realtime by the remote machines. Thus the load would increase continuously until the next BB scan at which point the BBL would zero the locally held load information for remote queues. In Tuxedo 12c with TSAM installed, the locally held load information for remote queues IS updated dynamically using a variety of techniques including piggybacking load information in reply messages and periodically sweeping load information to other machines in the cluster. The former works OK, whereas the latter works really well.
    Also, the MASTER node in a cluster is basically just the boss for configuration and state changes. It has no special role in request routing or processing. In fact, if the MASTER machine dies, the cluster continues to operate just fine, but configuration and state changes such as starting or stopping servers can't occur until the MASTER is back up or migrated to the BACKUP.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
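
    To make the routing rule a little more concrete, here is a small illustrative sketch (not Oracle Tuxedo source) of a client-side decision that compares locally held load figures and adds a NETLOAD-style surcharge to queues on remote machines before picking the cheapest one. The structure, field names, and the simple additive penalty are assumptions made for illustration only.

    // Illustration only: pick a queue the way a native client might, by
    // comparing locally held load estimates and penalizing remote queues
    // with a NETLOAD-style surcharge. Not Oracle Tuxedo source code.
    #include <vector>
    #include <cstddef>

    struct QueueInfo {
        long queued_load;   // locally held estimate of the work already queued
        bool remote;        // queue lives on another machine in the cluster
    };

    // Returns the index of the chosen queue. Assumes 'queues' is non-empty.
    std::size_t route_request(const std::vector<QueueInfo>& queues, long netload)
    {
        std::size_t best = 0;
        long best_cost = -1;
        for (std::size_t i = 0; i < queues.size(); ++i) {
            long cost = queues[i].queued_load + (queues[i].remote ? netload : 0);
            if (best_cost < 0 || cost < best_cost) {
                best = i;
                best_cost = cost;
            }
        }
        return best;   // the caller enqueues the request here and bumps queued_load
    }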
