SO Load Balancing Question

Hi all,
I have a service object (SO1) which has been set to Load Balancing.
This service object has an attribute which serves as a number allocator
(NA1).
This NA1 provides a unique number across the whole application for each of
the records that need to be stored in the DB.
The problem is, will the NA1 get replicated if the SO1 is replicated?
If yes, will NA1 crash?
Regards,
Martin Chan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Senior Analyst/Programmer
Dept of Education and Training
Mobile : 0413-996-116
Email: martin.chan@det.nsw.edu.au
Tel: 02-9942-9685

Hi Serge,

> Could you prefix it with the PID of the load balanced process?
No I can't. At least not at the moment.

> When a service object is replicated, it is automatically replicated into a
> different partition...
Thanks.

> An advice, make the NA1 shared. So if you get to do multithreaded access to
> it, you won't screw up things.
I am thinking it may be better off to create it as a service object on its
own.

> How is the number returned by the NA1 generated?
It gets generated by Forte's code.

> ... Try to make it so that the load balanced partitions don't need to
> access the database more than once in 5 min. to get a new Seed Key. This
> way you would not need the PID.
Thanks for your advice.

Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 14:17
To: Chan, Martin
Subject: RE: (forte-users) SO Load Balancing Question
You're right, they can generate the same number. How much control do you have
over the ID being generated? Could you prefix it with the PID of the load
balanced process?
Just a note: when a service object is replicated, it is automatically
replicated into a different partition, possibly on the same machine or on a
different one.
A piece of advice: make the NA1 shared. That way, if you get multithreaded
access to it, you won't screw things up.
How is the number returned by the NA1 generated? If NA1 is using a stored
procedure, or something like:
Start TRX
read number
newnumber = number+5000
write back newnumber
End Trx
Something like this will be very safe. The database index table takes care
of the critical section. Then you can be sure that the replicates are
independent (they will not hit into each other) for 5000 iterations.
Depending on the frequency, you may want to raise or lower this number. Too
high, and the key would get very large very soon, with holes in the
sequence. Too low, and you would have hits between the replicates. Try to
make it so that the load balanced partitions don't need to access the
database more than once in 5 min. to get a new Seed Key. This way you would
not need the PID.
Serge
At 01:59 PM 4/3/2001 +1000, you wrote:
Hi Serge,
The number returned by the NA1 is used as a primary key for each of the
records stored in the DB.
The Number Allocator NA1 needs to access the DB to update an ID table
which carries the next available sequence number. NA1 will only update this
table once every 5000 records.
For example, the initial value of the sequence is: 1
The next update will change the value to 5001, the next to 10001, and so on.
The properties of this NA1 class at runtime
Shared - Disallowed
Distributed - Disallowed
Transactional - Is Default
Monitored - Disallowed
Unfortunately, this attribute is not a handle but is instantiated by the SO1
itself.
I have been thinking: if SO1 is replicated within the same partition, each
replicate will carry its own NA1. NA1 and its replicate may return the same
number if the initial values of their sequences are the same. Correct?
Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 13:11
To: Chan, Martin; forte-users@lists.xpedior.com
Subject: Re: (forte-users) SO Load Balancing Question
Let's see if I understand right.
You have a service object that keeps a handle to an object that either keeps
state information or generates state information. Now the thing to figure
out is which it is. Let's assume that NA1 is a number generator that does
not need to be synchronized and doesn't need to access any external
resource. It would still work, depending on the algorithm you are using.
Will they share the same NA1? It depends on the nature of NA1, but for sure
NA1 would have to be an anchored object. And if multiple partitions shared
the same object "only" for key generation, you would bring down your
performance on key generation or key update (by adding one inter-process
call).
In short:
1. Many scenarios can happen; you need to be clearer in your description.
2. Sharing an object across load balanced partitions greatly reduces the
gain of load balancing the partition.
3. If NA1 is keeping state, any access to it would need to be controlled
("shared").
Have fun now...
Serge
At 12:30 PM 4/3/2001 +1000, Chan, Martin wrote:
Hi all,
I have a service object (SO1) which has been set to Load Balancing.
This service object has an attribute which serves as a number allocator
(NA1).
This NA1 provides a unique number across the whole application for each of
the records that need to be stored in the DB.
The problem is, will the NA1 get replicated if the SO1 is replicated?
If yes, will NA1 crash?
Regards,
Martin Chan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Senior Analyst/Programmer
Dept of Education and Training
Mobile : 0413-996-116
Email: martin.chan@det.nsw.edu.au
Tel: 02-9942-9685
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: forte-users-request@lists.xpedior.com

Serge Blais
Professional Services Engineer
iPlanet Expertise Center
Sun Professional Services
Cell : (514) 234-4110
Serge.Blais@Sun.com

Similar Messages

  • Load Balancing question

    My company is in the process of building a small scale network architecture strictly for testing purposes. We have a DMZ area that contains 2 load balancers and 1 web server. The web server is a SunFire 280 and has two gig-e NICs. They want to cable one NIC to one load balancer and one NIC to the other. Since this is only one box we have to put the NICs on separate subnets. The question is: can I configure the load balancers in a failover situation or an active/active situation, with one load balancer on one VLAN and the other load balancer on a separate VLAN?

    I was not able to understand why you want to give IPs to the two NICs from different subnets.
    There is no such requirement. If you have your own requirement, can you explain it to me?
    Ashman

  • CSS load balancing questions

    I hope that someone can help with 2 simple (i think) CSS questions.
    1. When configured properly for load balancing, should the CSS round-robin between servers or will it continue to use only one server until triggered by some event or parameter?
    2. If 1 of 2 load balanced servers fails, how does load balancing proceed? Will it continue to try to load balance between the servers or will it give up on the failed server unitil some event or timeout occurs?
    Thanks in advance,
    Eliot

    Hi Eliot,
    The CSS can be configured to perform load balancing in a variety of different ways. Least connections, round robin, ACA etc. Each new connection through the CSS will be round robined over each of the servers in your server group.
    If a server fails then the CSS will know it has failed through the use of keepalives (based on TCP connection, ICMP etc) and will no longer send requests to that server. Traffic associated with a previous connection to the failed server will be sent to one of the surviving servers. It is then up to the behavior of the application as to whether the user experiences any disruption.
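    A sketch of that behaviour (illustrative code, not CSS internals; the server names are made up):

```python
# Sketch of round-robin with keepalive-driven failover: each new connection
# goes to the next server still marked alive. Server names are made up.
from itertools import cycle

class RoundRobin:
    def __init__(self, servers):
        self.servers = servers
        self.alive = {s: True for s in servers}
        self._ring = cycle(servers)

    def pick(self):
        # Skip any server the keepalive has marked down.
        for _ in range(len(self.servers)):
            s = next(self._ring)
            if self.alive[s]:
                return s
        raise RuntimeError("no servers alive")

lb = RoundRobin(["web1", "web2"])
first = [lb.pick() for _ in range(4)]
print(first)              # ['web1', 'web2', 'web1', 'web2']
lb.alive["web2"] = False  # keepalive detects the failure
after = [lb.pick() for _ in range(3)]
print(after)              # ['web1', 'web1', 'web1']
```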
    Hope this helps
    Brett

  • Clustering problems and load balancing question

              I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
              machines, 2 App. Server machines and one database server machine. I have defined
              one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
              so it should be when I fix my problems!
              My questions/problems are the following:
              1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
              clusters? That is, the client-simulator machines send the requests to the software
              dispatcher which performs workload balancing between the 2 Weblogic clusters. The
              clusters perform round-robin amongst all instances. Note that the documentation only
              talks about Hardware Balancing.
              2. I am having problems with my multicast IP addresses. For instance, on one App.
              Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
              I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
              Time Out". I have used the utils.MulticastTest utility which shows the packets not
              being received:
              I (S1) sent message num 1
              I (S1) sent message num 2
              I (S1) sent message num 3
              I (S1) sent message num 4
              What am I doing wrong?
              3. Re. the cluster configuration:
              NOTE: I have executed my workload using 2 independent App. Server machines with a
              software dispatcher - no clustering. Each App. Server used a jdbc connection pool
              of 84 database connections. The db connections happened to become my bottleneck.
              When I tried to increase the number of connections in the jdbc pool, throughput decreased
              dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
              of my 8 x 900Mhz machines in order to scale up. Unfortunately, adding clusters has
              not been that simple a task - probably because I am totally new to the Web Application
              Server world!
              Here is what I've got so far:
              I have obtained 3 static IP addresses for the 3 instances of Weblogic instances that
              I wish to run within the cluster. All servers in the cluster use port number 80.
              There is a corresponding DNS entry for each IP address. My base assumption is that
              one of these instances will double up as the Administration Server... Is it true,
              or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
              (each with a connection pool of 84 database connections for a total of 252 database
              connections)?
              Do I need to re-deploy my applications for the cluster? And if so, would this explain
              why I am having problem starting my Admin Server?
              I think this is it for now. Any help will be greatly appreciated!
              Thanks in advance,
              Guylaine.
              

              Guylaine Cantin wrote:
              > I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
              > machines, 2 App. Server machines and one database server machine. I have defined
              > one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
              > so it should be when I fix my problems!
              >
              > My questions/problems are the following:
              >
              > 1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
              > clusters? That is, the client-simulator machines send the requests to the software
              > dispatcher which performs workload balancing between the 2 Weblogic clusters. The
              > clusters perform round-robin amongst all instances. Note that the documentation only
              > talks about Hardware Balancing.
              >
              We also support software load balancers (e.g. Resonate).
              The software dispatcher should be intelligent enough to decode the
              cookie and route the request to the appropriate servers. This is
              necessary to maintain sticky load balancing.
              > 2. I am having problems with my multicast IP addresses. For instance, on one App.
              > Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
              > I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
              > Time Out". I have used the utils.MulticastTest utility which shows the packets not
              > being received:
              >
              > I (S1) sent message num 1
              > I (S1) sent message num 2
              > I (S1) sent message num 3
              > I (S1) sent message num 4
              > ...
              >
              > What am I doing wrong?
              >
              You should run the above utility from multiple windows and see whether each
              of them is being recognized or not.
              i.e. java utils.MulticastTest -N S1 -A 239.0.0.1
              java utils.MulticastTest -N S1 -A 239.0.0.1
              > 3. Re. the cluster configuration:
              >
              > NOTE: I have executed my workload using 2 independent App. Server machines with a
              > software dispatcher - no clustering. Each App. Server used a jdbc connection pool
              > of 84 database connections. The db connections happened to become my bottleneck.
              > When I tried to increase the number of connections in the jdbc pool, throughput decreased
              > dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
              > of my 8 x 900Mhz machines in order to scale up. Unfortunatly, adding clusters have
              > not been that simple a task - probably because I am totally new to the Web Application
              > Server world!
              >
              You have to stress test your application several times and set
              maxCapacity of the conn pool accordingly.
              > Here is what I've got so far:
              >
              > I have obtained 3 static IP addresses for the 3 instances of Weblogic instances that
              > I wish to run within the cluster. All servers in the cluster use port number 80.
              > There is a corresponding DNS entry for each IP address. My base assumption is that
              > one of these instances will double up as the Administration Server... Is it true,
              > or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
              > (each with a connection pool of 84 database connections for a total of 252 database
              > connections)?
              BEA recommends using the Admin server for administrative tasks only,
              like configuring new deployments, jdbc conn pools, adding users, etc.
              It's not a good idea to have the admin server be part of the cluster.
              >
              > Do I need to re-deploy my applications for the cluster? And if so, would this explain
              > why I am having problem starting my Admin Server?
              >
              You have to target all your apps to the Cluster.
              > I think this is it for now. Any help will be greatly appreciated!
              >
              > Thanks in advance,
              >
              > Guylaine.
              >
              

  • Office Web App Load balancing Question

    I am going to install Office Web Apps in a load balanced farm behind F5. There are a few questions I want to ask:
    Do I first put the servers in the load balancer and start installing Office Web Apps, or, after installing Office Web Apps on one server, put that in the load balancer and add another one, or put both in the load balancer and start installing?
    When I tried without putting the servers in the load balancer, with the offload SSL parameter, joining the second server to the farm gave me the error "destination unreachable". Is it because, with the offload SSL parameter, it is looking for the load balancer, or something
    else? I checked port 809 from the servers and also verified the user was a local admin on both machines.
    Any insight will be helpful.
    Adit

    Please refer to the link below for the configuration of NLB for OWA servers:
    http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx
    Please ensure that you mark a question as Answered once you receive a satisfactory response.

  • Hypothetical RAC load balancing question

    I'm trying to get a better understanding of RAC. Suppose I have the following tnsnames entry on Client1:
    DEV =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev010.net)(PORT = 1795))
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev020.net)(PORT = 1795))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = dev1)
    (failover_mode=(type=select)(method=basic))
    dev010.net and dev020.net are two nodes that I am trying to balance the connection from Client1 to.
    dev010.net and dev020.net have the following listener.ora configurations respectively:
    LISTENER_DEV1 =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev010.net)(PORT
    = 1795))
    SID_LIST_LISTENER_DEV1 =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = PLSExtProc)
    (ORACLE_HOME = /oracle/ora92)
    (PROGRAM = extproc)
    (SID_DESC =
    (ORACLE_HOME = /oracle/ora92)
    (SID_NAME = dev1)
    LISTENER_DEV2 =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev020.net)(PORT
    = 1795))
    SID_LIST_LISTENER_DEV2 =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = PLSExtProc)
    (ORACLE_HOME = /oracle/ora92)
    (PROGRAM = extproc)
    (SID_DESC =
    (ORACLE_HOME = /oracle/ora92)
    (SID_NAME = dev2)
    Question: since 'lsnrctl services' shows LISTENER_DEV1 and LISTENER_DEV2 are registered with services dev1 and dev2 respectively, how do I load balance Client1's connection between the two nodes if its service name is set to dev1?
    Can you set up RAC with some sort of master listener?
    I'm new to RAC, so any help would be appreciated.

    Try this:
    LISTENER =
      (ADDRESS_LIST =
            (ADDRESS =
              (PROTOCOL = IPC)
              (KEY = DB1.WORLD)
            (ADDRESS=
              (PROTOCOL = IPC)
              (KEY = DB1)
            (ADDRESS =
              (COMMUNITY = MYRAC.WORLD)
              (PROTOCOL = TCP)
              (HOST = )
              (PORT = 1521)
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = DB101)
          (ORACLE_HOME = /oracle/Ora92)
        (SID_DESC =
          (SID_NAME = DB102)
          (ORACLE_HOME = /oracle/Ora92)
      )

  • Exchange 2013 Load Balancing Question

    Hey Everyone,
        I have recently started building up my companies Exchange 2013 environment and ran into some questions that I can't seem to find clear answers for on Google.
        First, a little bit about my set up:
    2 CAS Servers
    2 Mailbox Servers
    Citrix NetScaler load balancing the external URL (Controlling all incoming ports 25, 80, 443, 587, 993, and 995) to both of my CAS servers
    This is not doing SSL offloading, it's just forwarding encrypted traffic to the CAS servers
    I have configured a DAG between the 2 mailbox servers and am able to actively move the database my user account is on between the 2 copies with outlook disconnecting / reconnecting in about 10 - 15 seconds of moving it.
    My questions started when I saw what Outlook was filling in for the "Server" field once autodiscover set it up.  I found this very strange server name in it:  *** Email address is removed for privacy ***
    Once I read up on it, I think I understand what it does. If I understand correctly, this weird URL is sort of like an old CAS array from Exchange 2010. When I started testing the failover is when I started running into issues.
    When I shut down one of my mailbox servers, my outlook will lose connection and it won't come back.  The mailbox database that my user account is on successfully failed over to the other DAG copy but outlook never correctly connects.  I
    believe this issue has something to do with the new CAS functions of Exchange 2013 since DAG works fine.
    If I look at my "Connection Status" in Outlook, I see that there are several connections open.  All of them have a Proxy server address of "exchange.domain.com" and out of the 3 that show up there, they are all pointed to
    the weird URL mentioned above.
    Whew, long post but let me summarize my questions below:
    1)  If exchange is configured to be fully redundant, why does my outlook disconnect when I shut down one of the servers?
    2)  What is the weird URL pointing to that I mentioned above that is showing in outlook?
    3)  How can I get Outlook to not lose its connection when any 1 of the servers goes down?
    Thanks,
    Zac

    Hi,
    According to your description, it seems that the load balancer was not configured successfully.
    I recommend you refer to the following article to configure the load balancer for Exchange 2013 :
    http://blogs.vmware.com/vsphere/2012/11/load-balancing-using-vcloud-networking-and-security-5-1-edge.html 
    Hope this helps!
    Thanks.
    Niko Cheng
    TechNet Community Support

  • Connection Load Balancing question

    Hi All,
    this is from oracle net8 Administrator's Guide (between double quotes):
    "Connection load balancing improves connection performance by balancing the number of active connections among multiple dispatchers. In an Oracle Parallel Server environment, connection load balancing also has the capability to balance the number of active connections among multiple instances. "
    My question is, from that statement above, does it mean that connection load balancing can only be useful on an OPS with MTS instances?
    and second is , does it make sense (or can it be done) to implement it on single instance of MTS?
    TIA,
    Andi

    Hi Andi,
    CLB is only used in a cluster configuration! In a single-node instance there is no advantage, for the obvious reason that it adds one more layer.
    You are right... use it only with OPS, and preferably with MTS.
    Best,
    G

  • Load balance question on services deployed in two slaves

    Case:
    One master: 192.172.1.1
    Two slaves: 192.172.2.1/192.172.2.2
    There is service A deployed in slave1(192.172.2.1) and slave2(192.172.2.2). Service A will call Service B, which is also deployed in slave1 and slave2.
    Condition:
    If I tmshutdown -s service_A in slave1, there is a service_A alive only in slave2.
    Question:
    Now there are requests to service A in slave2. Will service B in slave1 be called by service A in slave2 or not?
    My experiment proves it true. However, in my mind, the requests to service A in slave2 would only call service B in slave2, not slave1. Is that wrong?
    Thanks for your kindly reply.

    Bill,
    There is no unit for LOAD and NETLOAD other than to compare to other values of LOAD and NETLOAD. The objective is to send a request to the server with the lowest total load. If there is an idle local server offering the service requested then a local server will always be chosen.
    Tuxedo keeps track of the total load sent from this machine to other machines. Every sanityscan interval the load balancing statistics are reset to 0.
    Assume that service B has a LOAD of 50 on both machine 1 and machine 2 and that NETLOAD is 80. There is only 1 server offering this service on each machine, it is 1:00:00, queues are empty, the sanity scan has just run, and SCANUNIT*SANITYSCAN=120. We are on machine 2. Service B takes 5 seconds to complete on both machine 1 and machine 2.
    1:00:00 [M1 total work = 0, M2 total work = 0]
    1:00:01 request arrives, route to M2 due to idle local server preference [M1 total work = 0, M2 total work = 50]
    1:00:02 request arrives, route to M2 since 0+50+80 > 50+50+0 [M1 total work = 0, M2 total work = 100]
    1:00:03 request arrives, route to M1 since 0+50+80 < 100+50+0 [M1 total work = 130, M2 total work = 100]
    1:00:04 request arrives, route to M2 since 130+50+80 > 100+50+0 [M1 total work = 130, M2 total work = 150]
    1:00:05 request arrives, route to M2 since 130+50+80 > 150+50+0 [M1 total work = 130, M2 total work = 200]
    You're correct that although NETLOAD is set the remote service can sometimes still be called.
    Either LOAD or NETLOAD could have a greater effect depending on how big LOAD is compared to NETLOAD. In the example above approximately 13 requests would be sent to the local machine for every 5 requests sent to the remote machine. If there are frequently idle local servers on the local machine then a greater percentage of requests will be sent to the local machine, and if there is always an idle local server then all requests will be sent to the local machine. The periodic reset of load balancing statistics can also affect how many requests are sent to each machine.
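    The arithmetic in the walkthrough above can be sketched as follows. This is a rough model, not Tuxedo code; it assumes the local machine is M2 (the one whose idle server takes the first request), and the function and variable names are made up.

```python
# Rough model of the totals in the example above: LOAD=50 on both machines,
# NETLOAD=80 charged when a request leaves the local machine (assumed to be
# M2 here). Names are illustrative, not Tuxedo internals.
LOAD, NETLOAD = 50, 80

def route_one(totals, local, remote, local_idle):
    """Route one request to the cheaper machine; charge its cost to the total."""
    local_cost = totals[local] + LOAD              # no NETLOAD locally
    remote_cost = totals[remote] + LOAD + NETLOAD  # off-machine surcharge
    if local_idle or local_cost <= remote_cost:
        totals[local] = local_cost
        return local
    totals[remote] = remote_cost
    return remote

totals = {"M1": 0, "M2": 0}
log = [route_one(totals, local="M2", remote="M1", local_idle=(i == 0))
       for i in range(5)]
print(log)     # ['M2', 'M2', 'M1', 'M2', 'M2']
print(totals)  # {'M1': 130, 'M2': 200}
```

    This reproduces the five routing decisions and the final totals from the example.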
    Regards,
    Ed

  • Load balance question

    Hi,guys
    Suppose I start an empty cache first, then start another cache, load lots of data, and put the data into the cache. If the two caches join the same cluster, they will implement load balancing automatically. It is transparent to us.
    My question is,
    Can I put data into the empty cache until it is full, and then put data into the other one?
    Thanks,
    Bin

    Hi Bin,
    For partitioned caches (distributed and near-cache topologies), if you have multiple storage-enabled cache nodes within the same partitioned cache service, each of them will serve as the primary node for a share of the data. The identity of the cache node which is primary for a particular data entry depends on the key of the entry and the key association algorithm chosen (see the Wiki and the forum posts about this). By the default algorithm, the amount of data stored by each node as primary is about equal to each other (depends on the distribution of the hashCode algorithms).
    Also, if you have a backup count greater than zero, each node will also hold a backup copy of data for which other nodes are the primary nodes. The amount of the backup data in a node is roughly the amount of primary data multiplied by the configured backup-count (can be provided in the cache configuration, and by default it is 1).
    So it is almost impossible to achieve a distribution of data in which you fill up the memory on one cache node before starting to consume memory on another.
    First of all, you would have to turn off backups; otherwise each added entry is stored on more than one server. By turning off backups, you lose the chance to retain all your data if a node dies.
    Second, since the place of a piece of data (the storage-enabled node which holds its primary copy) is distributed about evenly around the cluster, and not by the order of placing the entries into the cache, you would not be able to direct arbitrarily, record-by-record, which node holds the primary (without backups, the only) copy of your just-inserted data.
    Anyway, doing what you proposed (filling up caches one by one) reduces the advantages of load balancing and decreases performance otherwise, too, as there are a quite a few operations which are symmetrical in all storage-enabled nodes (e.g. queries, entry processing and aggregation, etc.), which all operate on local data in parallel (providing you use the proper edition of Coherence which supports parallel execution). Storing data on a lesser amount of nodes reduces the total processing power available to the parallel tasks, as some CPUs will not have any data to process, and the fewer rest will have to process all the data.
    As for replicated caches, it is outright impossible to fill up cache nodes one-by-one, as all nodes in a replicated cache service by their very nature store the same data related to that cache service.
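    A rough sketch of why insertion order cannot steer data onto one node: each key hashes to a partition, and partitions are dealt out among the storage nodes. The partition count and assignment rule here are made up for illustration, not Coherence's actual algorithm.

```python
# Illustrative sketch: keys hash to partitions, partitions are dealt out
# among storage nodes, so insertion order cannot pile data onto one node.
# Partition count and assignment rule are made up, not Coherence's actual
# algorithm.
PARTITIONS = 257
NODES = 2

def owner(key):
    partition = hash(key) % PARTITIONS   # key -> partition
    return partition % NODES             # partition -> storage node

counts = {0: 0, 1: 0}
for i in range(10_000):
    counts[owner(f"order-{i}")] += 1

print(counts)  # roughly 5000 entries per node, whatever the insert order
```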
    Just my 2 cents, of course.
    Best regards,
    Robert

  • N2000 weighted load balance question

    Hi --
    I have a question about how to use a weighted load balancing configuration to support a failover condition.
    My goal is to have an active and a standby configuration. This is not a web application, so it doesn't follow the same type of rules. In particular, I have a situation where there are multiple clients establishing long-term connections to a server. If the server goes down, I would like the LB to close the connections and when the clients try to reconnect it will route to the standby system.
    My question is: If I configure the 'weighting' for the active server to be 1 and for the standby server to be 0, will it always result in incoming connections being routed to the active server (and never the standby server)? If the active server is down, will it still route to the standby even though the weight is zero?
    Any thoughts/ideas/suggestions are greatly appreciated.


  • Oracle Portal Load Balancing question

    Our customer wishes to have multiple Oracle Application Servers running Oracle Portal, with load balancing. They would like to have Web Cache collocated with one of these Portal servers, operating as a load balancer for the Portal servers (including the one collocated with Web Cache). Our understanding is that this is not supported, since Web Cache must use the same port number as the destination server it is load balancing, and this is a problem with collocation. The alternative of using mod_oc4j load balancing by OHS is we understand not supported for Portal. The customer does not wish to use external, hardware load balancers, or operating system load balancing. We understand that the solution is to use Web Cache on a dedicated separate server, to load balance the Portal servers. Can you please confirm that, or is collocation possible to avoid the cost of the dedicated Web Cache server.
    thanks,
    Message was edited by:
    user582458

    <<
    We understand that the solution is to use Web Cache on a dedicated separate server, to load balance the Portal servers. Can you please confirm that, or is collocation possible to avoid the cost of the dedicated Web Cache server
    >>
    You are right... if the client does not want to use a load balancer, you have no other choice.

  • BIP and Siebel server - file system and load balancing question

    1. I just need to understand: whenever reports are generated through BIP, are the reports stored in some local directory (say, Reports) on the BIP server or in the Siebel File System? If on a file system, how will the archiving policy be implemented?
    2. When we talk of load balancing BIP Server. Can the common load balancer be used for BIP and Siebel servers?
    http://myforums.oracle.com/jive3/thread.jspa?threadID=335601

    Hi Sravanthi,
    Please check the below for finding the ITS and WAS parameters from the backend:
    For ITS - Go to SE37 >> Utilities >> Settings >> click the icon in the top-right corner of the popup window >> select Internet Transaction Server >> you will find the Standard Path and HTTP URL.
    For WAS - Go to SE37 >> run the function module RSBB_URL_PREFIX_GET >> execute it >> you will find the Prefix and Path parameters for WAS.
    This step-by-step guide may also help: How-to create a portal system for using it in Visual Composer.
    Hope it helps
    Regards
    Arun

  • Cisco CSS 11503 Arrowpoint/Load Balance question

    I am troubleshooting an issue with my 11503.  I am running version 07.40.0.04. I have it configured as follows:
      content upcadtoa-rule
        add service cadtoa-wls1-e0
        add service cadtoa-wls1-e1
        add service cadtoa-wls2-e0
        add service cadtoa-wls2-e1
        add service cadtoa-wls3-e0
        add service cadtoa-wls3-e1
        add service cadtoa-wls4-e0
        add service cadtoa-wls4-e1
        add service cadtoa-wls5-e0
        add service cadtoa-wls5-e1
        add service cadtoa-wls6-e0
        add service cadtoa-wls6-e1
        arrowpoint-cookie expiration 00:00:15:00
        protocol tcp
        port 8001
        advanced-balance arrowpoint-cookie
        redundant-index 2
        vip address 172.30.194.195 range 2
        arrowpoint-cookie name TOA
        active
    However, the load balancing across the servers does not seem to be doing much balancing. One of the servers is getting hit with five times as much traffic as another, and one server is lucky to get a connection at all. With the cookie expiration set, one would think that this would all balance out over time.
    I just came across this information from Cisco and I am wondering if it is relevant:
    If you configure a balance or advanced-balance method on a content rule that requires the TCP protocol for Layer 5 (L5) spoofing, you should configure a default URL string, such as url "/*". The addition of the URL string forces the content rule to become an L5 rule and ensures L5 load balancing or stickiness. If you do not configure a default URL string, unexpected results can occur.
    In the following configuration example, if you configure a Layer 3 (L3) content rule with an L5 balance method, the CSS performs L5 load balancing, but will reject UDP packets.
    content testing
    vip address 192.168.128.131
    add service s1
    balance url
    active
    The balance url method is an L5 load-balancing method in which the CSS must spoof the connection and examine the HTTP GET content request to perform load balancing. The CSS rejects UDP packets sent to this rule because a UDP connection cannot be L5. Though the CSS allows this rule configuration, its expected behavior would be clearer if you promoted the rule to L5 by configuring the url "/*" command.
    In the next example, if you configure an L3 content rule with an L5 advanced-balance method, L5 stickiness will not work as expected.
    content testing
    vip address 192.168.128.131
    add service s1
    advanced-balance arrowpoint-cookie
    active
    The advanced-balance arrowpoint-cookie method causes the CSS to spoof the connection, however, the CSS still marks it as an L3 rule. Thus, the CSS does not insert the generated cookie and the rule defaults to L3 stickiness (sticky-srcip). You must configure a URL like url "/*" to promote this rule to L5, ensuring that L5 stickiness works as expected.
    Thanks in advance for any help you can give.  The rule is not down; it is just balancing strangely, causing application performance issues.
    James

    Hey James,
    You will need to suspend the content rule in order to add the url statement.  This will cause a brief downtime until the content rule is activated again.  The commands to add the statement are shown below.  You could prepare your commands in a Notepad file, then paste them all in so they execute quickly and minimize your downtime:
      content MY-SITE
        vip address 10.201.130.140
        port 80
        protocol tcp
        add service MY-SERVER
        active
    CSS11503# config t
    CSS11503(config)# owner TEST
    CSS11503(config-owner[TEST])# content MY-SITE
    CSS11503(config-owner-content[TEST-MY-SITE])# url "/*"
    %% Attribute may not be modified on active rule
    CSS11503(config-owner-content[TEST-MY-SITE])# suspend
    CSS11503(config-owner-content[TEST-MY-SITE])# url "/*"
    CSS11503(config-owner-content[TEST-MY-SITE])# active
    CSS11503(config-owner-content[TEST-MY-SITE])# exit
    CSS11503(config-owner[TEST])# exit
    CSS11503(config)# exit
    CSS11503# show run
      content MY-SITE
        vip address 10.201.130.140
        add service MY-SERVER
        port 80
        protocol tcp
       url "/*"       <--------
        active
    Hope this helps,
    Sean

  • JMS cluster and distributed destination load balancing question

    Hi All
    Scenario: two WebLogic 7 servers in a cluster, with a distributed queue spanning both, and both servers have an MDB deployed for the queue. Now if a producer in server #1 writes to the queue, it will write to the local queue member - right?
    In that case, will the local MDB pick up the message, or can the consumption be load balanced? Or can the write itself be load balanced?
    I really want either the write or the read to be load balanced, but I suspect server affinity will get in the way. Can anyone please clarify?
    thanks
    Anamitra
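    In WebLogic, whether a send is pinned to the local member of a distributed destination is governed by the connection factory's load-balancing and server-affinity settings; with affinity enabled, producers favor the local member. A hedged sketch of the relevant connection factory attributes in config.xml is shown below. The attribute names follow the WebLogic 7 JMSConnectionFactory MBean, and the Name, JNDIName, and Targets values are made-up placeholders; verify both against your release's documentation:

    ```
    <!-- Illustrative config.xml fragment (not from the poster's domain):
         a JMS connection factory that load-balances sends across
         distributed destination members instead of pinning them to
         the local server. -->
    <JMSConnectionFactory
        Name="LoadBalancedCF"
        JNDIName="jms/LoadBalancedCF"
        LoadBalancingEnabled="true"
        ServerAffinityEnabled="false"
        Targets="myCluster"/>
    ```

    Note that even with sends load balanced, each MDB instance still consumes only from the member on its own server, so consumption is spread by distributing the messages, not by load balancing the reads themselves.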
              

