JMS cluster and distributed destination load balancing question
Hi All
Scenario: two WebLogic 7 servers in a cluster, with a distributed queue that has a member on each, and
both servers have an MDB deployed for the queue. Now if a producer on server
#1 writes to the queue - he will write to the local queue member - right?
In that case, will the local MDB pick up the message, or can the consumption be load balanced?
Or can the write itself be load balanced?
I really want either the write or the read to be load balanced - but I suspect
server affinity will make a mess here. Can anyone please clarify?
thanks
Anamitra
Similar Messages
-
Hi,guys
Suppose I start an empty cache first, then start another cache to load lots of data and put the data into the cache. If the two caches join the same cluster, they will load balance automatically. It is transparent to us.
My question is:
Can I put the data into the empty cache until it is full, then put the data into the other one?
Thanks,
Bin
Hi Bin,
For partitioned caches (distributed and near-cache topologies), if you have multiple storage-enabled cache nodes within the same partitioned cache service, each of them will serve as the primary node for a share of the data. The identity of the cache node which is primary for a particular data entry depends on the key of the entry and the key association algorithm chosen (see the Wiki and the forum posts about this). With the default algorithm, the amount of data stored by each node as primary is about equal (depending on the distribution of the keys' hashCode values).
Also, if you have a backup count greater than zero, each node will also hold a backup copy of data for which other nodes are the primary nodes. The amount of backup data in a node is roughly the amount of primary data multiplied by the configured backup-count (which can be set in the cache configuration and is 1 by default).
So it is almost impossible to achieve a distribution of data in which you fill up the memory on one cache node before starting to consume memory on another.
First of all, you would have to turn off backups, otherwise each added entry is stored on more than one server. By turning off backups, you lose the chance to retain all your data if a node dies.
Second, since the place of a data entry (the storage-enabled node which holds its primary copy) is distributed roughly evenly around the cluster, and not by the order of placing the entries into the cache, you would not be able to direct record-by-record which node holds the primary (and without backups, the only) copy of your just-inserted data.
Anyway, doing what you proposed (filling up caches one by one) reduces the advantages of load balancing and decreases performance in other ways, too, as there are quite a few operations which are symmetrical across all storage-enabled nodes (e.g. queries, entry processing and aggregation, etc.), all of which operate on local data in parallel (provided you use an edition of Coherence which supports parallel execution). Storing data on a smaller number of nodes reduces the total processing power available to the parallel tasks, as some CPUs will have no data to process, and the fewer remaining nodes will have to process all the data.
As for replicated caches, it is outright impossible to fill up cache nodes one-by-one, as all nodes in a replicated cache service by their very nature store the same data related to that cache service.
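The key-driven placement described above for partitioned caches can be illustrated with a small sketch. This is not Coherence's actual partition-assignment code; the class name, the partition count, and the simple modulo mapping are all assumptions for illustration - the point is only that placement follows the key's hash, not insertion order:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyPartitioning {
    // Map a key to one of N partitions by its hash code (illustrative only;
    // Coherence's real key association/partition assignment is more involved).
    static int partitionFor(Object key, int partitionCount) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        int partitions = 4; // assume 4 storage-enabled nodes for the example
        Map<Integer, Integer> counts = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            counts.merge(partitionFor("key-" + i, partitions), 1, Integer::sum);
        }
        // With a reasonable hashCode, each node ends up owning roughly a
        // quarter of the entries, regardless of the order they were inserted.
        System.out.println(counts);
    }
}
```

Because the owning node is a pure function of the key, no insertion order can steer all entries to one node first.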
Just my 2 cents, of course.
Best regards,
Robert -
3rd party distributed SW load balancing with In-Memory Replication
Hi,
Could someone please comment on the feasibility of the following setup?
I've started testing replication with a software load balancing product. This
product lets all nodes receive all packets and uses a kernel-level filter
to let only one node at a time receive it. Since there is at least one heartbeat link
between the nodes, there are several NICs in each node.
At the moment it seems like it doesn't work:
- I use the SessionServlet
- with a 2-node cluster I first have the 2 nodes up and I access it with a single client: the LB is configured to be sticky wrt. source IP address, so the same node gets all the traffic
- when I stop the node receiving the traffic, the other node takes over (I changed the colours of SessionServlet); however, the counter restarts at zero
From what I read of the in-memory replication documentation I thought that it
might work also with a distributed software load balancing cluster. Any comments
on the feasibility of this?
Is there a way to debug replication (in WLS6SP1)? I don't see any replication
messages in the logs, so I'm not even sure that it works at all. - I do get a
message about "Clustering Services starting" when I start the examples server
on each node - is there anything to look for in the console to make sure that
things are working? - the evaluation license for WLS6SP1 on NT seems to support
In-Memory Replication and Cluster. However, I've also seen a Cluster-II somewhere:
is that needed?
Thanks for your attention!
Regards, Frank Olsen
We are considering Resonate as one of the software load balancers. We haven't certified
them yet. I have no idea how long it's going to take.
As a base rule if the SWLB can do the load balancing and maintain stickyness that is fine
with us as long as it doesn't modify the cookie or the URL if URL rewriting is enabled.
Having said that if you run into problems we won't be able to support you since it is not
certified.
-- Prasad
Frank Olsen wrote:
> Prasad Peddada <[email protected]> wrote:
> >Frank Olsen wrote:
> >
> >> Hi,
> >>
> > We don't support any 3rd party software load balancers.
>
> Does that mean that there are technical reasones why it won't work, or just that
> you haven't tested it?
>
> > As I said before, I am thinking your configuration is incorrect if in-memory
> > replication is not working. I would strongly suggest you look at the webapp
> > deployment descriptor and then the config.xml file.
>
> OK.
>
> > Also, doing sticky based on source IP address is not good. You should do it based
> > on passive cookie persistence or active cookie persistence (with cookie insert,
> > a new one).
> >
>
> I agree that various source-based sticky options (IP, port; network) are not the
> best solution. In our current implementation we can't do this because the SW load
> balancer is based on filtering IP packets on the driver level.
>
> Currently I'm more interested in understanding whether our SW load balancer
> can work with your replication at all.
>
> What makes me think that it could work is that in WLS 6.0 a session that fails
> over to any cluster node can recover the replicated session.
>
> Can there be a problem with the cookies?
> - are the P/S (primary/secondary) for replication put in the cookie by the node itself or by the proxy/HW
> load balancer?
>
> >
> >The options are -Dweblogic.debug.DebugReplication=true and
> >-Dweblogic.debug.DebugReplicationDetails=true
> >
>
> Great, thanks!
>
> Regards,
> Frank Olsen
-
Hi all,
I have a service object (SO1) which has been set to Load Balancing.
This service object has an attribute which serves as a number allocator
(NA1).
This NA1 provides a unique number across the whole application for each of
the records that need to be stored into the DB.
The problem is, will the NA1 get replicated if the SO1 is replicated?
If yes, will NA1 crash?
Regards,
Martin Chan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Senior Analyst/Programmer
Dept of Education and Training
Mobile : 0413-996-116
Email: martin.chan@det.nsw.edu.au
Tel: 02-9942-9685
Hi Serge,
> Could you prefix it with the PID of the load balanced process ?
No I can't. At least not at the moment.
> When a service object is replicated, it is automatically replicated into a
> different partition...
Thanks.
> An advice, make the NA1 shared. So if you get to do multithreaded access to
> it, you won't screw up things.
I am thinking it may be better off to create it as a service object on its own.
> How is the number returned by the NA1 generated ?
It gets generated by Forte's code.
> ... Try to make it so that the load balanced partitions don't need to access
> the database more than once in 5 min. to get a new Seed Key. This would not
> need the PID.
Thanks for your advice.
Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 14:17
To: Chan, Martin
Subject: RE: (forte-users) SO Load Balancing Question
You're right, they can generate the same number. How much control do you have
over the ID being generated? Could you prefix it with the PID of the load
balanced process ?
Just a note: When a service object is replicated, it is automatically
replicated into a different partition, possibly on the same machine or on a
different one.
An advice, make the NA1 shared. So if you get to do multithreaded access to
it, you won't screw up things.
How is the number returned by the NA1 generated ? If NA1 is using a stored
procedure, or something like:
Start TRX
read number
newnumber = number+5000
write back newnumber
End Trx
Something like this will be very safe. The database index table takes care
of the critical section. Then you can be sure that each replicate can be
independent (not hitting into each other) for 5000 iterations. Depending on the
frequency, you may want to raise or lower this number. Too high and it
would make the keys very large very soon, with holes in the sequence. Too low
and you would have collisions between the replicates. Try to make it so that the
load-balanced partitions don't need to access the database more than once in
5 min. to get a new seed key. This would not need the PID.
Serge
At 01:59 PM 4/3/2001 +1000, you wrote:
Hi Serge,
The number returned by the NA1 is used as a primary key for each of the records
stored in the DB.
The number allocator NA1 is required to access the DB to update an ID table
which carries the next available sequence number. NA1 will only update this
table for every 5000 records.
For example, the initial value of the sequence is: 1
The next update will change the value to 5001, the next will be 10001, and so on.
>
The properties of this NA1 class at runtime
Shared - Disallowed
Distributed - Disallowed
Transactional - Is Default
Monitored - Disallowed
Unfortunately, this attribute is not a handle but is instantiated by the SO1
itself.
I have been thinking: if SO1 is replicated within the same partition, then
each replicate will carry its own NA1. NA1 and the replicate of NA1 may
return the same number if their initial values of the sequence are the same.
Correct?
Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 13:11
To: Chan, Martin; forte-users@lists.xpedior.com
Subject: Re: (forte-users) SO Load Balancing Question
Let's see if I understand right.
You have a service object that keeps a handle to an object that either keeps
state information or generates state information. Now the thing to
figure out is which it is. Let's assume that NA1 is a number generator
that does not need to be synchronized and does not need to access any
external resource. It would still work, depending on the algorithm you are
using.
Will they share the same NA1? It depends on the nature of NA1, but for sure
NA1 would have to be an anchored object. And if multiple partitions were to
share the same object "only" for key generation, you would bring down your
performance on key generation or key update (by adding one inter-process
call).
In short:
1. Many scenarios can happen; you need to be clearer in your description.
2. If you share an object between load-balanced partitions, this greatly
reduces the gain of load balancing the partition.
3. If NA1 is keeping state, any access to it would need to be controlled as
"shared".
Have fun now...
Serge
At 12:30 PM 4/3/2001 +1000, Chan, Martin wrote:
Hi all,
I have a service object (SO1) which has been set to Load Balancing.
This service object has an attribute which serves as a number allocator
(NA1).
This NA1 provides a unique number across the whole application for each
of
the record that require to store into DB.
The problem is, will the NA1 get replicated if the SO1 is replicated?
If yes, will NA1 crash?
Regards,
Martin Chan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Senior Analyst/Programmer
Dept of Education and Training
Mobile : 0413-996-116
Email: martin.chan@det.nsw.edu.au
Tel: 02-9942-9685
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: forte-users-request@lists.xpedior.com
Serge Blais
Professional Services Engineer
iPlanet Expertise Center
Sun Professional Services
Cell : (514) 234-4110
Serge.Blais@Sun.com -
JMS Failover with Distributed Destinations in 7.0
How does JMS failover with distributed destinations in WL 7.0?
In an environment using file stores for persistent messages, can a working server
automatically pick up unprocessed and persisted messages from a failed server?
If so, what's the best way to set this up?
Or, is this completely manual? In other words, we have to bring up a new server
pointing to the location of the file store from the failed server?
It appears that two JMSServers cannot share the same file store and, I'm assuming,
two file stores cannot use the same directory for persistence.
So the HA you're talking about is something like Veritas automatically restarting
a server (or starting a new one) to process the messages in the persistent queue
that were unprocessed at the time of failure with the file store residing on some
sort of HA disk array.
The key point is that a message once it arrives at a server must be processed
by that server or, in the case of failure of that server, must be processed by
a server similarly configured to the one that failed so that it picks up the unprocessed
messages. The message can't be processed by another server in the cluster.
Or, is there some trick that could be employed to copy from the file store of
the failed server and repost the messages to the still operating servers?
"Zach" <[email protected]> wrote:
>Unless you have some sort of HA framework/hardware, this is a manual
>operation. You either point to the existing persistent storage (shared
>storage or JDBC connection pool), or you move the physical data.
>
>_sjz.
>
>"Jim Cross" <[email protected]> wrote in message
>news:[email protected]...
>>
>>
>> How does JMS failover with distributed destinations in WL 7.0?
>>
>> In an environment using file stores for persistent messages, can a
>working
>server
>> automatically pick up unprocessed and persisted messages from a failed
>server?
>> If so, what's the best way to set this up?
>>
>> Or, is this completely manual? In other words, we have to bring up
>a new
>server
>> pointing to the location of the file store from the failed server?
>
>
-
Pix OSPF load balancing question
I have a pix 515e with two default routes, learned via OSPF from two routers on the "outside" interface.
Currently router #2 is being preferred far more than router #1. There are many thousands of destinations for the traffic. These two routers are also doing NAT, translating RFC 1918 IPs to the internet (the PIX is NOT doing NAT).
Can someone please let me know how the PIX does load balancing? Is it by destination IP address? Is it something else?
thanks,
Joe
Per TAC:
"the PIX will do per-destination Load Balancing instead of per packet
load balancing. The algorithm will look at the source and destination
addresses. It does not do 1:1 load balancing. Given enough different
source and destination addresses, the packets will more or less reach a
50/50 split between the two next-hops. However, in real world testing
with the same source and destination addresses, it may not reach an even
load balancing." -
How does CEF perform equal and unequal cost load balancing?
hello
How does CEF perform equal and unequal cost load balancing?
thanks
Hello Wang,
it is only EIGRP that can perform load balancing over unequal-cost links.
For equal-cost links, CEF allocates 16 buckets and maps them to the physical links.
The result of a binary operation is used to associate a packet with an outgoing interface:
Source IP address XOR Destination IP address XOR hash
The hash is a seed that changes only at every reload.
Actually, the last 4 bits are used, so that each flow is classified into one bucket.
Then the outgoing interface is the one associated with the result of the XOR operation.
Another way to see it is that m bits are used so that 2^m equals N, the number of links (when N is a power of two).
The rule is simple and pre-established.
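The bucket selection Giuseppe describes can be sketched as follows. This is an illustrative model only (real CEF hashing is platform- and version-dependent), and the class and method names are made up for the example:

```java
public class CefHashSketch {
    // Pick a bucket for a flow: XOR source and destination addresses with a
    // per-reload seed and keep the low 4 bits, giving 16 buckets.
    static int bucketFor(int srcIp, int dstIp, int seed) {
        return (srcIp ^ dstIp ^ seed) & 0xF; // last 4 bits -> bucket 0..15
    }

    // Buckets are shared among the equal-cost links (modulo here is a
    // simplification of the bucket-to-interface mapping).
    static int linkFor(int srcIp, int dstIp, int seed, int nLinks) {
        return bucketFor(srcIp, dstIp, seed) % nLinks;
    }

    public static void main(String[] args) {
        int seed = 0x5A5A5A5A; // fixed until the next reload
        // The same src/dst pair always hashes to the same link
        // (per-destination balancing, not per-packet):
        System.out.println(linkFor(0x0A000001, 0xC0A80001, seed, 2));
        System.out.println(linkFor(0x0A000001, 0xC0A80001, seed, 2));
    }
}
```

This also shows why a single heavy src/dst pair cannot be split across links: its bucket, and hence its interface, is fixed until the seed changes at reload.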
Hope to help
Giuseppe -
When trying to download apps on my new iPad I keep getting prompted to update my security questions for my safety. When I choose this option it freezes and won't load the questions; however, when I hit "Not Now" it won't let me download. What do I do?
Reboot your iPad and then see if you can set the security questions.
Reboot the iPad by holding down on the sleep and home buttons at the same time for about 10-15 seconds until the Apple Logo appears - ignore the red slider - let go of the buttons. -
Cisco CSS 11503 Arrowpoint/Load Balance question
I am troubleshooting an issue with my 11503. I am running version 07.40.0.04. I have it configured as follows:
content upcadtoa-rule
add service cadtoa-wls1-e0
add service cadtoa-wls1-e1
add service cadtoa-wls2-e0
add service cadtoa-wls2-e1
add service cadtoa-wls3-e0
add service cadtoa-wls3-e1
add service cadtoa-wls4-e0
add service cadtoa-wls4-e1
add service cadtoa-wls5-e0
add service cadtoa-wls5-e1
add service cadtoa-wls6-e0
add service cadtoa-wls6-e1
arrowpoint-cookie expiration 00:00:15:00
protocol tcp
port 8001
advanced-balance arrowpoint-cookie
redundant-index 2
vip address 172.30.194.195 range 2
arrowpoint-cookie name TOA
active
However, the load balancing across the servers does not seem to be doing much balancing. One of the servers is getting hit with five times as much traffic as another, and another server is lucky to get a connection at all. With the cookie expiration set, one would think that this would all balance out over time.
I just came across this information from Cisco and I am wondering if it is relevant:
If you configure a balance or advanced-balance method on a content rule that requires the TCP protocol for Layer 5 (L5) spoofing, you should configure a default URL string, such as url "/*". The addition of the URL string forces the content rule to become an L5 rule and ensures L5 load balancing or stickiness. If you do not configure a default URL string, unexpected results can occur.
In the following configuration example, if you configure a Layer 3 (L3) content rule with an L5 balance method, the CSS performs L5 load balancing, but will reject UDP packets.
content testing
vip address 192.168.128.131
add service s1
balance url
active
The balance url method is an L5 load-balancing method in which the CSS must spoof the connection and examine the HTTP GET content request to perform load balancing. The CSS rejects the UDP packet sent to this rule because a UDP connection cannot be L5. Though the CSS allows this rule configuration, its expected behavior would be more clear if you promote the rule to L5 by configuring the url "/*" command.
In the next example, if you configure an L3 content rule with an L5 advanced-balance method, L5 stickiness will not work as expected.
content testing
vip address 192.168.128.131
add service s1
advanced-balance arrowpoint-cookie
active
The advanced-balance arrowpoint-cookie method causes the CSS to spoof the connection, however, the CSS still marks it as an L3 rule. Thus, the CSS does not insert the generated cookie and the rule defaults to L3 stickiness (sticky-srcip). You must configure a URL like url "/*" to promote this rule to L5, ensuring that L5 stickiness works as expected.
Thanks in advance for any help you can give. The thing is not down; it is just balancing strangely, causing application performance issues.
James
Hey James,
You will need to suspend the content rule in order to add the url statement. This will cause a quick downtime until the content rule is activated again. I have shown below the commands to add the statement. Perhaps you can create your commands in a Notepad file, then paste them all in so they execute quickly to minimize your downtime:
content MY-SITE
vip address 10.201.130.140
port 80
protocol tcp
add service MY-SERVER
active
CSS11503# config t
CSS11503(config)# owner TEST
CSS11503(config-owner[TEST])# content MY-SITE
CSS11503(config-owner-content[TEST-MY-SITE])# url "/*"
%% Attribute may not be modified on active rule
CSS11503(config-owner-content[TEST-MY-SITE])# suspend
CSS11503(config-owner-content[TEST-MY-SITE])# url "/*"
CSS11503(config-owner-content[TEST-MY-SITE])# active
CSS11503(config-owner-content[TEST-MY-SITE])# exit
CSS11503(config-owner[TEST])# exit
CSS11503(config)# exit
CSS11503# show run
content MY-SITE
vip address 10.201.130.140
add service MY-SERVER
port 80
protocol tcp
url "/*" <--------
active
Hope this helps,
Sean -
Client side load balancing and server side load balancing
Hello Team,
I need to know how to set up client-side and server-side load balancing in Oracle RAC. What all needs to be implemented, like creating a service, tnsnames.ora settings, etc.?
Also, if I use the SCAN IP instead of VIPs, how will the settings change?
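As a minimal sketch of the client-side piece (the service name, host names, and port here are placeholders, not from this thread), a load-balancing tnsnames.ora alias over the node VIPs, and its SCAN equivalent, might look like:

```text
MYSVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)   # client-side: pick an address at random
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = mysvc))
  )

# With SCAN (11gR2+), one name replaces the per-node VIP list; the SCAN
# listeners then perform server-side load balancing, directing the
# connection to the instance offering the service:
MYSVC_SCAN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mysvc))
  )
```

Server-side load balancing itself is driven by the service definition and the remote listener registration rather than by the client file, as the whitepaper linked below describes.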
Regards,
Hi,
please find here an Whitepaper with the information
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr2-client-failover-173305.pdf
kind regards -
Hello,
When performing settings on a JMS connection factory, one can check the "Load Balancing Enabled" option (in the Configuration tab, Load Balance sub-tab).
In the help documentation, we can read:
Specifies whether non-anonymous producers created through a connection factory are load balanced within a distributed destination on a per-call basis.
If enabled, the associated message producers are load balanced on every send() or publish().
I have performed some tests and I don't see the expected behaviour, that is to say load balancing on each send or publish call.
So first, what does "non-anonymous producers" mean? Does that mean we have to create the JMS connection with username/password arguments? If so, I have used the same credentials as the ones used for the admin console, and again I don't see load balancing across the physical queues belonging to one distributed queue!
Could you please give me advice on how to get load balancing working per send or publish call?
Best Regards.
Hello,
The content of the config.xml:
<?xml version='1.0' encoding='UTF-8'?>
<domain xmlns="http://xmlns.oracle.com/weblogic/domain" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/security/xacml http://xmlns.oracle.com/weblogic/security/xacml/1.0/xacml.xsd http://xmlns.oracle.com/weblogic/security/providers/passwordvalidator http://xmlns.oracle.com/weblogic/security/providers/passwordvalidator/1.0/passwordvalidator.xsd http://xmlns.oracle.com/weblogic/domain http://xmlns.oracle.com/weblogic/1.0/domain.xsd http://xmlns.oracle.com/weblogic/security http://xmlns.oracle.com/weblogic/1.0/security.xsd http://xmlns.oracle.com/weblogic/security/wls http://xmlns.oracle.com/weblogic/security/wls/1.0/wls.xsd http://www.bea.com/ns/weblogic/90/security/extension http://xmlns.oracle.com/weblogic/1.0/security.xsd">
<name>FRANCOISdomain</name>
<domain-version>10.3.2.0</domain-version>
<security-configuration>
<name>FRANCOISdomain</name>
<realm>
<sec:authentication-provider xsi:type="wls:default-authenticatorType">
<sec:control-flag>OPTIONAL</sec:control-flag>
</sec:authentication-provider>
<sec:authentication-provider xsi:type="wls:default-identity-asserterType">
<sec:active-type>AuthenticatedUser</sec:active-type>
</sec:authentication-provider>
<sec:authentication-provider xmlns:ext="http://www.bea.com/ns/weblogic/90/security/extension" xsi:type="ext:agent-authenticatorType">
<n1:name xmlns:n1="http://www.bea.com/ns/weblogic/90/security">OpenAMProvider</n1:name>
<n2:control-flag xmlns:n2="http://www.bea.com/ns/weblogic/90/security">OPTIONAL</n2:control-flag>
</sec:authentication-provider>
<sec:role-mapper xmlns:xac="http://xmlns.oracle.com/weblogic/security/xacml" xsi:type="xac:xacml-role-mapperType"></sec:role-mapper>
<sec:authorizer xmlns:xac="http://xmlns.oracle.com/weblogic/security/xacml" xsi:type="xac:xacml-authorizerType"></sec:authorizer>
<sec:adjudicator xsi:type="wls:default-adjudicatorType"></sec:adjudicator>
<sec:credential-mapper xsi:type="wls:default-credential-mapperType"></sec:credential-mapper>
<sec:cert-path-provider xsi:type="wls:web-logic-cert-path-providerType"></sec:cert-path-provider>
<sec:cert-path-builder>WebLogicCertPathProvider</sec:cert-path-builder>
<sec:name>myrealm</sec:name>
<sec:password-validator xmlns:pas="http://xmlns.oracle.com/weblogic/security/providers/passwordvalidator" xsi:type="pas:system-password-validatorType">
<sec:name>SystemPasswordValidator</sec:name>
<pas:min-password-length>8</pas:min-password-length>
<pas:min-numeric-or-special-characters>1</pas:min-numeric-or-special-characters>
</sec:password-validator>
</realm>
<default-realm>myrealm</default-realm>
<credential-encrypted>{AES}mq1iuVKohqULL/lwkqBF0PCxYeSXcHavSgc2TO4mKEWr81KYRukVzT/6Icj2576UhryaX5E/RzUKDJUZrEWAshpbE9B023NHogEtz7K0XQhToHxukFCiBy5I5mM8XpN4</credential-encrypted>
<node-manager-username>myusername</node-manager-username>
<node-manager-password-encrypted>{AES}r3SsMwpQiaNUYrGsTljMgyB9i4A0TELOfOni+RxRP/0=</node-manager-password-encrypted>
</security-configuration>
<jta>
<timeout-seconds>120</timeout-seconds>
</jta>
<log>
<file-name>logs/FRANCOISdomain.log</file-name>
<rotation-type>bySize</rotation-type>
<number-of-files-limited>true</number-of-files-limited>
<file-count>7</file-count>
<file-min-size>20480</file-min-size>
<rotate-log-on-startup>true</rotate-log-on-startup>
<log4j-logging-enabled>false</log4j-logging-enabled>
</log>
<snmp-agent-deployment>
<name>ServerSNMPAgent-0</name>
<enabled>true</enabled>
<send-automatic-traps-enabled>true</send-automatic-traps-enabled>
<snmp-port>1610</snmp-port>
<snmp-trap-version>1</snmp-trap-version>
<community-prefix>public</community-prefix>
<community-based-access-enabled>true</community-based-access-enabled>
<snmp-engine-id>ServerSNMPAgent-0</snmp-engine-id>
<authentication-protocol>noAuth</authentication-protocol>
<privacy-protocol>noPriv</privacy-protocol>
<inform-retry-interval>10000</inform-retry-interval>
<max-inform-retry-count>1</max-inform-retry-count>
<localized-key-cache-invalidation-interval>3600000</localized-key-cache-invalidation-interval>
<snmp-access-for-user-m-beans-enabled>true</snmp-access-for-user-m-beans-enabled>
<inform-enabled>false</inform-enabled>
<master-agent-x-port>7050</master-agent-x-port>
<target>AdminServer</target>
</snmp-agent-deployment>
<server>
<name>AdminServer</name>
<log>
<name>AdminServer</name>
<file-name>logs/AdminServer__%yyyy%_%MM%_%dd%_%hh%_%mm%.log</file-name>
<rotation-type>bySize</rotation-type>
<file-min-size>20480</file-min-size>
<logger-severity>Info</logger-severity>
<log-file-severity>Notice</log-file-severity>
<stdout-severity>Notice</stdout-severity>
<domain-log-broadcast-severity>Notice</domain-log-broadcast-severity>
<memory-buffer-severity>Trace</memory-buffer-severity>
</log>
<listen-port>20001</listen-port>
<iiop-enabled>true</iiop-enabled>
<default-iiop-user>iiopuser</default-iiop-user>
<default-iiop-password-encrypted>{AES}v2+TWtuxeDCyJ5ztyFko4t3ISkqKnlXEGK350FHvCXM=</default-iiop-password-encrypted>
<listen-address>10.10.166.103</listen-address>
</server>
<server>
<name>managed1</name>
<reverse-dns-allowed>false</reverse-dns-allowed>
<native-io-enabled>true</native-io-enabled>
<thread-pool-percent-socket-readers>33</thread-pool-percent-socket-readers>
<max-message-size>10000000</max-message-size>
<max-http-message-size>-1</max-http-message-size>
<complete-message-timeout>60</complete-message-timeout>
<idle-connection-timeout>65</idle-connection-timeout>
<period-length>60000</period-length>
<idle-periods-until-timeout>4</idle-periods-until-timeout>
<dgc-idle-periods-until-timeout>5</dgc-idle-periods-until-timeout>
<ssl>
<enabled>true</enabled>
<hostname-verifier xsi:nil="true"></hostname-verifier>
<hostname-verification-ignored>false</hostname-verification-ignored>
<export-key-lifespan>500</export-key-lifespan>
<client-certificate-enforced>false</client-certificate-enforced>
<listen-port>20012</listen-port>
<two-way-ssl-enabled>false</two-way-ssl-enabled>
<server-private-key-alias>myhost.mycompany.com</server-private-key-alias>
<server-private-key-pass-phrase-encrypted>{AES}haHJwbqbttygoo71Dyb3dQck2VsEd1woFGijvFXM0sA=</server-private-key-pass-phrase-encrypted>
<ssl-rejection-logging-enabled>true</ssl-rejection-logging-enabled>
<inbound-certificate-validation>BuiltinSSLValidationOnly</inbound-certificate-validation>
<outbound-certificate-validation>BuiltinSSLValidationOnly</outbound-certificate-validation>
<allow-unencrypted-null-cipher>false</allow-unencrypted-null-cipher>
<use-server-certs>false</use-server-certs>
</ssl>
<log>
<file-name>logs/managed1_%yyyy%_%MM%_%dd%_%hh%_%mm%.log</file-name>
<rotation-type>bySize</rotation-type>
<number-of-files-limited>true</number-of-files-limited>
<file-count>7</file-count>
<rotation-time>00:00</rotation-time>
<file-min-size>20480</file-min-size>
<rotate-log-on-startup>true</rotate-log-on-startup>
<logger-severity>Debug</logger-severity>
<logger-severity-properties>com.iplanet=Debug;test.ejb=Debug;com.sun.indentity=Debug;org.apache.http=Debug;test.servlet=Debug</logger-severity-properties>
<log-file-severity>Debug</log-file-severity>
<stdout-severity>Debug</stdout-severity>
<domain-log-broadcast-severity>Debug</domain-log-broadcast-severity>
<domain-log-broadcast-filter xsi:nil="true"></domain-log-broadcast-filter>
<memory-buffer-severity>Debug</memory-buffer-severity>
<memory-buffer-filter xsi:nil="true"></memory-buffer-filter>
<log4j-logging-enabled>true</log4j-logging-enabled>
<redirect-stdout-to-server-log-enabled>false</redirect-stdout-to-server-log-enabled>
<domain-log-broadcaster-buffer-size>50</domain-log-broadcaster-buffer-size>
</log>
<max-open-sock-count>-1</max-open-sock-count>
<stuck-thread-max-time>600</stuck-thread-max-time>
<stuck-thread-timer-interval>60</stuck-thread-timer-interval>
<machine>FRANCOIS_Machine1</machine>
<listen-port>20011</listen-port>
<listen-port-enabled>true</listen-port-enabled>
<cluster>FRANCOIS_cluster</cluster>
<web-server>
<web-server-log>
<number-of-files-limited>false</number-of-files-limited>
</web-server-log>
<frontend-http-port>0</frontend-http-port>
<frontend-https-port>0</frontend-https-port>
<keep-alive-enabled>true</keep-alive-enabled>
<keep-alive-secs>30</keep-alive-secs>
<https-keep-alive-secs>60</https-keep-alive-secs>
<post-timeout-secs>30</post-timeout-secs>
<max-post-size>-1</max-post-size>
<send-server-header-enabled>false</send-server-header-enabled>
<wap-enabled>false</wap-enabled>
<accept-context-path-in-get-real-path>false</accept-context-path-in-get-real-path>
</web-server>
<server-debug>
<debug-scope>
<name>weblogic.security</name>
<enabled>false</enabled>
</debug-scope>
<debug-scope>
<name>weblogic.servlet</name>
<enabled>false</enabled>
</debug-scope>
<debug-scope>
<name>default</name>
<enabled>false</enabled>
</debug-scope>
<debug-scope>
<name>weblogic</name>
<enabled>false</enabled>
</debug-scope>
</server-debug>
<listen-address>host.mycompany.com</listen-address>
<accept-backlog>300</accept-backlog>
<login-timeout-millis>5000</login-timeout-millis>
<java-compiler>javac</java-compiler>
<tunneling-enabled>true</tunneling-enabled>
<tunneling-client-ping-secs>45</tunneling-client-ping-secs>
<tunneling-client-timeout-secs>40</tunneling-client-timeout-secs>
<server-start>
<java-vendor>Sun</java-vendor>
<java-home>/opt/32bit/jdk1.6.0_18</java-home>
<class-path>${CLASSPATH}:/opt/32bit/jdk1.6.0_18/lib/tools.jar:/product/DSL60/wlserver_10.3/server/lib/weblogic_sp.jar:/product/DSL60/wlserver_10.3/server/lib/weblogic.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/lib/agent.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/lib/openssoclientsdk.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/locale:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/Agent_002/config</class-path>
<bea-home>/product/DSL60</bea-home>
<root-directory>/product/DSL60/wls/domain/FRANCOISdomain</root-directory>
<security-policy-file>/product/DSL60/wlserver_10.3/server/lib/weblogic.policy</security-policy-file>
<arguments>-Dname=WL1_MYCOMPANY_PID -Dlog4j.configuration=file:///product/DSL60/wls/domain/FRANCOISdomain/lib/log4j.xml -Declipselink.register.run.mbean=true -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -d32 -Doracle.net.tns.admin=/opt/oracle/11.2.0/network/admin/tnsname.ora -Djava.util.logging.config.file=/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/config/OpenSSOAgentLogConfig.properties -DLOG_COMPATMODE=Off</arguments>
<username>myusername</username>
<password-encrypted>{AES}+o7kEIuvUEC1C4IoVveulxKTyN3upgWDglcqqgOEwt4=</password-encrypted>
</server-start>
<jta-migratable-target>
<user-preferred-server>managed1</user-preferred-server>
<cluster>FRANCOIS_cluster</cluster>
</jta-migratable-target>
<low-memory-time-interval>3600</low-memory-time-interval>
<low-memory-sample-size>10</low-memory-sample-size>
<low-memory-granularity-level>5</low-memory-granularity-level>
<low-memory-gc-threshold>5</low-memory-gc-threshold>
<auto-kill-if-failed>true</auto-kill-if-failed>
<health-check-interval-seconds>30</health-check-interval-seconds>
<managed-server-independence-enabled>true</managed-server-independence-enabled>
<client-cert-proxy-enabled>false</client-cert-proxy-enabled>
<key-stores>CustomIdentityAndCustomTrust</key-stores>
<custom-identity-key-store-file-name>/product/FILES/PAF/cert/opensso.jks</custom-identity-key-store-file-name>
<custom-identity-key-store-type>JKS</custom-identity-key-store-type>
<custom-identity-key-store-pass-phrase-encrypted>{AES}yg0Tx8tcfZsqM2sYbfTPEDl7ceN5X5zUEALaBM58wS8=</custom-identity-key-store-pass-phrase-encrypted>
<custom-trust-key-store-file-name>/product/FILES/PAF/cert/opensso.jks</custom-trust-key-store-file-name>
<custom-trust-key-store-type>JKS</custom-trust-key-store-type>
<custom-trust-key-store-pass-phrase-encrypted>{AES}8Ghgu1RUTF7st3f69sZKdb6vTfWiFvk1g+CUi63utBA=</custom-trust-key-store-pass-phrase-encrypted>
<overload-protection>
<shared-capacity-for-work-managers>1111</shared-capacity-for-work-managers>
<panic-action>system-exit</panic-action>
<failure-action>no-action</failure-action>
<free-memory-percent-high-threshold>0</free-memory-percent-high-threshold>
<free-memory-percent-low-threshold>0</free-memory-percent-low-threshold>
</overload-protection>
</server>
<server>
<name>managed2</name>
<reverse-dns-allowed>false</reverse-dns-allowed>
<native-io-enabled>true</native-io-enabled>
<thread-pool-percent-socket-readers>33</thread-pool-percent-socket-readers>
<max-message-size>10000000</max-message-size>
<complete-message-timeout>60</complete-message-timeout>
<idle-connection-timeout>65</idle-connection-timeout>
<period-length>60000</period-length>
<idle-periods-until-timeout>4</idle-periods-until-timeout>
<dgc-idle-periods-until-timeout>5</dgc-idle-periods-until-timeout>
<log>
<file-name>logs/managed2_%yyyy%_%MM%_%dd%_%hh%_%mm%.log</file-name>
<rotation-type>bySize</rotation-type>
<number-of-files-limited>true</number-of-files-limited>
<file-count>7</file-count>
<rotation-time>00:00</rotation-time>
<file-min-size>20480</file-min-size>
<rotate-log-on-startup>true</rotate-log-on-startup>
<logger-severity>Debug</logger-severity>
<logger-severity-properties>org.apache.http=Error</logger-severity-properties>
<log-file-severity>Debug</log-file-severity>
<stdout-severity>Debug</stdout-severity>
<domain-log-broadcast-severity>Debug</domain-log-broadcast-severity>
<domain-log-broadcast-filter xsi:nil="true"></domain-log-broadcast-filter>
<memory-buffer-severity>Debug</memory-buffer-severity>
<memory-buffer-filter xsi:nil="true"></memory-buffer-filter>
<log4j-logging-enabled>true</log4j-logging-enabled>
<redirect-stdout-to-server-log-enabled>false</redirect-stdout-to-server-log-enabled>
<domain-log-broadcaster-buffer-size>50</domain-log-broadcaster-buffer-size>
</log>
<max-open-sock-count>-1</max-open-sock-count>
<stuck-thread-max-time>600</stuck-thread-max-time>
<stuck-thread-timer-interval>60</stuck-thread-timer-interval>
<machine>FRANCOIS_Machine1</machine>
<listen-port>20021</listen-port>
<cluster>FRANCOIS_cluster</cluster>
<web-server>
<web-server-log>
<number-of-files-limited>false</number-of-files-limited>
</web-server-log>
</web-server>
<listen-address>10.10.166.103</listen-address>
<accept-backlog>300</accept-backlog>
<login-timeout-millis>5000</login-timeout-millis>
<tunneling-enabled>true</tunneling-enabled>
<tunneling-client-ping-secs>45</tunneling-client-ping-secs>
<tunneling-client-timeout-secs>40</tunneling-client-timeout-secs>
<server-start>
<java-vendor>Sun</java-vendor>
<java-home>/opt/32bit/jdk1.6.0_18</java-home>
<class-path>${CLASSPATH}:/opt/32bit/jdk1.6.0_18/lib/tools.jar:/product/DSL60/wlserver_10.3/server/lib/weblogic_sp.jar:/product/DSL60/wlserver_10.3/server/lib/weblogic.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/lib/agent.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/lib/openssoclientsdk.jar:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/locale:/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/Agent_003/config</class-path>
<bea-home>/product/DSL60</bea-home>
<root-directory>/product/DSL60/wls/domain/FRANCOISdomain</root-directory>
<security-policy-file>/product/DSL60/wlserver_10.3/server/lib/weblogic.policy</security-policy-file>
<arguments>-Dname=WL1_MYCOMPANY_PID -Dlog4j.configuration=file:///product/DSL60/wls/domain/FRANCOISdomain/lib/log4j.xml -Declipselink.register.run.mbean=true -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -d32 -Doracle.net.tns.admin=/opt/oracle/11.2.0/network/admin/tnsname.ora -Djava.util.logging.config.file=/product/FILES/PAF/j2ee_agents/weblogic_v10_agent/config/OpenSSOAgentLogConfig.properties -DLOG_COMPATMODE=Off</arguments>
<username>myusername</username>
<password-encrypted>{AES}AveXfjkD6M1nkwLoBOtN9QhrOA+C1d84AP+A2WThpN0=</password-encrypted>
</server-start>
<jta-migratable-target>
<user-preferred-server>managed2</user-preferred-server>
<cluster>FRANCOIS_cluster</cluster>
</jta-migratable-target>
<low-memory-time-interval>3600</low-memory-time-interval>
<low-memory-sample-size>10</low-memory-sample-size>
<low-memory-granularity-level>5</low-memory-granularity-level>
<low-memory-gc-threshold>5</low-memory-gc-threshold>
<auto-kill-if-failed>true</auto-kill-if-failed>
<health-check-interval-seconds>30</health-check-interval-seconds>
<managed-server-independence-enabled>true</managed-server-independence-enabled>
</server>
<cluster>
<name>FRANCOIS_cluster</name>
<cluster-address>10.10.166.103:20011,10.10.166.103:20021</cluster-address>
<default-load-algorithm>round-robin</default-load-algorithm>
<cluster-messaging-mode>unicast</cluster-messaging-mode>
<cluster-broadcast-channel></cluster-broadcast-channel>
<weblogic-plugin-enabled>true</weblogic-plugin-enabled>
<frontend-http-port>20011</frontend-http-port>
<frontend-https-port>20012</frontend-https-port>
<number-of-servers-in-cluster-address>1</number-of-servers-in-cluster-address>
</cluster>
<production-mode-enabled>false</production-mode-enabled>
<embedded-ldap>
<name>FRANCOISdomain</name>
<credential-encrypted>{AES}M6zrsdwO+PvT05M07l6QPOBMLacz4b6Z9+DT5EDxQPABYDdIzZbossnMLiXSSodJ</credential-encrypted>
</embedded-ldap>
<archive-configuration-count>3</archive-configuration-count>
<config-backup-enabled>true</config-backup-enabled>
<configuration-version>10.3.2.0</configuration-version>
<library>
<name>mycompany-domain-logging.jar#[email protected]</name>
<target>FRANCOIS_cluster</target>
<module-type xsi:nil="true"></module-type>
<source-path>servers/AdminServer/upload/mycompany-domain-logging.jar/app/mycompany-domain-logging.jar</source-path>
<security-dd-model>DDOnly</security-dd-model>
<staging-mode>stage</staging-mode>
</library>
<library>
<name>eclipselink-custom.jar#[email protected]</name>
<target>FRANCOIS_cluster</target>
<module-type xsi:nil="true"></module-type>
<source-path>servers/AdminServer/upload/eclipselink-custom.jar/app/eclipselink-custom.jar</source-path>
<security-dd-model>DDOnly</security-dd-model>
<staging-mode>stage</staging-mode>
</library>
<machine>
<name>FRANCOIS_Machine1</name>
<node-manager>
<nm-type>Plain</nm-type>
<listen-address>10.10.166.103</listen-address>
<listen-port>5566</listen-port>
</node-manager>
</machine>
<jms-server>
<name>JMSServer1</name>
<target>managed1</target>
<persistent-store>jdbcStore1</persistent-store>
</jms-server>
<jms-server>
<name>JMSServer2</name>
<target>managed2</target>
<persistent-store>jdbcStore2</persistent-store>
</jms-server>
<migratable-target>
<name>managed1 (migratable)</name>
<notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
<user-preferred-server>managed1</user-preferred-server>
<cluster>FRANCOIS_cluster</cluster>
</migratable-target>
<migratable-target>
<name>managed2 (migratable)</name>
<notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
<user-preferred-server>managed2</user-preferred-server>
<cluster>FRANCOIS_cluster</cluster>
</migratable-target>
<startup-class>
<name>AppenderStartup</name>
<target>FRANCOIS_cluster</target>
<class-name>com.mycompany.logging.AppenderStartup</class-name>
<load-before-app-deployments>true</load-before-app-deployments>
</startup-class>
<jdbc-store>
<name>jdbcStore1</name>
<prefix-name>jdbcStore1</prefix-name>
<data-source>technical_mycompany_noxa.ds</data-source>
<target>managed1</target>
</jdbc-store>
<jdbc-store>
<name>jdbcStore2</name>
<prefix-name>jdbcStore2</prefix-name>
<data-source>mycompany_noxa_failover.ds</data-source>
<target>managed2</target>
</jdbc-store>
<jms-system-resource>
<name>EclipseLink_Module</name>
<target>FRANCOIS_cluster</target>
<sub-deployment>
<name>DeployToCluster</name>
<target>FRANCOIS_cluster</target>
</sub-deployment>
<descriptor-file-name>jms/eclipselink_module-jms.xml</descriptor-file-name>
</jms-system-resource>
<jms-system-resource>
<name>TESTJMS</name>
<target>FRANCOIS_cluster</target>
<sub-deployment>
<name>TestQueueM1</name>
<target>JMSServer1</target>
</sub-deployment>
<sub-deployment>
<name>TestQueueM2</name>
<target>JMSServer2</target>
</sub-deployment>
<descriptor-file-name>jms/testjms-jms.xml</descriptor-file-name>
</jms-system-resource>
<admin-server-name>AdminServer</admin-server-name>
<jdbc-system-resource>
<name>mycompany_xa_failover.ds</name>
<target>FRANCOIS_cluster</target>
<descriptor-file-name>jdbc/mycompany_xa_failover2eds-4849-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
<jdbc-system-resource>
<name>mycompany_noxa_failover.ds</name>
<target>FRANCOIS_cluster</target>
<descriptor-file-name>jdbc/mycompany_noxa_failover2eds-3264-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
<jdbc-system-resource>
<name>technical_mycompany_noxa.ds</name>
<target>FRANCOIS_cluster</target>
<descriptor-file-name>jdbc/technical_mycompany_noxa2eds-3047-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
</domain>
Best Regards. -
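For reference, the `jms/testjms-jms.xml` module descriptor referenced by the TESTJMS resource above (with subdeployments TestQueueM1/TestQueueM2) is not shown in the post. One plausible shape for it, assuming a uniform distributed queue and a connection factory that load-balances producers without server affinity, is sketched below; all element and JNDI names here are illustrative, not taken from the actual descriptor:

```xml
<weblogic-jms xmlns="http://www.bea.com/ns/weblogic/weblogic-jms">
  <!-- illustrative names; the real descriptor is not shown in the post -->
  <connection-factory name="TestCF">
    <jndi-name>jms/TestCF</jndi-name>
    <load-balancing-params>
      <load-balancing-enabled>true</load-balancing-enabled>
      <!-- turning affinity off lets producers inside the cluster be
           load-balanced across members instead of pinned to the
           local distributed-destination member -->
      <server-affinity-enabled>false</server-affinity-enabled>
    </load-balancing-params>
  </connection-factory>
  <uniform-distributed-queue name="TestQueue">
    <jndi-name>jms/TestQueue</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
  </uniform-distributed-queue>
</weblogic-jms>
```

The `server-affinity-enabled` flag is the knob most relevant to the original question: with affinity on (the default), a producer running inside server #1 will generally be routed to the local member.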
Clustering problems and load balancing question
I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
machines, 2 App. Server machines and one database server machine. I have defined
one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
so it should be when I fix my problems!
My questions/problems are the following:
1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
clusters? That is, the client-simulator machines send the requests to the software
dispatcher which performs workload balancing between the 2 Weblogic clusters. The
clusters perform round-robin amongst all instances. Note that the documentation only
talks about Hardware Balancing.
2. I am having problems with my multicast IP addresses. For instance, on one App.
Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
Time Out". I have used the utils.MulticastTest utility which shows the packets not
being received:
I (S1) sent message num 1
I (S1) sent message num 2
I (S1) sent message num 3
I (S1) sent message num 4
What am I doing wrong?
3. Re. the cluster configuration:
NOTE: I have executed my workload using 2 independent App. Server machines with a
software dispatcher - no clustering. Each App. Server used a jdbc connection pool
of 84 database connections. The db connections happened to become my bottleneck.
When I tried to increase the number of connections in the jdbc pool, throughput decreased
dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
of my 8 x 900MHz machines in order to scale up. Unfortunately, adding clusters has
not been that simple a task - probably because I am totally new to the Web Application
Server world!
Here is what I've got so far:
I have obtained 3 static IP addresses for the 3 WebLogic instances that
I wish to run within the cluster. All servers in the cluster use port number 80.
There is a corresponding DNS entry for each IP address. My base assumption is that
one of these instances will double up as the Administration Server... Is it true,
or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
(each with a connection pool of 84 database connections for a total of 252 database
connections)?
Do I need to re-deploy my applications for the cluster? And if so, would this explain
why I am having problem starting my Admin Server?
I think this is it for now. Any help will be greatly appreciated!
Thanks in advance,
Guylaine.
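As a side note on question 1 above, the round-robin dispatch a software balancer performs across instances can be sketched in a few lines. The server addresses are placeholders, and a real dispatcher must additionally honor session stickiness rather than rotate blindly:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal sketch of round-robin dispatch across cluster instances.
 * Server addresses are placeholders; a real software dispatcher must
 * also keep sessions pinned to their primary server (stickiness).
 */
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    public String pick() {
        // floorMod keeps the index non-negative even after int overflow
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("s1:80", "s2:80", "s3:80"));
        for (int k = 0; k < 4; k++) {
            System.out.println(rr.pick()); // s1:80, s2:80, s3:80, s1:80
        }
    }
}
```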
Guylaine Cantin wrote:
> I am using Weblogic 6.1. My Windows NT environment consists of 10 web client-simulator
> machines, 2 App. Server machines and one database server machine. I have defined
> one cluster on each app. server. Each cluster is running 3 Weblogic instances, or
> so it should be when I fix my problems!
>
> My questions/problems are the following:
>
> 1. Can I use a software dispatcher to perform workload balancing between the 2 weblogic
> clusters? That is, the client-simulator machines send the requests to the software
> dispatcher which performs workload balancing between the 2 Weblogic clusters. The
> clusters perform round-robin amongst all instances. Note that the documentation only
> talks about Hardware Balancing.
>
We also support software load balancers (e.g., Resonate).
The software dispatcher should be intelligent enough to decode the
session cookie and route the request to the appropriate server. This is
necessary to maintain sticky load balancing.
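The cookie decoding mentioned here can be sketched briefly. This is a toy parser assuming the classic `sessionId!primaryJvmId!secondaryJvmId` layout of the WebLogic session-tracking cookie; verify the exact format against your release before building a dispatcher on it:

```java
import java.util.Optional;

/**
 * Toy parser for the classic WebLogic session cookie layout
 * (sessionId!primaryJvmId!secondaryJvmId) that a sticky software
 * dispatcher needs to understand. The layout is an assumption based
 * on common WebLogic versions; check it against your release.
 */
public class StickyCookie {
    public static Optional<String> primaryJvmId(String cookieValue) {
        if (cookieValue == null) return Optional.empty();
        String[] parts = cookieValue.split("!");
        // parts[0] is the session id; parts[1], if present, identifies
        // the primary server the dispatcher should route to first
        return parts.length >= 2 ? Optional.of(parts[1]) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(primaryJvmId("sessId123!7001!7002").orElse("none")); // 7001
        System.out.println(primaryJvmId("plainSessionId").orElse("none"));      // none
    }
}
```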
> 2. I am having problems with my multicast IP addresses. For instance, on one App.
> Server machine, I am using the multicast IP address: 239.0.0.1 for MyCluster. When
> I start the Admin Server, I get a JDBC error: "... multicast socket error: Request
> Time Out". I have used the utils.MulticastTest utility which shows the packets not
> being received:
>
> I (S1) sent message num 1
> I (S1) sent message num 2
> I (S1) sent message num 3
> I (S1) sent message num 4
> ...
>
> What am I doing wrong?
>
You should run the above utility from multiple windows and check whether
each sender's messages are received by the others, i.e.
java utils.MulticastTest -N S1 -A 239.0.0.1
java utils.MulticastTest -N S2 -A 239.0.0.1
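Before debugging routing or NIC binding, it can also be worth sanity-checking that the configured address is even in the IPv4 multicast range (224.0.0.0 to 239.255.255.255); 239.0.0.1 is. A minimal stdlib check (the class name is illustrative):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Sanity check that a candidate cluster multicast address is actually
 * in the IPv4 multicast range (224.0.0.0-239.255.255.255). A literal
 * IP address is parsed directly, so no DNS lookup happens here.
 */
public class MulticastCheck {
    public static boolean isMulticast(String addr) {
        try {
            return InetAddress.getByName(addr).isMulticastAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isMulticast("239.0.0.1"));  // true
        System.out.println(isMulticast("10.0.0.1"));   // false: unicast
    }
}
```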
> 3. Re. the cluster configuration:
>
> NOTE: I have executed my workload using 2 independent App. Server machines with a
> software dispatcher - no clustering. Each App. Server used a jdbc connection pool
> of 84 database connections. The db connections happened to become my bottleneck.
> When I tried to increase the number of connections in the jdbc pool, throughput decreased
> dramatically. Thus, I decided to add a cluster of Weblogic instances to each one
> of my 8 x 900MHz machines in order to scale up. Unfortunately, adding clusters has
> not been that simple a task - probably because I am totally new to the Web Application
> Server world!
>
You have to stress test your application several times and set
maxCapacity of the conn pool accordingly.
> Here is what I've got so far:
>
> I have obtained 3 static IP addresses for the 3 WebLogic instances that
> I wish to run within the cluster. All servers in the cluster use port number 80.
> There is a corresponding DNS entry for each IP address. My base assumption is that
> one of these instances will double up as the Administration Server... Is it true,
> or do I need to define a separate Admin server if I wish to run 3 Weblogic instances
> (each with a connection pool of 84 database connections for a total of 252 database
> connections)?
BEA recommends using the Admin Server for administrative tasks only,
such as configuring new deployments, JDBC connection pools, adding users, etc.
It's not a good idea to have the Admin Server be part of the cluster.
>
> Do I need to re-deploy my applications for the cluster? And if so, would this explain
> why I am having problem starting my Admin Server?
>
You have to target all your apps to the Cluster.
> I think this is it for now. Any help will be greatly appreciated!
>
> Thanks in advance,
>
> Guylaine.
>
-
Questions on replication and h/w load balancer
Why does a h/w load balancer have to support passive cookies and inspect them to
dispatch the request to the primary server first? If we have in-memory replication
and the h/w load balancer just dispatches the HTTP request from the client to any
of the WebLogic servers in the cluster, wouldn't this work?
Is it to pin the session to the creator server, to minimize the chance of replication
misses due to n/w issues, member server slow speed, buffer overwrite, etc.?
-Shiraz
Yes, and prior to 6.1 (?), if the request showed up at the wrong server it
would fail.
Peace,
Cameron Purdy
Tangosol Inc.
Tangosol Coherence: Clustered Coherent Cache for J2EE
Information at http://www.tangosol.com/
"Shiraz Zaidi" <[email protected]> wrote in message
news:3c15aa10$[email protected]..
>
> Why does h/w load balancer have to support passive cookies and inspect
them to
> dispatch the request to the primary server first? If we have in-memory
replication
> and if h/w loadbalancer just dispatches the http request from the client
to any
> of the weblogic servers in the cluster wouldnt this work?
>
> Is it to pin the session to the creator server to minimize the chance of
replication
> misses due to n/w issues, member server slow speed, buffer overwrite etc.
>
> -Shiraz
-
JMS and MQ series load balancing.
We have an interface that uses the JMS adapter and WebSphere MQ. From a high level, I wanted to ask: when you have additional clustered instances of the adapter framework running, how do the communication channels on these (say 2) instances know which one is to process the message? Or, in the scenario where one comm channel is already bottlenecked, how does the adapter engine know to forward the next JMS message to the other adapter?
I would be most appreciative of any help in this regard, i.e. docs, websites or tools to configure.
Thanks
Jeremy Baker
Hi,
Your JMS adapter is just like a standalone JMS (Java) program that puts and retrieves your messages from the queues, so you don't need to create any local queues or queue managers on the XI server. You need to configure the parameters as mentioned in your sender adapter (WebSphereMQ (MQSeries)) and you will be able to retrieve the messages from MQSeries.
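What the question is really asking about is the competing-consumers pattern: each clustered adapter instance opens its own consumer on the same queue, and the queue manager hands each message to exactly one of them, naturally favoring whichever is free. A toy illustration with plain java.util.concurrent (not MQ code; the class and method names are mine):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Toy competing-consumers illustration: several "adapter instances"
 * read from one shared queue, and each message goes to exactly one
 * of them. In WebSphere MQ the queue manager provides this behavior;
 * nothing here is MQ API code.
 */
public class CompetingConsumers {
    // Deliver messageCount messages to `consumers` competing readers of
    // one shared queue; returns how many messages were processed total.
    static int drain(int messageCount, int consumers) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messageCount; i++) queue.put("msg-" + i);

        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int c = 0; c < consumers; c++) {
            pool.submit(() -> {
                // each poll() hands a message to exactly one consumer,
                // so whichever instance is free takes the next message
                while (queue.poll() != null) processed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 10 messages, 2 competing consumers: all 10 processed, none twice
        System.out.println("processed " + drain(10, 2) + " messages");
    }
}
```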
Make sure you have deployed the necessary JARs on your XI server to connect to WebSphereMQ.
Cheers,
Siva Maranani. -
BIP and Siebel server - file system and load balancing question
1. I just need to understand: when reports are generated through BIP, are the reports stored in some local directory (say, Reports) on the BIP server or in the Siebel File System? If on a file system, how will the archiving policy be implemented?
2. When we talk of load balancing the BIP server, can a common load balancer be used for both the BIP and Siebel servers?
http://myforums.oracle.com/jive3/thread.jspa?threadID=335601
Hi Sravanthi,
Please check the below for finding ITS and WAS parameters from backend :
For ITS - Go to SE37 >> Utilities >> Settings >> click the icon in the top right corner of the popup window >> select Internet Transaction Server >> you will find the Standard Path and HTTP URL.
For WAS - Go to SE37 >> run FM RSBB_URL_PREFIX_GET >> execute it >> you will find the Prefix and Path parameters for WAS.
This step-by-step guide may also help: How-to create a portal system for using it in Visual Composer
Hope it helps
Regards
Arun