Making requests to a cluster
Hi,
I'm a bit confused by the General tab in cluster configuration.
It contains the following fields:-
- Name
- Cluster Address
- Default Load Algorithm
- Service Age Threshold
I understand that the hostname(s)/IP(s) that map to one or more servers
in the cluster go in Cluster Address. But if that's the case,
what party is responsible for scheduling requests to servers
in the cluster, using the algorithm in Default Load Algorithm? And how does one
connect to that party, and on what port?
If the answer is that you have to use your own policy (software
or hardware load balancing) then what is the purpose of
the Default Load Algorithm field in WLS 6.1?
You already configure which servers are in the cluster, so WLS
knows this already. So why does one have to specify the
IPs again in Cluster Address? It seems to me, also from other
messages in this forum, that filling out this tab doesn't have
much benefit at all.
Thanks in advance,
Gary
FT.com
The cluster address is the DNS round-robin address that clients use in their URL to establish their initial connection. The cluster address is currently only used by WebLogic in two limited cases:
- EJB home handles -- these contain info that can be serialized and passed to a client which may not currently have a connection to the cluster. The client can use the handle to find its associated EJB.
- Entity bean fail-over -- allows a client to automagically get back to the cluster if a connection to the cluster fails.
Tom
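For example, the initial connection Tom describes is the one place the cluster address shows up in client code: it goes in the JNDI provider URL. This is a sketch with hypothetical names (`mycluster.example.com` stands in for your DNS round-robin cluster address, and the actual lookup needs weblogic.jar on the classpath, so it is left commented out):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class ClusterClient {
    /** Build the JNDI environment for the initial cluster connection.
     *  The cluster address appears only in this provider URL; once the
     *  initial context is made, cluster-aware stubs take over. */
    static Hashtable<String, String> clusterEnv(String clusterAddress, int port) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://" + clusterAddress + ":" + port);
        return env;
    }

    public static void main(String[] args) {
        // Hypothetical cluster address -- substitute your own DNS name
        // that round-robins over the member IPs.
        Hashtable<String, String> env = clusterEnv("mycluster.example.com", 7001);
        System.out.println(env.get(Context.PROVIDER_URL));
        // With weblogic.jar on the classpath you would then do:
        //   Context ctx = new javax.naming.InitialContext(env);
        //   Object home = ctx.lookup("ejb/SomeHome");  // hypothetical JNDI name
    }
}
```

After the initial context is obtained, scheduling is done by the replica-aware stubs the client downloads, which is why no separate load-balancer "party" needs to be contacted.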
Similar Messages
-
Urgent help requested : How to cluster SOAP (via EJB) in WLS 7.0 SP 01?
Hi all,
I am able to deploy simple EJBs across clustered WLS instances.
I am unsuccessful in doing the same for a web service (SOAP) using EJBs. The application
gets deployed but successive requests do not round robin among available services.
I think somehow I need to do some cluster tweaks to the receiving WLS servlet
that peeks into the SOAP message and forwards it to the right service.
Could someone please help me out?
I would be most grateful.
Thanks a lot.
Guha
A couple of entries in the webservices.xml file, and making the proxy from the web server reach out to the cluster instead of trying to make the HTTPS client do so.
-
Apache plug-in won't load balance requests evenly on cluster
I can't seem to get the Apache plug-in to actually do round-robin load balancing of HTTP requests. It does random-robin, as I like to call it, since the plug-in will usually hit all the servers in the cluster, but in a random fashion.
I've got three managed servers:
- 192.168.1.5:8001 (WL6 on Linux)
- 192.168.1.2:8001 (WL6 on Linux)
- 192.168.1.7:8001 (WL6 on Linux)
Admin server on 192.168.1.7:7000 (WL6 on W2k).
My Apache server is 1.3.9 (RedHat SSL) on 192.168.1.52.
The log file for each server has something like this:
####<Apr 19, 2001 1:18:54 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <main> <system> <> <000102> <Joined cluster cluster1 at address 225.0.0.5 on port 8001>
####<Apr 19, 2001 1:19:31 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <ExecuteThread: '9' for queue: 'default'> <> <> <000127> <Adding 3773576126129840579S:192.168.1.2:[8001,8001,7002,7002,8001,7002,-1]:192.168.1.52 to the cluster>
####<Apr 19, 2001 1:19:31 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <ExecuteThread: '11' for queue: 'default'> <> <> <000127> <Adding -6393447100509727955S:192.168.1.5:[8001,8001,7002,7002,8001,7002,-1]:192.168.1.52 to the cluster>
So I believe I have correctly created a cluster, although I did not bother to assign replication groups for HTTP session replication (yet).
The Apache debug output indicates it knows about all three servers, and I can see it doing the "random-robin" load balancing. Here is the output:
Thu Apr 19 00:20:53 2001 Initializing lastIndex=2 for a list of length=3
Thu Apr 19 00:20:53 2001 Init Srvr# [1] = [192.168.1.2:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 Init Srvr# [2] = [192.168.1.5:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 Init Srvr# [3] = [192.168.1.7:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 INFO: SSL is not configured
Thu Apr 19 00:20:53 2001 Now trying whatever is on the list; ci->canUseSrvrList = 1
Thu Apr 19 00:20:53 2001 INFO: New NON-SSL URL
Thu Apr 19 00:20:53 2001 general list: trying connect to '192.168.1.7'/8001
Thu Apr 19 00:20:53 2001 Connected to 192.168.1.7:8001
Thu Apr 19 00:20:53 2001 INFO: sysSend 320
Thu Apr 19 00:20:53 2001 INFO: Reader::fill(): first=0 last=0 toRead=4096
Thu Apr 19 00:21:06 2001 parsed all headers OK
Thu Apr 19 00:21:06 2001 Initializing lastIndex=1 for a list of length=3
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [1] = [192.168.1.5:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [2] = [192.168.1.2:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [3] = [192.168.1.7:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 INFO: Reader::fill(): first=0 last=0 toRead=4096
Basically, the lastIndex=XXX appears to be random. It may do round-robin for 4 or 5 connections, but then it always resorts to randomly directing new connections.
This is what the configuration looks like using the plug-in's
/weblogic?__WebLogicBridgeConfig URL:
Weblogic Apache Bridge Configuration parameters:
WebLogic Cluster List:
1.Host: '192.168.1.2' Port: 8001 Primary
General Server List:
1.Host: '192.168.1.2' Port: 8001
2.Host: '192.168.1.5' Port: 8001
3.Host: '192.168.1.7' Port: 8001
DefaultFileName: ''
PathTrim: '/weblogic'
PathPrepend: ''
ConnectTimeoutSecs: '10'
ConnectRetrySecs: '2'
HungServerRecoverSecs: '300'
MaxPostSize: '0'
StatPath: false
CookieName: JSESSIONID
Idempotent: ON
FileCaching: ON
ErrorPage: ''
DisableCookie2Server: OFF
Can someone please help to shed some light on this? I would be really grateful, thanks!
Jeff
Right - it means that the only configuration which can do perfect round-robin is a single plugin (non-Apache, or single-process Apache). All others essentially do random (sort of, but it can skew test results during the first N requests).
Robert Patrick <[email protected]> wrote:
Dimitri,
The way Apache works is that it spawns a bunch of child processes, and the parent process that listens on the port delegates the processing of each request to one of the child processes. This means that the load balancing done by the plugin before the session ID is assigned does not do perfect round-robining, because there are multiple copies of the plugin loaded in the multiple child processes. This situation is similar to the one you would get by running multiple proxy servers on different machines with the NES/iPlanet and IIS plugins.
As I pointed out in my response to Jeff, attempting to address this problem with IPC mechanisms would only solve the single-machine problem, and most people deploy multiple proxy servers to avoid a single point of failure...
Hope this helps,
Robert
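Robert's multi-process explanation can be illustrated with a small simulation: give each child process its own private round-robin counter, and even though every counter is perfectly fair, the aggregate across children looks random for small request counts. This is a sketch of the effect, not the plugin's actual code:

```java
import java.util.Random;

public class PluginSim {
    /** Simulate N Apache child processes, each holding its own private
     *  round-robin index over the server list (the real plugin keeps
     *  this state per process, which is the whole problem). */
    static int[] simulate(long seed, int servers, int children, int requests) {
        Random rnd = new Random(seed);
        int[] next = new int[children];           // per-child round-robin index
        for (int c = 0; c < children; c++) next[c] = rnd.nextInt(servers);
        int[] hits = new int[servers];
        for (int r = 0; r < requests; r++) {
            int child = rnd.nextInt(children);    // parent hands request to a child
            int server = next[child];             // child's private counter picks
            next[child] = (server + 1) % servers;
            hits[server]++;
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] hits = simulate(42, 3, 8, 30);
        for (int s = 0; s < hits.length; s++)
            System.out.println("server " + s + ": " + hits[s] + " requests");
    }
}
```

With `children = 1` the same code produces perfect round-robin, which matches Jeff's observation that a single-process proxy balanced evenly.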
Dimitri Rakitine wrote:
Hrm. This is strange - I thought that all the information necessary for 'sticky' load balancing (primary/secondary) is contained in the cookie/session info, so the particular plug-in implementation should not make any difference. For load balancing - statistically, given a large enough sampling base, the Apache plug-in should perform just as well as the NS one (unless Apache is somehow misconfigured and calls fork() for each new request).
Jeff Calog <[email protected]> wrote:
Robert,
Thanks for the sanity reply, you are definitely right. I used Netscape 3.0 on
Win2k and it did perfect round-robin load balancing to my servers.
<raving>
BEA - ARE YOU LISTENING? STOP TELLING PEOPLE YOUR APACHE PLUG-IN IS A VIABLE
LOAD BALANCING SOLUTION! It's worthless for load balancing!
</raving>
In some tests, as many as 90% of my connections/requests would be sent to a single server. There should be something in the release notes like "By the way, the Apache plug-in is only advertised as doing round-robin load balancing; in reality it doesn't work worth a darn".
I'm surprised they don't use shared memory or some other technique (pipes, sockets, signals, writing to /tmp, anything) for interprocess communication to fix that.
Jeff
Robert Patrick <[email protected]> wrote:
Yes, the problem lies in the fact that Apache uses multiple processes instead of multiple threads to process requests. Therefore, you end up with multiple processes, all with the WebLogic plugin loaded into them (and they cannot see one another)...
Hopefully Apache 2.0, when it comes out, will allow the plugin to do a better job...
Dimitri -
Making Requester Mandatory in PO Distributions
Hi,
We have a requirement to make the Requester field mandatory in the PO
Distribution screen. I used forms personalization to make the required
property of that field to True.
But when I create a new PO and PO line and save it, it does not validate the
field in the Distributions screen. The field is required only when I
navigate to that screen. Is there a way we can achieve this?
Thanks,
Ashish
Hi... Try this...
1. Go to SPRO > Material Management > Purchasing > Purchase Order > Define Screen Layout at Document Level.
2. Choose "ME21N" and select "Details".
3. You will see a list of "Field Selection Groups"; double-click on them and look for the specific field label you are looking for.
4. Choose "Reqd.entry" for that field label.
Hope this is helpful
Regards,
kumara -
Making use of a cluster to render
Hi everyone,
First post here! Hoping to get involved in the discussions.
Anyway, I've got a cluster set up with two MBPs, two new quad Xeons, and a fairly beefed-up G5. One of the MacBooks is acting as controller, with only one rendering service running on it, so it doesn't get overloaded.
I am editing a 3.5 GB file in Final Cut Pro and was wondering how I can make the most of the cluster I've set up. Do I need to export it to Compressor and use it that way, or is there a simpler way?
Thanks a lot,
Ronan
reader.setFeature("http://xml.org/sax/features/validation", true);
reader.setFeature("http://apache.org/xml/features/validation/schema", true);
reader.setProperty("http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation", XSDSchemaString);
-
WLS 8.1 SP2 : node-to-node request routing in cluster???
Hello everybody;
a bit confused about request management in WLS clusters,
hope you can help clarify, please.
Do WLS 8.1 SP2 clusters use node-to-node request
routing "behind the scenes"?
If so, can this feature be explicitly configured/controlled?
TIA
Paola R.
Paola R. <[email protected]> writes:
> Hello everybody;
>
>
> a bit confused about request management in WLS clusters,
> hope you can help clarify, please.
>
>
> Do WLS 8.1 SP2 clusters use node-to-node request
> routing "behind the scenes"?
Only in certain circumstances (usually because you are using an applet
or the servers are behind a firewall). Usually routing is client
driven.
>
>
> If so, can this feature be explicitly configured/controlled?
No
andy
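Andy's point that routing is usually client driven refers to WebLogic's replica-aware stubs: the stub the client obtains carries the cluster's member list and applies the load algorithm itself. Here is a toy model of that idea (illustrative names only, not the actual WLS API):

```java
import java.util.Arrays;
import java.util.List;

/** Toy model of a replica-aware stub: the stub held by the CLIENT
 *  carries the member list and applies the load algorithm (round-robin
 *  here), so no server-side party routes the call for it. */
public class ReplicaAwareStub {
    private final List<String> members;
    private int next = 0;

    public ReplicaAwareStub(List<String> members) {
        this.members = members;
    }

    /** Pick the next member round-robin; a real stub would then
     *  invoke the remote method on that member (and fail over). */
    public String route(String method) {
        String target = members.get(next);
        next = (next + 1) % members.size();
        return method + " -> " + target;
    }

    public static void main(String[] args) {
        ReplicaAwareStub stub = new ReplicaAwareStub(
                Arrays.asList("server1:7001", "server2:7001", "server3:7001"));
        for (int i = 0; i < 4; i++)
            System.out.println(stub.route("doWork"));
    }
}
```

Since this state lives in each client, the routing cannot be centrally configured beyond choosing the load algorithm, which matches Andy's "No" above.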
>
>
> TIA
>
>
> --
> Paola R. -
Google crossdomain.xml when making requests
Hi guys,
Does anyone know how to get round Google's cross-domain policy when trying to access their services, like the maps.google.com geocode?
Every call I make to try a LoadVars request works fine in the Flash IDE, where it just tries to pull down the google.com crossdomain file, but any call in a live environment just requests the crossdomain file from the appropriate service, i.e. maps.google.com, and then doesn't make the actual request.
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "
http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<site-control
permitted-cross-domain-policies="by-content-type" />
</cross-domain-policy>
Any help would be great; otherwise I'll have to proxy the requests through a PHP page.
Howdy,
I have done some further investigations and I have so far concluded that:
- in the cases where the service does NOT work, the following is happening:
I watch a video (video 1), then close the window. I start a new window with a new video (video 2). However, despite closing a window and opening a new window, the TCP session for video 1 remains open. So when I start video 2, the crossdomain.xml request is sent on the old TCP session.
Instead of getting an "OK" reply, I simply get an "ACK" reply and the process is halted.
- in the cases where the service does work, the following is happening:
I watch a video (video 1), then close the window. I start a new window with a new video (video 2). For each window, a new TCP session is set up/synchronised, and the crossdomain.xml request receives an "ACK".
- it seems that the failure scenario happens when there are no files in the cache.
Any ideas of what this could be? -
Hi !
We have our workflow currently designed for approving access, but when somebody creates a request for a new user, I don't want them to have to choose any role; however, it seems to be mandatory if the request type is New_Account. (I tried to take off the action Assign_Roles, but nothing happened.)
I don't want the requesters or managers to select roles. I just want them to ask for a role or a tcode in the Request Reason, and the Security Admins ultimately choose the role at their stage.
Can you help me?
Thanks a lot!
Hi Karen,
CUP is designed to have a "Role" while creating a request. CUP cannot create a request without having a "Role" assigned.
If you want this functionality, please open an enhancement request following the note below.
Note # 1083615 - GRC Access Control Enhancement Process.
Best Regards,
Sirish Gullapalli. -
Making some elements of cluster invisible
I have a cluster indicator as shown. Cluster contains a Num ‘has linear test passed?’
I am passing cluster’s reference to the parent VI.
If Num=0, I want to hide only the Num keeping rest of the cluster visible in the parent VI.
How can I just make one num optionally invisible?
Thanks
Sandeep
Attachments:
Create LinearTest Report Ref.vi 350 KB
I have to echo Dynamik's comments; this is an odd construct.
Dynamik, you missed the "_1" version you were using.
Sandeep,
Just hit ignore, open the diagram and replace the missing VI with the one you posted.
Here is another variation where I took the liberty of redefining your output type, just to make things easier.
Make sure you close yours before you open the VI "SetClusterVisabilityCreateLinearTestReportRef.vi"
Ben
Message Edited by Ben on 09-15-2005 08:13 PM
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction
Attachments:
SetClusterVisabilityCreateLinearTestReportRef.vi 168 KB
CreateLinearTestReportRef1.vi 351 KB -
Making new oracle 10g cluster in redhat enterprise linux
Hi All,
I have installed Red Hat Enterprise Server on 2 systems, and both are attached to SAN network storage. I want to configure Oracle 10g clustering with this setup.
Should I configure clustering in Linux only, or is there a clustering feature in Oracle?
If yes, then how do I do that? I don't have any idea about Oracle.
Please, somebody help me.
With regards,
Ali
Hi Ali!
Hardware clustering is different from Oracle clustering.
If you have the database you should investigate RAC http://www.oracle.com/pls/db102/to_toc?pathname=rac.102%2Fb14197%2Ftoc.htm&remark=portal+%28Administration%29
For the Application Server have a look at http://download-uk.oracle.com/docs/cd/B14099_18/core.1012/b14003/toc.htm
Hardware clustering with RedHat could be achieved with Piranha.
cu
Andreas -
Cluster service is requested to stop on all nodes when DNS is unavailable
Our 6-node Coherence cluster has been running fine for a few days. All Coherence nodes were requested to stop the cluster service when the DNS server was unavailable for a few minutes due to a scheduled maintenance activity. Cluster services didn't come back up until the DNS server was available again. Why would it need a DNS server when the cluster is already started and has been running fine for days?
Here’s the error message and thread dump from the logs:
2010-12-18 18:07:18.819/3464791.277 Oracle Coherence GE 3.6.0.3 <Error> (thread=IpMonitor, member=7): Detected hard timeout) of {WrapperGuardable Guard{Daemon=Cluster} Service=ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.6, OldestMemberId=5}}
2010-12-18 18:07:18.823/3464791.281 Oracle Coherence GE 3.6.0.3 <Error> (thread=Termination Thread, member=7): Full Thread Dump
Thread[Invocation:Management:EventDispatcher,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[Logger@9250962 3.6.0.3,3,main]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[Signal Dispatcher,9,system]
Thread[Finalizer,8,system]
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
Thread[Invocation:Management,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:6)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
ThreadCluster
java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:850)
java.net.InetAddress.getAddressFromNameService(InetAddress.java:1201)
java.net.InetAddress.getAllByName0(InetAddress.java:1154)
java.net.InetAddress.getAllByName(InetAddress.java:1084)
java.net.InetAddress.getAllByName(InetAddress.java:1020)
java.net.InetAddress.getByName(InetAddress.java:970)
java.net.InetSocketAddress.<init>(InetSocketAddress.java:124)
com.tangosol.net.ConfigurableAddressProvider$AddressHolder.getAddress(ConfigurableAddressProvider.java:426)
com.tangosol.net.ConfigurableAddressProvider$1.next(ConfigurableAddressProvider.java:167)
java.util.AbstractCollection.contains(AbstractCollection.java:89)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.isWellKnown(ClusterService.CDB:5)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.compareImportance(ClusterService.CDB:7)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.getWitnessMemberSet(ClusterService.CDB:49)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.verifyMemberLeft(ClusterService.CDB:91)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onNotifyTcmpTimeout(ClusterService.CDB:11)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$NotifyTcmpTimeout.onReceived(ClusterService.CDB:1)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:11)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onNotify(ClusterService.CDB:3)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[main,5,main]
java.lang.Object.wait(Native Method)
com.tangosol.net.DefaultCacheServer.monitorServices(DefaultCacheServer.java:270)
com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:56)
com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
Thread[PacketReceiver,7,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[PacketSpeaker,8,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:21)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[Termination Thread,6,Cluster]
java.lang.Thread.dumpThreads(Native Method)
java.lang.Thread.getAllStackTraces(Thread.java:1487)
com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:810)
com.tangosol.coherence.component.net.Cluster$DefaultFailurePolicy.onGuardableTerminate(Cluster.CDB:4)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$WrapperGuardable.terminate(Grid.CDB:1)
com.tangosol.net.GuardSupport$Context$2.run(GuardSupport.java:677)
java.lang.Thread.run(Thread.java:619)
Thread[Reference Handler,10,system]
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
Thread[PacketPublisher,6,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[DistributedCache,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:6)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[IpMonitor,6,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.IpMonitor.onWait(IpMonitor.CDB:4)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[PacketListener1P,8,Cluster]
java.net.PlainDatagramSocketImpl.receive0(Native Method)
java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
java.net.DatagramSocket.receive(DatagramSocket.java:725)
com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:22)
com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:1)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:20)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[PacketListener1,8,Cluster]
java.net.PlainDatagramSocketImpl.receive0(Native Method)
java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
java.net.DatagramSocket.receive(DatagramSocket.java:725)
com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:22)
com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:1)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:20)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
2010-12-18 18:07:18.823/3464791.281 Oracle Coherence GE 3.6.0.3 <Warning> (thread=Termination Thread, member=7): Terminating Guard{Daemon=Cluster}
2010-12-18 18:07:18.823/3464791.281 Oracle Coherence GE 3.6.0.3 <Error> (thread=StopService, member=7): Requested to stop cluster service.
2010-12-18 18:07:18.826/3464791.284 Oracle Coherence GE 3.6.0.3 <D5> (thread=DistributedCache, member=7): Service DistributedCache left the cluster
2010-12-18 18:07:18.826/3464791.284 Oracle Coherence GE 3.6.0.3 <D5> (thread=Invocation:Management, member=7): Service Management left the cluster
2010-12-18 18:07:24.904/3464797.362 Oracle Coherence GE 3.6.0.3 <Error> (thread=main, member=7): Failed to restart services: com.tangosol.net.RequestTimeoutException: Timeout while waiting for cluster to stop.
2010-12-18 18:07:33.915/3464806.373 Oracle Coherence GE 3.6.0.3 <Error> (thread=main, member=7): Failed to restart services: com.tangosol.net.RequestTimeoutException: Timeout while waiting for cluster to stop.
2010-12-18 18:07:42.924/3464815.382 Oracle Coherence GE 3.6.0.3 <Error> (thread=main, member=7): Failed to restart services: com.tangosol.net.RequestTimeoutException: Timeout while waiting for cluster to stop.
2010-12-18 18:07:51.936/3464824.394 Oracle Coherence GE 3.6.0.3 <Error> (thread=main, member=7): Failed to restart services: com.tangosol.net.RequestTimeoutException: Timeout while waiting for cluster to stop.
The log file shows that the list of addresses is formed by IP, but they are configured using hostnames in the override file.
Here's the log entry:
WellKnownAddressList(Size=2,
WKA{Address=165.X.X.XX7, Port=8088}
WKA{Address=165.X.X.XX8, Port=8088}
Here's the configuration from tangosol-coherence-override-prod.xml:
<well-known-addresses>
<socket-address id="1">
<address system-property="tangosol.coherence.wka">serverA</address>
<port system-property="tangosol.coherence.wka.port">8088</port>
</socket-address>
<socket-address id="2">
<address system-property="tangosol.coherence.wka">serverB</address>
<port system-property="tangosol.coherence.wka.port">8088</port>
</socket-address>
</well-known-addresses>
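The stack trace above shows `ClusterService` building an `InetSocketAddress` from each configured well-known address; with hostnames, that constructor performs a DNS lookup every time the list is walked, which is why a DNS outage can hurt an otherwise healthy cluster. A minimal sketch of the difference (hostnames and IPs here are hypothetical):

```java
import java.net.InetSocketAddress;

public class WkaResolution {
    public static void main(String[] args) {
        // Hostname form: InetSocketAddress attempts a DNS lookup at
        // construction time, so with DNS unavailable the address comes
        // back unresolved (".invalid" is a reserved TLD that never resolves).
        InetSocketAddress byName = new InetSocketAddress("serverA.invalid", 8088);
        System.out.println("by hostname, unresolved: " + byName.isUnresolved());

        // IP-literal form: parsed directly, no DNS lookup involved.
        InetSocketAddress byIp = new InetSocketAddress("165.0.0.17", 8088);
        System.out.println("by IP literal, unresolved: " + byIp.isUnresolved());
    }
}
```

One way to remove the runtime DNS dependency, under that reading, is to configure the WKA `<address>` entries as IP literals instead of hostnames.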
Thanks,
Ramesh -
Weblogic managed servers connecting to the servers in different cluster
Hi All,
We have a weird problem going on for a while. We have a cluster configuration
with an admin server and two managed servers. We have a similar configuration
in DEV, TEST and PROD. The problem is that the managed server members in the DEV cluster
are making connections to managed servers which are members of the PROD cluster for
session replication. In the same way, TEST servers are trying to connect to PROD and
DEV.
Has anyone seen this kind of problem before? BEA seems to be clueless so far.
Thanks in advance for your input.
Udit
Venkat,
That's a good suggestion, but these things are too obvious to ignore. We have different
multicast addresses in DEV and PROD, and the hosts are also on different subnets. I do
not know if the cluster name will make any difference, though.
Thanks for your input anyway,
Udit
"venkat" <[email protected]> wrote:
>
>Udit,
> You can check the sub net, multicast address and the cluster name.
>If the dev
>and prod servers are in the same sub net with same multicast address,
>then change
>the multicast and try.
>
>Venkat
>"venkat" <[email protected]> wrote:
>>
>>Udit,
>>
>>
>>"Udit Singh" <[email protected]> wrote:
>>>
>>>Kumar,
>>>Thanks for the reply.
>>>The situation is that a managed server in DEV tries to replicate the session
>>>to a managed server in PROD or TEST, and vice versa.
>>>Let us say our dev managed servers are running on abc01 and abc02 and
>>>prod managed servers are running on xyz01 and xyz02. All the managed
>>>servers are running on port 7005.
>>>If I do a netstat on abc01 or abc02 I can see established connections
>>>between abc01/02 and xyz01/02.
>>>Why is that happening? We are running 6.1 SP2.
>>>
>>>Udit
>>>
>>>Kumar Allamraju <[email protected]> wrote:
>>>>We do not restrict inter-cluster communication as of 6.1 SP3.
>>>>Once we get the IP from the cookie, we make a
>>>>connection to the other clustered node without checking
>>>>whether that server is part of the same cluster. This is
>>>>already fixed in 7.x and 6.1 SP4 (not yet released). If you are
>>>>on 6.1 SP2 or SP3 then you should contact support and
>>>>reference CR # CR089798 to get a one-off patch.
>>>>
>>>>Regardless, are you traversing from the DEV to the PROD cluster and
>>>>vice versa? If not, then this problem shouldn't happen unless the
>>>>plugin is routing the request to the wrong cluster.
>>>>
>>>>--
>>>>Kumar
-
Why doesn't the cluster fail over when I shut down one managed server?
Hello, I created one cluster with two managed servers and deployed an application across the cluster, but WebLogic gave me two URLs with two different ports for accessing this application.
http://server1:7003/App_name
http://server1:7005/App_name
When I do an immediate shutdown of one managed server, I lose the connection to the application through that managed server. My question is: why don't failover and load balancing work?
Why two different addresses?
Thanks for any help.

Well, you have two different addresses (URLs) because those are two physical managed servers. By creating a cluster you do not automatically get a virtual address (URL) that will load-balance requests for that application between those two managed servers.
If you want one URL to access this application, you will have to have some kind of web server in front of your WebLogic. You can install and configure Oracle HTTP Server to route requests to WebLogic cluster. Refer this:
http://download.oracle.com/docs/cd/E12839_01/web.1111/e10144/intro_ohs.htm#i1008837
And this for details on how to configure mod_wl_ohs to route requests from OHS to WLS:
http://download.oracle.com/docs/cd/E12839_01/web.1111/e10144/under_mods.htm#BABGCGHJ
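As a rough sketch of what the mod_wl_ohs side of that configuration looks like (the host/port pairs are the two managed servers from the post; treat the exact fragment as illustrative, not a drop-in config):

```apache
<IfModule weblogic_module>
  # Route requests round-robin across the two clustered managed servers
  WebLogicCluster server1:7003,server1:7005
</IfModule>
<Location /App_name>
  SetHandler weblogic-handler
</Location>
```

With this in place, clients hit the single OHS URL and the plugin handles load balancing and failover between the cluster members.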
Hope this helps.
Thanks
Shail -
Controlling the IP address used to issue HTTP requests
Hi,
We are looking for a way to control the IP address we use for outgoing HTTP requests so that we can choose between two IP addresses we will have on a linux server. We have a Linux server running Java/Tomcat which is making HTTP GET requests to Web servers. We want to be able to use one IP address when making requests to one Web server and use a different IP address when making requests to another server. The actual Java method we are using to get the page is connection.getInputStream(), and we get a connection with url.openConnection();
Is it possible to control which of the server's IP addresses is used for each request, and if so, how? Is there some way we can set up the Linux server or Tomcat to make it work?
BTW, I believe it is possible to have two IP addresses on a linux server, but I haven't looked into what's involved. Feel free to comment on this if relevant.
Thanks!

It is definitely possible to have multiple IP addresses on a Linux machine.
As far as setting the local IP address to bind to when opening a connection, you cannot do this with the standard HttpURLConnection, HttpsURLConnection, or URLConnection. You can do it, however, using Apache's HttpClient API. -
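At the socket level, the mechanism HttpClient relies on is the four-argument java.net.Socket constructor, which lets you pick the local address an outgoing connection is bound to. A minimal, self-contained sketch (using 127.0.0.1 as a stand-in for one of the server's two addresses, and a throwaway ServerSocket as the peer):

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LocalBindSketch {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getByName("127.0.0.1");
        // Throwaway listener standing in for the remote web server.
        try (ServerSocket server = new ServerSocket(0, 1, local)) {
            // The 4-arg constructor chooses which local address the outgoing
            // connection is bound to; local port 0 means "any free port".
            try (Socket s = new Socket(local, server.getLocalPort(), local, 0)) {
                System.out.println(s.getLocalAddress().getHostAddress());
            }
        }
    }
}
```

On a box with two public addresses, passing the appropriate one as the local-bind argument (or via HttpClient's local-address setting) selects which source IP each request uses.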
How to add information to the header of a request
Hi,
I am new to Flex and am facing one issue related to requests.
While making a request I want to add a token that can be validated at the server side to check whether the request is valid.
But I am unable to add this information to the header of the request.
The following is the piece of code which I am using:
var httpService:HTTPService = new HTTPService();
httpService.url = url;
httpService.method = method;
// TO DO: adding timer and transaction array
if (resultHandler != null) {
    httpService.addEventListener(ResultEvent.RESULT, resultHandler);
}
if (faultHandler != null) {
    httpService.addEventListener(FaultEvent.FAULT, faultHandler);
}
// headers takes an object of name/value pairs, not a comma-separated pair
httpService.headers = {securi_token: "0683b713-8529-45fd-a930-de132a87b171"}; // randomly generated token
httpService.send();
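On the server side, validating the token amounts to comparing the incoming header value against the expected one. A hypothetical sketch in plain Java (the headers are represented as a Map so the check is framework-neutral; the header name matches the Flex client code):

```java
import java.util.Map;

public class TokenCheckSketch {
    // Hypothetical header name; must match what the client sends.
    static final String TOKEN_HEADER = "securi_token";

    // Returns true when the request carries the expected token value.
    static boolean isValidRequest(Map<String, String> headers, String expectedToken) {
        return expectedToken.equals(headers.get(TOKEN_HEADER));
    }

    public static void main(String[] args) {
        Map<String, String> headers =
            Map.of(TOKEN_HEADER, "0683b713-8529-45fd-a930-de132a87b171");
        System.out.println(isValidRequest(headers, "0683b713-8529-45fd-a930-de132a87b171"));
    }
}
```

In a servlet, the same check would read the value with request.getHeader and reject the request when it does not match.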
This is not working. Please help!