Cluster-wide JNDI replication
I have a question about cluster-wide JNDI replication. I am on WebLogic
7.0 SP2, Solaris 8. I have 2 WebLogic servers in a cluster and two
asynchronous JVMs connect to this cluster. The two WebLogic servers
would be w1 and w2 and the JVMs would be j1 and j2.
w1 creates a stateful EJB. The handle to this EJB is put on the JNDI
tree. A JMS message is created which contains the key to the handle
object. The JMS message is sent and picked up by one of the JVMs, say
j1. j1 tries to look up the handle using the key in the JMS
message and is not able to find the handle object.
The reason is that j1 is trying to connect to w2 to find the handle.
The change to the JNDI tree has not been propagated to w2 yet, hence the
error. Any ideas on why it would take so long to replicate the JNDI
tree? My JNDI tree has hardly anything in it: a couple of JMS
connection factories, 3 EJBs, and a couple of JMS queues.
Would appreciate any kind of help. I am going to migrate to storing the EJB
handles in another persistent store instead of the JNDI tree, but until
then any insight into this problem would be helpful.
Please don't advise solutions like using Thread.sleep and trying again.
Thanks,
Shiva.
Shiva,
That's an interesting problem. The handle should have enough information
to locate the EJB. Are you explicitly trying to connect to W2 from J1 or
something? That part didn't make sense to me.
(While it's a different approach, if you need to share data in real time across a
cluster, use our Coherence Java clustered cache software.)
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Shiva P" <[email protected]> wrote in message
news:[email protected]...
>
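One way around the race entirely: since an EJB Handle is Serializable, the handle itself can travel in the JMS message body instead of a JNDI key, so no JNDI replication is needed at all. A minimal sketch of the round-trip, using a plain String as a stand-in for the real javax.ejb.Handle and byte arrays in place of an ObjectMessage body:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class HandleShipping {

    // Serialize the handle into bytes suitable for a JMS message body.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // The receiving JVM reconstructs the handle from the message bytes.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a real javax.ejb.Handle obtained from the stateful bean.
        String handleStandIn = "ejb-handle-payload";
        byte[] wire = serialize(handleStandIn);  // goes into the JMS message
        Object roundTripped = deserialize(wire); // done on the receiving JVM
        System.out.println(roundTripped);
    }
}
```

On the receiving JVM, the deserialized handle's getEJBObject() would then locate the bean without any JNDI lookup.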
Similar Messages
-
JNDI replication within a cluster
Hi to all of you,
we successfully enabled HTTP Session replication and tested the failover. We would also like to set up JNDI replication, so that we can use it as storage for some shared data -- as stated in http://download.oracle.com/docs/cd/B10464_05/web.904/b10324/cluster.htm this should be enabled automatically once EJB replication is enabled.
With some problems we finally enabled EJB replication (we configured it through orion-application.xml) and the required replication policy is propagated to our stateful beans. Anyway, the JNDI is still not replicated across the machines.
We are running latest OAS 10g, cluster is MultiCast on RedHat Enterprise, replication policy for Stateful beans is set to 'onRequestEnd' (we tried all the options :), our application is normal ear with 1 ejb and 1 war archive and apart from JNDI replication, it works as expected.
Is there some trick that is not mentioned or that we may overlooked in documentation to enable JNDI replication?
Kind Regards,
Martin
Hopefully solved -- though the documentation explicitly mentions rebinding as not working: after any change made to a value stored in the JNDI context, you should simply re-bind the value to the JNDI context; the value is then replicated to the other JNDI contexts.
m. -
JNDI replication problems in WebLogic cluster.
I need to implement a replicable property in the cluster: each server could
update it, and the new value should be available to the whole cluster. I tried to bind
this property to JNDI and got several problems:
1) On each rebinding I got error messages:
<Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
to bind an object under the name example.TestName in the jndi tree. The
object you have bound java.util.Date from 10.1.8.114 is non clusterable and
you have tried to bind more than once from two or more servers. Such objects
can only deployed from one server.>
<Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
example.TestName for the object java.util.Date from 10.1.9.250 under the
bind name example.TestName in the jndi tree.>
As I understand it, this is the designed behavior for non-RMI objects. Am I
correct?
2) Replication is still done, but I got random results: I bind an object on
server 1 and get it from server 2, and they are not always the same, even with
a delay of several seconds between operations (tested with 0-10 sec.); and while
one lookup returns the old version after 10 sec., a second attempt without delay
could return the correct result.
Any ideas how to ensure correct replication? I need lookup to return the
object I bound on a different server.
3) Even when lookup returns correct result, Admin Console in
Server->Monitoring-> JNDI Tree shows an error for bound object:
Exception
javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
Unresolved:'example' ; remaining name ''
My configuration: admin server + 3 managed servers in a cluster.
JNDI bind and lookup is done from a stateless session bean. The session bean is
clusterable and deployed to all servers in the cluster. The client invokes session
methods through the t3 protocol directly on the servers.
Thank you for any help.
It is not a good idea to use JNDI to replicate application data. Did you consider
using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) -
there is an example of a distributed hashtable in the examples.
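To make the messaging suggestion concrete, here is a toy sketch of the idea: every put is fanned out to all peers so each node converges on the same map. The class and method names are invented for illustration; a real version would publish the update to a JMS topic or a JavaGroups channel instead of calling peers directly.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of replicating application data via messaging
// rather than JNDI: each node holds a local map and broadcasts updates.
public class ReplicatedMap {
    private final Map<String, Object> local = new HashMap<>();
    private final List<ReplicatedMap> peers = new ArrayList<>();

    // Wire two nodes together (stands in for joining a topic/channel).
    public void join(ReplicatedMap peer) {
        peers.add(peer);
        peer.peers.add(this);
    }

    public void put(String key, Object value) {
        apply(key, value);
        for (ReplicatedMap peer : peers) {
            peer.apply(key, value); // stands in for an async topic message
        }
    }

    private void apply(String key, Object value) {
        local.put(key, value);
    }

    public Object get(String key) {
        return local.get(key);
    }
}
```

Unlike JNDI rebinds, there is no notion of a binding "owner" here, so any node may update any key.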
Alex Rogozinsky <[email protected]> wrote:
> I need to implement a replicable property in the cluster: each server could
> update it and new value should be available for all cluster.
--
Dimitri -
Is there a way to force the JNDI replication?
It is on by default, unless you specifically disabled it:
http://e-docs.bea.com/wls/docs61/javadocs/weblogic/jndi/WLContext.html
REPLICATE_BINDINGS
public static final java.lang.String REPLICATE_BINDINGS
Cluster-specific: Specifies whether tree modifications are replicated
and is only applicable when connecting to WebLogic Servers that are running
in a cluster. By default, any modification to the naming tree is replicated
across the cluster, which ensures that any server can act as a naming server
for the entire cluster. Setting this property to false changes this behavior
and should be done with extreme caution: a false setting means that modifications
to the tree caused by bind, unbind, createSubcontext, and destroySubcontext
will not be replicated.
Understand the implications of this property before changing its default (true).
Pothiraj <[email protected]> wrote:
> Is there a way to force the JNDI replication?
Dimitri
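For completeness, the property goes into the environment used to create the InitialContext. A sketch (the string literal for REPLICATE_BINDINGS is an assumption here; in real code use the WLContext constant itself):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class ReplicationOff {
    // Assumed value of weblogic.jndi.WLContext.REPLICATE_BINDINGS;
    // prefer the constant from weblogic.jar in real code.
    static final String REPLICATE_BINDINGS = "weblogic.jndi.replicateBindings";

    // Build a JNDI environment whose bindings stay local to one server.
    static Hashtable<String, String> nonReplicatingEnv(String providerUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, providerUrl);
        env.put(REPLICATE_BINDINGS, "false"); // modifications are NOT replicated
        return env;
    }
}
```

You would then pass this Hashtable to `new InitialContext(env)`; as the javadoc warns, do this only if you understand the implications.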
-
How to get cluster wide cache statistics
Please provide me an example using InvocationService to get cluster-wide cache statistics.
JK,
Thanks for the code snippet.
Now this is how I re-aligned my code for cluster-wide statistics collection. I am able to retrieve the CacheHits count as 2. Please let me know if this approach is correct.
A StatisticsAgent class which provides the invocation service to cluster-wide nodes:
package com.hp.dal.cache.cluster;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectInstance;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import com.tangosol.net.AbstractInvocable;
public class StatisticsAgent extends AbstractInvocable {
    private final String cacheName;
    public StatisticsAgent(String cacheName) {
        this.cacheName = cacheName;
    }
    @Override
    public void run() {
        Statistics stats = new Statistics();
        try {
            // Connect to the JMX server of the managing node (host/port are environment-specific).
            MBeanServerConnection mBeanServer = jmx("vbharadwaj5", 40002, null, null);
            // Sum CacheHits over the back-tier cache MBeans of every node.
            String jmxQuery = "Coherence:type=Cache,service=*,name=" + cacheName + ",nodeId=*,tier=back";
            Set<ObjectInstance> queryResults = mBeanServer.queryMBeans(new ObjectName(jmxQuery), null);
            long totalHits = 0;
            for (ObjectInstance objectInstance : queryResults) {
                ObjectName objectName = objectInstance.getObjectName();
                totalHits += (Long) mBeanServer.getAttribute(objectName, "CacheHits");
            }
            stats.setCacheHits(totalHits);
            setResult(stats);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    public static MBeanServerConnection jmx() {
        return ManagementFactory.getPlatformMBeanServer();
    }
    public static MBeanServerConnection jmx(String host, int port, String user, String password)
            throws Exception {
        String urlPath = "/jndi/rmi://" + host + ":" + port + "/jmxrmi";
        JMXServiceURL jmxUrl = new JMXServiceURL("rmi", "", 0, urlPath);
        Map<String, String[]> env = new HashMap<String, String[]>();
        if (user != null) {
            env.put(JMXConnector.CREDENTIALS, new String[] { user, password });
        }
        JMXConnector jmxConnector = JMXConnectorFactory.connect(jmxUrl, env);
        return jmxConnector.getMBeanServerConnection();
    }
}
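The Statistics class referenced by the agent isn't shown in the post; a minimal guess at its shape (the fields are an assumption, and it must be Serializable so the invocation service can send the result back to the caller):

```java
import java.io.Serializable;

// Hypothetical reconstruction of the Statistics result holder used by StatisticsAgent.
public class Statistics implements Serializable {
    private long cacheHits;

    public long getCacheHits() {
        return cacheHits;
    }

    public void setCacheHits(long cacheHits) {
        this.cacheHits = cacheHits;
    }

    @Override
    public String toString() {
        return "Statistics[cacheHits=" + cacheHits + "]";
    }
}
```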
A client class which will invoke the InvocationService:
package com.hp.dal.cache.cluster;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;
public class TestClient {
    public static void main(String[] args) throws Exception {
        NamedCache cache = CacheFactory.getCache("some-cache-name");
        cache.put("V", "Vidya");
        cache.put("P", "Sudheesh");
        cache.get("V");
        cache.get("P");
        InvocationService invocationService =
                (InvocationService) CacheFactory.getService("InvocationService");
        Set setMembers = invocationService.getInfo().getServiceMembers();
        System.out.println("SYNCHRONOUS EXECUTING AGENT");
        Map<Member, Object> statsMap =
                invocationService.query(new StatisticsAgent(cache.getCacheName()), setMembers);
        for (Entry<Member, Object> stats : statsMap.entrySet()) {
            System.out.println("Member: " + stats.getKey() + ", Stats: " + stats.getValue());
            Statistics obj = (Statistics) stats.getValue();
            System.out.println(obj.getCacheHits());
        }
    }
}
The corresponding cache config running on all nodes is:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>ExamplesPartitionedPofScheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>ExamplesPartitionedPofScheme</scheme-name>
<service-name>PartitionedPofCache</service-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<local-scheme>
<high-units>250M</high-units>
<unit-calculator>binary</unit-calculator>
<expiry-delay>0s</expiry-delay>
</local-scheme>
</internal-cache-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<invocation-scheme>
<scheme-name>invocation-service</scheme-name>
<service-name>InvocationService</service-name>
<thread-count>5</thread-count>
<autostart>true</autostart>
</invocation-scheme>
</caching-schemes>
</cache-config>
The corresponding tangosol-coherence-override.xml is:
<?xml version='1.0'?>
<coherence
xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config http://xmlns.oracle.com/coherence/coherence-operational-config/1.0/coherence-operational-config.xsd">
<cluster-config>
<member-identity>
<cluster-name>MyCluster</cluster-name>
</member-identity>
<multicast-listener>
<address>224.3.7.0</address>
<port>3155</port>
<time-to-live>100</time-to-live>
</multicast-listener>
</cluster-config>
<configurable-cache-factory-config>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value system-property="tangosol.coherence.cacheconfig">performance-coherence-config.xml
</param-value>
</init-param>
</init-params>
</configurable-cache-factory-config>
<management-config>
<managed-nodes>all</managed-nodes>
<allow-remote-management>true</allow-remote-management>
</management-config>
</coherence>
Server properties:
-Dtangosol.coherence.management=all
-Dtangosol.coherence.management.readonly=false -
JNDI replication\lookup problems
Hi,
We have a WL5.1 SP12 cluster that until today consisted of 2 Solaris boxes running
8 WL instances on the 2.6 OS.
Today we attempted to add a third Solaris box, running 2 SP12 WL instances on the
2.8 OS. These two new instances were identical to the other 8 in all respects apart
from one ejb jar which they did not deploy (because of Solaris 2.8\JNI\JDK1.3.1 incompatibilities).
We figured that these new JVMs could look up this bean via the clustered JNDI and execute
on one of the original 8 JVMs, so we did not deploy it on these new servers. This worked
fine in test (a 3-way cluster on one box running Solaris 2.8 and SP10).
However, when we cut the new box in this morning we got javax.naming.NameNotFoundExceptions
from the new JVMs.
These new JVMs appeared to start fine, and everything looked as it should on the console,
but still the error.
So, what could it be:
OS related - a cluster spanning the 2.6 and 2.8 OSs?
SP12 related?
Has anybody encountered anything like this before?
Thanks in advance.
Justin
Yes, the EJB classes are in the server classpath.
I assumed that JNDI replication occurred as a result of enabling clustering.
"Sabha" <[email protected]> wrote:
>Are the ejb home/remote interfaces in the server classpath of the 2 newer
>JVMs? Is jndi replication turned off?
>
>-Sabha
>
-
What is the best way to implement a cluster-wide object ID generator?
What is the best way to implement a cluster-wide object ID generator?
What is the best way to implement a cluster-wide
object ID generator?
Always use 3 because it is prime.
Alternatively, more information about the system and its needs might prompt alternative ideas, some of which are likely to be better than others for your particular implementation and system constraints. -
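Joking aside, one pattern that works well in practice is block allocation: a single authority (typically a database sequence) hands out ranges of IDs, and each node assigns IDs locally from its block, contacting the authority only when the block runs out. A sketch, with an in-process AtomicLong standing in for the shared sequence:

```java
import java.util.concurrent.atomic.AtomicLong;

// Block-allocating ID generator: each node reserves a range of IDs at a
// time, so the shared counter is hit only once per BLOCK_SIZE assignments.
public class BlockIdGenerator {
    private static final int BLOCK_SIZE = 1000;

    private final AtomicLong sharedSequence; // stand-in for a DB sequence / cluster singleton
    private long next;
    private long limit;

    public BlockIdGenerator(AtomicLong sharedSequence) {
        this.sharedSequence = sharedSequence;
    }

    public synchronized long nextId() {
        if (next >= limit) {
            // Reserve a fresh block; only this call touches the shared counter.
            next = sharedSequence.getAndAdd(BLOCK_SIZE);
            limit = next + BLOCK_SIZE;
        }
        return next++;
    }
}
```

Uniqueness holds because getAndAdd on the shared counter is atomic; the trade-offs are that IDs are not globally ordered and the rest of a block is lost when a node restarts.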
Hi All:
I get a problem with the cluster.
I have a class called OrderMonitor, used to function as a central
controller for the order entity beans. The OrderMonitor will call all the
order beans under some conditions. The order beans will also call the
OrderMonitor under some conditions.
Currently, the order beans look up this OrderMonitor through the local
cluster's JNDI; the OrderMonitor binds itself to the local cluster's
JNDI.
How can i make the beans in one cluster call the orderMonitor in
another cluster? And how can i make the OrderMonitor call all the beans
in all the clusters? Should the beans look up the OrderMonitor not in
the local cluster, but in the dispatcher machine?
Will weblogic/dispatcher do this automatically?
Thanks a lot!
wang minjiang
Just read the weblogic manual. It seems there is no particular dispatcher
machine in weblogic clusters (different from SilverStream).
So, the solution might be:
Implement OrderMonitor as an RMI object and register its stub with the local server;
it is subsequently copied to all other servers when they join.
Beans look up the OrderMonitor through the replicated JNDI and pass in
their own remote objects, rather than the beans themselves, so the monitor can
call back.
Any comments?
wang minjiang
Campus cluster with storage replication
Hi all..
we are planning to implement a campus cluster with storage replication over a distance of 4 km using the remote mirror feature of the Sun StorageTek 6140.
The primary storage (the one where the quorum resides) and the replicated secondary storage will be in separate sites interconnected with dedicated single-mode fiber.
The nodes of the cluster will use the primary storage, and the data from the primary will be replicated to the secondary storage using remote mirror.
Now, in case the primary storage fails completely, how can the cluster continue operation with the secondary storage? What is the procedure? What does the initial configuration look like?
Regards..
S
Hi,
a high level overview with a list of restrictions can be found here:
http://docs.sun.com/app/docs/doc/819-2971/6n57mi28m?q=TrueCopy&a=view
More details how to set this up can be found at:
http://docs.sun.com/app/docs/doc/819-2971/6n57mi28r?a=view
The basic setup would be to have 2 nodes, 2 storage boxes, and TrueCopy between the 2 boxes but no cross-cabling. The HAStoragePlus resource, being part of a service resource group, would use a device that has been "cldevice replicate"-ed by the administrator, so that the "same" device can be used on both nodes.
I am not sure how a failover is triggered if the primary storage box fails. But due to the "replication" mentioned above, SC knows how to reconfigure the replication in the case of a failover.
Unfortunately, due to a lack of HDS storage in my own lab, I was not able to test this setup; so this is all theory.
Regards
Hartmut
PS: Keep in mind that the only replication technology integrated into SC today is HDS TrueCopy. If you're thinking of doing manual failovers anyway, you could have a look at Sun Cluster Geographic Edition, which is more of a disaster-recovery-like configuration that combines 2 or more clusters and is able to fail over resource groups including replication; this product already supports more replication technologies and will support even more in the future. Have a look at http://docsview.sfbay.sun.com/app/docs/coll/1191.3
Cluster-wide invalidation using read-mostly pattern
Hello,
I have a question around the use of the read-mostly Entity Beans pattern with implicit invalidation (through specifying the read-only EJB name in the <invalidation-target> element of the read-write bean's DD).
When an update occurs in the read-write bean, is invalidation propagated to all nodes in a cluster, thereby forcing an ejbLoad() on the next invocation of an instance of the read-only bean?
I was reasonably certain that this was the case. It has been a while, but my memory is that even in 6.1, invalidation using the CachingHome interface (obviously not quite the same thing, but surely close) performed a cluster-wide invalidation. Unfortunately I don't have a cluster lying around to knock up a quick test case at the moment.
The reason for me raising the question is that if you search for "read-mostly" on dev2dev you will find a recent article from Dmitri Maximovich, "Peak performance tuning of CMP 2.0 Entity beans in WebLogic Server 8.1 and 9.0": http://dev2dev.bea.com/pub/a/2005/11/tuning-cmp-ejbs.html?page=3
This contains the worrying sentence:
"In contrast to the read-mostly pattern, which doesn't provide mechanisms to notify other nodes in the cluster that data was changed on one of the nodes, when a bean with optimistic concurrency is updated, a notification is broadcast to other cluster members, and the cached bean instances are discarded to prevent optimistic conflicts."
I don't particularly want to use optimistic concurrency in my current development, as the application is guaranteed sole access to the underlying database and, for the data we're concerned with, there are extremely infrequent updates. My first thought was that a read-mostly pattern would be ideal for our requirements. However, I would be extremely concerned by the prospect of stale data existing on some nodes, as I was planning on also setting read-timeout-seconds to zero.
Anyone who can shed some light on the subject would be much appreciated.
Thanks
Brendan Buckley
You are correct. The dev2dev article is not.
The caching home interface triggers an invalidation message across the cluster members. Their cache is marked dirty and the next call to the bean will force an ejbLoad.
-- Rob
WLS Blog http://dev2dev.bea.com/blog/rwoollen/ -
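For reference, the implicit-invalidation wiring described in this thread is declared in weblogic-ejb-jar.xml on the read-write bean. A sketch with placeholder bean names (check the element placement against the DTD for your WLS version):

```xml
<weblogic-enterprise-bean>
  <ejb-name>AccountReadWriteEJB</ejb-name>
  <entity-descriptor>
    <invalidation-target>
      <!-- Name of the read-only bean whose cache is invalidated on update -->
      <ejb-name>AccountReadOnlyEJB</ejb-name>
    </invalidation-target>
  </entity-descriptor>
</weblogic-enterprise-bean>
```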
PCD Release Cache Cluster Wide
Hello to all,
We have got problems with deployment to cluster nodes. I know there are two ways to release the cache. The first one requires catching
the right node and releasing the cache. This has to be done for every node.
I want to use the second way, 'Release Cache Cluster Wide', but in some
way I have problems forming the right URLs...
I used e.g.
https://frds00532.emea.zf-world.com:50001/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcontent!2fkeyDataCockpit!2fn!2fiViews!2fcom.zf.f
Can someone give me a hint or a link on how to form such a URL, and
which parts such a URL has to contain?
Thanks.
Does the replication in clustered JNDI also support removing replicated
bindings when you call unbind()?
When I programmatically bind a value (a String), it is replicated throughout
the cluster, but when I try to unbind, it only gets removed on one server.
Calling rebind with a new value doesn't work either; a message is printed to
the console saying there is a conflict. Has anyone seen this before, and/or
is there something I'm missing?
This is on weblogic5.1sp4.
This sounds like a bug.
I suggest that you file a bug report with our support organization. Be sure
to include a complete test case. They will also need information from
you -- please review our external support procedures:
http://www.beasys.com/support/index.html
Thanks,
Michael
Michael Girdley
BEA Systems Inc
JNDI replication and rebinding from multiple servers
Here is what I need:
Server1 binds ObjX to its JNDI tree (which replicates the object to the
other servers in the cluster).
Server2 does a lookup on ObjX, changes some values, and rebinds it back to
its JNDI tree (which should replicate the changes to the other servers,
including Server1).
The above does not happen because Server1 is known to be the owner of ObjX,
and thus instead of getting replicated data after Server2 rebinds, it gets
duplicate name errors.
Is there any way to make the above work the way I want it to?
It is bad to use JNDI to replicate application data/cache in the cluster.
If you are sure that you want to use multicast to replicate your data,
in 6.0 you can use JMS:
http://e-docs.bea.com/wls/docs60/jms/implement.html#1206428
Or you can use javagroups: http://sourceforge.net/projects/javagroups
John Boyd <[email protected]> wrote:
Yes, exactly.
"Dimitri Rakitine" <[email protected]> wrote in message
news:[email protected]..
Are you trying to use JNDI to replicate runtime data across the cluster?
John Boyd <[email protected]> wrote:
Here is what I need:
Server1 binds ObjX to it's JNDI tree (which replicates the object to the
other servers in the cluster)
Server2 does a lookup on ObjX, changes some values and rebinds it back
to
it's JNDI tree (which should replicate the changes to the other servers
including Server1).
The above does not happen because, Server1 is known to be the owner of
ObjX,
and thus instead of getting replicated data after Server2 rebinds, itgets
duplicate name errors.
Is there anyway to make the above work the way I want it to?--
Dimitri
Dimitri -
URL Resource and JNDI Replication
Do URLResources get replicated in the JNDI tree?
A JDBC resource can be created using the asadmin command.
It's a two-step process.
First, a JDBC connection pool has to be created using the following command:
asadmin create-jdbc-connection-pool --user admin --password adminadmin --host fuyako --port 7070 --datasourceclassname XA --restype javax.sql.DataSource --isolationlevel serializable --isconnectvalidatereq=true --validationmethod auto-commit --description "XA Connection" --property DatabaseName="jdbc\:pointbase\:server\:\/\/localhost\/sample":User=public:Password=public XA_connection_pool
Next, the JDBC resource has to be created using the following command:
asadmin create-jdbc-resource --user admin --password adminadmin --host fuyako --port 7070
--connectionpoolid XA_connection_pool --description
"creating a sample jdbc resource" sample_jdbc_resource
Please change the parameters as suitable to you.
The detailed description of these commands can be found as following URLs
http://java.sun.com/j2ee/1.4/docs/relnotes/cliref/hman1/create-connector-connection-pool.1.html
http://java.sun.com/j2ee/1.4/docs/relnotes/cliref/hman1/create-jdbc-resource.1.html -
Highly Available Cluster-wide IP
We have the following situation
1>. We have a resource group hosting samba
2>. We have one more resource group hosting a java server.
We need to make both of these resource groups dependent on a common logical hostname.
Do we create a separate resource group with a logical hostname resource? In that case, even if the adapter hosting the logical hostname goes down, the resource group is not switched, since there is no probe functionality for LogicalHostname.
How do we go about doing this?
Hi,
from your question I conclude that both services always have to run on the same node, as they have to run where the "common" logical IP is running. How about putting both services into a single resource group? This seems to be the easiest solution.
In most failure scenarios of logical IP addresses, a failover should be initiated. I must admit that I have never tested a RG which consisted only of a logical IP address.
Regards
Hartmut