JMS in cluster - why is it asymmetric?

We use WebLogic 8.1 in a 2-node cluster (plus 1 admin server).
          On each node there is a JMS server on a migratable target - let's say 'test1' and 'test2'.
          A distributed queue has 2 member destinations, one on each node. The JNDI name of the queue is 'test-dist', so the 2 destinations are test-dist@test1 & test-dist@test2.
          The situation seems to be completely symmetric. If I check the JNDI tree of each node, I can see all three of the previous JNDI names.
          OK
          Now I turn off one of the nodes.
          If I turn off node1, then in the JNDI tree of node2 there is:
          test-dist && test-dist@test2
          OK
          But if I had turned off node2, then in the JNDI tree of node1 I can find only
          test-dist@test1
          and 'test-dist' cannot be found, and thus cannot be accessed. Am I doing something wrong? Or isn't it supposed to be completely symmetric? If more details are needed, feel free to ask.
          Thanks for your remarks.

Hello,
          This does sound odd. If you can see the distributed destination and the other destinations in the JNDI tree of both nodes, it suggests to me that everything is deployed correctly.
          Can you double-check that you targeted your distributed destination at your cluster and selected all the WebLogic Server instances within that cluster (the default setting)?
          If the problem persists, I would raise a support case.
          Hussein Badakhchani
          www.orbism.com
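
To illustrate the symptom, here is a minimal lookup sketch (a sketch only, assuming WebLogic's t3 JNDI provider and the JNDI names from the post above; falling back to the member name just confirms the asymmetry, it is not a fix for the missing 'test-dist' binding):

    import java.util.Hashtable;
    import javax.jms.Queue;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NameNotFoundException;

    public class QueueLookup {
        public static Queue lookupQueue(String url) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, url); // e.g. "t3://node1:7001"
            Context ctx = new InitialContext(env);
            try {
                // Prefer the distributed queue's JNDI name.
                return (Queue) ctx.lookup("test-dist");
            } catch (NameNotFoundException e) {
                // Fall back to the surviving member destination, as in the
                // node2-down case described above.
                return (Queue) ctx.lookup("test-dist@test1");
            }
        }
    }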

Similar Messages

  • Which transport protocol is most efficient in JMS adaptor and why..???

    Hi all,
    Which transport protocol is most efficient in JMS adaptor and why..???
    Also can anyone tell me how to check queues in the integration server and in the reciever side....???
    If any one explain it rather than providing any link...i will be delighted...
    Thanks....
    Biplab

    "Which transport protocol is most efficient in the JMS adapter and why?"
    You have to select the JMS provider for the JMS adapter under Transport Protocol.
    The selection of the JMS provider could be made according to your cost estimation.
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/739c4186c2a409e10000000a155106/frameset.htm
    SonicMQ and IBM MQSeries are widely used.
    "Also, can anyone tell me how to check queues in the Integration Server and on the receiver side?"
    SMQ1 - outbound queues
    SMQ2 - inbound queues
    Regards,
    Prateek

  • Concurrent nodes reading from JMS topic (cluster environment)

    Hi.
    Need some help on this:
    Concurrent nodes reading from JMS topic (cluster environment)
    Thanks
    Denis

    After some thinking, I noted the following:
    1 - It's correct that only one node should subscribe to a topic at a time. Otherwise, the same message would be processed by the nodes multiple times.
    2 - In order to solve the load-balancing problem, I think the Topic should be replaced by a Queue. This way, each BPEL process on a node would poll for a message, and as soon as the message arrives, only one BPEL node gets the message and takes it off the Queue.
    The legacy JMS provider I mentioned in the post above is actually the Retek Integration Bus (RIB). I'm integrating Retek apps with E-Business Suite.
    I'll try to configure the RIB to provide both a Topic (for the existing application consumers) and a Queue (an exclusive channel for BPEL).
    Have you guys already tried an integration like this?
    Thanks
    Denis
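
    For illustration, a minimal competing-consumer sketch of the queue approach described above (a sketch only; the JNDI names are placeholders, not the actual RIB names):

        import javax.jms.Message;
        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueConnectionFactory;
        import javax.jms.QueueReceiver;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        // Each cluster node runs one of these; a queue delivers a given
        // message to exactly one of the competing consumers, which is the
        // load-balancing behavior a topic cannot provide.
        public class QueueWorker {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                QueueConnectionFactory cf =
                        (QueueConnectionFactory) ctx.lookup("jms/BpelConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/BpelQueue");

                QueueConnection conn = cf.createQueueConnection();
                QueueSession session =
                        conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(queue);
                conn.start();

                Message msg = receiver.receive(5000); // takes the message off the queue
                if (msg != null) {
                    // process it; consumers on the other nodes will not see it
                }
                conn.close();
            }
        }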

  • Load balance with JMS in cluster

              I would like to develop an MDB listening to a topic, which can be deployed in a
              WebLogic cluster, so that whenever a message is published to that topic, one and
              only one of the MDBs in a WLS server of the cluster will process the message. What
              is the best way to do this?
              thanks
              Zhou
              

    Do you want a topic that doesn't replicate between topic members, but replicates
              to all subscribers on a particular member? If so, then a distributed topic is NOT
              what you want to configure. Instead, you can do as you suggest: set
              JNDINameReplicated to false and put a topic with the same JNDI name on each server.
              Tom
              Tom Barnes wrote:
              >
              > x zhou wrote:
              >
              >> Can I do the following to achieve what I want:
              >
              > I don't know what you want. But a message sent to a topic
              > member still gets replicated to the other topic members.
              >
              > I'm guessing what you are looking for is a
              > distributed queue, not a distributed topic. I suggest
              > you look at the distributed queue option. A message
              > sent to a distributed queue will only get processed
              > by one MDB.
              >
              >> 1) create a distributed topic named "topic.distributed"
              >> that has "topic.1" and "topic.2" as its members on two
              >> WLS instances in a cluster
              >>
              >> 2) write an MDB that will be deployed separately to the 2 WLS
              >> instances in the cluster, with different weblogic-ejb-jar.xml
              >> files; the difference is the line:
              >>
              >> <destination-jndi-name>topic.1</destination-jndi-name> for server 1;
              >> <destination-jndi-name>topic.2</destination-jndi-name> for server 2;
              >>
              >> 3) create the publisher using the distributed topic (of JNDI name
              >> 'topic.distributed'); according to the documentation, it should send
              >> the message to one and only one of its members, and therefore the
              >> corresponding MDB in the WLS instance of the cluster which receives
              >> the message will be notified
              >>
              >> and this will give me multi-threading in the same WLS instance
              >> (because of the MDB) and load balancing across the whole cluster, right?
              >>
              >> Incidentally, it looks like I can make the topic members' JNDI names
              >> the same, as long as I un-check the "Replicate JNDI Name In Cluster"
              >> checkbox. Why is there such a box in the first place? Isn't the topic
              >> not clusterable?
              >>
              >> thanks
              >>
              >> Tom Barnes <[email protected]> wrote:
              >>
              >>> (1) Currently, there is no way in WL to get multiple consumers to share
              >>> a topic subscription - which is essentially what you are asking for.
              >>> The JMS standard doesn't support this either. This is not to say
              >>> that it wouldn't be a useful feature - I personally think it would be.
              >>>
              >>> (2) Why not use a queue?
              >>>
              >>> Tom
              >>>
              >>> X Zhou wrote:
              >>>
              >>>> I would like to develop an MDB listening to a topic, which can be
              >>>> deployed in a WebLogic cluster, so that whenever a message is
              >>>> published to that topic, one and only one of the MDBs in a WLS
              >>>> server of the cluster will process the message. What is the best
              >>>> way to do this?
              >>>>
              >>>> thanks
              >>>>
              >>>> Zhou
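
              For illustration, a minimal MDB sketch matching Tom's distributed-queue suggestion (a sketch only, written in EJB 2.x style to match the WLS era here; the queue is wired up via destination-jndi-name in weblogic-ejb-jar.xml as discussed above):

                  import javax.ejb.MessageDrivenBean;
                  import javax.ejb.MessageDrivenContext;
                  import javax.jms.Message;
                  import javax.jms.MessageListener;
                  import javax.jms.TextMessage;

                  // Deployed homogeneously to every server in the cluster; with a
                  // distributed queue, each message is consumed by exactly one
                  // instance, giving multi-threading per server plus cluster-wide
                  // load balancing.
                  public class WorkerMDB implements MessageDrivenBean, MessageListener {
                      private MessageDrivenContext ctx;

                      public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
                      public void ejbCreate() {}
                      public void ejbRemove() {}

                      public void onMessage(Message msg) {
                          try {
                              if (msg instanceof TextMessage) {
                                  String body = ((TextMessage) msg).getText();
                                  // process the message; the container acknowledges it
                              }
                          } catch (Exception e) {
                              // with container-managed transactions, force a redelivery
                              ctx.setRollbackOnly();
                          }
                      }
                  }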
              

  • JMS/Queue cluster question

              Hi
              I have some very basic cluster questions on JMS queues. Let's say Q1> I have 3 WLS
              in a cluster. I create the queue on only WLS#1 - then all the other WLS (#2 and #3)
              should have a stub in their JNDI tree for the queue which points to the queue on
              #1 - right? Basically what I am trying to achieve is to have the queue on one server
              and all the other servers have a pointer to it - I believe this is possible in a WLS
              cluster - right??
              Q2> Is there any way a client to the queue running on a WLS can tell whether the
              queue handle it is using is local (i.e. in the same server) or remote? Is the API createQueue(./queuename)
              going to help here?
              Q3> Is there any way to create a queue dynamically - I guess JMX is the answer - right?
              But I will take this question a bit further - let's say the Q1 answer is yes. In this
              case if server #1 crashes - then #2 and #3 have no queues. So if they try to create
              a replica of the queue (as on server #1) - pointing to the same filestore - can they
              do it?? - I want only one of them to succeed in creating the queue, and also the queue
              should have all the data of the #1 queue (a 1-to-1 replica).
              All I want is the concept of a primary and secondary queue in a cluster. Go on using
              the primary queue - but if it fails, use the secondary queue. Kind of like the HttpSession
              replication concept in clusters. My cluster's purpose is more failover than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So let's take the scenario in the case of a JMS cluster
              > and 7.0.
              >
              > I do not understand what you mean by HA framework?
              An HA framework is a third-party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as you have said, I can create a distributed queue with a forward delay attribute
              > set to 0 if I have the consumer on only one server, say server #1.
              > But still, if server #1 goes down, you say that the queues on server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 queue when
              > it went down - right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              > Why can't the other servers see them - they all point to the same store, right??
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers cannot share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS queues. Let's say Q1> I have 3 WLS
              > >> in a cluster. I create the queue on only WLS#1 - then all the other WLS (#2 and #3)
              > >> should have a stub in their JNDI tree for the queue which points to the queue on
              > >> #1 - right?
              > >
              > >It's not a stub. But essentially right.
              > >
              > >> Basically what I am trying to achieve is to have the queue on one server
              > >> and all the other servers have a pointer to it - I believe this is possible in a WLS
              > >> cluster - right??
              > >
              > >Certainly.
              > >
              > >> Q2> Is there any way a client to the queue running on a WLS can tell whether the
              > >> queue handle it is using is local (i.e. in the same server) or remote? Is the API
              > >> createQueue(./queuename) going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >> Q3> Is there any way to create a queue dynamically - I guess JMX is the answer - right?
              > >> But I will take this question a bit further - let's say the Q1 answer is yes. In this
              > >> case if server #1 crashes - then #2 and #3 have no queues. So if they try to create
              > >> a replica of the queue (as on server #1) - pointing to the same filestore - can they
              > >> do it??
              > >> - I want only one of them to succeed in creating the queue, and also the queue
              > >> should have all the data of the #1 queue (a 1-to-1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity, fine, or if you are
              > >willing to ensure this manually, then fine.
              > >
              > >> All I want is the concept of a primary and secondary queue in a cluster. Go on using
              > >> the primary queue - but if it fails, use the secondary queue. Kind of like the
              > >> HttpSession replication concept in clusters. My cluster's purpose is more failover
              > >> than load balancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination you want used most. Optionally, 7.0 will automatically forward
              > >messages from distributed destination members that have no consumers to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed destination features built into WL 7.0. If 7.0 is not an option,
              > >you can still approximate a simple distributed destination when running JMS servers
              > >in a "single-tier" configuration. Single-tier indicates that there is a local JMS
              > >server on each server that a connection factory is targeted at. Here is a typical
              > >scenario, where producers randomly pick which server, and consequently which part
              > >of the distributed destination, to produce to, while consumers in the form of MDBs
              > >are pinned to a particular destination and are replicated homogeneously to all
              > >destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the distributed queue "A". Remember, the JMS servers (and WL
              > >servers) must be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively become the distributed destination. Each destination should have
              > >the same name "A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take
              > >care to set the destination's "JNDINameReplicated" parameter to false. The
              > >"JNDINameReplicated" parameter is available in 7.0, 6.1SP3 or later, or 6.1SP2
              > >with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a
              > >JMS server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with destination
              > >"A", and configure its destination to be "JNDI_A". Do not specify a connection
              > >factory URL when configuring the MDB, as it can use the server's default JNDI
              > >context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a
              > >session as usual. Then producers look up the destination by calling
              > >javax.jms.QueueSession.createQueue(String). The parameter to createQueue requires
              > >a special syntax; the syntax is "./<queue name>", so "./A" works in this example.
              > >This will return a physical destination of the distributed destination that is
              > >local to the producer's connection. This syntax is available on 7.0, 6.1SP3 or
              > >later, and 6.1SP2 with patch CR072612.
              > >
              > >This design pattern allows for high availability: if one server goes down, the
              > >distributed destination is still available and only the messages on that one
              > >server become unavailable. It also allows for high scalability, as speedup is
              > >directly proportional to the number of servers on which the distributed
              > >destination is deployed.
              > >
              > >> TIA
              > >> Anamitra
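
              For illustration, here is the producer side of the 6.1 pattern Tom describes, as a minimal sketch (the connection factory's JNDI name "MyCF" is a placeholder; the "./A" syntax is the one quoted above):

                  import javax.jms.Queue;
                  import javax.jms.QueueConnection;
                  import javax.jms.QueueConnectionFactory;
                  import javax.jms.QueueSender;
                  import javax.jms.QueueSession;
                  import javax.jms.Session;
                  import javax.naming.InitialContext;

                  public class LocalMemberProducer {
                      public static void main(String[] args) throws Exception {
                          InitialContext ctx = new InitialContext();
                          QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("MyCF");
                          QueueConnection conn = cf.createQueueConnection();
                          QueueSession session =
                                  conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

                          // "./A" resolves to the physical queue named "A" that is local
                          // to whichever server this connection was load-balanced to
                          // (7.0, 6.1SP3+, or 6.1SP2 with patch CR072612, per the post).
                          Queue queue = session.createQueue("./A");

                          QueueSender sender = session.createSender(queue);
                          sender.send(session.createTextMessage("hello"));
                          conn.close();
                      }
                  }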

  • APIC Cluster - Why minimum of 3 controllers?

    Hi!
    I'm just getting started on learning about the Cisco ACI, and one of the things that struck me was that Cisco has recommended (or mandated?) a minimum of 3 APICs in a cluster. Is this a requirement? If so, why can't we just have 2 controllers in a cluster?
    In this webpage (http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-fabric/white-paper-c11-730021.html), there was a discussion on the number of controllers vs data loss, but there is no explanation about why 2 controllers can't be used.
    Thanks!

    Hello,
    To understand why three APICs is the recommended minimum, you must understand how the APICs distribute information among the three. All parts of ACI are datasets generated and processed by the Distributed Policy Repository, and the data for those APIC functions is partitioned into logically bounded subsets called shards (like a DB shard). A shard is then broken into three replicas, or copies. Each APIC has a replica of every shard, but only one APIC is the leader for a particular replica/shard. This is a way to distribute the workload evenly and load-balance processing across the cluster of three, as well as a failsafe in case an APIC goes down.
    Now that the theory is out of the way, imagine one of your three APICs goes down. The remaining two will negotiate who will now be the leader for the shards that the down APIC was in charge of. The workload is then load-balanced across the two, and the cluster becomes fully fit again. Working with 2 APICs is really inadvisable due to the split-brain condition. This occurs when APIC 1 and APIC 2 both think they are the leader for a shard and cannot agree, so the shard is in contention and the cluster is unfit ("data layer partially diverged"). With the cluster in this state it is inadvisable to make changes in the GUI; I don't remember if it's even allowed.
    In the case of only 1 APIC, that APIC does all the work; it is the leader for all shards, but if it goes down then you cannot make any changes at all. The data plane will continue forwarding, but since there is no APIC, there's no way to create new policies or make changes.
    Thanks for using the support forums! Hope this helps!
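
    To make the majority-quorum arithmetic behind the three-controller minimum concrete, here is a small sketch (this is the generic quorum rule, not APIC-specific internals):

        // With n replicas of a shard, electing a leader needs floor(n/2) + 1
        // votes. With n = 2 each node can claim exactly half, so a tie (split
        // brain) is unresolvable; n = 3 always yields a strict majority and
        // still does so after losing one node.
        public class QuorumMath {
            static int quorum(int replicas) {
                return replicas / 2 + 1;
            }

            public static void main(String[] args) {
                for (int n = 1; n <= 5; n++) {
                    int q = quorum(n);
                    boolean survivesOneFailure = (n - 1) >= q;
                    System.out.printf("replicas=%d quorum=%d survivesOneFailure=%b%n",
                            n, q, survivesOneFailure);
                }
            }
        }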

  • Trying to start the j2ee to be the JMS provider but why error

    Does anyone know why I can't start the j2ee server, which I hope to use as the JMS provider? I am trying the examples from the JMS tutorial.
    But it seems that I can't start j2ee in the first place, because an error says I have another instance running (but I didn't start anything in the first place). How should I solve this?
    How can I stop the j2ee server? I tried the command
    j2ee -stop but it doesn't work.
    Can anyone help?
    AG
    G:\jmseg\simple>j2ee -verbose
    J2EE server listen port: 1050
    java.lang.RuntimeException: Could not initialize j2ee server. Possible cause could be another instance of the server already running.
         at com.sun.enterprise.iiop.POAProtocolMgr.initializeNaming(POAProtocolMgr.java:134)
         at com.sun.enterprise.server.J2EEServer.run(J2EEServer.java:227)
         at com.sun.enterprise.server.J2EEServer.main(J2EEServer.java:918)
    java.lang.RuntimeException: Could not initialize j2ee server. Possible cause could be another instance of the server already running.
         at com.sun.enterprise.iiop.POAProtocolMgr.initializeNaming(POAProtocolMgr.java:134)
         at com.sun.enterprise.server.J2EEServer.run(J2EEServer.java:227)
         at com.sun.enterprise.server.J2EEServer.main(J2EEServer.java:918)
    java.lang.RuntimeException: Could not initialize j2ee server. Possible cause could be another instance of the server already running.
         at com.sun.enterprise.server.J2EEServer.run(J2EEServer.java:355)
         at com.sun.enterprise.server.J2EEServer.main(J2EEServer.java:918)
    J2EE server reported the following error: Could not initialize j2ee server. Possible cause could be another instance of the server already running.
    Error executing J2EE server ...
    G:\jmseg\simple>

    I got exactly the same message when I tried to start the j2ee server... It worked fine the second time round though, so I suggest you try it again.
    tim..
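
    If it happens again, one way to tell whether a stale instance is still holding the server's port (1050 in the log above) is to try binding it yourself - a small sketch:

        import java.io.IOException;
        import java.net.ServerSocket;

        public class PortCheck {
            public static void main(String[] args) {
                int port = 1050; // the J2EE server listen port from the log
                try (ServerSocket s = new ServerSocket(port)) {
                    System.out.println("Port " + port + " is free; no other instance holds it.");
                } catch (IOException e) {
                    System.out.println("Port " + port + " is in use; a previous j2ee"
                            + " instance (or another process) is probably still running.");
                }
            }
        }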

  • Jms in cluster / load balancing and failover

              Did I get it right???
              I have 1 admin server and 4 managed servers in 2 clusters: a development cluster
              and a test cluster.
              I now want to have load balancing with my JMS server, and I want to be able to migrate
              my JMS server in case of failure.
              For each cluster I have created:
              a connection factory targeted to the cluster
              a distributed destination with 2 queue members
              one JMS server, migratable, targeted on managed server 1, with destination 1 from
              the distributed destination
              one JMS server, migratable, targeted on managed server 2, with destination 2 from
              the distributed destination
              I expect this to provide load balancing between the 2 servers in the cluster, and
              I can migrate the JMS server to the running server if one of the servers fails.
              One thing now... If one server fails and I migrate the JMS server to the other
              server that is running, and I then restart the server that was down, what happens
              then - do I then have 3 JMS servers???
              [config.xml]
              

              "Kris" <[email protected]> wrote in message news:[email protected]...
              >
              > "Kawaljit Singh Sunny" <[email protected]> wrote:
              > >
              > >"Kris" <[email protected]> wrote in message
              > >news:[email protected]...
              > >>
              > >> Did I get it right ???
              > >>
              > >> I have 1 admin server and 4 managed servers in a 2 clusters, a
              development
              > >cluster
              > >> and a test cluster.
              > >> I now want to have loadbalancing with my jms server and I want to be
              > >able
              > >to migrate
              > >> my jms server in case of failer.
              > >>
              > >> for each cluster I have created
              > >> connectionFactory targeted to the cluster
              > >> Distributed destination with 2 queue members
              > >> One JMSServer migratable targeted on managed server 1, with destination
              > >1
              > >from
              > >> the ditributed destination
              > >> One JMSServer migratable targeted on managed server 2, with destination
              > >2
              > >from
              > >> the ditributed destination
              > >
              > >If the server where your JMSConnections are loadBalanced to goes down,
              > >the
              > >producers and consumers using this JMSConnection are closed.
              > >You have to recreate these producers and consumers.
              > >If the server where your Destination resides goes dow, the consumers
              > >are
              > >closed.
              > >If the producers JMSConnection is not on this server, the producer stays
              > >up.
              > >
              > >>
              > >>
              > >> I expect this to make loadbalancing between the 2 servers in the
              cluster,
              > >and
              > >> I can migrate the jms server if one of the server fails to the running
              > >server.
              > >>
              > >> One thing is now.....If one server fails and I migrate the jms server
              > >to
              > >the other
              > >> server that is running, and I then restart the server that was down,
              > >what
              > >is then
              > >> happening, do I then have 3 jms servers ???
              > >
              > >No you still have 2 JMSServers. JMS Migration is manual.
              > >
              > >>
              >
              > you say : No you still have 2 JMSServers. JMS Migration is manual.
              >
              > But if I manual migrate the jmsserver that was down to the running wls
              server,
              > that already have one jms server running, this wls server must then have 2
              jms
              > servers. And I boot the wls server that hosted the jms server that was
              down, this
              > will now have a running jms server. isn't that 3 jms servers ?
              Once you migrate a JMSServer from a WeblogicServer1 to WeblogicServer2,
              and then you boot WeblogicServer1, this JMSServer which was migrated should
              NOT be on WeblogicServer1.
              (You have migrated the JMSServer from WeblogicServer1 to WeblogicServer2)
              >
              > But I was thinking about that I could spare the migration part. If I have
              2 wls
              > servers and a jms server on each of them, and a destributed destination
              with 2
              > queue members that are persistent in a database. If a wls og just a jms
              server
              > goes down, I just have to reboot the server and it will run again. This
              way I
              > dont have to think about migration, or what ?
              Yes that is true.
              Irrespective of whether you have migration or not,
              only thing you need to do take care is to reconnect to weblogic server, if
              the the server where your JMSConnection is loadBalanced to goes down.
              There is no failover of JMSConnections. Producers inside this JMSConnection
              will be closed. You will have to create a new JMSConnection and a new
              Producer and continue with your production of JMS Messages.
              -sunny
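
              To illustrate Sunny's last point - JMS connections do not fail over, so the client must rebuild them - here is a minimal sketch (the JNDI names are placeholders; a real client would add a retry loop with backoff):

                  import javax.jms.ExceptionListener;
                  import javax.jms.JMSException;
                  import javax.jms.Queue;
                  import javax.jms.QueueConnection;
                  import javax.jms.QueueConnectionFactory;
                  import javax.jms.QueueSender;
                  import javax.jms.QueueSession;
                  import javax.jms.Session;
                  import javax.naming.InitialContext;

                  public class ReconnectingProducer implements ExceptionListener {
                      private QueueConnection conn;
                      private QueueSession session;
                      private QueueSender sender;

                      void connect() throws Exception {
                          InitialContext ctx = new InitialContext();
                          QueueConnectionFactory cf =
                                  (QueueConnectionFactory) ctx.lookup("MyConnectionFactory");
                          Queue queue = (Queue) ctx.lookup("MyDistributedQueue");
                          conn = cf.createQueueConnection();
                          conn.setExceptionListener(this); // fires when the hosting server dies
                          session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                          sender = session.createSender(queue);
                      }

                      public void onException(JMSException e) {
                          // The server the connection was load-balanced to went down:
                          // discard the old objects and build a fresh connection,
                          // session, and producer, as described above.
                          try { conn.close(); } catch (Exception ignored) { }
                          try {
                              connect();
                          } catch (Exception retryFailed) {
                              retryFailed.printStackTrace();
                          }
                      }
                  }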
              

  • JMS Channel Cluster Nodes-INACTIVE

    Hello All,
    We have a Sender - JMS Channel which is green state but the Cluster Node (10 of them) are in WAITING STATE - Channel_Inactive. And Nodes are in GREEN STATE.
    I have checked the Cache in Integration Directory where I could see RED entries and I have tried 'Repeat Cache Instance' , but in vain. Is it a fair idea to Run the Function-module 'LCR_CLEAR_CACHE' ? Does it have any impact?!
    Due to this 1 of the message is lying in the MS system (JMS stream) in Uncommitted State.
    ALL HAPPENING IN A PROD SYSTEM!!!
    Please find the screenshots attached.
    Regards
    KarthiSP

    Hi,
    Check the central Adapter Engine cache status in Cache Monitoring from the RWB - whether it is green or red. If it is red, check with the Basis team.
    Thanks,
    Naveen

  • Best practice configuring JMS in cluster

    Hi,
              I'm working with WebLogic 8.1.2 and I'm trying to determine a best-practice approach for configuring my JMS resources in a simple cluster.
              I initially just created a cluster with 2 servers, with each server having its own JMS server. Each distributed destination has a member on each of the JMS servers.
              Reading further into the WebLogic documentation, however, it talks about migratable targets and pinned services. The implication seems to be that there should be just one JMS server in the cluster, targeted to a migratable server (http://e-docs.bea.com/wls/docs81/cluster/failover.html#migratable).
              I can't find documentation describing a best-practice configuration. I'm unsure whether I should just have the one JMS server deployed to a migratable server, with each distributed destination having just one member, or whether there should be 2 JMS servers, 1 per cluster member, with each distributed destination having 2 members.
              I'm also unsure what the benefit of one approach over the other is with regard to failover. Am I correct in thinking that if a server instance holding JMS connections for producers/consumers fails, then the connections are closed and not recreated?
              Any article links, suggestions, explanations etc. appreciated.
              Thanks,
              Aoife

    The best thing to do depends on what you want out of your application. Using a distributed destination can help you if you want to ensure that producers will get load-balanced among several currently running destinations. It can also help the availability of consumers (as long as you set your forwarding delay parameters properly - by default, forwarding messages from one physical queue to the other does not happen). In neither case when using distributed destinations will your client-side artifacts be automatically reconnected (however, some level of automatic reconnection is coming in 9.0.1). Furthermore, any persistent messages on a physical destination will not suddenly be available anywhere else until the crashed machine has been brought back up.
              That is where migratable targets come in. If your application wants to fail over using some sort of redundant hardware to back up the disks (e.g. dual-ported SCSI drives), then migratable targets make more sense. You can script the migration of a JMSServer, complete with its store of persistent messages, from a failed machine to another machine. This increases the failover capability of your persistent messages without using a distributed queue (but you would not get the producer/consumer load balancing you get with DDs).
              So this is not an exact science. Some applications need the load balancing, others need the persistent migration capability. Most need both. There is nothing stopping you from using physical destinations in your distributed destination that are targeted to migratable JMSServers. You would then have the ability to migrate the persistent state of the destination, and also have load balancing of the producers and consumers.
              Hope that helps...
              John Wells (Aziz)
              [email protected]

  • Design input needed: JMS in cluster(s) for OSB

    Hi
    I have two admin servers, prod A and prod B, on two separate physical boxes.
    prod A has managed servers MS1 and MS2: together they form cluster A.
    prod B has MS3 and MS4: together they form cluster B.
    The client forwards a request to a load balancer, which forwards the request to either prod A or prod B based on load in the environment.
    Use case: I have to push some data into a JMS queue on the server. The client will call periodically and hit one OSB proxy service which basically gets data from the queue and returns it. Since I have two clusters, I need to create two queues, one each on prod A and prod B. Now consider the following scenario:
    The client hits the load balancer.
    The load balancer forwards the request to prod A. The queue on prod A was empty.
    Since I get an empty response, I forward the same request to prod B. The queue on prod B had data, so it returned it to the client.
    The issue here is that I need communication between prod A and prod B. Also, I need to implement manual failover in case prod A or prod B is down.
    My question is:
    Can more than one cluster, if they are not aware of each other (if they are, then together they form a single cluster, if I am not wrong), share resources such as queues?
    Maybe I am not clear, or a couple of my concepts are wrong...;-)
    Please share your thoughts.
    Thanks and Regards
    Swapnil Kharwadkar

    Barnes,
              Thanks for such a nice explanation. Let me tell you what I did earlier for my setup and what the problem is right now.
              I deployed the application, which uses JMS, in a single-cluster environment which has 2 managed servers, each on a different physical server.
              I configured one connection factory with the proper JNDI name and the default delivery mode as non-persistent, and deployed the connection factory on the cluster in which I deployed the application.
              I configured one JMS server and deployed it on one of the servers from the cluster with the migratable method and the persistent store set to none.
              I configured one destination under this JMS server and replicated the JNDI name in the cluster.
              The application can access messages from the queue on the server on which I deployed the JMS server, but not from the other one, even though I deployed the server as a migratable service. This is the problem: I can't access it from the other server.
              I am not sure what I am doing wrong. Please help me out.
              Jasmine

  • JMS Cluster

    My configuration for JMS in a cluster is as follows:
              1. I have 3 WL instances running in a cluster.
              2. I have 2 JMS connection factories with a client ID (unique for each) for durable
              subscription, deployed on all the servers and also the cluster.
              3. A JMSServer with a topic deployed on one server (since this is not
              clusterable).
              With this configuration I was able to publish messages and also receive
              messages.
              The problems I am facing are:
              1. When I kill the subscriber and restart, it gives a
              "weblogic.jms.common.InvalidClientIDException: clientID is used"
              exception when I try to create the TopicConnection. I am restarting WL
              Server every time I kill the subscriber.
              2. Durable subscription is not working in this configuration; I mean the
              subscriber is not getting the messages (after it is back up) sent by the
              publisher while the subscriber was inactive.
              Any kind of help will be highly appreciated.
              srini
              

    See the response to your other post.
              Ravi Krishnamurthy wrote:
              > Hello:
              > I have some more questions:
              >
              > 1. I have an application (that contains EJBs and MDBs) and JMS clustered
              > in a cluster called cls with 2 nodes.
              > I use distributed destinations (both topics and queues, and I don't have
              > any durable topics).
              >
              > When I target the individual JMS servers to the individual servers, then
              > the destinations are not bound.
              >
              > But if I change it to the migratable targets, then the destinations are
              > bound.
              >
              > Could someone tell me what I might be doing wrong?
              >
              > Thanks,
              > Ravi
              >
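
              Regarding the InvalidClientIDException in the original post, a minimal durable-subscriber sketch (a sketch only; the JNDI names are placeholders, and it assumes the client ID is NOT also configured on the connection factory - setting it in both places conflicts):

                  import javax.jms.Message;
                  import javax.jms.Session;
                  import javax.jms.Topic;
                  import javax.jms.TopicConnection;
                  import javax.jms.TopicConnectionFactory;
                  import javax.jms.TopicSession;
                  import javax.jms.TopicSubscriber;
                  import javax.naming.InitialContext;

                  public class DurableSubscriber {
                      public static void main(String[] args) throws Exception {
                          InitialContext ctx = new InitialContext();
                          TopicConnectionFactory cf =
                                  (TopicConnectionFactory) ctx.lookup("MyTopicFactory");
                          Topic topic = (Topic) ctx.lookup("MyTopic");

                          TopicConnection conn = cf.createTopicConnection();
                          // Must be unique per subscriber and set before any other call
                          // on the connection; the ID stays "in use" until close().
                          conn.setClientID("subscriber-1");
                          TopicSession session =
                                  conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

                          // The named subscription accumulates messages while this
                          // subscriber is down and replays them on reconnect.
                          TopicSubscriber sub = session.createDurableSubscriber(topic, "sub-1");
                          conn.start();
                          Message m = sub.receive(5000);

                          conn.close(); // releases the client ID for the next restart
                      }
                  }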
              

  • JMS sub-deployment targeted to cluster

    Hello,
    I have read in the Oracle documentation that we can use a sub-deployment within a JMS module targeted only to a single JMS server. But I have an application that creates JMS resources with deployment descriptors (application modules) and targets the sub-deployment to a cluster with two machines, each machine having a JMSServer. It seems to work (I see them in the JNDI tree on both machines). This is the jms.xml:
    Do you also use a sub-deployment targeted to a cluster, or shouldn't I? Thank you in advance ;)
    I am using WebLogic version 10.3.4.0
    <?xml version="1.0" encoding="UTF-8"?>
    <wls:weblogic-jms xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-jms"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
         <wls:quota name="InboundMessageQuota">
              <wls:bytes-maximum>1000000</wls:bytes-maximum>
              <wls:messages-maximum>10000</wls:messages-maximum>
         </wls:quota>
         <!-- Factories -->
         <wls:connection-factory name="InboundMessageConnectionFactory">
              <wls:sub-deployment-name>InboundMessage</wls:sub-deployment-name>
              <wls:jndi-name>jms.InboundMessageConnectionFactory</wls:jndi-name>
              <wls:security-params>
                   <wls:attach-jmsx-user-id>false</wls:attach-jmsx-user-id>
              </wls:security-params>
         </wls:connection-factory>
         <!-- Queues -->
         <wls:uniform-distributed-queue name="InboundMessageQueue">
              <wls:sub-deployment-name>InboundMessage</wls:sub-deployment-name>
              <wls:delivery-params-overrides>
                   <wls:delivery-mode>Non-Persistent</wls:delivery-mode>
              </wls:delivery-params-overrides>
              <wls:delivery-failure-params>
                   <wls:redelivery-limit>0</wls:redelivery-limit>
              </wls:delivery-failure-params>
              <wls:jndi-name>jms.InboundMessageQueue</wls:jndi-name>
         </wls:uniform-distributed-queue>
    </wls:weblogic-jms>

    Technically, cluster targeting a distributed destination is supported, and it'll work as long as you can be absolutely assured that the cluster will never host additional JMS servers - perhaps from future projects, or due to integrating third-party products. Otherwise, your distributed destinations will end up creating instances on JMS servers where they don't belong - a common mistake that can be tough to diagnose and untangle. I highly recommend staying with the best practice of using a subdeployment that references exactly the intended JMS servers. Note that a subdeployment can reference more than one JMS server.
    See [url http://download.oracle.com/docs/cd/E17904_01/web.1111/e13738/best_practice.htm#JMSAD455]WL JMS Configuration Best Practices[/url]
    Hope this helps,
    Tom

  • MDBs in 9.1 continue to consume JMS queues even after being deleted

    We have an MDB application that reads a batch message off of a JMS queue, archives it in a database, parses the batch message into individual messages, and writes them onto other JMS queues to be consumed by another application. Everything was running fine in WebLogic 8.1.5. However, due to problems with XA drivers and the MSDTC (predictable SQL Server crashes), we decided to upgrade to WebLogic 9.1 to take advantage of the LLR option.
              First, we had an issue where our MDBs were causing the following exception:
              ####<May 26, 2006 7:42:12 PM EDT> <Error> <JMX> <ist-clft2> <wltest1> <ExecuteThread: '1' for queue: 'default'> <<WLS Kernel>> <> <> <1148686932991> <BEA-149500> <An exception occurred while registering the MBean null.
              java.lang.IllegalArgumentException: Registered more than one instance with the same objectName : com.bea:ServerRuntime=wltest1,MessageDrivenEJBRuntime=RhapsodyMDB_DMBModule!JMSServer4@DMB_BEAN_QUEUE,Name=RhapsodyMDB_DMBModule!JMSServer4@DMB_BEAN_QUEUE,ApplicationRuntime=DataBrokerEAR1_2,Type=EJBPoolRuntime,EJBComponentRuntime=DataBrokerEJB new:[email protected] existing weblogic.ejb.container.monitoring.EJBPoolRuntimeMBeanImpl@7db003
                   at weblogic.management.jmx.ObjectNameManagerBase.registerObject(ObjectNameManagerBase.java:146)
                   at weblogic.management.mbeanservers.internal.WLSObjectNameManager.lookupObjectName(WLSObjectNameManager.java:133)
                   at weblogic.management.jmx.modelmbean.WLSModelMBeanFactory.registerWLSModelMBean(WLSModelMBeanFactory.java:86)
                   at weblogic.management.mbeanservers.internal.RuntimeMBeanAgent$1.registered(RuntimeMBeanAgent.java:104)
                   at weblogic.management.provider.internal.RegistrationManagerImpl.invokeRegistrationHandlers(RegistrationManagerImpl.java:205)
                   at weblogic.management.provider.internal.RegistrationManagerImpl.register(RegistrationManagerImpl.java:85)
                   at weblogic.management.runtime.RuntimeMBeanDelegate.register(RuntimeMBeanDelegate.java:320)
                   at weblogic.management.runtime.RuntimeMBeanDelegate.<init>(RuntimeMBeanDelegate.java:257)
                   at weblogic.management.runtime.RuntimeMBeanDelegate.<init>(RuntimeMBeanDelegate.java:222)
                   at weblogic.ejb.container.monitoring.EJBPoolRuntimeMBeanImpl.<init>(EJBPoolRuntimeMBeanImpl.java:32)
                   at weblogic.ejb.container.monitoring.MessageDrivenEJBRuntimeMBeanImpl.<init>(MessageDrivenEJBRuntimeMBeanImpl.java:49)
                   at weblogic.ejb.container.manager.MessageDrivenManager.initialize(MessageDrivenManager.java:503)
                   at weblogic.ejb.container.manager.MessageDrivenManager.setup(MessageDrivenManager.java:120)
                   at weblogic.ejb.container.manager.MessageDrivenManager.setup(MessageDrivenManager.java:146)
                   at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.createMDManager(MessageDrivenBeanInfoImpl.java:1481)
                   at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.createDDMDManagers(MessageDrivenBeanInfoImpl.java:1378)
                   at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.onDDMembershipChange(MessageDrivenBeanInfoImpl.java:1285)
                   at weblogic.jms.common.CDS$DD2Listener.listChange(CDS.java:454)
                   at weblogic.jms.common.CDSServer$DDHandlerChangeListener.statusChangeNotification(CDSServer.java:167)
                   at weblogic.jms.dd.DDHandler.callListener(DDHandler.java:318)
                   at weblogic.jms.dd.DDHandler.callListeners(DDHandler.java:344)
                   at weblogic.jms.dd.DDHandler.run(DDHandler.java:282)
                   at weblogic.jms.common.SerialScheduler.run(SerialScheduler.java:37)
                   at weblogic.work.ExecuteRequestAdapter.execute(ExecuteRequestAdapter.java:21)
                   at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
              >
              ####<May 26, 2006 7:42:13 PM EDT> <Info> <EJB> <ist-clft2> <wltest1> <ExecuteThread: '1' for queue: 'default'> <<WLS Kernel>> <> <> <1148686933069> <BEA-010060> <The Message-Driven EJB: RhapsodyMDB has connected/reconnected to the JMS destination: weblogic.jms.DMB_BEAN_QUEUE.>
              Generally this happened after there were cluster communication issues. Multicast messages were lost and our MDBs reconnected to the JMS queues, as indicated by the log below:
              ####<May 30, 2006 5:19:06 PM EDT> <Info> <EJB> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '54' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1149023946040> <BEA-010060> <The Message-Driven EJB: DataBrokerMDB has connected/reconnected to the JMS destination: weblogic.jms.PHINMS_DMB_QUEUE.>
              ####<May 30, 2006 5:19:10 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023950228> <BEA-000112> <Removing RAPBEA3S jvmid:720875810499147484S:cmts-rap-bea3:[7005,-1,-1,-1,-1,-1,-1]:DMBstg:RAPBEA3S from cluster view due to timeout.>
              ####<May 30, 2006 5:19:11 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023951009> <BEA-000115> <Lost 2 multicast message(s).>
              ####<May 30, 2006 5:19:11 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023951040> <BEA-000111> <Adding RAPBEA3S with ID 720875810499147484S:cmts-rap-bea3:[7005,-1,-1,-1,-1,-1,-1]:DMBstg:RAPBEA3S to cluster: DMBstg_cluster view.>
              This would cause the queues to eventually have hundreds of consumers and cause the server to fail.
              Basically, it seems as though the MDBs that are supposed to stop continue to attempt to process, while new threads connect to the JMS queues.
              I tried undeploying our application and deleted it from the configuration. However, there were still consumers on the respective queues, and when I sent messages, I got an error indicating a "Class Not Found exception", because the EJB was undeployed and deleted from the configuration; the MDB component, however, was not, and it continued to listen for messages. In 8.1.5, as soon as the application was undeployed, there were zero consumers on the JMS queues.
              I have read the posts about a soon-to-be-released fix that would have the MDBs connect only to the local queues and not go out to the cluster. Would this fix my issue?
              Is there something to configure in the deployment descriptor that will cause it to disconnect and not spawn so many consumers on the JMS queues?
              Why is it that the number of MDB consumers on the JMS queues stayed static in 8.1.5, but is erratic in 9.1, even after I set our 9.1 server to use the 8.1.5 execute queue policy? Help would be much appreciated.

    I recommend contacting customer support. There's a known problem with MDBs listening to distributed destinations that are local to the same cluster as the MDB; your problem may be related (the clue is that the stack trace contains jms.dd.DDHandler.callListeners()). The problem is that the MDB connects to all physical queues in a distributed destination rather than just the local queue.
              Tom

  • Simple JMS Configuration on OC4J Developers Preview

    What is the minimum required to configure the default OC4J JMS?
    I have a client deployed with the server that tries to get a Topic Connection Factory ....
    InitialContext ctx = new InitialContext();
    tConFactory = (TopicConnectionFactory)ctx.lookup("java:comp/env/jms/TopicConnectionFactory");
    tCon = tConFactory.createTopicConnection();
    tSession = tCon.createTopicSession(false,Session.AUTO_ACKNOWLEDGE);
    but the lookup fails with:
    javax.naming.NameNotFoundException: jms/TopicConnectionFactory not found in DriveStartup
    In jms.xml I have the following:
    <jms-server port="9127">
         <topic-connection-factory location="jms/TopicConnectionFactory" port="9127" password="2143768880" username="admin" />
         <!-- path to the log-file where JMS-events/errors are stored -->
         <log>
              <file path="../log/jms.log" />
         </log>
    </jms-server>
    What else do I have to do?
    Cheers, Paul

    Hi Paul,
    I had similar problems to the one you describe.
    I got the JMS example (CoffeeMaker) running, but had problems with my own application (Object not bound).
    In my case, I got it running when I used only the absolute necessities (oc4j.jar, junit.jar in my case) in my classpath
    (of course the application client XMLs must contain your lookup names, but I did not!!! have to set up
    my queue in jms.xml (wonder why)).
    You might try this (oc4j.jar references the required libs for jms, jndi, etc. internally).
    [Comment on "absolute necessities": of course I have a CLASSPATH which contains only the necessary libs.
    But since we are working on a framework which does not only want to connect to OC4J JMS (or EJB), I have
    a CLASSPATH containing other libs. I have to find the correct order, I guess.]
    But still my question to this forum: how is it possible that a queue is started "implicitly" (that means it is
    not configured in jms.xml)? Is this desired or mandatory (the Sun J2EE reference implementation does the same)?
    Or am I missing something?
    Regards,
    Armin
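
    As a quick sanity check of Paul's original problem, you can test whether the factory from jms.xml is bound at all by looking up its global location instead of the java:comp/env name (a sketch; if this succeeds, the missing piece is most likely the resource-ref mapping in the client's deployment descriptor):

        import javax.jms.TopicConnectionFactory;
        import javax.naming.InitialContext;

        public class LookupTest {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // "jms/TopicConnectionFactory" matches the location attribute
                // declared in the jms.xml shown above.
                TopicConnectionFactory cf = (TopicConnectionFactory)
                        ctx.lookup("jms/TopicConnectionFactory");
                System.out.println("Found: " + cf);
            }
        }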
