JMS Failover with Distributed Destinations in 7.0

          How does JMS failover with distributed destinations in WL 7.0?
          In an environment using file stores for persistent messages, can a working server
          automatically pick up unprocessed and persisted messages from a failed server?
          If so, what's the best way to set this up?
          Or, is this completely manual? In other words, do we have to bring up a new server
          pointing to the location of the file store from the failed server?
          

          It appears that two JMS servers cannot share the same file store and, I'm assuming,
          two file stores cannot use the same directory for persistence.
          So the HA you're talking about is something like Veritas automatically restarting
          a server (or starting a new one) to process the messages that were left unprocessed
          in the persistent queue at the time of failure, with the file store residing on some
          sort of HA disk array.
          The key point is that once a message arrives at a server, it must be processed
          by that server or, if that server fails, by a server configured identically to
          the one that failed so that it picks up the unprocessed messages. The message
          can't be processed by just any other server in the cluster.
          Or is there some trick that could be employed to copy messages from the file store of
          the failed server and repost them to the still-operating servers?
          "Zach" <[email protected]> wrote:
          >Unless you have some sort of HA framework/hardware, this is a manual
          >operation. You either point to the existing persistent storage (shared
          >storage or JDBC connection pool), or you move the physical data.
          >
          >_sjz.
          >
          >"Jim Cross" <[email protected]> wrote in message
          >news:[email protected]...
          >>
          >>
          >> How does JMS failover with distributed destinations in WL 7.0?
          >>
          >> In an environment using file stores for persistent messages, can a working server
          >> automatically pick up unprocessed and persisted messages from a failed server?
          >> If so, what's the best way to set this up?
          >>
          >> Or, is this completely manual? In other words, we have to bring up a new server
          >> pointing to the location of the file store from the failed server?
          >
          >
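A hedged sketch of Zach's shared-storage option: both the original server and its replacement define a JMSServer whose file store points at the same directory on the HA disk array (only one may be booted at a time, since two JMS servers cannot share a store). The names and paths below are hypothetical, not taken from the original posts:

```xml
<!-- Illustrative config.xml fragment (names and paths are hypothetical).
     The store directory lives on shared/HA storage; after the original
     server fails, a replacement server configured identically is booted
     against the same store so it can recover the persisted, unprocessed
     messages. -->
<JMSFileStore
    Name="SharedFileStore"
    Directory="/ha-array/jms/store"/>
<JMSServer
    Name="JMSServer1"
    Store="SharedFileStore"
    Targets="server1"/>
```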
          

Similar Messages

  • Using Messaging Bridge with Distributed Destinations

    I'm having some problems using Distributed Destinations with the Messaging Bridge in WLS 7.0sp2. Our configuration consists of the following:
              * WLS Server1:
              * JMS Server1:
              * inbound queue
              * outbound queue
              * JMS Server2:
              * inbound queue
              * outbound queue
              * WLS Server 2:
              * JMS Server3:
              * inbound queue
              * outbound queue
              A distributed destination is configured for the inbound and outbound queues.
              Two messaging bridges are configured to bridge from WLS JMS to Tibco JMS using the distributed destinations.
              Everything is working fine for the inbound, but we have found that the outbound messaging bridge is picking up messages from only one of the two JMS servers on the first WLS instance. When monitoring, it can be seen that there is no consumer registered against the second JMS server.
              Any suggestions?
              

    Hi Rob,
              Consumers on distributed destinations are always pinned
              to a single physical destination on creation. Only
              producers can round-robin each message. When a
              distributed destination is used as a source destination,
              you will need to configure a bridge per distributed
              destination. Alternatively, you can enable
              forwarding between the physical queues, which automatically
              forwards messages from physical queues with
              no consumers to physical queues that have consumers
              (but the extra hop impacts performance). For more information,
              consult the JMS documentation.
              Tom
              Rob McArthur wrote:
              > I'm having some problems using Distributed Destinations with the Messaging Bridge in WLS 7.0sp2. Our configuration consists of the following:
              > * WLS Server1:
              > * JMS Server1:
              > * inbound queue
              > * outbound queue
              > * JMS Server2:
              > * inbound queue
              > * outbound queue
              > * WLS Server 2:
              > * JMS Server3:
              > * inbound queue
              > * outbound queue
              >
              > A distributed destination is configured for the inbound and outbound queues.
              >
              > Two messaging queues are configured so that bridge from WLS JMS to Tibco JMS using the distributed destinations.
              >
              > Everything is working fine for the inbound, but we have found that the outbound messaging bridge is picking up messages from only one of the two JMS servers on the first WLS instance. When monitoring, it can be seen that there is no consumer registered against the second JMS server.
              >
              > Any suggestions?
              >
              >
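The forwarding Tom mentions is controlled by a forward delay on the distributed queue: members with no consumers forward their messages to members that have consumers once the delay expires. A hedged config.xml sketch (attribute name and placement per my reading of the 7.x/8.1 JMS configuration, so treat them as assumptions; names are illustrative):

```xml
<!-- Illustrative fragment: members of this distributed queue that have
     no consumers forward their messages after 10 seconds; a value of -1
     (the default) disables forwarding. -->
<JMSDistributedQueue
    Name="OutboundDQ"
    JNDIName="outbound.dq"
    ForwardDelay="10"
    Targets="clusterA"/>
```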
              

  • JMS cluster and distributed destination load balancing question

              Hi All,
              Scenario: 2 WL 7 servers in a cluster, with the distributed queue on both of them, and
              both servers have an MDB deployed for the queue. Now if a producer in server
              #1 writes to the queue, it will write to the local queue, right?
              In that case, will the local MDB pick up the message, or can that be load balanced?
              Or can the write itself be load balanced?
              I really want either the write or the read to be load balanced, but I suspect
              server affinity will play a mess here. Can anyone please clarify?
              thanks
              Anamitra
              


  • Getting load-balancing with distributed destination to work...

    Hello,
              I'm trying to set up a proof of concept for balancing heavy load over several JMS server instances on WLS 8.1 SP4:
              I have
              - 2 managed servers in 1 cluster plus adminserver on 1 machine (WinXP)
              - 1 JMS server on each server (no migrateable targets used), having 1 physical queue
              - 1 distributed queue deployed to the cluster, consisting of the two physical queues
              - 1 connection factory deployed to the cluster with round-robin load-balancing enabled, and server affinity disabled
              - 1 test JSP using the connection factory from above, doing a complete re-connection per test message
              - 1 MDB ejb module deployed to the cluster
              and the result is:
              1. Calling the JSP through the second server instance load-balances messages on both JMS servers, that's fine ...
              BUT
              2. Calling the JSP through the first server instance processes all messages on the first server instance's JMS server, no message is ever sent to the second server.
              What could be the reason for the different behaviour of both servers?
              extract from config.xml used:
              <Cluster
              MulticastAddress="237.0.0.1"
              Name="clusterA"/>
              <EJBComponent Name="jtest"
              Targets="clusterA"
              URI="ejb/test.jar"/>
              <WebAppComponent Name="test"
              Targets="clusterA"
              URI="web/test"/>
              <JMSConnectionFactory
              JNDIName="ConnectionFactory"
              LoadBalancingEnabled="true"
              ServerAffinityEnabled="false"
              Targets="clusterA"/>
              <JMSServer
              Name="JMSServer1"
              Store="JMSFileStore1"
              Targets="server1">
              <JMSQueue
              JNDIName="TestQueue.server1"
              JNDINameReplicated="true"
              Name="TestQueue-server1"/>
              </JMSServer>
              <JMSServer
              Name="JMSServer2"
              Store="JMSFileStore3"
              Targets="server2">
              <JMSQueue
              JNDIName="TestQueue.server2"
              JNDINameReplicated="true"
              Name="TestQueue-server2"/>
              </JMSServer>
              <JMSDistributedQueue
              JNDIName="TestQueue.DD"
              Name="TestQueueDD"
              Targets="clusterA">
              <JMSDistributedQueueMember
              JMSQueue="TestQueue.server1"
              Name="TestQueue.server1Member"/>
              <JMSDistributedQueueMember
              JMSQueue="TestQueue.server2"
              Name="TestQueue.server2Member"/>
              </JMSDistributedQueue>
              Cheers
              Martin

    OK, solved: it works as expected when running on two Solaris hosts.

  • How do you get distributed destinations to work ?

              Hi all,
              I'm trying to get Distributed JMS Destinations to work, but
              still without success. This is the situation:
              Node1---->JMS Server1------>Queue1 [Distributed destination1]
              Node2---->JMS Server2------>Queue2 [Distributed destination1]
              I have a cluster with 2 Nodes.
              I have created 2 JMS Servers, each one targeted at one Node.
              Then I have created 2 Queues and added them to a Distributed Destination as members.
              Now I start the QueueSender (pointing to the Distributed Destination's JNDI)
              I start 2 QueueReceivers (pointing to the JNDI of the 2 Queues).
              The problem is that when one of the two nodes, let's say Node1, fails
              messages aren't dispatched to the JMS Server2.
              So I wonder: Is it possible that if one of the 2 JMS Server fails
              the message is redirected to the other JMS Server ?
              Thanks a lot
              Francesco
              

    Receivers only load-balance once - when
              they are first created. The JMS Performance
              Guide explores the implications of this.
              Keep in mind that the distributed destination
              feature was optimized for the case where
              receivers (generally MDBs) run on the same
              servers as the distributed destination
              instances, which simply load balances each
              receiver to its local destination instance.
              Otherwise, if the receivers
              are not co-located with distributed destination
              instances, the load-balancing
              configuration requires special care to ensure
              that no physical destination is left
              unserved by receivers.
              One solution for the latter problem, as you
              already wrote below, is to receive directly from the
              physical destinations - which removes the
              load-balancing decision altogether.
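Tom's distinction above can be illustrated with a toy model: a producer may pick a new member per send(), while a receiver is bound to one member at creation time. This is plain Java, not the WebLogic API; all class and member names are invented for illustration:

```java
import java.util.List;

// Toy model: a producer round-robins each send() across distributed
// destination members, while a consumer is pinned to the single member
// chosen when it is created.
public class BalancingSketch {
    static class RoundRobinProducer {
        private final List<String> members;
        private int next = 0;
        RoundRobinProducer(List<String> members) { this.members = members; }
        String send() {                         // each message may land on a different member
            String m = members.get(next);
            next = (next + 1) % members.size();
            return m;
        }
    }
    static class PinnedConsumer {
        private final String member;            // chosen once, at creation
        PinnedConsumer(List<String> members) { this.member = members.get(0); }
        String receiveFrom() { return member; }
    }

    public static void main(String[] args) {
        List<String> members = List.of("JMSServer1", "JMSServer2");
        RoundRobinProducer p = new RoundRobinProducer(members);
        PinnedConsumer c = new PinnedConsumer(members);
        System.out.println(p.send() + " " + p.send() + " " + p.send()); // alternates members
        System.out.println(c.receiveFrom() + " " + c.receiveFrom());    // always the same member
    }
}
```

This is why a receiver (or bridge) created while only one member is reachable stays on that member even after the other comes back.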
              Francesco wrote:
              > Hi,
              > thanks a lot for your clean answers.
              > Ps has anybody got an answer about my last question (That is, should I lookup
              > the JNDI
              > of the single Queue or of the Distributed Destination in order to get distributed
              > destination to work ?)
              > Thanks a lot
              > Francesco
              >
              > Tom Barnes <[email protected].bea.com>
              > wrote:
              >
              >>It is normal for a sender to get an exception
              >>when its connection's host server goes down.
              >>The client connection's host server does
              >>not change for the life of the connection,
              >>and does not automatically fail-over.
              >>Even when the client's connection host stays up
              >>but the JMSServer hosting the current
              >>distributed destination instance goes down, the sender
              >>may still get an exception. Such an exception
              >>can indicate that there is ambiguity as to
              >>whether or not the JMS server received the
              >>message from the send().
              >>
              >>The standard way to handle
              >>such exceptions is to re-establish the connection,
              >>session, and producer on the client (usually
              >>using the exact same code that initialized
              >>these resources to start with.)
              >>
              >>In addition, it is highly recommended
              >>for the client to register exception listeners
              >>on both the connection and on the WLSession.
              >>
              >>Tom
              >>
              >>Francesco wrote:
              >>
              >>
              >>>"Barry Myles" <[email protected]> wrote:
              >>>
              >>>
              >>>>Hi Francesco
              >>>>
              >>>>Try creating a connection factory that has Load balanced ticked and
              >>
              >>server
              >>
              >>>>affinity
              >>>>unticked
              >>>>
              >>>>now make sure that the distributed destinations load balancing property
              >>>>is set
              >>>>to Round Robin
              >>>>
              >>>>
              >>>>
              >>>>>I start 2 QueueReceivers (pointing to the JNDI of the 2 Queues).
              >>>>
              >>>>Also when the QueueRecievers are running do you see one consumer attached
              >>>>to each
              >>>>physical queue?
              >>>>
              >>>>HTH
              >>>>
              >>>>
              >>>>"Francesco" <[email protected]> wrote:
              >>>>
              >>>>
              >>>>>Hi all,
              >>>>>I'm trying to getting to work Distributed JMS Destinations. But
              >>>>>still without success. This is the sitation:
              >>>>>
              >>>>>Node1---->JMS Server1------>Queue1 [Distributed destination1]
              >>>>>Node2---->JMS Server2------>Queue2 [Distributed destination1]
              >>>>>
              >>>>>I have a cluster with 2 Nodes.
              >>>>>I have created 2 JMS Servers each one targetted on one Node.
              >>>>>Then I have created 2 Queues and added them to a Distributed Destination
              >>>>>as member.
              >>>>>
              >>>>>Now I start the QueueSender (pointing to the Distributed Destination's
              >>>>>JNDI)
              >>>>>I start 2 QueueReceivers (pointing to the JNDI of the 2 Queues).
              >>>>>
              >>>>>The problem is that when one of the two nodes, let's say Node1, fails
              >>>>>messages aren't dispatched to the JMS Server2.
              >>>>>
              >>>>>So I wonder: Is it possible that if one of the 2 JMS Server fails
              >>>>>the message is redirected to the other JMS Server ?
              >>>>>Thanks a lot
              >>>>>Francesco
              >>>>
              >>>Hi Barry,
              >>>I have tried to modify the connection factory as you said but
              >>>I'm still stuck. Basically when I shut down one of the two Nodes, the
              >>
              >>Sender
              >>
              >>>receives an exception "IllegalStateException: Producer is closed".
              >>>
              >>>Could you tell me if it's basically correct that :
              >>>
              >>>Sender----->looks up the Distributed Queue
              >>>Receivers---->look up the single Queues (which belong to the Distr.
              >>
              >>Queue)
              >>
              >>>Thanks again
              >>>Francesco
              >>
              >
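The recovery pattern Tom recommends above (re-create the connection, session, and producer using the same code that initialized them) can be factored into a small, generic retry helper. This is a sketch of the pattern only, not a WebLogic API; the names are invented:

```java
import java.util.concurrent.Callable;

// Generic sketch of the recovery advice: on a send failure, rebuild the
// JMS resources from scratch and retry, up to a bounded number of attempts.
public class SendRetry {
    public static <T> T withRetry(Callable<T> setupAndSend, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // setupAndSend should re-create the connection, session,
                // and producer each time, exactly as at first initialization,
                // then perform the send.
                return setupAndSend.call();
            } catch (Exception e) {
                last = e;   // e.g. "Producer is closed" after a server failure
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        String result = withRetry(() -> {
            attempts[0]++;                      // simulate two failed sends
            if (attempts[0] < 3) throw new IllegalStateException("Producer is closed");
            return "sent";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

In Francesco's scenario, setupAndSend would look up the distributed queue, create the connection, session, and producer, and send; a retried attempt can then land on a surviving member.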
              

  • WebLogic 10.3 jms Uniform Distributed Destination

    We are running WL Server 10.3 on Suse 11.0. I have created a Cluster, Servers, JMS Servers, Connection Factory and a Distributed Destination. On the Connection Factory, I have the "Server Affinity" checkbox unchecked. Server/Cluster all look good. I am using the jmsfullclient.jar for the test.
    When attempting to access the distributed destination, I get the following error on the second message. If I turn on "Session Affinity" in the Cluster, the problem does not arise, but I lose the value of the distributed destination. Any suggestions and assistance would be appreciated:
    weblogic.jms.common.JMSException: No failover destination.
         at weblogic.jms.dispatcher.DispatcherAdapter.convertToJMSExceptionAndThrow(DispatcherAdapter.java:110)
         at weblogic.jms.dispatcher.DispatcherAdapter.dispatchSyncNoTran(DispatcherAdapter.java:61)
         at weblogic.jms.client.JMSProducer.toFEProducer(JMSProducer.java:1275)
         at weblogic.jms.client.JMSProducer.deliveryInternal(JMSProducer.java:783)
         at weblogic.jms.client.JMSProducer.sendInternal(JMSProducer.java:541)
         at weblogic.jms.client.JMSProducer.sendWithListener(JMSProducer.java:394)
         at weblogic.jms.client.JMSProducer.send(JMSProducer.java:384)
         at weblogic.jms.client.WLProducerImpl.send(WLProducerImpl.java:970)
         at com.overstock.util.Example.main(Example.java:44)
    Caused by: weblogic.jms.common.JMSException: No failover destination.
         at weblogic.jms.frontend.FEProducer.pickNextDestination(FEProducer.java:750)
         at weblogic.jms.frontend.FEProducer.sendRetryDestination(FEProducer.java:1092)
         at weblogic.jms.frontend.FEProducer.send(FEProducer.java:1399)
         at weblogic.jms.frontend.FEProducer.invoke(FEProducer.java:1460)
         at weblogic.messaging.dispatcher.Request.wrappedFiniteStateMachine(Request.java:961)
         at weblogic.messaging.dispatcher.DispatcherServerRef.invoke(DispatcherServerRef.java:276)
         at weblogic.messaging.dispatcher.DispatcherServerRef.handleRequest(DispatcherServerRef.java:141)
         at weblogic.messaging.dispatcher.DispatcherServerRef.access$000(DispatcherServerRef.java:34)
         at weblogic.messaging.dispatcher.DispatcherServerRef$2.run(DispatcherServerRef.java:111)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: weblogic.messaging.dispatcher.DispatcherException: could not find Server null
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:176)
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherFindOrCreate(DispatcherManager.java:58)
         at weblogic.jms.dispatcher.JMSDispatcherManager.dispatcherFindOrCreate(JMSDispatcherManager.java:219)
         at weblogic.jms.dispatcher.JMSDispatcherManager.dispatcherFindOrCreateChecked(JMSDispatcherManager.java:230)
         at weblogic.jms.frontend.FEProducer.findDispatcher(FEProducer.java:825)
         at weblogic.jms.frontend.FEProducer.sendRetryDestination(FEProducer.java:995)
         ... 9 more
    Caused by: javax.naming.NameNotFoundException: Unable to resolve 'weblogic.messaging.dispatcher.S:null'. Resolved 'weblogic.messaging.dispatcher'; remaining name 'S:null'
         at weblogic.jndi.internal.BasicNamingNode.newNameNotFoundException(BasicNamingNode.java:1139)
         at weblogic.jndi.internal.BasicNamingNode.lookupHere(BasicNamingNode.java:252)
         at weblogic.jndi.internal.ServerNamingNode.lookupHere(ServerNamingNode.java:182)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:206)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.WLEventContextImpl.lookup(WLEventContextImpl.java:254)
         at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:380)
         at javax.naming.InitialContext.lookup(InitialContext.java:392)
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:172)
         ... 14 more
    My client code is extremely basic and is the following:
    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class Example {
        public static void main(String[] args) {
            String providerUrl = "t3://localhost:7003,localhost:7005";
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, providerUrl);
            try {
                InitialContext ctx = new InitialContext(ht);
                ConnectionFactory connFactory = (ConnectionFactory) ctx.lookup("connectionfactory");
                Destination dest = (Destination) ctx.lookup("distributedqueue");
                Connection conn = null;
                Session session = null;
                MessageProducer p = null;
                try {
                    conn = connFactory.createConnection();
                    conn.start();
                    System.out.println("Thread:" + Thread.currentThread().getId()
                            + " got a connection " + conn.hashCode());
                    session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    p = session.createProducer(dest);
                    System.out.println("Thread:" + Thread.currentThread().getId() + " started a connection");
                    for (int i = 0; i < 1000; i++) {
                        p.send(session.createTextMessage());
                    }
                    System.out.println("FinishedRunning:" + Thread.currentThread().getId());
                } catch (JMSException e) {
                    e.printStackTrace();
                } finally {
                    if (p != null) {
                        try { p.close(); } catch (JMSException e) { e.printStackTrace(); }
                    }
                    if (session != null) {
                        try { session.close(); } catch (JMSException e) { e.printStackTrace(); }
                    }
                    if (conn != null) {
                        try { conn.close(); } catch (JMSException e) { e.printStackTrace(); }
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    Hi,
    I would suggest going through these two links, which give all the details about UDD:
    Topic: Steps to Configure Uniform Distributed Queue (UDQ) on WebLogic Server
    http://middlewaremagic.com/weblogic/?p=3747
    Topic: JMS Demo using WebLogic Uniform Distributed Queue (UDQ)
    http://middlewaremagic.com/weblogic/?page_id=1976
    Tips to solve UDQ issues:
    - Make sure the Server Affinity Enabled parameter is un-checked (disabled), under [ Connection Factory –> Configuration (tab) –> Load Balance (sub-tab) ].
    - Disable the Server Affinity Enabled parameter for the connection factory which is being used by your UDQ.
    - Make sure all the managed servers are in the same cluster.
    - If the managed servers are on different boxes, make sure the listen address is given correctly under [ Machine –> Configuration (tab) –> Node Manager (sub-tab) ].
    - Test that you can PING the servers on the different boxes, to make sure there are no network issues and you can communicate with the servers.
    Hope this helps you.
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic/
    Come, Join Us and Experience The Magic…

  • JMS Distributed Destination Topic - how to avoid MDB to recieve duplicates

    HELP!!
              I have spent a lot of time looking for an option to move JMS onto a cluster; finally I can do this using a distributed destination. I have two members (Topics) in the distributed destination topic, but when both of them are active, the Message Driven Bean deployed on the cluster receives 2 messages instead of 1. Can anyone please help me change this so that only one message is received instead of 2!!

    Make it a Queue or don't deploy the MDB to the cluster. You are seeing
              expected behavior. I have the same situation with a Topic and I deploy my
              MDB to one node in the cluster and have it configured to consume the local
              physical Topic.
              Bill
              "Manav Sehgal" <[email protected]> wrote in message
              news:18024883.1107530318601.JavaMail.root@jserv5...
              > HELP!!
              > I have spent a lot of time looking for an option to move jms on a
              > cluster, finally I can do this using distributed destination, I have two
              > memmbers(Topic) in the distributed destination topic, but when both of
              > them are active, the Message Driven Bean which is deployed on cluster
              > recieves 2 messages instead of 1, anyone please help me to change this to
              > have only one message instead of 2 !!!!!
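Bill's point above, that a topic delivers a copy of each message to every subscriber while a queue hands each message to exactly one consumer, can be sketched as a toy model. Plain Java, not the JMS API; all names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: a topic copies each message to every subscriber (so an MDB
// instance on each of two cluster nodes sees the message twice in total),
// while a queue delivers each message to exactly one consumer.
public class TopicVsQueue {
    // Topic semantics: every subscriber receives its own copy.
    static List<String> publishToTopic(List<String> subscribers, String msg) {
        List<String> deliveries = new ArrayList<>();
        for (String s : subscribers) deliveries.add(s + ":" + msg);
        return deliveries;
    }
    // Queue semantics: exactly one consumer gets the message
    // (first available, in this toy model).
    static List<String> sendToQueue(List<String> consumers, String msg) {
        return List.of(consumers.get(0) + ":" + msg);
    }

    public static void main(String[] args) {
        List<String> mdbs = List.of("node1-MDB", "node2-MDB");
        System.out.println(publishToTopic(mdbs, "m1")); // two deliveries: the duplicates
        System.out.println(sendToQueue(mdbs, "m1"));    // one delivery
    }
}
```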
              

  • JMS Distributed Destinations as proxy service endpoints

    We have a cluster of 2 ALSB servers with a proxy service deployed
    listening on a JMS queue which is a distributed destination (DD) on a
    WLS 8.1 system.
    The JMS DD is spread across 4 JMS servers and 4 JVM's.
    When ALSB starts up - we only get consumers on 2 of the 4 DD members,
    which makes sense as a JMS proxy service is essentially an MDB, and this
    is normal MDB/distributed destination behaviour - it will bind to the
    queue member where it first makes the connection to the JVM.
    How do I make sure my messages are consumed from all 4 DD queue members?
    There is a JMS option to forward messages to queues with consumers if no
    consumers exist - is that the way to do it?
    Someone must have come across this before and I'd be grateful for any
    advice.
    Also - how do you reduce the consumers on the JMS queue - seems to
    default to 16. Perhaps you don't.
    Thanks,
    Pete

    Meghan Pietila wrote:
    Update: I see your exchange with Tom over on the JMS forum, Pete. We just switched everything to WLS 9.2, so maybe the ALSB internal extension will work for us. It's not working by default, so possibly it's something that must be activated... I'll follow up with BEA.
    http://forums.bea.com/thread.jspa?threadID=570001317
    I'd still like to hear what you end up working with, if you don't mind sharing once it's running. :)
    Meghan

    I only have 8.1 as an option - a JMS proxy service is essentially an
    MDB, and if you deploy an MDB in the same way, it will only bind to a
    single destination, so it's actually behaving as I would expect.
    The forward delay on the queue probably won't be an issue for us, as
    we're not talking about high volumes and large messages, but for some
    sites there will be a lot of moving around queues, accessing the JMS
    file store, etc., which you could do without.
    I reckon the trick is (as Tom mentioned) to bind to individual
    distributed queue members from the proxy service somehow.
    I'll let you know what I find out.
    Pete

  • Distributed Destination Failover not working

              I'm using WebLogic 7 SP1 on Windows 2000. I've configured a
              distributed queue that has two members. The two members are
              running in two WebLogic instances in a cluster configuration
              (call them Server1 and Server2). My client posts messages to the
              distributed queue, and the messages seem to be
              distributed between Server1 and Server2 (as expected). However,
              when I kill Server1, the client complains that it can't connect
              to the queue on Server1 and never recovers.
              expected to see (at most) one exception and then the next
              request to use Server2's queue. The client gets the following
              exception:
              weblogic.jms.dispatcher.DispatcherException: Dispatcher not found in jndi: Server1,
              javax.naming.NameNotFoundException: Unable to resolve 'weblogic.jms.S:Server1'
              Resolved: 'weblogic.jms' Unresolved:'S:Server1' ; remaining name 'S:Server1'
                   at weblogic.jms.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:323)
                   at weblogic.jms.dispatcher.DispatcherManager.findOrCreate(DispatcherManager.java:413)
                   at weblogic.jms.frontend.FEProducer.<init>(FEProducer.java:87)
                   at weblogic.jms.frontend.FESession$2.run(FESession.java:607)
                   at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
                   at weblogic.jms.frontend.FESession.producerCreate(FESession.java:604)
                   at weblogic.jms.frontend.FESession.invoke(FESession.java:2246)
                   at weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
                   at weblogic.jms.dispatcher.DispatcherImpl.dispatchSync(DispatcherImpl.java:275)
                   at weblogic.jms.client.JMSSession.createProducer(JMSSession.java:1461)
                   at weblogic.jms.client.JMSSession.createSender(JMSSession.java:1312)
              Should I have some kind of recovery logic on my client to make
              this stuff work?
              Bob.
              

    Hi,
                   I hit the same problem. Were you able to fix it? If so, how?
              Tom Barnes wrote:
              > Hi Bob,
              >
              > If you haven't already, see if the connection factory you use has
              > "ServerAffinityEnabled" set to false (the default is true) and
              > "LoadBalancingEnabled" set to true. That said, I think you may be
              > seeing a known bug - so I suggest contacting customer support.
              >
              > Tom
              >
              > Bob S wrote:
              >
              >> Tom,
              >>
              >> I don't really have a problem with getting an exception for the
              >> request that was in progress when the server failed. I would
              >> expect, though, the next request to succeed.
              >> The problem is that even when I restart my client process it
              >> still tries to go to the same destination (weird). It seems
              >> that the Distributed Destination exception handling logic only
              >> removes the failed entry when it receives a certain type of
              >> exception. I'm suspecting this because (just 5 minutes ago)
              >> I got the distributed destination to recover from the failure.
              >> The exception that I got this time was the following:
              >>
              >> weblogic.jms.common.JMSException: Failed to send message because
              >> destination MyQueue_JMSServer1
              >> is not avaiable (shutdown, suspended or deleted).
              >>
              >> Start server side stack trace:
              >> weblogic.jms.common.JMSException: Failed to send message because
              >> destination MyQueue_JMSServer1
              >> is not avaiable (shutdown, suspended or deleted).
              >> at
              >> weblogic.jms.backend.BEDestination.checkShutdownOrSuspendedNeedLock(BEDestination.java:1102)
              >>
              >> at weblogic.jms.backend.BEDestination.send(BEDestination.java:2782)
              >> at weblogic.jms.backend.BEDestination.invoke(BEDestination.java:3810)
              >> at
              >> weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
              >>
              >> at
              >> weblogic.jms.dispatcher.DispatcherImpl.dispatchAsync(DispatcherImpl.java:152)
              >>
              >> at
              >> weblogic.jms.dispatcher.DispatcherImpl.dispatchAsyncTranFuture(DispatcherImpl.java:425)
              >>
              >> at weblogic.jms.dispatcher.DispatcherImpl_WLSkel.invoke(Unknown
              >> Source)
              >> at
              >> weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:362)
              >> at
              >> weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:313)
              >> at
              >> weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
              >>
              >> at
              >> weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:308)
              >>
              >> at
              >> weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
              >>
              >> at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:153)
              >> at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:134)
              >> End server side stack trace
              >>
              >> In this (rare) case, my system recovered beautifully!
              >>
              >> Bob.
              >>
              >> Tom Barnes <[email protected]> wrote:
              >>
              >>> Hi Bob,
              >>>
              >>> The particular exception you are seeing seems like it could use
              >>> some enhancement - it should be wrapped in a "friendlier" exception
              >>> such as "remote server XXX unavailable". I
              >>> recommend filing a case with customer support.
              >>>
              >>> That said, a producer sending to a distributed destination needs
              >>> to be able to handle send failures. WebLogic
              >>> will automatically retry sends in cases where there is no ambiguity,
              >>> but
              >>> when it can't determine the nature of the failure (eg it can't
              >>> determine whether or not the message made it to a JMS server) it
              >>> throws the exception back to the client to let the client
              >>> decide what it wants to do - eg commit/don't commit, reconnect
              >>> and resend, reconnect and don't resend.
              >>>
              >>> Tom
              >>> Bob S wrote:
              >>>
              >>>> I'm using WebLogic 7 SP1 on Windows 2000. I've configured a
               >> distributed queue that has two members. The two members are
              >>>> running in two WebLogic instances in a Cluster configuration
              >>>> (call them Server1 and Server2). My client posts messages to the
               >> distributed queue, and the messages seem to be
               >> distributed between Server1 and Server2 (as expected). However,
              >>>> when I kill Server1, the client complains that it can't connect
              >>>> to the queue on Server1 and never recovers. I would have
              >>>> expected to see (at most) one exception and then the next
              >>>> request to use Server2's queue. The client gets the following
              >>>> exception:
              >>>>
              >>>> weblogic.jms.dispatcher.DispatcherException: Dispatcher not found in
              >>>
              >>>
              >>> jndi: Server1,
              >>>
              >>>> javax.naming.NameNotFoundException: Unable to resolve
              >>>> 'weblogic.jms.S:Server1'
              >>>> Resolved: 'weblogic.jms' Unresolved:'S:Server1' ; remaining name
              >>>> 'S:Server1'
              >>>> at
              >>>> weblogic.jms.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:323)
              >>>>
              >>>> at
              >>>> weblogic.jms.dispatcher.DispatcherManager.findOrCreate(DispatcherManager.java:413)
              >>>>
              >>>> at weblogic.jms.frontend.FEProducer.<init>(FEProducer.java:87)
              >>>> at weblogic.jms.frontend.FESession$2.run(FESession.java:607)
              >>>> at
              >>>> weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
              >>>>
              >>>> at
              >>>> weblogic.jms.frontend.FESession.producerCreate(FESession.java:604)
              >>>> at weblogic.jms.frontend.FESession.invoke(FESession.java:2246)
              >>>> at
              >>>> weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
              >>>>
              >>>> at
              >>>> weblogic.jms.dispatcher.DispatcherImpl.dispatchSync(DispatcherImpl.java:275)
              >>>>
              >>>> at
              >>>> weblogic.jms.client.JMSSession.createProducer(JMSSession.java:1461)
              >>>> at
              >>>> weblogic.jms.client.JMSSession.createSender(JMSSession.java:1312)
              >>>>
              >>>> Should I have some kind of recovery logic on my client to make
              >>>> this stuff work?
              >>>>
              >>>> Bob.
              >>>
              >>>
              >>
              >
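The reconnect-and-resend logic Tom describes has to live in the client. Below is a minimal, self-contained sketch of such a retry loop. The `SendOperation` interface and `sendWithRetry` helper are illustrative names, not part of the WebLogic API; a real client would also recreate its JMS connection and session between attempts and decide whether a resend risks a duplicate message.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of client-side resend logic for a distributed
// destination. All names here are illustrative, not WebLogic API.
public class ResendSketch {

    /** A send attempt that may fail, e.g. a JMS send to a distributed queue. */
    interface SendOperation {
        void send(String message) throws Exception;
    }

    /**
     * Retry a send up to maxAttempts times, returning the number of
     * attempts used. A real client would recreate its connection and
     * session between attempts rather than simply calling send again.
     */
    static int sendWithRetry(SendOperation op, String message, int maxAttempts)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                op.send(message);
                return attempt;
            } catch (Exception e) {
                last = e; // a member may be down; try again
            }
        }
        throw last; // give up after maxAttempts
    }

    public static void main(String[] args) throws Exception {
        // Simulate a member that is down for the first two attempts.
        AtomicInteger calls = new AtomicInteger();
        SendOperation flaky = msg -> {
            if (calls.incrementAndGet() <= 2) {
                throw new Exception("destination not available");
            }
        };
        System.out.println("succeeded after "
                + sendWithRetry(flaky, "hello", 5) + " attempts");
    }
}
```

In a transacted session, the safer pattern Tom outlines is to catch the ambiguous failure, roll back, reconnect, and only then decide whether to resend, based on whether duplicates are tolerable.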
              

  • JMS distributed destination

    In BEA's WLS 7.0 JMS documentation, "Looking up Distributed Topics" and "Deploying
    Message-Driven Beans on a Distributed Topic":
    http://e-docs.bea.com/wls/docs70/jms/implement.html#1260828
    This sounds like each message sent to a distributed topic goes to all members of
    the distributed topic.
    Then, a couple of paragraphs down in the same doc, the load-balancing section
    seems to conflict with that:
    "In the round-robin algorithm, WebLogic JMS maintains an ordering of physical
    destinations within the distributed destination. The messaging load is distributed
    across the physical destinations one at a time in the order that they are defined
    in the WebLogic Server configuration (config.xml) file. "
    If a message sent to a distributed topic goes to all physical destinations, why
    do we need load balancing here? Or does load balancing apply to distributed
    queues only?
    Please comment.
    THX.
    -John

    Rob,
    Not an expert, I must admit, but the link you provided states: "A distributed destination is a set of destinations (queues or topics) that are accessible as a single, logical destination to a client. A distributed destination has the following characteristics:
    It is referenced by its own JNDI name."
    You can actually configure JMS adapters using JNDI, so I think this should be possible. More info here:
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/739c4186c2a409e10000000a155106/content.htm
    Do let us know if this helps and works!
    Regards
    Bhavesh

  • Distributed destination with MQ as Foreign Provider

    Hi
               Messages arrive in a set of clustered MQ Series queues. MDBs process the messages (we use MQ as the Foreign JMS Provider). Can I set up a distributed destination (JMS queues) in WebLogic and use those clustered MQ queues as the Foreign Provider? I would like the MQ Series clustering to provide the redundancy/availability of the MQ Series system, and the JMS distributed destination to provide high availability for the MDBs that are hosted in a set of clustered WebLogic server instances.
              Thanks

    Hi
               > Messages arrive in a set of clustered MQ Series
              > Queues. MDBs process the messages (We use MQ as the
              > Foreign JMS Provider). Can I set up a distributed
              > destination (JMS Queues) in WebLogic and use those
              > clustered MQ queues as the Foreign Provider ?
              No. WL distributed destinations consist only of a set of physical WL destinations. The WL distributed destination code does not support non-WL destinations.
              > I would
              > like the MQ Series clustering to provide the
              > redundancy/availability of the MQ Series system and
              > the JMS distributed destination to provide high
              > availability for the MDBs that are hosted in a set of
              > clustered WebLogic server instances.
              If MQ Series exposes its distributed destination via the standard JMS API, then the MDBs will automatically run as MQ distributed dest clients. This is a function of MQ, not WL -- WL MDBs simply use the standard JMS API of the vendor's supplied client to get their messages.
              But note that WL MDBs use a single connection per MDB pool/deployment. Some clustering implementations (MQ?) require using multiple connections...
              >
              > Thanks

  • JMS Failover Implementation With Cluster Consist Of Four Servers

    Hi All,
    I mistakenly posted the following thread in the WebLogic general area. It should be here. Can anyone help, please?
    [Document for JMS Failover Implementation On WebLogic|http://forums.oracle.com/forums/thread.jspa?threadID=900391&tstart=15]
    Could you please just reply here.
    Thanks :)


  • Messages for Durable Subscriber are not persisted with Distributed Topic

    Setup:
              - Weblogic 8.1 SP 6 cluster with two nodes on single Sun Solaris 5.8 host
              - Distributed topic destination, separate JMS server with separate filestore for each node
              - Standalone durable JMS client connecting to distributed topic destination using specific JNDI name.
              - Messages are persistent (msg.setJMSDeliveryMode(DeliveryMode.PERSISTENT). Delivery mode override for topic destinations is set to "Persistent"
              Test:
              1) Both nodes up, starting durable topic subscriber -> durable subscriber is visible in Weblogic console -> sending X messages -> X messages are received :-)
              2) Both nodes up, killing durable topic subscriber -> durable subscriber is still visible in Weblogic console -> sending X messages -> Messages Current Count for durable subscribers is set to X -> starting JMS client again
              2a) subscribing to same node as before -> console says durable subscriber is active again -> X messages are received :-)
              2b) subscribing to other node -> durable subscriber is automatically migrated by Weblogic to this node, console says that subscriber is active, but Messages Current Count is 0 (zero) -> 0 (zero) messages are received :_|
              Where are my X messages gone?
              Should I open a call or did I misunderstand the basics?
              Thanks, Peter

    According to Bea Customer Support this is the normal behavior. If you kill a durable topic subscriber and reconnect it with the same id to another node, the old subscription is deleted and all messages still waiting to be delivered are gone.
              Lesson learned: If you need failover for the server AND client use JMS queues.
              Peter

  • What are distributed destinations?

    We seem to have some confusion about what distributed destinations are within WebLogic.
               I believe a distributed topic to be a single topic name (e.g. MyTopic) that is replicated across multiple WebLogic servers. So if a publisher publishes a MyTopic message to the "virtual topic" on serverA, it will be replicated to consumers listening on serverB and serverC as well. Basically this buys you failover, load balancing, and possibly some performance increase.
               Another thought is that a distributed topic can contain multiple topic names within it. For instance, AlphabetTopic is a distributed topic. I can send a message to TopicA, which is one of AlphabetTopic's "child" topics. I can also send a message to TopicB, which is also part of AlphabetTopic. The thought is that this buys us the ability to preserve message ordering across topics, because somehow WebLogic will be able to maintain ordering across a distributed topic.
              Any information is greatly appreciated.

    Let's just say we were only using one host, no cluster.
               1) Can I have a distributed topic (AlphabetTopic) that has 26 topics under it (TopicA-Z), each with a subscriber?
              [Tom] Yes, if there are 26 JMS Servers configured on the single WL server host, where each has one of the DD's physical members. DD's are restricted to working within a single cluster or a single WL server.
              2) What happens if I publish a message directly to TopicA? Does it only go to the physical destinations subscriber?
              [Tom] No. It is still forwarded to all 25 other topics.
              If it were a cluster, would it go to a TopicA physical destination on another node?
              [Tom] Yes. The message would publish directly to TopicA where-ever it resides in the cluster.
              If it is not routed, then in this instance you are not even really using dist topics.
              3) What if I post a message to AlphabetTopic?
              [Tom] If you published to "AlphabetTopic" it would generally first go to a physical topic on the local server (depending on how you've configured your load balancing), and it would also be replicated to all of the other physical topic members.
              My understanding is that it would go to all subscribers listening for TopicA-TopicZ.
              [Tom] Correct.
              Then it would be up to selectors to filter out if all of those clients were to retrieve the message, correct?
              [Tom] Correct.
              Tom
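The selector filtering Tom mentions works on message properties: each subscriber passes an SQL-92-style selector string (for example `letter = 'A'`) when it is created, and the provider delivers only matching messages. The toy evaluator below is purely illustrative (it handles only a single equality test, and none of these names are JMS API), but it shows the idea of one topic carrying all messages with each consumer filtering its own subset:

```java
import java.util.Map;

// Toy illustration of JMS selector filtering. Real selectors are
// SQL-92 conditionals evaluated by the JMS provider against message
// headers and properties; this sketch handles only "prop = 'value'".
public class SelectorSketch {

    /** Evaluate a selector of the form "prop = 'value'" against properties. */
    static boolean matches(String selector, Map<String, String> props) {
        String[] parts = selector.split("=", 2);
        String prop = parts[0].trim();
        String want = parts[1].trim().replace("'", "");
        return want.equals(props.get(prop));
    }

    public static void main(String[] args) {
        // A publisher could set a "letter" property per logical topic;
        // each subscriber then selects only its own letter.
        Map<String, String> msg = Map.of("letter", "A");
        System.out.println(matches("letter = 'A'", msg)); // true
        System.out.println(matches("letter = 'B'", msg)); // false
    }
}
```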

  • JMS Failover & Load balancing.

    Hi,
    I have 4 Managed Servers A, B, C, D on 4 physical boxes. We have one JMS server, on Box D; all the other Managed Servers use this single JMS server, and if it goes down we lose all messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers and 4 file stores, one per Managed Server. My question is: is WebLogic intelligent enough that if a client connects to Box B's JMS server and that server goes down, the message will be sent to another JMS server?

    ravi tiwari wrote:
    > Is WebLogic intelligent enough that if a client connects to Box B's
    > JMS server and the server goes down, the message will be sent to
    > another JMS server?
    You don't mention if you're running in a clustered environment or what
    version of WLS you're using, so I've assumed a cluster and WLS 8.1.
    For resiliency, you should really have 4 JMS servers, one on each
    managed server. Then each JMS server has its own filestore on the
    physical managed server machine.
    So, you have JMSA, JMSB, JMSC, JMSD with FileStoreA, FileStoreB,
    FileStoreC & FileStoreD.
    You should also look at using JMS distributed destinations as described
    in the documentation.
    In your current environment, if server D goes down, you not only lose
    your messages; your application also loses access to your JMS queues.
    If you use distributed destinations, and have 4 JMS servers, your JMS
    queues will still be available if a single server goes down.
    If a server does go down however, you have to follow the JMS migration
    procedures to migrate the JMS service from the failed server to a
    running one.
    There are conditions to this process, which are best found out from the
    migration documentation to be honest, rather than describe it here.
    We use this setup, and it works fine for us. We've never had to use JMS
    migration, as so far we haven't had anything serious to cause us to need
    to migrate. Our servers also boot from a SAN which makes our resilience
    processes simpler.
    Hope that helps,
    Pete
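The layout Pete describes can be sketched as a WLS 8.1 `config.xml` fragment. This is an assumption-laden illustration: the names (JMSA, FileStoreA, MyQueue), directories, and targets are invented, and the exact elements and attributes should be verified against the config.xml reference for your release.

```xml
<!-- One file store and one JMS server per managed server (A shown;
     B, C and D are analogous). All names and paths are illustrative. -->
<JMSFileStore Name="FileStoreA" Directory="jmsstore/a"/>
<JMSServer Name="JMSA" Store="FileStoreA" Targets="ServerA">
  <JMSQueue Name="MyQueue_A" JNDIName="MyQueue_A"/>
</JMSServer>

<!-- One distributed queue whose members are the per-server queues. -->
<JMSDistributedQueue Name="MyQueue" JNDIName="MyQueue"
                     LoadBalancingPolicy="Round-Robin">
  <JMSDistributedQueueMember Name="MemberA" JMSQueue="MyQueue_A"/>
  <!-- MemberB, MemberC, MemberD likewise -->
</JMSDistributedQueue>
```

With something like this in place, clients look up the distributed queue's JNDI name, and the migration procedures Pete mentions cover moving a failed server's JMS service (and its file store directory, if on shared disk) to a surviving server.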
