Weblogic 8.1 JMS Failover

Hi,
          In the WebLogic documentation regarding JMS and clusters, it is stated that "Automatic failover is not supported by WebLogic JMS in this release." In my case, this release is 8.1.2. Can anyone clarify exactly what this means?
          Thanks,
          Aoife

          Not sure whether it is just me or not - the new security stuff in WebLogic 8.1
          just makes life so much tougher.
          Thanks for the suggestion anyways.
          Eric
          Tom Barnes <[email protected].bea.com>
          wrote:
          >Even though its meant for foreign providers, perhaps credential
          >mapping would work? See:
          >
          >http://edocs.bea.com/wls/docs81/ejb/message_beans.html#1151409
          >
          >Also, you might want to try posting to the
          >security and/or ejb newsgroups.
          >
          >Tom
          >
          >P.S. This question has come up before, so it seems likely
          >that the security section of the MDB documentation
          >may need more detail. If you post any feedback here,
          >I'll make sure it gets sent directly to the
          >documentation folks...
          >
          >Eric Ma wrote:
          >
          >> I have a JMS Topic living in one WebLogic 8.1 domain and a MDB that
          >listens to
          >> this JMS Topic living in another domain. Do I need to configure trusted
          >domain
          >> relationship for both domains?
          >
          

Similar Messages

  • Document for JMS Failover Implementation On WebLogic

    Hi,
    I am looking for some good links and techniques to implement JMS failover using WebLogic 10.3.
    Failover [as we do with our databases (the concept of clustering)]
    The system will consist of two app servers, each with its own application deployments, but if one fails for some reason the application messages should be redirected to the other server, and vice versa.
    The above definition is very brief, but if anyone can provide some good documents and info on how to implement it, it would be appreciated.
    Thanks :-)

    Thanks a lot guys for your help. We successfully implemented it on our servers here by creating distributed queues targeting all servers in a cluster.
    One point which I think is worth mentioning and want to share with everyone here: when the app server [where the MDB will finally post the message after retrieving it from the queue] goes down, what happens, and what will the MDB do with that message?
    We implemented the DLQ (error destination) and deployed one more MDB, MDB_DLQ_SERVER2 (let's say app SERVER1 is down), which gets triggered when any message arrives on the DLQ and posts that message to some other app server. Say a message has been read by MDB_SERVER1 on SERVER1, but of course the actual server is down, so the message gets redirected to its error destination after its expiration period or whatever the settings are. The DLQ (error destination) is also a distributed destination, again targeting all servers in the cluster, the same as the actual request and reply queues. BUT MDB_DLQ_SERVER2, which is deployed on SERVER2, is NOT able to read this message. It gets triggered but cannot access the message.
    After debugging for almost a day we found out this is because the message has been transferred to the DLQ, but it actually resides in FILESTORE_SERVER1, and MDB_DLQ_SERVER2 is not able to access it.
    To work around that, we had to define MDB_DLQ_SERVER1 to cater for a SERVER1 failure and MDB_DLQ_SERVER2 to cater for a SERVER2 failure.
    The reason I am mentioning this is that, as I said, the DLQ is also a normal distributed queue, but at the same time it is NOT as distributed as it sounds.
    Hope you all understand what I just wrote above.
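    [Editor's note: as an aside for other readers, here is a rough sketch (not the poster's actual code) of the per-server DLQ MDB idea described above: an MDB pinned to one server's error-destination member that re-posts dead messages to a queue on a surviving server. All JNDI names are made up, and the destination wiring (mappedName here) varies by container and descriptor style.]
    import javax.annotation.Resource;
    import javax.ejb.MessageDriven;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    @MessageDriven(mappedName = "jms/ErrorQueue_Server1") // hypothetical DLQ member on SERVER1
    public class DlqRedispatchServer1Bean implements MessageListener {

        @Resource(mappedName = "jms/ClusterConnectionFactory") // hypothetical
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "jms/RequestQueue_Server2") // hypothetical queue on the surviving server
        private Queue targetQueue;

        public void onMessage(Message message) {
            Connection conn = null;
            try {
                conn = connectionFactory.createConnection();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // Re-post the dead message to a queue hosted on a server that is still up.
                session.createProducer(targetQueue).send(message);
            } catch (JMSException e) {
                // Let the container redeliver the DLQ message rather than lose it.
                throw new RuntimeException("Redispatch failed", e);
            } finally {
                if (conn != null) {
                    try { conn.close(); } catch (JMSException ignore) { }
                }
            }
        }
    }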
    Now I need to implement exactly the same scenario using four separate physical machines containing my four servers. I tried this scenario by creating four machines where the node manager for each server is running and listening, but when I try to start the servers I get a Certificate Exception with a bad username and password. Anyway, I have seen some posts here regarding this, so I think I'll be fine.
    Thanks Again,
    Sheeraz

  • JMS Failover & Load balancing.

    Hi,
    I have 4 managed servers A, B, C, D on 4 physical boxes. We have one JMS server on box D; all the other managed servers use this single JMS server on D, and if it goes down we lose all messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers and 4 file stores, one for each managed server. My question is: is WebLogic intelligent enough that if a client connects to the box B JMS server and that server goes down, the message will be sent to another JMS server?

    ravi tiwari wrote:
    Hi,
    I have 4 Managed servers A,B,C,D on 4 physical boxes. We have one JMs server on Box D, All other Managed server uses this only JMS which is on D, if this goes down we loose all messages. I want to have JMS failover in my environment. I suggested to have 4 JMS servers and 4 File stores for each Managed server? my question is that Is weblogic that intellegent that if a client connects to box B JMS server and if the servers goes down, the message will be send top another JMS server?
    You don't mention if you're running in a clustered environment or what
    version of WLS you're using, so I've assumed a cluster and WLS 8.1.
    For resiliency, you should really have 4 JMS servers, one on each
    managed server. Then each JMS server has its own filestore on the
    physical managed server machine.
    So, you have JMSA, JMSB, JMSC, JMSD with FileStoreA, FileStoreB,
    FileStoreC & FileStoreD.
    You should also look at using JMS distributed destinations as described
    in the documentation.
    In your current environment, if server D goes down, you not only lose
    your messages, your application also loses access to your JMS queues.
    If you use distributed destinations, and have 4 JMS servers, your JMS
    queues will still be available if a single server goes down.
    If a server does go down however, you have to follow the JMS migration
    procedures to migrate the JMS service from the failed server to a
    running one.
    There are conditions to this process, which are best found out from the
    migration documentation to be honest, rather than describe it here.
    We use this setup, and it works fine for us. We've never had to use JMS
    migration, as so far we haven't had anything serious to cause us to need
    to migrate. Our servers also boot from a SAN which makes our resilience
    processes simpler.
    Hope that helps,
    Pete
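
    [Editor's note: to make the failover behavior concrete from the client side: with a distributed destination and a cluster-aware connection factory, a sender can retry on a fresh connection and let the factory route to a surviving member. A hedged sketch; the URL and JNDI names are illustrative only, not from this thread.]
    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class ResilientSender {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://boxA:7001,boxB:7001,boxC:7001,boxD:7001");
            InitialContext ctx = new InitialContext(env);

            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ClusterCF"); // hypothetical
            Queue queue = (Queue) ctx.lookup("jms/DistributedQueue");               // hypothetical

            for (int attempt = 1; attempt <= 3; attempt++) {
                Connection conn = null;
                try {
                    conn = cf.createConnection();
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    session.createProducer(queue).send(session.createTextMessage("hello"));
                    break; // sent successfully
                } catch (JMSException e) {
                    // The member we were pinned to may have died; a fresh
                    // connection lets the factory pick a live cluster member.
                    System.err.println("Send attempt " + attempt + " failed: " + e);
                } finally {
                    if (conn != null) {
                        try { conn.close(); } catch (JMSException ignore) { }
                    }
                }
            }
        }
    }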

  • JMS Failover Implementation With Cluster Consist Of Four Servers

    Hi All,
    I mistakenly posted the following thread in the WebLogic general area. It should be here. Can anyone help, please?
    [Document for JMS Failover Implementation On WebLogic|http://forums.oracle.com/forums/thread.jspa?threadID=900391&tstart=15]
    Could you please just reply here.
    Thanks :)


  • Message lost during Message Send in weblogic 9.2 JMS

    I am using WebLogic Server 9.2 in a clustered environment (2 nodes on separate machines). The application deployed has EJB timers which pick up data from the DB and send text messages (very small, as each is just an ID) to a queue (using the JMS APIs - QueueSender.send(message)). There is an MDB (with max-beans-in-free-pool configured to be 20) listening for these messages. There might be 1-20 records picked up from the DB, and the ID of each of those records will be sent to the JMS queue as a separate JMS message, i.e. QueueSender.send will be called as many times as there are record IDs to send. The timer, after sending the messages to the queue, sets the status of all the data it picked up to "Busy" in the same transaction as the message send. We are using an XA driver and the JMS connection factory is also XA enabled. We had problems with this XA transaction, as WebLogic sometimes delivered the messages before the DB commit that sets the status to "Busy" had happened. We worked around this problem by setting the JMS connection factory's "Time to Deliver" to 300, so that the messages are visible to the MDB only after 300 ms, giving the DB commit time to complete, as per the suggestion in the following link: http://jayesh-patel.blogspot.com/. We did this 2 months back and to date we had found no problem.
    Now we are facing another strange problem. JMS sometimes misses delivering messages. This is evident from the logs we print from the MDB: the logs indicate some messages reached the MDB and some did not. This problem occurs occasionally. Also, what I see is that it has occurred under normal load conditions only, though this application operates under high loads (the number of messages processed per day may go up to 5000). In the last month this has occurred 3 times. The JMS messaging is becoming unreliable.
    This seems to be a bug in WebLogic 9.2 JMS. Is there any workaround to prevent this problem?

    It is doubtful that JMS is losing messages. Suggestions:
    * Try increasing your scheduled message delay. 300 millis is a short delay -- any small network glitch, disk hardware issue, or even a single JVM GC in the JDBC driver can cause the database to take longer than 300 millis to complete its part of the transaction.
    * Try "WebLogic JMS Message Lifecycle Logging" if you want to trace message activity.
    * Instead of the delay, consider using WebLogic's Datasource "LLR" option, a commonly used solution to your use case. This is an optimization option that yields fully ACID (safe) transactions, and has the very nice side effect of deterministically forcing the TM to always commit the database portion of a transaction before committing any other resources participating in the same transaction.
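    [Editor's note: for the first suggestion, the delay can also be set per producer through WebLogic's JMS extension interface rather than on the whole connection factory. A hedged sketch; verify the WLMessageProducer cast and setter against your release's weblogic.jms.extensions javadoc, and treat the 2000 ms value as illustrative.]
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import weblogic.jms.extensions.WLMessageProducer;

    public class DelayedSend {
        // Raise the scheduled-delivery delay on a single producer; 300 ms was
        // the original setting, and a larger value gives the DB commit more headroom.
        static void sendDelayed(Session session, Queue queue, String recordId) throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            ((WLMessageProducer) producer).setTimeToDeliver(2000L); // milliseconds; illustrative value
            producer.send(session.createTextMessage(recordId));
        }
    }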
    Links:
    http://download.oracle.com/docs/cd/E13222_01/wls/docs92/jms_admin/troubleshoot.html
    http://download.oracle.com/docs/cd/E13222_01/wls/docs92/jta/llr.html#LLRfailover
    http://download.oracle.com/docs/cd/E13222_01/wls/docs92/jdbc_admin/jdbc_datasources.html#llr
    Tom
    Edited by: TomB on May 7, 2010 5:20 PM

  • JMS Failover with Distributed Destinations in 7.0

              How does JMS failover with distributed destinations in WL 7.0?
              In an environment using file stores for persistent messages, can a working server
              automatically pick up unprocessed and persisted messages from a failed server?
              If so, what's the best way to set this up?
              Or, is this completely manual? In other words, we have to bring up a new server
              pointing to the location of the file store from the failed server?
              

              It appears that two JMSServers cannot share the same file store and, I'm assuming,
              two file stores cannot be using the same directory for persistence.
              So the HA you're talking about is something like Veritas automatically restarting
              a server (or starting a new one) to process the messages in the persistent queue
              that were unprocessed at the time of failure with the file store residing on some
              sort of HA disk array.
              The key point is that a message once it arrives at a server must be processed
              by that server or, in the case of failure of that server, must be processed by
              a server similarly configured to the one that failed so that it picks up the unprocessed
              messages. The message can't be processed by another server in the cluster.
              Or, is there some trick that could be employed to copy from the file store of
              the failed server and repost the messages to the still operating servers?
              "Zach" <[email protected]> wrote:
              >Unless you have some sort of HA framework/hardware, this is a manual
              >operation. You either point to the existing persistent storage (shared
              >storage or JDBC connection pool), or you move the physical data.
              >
              >_sjz.
              >
              >"Jim Cross" <[email protected]> wrote in message
              >news:[email protected]...
              >>
              >>
              >> How does JMS failover with distributed destinations in WL 7.0?
              >>
              >> In an environment using file stores for persistent messages, can a
              >working
              >server
              >> automatically pick up unprocessed and persisted messages from a failed
              >server?
              >> If so, what's the best way to set this up?
              >>
              >> Or, is this completely manual? In other words, we have to bring up
              >a new
              >server
              >> pointing to the location of the file store from the failed server?
              >
              >
              

  • Porting from weblogic.event to JMS

              Hi,
              Has anyone done porting from weblogic.event (pre JMS) to JMS? If so, please share
              your experience and point me to any resources.
              Thanks,
              Tahir
              

    The J2EE reference implementation did not have any such thing as a "startup
    class" last time I checked. That is a WebLogic feature.
    Cameron Purdy
    [email protected]
    http://www.tangosol.com
    WebLogic Consulting Available
    "vamsi" <[email protected]> wrote in message
    news:39e47c60$[email protected]..
    >
    I am trying to port some of my Java code to J2EE and I am having problems setting up startup classes in the J2EE enterprise server.
    Any help would be really appreciated. Thanks in advance.

  • JMS failover

              We have an external EDI service posting XML data over HTTP to a WebLogic JMS server.
              We have 2 physical machines which run WebLogic and JMS. Our EDI can only feed
              one server, so if that server goes down, what would be the best way to automatically
              fail over to the other WebLogic JMS, since the JMS server cannot be clustered? We are
              in fact using a database-based queue.
              Thanks
              Anil Jacob
              

              Hi Anil,
              you are correct that automatic migration is not supported in WebLogic Server, so
              you will need to take care of this yourself.
              We have written a little Java program that runs on the same server that our admin
              server is running on. This program has two purposes: one is to restart the admin
              server in case it goes down, because we need to trigger the
              migration through the admin server. The second is to poll the managed servers
              and migrate when needed.
              The methods that handle the migration are located in weblogic.management.runtime.MigratableServiceCoordinatorRuntimeMBean.
              So as an example you could get a reference to this MBean by writing:
              MBeanHome home = Helper.getMBeanHome(wls_user, wls_password, admin_url, admin_server_name);
              MigratableServiceCoordinatorRuntimeMBean mrt = (MigratableServiceCoordinatorRuntimeMBean)
              home.getRuntimeMBean("the-MigratableServiceCoordinator", "MigratableServiceCoordinatorRuntime");
              mrt.migrate(migratableTargetMBean, destinationServer, false, true);
              We are using BEA's typed interfaces for this, but you can also use "pure" JMX calls;
              I was in a hurry and this seemed quicker.
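              [Editor's note: as a concrete illustration, here is a hedged sketch of the kind of watchdog loop described above, built around the same 8.1-era MBeanHome API as the snippet. The credentials, URL, server names, state string and exact MBean lookups are assumptions to verify against your release's javadoc.]
              import weblogic.management.Helper;
              import weblogic.management.MBeanHome;
              import weblogic.management.configuration.MigratableTargetMBean;
              import weblogic.management.configuration.ServerMBean;
              import weblogic.management.runtime.MigratableServiceCoordinatorRuntimeMBean;
              import weblogic.management.runtime.ServerRuntimeMBean;

              public class JmsMigrationWatchdog {
                  public static void main(String[] args) throws Exception {
                      // Illustrative credentials and admin URL.
                      MBeanHome home = Helper.getMBeanHome("weblogic", "password",
                              "t3://adminhost:7001", "adminServer");

                      // "appserver1 (migratable)" is the migratable target name, as explained below.
                      MigratableTargetMBean target = (MigratableTargetMBean)
                              home.getConfigurationMBean("appserver1 (migratable)", "MigratableTarget");
                      ServerMBean backup = (ServerMBean)
                              home.getConfigurationMBean("appserver2", "Server");
                      MigratableServiceCoordinatorRuntimeMBean coordinator =
                              (MigratableServiceCoordinatorRuntimeMBean) home.getRuntimeMBean(
                                      "the-MigratableServiceCoordinator", "MigratableServiceCoordinatorRuntime");

                      while (true) {
                          ServerMBean hosting = target.getHostingServer(); // the "CurrentServer"
                          boolean hostAlive = false;
                          if (hosting != null) {
                              try {
                                  ServerRuntimeMBean rt = (ServerRuntimeMBean)
                                          home.getRuntimeMBean(hosting.getName(), "ServerRuntime");
                                  hostAlive = rt != null && "RUNNING".equals(rt.getState());
                              } catch (Exception e) {
                                  // No reachable runtime MBean usually means the server is down.
                              }
                          }
                          if (!hostAlive) {
                              // sourceUp=false: the hosting server is gone.
                              // destinationUp=true: we migrate to a server that is up.
                              coordinator.migrate(target, backup, false, true);
                              break;
                          }
                          Thread.sleep(10000L); // poll every 10 seconds
                      }
                  }
              }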
              Below is the javadoc for the migrate method that BEA's support provided us.
              * Migrate all services deployed to the migratableTarget to the destination
              * server. Use this method if either the source or the destination or both are
              * not running.
              * Precondition:
              * The migratableTarget must contain at least one server.
              * The destination server must be a member of the migratableTarget's list of
              * candidate servers.
              * If automatic migration mode is disabled, the destination server must not
              * be the currently hosting server (i.e. head of candidate list of the
              * migratableTarget).
              * Postcondition:
              * If automatic migration mode is disabled and if the migration succeeded,
              * the head of the candidate server list in the migratableTarget will be the
              * destination server.
              * @param migratableTarget - all services targeted to this target are to be
              * migrated to the destination server.
              * THIS MUST BE A CONFIG MBEAN
              * @param destination - the new server where the services deployed to
              * migratableTarget shall be activated
              * @param sourceUp - the currently active server is up and running. If false,
              * the administrator must ensure that the services deployed
              * to migratableTarget are NOT active.
              * @param destinationUp - the destination server is up and running.
              void migrate(MigratableTargetMBean migratableTarget, ServerMBean destination,
              boolean sourceUp, boolean destinationUp) throws MigrationException;
              If you take a look at weblogic.management.configuration.MigratableTargetMBean,
              it has a method named getHostingServer(). This method returns a ServerMBean which
              tells us on which server the JMS services are bound. This is the same as the "CurrentServer"
              in the admin console. These objects have names like "appserver1 (migratable)" (where
              appserver1 would be replaced with the name of your managed server). This can
              be a bit confusing: if you look at the admin server log you can sometimes see that
              it is migrating "appserver1 (migratable)" to "appserver1", which might look strange (well,
              it did for me :)). Just remember that "appserver1 (migratable)" is just the name
              of the migratable target, and that object has the "CurrentServer" attribute.
              The "CurrentServer" is the server where the services are right now, and it is
              that server that we are migrating the services from.
              I think that your idea of having a backup JMS server running on a second WLS
              will work just fine. You should be able to poll the server which is hosting the
              JMS service, and if it goes down you migrate the migratable target to the second
              server. You could stop there and manually "reset" the configuration to your
              starting state, or fully automate it so that the second server takes over and the
              first becomes your backup.
              Just ask if some of this isn't very clear.
              /Daniel
              "Anil Jacob" <[email protected]> wrote:
              >
              >Daniel,
              >
              >That was a pretty good explanation. Thanks. Our application vendor does
              >not support
              >distributed queues therefore right now that option might not be used
              >in our case,
              >however the migratable option for JMS that you discribed looks good.
              >Is migration
              >an automactic process, I don't think so, what do you say?
              >What I want to acheive given the limited options is to have a backup
              >Weblogic
              >JMS which would come alive once the primary goes down, in this case your
              >idea
              >of a script looks good, can you give me more details on the script that
              >polls
              >the managed server and migrates JMS incase of a failure?
              >
              >Thankx
              >Anil
              >
              >[email protected] (Daniel Bevenius) wrote:
              >>Hi,
              >>
              >>What you could do is have a distributed destination as the previous
              >>post suggested. This distributed destination could then have one
              >>member that is pinned to a JMS server on one of you`re managed
              >>servers. This JMSServer should have its target set to appserver1
              >>(migratable).
              >>
              >>
              >>In you`re case you say that you can only target one server with the
              >>EDI feed which means that you will need to manually migrate the jms
              >>services to the second server.
              >>
              >>We had to create a program that polls the servers to see if they are
              >>alive. If one goes down we migrate the jms services automatically to
              >>the server that is still up.
              >>
              >>You stated that JMS servers could not be clustered. In our setup we
              >>have a JMSServer on each managedserver(we have two in a cluster).
              >>According to BEA support the problem with automatic failover is with
              >>consumers that bind to a distributed destination(note that this is not
              >>so for producers).
              >>When a client binds to a distributed destination it will get a binding
              >>to one of the members of that distributed destinaion, which depends
              >on
              >>the load balancing policy(we use round robin). Lets say that the
              >>client binds to a jms server on our first managedserver. If this
              >>server crashes the the client will still have a binding to the first
              >>managed server. What we are migrating is the binding to a different
              >>jms server.
              >>
              >>Don`t know if this helps you at all but let me know and I`ll try to
              >>explain it better.
              >>
              >>/Daniel
              >>
              >>
              >>
              >>
              >>"Anil Jacob" <[email protected]> wrote in message news:<[email protected]>...
              >>> We have an external EDI service posting XML data over http to Weblogic
              >>JMS server,
              >>> we have 2 physical machines which run the weblogic and JMS, our EDI
              >>can only feed
              >>> to one server so if that server goes down what would be the best way
              >>to automactically
              >>> failover to the other Weblogic JMS, since JMS server cannot be clustered.We
              >>are
              >>> infact using a database based queue.
              >>>
              >>> Thanks
              >>> Anil Jacob
              >
              

  • WebLogic 10.3 jms Uniform Distributed Destination

    We are running WL Server 10.3 on Suse 11.0. I have created a Cluster, Servers, JMS Servers, Connection Factory and a Distributed Destination. On the Connection Factory, I have the "Server Affinity" checkbox unchecked. Server/Cluster all look good. I am using the jmsfullclient.jar for the test.
    When attempting to access the distributed destination, I get the following error on the second message. If I turn on "Session Affinity" in the Cluster, the problem does not arise, but I lose the value of the distributed destination. Any suggestions and assistance would be appreciated:
    weblogic.jms.common.JMSException: No failover destination.
         at weblogic.jms.dispatcher.DispatcherAdapter.convertToJMSExceptionAndThrow(DispatcherAdapter.java:110)
         at weblogic.jms.dispatcher.DispatcherAdapter.dispatchSyncNoTran(DispatcherAdapter.java:61)
         at weblogic.jms.client.JMSProducer.toFEProducer(JMSProducer.java:1275)
         at weblogic.jms.client.JMSProducer.deliveryInternal(JMSProducer.java:783)
         at weblogic.jms.client.JMSProducer.sendInternal(JMSProducer.java:541)
         at weblogic.jms.client.JMSProducer.sendWithListener(JMSProducer.java:394)
         at weblogic.jms.client.JMSProducer.send(JMSProducer.java:384)
         at weblogic.jms.client.WLProducerImpl.send(WLProducerImpl.java:970)
         at com.overstock.util.Example.main(Example.java:44)
    Caused by: weblogic.jms.common.JMSException: No failover destination.
         at weblogic.jms.frontend.FEProducer.pickNextDestination(FEProducer.java:750)
         at weblogic.jms.frontend.FEProducer.sendRetryDestination(FEProducer.java:1092)
         at weblogic.jms.frontend.FEProducer.send(FEProducer.java:1399)
         at weblogic.jms.frontend.FEProducer.invoke(FEProducer.java:1460)
         at weblogic.messaging.dispatcher.Request.wrappedFiniteStateMachine(Request.java:961)
         at weblogic.messaging.dispatcher.DispatcherServerRef.invoke(DispatcherServerRef.java:276)
         at weblogic.messaging.dispatcher.DispatcherServerRef.handleRequest(DispatcherServerRef.java:141)
         at weblogic.messaging.dispatcher.DispatcherServerRef.access$000(DispatcherServerRef.java:34)
         at weblogic.messaging.dispatcher.DispatcherServerRef$2.run(DispatcherServerRef.java:111)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: weblogic.messaging.dispatcher.DispatcherException: could not find Server null
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:176)
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherFindOrCreate(DispatcherManager.java:58)
         at weblogic.jms.dispatcher.JMSDispatcherManager.dispatcherFindOrCreate(JMSDispatcherManager.java:219)
         at weblogic.jms.dispatcher.JMSDispatcherManager.dispatcherFindOrCreateChecked(JMSDispatcherManager.java:230)
         at weblogic.jms.frontend.FEProducer.findDispatcher(FEProducer.java:825)
         at weblogic.jms.frontend.FEProducer.sendRetryDestination(FEProducer.java:995)
         ... 9 more
    Caused by: javax.naming.NameNotFoundException: Unable to resolve 'weblogic.messaging.dispatcher.S:null'. Resolved 'weblogic.messaging.dispatcher'; remaining name 'S:null'
         at weblogic.jndi.internal.BasicNamingNode.newNameNotFoundException(BasicNamingNode.java:1139)
         at weblogic.jndi.internal.BasicNamingNode.lookupHere(BasicNamingNode.java:252)
         at weblogic.jndi.internal.ServerNamingNode.lookupHere(ServerNamingNode.java:182)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:206)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:214)
         at weblogic.jndi.internal.WLEventContextImpl.lookup(WLEventContextImpl.java:254)
         at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:380)
         at javax.naming.InitialContext.lookup(InitialContext.java:392)
         at weblogic.messaging.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:172)
         ... 14 more
    My client code is extremely basic and is the following:
     import java.util.Hashtable;
     import javax.jms.Connection;
     import javax.jms.ConnectionFactory;
     import javax.jms.Destination;
     import javax.jms.JMSException;
     import javax.jms.MessageProducer;
     import javax.jms.Session;
     import javax.naming.Context;
     import javax.naming.InitialContext;

     public class Example {
         public static void main(String[] args) {
             String providerUrl = "t3://localhost:7003,localhost:7005";
             Hashtable<String, String> ht = new Hashtable<String, String>();
             ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
             ht.put(Context.PROVIDER_URL, providerUrl);
             try {
                 InitialContext ctx = new InitialContext(ht);
                 ConnectionFactory connFactory = (ConnectionFactory) ctx.lookup("connectionfactory");
                 Destination dest = (Destination) ctx.lookup("distributedqueue");
                 Connection conn = null;
                 Session session = null;
                 MessageProducer p = null;
                 try {
                     conn = connFactory.createConnection();
                     conn.start();
                     System.out.println("Thread:" + Thread.currentThread().getId()
                             + " got a connection " + conn.hashCode());
                     session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                     p = session.createProducer(dest);
                     System.out.println("Thread:" + Thread.currentThread().getId() + " started a connection");
                     for (int i = 0; i < 1000; i++) {
                         p.send(session.createTextMessage());
                     }
                     System.out.println("FinishedRunning:" + Thread.currentThread().getId());
                 } catch (JMSException e) {
                     e.printStackTrace();
                 } finally {
                     if (p != null) {
                         try { p.close(); } catch (JMSException e) { e.printStackTrace(); }
                     }
                     if (session != null) {
                         try { session.close(); } catch (JMSException e) { e.printStackTrace(); }
                     }
                     if (conn != null) {
                         try { conn.close(); } catch (JMSException e) { e.printStackTrace(); }
                     }
                 }
             } catch (Exception e) {
                 e.printStackTrace();
             }
         }
     }
    Edited by: svachary on Jan 8, 2009 1:01 PM

    Hi,
    I would suggest you go through these 2 links, which give you all the details about UDD:
    Topic:Steps to Configure Uniform Distributed Queue (UDQ) on Weblogic Server
    http://middlewaremagic.com/weblogic/?p=3747
    Topic:JMS Demo using WebLogic Uniform Distributed Queue (UDQ)
    http://middlewaremagic.com/weblogic/?page_id=1976
    Tips to solve UDQ issues
    - Make sure the Server Affinity Enabled parameter is un-checked (disabled); it is under [ Connection Factory -> Configuration (tab) -> Load Balance (sub-tab) ].
    - Disable the Server Affinity Enabled parameter for the connection factory which is being used by your UDQ.
    - Make sure all the managed servers are in the same cluster.
    - If the managed servers are on different boxes, make sure the listen address is given correctly; it is under [ Machine -> Configuration (tab) -> Node Manager (sub-tab) ].
    - Test whether you are able to PING the servers on the different boxes, and make sure that there are no network issues and you are able to communicate with the servers.
    Hope this helps you.
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic/
    Come, Join Us and Experience The Magic…

  • Ignore my previous Q. Clearer description of JMS Failover problem

              I'm using weblogic700.
              I have a cluster( say mycluster) and two weblogic servers ms1 and ms2 in this cluster.
              I have configured a connection factory and its target is assigned to my cluster.
              I have configured a JMS Server and its target is assigned to ms1.
              It has a destination myqueue.
              The listen address for ms1 and ms2 is host1
              The listen port for ms1 and ms2 are 7011 and 7021.
              Now I deploy a message driven bean to mycluster.
              I have a JMS client. Using it if I send a message with a PROVIDER_URL t3://host1:7011
              then the message is consumed.
              (the message is sent to myqueue and the message driven
              bean consumes messages at myqueue).
              Now what should I do if ms1 stops? How can I support failover (even if it
              requires some manual steps)?
              So my question is: how do I use the facility of migration of pinned services in
              this case?
              Also, as in the case of servlets and JSPs, where I can get a common entry point
              (the host and port number of the proxy server, which I got by configuring the HTTPClusterServlet),
              how can I get a common address which I will use in my client for sending my
              messages?
              I feel I am not supposed to use individual server addresses in sending my messages
              to my message driven bean deployed in the cluster.
              

              To be honest, the WebLogic document about multiple instances requires you to have multiple
              IPs (names) for the same machine, with each IP referring to one instance; this way, you can
              always use the same port. I am not sure how to implement this concept. Grab your
              system and network administrator to help you.
              "Aniket karmakar" <[email protected]> wrote:
              >
              > Thanks Mr Wendy Zhang...I have successfully tested migration of pinned
              >services(
              >IN MY CASE JMS)
              > With your concept of the entry point (t3://s1,s2,s3:p)I could test the
              >migration.But
              >the only point which still itches me is that I
              >cannot have two clustered servers on the same machine.In that case I cannot
              >have
              >the same port number for both of them.
              >
              >"Wendy Zhang" <[email protected]> wrote:
              >>
              >>For the common address, please check the detail from Weblogic 7 JNDI document.
              >>The
              >>last paragraph has a simple description. The address is the address you
              >>put in the
              >>Cluster address, for example, if you use comma seperated addresses and
              >you
              >>have three
              >>servers s1,s2 and s3 with port p, the common address appeared in your JMS
              >>client
              >>is t3://s1,s2,s3:p. The Weblogic library used in your client knows how
              >to
              >>handle
              >>this address.
              >>
              >>
              >>"Aniket" <[email protected]> wrote:
              >>>
              >>> I'm using weblogic700 .
              >>>
              >>>I have a cluster( say mycluster) and two weblogic servers ms1 and ms2
              >>in
              >>>this cluster.
              >>> I have configured a connection factory and its target is assigned
              >to
              >>>my cluster.
              >>> I have configured a JMS Server and its target is assigned to ms1.
              >>> It has a destination myqueue.
              >>>
              >>> The listen address for ms1 and ms2 is host1
              >>> The listen port for ms1 and ms2 are 7011 and 7021.
              >>>
              >>> Now I deploy a message driven bean to mycluster.
              >>>
              >>> I have a JMS client. Using it if I send a message with a PROVIDER_URL
              >>>t3://host1:7011
              >>>then the message is consumed.
              >>>(the message is sent to myqueue and the message driven
              >>>bean consumes messages at myqueue).
              >>>
              >>> Now what should I do If ms1 stops.How can I support failover( even
              >>it
              >>>requires
              >>>some manual steps.)
              >>> So my Question is: How do use the facilty of migration of pinned services
              >>>in
              >>>this case?
              >>>
              >>> Also as in the case of servlet and jsp where I can get a common entry
              >>>point(
              >>>the host and port number of the proxy server which I got by configuring
              >>>the HTTPClusterServlet)
              >>>
              >>> how can I get a common address which I will use in my client for
              >sending
              >>>my
              >>>messages ?
              >>>
              >>> I feel I am not supposed to use individual server addresses in sending
              >>>my messages
              >>> to my message driven bean deployed in cluster.
              >>
              >
              

  • How to send to Weblogic 9.1 JMS queue from EJB running in Glassfish?

    Hi.
    I hope this is the correct forum.
    I need to send a message to a jms queue on a remote weblogic 9.1 server. To this end I believe I will have to:
    1) Install one or more weblogic client libraries/jar files on my glassfish application server. The weblogic server expects communication in the "t3" protocol, which I believe tunnels iiop, jms, etc. inside its own proprietary protocol
    2) Define the jms queue in my glassfish application server with a queue connection factory that creates the queue using classes from the weblogic jars?
    3) Otherwise code the EJB methods sending messages as I would code and send messages to any jms queue
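    [Editor's note: a minimal sketch of what step 3 might look like once steps 1 and 2 are in place, assuming the WebLogic t3 client jar is on the GlassFish classpath; the host, port and JNDI names below are made up for illustration.]
    import java.util.Hashtable;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class RemoteWlsSender {
        public static void sendToWebLogic(String text) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://wlshost:7001"); // remote WebLogic 9.1 server
            InitialContext ctx = new InitialContext(env);

            QueueConnectionFactory qcf =
                    (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // hypothetical
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");                        // hypothetical

            QueueConnection conn = qcf.createQueueConnection();
            try {
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createSender(queue).send(session.createTextMessage(text));
            } finally {
                conn.close();
            }
        }
    }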
    It's the first two steps that have me baffled. The more I dig in glassfish administrator's manual, the more confused I become.
    Anybody have any idea if/how this can be accomplished?
    Some reference to where I may find information as to how to accomplish this?
    Thank you,
    Jacob.

    Jacob,
    I suggest you try, either a GlassFish forum (http://forums.java.net/jive/forum.jspa?forumID=56), or perhaps the GenericJMSRA project https://genericjmsra.dev.java.net .This forum is primarily about the Sun JMS provider, Java MQ or Open MQ -- or the JMS Specification.
    -- Ed Bratt

  • Load balancing In weblogic 6.1 JMS Cluster

              Hi,
              I have implemented distributed queues in WebLogic as suggested in the JMS Performance
              Guide, but the problem is that of the 3 queues in the cluster, the messages always end
              up going to the same queue and the remaining 2 queues are empty. I want
              to perform load balancing, but I think this is available only in WebLogic 7.0. Can
              anyone suggest an alternative workaround?
              

    I wonder if the JNDINameReplicated setting is taking effect.
              Are there any warning or error messages in your logs?
              Can you see statistics for all three JMS servers on the console?
              Does it work if you use the "./a" syntax?
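              [Editor's note: for readers unfamiliar with it, the "./a" syntax refers to WebLogic's server-relative createQueue() path, where the name is resolved against the JMS server the session is connected to rather than through JNDI. A hedged sketch, with an illustrative queue name; the "JMSServerName/QueueName" form is the same mechanism.]
              import javax.jms.JMSException;
              import javax.jms.Queue;
              import javax.jms.QueueSession;

              public class LocalQueueLookup {
                  // Resolves the queue against the local JMS server, bypassing JNDI replication.
                  static Queue lookupLocal(QueueSession session) throws JMSException {
                      return session.createQueue("./MyQueue"); // "MyQueue" is illustrative
                  }
              }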
              illusionz70 wrote:
              > hi,
              > 3 machines are in a cluster.all three machines have their different jms servers
              > and queues under them but target teh same connection factory.therefore all three
              > servers target 3 different machines which are in a cluster. i have weblogic 6.1
              > to which i applied the repective CR(patch) which enables the JNDIREPLICATED parameter.i
              > set this parameter to false.as a result i am able to monitor the 3 queues which
              > have the same jndi names but under different servers.but the problem is when i
              > lookup the queues from client using the machine's ip address(not the cluster ip),
              > the results are haywire.... even though i target say a ip of 30 , the msg goes
              > to ip 20... i dont know why..do you have any idea.
              > NOTE : i dont use the ./a ( where queue is the quename ) syntax in createqueue.
              > i look up the queue using the JNDI name.
              > thanks
              > Tom Barnes <[email protected].bea.com>
              > wrote:
              >
              >>Make sure that your context host and connection
              >>factory (CF) host are actually the same as what
              >>you're expecting.
              >>
              >>How, exactly, are your clients:
              >> (A) looking up their context
              >> (B) looking up their CF
              >>
              >>What are the targets for the CF?
              >>What are the URLs for the servers?
              >>What is the URL the clients are using? How is it
              >>load-balanced among the servers?
              >>Are the servers clustered?
              >>
              >>Tom
              >>
              >>illusionz70 wrote:
              >>
              >>
              >>>hi
              >>> i have implemented distributed queues in weblogic as suggested in
              >>
              >>the JMS performance
              >>
              >>>guide.but the problem is that of the 3 queues in cluster the messages
              >>
              >>always end
              >>
              >>>up going to the same queue and the remaining 2 remainin queues are
              >>
              >>empty. i want
              >>
              >>>to perfomr load balancing but i think this ios available only in weblogic
              >>
              >>7.0.can
              >>
              >>>anyone suggest any alternative workaround ??
              >>
              >
              

  • Mule, Weblogic and MQ JMS : deadlock problem

    Dear Oracle community,
    We are hosting our Mule ESB (3.1) application on a Weblogic 10.3 (11g) server and are using IBM Websphere MQ's JMS solution (with libraries version 7.0.1.7).
    The problem we are facing is that JMS connections are created by one of Weblogic's worker threads, and the close() method for those connections is not necessarily called by the same thread.
    This is bad because, from what I know, this behavior is undefined per the JMS specification
    (see http://docs.oracle.com/cd/E15051_01/wls/docs103/jms/design_best_practices.html#wp1061413 ), and it is a blocker issue in our case because it leads to a deadlock.
    Does someone have any idea how to enforce that the same thread creates and closes the connection, through Weblogic and/or Mule configuration (without re-implementing the connector)?
    Thanks in advance for your help,
    Best regards,
    Y.
    PS : I've already posted this question on Mule ESB's forum : http://forum.mulesoft.org/mulesoft/topics/mule_weblogic_and_mq_jms_deadlock_problem
    Edited by: user7428803 on May 14, 2012 2:26 PM
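
    [Editor's note: one generic way to get what the question asks for, independent of Mule, WebLogic or MQ specifics, is to confine the connection's whole lifecycle to a single-thread executor, so create and close always run on the same thread. A hedged sketch, not the poster's code; see the reply below for why this may not actually be necessary.]
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;

    public class SingleThreadJmsRunner {
        private final ExecutorService jmsThread = Executors.newSingleThreadExecutor();
        private final ConnectionFactory factory;
        private Connection connection; // only ever touched from jmsThread

        public SingleThreadJmsRunner(ConnectionFactory factory) {
            this.factory = factory;
        }

        public void open() {
            jmsThread.submit(new Runnable() {
                public void run() {
                    try {
                        connection = factory.createConnection();
                        connection.start();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });
        }

        public void close() {
            // close() runs on the same executor thread that called createConnection().
            jmsThread.submit(new Runnable() {
                public void run() {
                    try {
                        if (connection != null) connection.close();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            jmsThread.shutdown();
        }
    }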

    I hope you found a solution by now, but as an FYI:
    * The JMS specification specifically requires that JMS providers support the ability to call connection.close() and session.close() from an arbitrary thread even when another thread is making calls on these objects. It furthermore goes into some detail about the expected behavior of these calls. The best practices link you cited alludes to this: "The JMS Specification states that multi-threading a session, producer, consumer, or message method results in undefined behavior except when calling close()."
    * Depending on your use case, you may not need to use Mule to integrate MQ with WebLogic. WebLogic supports a variety of options for integrating MQ directly without the use of third party tooling. See http://download.oracle.com/docs/cd/E17904_01/web.1111/e13727/interop.htm .
    Tom

  • Small weblogic.jar for jms/j2ee clients of weblogic

              We are trying to create a smaller footprint for the ~38M weblogic.jar for distribution
              to our client applications to use the JMS and J2EE features of Weblogic 7.0sp2.
              I attempted to use the whitepaper document distributed by BEA for creating a smaller
              jar file, but it did not work. Has anyone else in the user community successfully
              created the jar file and if so could they give me some insight on how they did
              it.
              Thanks,
              Ashish
              

    Hi Ashish,
              I've personally used the "URL" class loader option with success,
              and I know that several customers have also used this option, as
              well as the other options for years. Feel free to
              post more detail than "it did not work", and I may be able
              to help you out.
              Tom, BEA
              P.S. If 8.1 is an option, you may wish to consider using the
              thin client jars it supplies.
              Ashish Bisarya wrote:
              > We are trying to create a smaller footprint for the ~38M weblogic.jar for distribution
              > to our client applications to use the JMS and J2EE features of Weblogic 7.0sp2.
              > I attempted to use the whitepaper document distributed by BEA for creating a smaller
              > jar file, but it did not work. Has anyone else in the user community successfully
              > created the jar file and if so could they give me some insight on how they did
              > it.
              >
              > Thanks,
              > Ashish
              

  • WebLogic 10.0 JMS Thin Client and JVM 1.4

    As mentioned in [WebLogic JMS Thin Client|http://download.oracle.com/docs/cd/E11035_01/wls100/client/jms_thin_client.html#wp1026979], it can be used on a JVM 1.4 client, but it seems that wljmsclient.jar and wlclient.jar are compiled using the Java 1.5 compiler with no 1.4 compatibility.
    Where can I get a 1.4-compiled version of these jars for WebLogic 10?
    Edited by: user10385140 on 02.10.2008 2:32

    Hi,
    The doc is correct that the 1.4 JVM is supported for thin 10.0 clients, but note that 1.4 is not supported for 10.3 (the latest version). If you confirm that there's a problem, I recommend contacting customer support. Meanwhile, as a work-around, you can use a client jar from an earlier version (such as 9.2 at the latest MP).
    The latest updated version of the 10.0 client doc is at http://edocs.bea.com/wls/docs100/client/basics.html, the link you provided points to an older version of the edoc.
    You might want to look at using a generated "full client" rather than a thin client unless a smaller jar size is important in your use case. The reasoning is stated in the updated edoc.
    Regards,
    Tom Barnes
    WebLogic JMS Developer Team
    Edited by: TomB on Oct 2, 2008 6:52 AM
