JMS failover

          We have an external EDI service posting XML data over HTTP to a WebLogic JMS server.
          We have 2 physical machines which run WebLogic and JMS; our EDI can only feed
          to one server, so if that server goes down, what would be the best way to automatically
          fail over to the other WebLogic JMS, since the JMS server cannot be clustered? We are
          in fact using a database-based queue.
          Thanks
          Anil Jacob
          

          Hi Anil,
          you are correct that automatic migration is not supported in WebLogic Server, so
          you will need to take care of this yourself.
          We have written a little Java program that runs on the same machine that our admin
          server is running on. This program has two purposes: one is to restart the admin
          server in case it goes down. This is because we need to trigger the
          migration through the admin server. The second is to poll the managed servers
          and migrate when needed.
          The methods that handle the migration are located in weblogic.management.runtime.MigratableServiceCoordinatorRuntimeMBean.
          So as an example you could get a reference to this MBean by writing:
          // Helper and MBeanHome come from the weblogic.management package
          MBeanHome home = Helper.getMBeanHome(wls_user, wls_password, admin_url, admin_server_name);
          MigratableServiceCoordinatorRuntimeMBean mrt = (MigratableServiceCoordinatorRuntimeMBean)
              home.getRuntimeMBean("the-MigratableServiceCoordinator", "MigratableServiceCoordinatorRuntime");
          mrt.migrate(migratableTargetMBean, destinationServer, false, true);
          We are using BEA's typed interfaces for this, but you can also use "pure" JMX calls;
          I was in a hurry and this seemed quicker.
          Below is the javadoc for the migrate method that BEA's support provided us.
          * Migrate all services deployed to the migratableTarget to the destination
          * server. Use this method if either the source or the destination or both are
          * not running.
          * Precondition:
          * The migratableTarget must contain at least one server.
          * The destination server must be a member of the migratableTarget's list of
          * candidate servers.
          * If automatic migration mode is disabled, the destination server must not
          * be the currently hosting server (i.e. head of candidate list of the
          * migratableTarget).
          * Postcondition:
          * If automatic migration mode is disabled and if the migration succeeded,
          * the head of the candidate server list in the migratableTarget will be the
          * destination server.
          * @param migratableTarget - all services targeted to this target are to be
          * migrated to the destination server.
          * THIS MUST BE A CONFIG MBEAN
          * @param destination - the new server where the services deployed to
          * migratableTarget shall be activated
          * @param sourceUp - the currently active server is up and running. If false,
          * the administrator must ensure that the services deployed
          * to migratableTarget are NOT active.
          * @param destinationUp - the destination server is up and running.
          void migrate(MigratableTargetMBean migratableTarget, ServerMBean destination,
          boolean sourceUp, boolean destinationUp) throws MigrationException;
          If you take a look at weblogic.management.configuration.MigratableTargetMBean,
          it has a method named getHostingServer(). This method returns a ServerMBean which
          tells us on which server the JMS services are bound. This is the same as the "CurrentServer"
          in the admin console. These objects have names like "appserver1 (migratable)" (where
          appserver1 would be replaced with the name of your managed server). This can
          be a bit confusing: if you look at the admin server log you can sometimes see that
          it is migrating "appserver1 (migratable)" to appserver1, which might look strange (well,
          it did for me :)). Just remember that "appserver1 (migratable)" is just the name
          of the migratable target, and that object has the "CurrentServer" attribute.
          The "CurrentServer" is the server where the services are right now, and it is
          that server that we are migrating the services from.
          I think that your idea of having a backup JMS server running on a second WLS instance
          will work just fine. You should be able to poll the server which is hosting the
          JMS service, and if it goes down you migrate the migratable target to the second
          server. You could stop there and manually "reset" the configuration to your
          starting state, or fully automate it so that the second server takes over and the
          first becomes your backup.
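          To make that concrete, here is a rough, untested sketch of such a poller (this is
          not our actual program). The credentials, URLs, server names and the configuration
          MBean lookups are placeholders/assumptions, so check the method and type names
          against the weblogic.management javadoc for your release before using anything like it:

          import weblogic.management.Helper;
          import weblogic.management.MBeanHome;
          import weblogic.management.configuration.MigratableTargetMBean;
          import weblogic.management.configuration.ServerMBean;
          import weblogic.management.runtime.MigratableServiceCoordinatorRuntimeMBean;

          public class JmsMigrationPoller {

              public static void main(String[] args) throws Exception {
                  // Placeholder admin server connection details
                  MBeanHome home = Helper.getMBeanHome("weblogic", "password",
                          "t3://adminhost:7001", "adminserver");

                  // Config MBeans for the migratable target and the two candidate servers.
                  // The lookup type strings here are assumptions -- verify them for your release.
                  MigratableTargetMBean target = (MigratableTargetMBean)
                          home.getConfigurationMBean("appserver1 (migratable)", "MigratableTarget");
                  ServerMBean server1 = (ServerMBean) home.getConfigurationMBean("appserver1", "Server");
                  ServerMBean server2 = (ServerMBean) home.getConfigurationMBean("appserver2", "Server");

                  MigratableServiceCoordinatorRuntimeMBean coordinator =
                          (MigratableServiceCoordinatorRuntimeMBean) home.getRuntimeMBean(
                                  "the-MigratableServiceCoordinator", "MigratableServiceCoordinatorRuntime");

                  while (true) {
                      // getHostingServer() is the "CurrentServer" you see in the admin console
                      ServerMBean hosting = target.getHostingServer();
                      if (!isReachable(hosting)) {
                          ServerMBean destination =
                                  hosting.getName().equals(server1.getName()) ? server2 : server1;
                          // sourceUp=false: the hosting server is down.
                          // destinationUp=true: we assume the other managed server is running.
                          coordinator.migrate(target, destination, false, true);
                      }
                      Thread.sleep(30 * 1000); // poll every 30 seconds
                  }
              }

              // Crude liveness check: try to open a socket to the managed server's listen port.
              // A real poller would rather look at the server's runtime state, and would have to
              // handle an empty ListenAddress.
              private static boolean isReachable(ServerMBean server) {
                  try {
                      java.net.Socket s = new java.net.Socket(server.getListenAddress(), server.getListenPort());
                      s.close();
                      return true;
                  } catch (Exception e) {
                      return false;
                  }
              }
          }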
          Just ask if some of this isn't very clear.
          /Daniel
          "Anil Jacob" <[email protected]> wrote:
          >
          >Daniel,
          >
          >That was a pretty good explanation. Thanks. Our application vendor does
          >not support distributed queues, therefore right now that option might not
          >be used in our case; however, the migratable option for JMS that you
          >described looks good. Is migration an automatic process? I don't think so,
          >what do you say?
          >What I want to achieve given the limited options is to have a backup
          >WebLogic JMS which would come alive once the primary goes down. In this
          >case your idea of a script looks good. Can you give me more details on the
          >script that polls the managed server and migrates JMS in case of a failure?
          >
          >Thankx
          >Anil
          >
          >[email protected] (Daniel Bevenius) wrote:
          >>Hi,
          >>
          >>What you could do is have a distributed destination as the previous
          >>post suggested. This distributed destination could then have one
          >>member that is pinned to a JMS server on one of your managed
          >>servers. This JMSServer should have its target set to appserver1
          >>(migratable).
          >>
          >>
          >>In your case you say that you can only target one server with the
          >>EDI feed, which means that you will need to manually migrate the JMS
          >>services to the second server.
          >>
          >>We had to create a program that polls the servers to see if they are
          >>alive. If one goes down we migrate the jms services automatically to
          >>the server that is still up.
          >>
          >>You stated that JMS servers could not be clustered. In our setup we
          >>have a JMSServer on each managed server (we have two in a cluster).
          >>According to BEA support the problem with automatic failover is with
          >>consumers that bind to a distributed destination (note that this is not
          >>so for producers).
          >>When a client binds to a distributed destination it will get a binding
          >>to one of the members of that distributed destination, which depends on
          >>the load balancing policy (we use round robin). Let's say that the
          >>client binds to a JMS server on our first managed server. If this
          >>server crashes the client will still have a binding to the first
          >>managed server. What we are migrating is the binding to a different
          >>JMS server.
          >>
          >>Don't know if this helps you at all, but let me know and I'll try to
          >>explain it better.
          >>
          >>/Daniel
          >>
          >>
          >>
          >>
          >>"Anil Jacob" <[email protected]> wrote in message news:<[email protected]>...
          >>> We have an external EDI service posting XML data over HTTP to a WebLogic
          >>> JMS server. We have 2 physical machines which run WebLogic and JMS; our EDI
          >>> can only feed to one server, so if that server goes down, what would be the
          >>> best way to automatically fail over to the other WebLogic JMS, since the JMS
          >>> server cannot be clustered? We are in fact using a database-based queue.
          >>>
          >>> Thanks
          >>> Anil Jacob
          >
          

Similar Messages

  • JMS Failover with Distributed Destinations in 7.0

              How does JMS failover with distributed destinations in WL 7.0?
              In an environment using file stores for persistent messages, can a working server
              automatically pick up unprocessed and persisted messages from a failed server?
              If so, what's the best way to set this up?
              Or, is this completely manual? In other words, we have to bring up a new server
              pointing to the location of the file store from the failed server?
              

              It appears that two JMSServers cannot share the same file store and, I'm assuming,
              two file stores cannot be using the same directory for persistence.
              So the HA you're talking about is something like Veritas automatically restarting
              a server (or starting a new one) to process the messages in the persistent queue
              that were unprocessed at the time of failure with the file store residing on some
              sort of HA disk array.
              The key point is that a message once it arrives at a server must be processed
              by that server or, in the case of failure of that server, must be processed by
              a server similarly configured to the one that failed so that it picks up the unprocessed
              messages. The message can't be processed by another server in the cluster.
              Or, is there some trick that could be employed to copy from the file store of
              the failed server and repost the messages to the still operating servers?
              "Zach" <[email protected]> wrote:
              >Unless you have some sort of HA framework/hardware, this is a manual
              >operation. You either point to the existing persistent storage (shared
              >storage or JDBC connection pool), or you move the physical data.
              >
              >_sjz.
              >
              >"Jim Cross" <[email protected]> wrote in message
              >news:[email protected]...
              >>
              >>
              >> How does JMS failover with distributed destinations in WL 7.0?
              >>
              >> In an environment using file stores for persistent messages, can a working
              >> server automatically pick up unprocessed and persisted messages from a failed
              >> server? If so, what's the best way to set this up?
              >>
              >> Or, is this completely manual? In other words, we have to bring up a new
              >> server pointing to the location of the file store from the failed server?
              >
              >
              

  • JMS Failover & Load balancing.

    Hi,
    I have 4 managed servers A, B, C, D on 4 physical boxes. We have one JMS server on Box D; all other managed servers use this single JMS server on D, and if it goes down we lose all messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers and 4 file stores, one for each managed server. My question is: is WebLogic intelligent enough that if a client connects to the Box B JMS server and that server goes down, the message will be sent to another JMS server?

    ravi tiwari wrote:
    Hi,
    I have 4 managed servers A, B, C, D on 4 physical boxes. We have one JMS server on Box D; all other managed servers use this single JMS server on D, and if it goes down we lose all messages. I want to have JMS failover in my environment. I suggested having 4 JMS servers and 4 file stores, one for each managed server. My question is: is WebLogic intelligent enough that if a client connects to the Box B JMS server and that server goes down, the message will be sent to another JMS server?
    You don't mention if you're running in a clustered environment or what
    version of WLS you're using, so I've assumed a cluster and WLS 8.1
    For resiliency, you should really have 4 JMS servers, one on each
    managed server. Then each JMS server has its own filestore on the
    physical managed server machine.
    So, you have JMSA, JMSB, JMSC, JMSD with FileStoreA, FileStoreB,
    FileStoreC & FileStoreD.
    You should also look at using JMS distributed destinations as described
    in the documentation.
    In your current environment, if server D goes down, you not only lose
    your messages, your application also loses access to your JMS queues.
    If you use distributed destinations, and have 4 JMS servers, your JMS
    queues will still be available if a single server goes down.
    If a server does go down however, you have to follow the JMS migration
    procedures to migrate the JMS service from the failed server to a
    running one.
    There are conditions to this process, which are best found out from the
    migration documentation to be honest, rather than describe it here.
    We use this setup, and it works fine for us. We've never had to use JMS
    migration, as so far we haven't had anything serious to cause us to need
    to migrate. Our servers also boot from a SAN which makes our resilience
    processes simpler.
    Hope that helps,
    Pete
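
    As a rough illustration of the producer side of this (the distributed queue stays reachable as long as at least one member is up), a minimal standalone sender might look like the sketch below. The provider URL and JNDI names are made-up placeholders:

    import java.util.Hashtable;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class DistributedQueueSender {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // List all four managed servers so the lookup works as long as one of them is up
            env.put(Context.PROVIDER_URL, "t3://boxA:7001,boxB:7001,boxC:7001,boxD:7001");
            Context ctx = new InitialContext(env);

            // Cluster-targeted connection factory and the distributed queue (placeholder JNDI names)
            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

            QueueConnection con = cf.createQueueConnection();
            QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);

            // The send is routed to one member (JMSA..JMSD) according to the load-balancing policy
            sender.send(session.createTextMessage("hello"));

            sender.close();
            session.close();
            con.close();
            ctx.close();
        }
    }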

  • JMS Failover Implementation With Cluster Consisting Of Four Servers

    Hi All,
    I mistakenly posted the following thread in the WebLogic general area. It should be here. Can anyone help, please?
    [Document for JMS Failover Implementation On WebLogic|http://forums.oracle.com/forums/thread.jspa?threadID=900391&tstart=15]
    Could you please just reply here.
    Thanks :)


  • Document for JMS Failover Implementation On WebLogic

    Hi,
    I am looking for some good links and techniques to implement JMS failover using WebLogic 10.3.
    Failover [as we do with our databases (the concept of clustering)].
    The system will consist of two app servers and each will have its own application deployments, but if one fails for some reason the application messages should be redirected to the other server, and vice versa.
    The above definition is very brief, but if anyone can help with some good documents and info on how to implement it, it will be appreciated.
    Thanks :-)

    Thanks a lot guys for your help. We successfully implemented it on our servers here by creating distributed queues targeting all servers in a cluster.
    One point which I think is worth mentioning and want to share with everyone here: when the app server [where the MDB will finally post the message after retrieving it from the queue] goes down, what happens, and what will the MDB do with that message?
    We implemented a DLQ (error destination) and deployed one more MDB_DLQ_SERVER2 (let's say App SERVER1 is down) which gets triggered when any message arrives on the DLQ and posts that message to some other app server. Say a message has been read by MDB_SERVER1 on SERVER1, but of course the actual server is down, so the message gets redirected to its error destination after its expiration period or whatever the settings are. The DLQ (error destination) is also a distributed destination, again targeting all servers in the cluster, the same as the actual request and reply queues, BUT MDB_DLQ_SERVER2, which is deployed on SERVER2, is NOT able to read this message. It gets triggered but cannot access the message.
    After debugging for almost a day we found out that this is because the message has been transferred to the DLQ but actually resides in FILESTORE_SERVER1, and MDB_DLQ_SERVER2 is not able to access it.
    To work around that we had to define MDB_DLQ_SERVER1 to cater for SERVER1 failure and MDB_DLQ_SERVER2 to cater for SERVER2 failure.
    The reason I am mentioning this is that, as I said, the DLQ is also a normal distributed queue, but at the same time it is NOT as distributed as that suggests.
    Hope you all understand what I just wrote above.
    Now I need to implement exactly the same scenario using four separate physical machines containing my four servers. I tried this by creating four machines where the node manager for each server is running and listening, but when I try to start the servers I get a Certificate Exception with a bad user name and password. Anyway, I have seen some posts here regarding this, so I think I'll be fine.
    Thanks again,
    Sheeraz
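
    For reference, a bare-bones EJB3-style MDB for one of those DLQ readers might look like the sketch below. The destination JNDI name and the re-posting logic are placeholders; on WLS 10.3 the destination can alternatively be wired up through weblogic-ejb-jar.xml instead of mappedName:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Placeholder DLQ reader; one such MDB per server was needed in the setup described above.
    @MessageDriven(
            mappedName = "jms/ErrorDestination",   // JNDI name of the error destination (placeholder)
            activationConfig = {
                    @ActivationConfigProperty(propertyName = "destinationType",
                                              propertyValue = "javax.jms.Queue")
            })
    public class DlqRedirectMdb implements MessageListener {

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String body = ((TextMessage) message).getText();
                    // Re-post the payload to an app server that is still up
                    // (the actual re-posting logic is application specific and omitted here).
                    repostToAvailableServer(body);
                }
            } catch (Exception e) {
                throw new RuntimeException("Failed to handle DLQ message", e);
            }
        }

        private void repostToAvailableServer(String body) {
            // application-specific re-delivery goes here
        }
    }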

  • Weblogic 8.1 JMS Failover

    Hi,
              In the Weblogic documentation with regards to JMS and Clusters it is stated that "Automatic failover is not supported by WebLogic JMS in this release." In my case, this release is 8.1.2. Can anyone clarify exactly what this means?
              Thanks,
              Aoife

              Not sure whether it is just me or not - the new security stuff in WebLogic 8.1
              just makes life so much tougher.
              Thanks for the suggestion anyways.
              Eric
              Tom Barnes <[email protected].bea.com>
              wrote:
              >Even though it's meant for foreign providers, perhaps credential
              >mapping would work? See:
              >
              >http://edocs.bea.com/wls/docs81/ejb/message_beans.html#1151409
              >
              >Also, you might want to try posting to the
              >security and/or ejb newsgroups.
              >
              >Tom
              >
              >P.S. This question has come up before, so it seems likely
              >that the security section of the MDB documentation
              >may need more detail. If you post any feedback here,
              >I'll make sure it gets sent directly to the
              >documentation folks...
              >
              >Eric Ma wrote:
              >
              >> I have a JMS Topic living in one WebLogic 8.1 domain and an MDB that
              >> listens to this JMS Topic living in another domain. Do I need to configure
              >> a trusted domain relationship for both domains?
              >
              

  • Ignore my previous Q. Clearer description of JMS failover problem

              I'm using WebLogic 7.0.0.
              I have a cluster (say mycluster) and two WebLogic servers ms1 and ms2 in this cluster.
              I have configured a connection factory and its target is assigned to my cluster.
              I have configured a JMS Server and its target is assigned to ms1.
              It has a destination myqueue.
              The listen address for ms1 and ms2 is host1
              The listen port for ms1 and ms2 are 7011 and 7021.
              Now I deploy a message driven bean to mycluster.
              I have a JMS client. Using it, if I send a message with a PROVIDER_URL of t3://host1:7011
              then the message is consumed (the message is sent to myqueue and the message-driven
              bean consumes messages at myqueue).
              Now what should I do if ms1 stops? How can I support failover (even if it requires
              some manual steps)?
              So my question is: how do I use the facility of migration of pinned services in
              this case?
              Also, as in the case of servlets and JSPs where I can get a common entry point
              (the host and port number of the proxy server which I got by configuring the
              HTTPClusterServlet), how can I get a common address which I will use in my client
              for sending my messages?
              I feel I am not supposed to use individual server addresses when sending messages
              to my message-driven bean deployed in the cluster.
              

              To be honest, the WebLogic documentation about multiple instances requires you to have multiple
              IPs (names) for the same machine, with each IP referring to one instance; this way, you can
              always use the same port. I am not sure how to implement this concept. Grab your
              system and network administrator to help you.
              "Aniket karmakar" <[email protected]> wrote:
              >
              > Thanks Mr Wendy Zhang... I have successfully tested migration of pinned
              >services (IN MY CASE JMS).
              > With your concept of the entry point (t3://s1,s2,s3:p) I could test the
              >migration. But the only point which still itches me is that I cannot have
              >two clustered servers on the same machine. In that case I cannot have the
              >same port number for both of them.
              >
              >"Wendy Zhang" <[email protected]> wrote:
              >>
              >>For the common address, please check the detail in the Weblogic 7 JNDI
              >>document. The last paragraph has a simple description. The address is the
              >>address you put in the Cluster address; for example, if you use comma
              >>separated addresses and you have three servers s1, s2 and s3 with port p,
              >>the common address appearing in your JMS client is t3://s1,s2,s3:p. The
              >>Weblogic library used in your client knows how to handle this address.
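
              A short sketch of a client lookup using that common cluster address; the host names, port and JNDI name are placeholders:

              import java.util.Hashtable;
              import javax.jms.QueueConnectionFactory;
              import javax.naming.Context;
              import javax.naming.InitialContext;

              public class ClusterAddressLookup {
                  public static void main(String[] args) throws Exception {
                      Hashtable env = new Hashtable();
                      env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                      // Common entry point: servers s1, s2 and s3, all listening on the same port p (here 7001)
                      env.put(Context.PROVIDER_URL, "t3://s1,s2,s3:7001");
                      Context ctx = new InitialContext(env);

                      // The cluster-targeted connection factory can be reached through any live server
                      QueueConnectionFactory cf =
                              (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                      System.out.println("Looked up: " + cf);
                      ctx.close();
                  }
              }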
              

  • Tom !!! Help needed in JMS Topic Lookup Cluster DOMAIN

    Hi Tom,
              We have a clustered domain with 2 managed servers and our application is an applet-servlet based application. We use JMS Topics for some of our online blotters.
              And on looking up we get the following error,
              weblogic.jms.common.JMSException: <055053> <java.rmi.RemoteException: CORBA INTERNAL 1398079712 Maybe; nested exception is: org.omg.CORBA.INTERNAL: vmcid: SUN minor code: 224 completed: Maybe >      
              weblogic.jms.client.JMSConnectionFactory.setupJMSConnection(JMSConnectionFactory.java:272)at weblogic.jms.client.JMSConnectionFactory.createConnectionInternal(JMSConnectionFactory.java:299)at weblogic.jms.client.JMSConnectionFactory.createTopicConnection(JMSConnectionFactory.java:198)at
              Pls note that while looking up the initial context, since it is a cluster environment, we look up using comma-separated IP addresses.
              Pls note, we use JRE 1.4.2_13 for the applet to launch.
              Earlier, for JMS failover, I also sought your help and your suggestions have been really invaluable.
              Request you to provide me some more suggestions on this issue as well.
              Regards
              Suresh

    The exception is thrown by the JVM's built in Sun IIOP (CORBA) libraries. I don't know the cause, but a google search for "SUN minor code: 224 completed: Maybe" yields a single hit that might help. See the July 23/24 posts for "Re: iiop: error with bidirectional connections" in the weblogic rmi-iiop newsgroup:
              http://forums.bea.com/thread.jspa?threadID=300001651
              Tom

  • Best Practice when deploying a single mdb into a cluster

    At a high level, we are converting all of our components to Weblogic processes that use Stateless Session Beans and Message Driven Beans. All of the weblogic processes will be clustered, and all of the topic queues will be distributed (Uniform Distributed Topics / Queues).
              We have one component that is a single MDB reading from a single queue on 1 machine. It is a requirement that the JMS messages on that queue be processed in order, and the processing of messages frequently requires that the same row in the DB be updated. Does anyone have any thoughts on the best design for that in our clustered environment?
              One possible solution we have come up with (not working):
              Possible Solution 1: Use a distributed topic and enforce a single client via client-id on the connection factory, causing a single consumer.
              1.Deploy a uniform-distributed Topic to the cluster.
              2.Create a connection factory with a client-id.
              3.Deploy a single FooMDB to the cluster.
              Problem with Solution 1: WL allows multiple consumers on Topic with same client-id
              1.Start (2) servers in cluster
              2.FooMDB running on Server_A connects to Topic
              3.FooMDB running on Server_B fails with unique id exception (yeah).
              4.Send messages - Messages are processed only once by FooMDB on Server_A (yeah).
              5.Stop Server_A.
              6.FooMDB running on Server_B connects automatically to Topic.
              7.Send messages - Messages are processed by FooMDB on Server_B (yeah).
              8.Start Server_A
              9.FooMDB successfully connects to Topic, even though FooMDB on Server_B is already connected (bad). Is this a WL bug or our config bug??
              10.Send messages - Messages are processed by both FooMDB on Server_A and Server_B (bad). Is this a WL bug or our config bug??
              Conclusion: Does anyone have any thoughts on the best design for that in our clustered environment? and if the above solution is doable, what mistake might we have made?
              Thank you in advance for your help!
              kb

    Thanks for the helpful info Tom.
              Kevin - It seems that for both the MDB, and the JMS provider, there are (manual or scripted) actions to be taken during any failure event + failure probes possibly required to launch these actions...?
              In the case of the JMS provider, the JMS destination needs to be migrated in the event of managed-server or host failure; if this host is the one that also runs the Admin server then the Admin server also needs to be restarted on a new host too, in order that it can become available to receive the migration instructions and thus update the config of the managed server which is to be newly targeted to serve the JMS destination.
              In the case of the MDB, a deployment action of some sort would need to take place on another managed-server, in the event of a failure of the managed server or the host, where the original MDB had been initially deployed.
              The JMS Destination migration actions can be totally avoided by the use of another JMS implementation which has a design philosophy of "failover" built into it (for example, Tibco EMS has totally automatic JMS failover features) and could be accessed gracefully by using Weblogic foreign JMS. The single MDB deployed on one of the Weblogic managed servers in the cluster would still need some kind of (possibly scripted) redeployment action, and on top of this, there would need to be some kind of health check process to establish if this re-deployment action was actually required to be launched. It is possible that the logic and actions required just to establish the true functional health of this MDB could themselves be as difficult as the original design requirement :-)
              All of this suggests that for the given requirement the BEA environment is not well suited; and if no other environment or JMS provider is available at your site, then a manipulation of the process itself may be required to enable it to be handled in a highly-available way which can be gracefully administered in a Weblogic cluster.
              We have not discussed the message payload design and the reasons that message order must be respected - by changing the message payload design and possibly adding additional data, this requirement "can", "in certain circumstances", be avoided.
              If you can't do that, I suggest you buy a 2 node Sun Cluster with shared HA storage and use this to monitor a simple JMS client java program that periodically checks for items on the Queue. The Tibco EMS servers could also be configured on this platform and give totally automatic failover protection for both process and host failure scenarios. With the spare money we can go to the pub.
              P.S. I don't work for Tibco or Sun and am a BIG Weblogic fan :-)

  • SNMP Support in 6.0

    Release notes state SNMP support is not included in 6.0 but in a future
    release any ideas when this future release will be out??

    Is there any list floating around the Inet regarding what other features will be in
    SilverSword, like Topic clustering, automatic JMS failover, etc.?
    Thanks
    Ensell Lee wrote:
    Hello,
    The SNMP agent is currently scheduled for our next release - internally
    code named "SilverSword" - which is scheduled to be released Q2 2001.
    Thanks for your patience
    Ensell Lee
    Product Manager
    WebLogic Server
    Larry Presswood wrote:
    Release notes state SNMP support is not included in 6.0 but in a future
    release any ideas when this future release will be out??

  • WLS questions

    What are the practical advantages of using a hardware cluster (SUN, VERITAS) for JMS failover, instead of WebLogic migratable queues? I mean, what are the real benefits in terms of reliability and downtime reduction?
    What gives better performance and reliability for JMS persistence, an RDBMS or file persistence?
    In generating a proxy.jar client library to call a WebLogic Server web service, how can I target the client to a different server? Is there a way to prevent the address of the web service from being hardcoded in the proxy.jar?
    Thanks


  • Jms in cluster / load balancing and failover

              Did I get it right???
              I have 1 admin server and 4 managed servers in 2 clusters, a development cluster
              and a test cluster.
              I now want to have load balancing with my JMS server and I want to be able to migrate
              my JMS server in case of failure.
              For each cluster I have created:
              a connectionFactory targeted to the cluster
              a distributed destination with 2 queue members
              one JMSServer migratable-targeted to managed server 1, with destination 1 from
              the distributed destination
              one JMSServer migratable-targeted to managed server 2, with destination 2 from
              the distributed destination
              I expect this to give load balancing between the 2 servers in the cluster, and
              I can migrate the JMS server to the running server if one of the servers fails.
              One thing now... if one server fails and I migrate the JMS server to the other
              server that is running, and I then restart the server that was down, what happens
              then? Do I then have 3 JMS servers???
              [config.xml]
              

              "Kris" <[email protected]> wrote in message news:[email protected]...
              >
              > "Kawaljit Singh Sunny" <[email protected]> wrote:
              > >
              > >"Kris" <[email protected]> wrote in message
              > >news:[email protected]...
              > >>
              > >> Did I get it right???
              > >>
              > >> I have 1 admin server and 4 managed servers in 2 clusters, a development
              > >> cluster and a test cluster.
              > >> I now want to have load balancing with my JMS server and I want to be
              > >> able to migrate my JMS server in case of failure.
              > >>
              > >> For each cluster I have created
              > >> a connectionFactory targeted to the cluster
              > >> a distributed destination with 2 queue members
              > >> one JMSServer migratable-targeted to managed server 1, with destination 1
              > >> from the distributed destination
              > >> one JMSServer migratable-targeted to managed server 2, with destination 2
              > >> from the distributed destination
              > >
              > >If the server where your JMSConnections are load balanced to goes down, the
              > >producers and consumers using this JMSConnection are closed.
              > >You have to recreate these producers and consumers.
              > >If the server where your Destination resides goes down, the consumers are
              > >closed.
              > >If the producer's JMSConnection is not on this server, the producer stays
              > >up.
              > >
              > >> I expect this to give load balancing between the 2 servers in the cluster,
              > >> and I can migrate the JMS server to the running server if one of the
              > >> servers fails.
              > >>
              > >> One thing now... if one server fails and I migrate the JMS server to the
              > >> other server that is running, and I then restart the server that was down,
              > >> what happens then? Do I then have 3 JMS servers???
              > >
              > >No, you still have 2 JMSServers. JMS migration is manual.
              >
              > You say: No, you still have 2 JMSServers. JMS migration is manual.
              >
              > But if I manually migrate the JMS server that was down to the running WLS
              > server, which already has one JMS server running, this WLS server must then
              > have 2 JMS servers. And when I boot the WLS server that hosted the JMS
              > server that was down, this will now have a running JMS server. Isn't that
              > 3 JMS servers?
              Once you migrate a JMSServer from WeblogicServer1 to WeblogicServer2,
              and then you boot WeblogicServer1, the JMSServer which was migrated should
              NOT be on WeblogicServer1.
              (You have migrated the JMSServer from WeblogicServer1 to WeblogicServer2.)
              >
              > But I was thinking that I could spare the migration part. If I have 2 WLS
              > servers and a JMS server on each of them, and a distributed destination with
              > 2 queue members that are persistent in a database, then if a WLS or just a
              > JMS server goes down, I just have to reboot the server and it will run again.
              > This way I don't have to think about migration, or what?
              Yes, that is true.
              Irrespective of whether you have migration or not, the only thing you need to
              take care of is to reconnect to the WebLogic server if the server where your
              JMSConnection is load balanced to goes down.
              There is no failover of JMSConnections. Producers inside this JMSConnection
              will be closed. You will have to create a new JMSConnection and a new
              Producer and continue with your production of JMS messages.
              -sunny
              

  • Problem in Automated Failover in JMS Clustering

    Hi,
              I am facing a problem with JMS clustering; let me explain the scenario.
              I have 2 managed servers participating in the WebLogic cluster. Since JMS is a singleton service, what I did is create 2 JMS servers and target them to managed servers 1 and 2 respectively. I have also created a DistributedTopic.
              Now in my case an applet is the JMS client. I have registered the exception listener on both the topic connection and the WLSession. When I bring down one managed server forcefully (Ctrl-C), the callback method onException is supposed to be called, which has a reconnect method that tries to get the connection, but it does not happen in this clustered environment.
              I have checked the same thing without a cluster, i.e. with a single server (the admin server), and there it does call the onException callback method.
              What is wrong with this approach?
              Another query, just to add on to this: is this the right approach to achieve automated failover in a clustered environment, or do I have to go with the migratable servers concept (not clear on this though)?
              Thanks in advance.
              Suresh

    I think that is correct. You must stop all the servers of the cluster, or know which of the two your client is using. You can use the administration console to monitor the status of the destinations of the JMS server.
              I have a similar problem, but using two different kinds of receiver.
              In my case I created two different types of client, asynchronous and synchronous.
              The first type registers itself as a MessageListener and also as an ExceptionListener on the connection. When I bring down the managed server to which the client was connected, the callback method onException is called, as the BEA documentation explains.
              The second client instead registers itself as an ExceptionListener but not as a MessageListener on the connection. It calls, in a different execution thread, the receive method on the receiver created using the connection.
              In this case, when I bring down the managed server to which the client is connected, the callback method onException was NOT called; instead the client received a JMSException on all the receive() calls.
              Is this correct? I expected the behaviour of the 2 types of client to be the same!
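
              A bare-bones sketch of the reconnect-on-exception pattern being discussed here (the asynchronous case); the URL and JNDI names are placeholders, and a real client would also have to recreate its sessions, subscribers and listeners in the reconnect step:

              import java.util.Hashtable;
              import javax.jms.ExceptionListener;
              import javax.jms.JMSException;
              import javax.jms.Session;
              import javax.jms.Topic;
              import javax.jms.TopicConnection;
              import javax.jms.TopicConnectionFactory;
              import javax.jms.TopicSession;
              import javax.naming.Context;
              import javax.naming.InitialContext;

              public class ReconnectingTopicClient implements ExceptionListener {

                  private TopicConnection connection;

                  // Called by the provider when the connection to the serving server is lost
                  // (the asynchronous/MessageListener case described above).
                  public void onException(JMSException e) {
                      System.err.println("Connection lost: " + e.getMessage() + " - reconnecting...");
                      connect();
                  }

                  public void connect() {
                      while (true) {
                          try {
                              Hashtable env = new Hashtable();
                              env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                              // Cluster address so the lookup succeeds through whichever managed server is still up
                              env.put(Context.PROVIDER_URL, "t3://managed1:7001,managed2:7001");
                              Context ctx = new InitialContext(env);

                              TopicConnectionFactory cf =
                                      (TopicConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                              Topic topic = (Topic) ctx.lookup("jms/MyDistributedTopic");

                              connection = cf.createTopicConnection();
                              connection.setExceptionListener(this);
                              TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                              // Recreate the subscriber and register the MessageListener here (application specific)
                              session.createSubscriber(topic);
                              connection.start();
                              return; // connected
                          } catch (Exception retry) {
                              try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
                          }
                      }
                  }

                  public static void main(String[] args) {
                      new ReconnectingTopicClient().connect();
                  }
              }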

  • JMS servers' data failover

    hi,
    In a cluster with two managed servers (SRV1, SRV2), each one running its own JMS server and distributed queues: when one server goes down while some producer is still sending messages, under normal circumstances the producer will be reconnected to another server and the remaining messages will go to that server. My question is, if before the crash there were 50 pending messages on the affected server, shouldn't they be transferred to the running server (after the crash) so the client could read them before the crashed server comes online again? All servers use a shared drive for their persistence store. Or does this data migration have to be done manually along with the JMS service migration?
    All I could find in the documentation is that there is automatic JMS producer failover, but there is no information on what happens to the data in the persistence store, or how to make it available to a client connected to another server running the same distributed queue.

    I believe you need to either start the server in-place or fail-it-over using something like whole-server-migration or service-level migration to recover those messages. There is no automatic mechanism to make them magically move from the persistent store of failed server to a server that is already running.
    http://edocs.bea.com/wls/docs100/cluster/migration.html#wp1039659

  • Doubt in WLS 7.0 JMS Cluster Failover

    I configured 2 JMS servers JMS1 and JMS2. I also configured 2 JMS file stores FS1 and FS2. JMS server JMS1 using FS1 and JMS2 using FS2. I also configured 1 distributed queue DQ for the cluster server. I also configured 1 local queue LQ1 in JMS1 and 1 local queue LQ2 in JMS2.
              When the JMS2 server failed, I migrated it to the migratable target JMS1. The file store FS2 and all transaction log files were copied to the JMS1 server with the same path. I first migrated the JTA service and then the JMS service. The migration procedure worked with no problem. But the messages in JMS2 did not get processed by the migratable target JMS1.
              What was wrong in my configuration? In my migration procedure? Or in my file store? Or anything else? I knew I should use shared disk storage for the file store; I just wanted to simulate the problem and try cluster failover.
              

              Chan Tong wrote:
              > Tom Barnes <[email protected]> wrote:
              >
              >>Chan Tong wrote:
              >>
              >>>I configured 2 JMS servers JMS1 and JMS2. I also configured 2 JMS file
              >>>stores FS1 and FS2. JMS server JMS1 using FS1 and JMS2 using FS2. I
              >>>also configured 1 distributed queue DQ for the cluster server. I also
              >>>configured 1 local queue LQ1 in JMS1 and 1 local queue LQ2 in JMS2.
              >>>
              >>>When the JMS2 server failed, I migrated it to the migratable target
              >>>JMS1. The file store FS2 and all transaction log files were copied
              >>>to the JMS1 server with the same path. I first migrated the JTA service
              >>>and then the JMS service. The migration procedure worked with no problem.
              >>>But the messages in JMS2 did not get processed by the migratable
              >>>target JMS1.
              >>>
              >>>What was wrong in my configuration? In my migration procedure? Or in
              >>>my file store? Or anything else? I knew I should use shared disk storage
              >>>for the file store. I just wanted to simulate the problem and try cluster
              >>>failover.
              >>
              >>Need more information. What do you mean by "the messages in JMS2
              >>did not get processed"? Do JMS2's message counts
              >>show up in console monitoring?
              >>
              >
              > After migrating JMS2 to JMS1, the messages in the JMS2 file store were not
              > processed by JMS1. Could it be because I manually copied the file store? Or
              > will I need to use a shared file store that can be accessed from both servers?
              Either way, yes. The migrated JMS server still needs to be able to
              find its store. Another way to do this is to use a JDBC store - as
              long as the remote database is highly available and your
              applications can tolerate the likely JMS performance reduction.
              
