Different behaviour in JMS Cluster automatic failover

Hi,
          I am having a problem with JMS clustering; let me explain the scenario.
          I have 2 managed servers participating in the WebLogic cluster. Since JMS is a singleton service, I created 2 JMS servers and targeted them to managed servers 1 and 2 respectively. I have also created a distributed destination and deployed it with the deployment "wizard" ("autodeploy") to all the members of the cluster.
          Now, in my case, I created two different types of client:
          asynchronous and synchronous.
          The first one registers itself as a MessageListener and also as an ExceptionListener. When I bring down the managed server to which the client is connected, the callback method onException is called.
          The second client instead registers itself as an ExceptionListener but not as a MessageListener. It calls the receive method on the destination in a different thread.
          In this case, if I bring down the managed server to which the client is connected, the callback method onException is NOT called; instead I receive a JMSException on every "receive" call.
          I expected the behaviour to be the same as for the first client.
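          To make the two patterns concrete, here is a minimal sketch of roughly what my two clients do (the JNDI names, timeout and threading details here are illustrative, not my actual code):

          import javax.jms.*;
          import javax.naming.InitialContext;

          public class TwoClientPatterns {
              public static void main(String[] args) throws Exception {
                  InitialContext ctx = new InitialContext(); // assumes JNDI provider properties point at the cluster
                  ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // illustrative name
                  Destination dest = (Destination) ctx.lookup("jms/MyDistributedDestination");      // illustrative name

                  Connection con = cf.createConnection();
                  // Both clients register an ExceptionListener on the connection.
                  con.setExceptionListener(e -> System.err.println("onException: " + e));

                  // Client 1 (asynchronous): a MessageListener is driven by the provider,
                  // so a broken connection is reported through onException.
                  Session asyncSession = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageConsumer asyncConsumer = asyncSession.createConsumer(dest);
                  asyncConsumer.setMessageListener(msg -> System.out.println("async got " + msg));

                  // Client 2 (synchronous): a blocking receive() in its own thread;
                  // here a broken connection typically surfaces as a JMSException thrown by receive().
                  Session syncSession = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageConsumer syncConsumer = syncSession.createConsumer(dest);
                  new Thread(() -> {
                      try {
                          while (true) {
                              Message m = syncConsumer.receive(5000);
                              if (m != null) System.out.println("sync got " + m);
                          }
                      } catch (JMSException je) {
                          System.err.println("receive() failed: " + je);
                      }
                  }).start();

                  con.start();
              }
          }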
          Thanks in advance.
          dani

It's not clear from your description what you're trying to do, as typical apps use a single module, including those that use distributed destinations, and typical apps do not use the convention of specifying a module name in their JNDI name. (The "!" syntax makes me suspect that you're not using JNDI to look up destinations; rather, you're using the rarely recommended JMS session "createQueue()" call.)
Nevertheless, I suspect the problem is simply that you're using a distributed queue and haven't realized that queue browsers and consumers pin themselves to a single queue member. To ensure full coverage of a distributed queue, the best practice is to use a WebLogic MDB: WebLogic MDBs automatically ensure that each queue member has consumers (see the sketch below).
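For illustration, a minimal EJB3-style MDB sketch (the destination JNDI name is an assumption; on WebLogic the destination is typically bound via mappedName or in weblogic-ejb-jar.xml):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// "jms/MyDistributedQueue" is an assumed JNDI name -- point it at your distributed queue.
@MessageDriven(mappedName = "jms/MyDistributedQueue", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class DistributedQueueMDB implements MessageListener {
    public void onMessage(Message msg) {
        // The container maintains a consumer pool on each distributed queue member,
        // so every member gets drained without hand-written receive() loops.
        System.out.println("processing " + msg);
    }
}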
By the way, if you are using distributed queues, then the best practice config is as follows for each homogeneous set of JMS servers (a config sketch follows the list):
-- Configure a custom WL store per server, target to the server's default migratable target.
-- Configure a JMS server per server, target to the server's default migratable target, set the store for the JMS server to be the same as the custom store.
-- Configure a single JMS module, target to the cluster.
-- Configure a single subdeployment for the module that references each JMS server (and nothing else).
-- Configure one or more distributed queues for the module. Never use default targeting -- instead use advanced subdeployment targeting to target each distributed queue to the single subdeployment you defined earlier.
-- Configure one or more custom connection factories in the module, use default targeting.
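For illustration, here is roughly what that targeting looks like (the names are made up, and the per-server store/JMS server pieces are omitted) -- first the config.xml fragment, then the module descriptor it references:

<jms-system-resource>
  <name>MyJmsModule</name>
  <target>MyCluster</target>
  <sub-deployment>
    <name>MyJmsServers</name>
    <target>JMSServer1,JMSServer2</target>
  </sub-deployment>
  <descriptor-file-name>jms/MyJmsModule-jms.xml</descriptor-file-name>
</jms-system-resource>

<!-- jms/MyJmsModule-jms.xml -->
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <!-- Connection factory uses default targeting (the whole cluster). -->
  <connection-factory name="MyCF">
    <default-targeting-enabled>true</default-targeting-enabled>
    <jndi-name>jms/MyCF</jndi-name>
  </connection-factory>
  <!-- Distributed queue is targeted at the subdeployment that references the JMS servers. -->
  <uniform-distributed-queue name="MyUDQ">
    <sub-deployment-name>MyJmsServers</sub-deployment-name>
    <jndi-name>jms/MyUDQ</jndi-name>
  </uniform-distributed-queue>
</weblogic-jms>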
I recommend that you read through the JMS admin and programmer's guides in the edocs if you haven't done so already. You might find that the JMS chapter of the new book "Professional Oracle WebLogic" is helpful.
Tom
Edited by: TomB on Nov 4, 2009 10:12 AM

Similar Messages

  • Problem setting up JMS Automatic Failover in WebLogic 9.1

    I received the following error when trying to configure server migration using Sybase DB to store leasing tables information:
              <java.sql.SQLException:JZ0S3:> The inherited method executeUpdate(String) cannot be used in this subclass.
              at com.sybase.jdbc2.jdbc.ErrorMessage.raiseError(ErrorMessage.java:556)
              at com.sybase.jdbc2.jdbc.SybPreparedStatement.executeUpdate(SybPreparedStatement.java:122)
              at weblogic.jdbc.wrapper.Statement.executeUpdate(Statement.java:433)
              at weblogic.cluster.singleton.DatabaseLeasingBasis.renewAllLeases(DatabaseLeasingBasis.java:118)
              at weblogic.cluster.singleton.DatabaseLeasingBasis.sendHeartbeat(DatabaseLeasingBasis.java:94)
              I'm using the Sybase type 4 driver, version 5.x. I also experimented with the Sybase type 4 driver, version 6.x, but got the same error. I tried to use the WebLogic Sybase driver and got a different error (SQLException: SQLState(HY000)).
              I have used the Sybase driver version 5.x successfully with our application in WL9.1. It seems the calls to renew all leases in WL9.1 need to change from PreparedStatement to Statement, since the Sybase PreparedStatement does not support executeUpdate(String).
              I need to resolve this problem ASAP. We are upgrading from 8.1 to WL9.1 so that we can take advantage of the automatic failover, but so far it is not working. Please help.
              Thanks.

    I'd contact [email protected]. I believe this is a known bug when using Sybase.
              -- Rob
              WLS Blog http://dev2dev.bea.com/blog/rwoollen/

  • Configuring Automatic Failover for EPM Planning Cluster

    We are trying to test automatic failover using a Planning(11.1.2.2)/weblogic cluster containing 2 physical servers and a Weblogic proxy plug-in for OHS.
    I understand that to enable this we must configure in-memory replication of HTTP session state, and to do this (according to various sources, including ID 779350.1) the weblogic.xml file must include a descriptor set up as follows:
    <session-descriptor>
       <session-param>
        <param-name>PersistentStoreType</param-name>
        <param-value>replicated</param-value>
    </session-param>
    </session-descriptor>
    Where should weblogic.xml be created or amended (if it already exists) for a Planning cluster in a standard scaled-out EPM deployment in order to effect failover between the two servers?
    Thanks

    Yes, it can be load balanced in the Hyperion registry, I believe; I have seen it done once. The only drawback is that if a JVM goes down while processing a request it needs to be started manually; however, the URL will switch over automatically.

  • Automatic failover not currently supported by WebLogic JMS?

              Hi,
              does this apply to Weblogic 8.1 as well? When will automatic failover be implemented?
              Thanks,
              A.
              

    Hi,
              You can setup a "node manager" process to automatically
              restart a failing server, but there is no automatic
              fail-over to a new machine.
              To achieve automatic failover use
              third party HA framework software and disk replication
              software such as is supplied by Veritas. I'm not aware
              of any plans to add automated fail-over to the next release.
              Tom

  • Manual or Automatic failover?

    For a SQL Server (2005/2008/R2) cluster, after a failover happens from one node to another, how do I tell whether it was an automatic failover or somebody failed it over manually? This would help further troubleshoot what caused the failover. Please consider both the Windows Server 2003 and Windows Server 2008 scenarios. Thanks.

    Hi yarkandstar,
    Based on your description, you can check the SQL Server error logs to find out whether it was an automatic or a manual failover in a SQL Server cluster. In the error logs, if SQL Server restarted multiple times on the same node and then came online on the other node, it was most likely an automatic failover. If SQL Server is running on a different node after the first restart, it was a manual failover.
    Also, as mentioned in Balmukund's post, you can check the cluster log to know whether SQL Server had an automatic or a manual failover.
    For more details about checking the cluster log in a Windows Server 2003 cluster, please review this blog:
    How to find whether SQL Server had an Automatic Failover or a Manually initiated Failover in cluster.
    For more details about checking the cluster log in a Windows Server 2008 cluster, please review this similar thread:
    Need to know if it was a manual or an automatic failover - Windows Server 2008 R2.
    Thanks,
    Lydia Zhang

  • Automatic failover doesn't failback to the first server if the second server is lost.

    Hi Everybody,
       We use the database mirroring a lot in our product solutions and we have recently experienced a strange behaviour in our failover tests with SQL2008R2.
    We have 2 servers running Windows 2008 R2 standard and SQL 2008 R2 standard SP2. (let's call them DB1 and DB2)
    We also have a witness workstation running SQL 2008 Express on Windows 7.
    A database from DB1 is mirrored to DB2 in "safety full" mode, with witness. At this stage, the database is principal on DB1 and mirror on DB2
    To test the automatic failover, we first restart the DB1 server which has the database in principal mode
    After a few seconds, the database on DB2 becomes principal, which is normal , that's exactly what we want.
    After a few minutes, DB1 comes back online and its database takes the mirror role (still OK). At this stage then, the database is principal on DB2 and mirror on DB1
    when the monitoring application shows that the mirror is synchronized and that both servers are connected to the witness, we restart DB2 to trigger an automatic failover to DB1.
    What we see is that DB1 never takes the principal role and the database stays in mirror.
    In the DB1 Errorlog, I only see these 2 lines when DB2 disappears, no other message related to the mirroring session.
    2014-01-22 08:57:26.91 spid43s     Starting up database 'Test123'.
    2014-01-22 08:57:26.95 spid43s     Bypassing recovery for database 'Test123' because it is marked as a mirror database, which cannot be recovered. This is an informational message only. No user action is required.
    When DB2 comes back online, the database on DB2 keeps its principal status and the database on DB1 stays mirror.
    And what is really strange is that if I restart DB2 once again, directly after that, DB1 fails over normally and the database on DB1 takes the principal role after a few seconds, without any configuration change between the two restarts.
    DB1 errorlog shows then :
    2014-01-22 09:00:37.53 spid29s     Error: 1474, Severity: 16, State: 1.
    2014-01-22 09:00:37.53 spid29s     Database mirroring connection error 4 'An error occurred while receiving data: '64(The specified network name is no longer available.)'.' for 'TCP://DB2:5022'.
    2014-01-22 09:00:37.53 spid18s     Database mirroring is inactive for database 'Test123'. This is an informational message only. No user action is required.
    2014-01-22 09:00:42.37 spid32s     The mirrored database "Test123" is changing roles from "MIRROR" to "PRINCIPAL" due to Auto Failover.
    2014-01-22 09:00:42.39 spid32s     Recovery is writing a checkpoint in database 'Test123' (7). This is an informational message only. No user action is required.
    2014-01-22 09:00:42.39 spid32s     Recovery completed for database Test123 (database ID 7) in 78 second(s) (analysis 0 ms, redo 0 ms, undo 7 ms.) This is an informational message only. No user action is required.
    So, to summarize:
    - the first failover from DB1 to DB2 always works
    - then, a restart of DB2 never fails over to DB1
    - a second restart of DB2 always fails over to DB1
    This is pretty much systematic on one of our server pairs.
    Any explanation for this, or any idea where I can look to find the reason for this strange behavior?
    Thanks a lot for your help
    Seb

    Thank you Tom
    But I have already checked that and reported the Errorlog abstracts in my original post.
    When DB01 disappears for the first time, there is nothing in the DB01 ERRORLOG (it is restarting :-) )
    AND no particular error message in the DB02 ERRORLOG (nothing related to the fact that DB01 is not reachable anymore!).
    Only these two lines:
    2014-01-22 08:57:26.91 spid43s     Starting up database 'Test123'.
    2014-01-22 08:57:26.95 spid43s     Bypassing recovery for database 'Test123' because it is marked as a mirror database, which cannot be recovered. This is an informational message only. No user action is required.
    So my main question remains: why doesn't DB02 detect that DB01 has disappeared (and only the first time), and why doesn't the failover mechanism trigger the failover?
    Thank you
    Seb

  • Unplanned automatic failover using Hyper-v Replica , why don't the VMs start up automatically on the replica server?

    Hi,
    We have cluster with two hosts (Host01 , host02) replicated to another server (Replica01)
    In order to test automatic failover to the replica server (Replica01), we unplugged the power cables from Host01 and Host02.
    Now the VMs on the replica server are still off. Why don't the VMs start up automatically on the replica server?
    Ramy Shaker

    "overall there is no automatic failover in Hyper-V"
    Of course there is. It's enabled by Failover Clustering. This is a totally separate technology from Hyper-V Replica.
    There is no automatic start up in Hyper-V Replica because it is not designed to detect a split-brain condition where the same virtual machine is running in multiple locations simultaneously. The replica site has no way to know why it can't reach the primary
    system anymore. It might just be because someone unplugged a network cable. If the primary's virtual machines are still running and the replica decides to spin up its copies, you will have many troubles.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • HACMP Clustering Script for SAP ECC 6.0 (SR1) - Automatic Failover-Oracle10

    Hello,
    I have installed SAP ECC 6.0 (SR1) under AIX 5.3 / Oracle 10g in an HACMP clustering environment. Manual failover is working fine. The ASCS and database instances are installed on a shared drive with a virtual IP and virtual name. The central instance and dialog instance are installed locally on Node A and Node B. I want to get an HACMP clustering script (automatic failover script) for automation. Please help me if you have one.
    Thanks
    Gautam Poddar

    Here are HA stop & start scripts that you should be able adapt for your particular circumstances. Based on earlier versions of SAP / Oracle but assume should be a reasonable guide
    Script to start SAP is start_sap_prd
    #!/bin/ksh
    #
    # Script:   /usr/local/bin/cluster/start_sap_prd
    # Comments: HACMP Application START script for PRD
    #
    # NOTE: the forum formatting stripped the names out of several ${...}
    # variable references; the ${ORASID}/${ORAUSR}/${SAPADM}/${RG}/${DEVHOST}
    # substitutions below are reconstructions based on the variables defined
    # in this script, so check them against your environment.

    # Show me obvious information in hacmp.out
    banner "Starting"
    banner "PRD SAP"

    # Set the oracle and sap owner.
    ORASID="PRD"
    SAPADM="prdadm"
    ORAUSR="oraprd"
    VIRTUALHOST="vhost"
    DEVHOST="vhostdev"

    # Get the volume groups for this resource group
    RG=$( /usr/es/sbin/cluster/utilities/cllsgrp | grep -i ${ORASID} )
    VG_LIST=$( /usr/es/sbin/cluster/utilities/cllsres -g ${RG} | \
            grep "VOLUME_GROUP=" | \
            awk -F\" '{ print $2 }' )

    # Check the transport directory is mounted.
    if mount | grep -w "/usr/sap/trans"
      then
            print "Transport directory is already mounted."
      else
            cd /tmp
            print "Attempting a background mount of the transport directory."
            # The NFS host was lost in the forum formatting; ${DEVHOST} is assumed here.
            nohup mount -o intr,bg,soft ${DEVHOST}:/usr/sap/trans1 /usr/sap/trans &
    fi

    # Start SAP and Oracle
    # Start listener
    su - ${ORAUSR} -c "/rprd/oracle/PRD/920_64/bin/lsnrctl start"
    rc=$?
    if [ $rc != 0 ]
      then
            echo "ERROR: Listener failed to start\n"
    fi

    # Start database
    su - ${ORAUSR} -c "/rprd/oracle/PRD/bin/start_database_PRD.sh"
    sleep 20

    # Standard sapstart script
    su - ${SAPADM} -c "startsap ${VIRTUALHOST}"
    rc=$?
    if [ $rc != 0 ]
    then
            echo "ERROR: Failed to start SAP\n"
    fi
    exit 0
    The script to stop SAP is stop_sap_prd:
    #!/bin/ksh
    #
    # Script:      /usr/local/bin/cluster/stop_sap_prd
    # Dated:       01/11/06
    # Application: Oracle/SAP
    # Comments:    HACMP Application STOP script for SAP / Oracle PRD
    #
    # NOTE: as in the start script, the ${SAPADM}/${ORAUSR}/${VIRTUALHOST}
    # substitutions below are reconstructions of references lost in the
    # forum formatting.

    set -x

    # Show me obvious information in hacmp.out
    banner "stopping"
    banner "PRD SAP"

    # Set the oracle and sap owner.
    ORASID="PRD"
    SAPADM="prdadm"
    ORAUSR="oraprd"
    VIRTUALHOST="vhost"

    # Stop SAP/Oracle
    su - ${SAPADM} -c "stopsap ${VIRTUALHOST}"
    rc=$?
    if [ $rc != 0 ]
    then
            echo "ERROR: Failed to stop SAP and Oracle\n"
    fi

    # Stop SAP collector and Oracle listener.
    su - ${SAPADM} -c "/usr/sap/PRD/SYS/exe/run/saposcol -k"
    rc=$?
    if [ $rc != 0 ]
    then
            echo "ERROR: Failed to stop SAPOSCOL \n"
    fi

    su - ${ORAUSR} -c "/rprd/oracle/PRD/920_64/bin/lsnrctl stop"
    rc=$?
    if [ $rc != 0 ]
    then
            echo "ERROR: Listener failed to stop\n"
    fi

    if mount | grep -w "/usr/sap/trans"
      then
            print "Transport directory is mounted."
            /usr/es/sbin/cluster/events/utils/cl_nfskill -k -u /usr/sap/trans
            sleep 1
            /usr/es/sbin/cluster/events/utils/cl_nfskill -k -u /usr/sap/trans
            sleep 1
            umount -f /usr/sap/trans &
      else
            print "Transport directory is not mounted."
    fi
    exit 0

  • JMS cluster and happen JMS Queue Exception javax.naming.NameAlreadyBoundExc

    Hi,
    Sorry, I am not sure how to set up a JMS cluster in WLS 10.3.2. We have two managed servers on two machines, joined into one cluster. After configuring the JMS module and JMS servers, we found it only works on one server and fails on the other, with the error message below.
    Can anyone help me understand why one server succeeds and the other fails?
    javax.naming.NameAlreadyBoundException: JMS_Queue_misdel_a is already bound; remaining name ''
    at weblogic.jndi.internal.BasicNamingNode.bindHere(BasicNamingNode.java:357)
    at weblogic.jndi.internal.ServerNamingNode.bindHere(ServerNamingNode.java:140)
    at weblogic.jndi.internal.BasicNamingNode.bind(BasicNamingNode.java:317)
    at weblogic.jndi.internal.WLEventContextImpl.bind(WLEventContextImpl.jav
    ==> config for JMS
    <jms-server>
    <name>JMS_Server_cim_a</name>
    <target>ebowls05</target>
    <persistent-store xsi:nil="true"></persistent-store>
    <hosting-temporary-destinations>true</hosting-temporary-destinations>
    <temporary-template-resource xsi:nil="true"></temporary-template-resource>
    <temporary-template-name xsi:nil="true"></temporary-template-name>
    <message-buffer-size>-1</message-buffer-size>
    <expiration-scan-interval>30</expiration-scan-interval>
    </jms-server>
    <jms-server>
    <name>JMS_Server_cim_b</name>
    <target>ebowls06</target>
    <persistent-store xsi:nil="true"></persistent-store>
    <hosting-temporary-destinations>true</hosting-temporary-destinations>
    <temporary-template-resource xsi:nil="true"></temporary-template-resource>
    <temporary-template-name xsi:nil="true"></temporary-template-name>
    <message-buffer-size>-1</message-buffer-size>
    <expiration-scan-interval>30</expiration-scan-interval>
    </jms-server>
    <migratable-target>
    <name>ebowls06 (migratable)</name>
    <notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
    <user-preferred-server>ebowls06</user-preferred-server>
    <cluster>ebouatCluster</cluster>
    </migratable-target>
    <migratable-target>
    <name>ebowls05 (migratable)</name>
    <notes>This is a system generated default migratable target for a server. Do not delete manually.</notes>
    <user-preferred-server>ebowls05</user-preferred-server>
    <cluster>ebouatCluster</cluster>
    </migratable-target>
    <jms-system-resource>
    <name>JMS_ConnFactory_cim</name>
    <target>ebouatCluster</target>
    <descriptor-file-name>jms/JMS_ConnFactory_cim/JMS_ConnFactory_cim-jms.xml</descriptor-file-name>
    </jms-system-resource>
    <jms-system-resource>
    <name>JMS_Queue_promis</name>
    <target>ebouatCluster</target>
    <sub-deployment>
    <name>JMS_Queue_promis@JMS_Server_cim_a</name>
    <target>JMS_Server_cim_a</target>
    </sub-deployment>
    <sub-deployment>
    <name>JMS_Queue_promis@JMS_Server_cim_b</name>
    <target>JMS_Server_cim_b</target>
    </sub-deployment>
    <descriptor-file-name>jms/JMS_Queue_promis/JMS_Queue_promis-jms.xml</descriptor-file-name>
    </jms-system-resource>
    <jms-system-resource>
    <name>JMS_Template_cim</name>
    <target>ebouatCluster</target>
    <descriptor-file-name>jms/JMS_Template_cim/JMS_Template_cim-jms.xml</descriptor-file-name>
    </jms-system-resource>
    <jms-system-resource>
    <name>JMS_Queue_misdel_a</name>
    <target>ebouatCluster</target>
    <sub-deployment>
    <name>JMS_Queue_misdel_a@JMS_Server_cim_a</name>
    <target>JMS_Server_cim_a</target>
    </sub-deployment>
    <sub-deployment>
    <name>JMS_Queue_misdel_a@JMS_Server_cim_b</name>
    <target>JMS_Server_cim_b</target>
    </sub-deployment>
    <descriptor-file-name>jms/JMS_Queue_misdel_a/JMS_Queue_misdel_a-jms.xml</descriptor-file-name>
    </jms-system-resource>
    <jms-system-resource>
    <name>JMS_Queue_misdel_b</name>
    <target>ebouatCluster</target>
    <sub-deployment>
    <name>JMS_Queue_misdel_b@JMS_Server_cim_a</name>
    <target>JMS_Server_cim_a</target>
    </sub-deployment>
    <sub-deployment>
    <name>JMS_Queue_misdel_b@JMS_Server_cim_b</name>
    <target>JMS_Server_cim_b</target>
    </sub-deployment>
    <descriptor-file-name>jms/JMS_Queue_misdel_b/JMS_Queue_misdel_b-jms.xml</descriptor-file-name>
    </jms-system-resource>

    1 - JMS clustering is an advanced concept, and, in most cases, uses "distributed queues". In case you haven't already, I highly recommend reading the JMS chapter of the new book "Professional Oracle WebLogic" as well as the related chapters in the JMS Programmer's Guide in the edocs.
    2 - The basic problem in the config below is that you have two different queues with matching JNDI names targeted within the same cluster (see the sketch after point 4).
    3 - The config snippet supplied below does not include the queue configuration. Queue configuration is embedded within the referenced module files.
    4 - Please ensure that you follow configuration best practices, as per: http://download.oracle.com/docs/cd/E15523_01/web.1111/e13738/best_practice.htm#CACJCGHG
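    For illustration only (your module descriptor files are not shown above): one way to avoid binding the same JNDI name twice is a single uniform distributed queue targeted at one subdeployment that references both JMS servers, along these lines (the names here are hypothetical):

    <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
      <uniform-distributed-queue name="JMS_Queue_misdel">
        <sub-deployment-name>JMS_Servers_cim</sub-deployment-name>
        <jndi-name>jms/JMS_Queue_misdel</jndi-name>
      </uniform-distributed-queue>
    </weblogic-jms>

    where "JMS_Servers_cim" would be a single subdeployment in config.xml targeted at both JMS_Server_cim_a and JMS_Server_cim_b.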

  • Configure Solaris cluster to failover guest domain when NICs were down

    Hi,
    I am running Solaris 11 as the control domains on 2 clustered nodes running on Solaris Cluster 4. There is a Solaris 10 guest domain which is managed via the Solaris cluster in failover mode.
    2 virtual switches connected to 2 different network switches are presented to the guest domain. I would like to use link based IPMP to facilitate HA for the network connections. I understand that in this case the IPMP can only be configured within the guest domain. Now the question is how do I configure it in such a way that the guest domain fails over to the second cluster node (standby control domain) if both network interfaces are down? Thanks.
    Edited by: user12925046 on Dec 25, 2012 9:48 PM
    Edited by: user12925046 on Dec 25, 2012 9:49 PM

    The Solaris Cluster 4.1 Installation and Concepts Guides are available at:
    http://docs.oracle.com/cd/E29086_01/index.html
    Thanks.

  • Availability group Automatic failover

    Hi
    I set up a simple 2-node AG with synchronous commit (SQL 2014 Enterprise on Windows 2012 R2 Standard).
    If I set it to manual failover, everything works as expected. However, when I switch to automatic failover and stop the SQL service on the primary node, the AG resource in the cluster goes offline and doesn't fail over to the secondary node.
    Both nodes are available to the cluster resource.
    I would appreciate your feedback as to what might be the reason.
    Regards
    Shaunt

    Hi,
    I would first verify whether by "Database Availability Group" you mean an AlwaysOn Availability Group.
    How did you set the FailureConditionLevel?
    Whether the diagnostic data and health information returned by sp_server_diagnostics warrants an automatic failover depends on the failure-condition level of the availability group. The failure-condition level specifies which failure conditions trigger an automatic failover. There are five failure-condition levels, ranging from the least restrictive (level one) to the most restrictive (level five). For details about failure-condition levels, see:
    http://msdn.microsoft.com/en-us/library/hh710061.aspx#FClevel
    There are two useful articles may be helpful:
    SQL 2012 AlwaysOn Availability groups Automatic Failover doesn’t occur or does it – A look at the logs
    http://blogs.msdn.com/b/sql_pfe_blog/archive/2013/04/08/sql-2012-alwayson-availability-groups-automatic-failover-doesn-t-occur-or-does-it-a-look-at-the-logs.aspx
    SQL Server 2012 AlwaysOn – Part 7 – Details behind an AlwaysOn Availability Group
    http://blogs.msdn.com/b/saponsqlserver/archive/2012/04/24/sql-server-2012-alwayson-part-7-details-behind-an-alwayson-availability-group.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support
    Hi,
    Thanks for the reply.
    It's an AlwaysOn Availability Group.
    In my test lab, I have changed the quorum configuration to a file share witness and that has allowed an automatic failover when I turn the primary replica server off (rather than power it off).
    I'll take a look at the links you provided.
    Regards,
    Bob

  • Weblogic7/examples/clustering/ejb Automatic failover for idempotent methods ?

    This one should be easy since it is from the examples folder of bea 7 about
              clustering.
              Ref : \bea7\weblogic007\samples\server\src\examples\cluster\ejb
              I am referring to the cluster example provided with the weblogic server 7.0
              on windows 2000.
              I deployed the Admin server and 2 managed servers as described in the document.
              Everything works fine as shown by the example. I get both load balancing and
              failover. Too good.
              Client.java is using the while loop to manage the failover. So on exception
              it will go thru the loop again.
              I understand from the documentation that a stateless session EJB will
              provide automatic failover for idempotent stateless bean methods.
              Case Failover Idempotent : ( Automatic )
              If methods are written in such a way that repeated calls to the same method
              do not cause duplicate updates, the method is said to be "idempotent." For
              idempotent methods, WebLogic Server provides the
              stateless-bean-methods-are-idempotent deployment property. If you set this
              property to "true" in weblogic-ejb-jar.xml, WebLogic Server assumes that the
              method is idempotent and will provide failover services for the EJB method,
              even if a failure occurs during a method call.
              Now I made 2 changes to the code.
              1. I added the following to the weblogic-ejb-jar.xml of the Teller stateless EJB:
              <stateless-clustering>
              <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
              <stateless-bean-load-algorithm>random</stateless-bean-load-algorithm>
              <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
              </stateless-clustering>
              So I should get the automatic failover .............
              2. Also, I added a break statement in the catch block around line 230 in
              Client.java:
              catch (RemoteException re) {
                  System.out.println(" Error: " + re);
                  // Replace teller, in case that's the problem
                  teller = null;
                  invoke = false;
                  break;
              }
              So that the client program does not loop again and again.
              Now I compile and restart all three of my servers and redeploy the application
              (just to be sure).
              I start my client and I get automatic load balancing between the servers,
              which makes me happy.
              But Failover ....?
              I kill one of the managed application servers in the cluster at a particular
              test fail point.
              I expect the exception to be handled automatically by the error/failover
              handler in the home/remote stub.
              But the client program fails and terminates.
              1. What is wrong with the code?
              2. Does automatic failover with idempotent methods also have to be
              handled by coding a similar while loop for the stateless EJB?
              Your help will be appreciated ASAP.
              Let me know if you need anything more from my system. But I am sure this
              will be very easy as it is from the sample code.........
              Thanks
              

    Sorry, I meant to send this to the EJB newsgroup.
              dan
              dan benanav wrote:
              > Do any vendors provide for clustering with automatic failover of entity
              > beans? I know that WLS does not. How about Gemstone? If not is there
              > a reason why it is not possible?
              >
              > It seems to me that EJB servers should be capable of automatic failover
              > of entity beans.
              >
              > dan
              

  • Replicating simple Java objects for automatic failover

    Is there a way to replicate a simple java object that is bound from JNDI
              across all servers so that if the primary server fails, it will
              automatically failover?
              We have a java client that uses JNDI to access EJBs on WLS5.1 SP8. In order
              to determine client information, the client currently binds a simple java
              class in the JNDI tree. The Entity and Session beans use the caller
              principal to locate the object in order to access client-information for
              such things as record locking, logging, etc..
              We have to move this architecture to a cluster environment and we are
              wondering how we can replicate this object across cluster servers so that
              failover is handled automatically, and that it is still accessible through
              JNDI.
              An RMI replicated stub is not enough, since it only works as long as the
              server hosting the RMI object is alive.
              I'd like to add that the object is created and bound at client start-up and
              destroyed at client exit.
              Thank you for any advice or information,
              Dania Kodeih.
              

    A: Replicating simple Java objects for automatic failover

    That's what I figured. I guess the only solution in this case is to persist
              the object during client sessions. I was hoping for something simpler, but I
              guess I'll have to create an Entity Bean and everything else that comes with
              it.
              Thanks,
              Dania.
              Cameron Purdy wrote in message <[email protected]>...
              >Unfortunately, when the originating server goes down, the replicated object
              >disappears.
              >
              >Peace,
              >
              >--
              >Cameron Purdy
              >Tangosol, Inc.
              >http://www.tangosol.com
              >+1.617.623.5782
              >WebLogic Consulting Available
              >
              >
              >"Don Ferguson" <[email protected]> wrote in message
              >news:[email protected]..
              >> If I am not mistaken, any serializable object will automatically be
              >replicated
              >> across the tree.
              >>
              >> Dania Kodeih wrote:
              >>
              >> > Is there a way to replicate a simple java object that is bound from
              JNDI
              >> > across all servers so that if the primary server fails, it will
              >> > automatically failover?
              >> >
              >> > We have a java client that uses JNDI to access EJBs on WLS5.1 SP8. In
              >order
              >> > to determine client information, the client currently binds a simple
              >java
              >> > class in the JNDI tree. The Entity and Session beans use the caller
              >> > principal to locate the object in order to access client-information
              >> > ...


  • Different behaviour of Flash content when on server

    Hi
    I have noticed different behaviour of my Flash movie between two cases:
    a) checking the offline content (SWF) using the default view (Show All);
    b) checking the SWF in an HTML file on a server.
    More concretely: depending on a number of conditions I attach a movieclip to a certain object. This works fine in offline mode.
    Another example: depending on a certain zoom level of the application, I unload some SWFs.
    All works fine offline.
    When I check this online, the attach-movieclip function only works in some cases, and the unloading of the SWFs does not work.
    What can be the cause?
    Best regards
    eG

    Not sure about most of your questions -- this is all new stuff for me, too. But on this one item, I hope this helps:
    "For example it tells me that a plugin (which has already been installed) needs to be installed."
    If you installed the Mozilla browser after initially installing the Java plugin in IE, then the plugin needs to be installed within the Mozilla browser's plugins directory. And I have never been able to get Netscape or Firefox to install a Forms plugin properly. It seems like I need to run IE to get the plugin installed automatically. Then I go back to the other browser and all is OK. Looking into the Mozilla-based browsers' plugins folders, I can see that running the plugin install through IE also copies the corresponding .dll file into all the Mozilla-based browsers' plugins folders.
    But since you have already installed the plugin in IE, I am not sure how you would get it to work for the other browser. But if you can identify the .dll required, just copy it yourself into your Mozilla browser's plugins folder.

  • Weblogic JMS Cluster

    Hi,
              I have a 6.1 cluster that has a JMS server A and a JMS server B
              deployed and running on the two managed nodes. As the destination I
              created a topic with the same name on each JMS server. The connection
              factories I deployed to the cluster only.
              My problem is that when I start the second managed server, I get the
              following error:
              <Error> <Cluster> <Conflict start: You tried to bind an object under
              the name
              com.csg.pb.tit.tms.TMSSignalTopic in the JNDI tree. The object you
              have bound
              from 169.59.5.26 is non clusterable and you have tried to bind more
              than once
              from two or more servers. Such objects can only deployed from one
              server.>
              If I understand correctly, this happens because the JNDI tree gets
              distributed to all nodes of the cluster, so there would be two objects
              with the same name. How do I solve this situation? Do I have to
              specify different names for the topics in the different JMS servers running in a
              cluster? How would the load balancing work then?
              Please help. Thanks in advance
              Juerg
              

    Tom,
              Thanks for your help so far, I installed the patch and got things
              working. However I still have some open points where you might be able
              to help.
              Let's assume I don't have these smart forwarders and I have four
              topics per JMS server deployed to two managed nodes, with connection
              factories deployed to the cluster. When I connect N publishers
              through the cluster, messages from one publisher end up in the
              appropriate topic on nodeA, and messages from another publisher end up in
              the appropriate topic on nodeB. Now when I connect a durable
              subscriber via the cluster, I only get messages from the topics on one of
              the managed nodes, right? (This is exactly what I am seeing in my
              tests.) Connecting durable subscribers to each managed node is not
              possible because of JNDI (it throws an exception like
              InstanceAlreadyExists). How would I connect a subscriber so that it
              subscribes to the right topics on the right node (the one that works with
              the previously described publisher)?
              All this leads me to the conclusion that, without the forwarders, a
              JMS cluster with WebLogic 6.1 is not so powerful; it basically does
              just load balancing.
              Is this correct, or am I missing something important here?
              Thanks a lot and have a nice weekend
              Juerg
              Tom Barnes <[email protected]> wrote in message news:<[email protected]>...
              > Juerg Staub wrote:
              > > Tom,
              > >
              > > Thanks a lot. Bascially I did everything right, just need the patch.
              > >
              > > In the other hand I'd like to know what the benefits of the smart
              > > forwarders would be. When I understand correctly, every message would
              > > be forwarded to the appropriate topic in the different JMS servers.
              > > What would that bring in the case of a failure(one node of a cluster
              > > goes down)? As far as I can see, I still would need to 're-establish'
              > > the connection factory, topic session, topic and publisher in order to
              > > publish messages again?
              > >
              >
              > Yep. I think we are on the same page:
              >
              > The "smart-forwarders" would do what the 7.0 distributed topic
              > forwarders do for you. They would forward messages bound to
              > a particular physical topic to all instances of the topic.
              > This can be implemented via a durable subscription on a
              > member topic by each remote member topic's host.
              > MDBs could be used to service the durable subscription,
              > as they already have the reconnect logic built in.
              > (Durable subscriptions are used if you wish to guard against
              > lost messages). The forwarders
              > need to change a property on the message to indicate
              > that the message is already forwarded, and forward
              > only messages that have'nt been forwarded (to prevent
              > endless loops!).
              >
              >
              > > Thanks
              > >
              > > Juerg
              > >
              > > Tom Barnes <[email protected]> wrote in message news:<[email protected]>...
              > >
              > >>I suggest you read the "emulating 7.0 distributed destinations"
              > >>section of the JMS performance white-paper available on dev2dev.bea.com.
              > >>You will need to apply the referenced enhancement patch on top of SP3 to
              > >>disable JNDI replication (or update to 6.1SP4).
              > >>
              > >>If you need to create a true distributed topic, you will also need to
              > >>write your own "smart-forwarders" to forward messages
              > >>between the different physical instances of the topic. Or simply
              > >>use WL JMS 7.0 (the upgrade from 6.1 is straight-forward).
              > >>
              > >>Tom
              > >>
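               To make the quoted "smart forwarder" idea concrete, here is a rough sketch of such a forwarder MDB (my own illustration, not code from the white-paper; the JNDI names and the marker property name are invented):

               import javax.ejb.MessageDrivenBean;
               import javax.ejb.MessageDrivenContext;
               import javax.jms.*;
               import javax.naming.InitialContext;

               public class TopicForwarderMDB implements MessageDrivenBean, MessageListener {
                   private static final String FORWARDED_PROP = "forwarded"; // invented marker property
                   private TopicConnection con;
                   private TopicSession session;
                   private TopicPublisher publisher;

                   public void setMessageDrivenContext(MessageDrivenContext ctx) { }

                   public void ejbCreate() {
                       try {
                           InitialContext jndi = new InitialContext();
                           // Invented JNDI names for the *remote* member's factory and topic.
                           TopicConnectionFactory cf =
                               (TopicConnectionFactory) jndi.lookup("jms/RemoteMemberConnectionFactory");
                           Topic remoteTopic = (Topic) jndi.lookup("jms/RemoteMemberTopic");
                           con = cf.createTopicConnection();
                           session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                           publisher = session.createPublisher(remoteTopic);
                       } catch (Exception e) {
                           throw new RuntimeException("forwarder setup failed", e);
                       }
                   }

                   // Driven by a durable subscription on the *local* topic member
                   // (the subscription is configured in the MDB's deployment descriptor).
                   public void onMessage(Message msg) {
                       try {
                           if (msg.getBooleanProperty(FORWARDED_PROP)) {
                               return; // already forwarded once -- don't bounce it back (no endless loop)
                           }
                           if (!(msg instanceof TextMessage)) {
                               return; // sketch only handles text messages
                           }
                           // Received messages are read-only, so publish a copy with the marker set.
                           TextMessage copy = session.createTextMessage(((TextMessage) msg).getText());
                           copy.setBooleanProperty(FORWARDED_PROP, true);
                           publisher.publish(copy);
                       } catch (JMSException e) {
                           throw new RuntimeException("forwarding failed", e);
                       }
                   }

                   public void ejbRemove() {
                       try { if (con != null) con.close(); } catch (JMSException ignore) { }
                   }
               }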
