Distributed Destination Failover not working

          I'm using WebLogic 7 SP1 on Windows 2000. I've configured a
          distributed queue that has two members. The two members are
          running in two WebLogic instances in a cluster configuration
          (call them Server1 and Server2). My client posts messages to the
          distributed queue, and the messages seem to be distributed
          between Server1 and Server2 (as expected). However, when I kill
          Server1, the client complains that it can't connect to the queue
          on Server1 and never recovers. I would have expected to see (at
          most) one exception and then the next request to use Server2's
          queue. The client gets the following exception:
          weblogic.jms.dispatcher.DispatcherException: Dispatcher not found in jndi: Server1,
          javax.naming.NameNotFoundException: Unable to resolve 'weblogic.jms.S:Server1'
          Resolved: 'weblogic.jms' Unresolved:'S:Server1' ; remaining name 'S:Server1'
               at weblogic.jms.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:323)
               at weblogic.jms.dispatcher.DispatcherManager.findOrCreate(DispatcherManager.java:413)
               at weblogic.jms.frontend.FEProducer.<init>(FEProducer.java:87)
               at weblogic.jms.frontend.FESession$2.run(FESession.java:607)
               at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
               at weblogic.jms.frontend.FESession.producerCreate(FESession.java:604)
               at weblogic.jms.frontend.FESession.invoke(FESession.java:2246)
               at weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
               at weblogic.jms.dispatcher.DispatcherImpl.dispatchSync(DispatcherImpl.java:275)
               at weblogic.jms.client.JMSSession.createProducer(JMSSession.java:1461)
               at weblogic.jms.client.JMSSession.createSender(JMSSession.java:1312)
          Should I have some kind of recovery logic on my client to make
          this stuff work?
          Bob.
          

Hi,
               I hit the same problem. Were you able to fix it? If so, how?
          Tom Barnes wrote:
          > Hi Bob,
          >
          > If you haven't already, see if the connection factory you use has
          > "ServerAffinityEnabled" set to false (the default is true) and
          > "LoadBalancingEnabled" set to true. That said, I think you may be
          > seeing a known bug - so I suggest contacting customer support.
          >
          > Tom
          >
          > Bob S wrote:
          >
          >> Tom,
          >>
          >> I don't really have a problem with getting an exception for the
          >> request that was in progress when the server failed. I would
          >> expect, though, the next request to succeed.
          >> The problem is that even when I restart my client process it
          >> still tries to go to the same destination (weird). It seems
          >> that the Distributed Destination exception handling logic only
          >> removes the failed entry when it receives a certain type of
          >> exception. I suspect this because (just 5 minutes ago)
          >> I got the distributed destination to recover from the failure.
          >> The exception I got this time was the following:
          >>
          >> weblogic.jms.common.JMSException: Failed to send message because
          >> destination MyQueue_JMSServer1
          >> is not avaiable (shutdown, suspended or deleted).
          >>
          >> Start server side stack trace:
          >> weblogic.jms.common.JMSException: Failed to send message because
          >> destination MyQueue_JMSServer1
          >> is not avaiable (shutdown, suspended or deleted).
          >> at
          >> weblogic.jms.backend.BEDestination.checkShutdownOrSuspendedNeedLock(BEDestination.java:1102)
          >>
          >> at weblogic.jms.backend.BEDestination.send(BEDestination.java:2782)
          >> at weblogic.jms.backend.BEDestination.invoke(BEDestination.java:3810)
          >> at
          >> weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
          >>
          >> at
          >> weblogic.jms.dispatcher.DispatcherImpl.dispatchAsync(DispatcherImpl.java:152)
          >>
          >> at
          >> weblogic.jms.dispatcher.DispatcherImpl.dispatchAsyncTranFuture(DispatcherImpl.java:425)
          >>
          >> at weblogic.jms.dispatcher.DispatcherImpl_WLSkel.invoke(Unknown
          >> Source)
          >> at
          >> weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:362)
          >> at
          >> weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:313)
          >> at
          >> weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
          >>
          >> at
          >> weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:308)
          >>
          >> at
          >> weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
          >>
          >> at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:153)
          >> at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:134)
          >> End server side stack trace
          >>
          >> In this (rare) case, my system recovered beautifully!
          >>
          >> Bob.
          >>
          >> Tom Barnes <[email protected]> wrote:
          >>
          >>> Hi Bob,
          >>>
          >>> The particular exception you are seeing seems like it could use
          >>> some enhancement - it should be wrapped in a "friendlier" exception
          >>> such as "remote server XXX unavailable". I recommend filing a case
          >>> with customer support.
          >>>
          >>> That said, a producer sending to a distributed destination needs
          >>> to be able to handle send failures (see the client-side retry
          >>> sketch after the end of this thread). WebLogic will automatically
          >>> retry sends in cases where there is no ambiguity, but when it
          >>> can't determine the nature of the failure (e.g. it can't determine
          >>> whether or not the message made it to a JMS server) it throws the
          >>> exception back to the client to let the client decide what it
          >>> wants to do - e.g. commit/don't commit, reconnect and resend, or
          >>> reconnect and don't resend.
          >>>
          >>> Tom
          >>> Bob S wrote:
          >>>
          >>>> I'm using WebLogic 7 SP1 on Windows 2000. I've configured a
          >>>> distributed queue that has two members. The two members are
          >>>> running in two WebLogic instances in a cluster configuration
          >>>> (call them Server1 and Server2). My client posts messages to the
          >>>> distributed queue and the messages seem to be distributed
          >>>> between Server1 and Server2 (as expected). However, when I kill
          >>>> Server1, the client complains that it can't connect to the queue
          >>>> on Server1 and never recovers. I would have expected to see (at
          >>>> most) one exception and then the next request to use Server2's
          >>>> queue. The client gets the following exception:
          >>>>
          >>>> weblogic.jms.dispatcher.DispatcherException: Dispatcher not found in
          >>>
          >>>
          >>> jndi: Server1,
          >>>
          >>>> javax.naming.NameNotFoundException: Unable to resolve
          >>>> 'weblogic.jms.S:Server1'
          >>>> Resolved: 'weblogic.jms' Unresolved:'S:Server1' ; remaining name
          >>>> 'S:Server1'
          >>>> at
          >>>> weblogic.jms.dispatcher.DispatcherManager.dispatcherCreate(DispatcherManager.java:323)
          >>>>
          >>>> at
          >>>> weblogic.jms.dispatcher.DispatcherManager.findOrCreate(DispatcherManager.java:413)
          >>>>
          >>>> at weblogic.jms.frontend.FEProducer.<init>(FEProducer.java:87)
          >>>> at weblogic.jms.frontend.FESession$2.run(FESession.java:607)
          >>>> at
          >>>> weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:785)
          >>>>
          >>>> at
          >>>> weblogic.jms.frontend.FESession.producerCreate(FESession.java:604)
          >>>> at weblogic.jms.frontend.FESession.invoke(FESession.java:2246)
          >>>> at
          >>>> weblogic.jms.dispatcher.Request.wrappedFiniteStateMachine(Request.java:552)
          >>>>
          >>>> at
          >>>> weblogic.jms.dispatcher.DispatcherImpl.dispatchSync(DispatcherImpl.java:275)
          >>>>
          >>>> at
          >>>> weblogic.jms.client.JMSSession.createProducer(JMSSession.java:1461)
          >>>> at
          >>>> weblogic.jms.client.JMSSession.createSender(JMSSession.java:1312)
          >>>>
          >>>> Should I have some kind of recovery logic on my client to make
          >>>> this stuff work?
          >>>>
          >>>> Bob.
          >>>
          >>>
          >>
          >
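
A minimal sketch of the client-side recovery logic Tom describes above: catch the send failure, discard the old connection, look the connection factory and distributed queue up again, and retry so a surviving member can be chosen. The JNDI names and retry count are hypothetical, it assumes a jndi.properties (or an explicit InitialContext environment) pointing at the cluster address, and, per Tom's caveat, a blind resend can duplicate a message when the failure was ambiguous, so treat this as a starting point rather than built-in WebLogic behaviour.

    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class RetryingSender {

        private static final int MAX_ATTEMPTS = 3;   // hypothetical retry count

        public static void send(String text) throws Exception {
            JMSException lastFailure = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                QueueConnection connection = null;
                try {
                    // Fresh lookups on every attempt, so the cluster can hand back
                    // a factory/destination that routes to a surviving member.
                    Context ctx = new InitialContext();
                    QueueConnectionFactory factory =
                            (QueueConnectionFactory) ctx.lookup("MyConnectionFactory"); // hypothetical JNDI name
                    Queue queue = (Queue) ctx.lookup("MyDistributedQueue");             // hypothetical JNDI name
                    connection = factory.createQueueConnection();
                    QueueSession session =
                            connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                    QueueSender sender = session.createSender(queue);
                    sender.send(session.createTextMessage(text));
                    return;   // sent successfully
                } catch (JMSException e) {
                    lastFailure = e;   // member unreachable or shut down; try again
                } finally {
                    if (connection != null) {
                        try { connection.close(); } catch (JMSException ignore) { }
                    }
                }
            }
            throw lastFailure;
        }
    }

With "ServerAffinityEnabled" set to false and "LoadBalancingEnabled" set to true on the connection factory (as Tom suggests), the fresh lookup gives the cluster a chance to route the new producer to the surviving member.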
          

Similar Messages

  • SAP ECC 6.0 SR3 Cluster failover not working in AIX with DB2 UDB V9.1 FP6

    Hi Gurus,
    We have installed SAP ECC 6.0 SR3 High Availability with DB2 UDB V9.1 FP6 in an AIX cluster environment.
    After the installation we are doing the cluster failover test.
    Node A
    Application Server
    Mount Points:
    /sapmnt/<SID>
    /usr/sap/<SID>
    /usr/sap/trans
    Node B
    Database Server
    Mount Points:
    /db2//<SID>
    The procedure followed to do the cluster failover:
    We took the cluster down on Node A, and all of Node A's resources were moved to Node B.
    On Node B, when we issued a command to start SAP, it said "no start profiles found".
    We took the cluster down on Node B and moved the resources from Node B to Node A. There, the DB2 user IDs are not available. We created the user IDs manually on Node A; however, it did not work.
    Please suggest the procedure to start SAP in a cluster failover situation.
    Best Regards
    Sija

    Hi Sija,
    Can I have a detailed scenario of your cluster configuration?
    You are saying that you are going to start the cluster package manually; if that is right, please make sure that you have the same copy of the start and instance profiles from Node A on Node B. That means you need to maintain two startup and two instance profiles, one set for each node. In a normal situation it will pick the Node A profile to start the database from Node A, but in a failover situation it will not pick the Node A profile to start; it should pick Node B's profiles.
    Just make a copy from Node A and change the profile name accordingly for Node B. Then try to restart.
    Regards
    Nick Loy

  • Failover not working while connecting to Oracle 11g database

    Hi,
    We have a J2EE application that connects to Oracle 11g database.
    The connection URL we are using is as below
    jdbc:oracle:thin:@(DESCRIPTION =(SDU=32768)(ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))(ADDRESS=(PROTOCOL=TCP)(HOST= xyz.hostname.com)(PORT=1525))(CONNECT_DATA=(SERVICE_NAME=PQR)))
    The issue we are facing is that database failover is not working. The application only connects to the first host in the TNS entry. Every time there is a failure in the connection to the first host, manual steps are required to swap the hosts.
    This started happening after we upgraded Oracle DB from 9i to 11g.
    We are using the client jar of 9i to connect to 11g. Could this be causing the problem?
    Thanks In Advance.
    -Tara

    889517 wrote:
    Yes, you are right. Nothing else was updated.
    The application still works as expected except for the failover.
    If that is the case, then I seriously doubt it has anything to do with Java. It would be something to do with Oracle and/or the network infrastructure. If not, then it is some small problem with the driver. You can try updating the driver, but I wouldn't expect a fix.
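    As a rough illustration of the connect-time failover being discussed, here is a minimal sketch assuming the two hosts from the post and an Oracle thin driver jar on the classpath. The ADDRESS_LIST/FAILOVER=ON form is one possible variant of the descriptor, and the user, password, and retry count are placeholders; as noted above, testing with a driver jar that matches the 11g database is the first thing to try.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class FailoverConnect {

        static {
            try {
                // Older (9i-era) client jars are not JDBC 4, so register the driver explicitly.
                Class.forName("oracle.jdbc.driver.OracleDriver");
            } catch (ClassNotFoundException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        // A variant of the descriptor from the post, with an explicit ADDRESS_LIST
        // and FAILOVER=ON so the driver walks to the second host when the first
        // address is unreachable at connect time.
        private static final String URL =
              "jdbc:oracle:thin:@(DESCRIPTION=(SDU=32768)"
            + "(ADDRESS_LIST=(FAILOVER=ON)(LOAD_BALANCE=OFF)"
            + "(ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))"
            + "(ADDRESS=(PROTOCOL=TCP)(HOST=xyz.hostname.com)(PORT=1525)))"
            + "(CONNECT_DATA=(SERVICE_NAME=PQR)))";

        public static Connection connect(String user, String password) throws SQLException {
            SQLException last = null;
            for (int attempt = 1; attempt <= 3; attempt++) {   // placeholder retry count
                try {
                    return DriverManager.getConnection(URL, user, password);
                } catch (SQLException e) {
                    last = e;   // e.g. the first host refused or timed out; try again
                }
            }
            throw last;
        }
    }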

  • CVP Failover Not working

    Hi,
    I have UCCE with CVP in my environment, and we also have 2 Call/VXML servers for redundancy. I have configured the commands below on the VXML gateway and also created a dial-peer towards my second Call/VXML server to achieve redundancy, but somehow the failover is not working.
    My UCCE with the primary CVP is working fine without any issue.
    ip host mediaserver x.x.x.x
    ip host mediaserver-backup x.x.x.x
    Also, I have set the media_server ECC variable to 'mediaserver' to achieve redundancy.
    I'm not sure whether I am missing any other configuration to achieve the failover.
    Please suggest.

    Thanks Chaitan for your reply.
    Yes, I am looking for redundancy: if CVP1 goes down, then calls should go to CVP2.
    We are not using a SIP proxy in our environment. Please find below the dial-peer configurations targeting CVP1 & CVP2:
    dial-peer voice 7701 voip
     destination-pattern 77..
     translate-outgoing called 1
     session protocol sipv2
     session target ipv4:168.167.0.162 (CVP1)
     session transport tcp
     voice-class codec 1  
     voice-class h323 1
     dtmf-relay rtp-nte h245-signal h245-alphanumeric
     no vad
    dial-peer voice 7702 voip
     destination-pattern 77..
     translate-outgoing called 1
     session protocol sipv2
     session target ipv4:168.167.0.163 (CVP2)
     session transport tcp
     voice-class codec 1  
     voice-class h323 1
     dtmf-relay rtp-nte h245-signal h245-alphanumeric
     no vad
    Thanks

  • Failover not working in a cluster environment asks Relogin to Application

    Hi,
    I have set up a cluster on WebLogic and deployed a CRM (EJB/EAR) application.
    When I stop the managed server that is handling a request, the other managed server picks up the request, but it asks for a relogin.
    I think failover is not working properly.
    Could you please help?
    Thanks,
    Vishal

    Hi Kalyan,
    Please See below Diagnostics:
    Managed Server1:
    ===================================================
    Date: Tue Sep 04 18:33:12 IST 2012
    DdCompnt=11,SessionContext=7,CurrJvmHeapMB=235,Bos=2,JForm=1,MaxBoRowCount=493,JApp=13,JField=1,JDdField=1,JSession=6,Sessions=7,JBaseBo=2,Misc=74,JPTrace=1,LpmErrors=1,Forms=1,Active Sessions=6,JDdTable=1021,MaxJvmHeapMB=467,MaxBos=18,BoRowCount=33,JError=1,JFields=1,UtlRecs=175,FlexAttr=-5,UserContext=3
    Managed Server2:
    ===================================================
    Date: Tue Sep 04 18:33:20 IST 2012
    DdCompnt=11,SessionContext=8,CurrJvmHeapMB=226,JForm=1,JApp=13,JSession=7,Sessions=8,Misc=148,JPTrace=1,Forms=1,Active Sessions=7,JDdTable=1124,MaxJvmHeapMB=472,UtlRecs=81,FlexAttr=-5,UserContext=4
    Free Java Heap Memory: 258400 KB
    Total Java Heap Memory: 495779 KB
    I tried to open many sessions. They are assigned randomly, but when I kill Managed Server2, its corresponding sessions stop working.
    System Error error executing LPoms.getUserContext()
    Also, I found the following warning in the WebLogic logs:
    <Sep 4, 2012 6:26:15 PM IST> <Warning> <Socket> <BEA-000402> <There are: 5 active sockets, but the maximum number of socket reader threads allowed by the configuration is: 4. The configuration may need altered.>
    Thanks for your immediate help.
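    For what it's worth, two things commonly produce this relogin symptom with WebLogic in-memory session replication: the web application's session persistence is not set to a replicated store type in weblogic.xml, or the login state held in the HttpSession is not Serializable (or is mutated without a fresh setAttribute call), so the secondary server never receives it. Below is a minimal, hedged servlet sketch of replication-friendly session handling; the attribute name and LoginState class are hypothetical, and if the application relies on container-managed authentication its security configuration also has to be cluster-aware.

    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class LoginStateServlet extends HttpServlet {

        // Everything stored in the session must be Serializable, or the
        // secondary server receives nothing and the user appears logged out.
        public static class LoginState implements Serializable {
            private static final long serialVersionUID = 1L;
            String userId;
            long loginTime;
        }

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            HttpSession session = request.getSession(true);
            LoginState state = (LoginState) session.getAttribute("loginState"); // hypothetical attribute name
            if (state == null) {
                state = new LoginState();
                state.userId = request.getRemoteUser();   // may be null without container auth
                state.loginTime = System.currentTimeMillis();
            }
            // Re-setting the attribute after any change is what tells WebLogic
            // to replicate the updated value to the secondary server.
            session.setAttribute("loginState", state);
            response.getWriter().println("user=" + state.userId);
        }
    }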

  • RMI Failover not working in 9.2?

    Setup:
    RMI's RTD.xml
    <cluster
    clusterable="true"
    load-algorithm="round-robin"
    >
    </cluster>
    <method
    name="*"
    idempotent="true"
    timeout="3000"
    >
    </method>
    Cluster
    Srv1=RMI.instance1
    Srv2=RMI.instance2
    Srv3
    Servlet: gets Srv1's context, looks up the RMI object, and invokes its business method.
    Load balancing works, according to the logs.
    Shut down Srv2 and Srv1 continues processing.
    Restart Srv2 and shut down Srv1, and the failover should kick in here too, but the connection is now considered broken and results in a "Host not reachable" exception.
    Can't find any documentation as to what I should be doing differently, but I must be missing something.
    Any ideas?
    Karoly

    It just started working - or, very likely, it was working from the beginning, but some components were not built/deployed properly.
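    In case it helps anyone hitting the same symptom, here is a minimal client-side sketch under a descriptor like the RTD.xml above; the JNDI name, business interface, and cluster URL are hypothetical. It illustrates two points: look the stub up through the cluster address rather than a single server's context (the servlet above pins itself to Srv1), and, because the methods are declared idempotent, it is safe to re-look-up and retry when a call fails after the pinned server dies. With clusterable, idempotent methods the replica-aware stub will normally retry on its own, so the manual loop is only a fallback.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class RmiFailoverClient {

        // Hypothetical remote business interface bound by the RMI startup class.
        public interface TimeService extends java.rmi.Remote {
            String currentTime() throws java.rmi.RemoteException;
        }

        public static String callWithRetry() throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= 2; attempt++) {
                try {
                    TimeService svc = (TimeService) lookup("rmi/TimeService"); // hypothetical JNDI name
                    return svc.currentTime();
                } catch (java.rmi.RemoteException e) {
                    last = e;  // pinned server died mid-call; re-lookup and retry (methods are idempotent)
                }
            }
            throw last;
        }

        private static Object lookup(String name) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // Cluster address: list all members so the initial lookup itself can fail over.
            env.put(Context.PROVIDER_URL, "t3://srv1:7001,srv2:7001"); // hypothetical hosts/ports
            return new InitialContext(env).lookup(name);
        }
    }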

  • IE 9.0 proxy failover not working correctly - Is there a bug fix or IE setting to correct this behavior

    I am testing proxy .pac file failover using IE 9.0.8112, with three choices in an automatic configuration file. I shut down the first proxy to test the failover to the second. Firefox 20.0.1 and Chrome work correctly, but IE 9.0 does not. My snippet is as follows:
    return "PROXY 192.168.11.12:8080; PROXY 192.168.11.195:8080; DIRECT";
    With IE 8, Firefox, and Chrome, the failover to the next proxy entry when PROXY 192.168.11.12 fails works correctly, as follows:
    Proxy 192.168.11.12 times out after about 25-30 seconds, then proxy 192.168.11.195 is attempted and the web page is displayed.
    All URL lookups after this point are made through proxy 192.168.11.195 and are quick. This is how proxy failover should work.
    When I test with IE 9, it works as follows:
    Proxy 192.168.11.12 times out after about 25-30 seconds, then proxy 192.168.11.195 is attempted and the web page is displayed.
    But all following URL lookups take 30-45 seconds, because IE always tries PROXY 192.168.11.12 first before attempting proxy 192.168.11.195; it does not remember that proxy 192.168.11.12 is not available.
    Is there a setting or an open bug for this behavior?

    Not to be pedantic, but the proxy.pac file is JavaScript... :)
    You might be experiencing an issue with IE's automatic proxy caching, described here: http://support.microsoft.com/kb/271361 . Basically, the choice of which proxy to use is decided once per requested host, and the decision is cached. So, if you
    are testing failover by accessing resources on the same host before and after shutting off the first proxy, IE will still insist on using that proxy address for subsequent requests. If you test a second host and get a more timely response, then I would say
    this is what you are seeing.
    You can experimentally disable this feature by setting...
    [HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings]
    "EnableAutoProxyResultCache"=dword:00000000
    ...in the registry for your test user. I imagine a reboot will be required.

  • "Get destination" does not work in CNV_MBT_TDMS step "Define RFC Dest."

    We just installed TDMS and I am trying to set up the RFC destinations between our systems. Since we will be using many different systems and clients, and my company is very sensitive about using standard namespaces, we want to try to define our own destinations and use them via the "Get destination" function.
    So I've built a destination in SM59 (I called it TDMS_EZA371_EZA371 in case this matters) and entered a user with SAP_TDMS_USER role assigned. When I try to use this destination a popup appears saying "Definition of destination MBT_Z_PR498CC_SUB_PCL is not complete. Server name is initial.". What does this mean? The server name in the destination is totally fine and a connection test in SM59 works alright. Is this a bug or am I doing something wrong?
    Kind regards
    Mario

    When I try to use this destination a popup appears saying "Definition of destination MBT_Z_PR498CC_SUB_PCL is not complete. Server name is initial.". What does this mean? The server name in the destination is totally fine and a connection test in SM59 works alright. Is this a bug or am I doing something wrong?
    The technical names for the destinations are created automatically in accordance with the following naming convention:
    MBT_<Subproject Name>_<System Role>
    In your case you may create sub-projects with whatever names you wish (e.g. TDMS_EZA371_EZA371), and then the RFC destination name will be MBT_TDMS_EZA371_EZA371_PCL etc.

  • Tuxedo failover not working

    Hi,
    we are working with Tuxedo 11gR1 to provide access to an application. The configuration file uses two database IP addresses, related to the same instance, which is configured in an active/passive cluster configuration. Once the Tuxedo servers listed in the ubbconfig are started, the application works properly. We did a failover test, letting the first database node crash, and access to the application was denied.
    Everything was working again once the servers did an autorestart, but only after about 15-20 minutes. Is there any way to reduce the timeout (if one exists) so that the autorestart occurs sooner in a crash situation?
    Many thanks,
    Giuseppe.

    Hi Giuseppe,
    The answer to your question depends upon how the connections to the database are being managed.  If the servers are part of a transactional group, meaning there are TMS servers associated with the group and the servers were built with the -r switch to buildserver, then Tuxedo will manage the connections.  What this means is that Tuxedo uses the xa_open() call to establish a connection to the database for the application, and the application should not be performing any SQL CONNECT statements.  If Tuxedo receives an XA error during transaction processing, then Tuxedo will automatically try to re-establish the connection to the database.  Also note that if you are using RAC, you must configure the TUXRACGROUPS environment variable and configure the database to use DTP Services.  The DTP Service should be configured to fail over to the other instance in the database configuration file.
    On the other hand, if the server is not part of a transactional group, then all database connection management is left up to the application and in fact, Tuxedo is completely unaware that a database is even being used.  So your code does an SQL CONNECT and if it receives an error at some point, it likely has to reconnect with another SQL CONNECT.  To help in this situation, you should be able to use TAF (Transparent Application Failover) to let the database client code attempt the reconnects.
    As far as the timeout goes, I suspect your servers are hanging trying some DML statement and eventually Tuxedo kills the servers due to SVCTIMEOUT.  Why they are hanging is still an unknown as I don't know how your application is accessing the database or how your servers are coded.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • Source as Destination is Not Working

    I have several clips in FCP.
    I select them all and select: Send To - Compressor
    In Compressor I see that my destination is definitely set as Source (I assume this should be the folder where my original clips exist).
    Compressor creates the new files, but they are created in the root folder and not in the source folder.
    Am I doing something wrong?
    Does this always happen when you send the original clips from inside FCP?
    Thanks for any help,
    Chris

    You can't fix what isn't broken. This is one of those things where, rather than trying to make it do what you want, you learn how it works and then do it that way.
    A clip in your FCP browser is not actually a clip. It's a pointer, pointing to a clip, or portion of a clip in the Finder. So, there is no SOURCE destination, because the source of the icon you clicked on to export is the browser in FCP. That browser does not exist anywhere in the Finder. It's important to understand here that FCP uses the Finder to organize your media. It's not self-contained like Avid or iMovie.
    If you want to set a default location for all your exports, you can set it in the Preferences of Compressor. The Source destination only works if you have an already exported clip in the Finder, and you want to encode it to something else.
    Just to reiterate, there is NO source for clip pointers anywhere on your computer. Or, if you like, the source destination for ALL clip pointers (browser items) is the ROOT level of your hard drive. You can't change this. All you can do is set a different default destination.

  • Gwia failover not working

    Here's my problem. I have two GWIA boxes, Gwiaa and Gwiab. Each is on a separate box and each is in its own domain. If I take Gwiaa down, Gwiab will receive inbound email and pass it on; however, it will not send outbound email. Gwiaa is the primary. In Gwiaa, I have Gwiab defined as the alternate Internet Agent. I checked the MTP port and both are set to 7103. Is there something else I need to be checking? I had the group that takes care of our firewall check all the settings, and they said that Gwiaa and Gwiab have the same settings.
    Thanks for any help,
    Bud

    I am posting this quote from Massimo Rosen in another thread as I think it is an important 'gotcha' and should be highlighted, particularly as it is not mentioned, as far as I can find, in Kratzer/Korte, and because putting an MTA and GWIA on a second box is a (sort of) recommended route when moving from NetWare to Linux for us followers of Danita.
    Quote: (talking about GW7)
    It does do something, and works correctly if only the GWIA goes down. The
    problem in your case (and the defect) is that from the sending MTA's
    view, it is not the GWIA that is down, but the path *to* that GWIA (too),
    i.e. the MTA is gone. *This* is what defeats the designed logic; it simply
    isn't (properly) designed (yet) to fail over in that case. Supposedly
    this will work in the next GW version.
    CU,
    Massimo Rosen
    Novell Product Support Forum Sysop

  • DHCP failover not working

    I am attempting to implement DHCP failover using load balancing with an Essentials and a Standard server.  I have everything configured and working but I am facing an issue.
    If I try to start the DHCP service on the Essentials server while the DHCP service on the Standard server is already running, it will not start, and I get error 1053 "The DHCP/BINL service has encountered another server on this network with IP Address, 192.168.1.3, belonging to the domain: ." in the event log on the Essentials server.
    Any ideas why this is?

    In a network of this size there is little need for multiple or failover DHCP servers. With a lease time in the one-week category, the chances that the server will be offline at the same time that clients need to renew are pretty slim. Should the server simply be unavailable, you can turn on DHCP on the router or, as you intend, on the second server.
    Larry Struckmeyer[SBS-MVP]
    Larry, I am not sure I find this to be the case -- that rarely will the server be off at the same time clients need to renew.  I volunteer support to a local non profit.  Sadly, it's not uncommon for the Essentials server to not come up cleanly
    after a Microsoft Update or a power outage.  With DHCP not available on the WSE2012 server, employees who come in and fire up their laptops will in some cases hit the lease expiration.  I'll get a phone call, and now there will be two frustrated
    people.  :-)
    I *could* return to letting the router handle DHCP, but I've read too many times on technet and social forums the rather forceful recommendations by microsoft people that an Essentials served network should have DHCP running on the server.
    I feel I'm stuck between a rock and a hard place.  The  strong recommendation from microsoft is to always serve BOTH DHCP and DNS from a WSE server.  However, there appears to be no DHCP failover option for WSE, which will make my users unhappy
    (and strain my volunteerism!).
    Thoughts?
    I've tried to research whether WSE will allow a router to act as a failover in hot standby mode, but haven't succeeded.  (I'll then need to find a router that supports hot standby in a WSE environment.)
    (I realize I am quite late to this thread.  Hopefully that's not a problem.)
    Steve

  • Unity Connection 9 Failover Not Working

    Just built 2 Unity Connection 9 in Pub/Sub configuration.  Call Manager version is still 8.
    Call Manager configuration:
    1 line group, 1 hunt list, and 1 hunt pilot
    1 voicemail pilot and 1 voicemail profile
    64 voicemail ports (32 ports per Unity Connection server) (all ports are registered)
    Unity Connection configuration:
    1 publisher and 1 subscriber
    1 phone system
    1 port group (64 port count, 32 for publisher and 32 for subscriber)
    Cluster configuration is set to change status when publisher fails
    The publisher takes calls without a problem. But when the publisher stops taking calls, the subscriber does not take the calls (busy signal).
    To test whether the issue might be on the Call Manager side, I changed all port settings in Unity to point to the subscriber. The subscriber takes the calls.
    I changed 2 ports to point only to the publisher; the publisher takes the calls. But when the publisher stops taking calls, the subscriber does not take the calls (busy signal).
    It seems like I cannot split the ports across both servers. Am I missing something?
    Has anyone encountered this issue?  Any info is appreciated.
    Thank you!

    Problem solved! 
    In Call Manager, I created a second line group for the subscriber ports.  Added the second line group to the hunt list.  Set the LG for the subscriber in the hunt list as first and publisher as second.  This way, all calls will go to the subscriber server first.  
    In Unity Connection, I created a second port group then added the ports and made sure they are registered in CM.
    Tested failover by stopping taking calls on each server and it works!

  • Failover not working each time

    We have a 2-node RAC on Solaris 10, Oracle 11.1.0.7.
    We are executing the tests from the RAC Assurance MetaLink test plan and find that our service and SELECT query correctly fail over when we create an instance failure for the database or for ASM, or when we have a node failure. However, the SELECT query does not fail over when we unplug all cables for the public network or when we pull the cables on the private interconnect. In these cases we do see the VIP relocate, but the SELECT query hangs.
    Attached below are the service config and the client tnsnames entry used by sqlplus. Should the service be 'preferred' on both nodes? And is TAF supposed to occur when we execute the network tests for the public and private networks? Or should we only expect client TAF when we have an instance/node failure, so that the network interfaces are available on both nodes?
    node2<oracle>srvctl config service -d tibcouat -s tibcouat_srv -S 9
    #@=info: operation={config} config={full} ver={11.0.0.0.0}
    tibcouat_srv PREF: tibcouat1 tibcouat2 AVAIL:
    #@=service[0]: name={tibcouat_srv} enabled={true} pref={tibcouat1, tibcouat2} avail={} disabled_insts={} tafpolicy={BASIC} type={user}
    #@=endconfig:
    TIBCOUAT =
    (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1528))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1528))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = tibcouat_srv)
    )
    )

    >
    >
    TIBCOUAT =
    (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1528))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1528))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = tibcouat_srv)
    )
    )
    In order to configure TAF, the TNS entry must include a FAILOVER_MODE clause that defines the properties of the failover, as follows:
    TYPE --> defines the behaviour following a failure. Values: SESSION, SELECT, or NONE.
    METHOD --> determines when connections are made to the failover instance. Values: BASIC or PRECONNECT.
    RETRIES --> the number of times a connection should be attempted before returning an error.
    DELAY --> the time in seconds between each connection retry.
    e.g.
    finance =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac2-vip)(PORT = 2042))
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac1-vip)(PORT = 2042))
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac3-vip)(PORT = 2042))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = FINANCE)
    (FAILOVER_MODE =
    (TYPE = SELECT)
    (METHOD = BASIC)
    (RETRIES = 180)
    (DELAY = 5)
    )
    )
    )
    Regards
    Rajesh
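
    One side note on verification: TAF itself is an OCI feature (sqlplus, the OCI/thick JDBC driver); the JDBC thin driver only gets connect-time failover from the address list, not in-flight SELECT failover. Here is a minimal thin-driver sketch, with placeholder credentials and a descriptor shaped like the tnsnames entry above, that is handy for checking which instance a fresh connection lands on during these tests:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class WhichInstance {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");  // harmless with JDBC 4 auto-registration
            // Same descriptor shape as the tnsnames entry above; both VIPs listed
            // so a new connection can still be established when one node is gone.
            String url = "jdbc:oracle:thin:@(DESCRIPTION=(FAILOVER=ON)(LOAD_BALANCE=YES)"
                       + "(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1528))"
                       + "(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1528))"
                       + "(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=tibcouat_srv)))";
            try (Connection conn = DriverManager.getConnection(url, "scott", "tiger"); // placeholder credentials
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT instance_name FROM v$instance")) {
                while (rs.next()) {
                    System.out.println("Connected to instance: " + rs.getString(1));
                }
            }
        }
    }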

  • RADIUS failover not working in wired 802.1x (CATOS switch)

    I am setting up a pilot group for wired 802.1x testing. I have it working correctly on a C2950 and C3550s. I am having trouble with the RADIUS failover on my CATOS C4006 series switches. When I disable the primary RADIUS Server to test failover, the switch never fails over to the backup RADIUS server and thus wired 802.1x fails. Am I missing something?
    Any help is appreciated. Here is my config:
    #version 8.4(7)GLX
    #radius
    set radius server 10.30.XX.XX auth-port 1812 primary
    set radius server 10.18.XX.XX auth-port 1812
    set radius timeout 30
    set radius key EE08361
    Set dot1x system-auth-control enable
    set port dot1x 5/27 port-control auto
    all radius and dot1x settings are at their default values
    Any takers??!

    I have the same setup as yours. I use Steel-Belted Radius 6.0.1 on Linux and I have a Cisco Catalyst 2960. I use 802.1x over Ethernet with PEAP, as seen below:
    C2960#sh run int g0/23
    Building configuration...
    Current configuration : 133 bytes
    interface GigabitEthernet0/23
    switchport mode access
    dot1x pae authenticator
    dot1x port-control auto
    dot1x guest-vlan 668
    end
    C2960#
    C2960#sh run | inc dot
    aaa authentication dot1x default group radius
    dot1x system-auth-control
    dot1x guest-vlan supplicant
    C2960#sh run | inc radius-
    radius-server host 192.168.15.10 auth-port 1812 acct-port 1813 key xxx
    radius-server host 10.250.97.26 auth-port 1812 acct-port 1813 key xxx
    C2960#
    Everything works, and when I shut down the RADIUS server process on host 192.168.15.10 ("sbrd stop"), it still works with the secondary RADIUS server 10.250.97.26.
    The difference between yours and mine is that I am running IOS instead of CatOS.
    System image file is "flash:c2960-lanbasek9-mz.122-25.SEE4.bin"
    David
