CTIOS failover is not working

Hi,
In case one CTIOS server fails, the agents are not switching to the other CTIOS server. I am using UCCE and CTIOS version 8.0.
Thanks and Regards,
Ashfaque

Check out this post for the invalid message header:
https://supportforums.cisco.com/thread/270670
They mention it taking 60 seconds, but I'm wondering if it doesn't occur faster in your new 8.x version...  If you fail over to the peer, I bet you'll see the same message.
You're still getting a CONNECTION REFUSED error, which I've only seen related to invalid port configurations.  Make sure all your 42027/42028/43027/43028 settings are correct (it can get VERY confusing, for some reason).
Here's an excellent doc that I often refer people to: https://supportforums.cisco.com/docs/DOC-1390.  There's a graphic in the section "Install CTI Server (CG or CTI PG)" that will help you visualize what's going on.  The only problem with the graphic is that the CG1 representation is a little misleading: it should have CG1A (on your PG1A server) using ports 42027 and 42028, and CG1B (on your PG1B server) using ports 43027 and 43028.
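To spell out the corrected mapping described above:
CG1A (on your PG1A server): CTI Server ports 42027 / 42028
CG1B (on your PG1B server): CTI Server ports 43027 / 43028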
Cheers,

Similar Messages

  • Failover is not working in clustering

    We installed the infrastructure on one system and added two instances, app1.mycompany.com and app2.mycompany.com, to it.
    For load balancing we are using Webcache.
    We configured origin servers, site definitions, and site-to-server mappings.
    Both instances show up in the cluster; we can see them in the health monitor under the Up/Down parameter of the Web Cache administrator console.
    We deployed the same EAR to both instances.
    But when I bring down one instance, say app1.mycompany.com, the health monitor does not show the DOWN state for host app1.mycompany.com (and the same applies for UP); it does not reflect the change immediately when I test failover.
    Is Webcache load balancing round-robin based?
    When I bring down one of the instances, session replication does not happen properly; sometimes a "session expired" error appears.
    When both instances are up and a user accesses the application, all requests go to one instance, and if that instance goes down, "session expired" appears.
    I think failover is not working in the cluster.
    I checked the replication properties and added the <distributable> element in both instances (see the sketch at the end of this post).
    On the Webcache console page, what does session binding do? I have not configured anything for it.
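    For reference, the <distributable> element mentioned above is just an empty marker placed directly under <web-app> in each instance's web.xml; a minimal sketch of a standard Java EE deployment descriptor (names here are generic, not taken from this setup):
    <web-app>
        <display-name>MyClusteredApp</display-name>
        <!-- marks the application as distributable so its sessions may be replicated -->
        <distributable/>
        <!-- servlets, mappings, etc. -->
    </web-app>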

    Why are you using Webcache?
    Web cache will certainly work, but its more common role is to act as a simple load balancer in front of HTTP servers, not OC4J instances.
    What I'd do is to simplify your situation to verify you have the server setup correctly.
    That means using the Oracle HTTP Server which will be part of your cluster as the common routing point. OHS and mod_oc4j are session state aware and know about all the OC4J instances. In the situation where an OC4J instance dies for some reason, mod_oc4j will know to which other OC4J instance(s) the request can be routed to pickup the replicated session state.
    Once you have verified that the failover is working on the backend, you can then configure another OHS instance and position webcache in front of them to act as a request router and failover handler for when the OHS instances are inactive.
    The Enterprise Deployment Guide offers some guidance in typical architectures, well worth a read.
    cheers
    -steve-

  • Sun Access Manager 2005Q1 session failover is not working

    Hi All
    I am using Sun Access Manager 2005Q1, Message Queue 2005Q1, Sun Directory Server 5.2, BerkeleyDB 4.2.52, and a Radware hardware load balancer with sticky sessions.
    I have configured Message Queue and BerkeleyDB, and both are running without any error.
    I am following http://docs.sun.com/source/817-7644/ch5_scenarios.html#wp41008 for session failover.
    Simple failover is working fine, but session failover is not working.
    Has anybody done session failover with Sun Access Manager 2005Q1? I have been trying to resolve this issue for the last two months.
    It is urgent.

    It works fine in 2005Q4, after applying patch 120954 if I am not mistaken. But 2005Q4 and 2005Q1 are probably different in terms of session failover (site configuration etc.)
    1. Stop both AM servers
    2. Set logging to debug mode in AMConfig.properties.
    3. Delete / move everything in /var/opt/SUNWam/debug
    4. tail -f /var/opt/SUNWam/debug/amSession
    5. Post that file here... you should be able to see if session failover is enabled etc....
    hope this helps.
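    Steps 3 and 4 above as copy-paste commands, assuming the default debug directory from the post (restart the AM servers before tailing):
    mkdir -p /var/opt/SUNWam/debug.old
    mv /var/opt/SUNWam/debug/* /var/opt/SUNWam/debug.old/
    tail -f /var/opt/SUNWam/debug/amSession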

  • Exchange 2010 DAG Failover does not work

    Hi Experts,
    I have an Exchange 2010 setup in a DAG environment. We have 2 MBX servers in the main site and 1 MBX server in the DR site, all part of one DAG. We have 2 HUB/CAS servers in the main site and 1 HUB/CAS server in the DR site.
    Recently we had to do our BCP test for audit purposes. We had issues failing over to the DR site; below is the error we faced.
    Please advise urgently on the possible causes and resolution steps for it as we need to do this test again on the coming weekend.
    "EvictDagClusterNode got exception Microsoft.Exchange.Cluster.Replay.AmClusterEvictWithoutCleanupException: An Active Manager operation failed. Error An error
    occurred while attempting a cluster operation. Error: Evict node 'sme-ho-mbx01' returned without the node being fully cleaned up. Please run cluster.exe node <NodeName> /forcecleanup to complete clean up for this node.. ---> System.ComponentModel.Win32Exception:
    The wait operation timed out"
    So, basically, one of the MBX servers was not evicting from the cluster, due to which failover did not work.
    Would appreciate some urgent thoughts for the possible resolution.
    regards
    abubakar
    Md.Abubakar Noorani IT Systems Engineer Serco Ltd.

    Hi,
    Yes, you can run the Stop-DatabaseAvailabilityGroup without shutting down the Mailbox server. During the process of DAG failover to DR site, the Stop-DatabaseAvailabilityGroup cmdlet should be run against all servers in the primary datacenter. If the Mailbox
    server is unavailable but Active Directory is operating in the primary datacenter, the Stop-DatabaseAvailabilityGroup command with the ConfigurationOnly parameter must be run against all servers in this state in the primary datacenter.
    And please note that the Stop-DatabaseAvailabilityGroup cmdlet can be run against a DAG only when the DAG is configured with a DatacenterActivationMode value of DagOnly. 
    Based on the error message, it seems that you should run cluster.exe node <NodeName> /forcecleanup against the specified node in the main site. Have you tried this to check the result?
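    For reference, the two commands being discussed (the DAG name is a placeholder and the server name is taken from the error message, so adjust both for the real environment):
    # Mark the unreachable primary-site Mailbox server as stopped in AD only
    Stop-DatabaseAvailabilityGroup -Identity <DAGName> -MailboxServer SME-HO-MBX01 -ConfigurationOnly
    # Finish cleaning up the node that failed to evict, as the error message suggests
    cluster.exe node sme-ho-mbx01 /forcecleanup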
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support

  • IP SLA failover config not working, need urgent help (Cisco 2911 K9 router)

    Hi,
    I am setting up WAN failover for one of my clients, and everything seems to be configured correctly, but it is not working. For tracking I am using the Google DNS IPs 8.8.8.8 and 8.8.4.4; from the router, 8.8.8.8 pings but 8.8.4.4 does not. I think that because 8.8.4.4 is not pinging, the router does not switch over when the primary GigabitEthernet0/0 goes down.
    Not sure what I am doing wrong. Please find the config details below:
    -------------------------------------------config-----
    username admin privilege 15 password 7 XXXXX
    redundancy
    track 10 ip sla 1 reachability
     delay down 5 up 5
    track 20 ip sla 2 reachability
     delay down 5 up 5
    interface GigabitEthernet0/0
     ip address 122.160.79.18 255.0.0.0
     ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    interface GigabitEthernet0/1
     ip address 182.71.34.71 255.255.255.248
    ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    interface GigabitEthernet0/2
     description $ES_LAN$
     ip address 200.200.201.1 255.255.255.0
     ip nat inside
     ip virtual-reassembly
     duplex auto
     speed auto
    ip forward-protocol nd
    no ip http server
    no ip http secure-server
    ip nat inside source route-map giga0 interface GigabitEthernet0/0 overload
    ip nat inside source route-map giga0 interface GigabitEthernet0/0 overload
    ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/0 track 10
    ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/1 track 20
    ip route 8.8.4.4 255.255.255.255 GigabitEthernet0/1 permanent
    ip route 8.8.8.8 255.255.255.255 GigabitEthernet0/0 permanent
    ip sla 1  
     icmp-echo 8.8.8.8 source-interface GigabitEthernet0/0
     frequency 10
    ip sla schedule 1 life forever start-time now
    ip sla 2  
     icmp-echo 8.8.4.4 source-interface GigabitEthernet0/1
     frequency 10
    ip sla schedule 2 life forever start-time now
    access-list 100 permit ip any any
    access-list 101 permit ip any any
    route-map giga0 permit 10
     match ip address 100
     match interface GigabitEthernet0/0
    route-map giga1 permit 10
     match ip address 101
     match interface GigabitEthernet0/1
    control-plane
    ------------------------------------------config end

    Hello,
    As Richard Burts correctly states, the NAT configuration is not right. But the ICMP echo request for the IP SLA is traffic generated by the router itself with a source interface specified, so there shouldn't be any NAT operation on it at all, should there? I am using IP SLA for two WAN connections too, but I can't recall ever seeing an entry for the ICMP operation in the output of show ip nat translations.
    The static route configuration looks wrong to me too. As far as I remember, it is necessary to specify a next-hop address (subnet/mask via x.x.x.x) on multi-access broadcast networks like Ethernet; otherwise the subnet appears as directly connected in the routing table, and the form "ip route <subnet> <mask> <outgoing interface>" only works correctly for point-to-point links. With the configuration above, I would say no routing is possible at all except for truly directly attached networks. Vibs said it's possible to reach the Google DNS 8.8.8.8 but not the second one, 8.8.4.4; I verified that 8.8.4.4 normally answers ICMP echo requests.
    My guess is that the next hop for the Gig0/0 interface has proxy ARP enabled, but the next hop for Gig0/1 does not.
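    Putting the NAT and static-route points together, a corrected sketch could look like the following. The next-hop addresses are placeholders for the real ISP gateways (they aren't shown in the post), and the duplicated NAT line is assumed to have been meant for route-map giga1 on Gig0/1:
     ip route 0.0.0.0 0.0.0.0 <isp1-gateway-ip> track 10
     ip route 0.0.0.0 0.0.0.0 <isp2-gateway-ip> track 20
     ip route 8.8.8.8 255.255.255.255 <isp1-gateway-ip> permanent
     ip route 8.8.4.4 255.255.255.255 <isp2-gateway-ip> permanent
     ip nat inside source route-map giga0 interface GigabitEthernet0/0 overload
     ip nat inside source route-map giga1 interface GigabitEthernet0/1 overload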
    kind regards
    Lukasz

  • ACS Failover is not working

    We are running primary and secondary ACS 4.0 servers on appliances, and they have been configured for automatic replication every 6 hours between them. When the primary server goes offline because of a network issue, the secondary is supposed to authenticate, but that is not happening. Hence we are forced to use the local accounts configured on the networking device to log in and make configuration changes. Please note all our devices are configured to use both the primary and secondary ACS servers.
    Has anyone in this group come across such a problem?

    Sudipto
    There could be several things that cause your problem.
    My first question would be whether the network devices and the backup server are correctly configured for each other. If you change the configuration of some network device, removing the definition of the primary ACS server so that the only server configured is the backup, does the network device authenticate with the backup?
    My second question would be when there is a network issue with the primary server is it possible that the network issue also impacts connectivity to the backup server? Can you check the logs on the backup server and see whether it received authentication requests? If it did receive authentication requests what was its response (were they authenticated or denied)?
    My third question is whether the network devices are attempting to failover. The best way to determine this would be from the output of some debugs. I suggest that on the router you configure debug aaa authentication and debug tacacs authentication (or radius if you are using radius instead of tacacs) . If you could post the debug output, taken when the problem is going on, it would help us to analyze your problem.
    I have had some experience with certain failure modes on the ACS server in which the network devices would not fail over to the backup. I had a TAC case on this which resulted in a bugID. I am aware of several other bugIDs for similar issues where failover did not occur on remote devices due to certain failure modes on the server. But in these cases there was connectivity to the server and the server was sending a response which was not expected by the remote network device. From your description it sounds like there is no connectivity, so I assume it is not the same issue.
    If you can answer the questions that I listed and provide the debug output I hope that we can help to resolve your issue.
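    For reference, a device pointed at both servers plus the debugs mentioned above would look something like this (addresses and key are placeholders; use the RADIUS equivalents if you run RADIUS instead of TACACS+):
     tacacs-server host <primary-acs-ip> key <shared-secret>
     tacacs-server host <secondary-acs-ip> key <shared-secret>
     debug aaa authentication
     debug tacacs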
    HTH
    Rick

  • CTIOS Client is not working on Windows 7 x64 platform

    Hi All, Good day
    I'm installing CTIOS Client version 8.0 on Windows 7 x64, but after installation, once I open it, it crashes.
    I've tried looking in the registry, but I'm not able to find the Cisco folder under HKEY_LOCAL_MACHINE > SOFTWARE.
    I'm attaching a snapshot for the crash error
    Regards,
    Mohamed Sherif

    8.0 is not supported on Windows 7 64-bit.
    From the 8.5.x BOM -
    Windows 7 (64-bit) Support for CTI OS
    Starting with CTI OS Release 8.5(2), you can access the CTI OS Client on Windows 7 64-bit.
    Try running it in compatibility mode for Windows XP ...
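    On the missing registry folder: on 64-bit Windows a 32-bit installer is redirected to the WOW64 view of the registry, so if the install completed, the keys would normally sit under something like the following (the exact key name is an assumption and can vary by release):
    reg query "HKLM\SOFTWARE\Wow6432Node\Cisco Systems, Inc."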
    Brian
    Please rate helpful posts

  • Session-failover-enabled not working in iWS6 with a FileStore

    I'm trying to use a FileStore to implement session persistence using IWSSessionManager. I have the following in my web-apps.xml:
    <web-app uri="/Banking" dir="c:/java/online">
      <session-manager class='com.iplanet.server.http.session.IWSSessionManager'>
        <init-param>
          <param-name>session-data-store</param-name>
          <param-value>com.iplanet.server.http.session.FileStore</param-value>
        </init-param>
        <init-param>
          <param-name>session-data-dir</param-name>
          <param-value>c:/iplanet/servers/SessionData</param-value>
        </init-param>
        <init-param>
          <param-name>session-failover-enabled</param-name>
          <param-value>false</param-value>
        </init-param>
      </session-manager>
    </web-app>
    I'm seeing the following exception in my log:
    [12/Jun/2002:10:10:56] info ( 320): java.io.NotSerializableException: com.iplanet.server.http.servlet.WebApplication
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1148)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.io.ObjectOutputStream.outputClassFields(ObjectOutputStream.java:1827)
    at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:480)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1214)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.io.ObjectOutputStream.outputClassFields(ObjectOutputStream.java:1827)
    at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:480)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1214)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at java.util.Hashtable.writeObject(Hashtable.java:764)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.io.ObjectOutputStream.invokeObjectWriter(ObjectOutputStream.java:1864)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1210)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at com.iplanet.server.http.session.IWSHttpSession.writeObject(IWSHttpSession.java:764)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.io.ObjectOutputStream.invokeObjectWriter(ObjectOutputStream.java:1864)
    at java.io.ObjectOutputStream.outputObject(ObjectOutputStream.java:1210)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:366)
    at com.iplanet.server.http.session.FileStore.save(FileStore.java:167)
    at com.iplanet.server.http.session.IWSSessionManager.update(IWSSessionManager.java:499)
    at com.iplanet.server.http.servlet.NSHttpServletRequest.closeInputStream (NSHttpServletRequest.java:612)
    at com.iplanet.server.http.servlet.NSServletRunner.servicePostProcess(NSServletRunner.java:857)
    at com.iplanet.server.http.servlet.NSServletRunner.invokeServletService(NSServletRunner.java:942)
    at com.iplanet.server.http.servlet.WebApplication.service(WebApplication.java:1065)
    at com.iplanet.server.http.servlet.NSServletRunner.ServiceWebApp(NSServletRunner.java:959)
    Any ideas what's wrong?
    I should note that I don't think it is because I am storing non-serializable things in the session attributes. I think this because originally I was getting an exception that said that a specific attribute wasn't serializable. I changed the class definition of the class I was storing in that attribute to include "implements java.io.Serializable" and that problem went away.
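    For what it's worth, the stack trace shows the container's WebApplication object being pulled into serialization, which usually means something reachable from a session attribute still holds a reference to a container object. A purely hypothetical sketch of the usual shape of the fix (class and field names invented for illustration):
    import java.io.Serializable;
    import javax.servlet.ServletContext;
    public class UserState implements Serializable {
        private String accountId;                 // plain data serializes fine
        private transient ServletContext context; // container references marked transient so
                                                  // FileStore can serialize the session
        public UserState(String accountId, ServletContext context) {
            this.accountId = accountId;
            this.context = context;
        }
    }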


  • RV042 Failover does not work properly in certain WAN1 signal conditions

    Our RV042 has a cable modem on WAN1 and ADSL on WAN2; it is set to smart link backup mode.
    In certain cases of WAN1 signal loss, RV042 seems not to detect this condition. Consequently it does not switch automatically to WAN2.
    One way to get it to switch is to disconnect the WAN1 modem power (manually, on site); then WAN2 takes over as the active link.
    We conclude that, in the mentioned cases, although WAN1 signal is not good enough to provide internet service, RV042 makes a wrong decision and determines WAN1 is ok.
    Is there a way to have a correct switchover for these cases?
    Maybe with a firmware fix, an internal user setting, a different router model, a combination of these elements, or any other solution you can provide us.

    Eduardo,
    It sounds like you have the device set up properly; however, under the System Management tab there are settings for how the router detects a disconnect.
    If WAN1 and WAN2 are set to check the default gateway, the router will ping the gateway, and if it gets a reply from the modem it will stay connected.
    You might not have Internet connectivity, but the router thinks you do because it can ping the modem. If you uncheck this and set it to a remote host instead, with WAN1 set to www.google.com and WAN2 set to www.yahoo.com, the router has to get all the way out to the Internet to reach those names. If it can't, it starts the failover process.

  • Failover not working while connecting to Oracle 11g database

    Hi,
    We have a J2EE application that connects to Oracle 11g database.
    The connection URL we are using is as below
    jdbc:oracle:thin:@(DESCRIPTION =(SDU=32768)(ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))(ADDRESS=(PROTOCOL=TCP)(HOST= xyz.hostname.com)(PORT=1525))(CONNECT_DATA=(SERVICE_NAME=PQR)))
    The issue we are facing is that database failover is not working. The application only connects to the first host in the TNS entry. Every time there is a failure connecting to the first host, manual steps are required to swap the hosts.
    This started happening after we upgraded Oracle DB from 9i to 11g.
    We are using the client jar of 9i to connect to 11g. Could this be causing the problem?
    Thanks In Advance.
    -Tara

    889517 wrote:
    Yes, you are right. Nothing else was updated.
    The application still works as expected except for the failover.
    If that is the case, then I seriously doubt it has anything to do with Java; it would be something to do with Oracle and/or the network infrastructure.
    If not, then it is some small problem with the driver. You can try updating the driver, but I wouldn't expect a fix.
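    If it does come down to the descriptor or the driver, one thing worth testing alongside an 11g-matched JDBC driver is an explicit ADDRESS_LIST with failover turned on. A sketch based on the URL in the question (wrapped here for readability; in practice it is one line):
    jdbc:oracle:thin:@(DESCRIPTION=(SDU=32768)
      (ADDRESS_LIST=(FAILOVER=on)(LOAD_BALANCE=off)
        (ADDRESS=(PROTOCOL=TCP)(HOST=abc.hostname.com)(PORT=1525))
        (ADDRESS=(PROTOCOL=TCP)(HOST=xyz.hostname.com)(PORT=1525)))
      (CONNECT_DATA=(SERVICE_NAME=PQR)))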

  • Failover not working in a cluster environment; asks for re-login to the application

    Hi,
    I have setup cluster on weblogic and deployed CRM (ejb/ear) application.
    When I stop the managed server that is handling the request, the other managed server picks up the request, but it asks for a re-login.
    I think failover is not working properly.
    Could you please help?
    Thanks,
    Vishal

    Hi Kalyan,
    Please see the diagnostics below:
    Managed Server1:
    ===================================================
    Date: Tue Sep 04 18:33:12 IST 2012
    DdCompnt=11,SessionContext=7,CurrJvmHeapMB=235,Bos=2,JForm=1,MaxBoRowCount=493,JApp=13,JField=1,JDdField=1,JSession=6,Sessions=7,JBaseBo=2,Misc=74,JPTrace=1,LpmErrors=1,Forms=1,Active Sessions=6,JDdTable=1021,MaxJvmHeapMB=467,MaxBos=18,BoRowCount=33,JError=1,JFields=1,UtlRecs=175,FlexAttr=-5,UserContext=3
    Managed Server2:
    ===================================================
    Date: Tue Sep 04 18:33:20 IST 2012
    DdCompnt=11,SessionContext=8,CurrJvmHeapMB=226,JForm=1,JApp=13,JSession=7,Sessions=8,Misc=148,JPTrace=1,Forms=1,Active Sessions=7,JDdTable=1124,MaxJvmHeapMB=472,UtlRecs=81,FlexAttr=-5,UserContext=4
    Free Java Heap Memory: 258400 KB
    Total Java Heap Memory: 495779 KB
    I tried opening many sessions. They are assigned randomly, but when I kill Managed Server 2, its corresponding sessions stop working.
    System Error error executing LPoms.getUserContext()
    Also, I found the following warning in the WebLogic logs.
    <Sep 4, 2012 6:26:15 PM IST> <Warning> <Socket> <BEA-000402> <There are: 5 active sockets, but the maximum number of socket reader threads allowed by the configuration is: 4. The configuration may need altered.>
    Thanks for your immediate help.

  • Uploading Files to DMP for local storage (Failover) in DMM/DMP 5.2 is not working

    Dear All,
    I am facing a problem getting failover working on the DMP. I did the following:
    1. Make the design and save the presentation that needs to be the failover content.
    2. From Advanced Tasks --> File Transfer to DMP or Server, select the presentation.
    3. In Advanced Tasks --> Go to URL, you will find an automatically generated file.
    4. Go to Digital Media Player, select the generated URL, select the DMP, and press Go.
    In DMM 5.1, after the above steps, the player would restart and, after it came back, it displayed the uploaded presentation.
    But now the system is running 5.2 and it is not working. Can anyone help me do this in DMM/DMP 5.2?
    Thanks,
    Ahmed Ellboudy.

    Ahmed,
    Let's not worry about failover at this time.
    Let's just attempt to get the LOCAL presentation to play.
    Your steps appear to be correct for transferring the
    presentation to the DMP.
    * From the DMM-DSM, Select the action to play the presentation
      on the DMP in Question
    Does the Presentation play on the DMP?
    then
    * go to the advanced tasks on the DMM-DSM and find the
      GO TO URL for the presentation that you created.
      Copy URL to your clipboard.
    * Now go to the DMP-DM of the DMP where you transferred the
      presentation.   Go to the Display Actions-->URL to be displayed
      and Paste the GO TO URL here then press the GO button
    Does the Presentation play on the DMP?
    Let me know what you see....
    T.

  • SAP ECC 6.0 SR3 Cluster failover not working in AIX with DB2 UDB V9.1 FP6

    Hi Gurus,
    We have installed SAP ECC 6.0 SR3 High Availability with DB2 UDB V9.1 FP6 in an AIX cluster environment.
    After installation we are doing the cluster failover test.
    Node A
    Application Server
    Mount Points:
    /sapmnt/<SID>
    /usr/sap/<SID>
    /usr/sap/trans
    Node B
    Database Server
    Mount Points:
    /db2/<SID>
    The procedure followed to do the cluster failover:
    We brought down the cluster on Node A, and all the resources of Node A were moved to Node B.
    On Node B, when we issued the command to start SAP, it said "no start profiles found".
    We then brought down the cluster on Node B and moved the resources from Node B to Node A. There the DB2 user IDs were not available. We created the user IDs manually on Node A; however, it did not work.
    Please suggest the procedure to start SAP in a cluster failover.
    Best Regards
    Sija

    Hi Sija,
    Can I have a detailed scenario of your cluster configuration?
    You say you are going to start the cluster package manually. If that is right, please make sure you have the same copy of the start and instance profiles on Node B as on Node A; that is, you need to maintain two startup and two instance profiles, one set for each node. In a normal situation it will pick the Node A profile to start the database on Node A, but in a failover situation it will not pick the Node A profile; it should pick Node B's profiles.
    Just make a copy from Node A, change the profile names accordingly for Node B, and then try to restart.
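    As a rough sketch of that copy step (the instance name and hostnames are placeholders; the profiles normally live in the shared /sapmnt/<SID>/profile directory):
    cd /sapmnt/<SID>/profile
    cp START_<instance>_nodeA START_<instance>_nodeB
    cp <SID>_<instance>_nodeA <SID>_<instance>_nodeB
    # then adjust the hostname references inside the copied profiles and retry the start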
    Regards
    Nick Loy

  • ADSSO Service Not Working on Secondary CAS after Failover

    We are running NAC OS 4.9.2 in OOB L2 Virtual Gateway...
    We have a CAS cluster:
    Primary CAS -- 10.245.220.5, Secondary CAS -- 10.245.220.6, and Service IP 10.245.220.4
    When, in the HA cluster, the primary is Active and the secondary is Standby OK, ADSSO is working and the service is started.
    We have captured the details of both cases below.
    10.245.220.5
    2013-04-18 15:46:21.833 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - done building kdc list for domain kotakgroup.com
    2013-04-18 15:46:21.833 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - KDC(s) :[kgp-gor-dc01.kotakgroup.com, kgp-gor-dc02.kotakgroup.com, kgp-gor-dc03.kotakgroup.com, kgp-gor-dc04.kotakgroup.com, kgp-gor-dc05.kotakgroup.com, kgp-dr-dc01.kotakgroup.com, kgp-dr-dc03.kotakgroup.com, kgp-dr-dc02.kotakgroup.com]
    2013-04-18 15:46:21.833 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - writeKrbFile: writing to file ../conf/krb.txt
    2013-04-18 15:46:21.833 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - writeKrbFile: wrote to file ../conf/krb.txt
    2013-04-18 15:46:21.834 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - creating login context ...
    2013-04-18 15:46:21.834 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - created login context ...javax.security.auth.login.LoginContext@bb3f71
    2013-04-18 15:46:39.207 +0530  Thread-70 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - Notifying GSSServer status Started
    2013-04-18 15:47:07.540 +0530  Timer-3 INFO  com.perfigo.wlan.jmx.adsso.GSSRetrier              - GSSR - Windows SSO is running
    When the primary is rebooted and the secondary becomes Active OK, ADSSO is not working and the service is not started.
    10.245.220.6
    2013-04-18 15:50:42.933 +0530  Timer-3 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - Server starting server ...
    2013-04-18 15:50:42.933 +0530  Timer-3 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - Server is now running ...
    2013-04-18 15:50:42.933 +0530  Thread-68 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - SPN : [casadsso/[email protected]]
    2013-04-18 15:50:42.933 +0530  Thread-68 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - GSSServer - building kdc list for domain kotakgroup.com
    2013-04-18 15:50:42.934 +0530  Thread-68 ERROR com.perfigo.wlan.jmx.adsso.GSSServer               - Unable to start server ... kotakgroup.com.
    2013-04-18 15:50:42.937 +0530  Thread-68 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - Notifying GSSServer status Stopped
    2013-04-18 15:50:42.937 +0530  Thread-68 INFO  com.perfigo.wlan.jmx.adsso.GSSServer               - server is exiting .
    Our observation is that krb.txt is not getting generated when the secondary is Active OK.
    Can anyone suggest how to fix the issue?

    Hi,
    Can you check and see whether DNS and NTP are accurate, and can you verify your AD environment? What version of domain controllers is in service? If there is a mix, then other steps, like modifying a few files, may be needed.
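    One quick DNS sanity check from the secondary CAS (offered only as a suggestion, since the failing step in the log is building the KDC list for the domain) is to confirm it can resolve the Kerberos SRV records:
    nslookup -type=SRV _kerberos._tcp.kotakgroup.com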
    Also was the secondary CAS replaced or reimaged recently?
    Thanks,
    Sent from Cisco Technical Support iPad App

  • CVP Failover Not working

    Hi,
    I have UCCE with CVP in my environment, and we have two Call/VXML Servers for redundancy. I have configured the commands below on the VXML gateway and also created a dial-peer towards my second Call/VXML Server to achieve redundancy, but somehow the failover is not working.
    My UCCE with the primary CVP is working fine without any issue.
    ip host mediaserver x.x.x.x
    ip host mediaserver-backup x.x.x.x
    Also, I have set the media_server ECC variable to 'mediaserver' to achieve redundancy.
    I'm not sure whether I am missing any other configuration to achieve the failover.
    Please suggest.

    Thanks, Chaitan, for your reply.
    Yes, I am looking for redundancy: if CVP1 goes down, then calls should go to CVP2.
    We are not using a SIP proxy in our environment. Please find below the dial-peer configurations targeting CVP1 and CVP2.
    dial-peer voice 7701 voip
     destination-pattern 77..
     translate-outgoing called 1
     session protocol sipv2
     session target ipv4:168.167.0.162 (CVP1)
     session transport tcp
     voice-class codec 1  
     voice-class h323 1
     dtmf-relay rtp-nte h245-signal h245-alphanumeric
     no vad
    dial-peer voice 7702 voip
     destination-pattern 77..
     translate-outgoing called 1
     session protocol sipv2
     session target ipv4:168.167.0.163 (CVP2)
     session transport tcp
     voice-class codec 1  
     voice-class h323 1
     dtmf-relay rtp-nte h245-signal h245-alphanumeric
     no vad
    Thanks
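    One knob often involved when two VoIP dial-peers share the same destination-pattern is an explicit hunt order via preference; this is only a sketch, not a confirmed fix for this case:
    dial-peer voice 7701 voip
     preference 1
    dial-peer voice 7702 voip
     preference 2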
