FWSM Failover times

Hi Folks
I have two 6509s with an FWSM in each. They are configured in active/standby failover with default values.
The 6500s are also OSPF routers. Everything is redundant: HSRP, FWSM, etc.
When we reboot one of the 6500s, it takes approximately 45 seconds for the standby FWSM to become active.
Is this normal? Can the time be shortened?
Any comments appreciated.

Hi,
The initial 15-second detection time can be reduced to 3 seconds by tuning the failover polltime and holdtime as follows:
"failover polltime unit 1 holdtime 3"
Also keep in mind that after a switchover the new active unit has to establish neighbor relationships with the neighboring routers; the standby never participates in the OSPF process, so in short the new active unit has to re-establish adjacencies.
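For reference, a rough sketch of that tuning (the second line, which also speeds up interface monitoring, is an assumption on my part; check that your FWSM release supports it before relying on it):
     failover polltime unit 1 holdtime 3
     failover polltime interface 3
Even with faster detection, the new active unit still has to re-establish its OSPF adjacencies as noted above, so the end-to-end interruption will be somewhat longer than the raw failover detection time.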
Hope that helps.
Thanks,
Varun

Similar Messages

  • FWSM failover times in a real crash

    Hi,
    I have two Cat6k in VSS and two FWSM service modules.
    How fast will the FWSM switch over to the backup firewall after the active firewall crashes or loses power?
    Sent from Cisco Technical Support iPad App

    Hi,
    The initial 15-second detection time can be reduced to 3 seconds by tuning the failover polltime and holdtime as follows:
    "failover polltime unit 1 holdtime 3"
    Also keep in mind that after a switchover the new active unit has to establish neighbor relationships with the neighboring routers; the standby never participates in the OSPF process, so in short the new active unit has to re-establish adjacencies.
    Hope that helps.
    Thanks,
    Varun

  • Ask the Expert: Configuring, Troubleshooting & Best Practices on ASA & FWSM Failover

    With Prashanth Goutham R.
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about configuring, troubleshooting, and best practices for Adaptive Security Appliances (ASA) and Firewall Services Module (FWSM) failover with Prashanth Goutham. 
    The Firewall Services Module (FWSM) is a high-performance stateful-inspection firewall that integrates into the Cisco® 6500 switch and 7600 router chassis. The FWSM monitors traffic flows using application inspection engines to provide a strong level of network security. Cisco ASA is a key component of the Cisco SecureX Framework and protects networks of all sizes with MultiScale performance and a comprehensive suite of highly integrated, market-leading security services.
    Prashanth Goutham is an experienced support engineer with the High Touch Technical Support (HTTS) Security team, covering all Cisco security technologies. During his four years with Cisco, he has worked with Cisco's major customers, troubleshooting routing, LAN switching, and security technologies. He is also qualified as a GIAC Certified Incident Handler (GCIH) by the SANS Institute.
    Remember to use the rating system to let Prashanth know if you have received an adequate response. 
    Prashanth might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Security sub-community forum shortly after the event. This event lasts through July 13, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

    Hello John,
    This session is on failover functionality on all Cisco firewalls, and I'm not an expert on QoS, but I have the answer for what you need. The way to limit traffic would be to enable QoS policing on your firewalls. Your requirement is to limit four different tunnels to the configured rates and drop any further packets; this is called traffic policing. I tried out the following in my lab and it looks good.
    access-list tunnel_one extended permit ip 10.1.0.0 255.255.0.0 20.1.0.0 255.255.0.0
    access-list tunnel_two extended permit ip 10.2.0.0 255.255.0.0 20.2.0.0 255.255.0.0
    access-list tunnel_three extended permit ip 10.3.0.0 255.255.0.0 20.3.0.0 255.255.0.0
    access-list tunnel_four extended permit ip 10.4.0.0 255.255.0.0 20.4.0.0 255.255.0.0
    class-map Tunnel_Policy1
      match access-list tunnel_one
    class-map Tunnel_Policy2
      match access-list tunnel_two
    class-map Tunnel_Policy3
      match access-list tunnel_three
    class-map Tunnel_Policy4
      match access-list tunnel_four
    policy-map tunnel_traffic_limit
      class Tunnel_Policy1
        police output 4096000
    policy-map tunnel_traffic_limit
      class Tunnel_Policy2
        police output 5734400
    policy-map tunnel_traffic_limit
      class Tunnel_Policy3
        police output 2457600
    policy-map tunnel_traffic_limit
      class Tunnel_Policy4
        police output 4915200
    service-policy tunnel_traffic_limit interface outside
    You might want to watch out for the following changes in values:
    HTTS-SEC-R2-7-ASA5510-02(config-cmap)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)#  class Tunnel_Policy1
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#   police output 4096000
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)#  class Tunnel_Policy2
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#   police output 5734400
    WARNING: police rate 5734400 not supported. Rate is changed to 5734000
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#
    HTTS-SEC-R2-7-ASA5510-02(config)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)#  class Tunnel_Policy3
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#   police output 2457600
    WARNING: police rate 2457600 not supported. Rate is changed to 2457500
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)#  class Tunnel_Policy4
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)#   police output 4915200
    WARNING: police rate 4915200 not supported. Rate is changed to 4915000
    I believe this is because of the software granularity and the way IOS rounds it off in multiples of a certain value, so watch out for the exact values you might get finally. I used this website to calculate your Kilobyte values to Bits: http://www.matisse.net/bitcalc/
    The final outputs of the configured values were:
    Class-map: Tunnel_Policy1
      Output police Interface outside:
        cir 4096000 bps, bc 128000 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy2
      Output police Interface outside:
        cir 5734000 bps, bc 179187 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy3
      Output police Interface outside:
        cir 2457500 bps, bc 76796 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy4
      Output police Interface outside:
        cir 4915000 bps, bc 153593 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
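    (For reference, counters like the ones above can be re-checked later; if I remember correctly the command is "show service-policy interface outside", but verify on your ASA version.)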
    Please refer to the QOS document on CCO here for further information: http://www.cisco.com/en/US/docs/security/asa/asa84/configuration/guide/conns_qos.html
    Hope that helps..

  • Failover time.

    hi,
              I have got a problem with failover time.
              My environment,
              One cluster: two WebLogic Server 5.1 SP4 instances running on Sun Solaris. The
              cluster uses in-memory replication.
              The web server is Apache running on Sun Solaris. The Apache bridge is set up
              with weblogic.conf reading:
              WeblogicCluster 10.2.2.20:7001,10.2.2.21:7001
              ConnectTimeoutSecs 10
              ConnectRetrySecs 5
              StatPath true
              HungServerRecoverSecs 30:100:120
              Everything starts fine. Both WebLogic servers say they join the
              cluster, and the application works fine. When one WebLogic server is
              forced to shut down, failover takes place fine.
              The problem occurs when the machine running the WebLogic server that has
              the first entry in the weblogic.conf file (10.2.2.20) is unplugged from
              the network; failover then takes about three minutes.
              Could someone help me reduce this time? Is there a property that has to
              be set in the weblogic.conf or weblogic.properties file?
              Thanks in Advance
              Arun
              

    arunbabu wrote:
              > hi,
              > I have got a problem with failover time.
              > My environment,
              > One cluster: two weblogic servers5.1 sp4s running on Sun Solaris. The
              > cluster uses In-memory replication.
              > Web Server is Apache running on Sun solaris. Apache bridge is setup
              > with weblogic.conf reads:
              >
              > WeblogicCluster 10.2.2.20:7001,10.2.2.21:7001
              > ConnectTimeoutSecs 10
              > ConnectRetrySecs 5
              > StatPath true
              > HungServerRecoverSecs 30:100:120
              >
              > Everything is starting fine. Both weblogic server says Joins the
              > cluster....and application is working fine. When one weblogic server is
              > forced to shutdown, failover takes place fine.
              > The problem occurs when the machine, that has first entry in
              > weblogic.conf file( 10.2.2.20 )running weblogic server is unplugged from
              > the network, failover takes after three minutes.
              > Could someone help me how to reduce this time. Is there any property
              > that has to be set in the weblogic.conf or in weblogic.properties file
              > that need to be set.
              HungServerRecoverSecs seconds
              This implementation takes care of the hung or unresponsive servers in
              the cluster. The plug-in waits for HungServerRecoverSecs for the server to
              respond and then declares that server dead, failing over to the next server.
              The minimum value for this setting is 10 and the maximum value is 600. The
              default is set at 300. It should be set to a very large value. If it is less
              than the time the servlets take to process, then you will see unexpected
              results.
              Try reducing HungServerRecoverSecs. But remember, if your application
              processing takes a long time, then you will be in trouble, since the plug-in
              will keep failing over to other servers in the cluster and you will be
              thrashing the servers.
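              A minimal sketch of what that would look like in the plug-in's weblogic.conf
              (the values are illustrative only and should be tested against how long your
              servlets actually take to respond):
              WeblogicCluster 10.2.2.20:7001,10.2.2.21:7001
              ConnectTimeoutSecs 10
              ConnectRetrySecs 5
              HungServerRecoverSecs 30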
              - Prasad
              >
              > Thanks in Advance
              > Arun
              Cheers
              - Prasad
              

  • What are typical failover times for application X on Sun Cluster

    Our company does not yet have any hands-on experience with clustering anything on Solaris, although we do with Veritas and Microsoft. My experience with MS is that it is as close to seamless (instantaneous) as possible. The Veritas clustering takes a little bit longer to activate the standbys. A new application we are bringing in house soon runs on Sun Cluster (it is some BEA Tuxedo/WebLogic/Oracle monster). They claim the time it takes to flip from the active node to the standby node is ~30 minutes. This to us seems a bit insane since they are calling this "HA". Is this type of failover time typical in Sun land? Thanks for any numbers or references.

    This is a hard question to answer because it depends on the cluster agent/application.
    On one hand you may have a simple Sun Cluster application that fails over in seconds because it has to do a limited amount of work (umount here, mount there, plumb network interface, etc) to actually failover.
    On the other hand these operations may, depending on the application, take longer than another application due to the very nature of that application.
    An Apache web server failover may take 10-15 seconds but an Oracle failover may take longer. There are many variables that control what happens from the time that a node failure is detected to the time that an application appears on another cluster node.
    If the failover time is 30 minutes I would ask your vendor why that is exactly.
    Not in a confrontational way but a 'I don't get how this is high availability' since the assumption is that up to 30 minutes could elapse from the time that your application goes down to it coming back on another node.
    A better solution might be a different application vendor (I know, I know) or a scalable application that can run on more than one cluster node at a time.
    The logic with the scalable approach is that if a failover takes 30 minutes or so to complete, failover becomes an expensive operation, so I would rather have my application use multiple nodes at once than eat a 30-minute failover if one node dies in a two-node cluster:
    serverA > 30 minute failover > serverB
    seems to be less desirable than
    serverA, serverB, serverC, etc concurrently providing access to the application so that failover only happens when we get down to a handful of nodes
    Either one is probably more desirable than having an application outage(?)

  • VIP failover time

    I have configured a critical service (ap-kal-pinglist) for the VIP redundant failover. The default freq, maxfail, and retry freq are 5, 3, and 5, so I think the failover time should be 5+5*3*2 = 35 s. But the virtual router's state changed from "master" to "backup" around 5 seconds after the connection was lost.
    Can anyone help me understand this?

    Service sw1-up-down connects to interface e2, going down in 15 sec.
    Service sw2-up-down connects to interface e3, going down in 4 sec?
    JAN 14 02:38:41 5/1 3857 NETMAN-2: Generic:LINK DOWN for e2
    JAN 14 02:39:57 5/1 3858 NETMAN-2: Generic:LINK DOWN for e3
    JAN 14 02:39:57 5/1 3859 VRRP-0: VrrpTx: Failed on Ipv4FindInterface
    JAN 14 02:40:11 5/1 3860 NETMAN-2: Enterprise:Service Transition:sw2-up-down -> down
    JAN 14 02:40:11 5/1 3861 NETMAN-2: Enterprise:Service Transition:sw1-up-down -> down

  • Failover time using BFD

    Hi Champs,
    We have configured BFD in a multihoming scenario with the BGP routing protocol. The timer configuration is "bfd interval 100 min_rx 100 multiplier 5".
    Failover from the first ISP to the second takes 30 sec, while failover in the other direction takes more than 1 min. Can you suggest a reason for the different failover times, and how can I get an equal failover time from both ISPs? How is convergence time calculated in a BGP + BFD scenario?
    Regards
    V
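    As a rough illustration (the interface name, neighbor address, and AS numbers below are placeholders, not taken from your setup): with "bfd interval 100 min_rx 100 multiplier 5", BFD itself should detect a failure in about 100 ms x 5 = 500 ms, so most of the 30 s / 1 min you observe is BGP withdrawing and re-installing routes rather than failure detection. A typical pairing of BFD with BGP looks roughly like:
     interface GigabitEthernet0/0
      bfd interval 100 min_rx 100 multiplier 5
     !
     router bgp 65000
      neighbor 192.0.2.1 remote-as 64500
      neighbor 192.0.2.1 fall-over bfd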

    Vicky,
    A simple topology diagram would help in understanding the scenario better. Do you have both ISPs terminated on the same router or on different routers?
    How many prefixes are you learning? The full Internet table or a few prefixes?
    Accordingly, you can consider BGP PIC or best-external to speed up convergence.
    -Nagendra

  • 2540 / RDAC path failover time

    Hi,
    I have a RHEL 5.3 server with two single port HBAs. These connect to a Brocade 300 switch and are zoned to two controllers on a 2540. Each HBA is zoned to see each controller. RDAC is used as the multipathing driver.
    When testing the solution, if I pull the cable from the active path between the HBA and the switch, it takes 60 seconds before the path fails over to the second HBA. No controller failover takes place on the array - the path already exists through the Brocade between the preferred array controller and the second HBA. After 60 seconds, disk I/O continues to the original controller.
    Is this normal? Is there a way of reducing the failover time? I had a look at the /etc/mpp.conf variables but there is nothing obvious there that is causing this delay.
    Thanks

    Thanks Hugh,
    I forgot to mention that we were using Qlogic HBAs so our issue was a bit different...
    To resolve our problem: since we had 2x2 FC HBA cards in each server, we needed to configure zoning on the Brocade switch to ensure that each HBA port only saw one of the two array controllers (previously both controllers were visible to each HBA port, which was breaking some RDAC rule). Also, we upgraded the QLogic drivers using qlinstall -i before installing RDAC (the QLogic drivers which come with RHEL 5.3 are pretty old, it seems).
    Anyway, after these changes path failovers were working as expected and our timeout value of 60sec for Oracle ocfs2 cluster was not exceeded.
    We actually ended up having to increase the ocfs2 timeout from 60 to 120 seconds because another test case failed - it was taking more than 60sec for a controller to failover (simulated by placing active controller offline from the service advisor). We are not sure if this time is expected or not... anyway have a service request open for this.
    Thanks again,
    Trev

  • Optimize rac failover time?

    I have a 2-node RAC and failover is taking 4 minutes. Please advise some tips/documents/links that show how to optimize the RAC failover time.
    [email protected]

    Hi
    Could you provide some more information about what you are trying to achieve? I assume you are talking about the time it takes for clients to start connecting to the available instance on the second node; could you clarify this?
    There are SQL*Net parameters that can be set, and you can also make shadow connections with the preconnect method in the failover section of your tnsnames.ora on the clients.
    Have you set both of your hosts as preferred in the service configuration on the RAC cluster? The impact of a failure will be smaller, as approximately half of your connections will be unaffected when an instance fails.
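    For illustration only (the alias names, hosts, and retry values below are placeholders, and the BACKUP alias would be a second entry you define yourself), a client-side tnsnames.ora entry using TAF with PRECONNECT might look roughly like this:
     MYSERVICE_TAF =
       (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
         (LOAD_BALANCE = yes)
         (FAILOVER = on)
         (CONNECT_DATA =
           (SERVICE_NAME = myservice)
           (FAILOVER_MODE =
             (BACKUP = MYSERVICE_BK)
             (TYPE = SELECT)
             (METHOD = PRECONNECT)
             (RETRIES = 20)
             (DELAY = 3)
           )
         )
       )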
    Cheers
    Peter

  • SW-6509-FWSM failover Troubleshooting First aid

    Fault Description:
    (1)
    Pings between the active FWSM and standby FWSM inside interfaces fail.
    On the active FWSM: ping 172.17.1.50 is OK; ping 172.17.1.49 fails.
    On the standby FWSM: ping 172.17.1.49 is OK; ping 172.17.1.50 fails.
    However, pings between the active FWSM and standby FWSM outside interfaces are OK.
    On the active FWSM: ping 172.17.1.36, ping 172.17.1.37, ping 172.17.1.35/33/34, and ping www.baidu.com are all OK.
    On the standby FWSM: ping 172.17.1.36, ping 172.17.1.37, ping 172.17.1.35/33/34, and ping www.baidu.com are all OK.
    (2)
    Another problem:
    From the active FWSM and standby FWSM inside interfaces, pings to the 7706 all fail.
    Summary: it may be caused by the FWSM.
    Topology: see attachment.
    FWSM :
    FWSM# show failover state
    ====My State===
    Primary | Active |
    ====Other State===
    Secondary | Standby |
    ====Configuration State===
        Interface config Syncing - STANDBY
        Sync Done
    ====Communication State===
        Mac set
    =========Failed Reason==============
    My Fail Reason:
        Ifc Failure
    Other Fail Reason:
        Comm Failure
    FWSM# show failover
    Failover On
    Failover unit Primary
    Failover LAN Interface: lan Vlan 997 (up)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 15 seconds
    Interface Policy 50%
    Monitored Interfaces 42 of 250 maximum
    Config sync: active
    Version: Ours 4.0(13), Mate 4.0(13)
    Last Failover at: 19:08:24 Beijing Dec 2 2013
        This host: Primary - Active
            Active time: 358944 (sec)
        Interface outside (172.17.1.36): Normal
        Interface inside (172.17.1.49): Normal (Not-Monitored)
        Other host: Secondary - Standby Ready
            Active time: 0 (sec)
        Interface outside (172.17.1.37): Normal
        Interface inside (172.17.1.50): Normal (Not-Monitored)
    (Not-Monitored) -----------------??????

    That's what I thought, but then again, from the 6500 config prompt I actually get echo replies (!) from the FWCTX, with a capture enabled as:
         access-list CAP permit ip any any
         capture mgmt access-list CAP interface MGMT packet-length 1500 circular-buffer
    But it shows blank and no hit counts. The same happens using RTMonitor in ASDM (6.2(2f)): some packets that are permitted and routed correctly aren't actually noticed. I don't get any logging for the missing/dropped/denied echo replies from the FWCTX to the 6500 MSFC, nor for the successful replies from the 6500 to the FWCTX, with ASDM debugging logging on.

  • FWSM Failover configuration - One Context

    Hi,
    Is it possible to configure only one context in H.A. on the FWSM? Yesterday I tried to configure this but I couldn't.
    Please check my configuration and tell me your opinion on whether it is possible, or whether maybe I have to configure all contexts in H.A.
    These messages appear on the console when I activate failover:
    Nov 23 2011 19:20:04: %FWSM-1-105002: (Secondary) Enabling failover.
    Nov 23 2011 19:20:08: %FWSM-1-105038: (Secondary) Interface count mismatch
    Nov 23 2011 19:20:08: %FWSM-1-104002: (Secondary) Switching to STNDBY - Other unit has different set of vlans configured
    Nov 23 2011 19:20:11: %FWSM-1-105001: (Secondary) Disabling failover.
    Nov 23 2011 19:23:58: %FWSM-6-302010: 0 in use, 46069 most used
    FWSM-Primario# show failover
    Failover On
    Failover unit Primary
    Failover LAN Interface: FAILLINK Vlan 1100 (up)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 15 seconds
    Interface Policy 50%
    Monitored Interfaces 1 of 250 maximum
    failover replication http
    Config sync: active
    Version: Ours 4.1(5), Mate 4.1(5)
    Last Failover at: 19:18:35 UTC Nov 23 2011
            This host: Primary - Active
                    Active time: 1125 (sec)
                    admin Interface inside (10.1.1.1): Normal (Not-Monitored)
                    admin Interface outside (20.1.1.1): No Link (Not-Monitored)
                    FW-GoB-Fija Interface WASOB2N-SISOB2N-Fija (10.115.30.36): Normal (Waiting)
                    GESTION-WAS Interface OUTSIDE (10.116.20.22): Normal (Not-Monitored)
                    GESTION-WAS Interface U2000 (10.123.20.1): Normal (Not-Monitored)
            Other host: Secondary - Cold Standby
                    Active time: 0 (sec)
                    admin Interface inside (0.0.0.0): Unknown (Not-Monitored)
                    admin Interface outside (0.0.0.0): Unknown (Not-Monitored)
                    FW-GoB-Fija Interface WASOB2N-SISOB2N-Fija (10.115.30.37): Unknown (Waiting)
                    GESTION-WAS Interface OUTSIDE (0.0.0.0): Unknown (Not-Monitored)
                    GESTION-WAS Interface U2000 (0.0.0.0): Unknown (Not-Monitored)
    Stateful Failover Logical Update Statistics
            Link : STATELINK Vlan 1101 (up)
            Stateful Obj    xmit       xerr       rcv        rerr     
            General         0          0          0          0       
            sys cmd         0          0          0          0       
            up time         0          0          0          0       
            RPC services    0          0          0          0       
            TCP conn        0          0          0          0       
            UDP conn        0          0          0          0       
            ARP tbl         0          0          0          0       
            Xlate_Timeout   0          0          0          0       
            AAA tbl         0          0          0          0       
            DACL            0          0          0          0       
            Acl optimization        0          0          0          0       
            OSPF Area SeqNo         0          0          0          0       
            Mamba stats msg         0          0          0          0       
            Logical Update Queue Information
                            Cur     Max     Total
            Recv Q:         0       0       0
            Xmit Q:         0       0       0
    FWSM-Primario# 
    FWSM-Primario#
    The configuration in the SW-6500
    SW-PRIMARY#sh run | in fire
    firewall multiple-vlan-interfaces
    firewall module 3 vlan-group 1,2
    firewall vlan-group 1  10,20,25,400,1709
    firewall vlan-group 2  1100,1101,1111,1112
    SW-SECUNDARY#sh run | in fire
    firewall multiple-vlan-interfaces
    firewall module 3 vlan-group 1,2
    firewall vlan-group 1  900,1709
    firewall vlan-group 2  1100,1101,1111,1112
    ip subnet-zero
    FWSM-Primario(config)# sh run
    : Saved
    FWSM Version 4.1(5) <system>
    resource acl-partition 12
    hostname FWSM-Primario
    hostname secondary FWSM-Secundario
    domain-name cisco.com
    enable password 8Ry2YjIyt7RRXU24 encrypted
    interface Vlan10
    interface Vlan29
    shutdown
    interface Vlan400
    interface Vlan1100
    description LAN Failover Interface
    interface Vlan1101
    description STATE Failover Interface
    interface Vlan1111
    description FWSW_7200_GoB_Fija
    interface Vlan1112
    description FWSW_7200_GoB_BA
    interface Vlan1709
    passwd 2KFQnbNIdI.2KYOU encrypted
    class default
      limit-resource IPSec 5
      limit-resource Mac-addresses 65535
      limit-resource ASDM 5
      limit-resource SSH 5
      limit-resource Telnet 5
      limit-resource All 0
    ftp mode passive
    pager lines 24
    failover
    failover lan unit primary
    failover lan interface FAILLINK Vlan1100
    failover replication http
    failover link STATELINK Vlan1101
    failover interface ip FAILLINK 10.115.30.17 255.255.255.252 standby 10.115.30.18
    failover interface ip STATELINK 10.115.30.21 255.255.255.252 standby 10.115.30.22
    failover group 1
      preempt
      replication http
    no asdm history enable
    arp timeout 14400
    console timeout 0
    admin-context admin
    context admin
      allocate-interface Vlan10
      allocate-interface Vlan29
      config-url disk:/admin.cfg
    context GESTION-WAS
      allocate-interface Vlan1709
      allocate-interface Vlan400
      config-url disk:/GESTION-WAS
    context FW-GoB-Fija
      allocate-interface Vlan1111
      allocate-interface Vlan1112
      config-url disk:/FW-GoB-Fija.cfg
      join-failover-group 1
    prompt hostname context
    Cryptochecksum:8b5fabc676745cfbafd6569c623a98b1
    : end
    SECUNDARY FIREWALL.
    FWSM# sh run
    : Saved
    FWSM Version 4.1(5) <system>
    resource acl-partition 12
    hostname FWSM
    domain-name cisco.com
    enable password S13FcA2URRiGrTIN encrypted
    interface Vlan100
    shutdown
    interface Vlan900
    interface Vlan1100
    description LAN Failover Interface
    interface Vlan1101
    description STATE Failover Interface
    interface Vlan1111
    interface Vlan1112
    interface Vlan1709
    passwd 2KFQnbNIdI.2KYOU encrypted
    class default
      limit-resource IPSec 5
      limit-resource Mac-addresses 65535
      limit-resource ASDM 5
      limit-resource SSH 5
      limit-resource Telnet 5
      limit-resource All 0
    ftp mode passive
    pager lines 24
    no failover
    failover lan unit secondary
    failover lan interface FAILLINK Vlan1100
    failover replication http
    failover link STATELINK Vlan1101
    failover interface ip FAILLINK 10.115.30.17 255.255.255.252 standby 10.115.30.18
    failover interface ip STATELINK 10.115.30.21 255.255.255.252 standby 10.115.30.22
    failover group 1
      preempt
      replication http
    no asdm history enable
    arp timeout 14400
    console timeout 0
    admin-context PCBA-NAT
    context PCBA-NAT
      allocate-interface Vlan1709
      allocate-interface Vlan900
      config-url disk:/PCBA-NAT
    context FW-GoB-Fija
      allocate-interface Vlan1111
      allocate-interface Vlan1112
      config-url disk:/FW-GoB-Fija
      join-failover-group 1
    prompt hostname context
    Cryptochecksum:c7529707b6d10d02c296a57253a925b2
    : end
    FWSM#
    I will appreciate your comments because this is important; the FWSM supports 3 contexts by default.
    Regards,
    Robert Soto.

    Hi Robert,
    Unfortunately no, this is not possible.
    Since you enable failover at the system level, all contexts will participate in failover and there is no way to change this.
    Additionally, both firewalls in the failover pair must have identical licenses, VLANs, and software versions in order for failover to work properly.
    -Mike
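    As a concrete sketch of that requirement (assuming the VLAN set shown on SW-PRIMARY above is the intended one), the secondary 6500 would need to pass the same VLANs to its FWSM before failover will stay enabled:
     SW-SECUNDARY(config)# firewall vlan-group 1  10,20,25,400,1709
     SW-SECUNDARY(config)# firewall vlan-group 2  1100,1101,1111,1112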

  • RAC failover time problem!

    I tried TAF (Transparent Application Failover) on RAC (9.0.1.3 and 9.2.0.1) and I have the same problem. When I play with "shutdown abort", the failover is fast (about 5-7 sec). When I turn off the node, the failover works fine, but it takes too much time (about 3 minutes). Is there any parameter (a TCP/IP or Oracle Net timeout, or a keepalive parameter) that helps?
    Thanks: Robert Gasz

    Can you confirm that you are able to set up RAC with 9.2.0 on Linux?
    Did you use the files downloadable from technet?
    I have problems joining the cluster from the second node (either one) when
    the first is up (either one).
    I didn't have this problem with the 9.0.1.x version.
    What release of Linux are you using?
    Are you using raw devices or lvm or what for your raw partitions?
    Thanks in advance.
    Bye,
    Gianluca

  • FWSM Failover - Possible with different hardware versions?

    Hi, I need to replace a FWSM module currently running as the primary unit in a failover configuration installed in two 6509s. The replacement FWSM module is a newer hardware version than the current module it is to replace. Obviously I will ensure the same IOS and licenses are installed on the new module but will having a difference in the hardware versions affect the failover configuration?
    The faulty module being replaced has the following hardware config:
    HW 3.0
    FW 7.2(1)
    The replacement module has the following config:
    HW 4.2
    FW 7.2(1)
    Thanks in advance for any help..

    Daniel, this is a good question for TAC. I do not see any documentation on the FWSM requiring the same hardware version; failover requires the same code, and you are correct on that one. I don't think hardware version differences should affect failover, but I would suggest having it confirmed by TAC.
    Jorge

  • HA nfs failover time? [SC3.1 2005Q4]

    Just built a test cluster to play with for a project (a v210, a v240 and a 3310). All appears to be working fine and we have a couple of NFS services running on it. One question however :-)
    How long should it take to fail over a simple NFS resource group? It's currently taking something like 1 min 45 secs to fail over, and the scswitch command doesn't return for over 4 min 30 secs. Is that normal? (It probably is; I just thought NFS would migrate faster than this for some reason.)
    Also, why does the scswitch command take so much longer to return? The service has failed over and started fine, yet it still takes a couple more minutes to return a prompt. Is it waiting for successful probes or something (which I guess makes sense...)?
    cheers,
    Darren

    Failing over from one machine (dev-v210) to the other (dev-v240):
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg1 state on node dev-v210 change to RG_PENDING_OFFLINE
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_monitor_stop> for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_monitor_stop> for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, timeout <90> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_monitor_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_monitor_stop> completed successfully for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, time used: 0% of timeout <90 seconds>
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_monitor_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_svc_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 584207 daemon.notice] Stopping nfsd and mountd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:32:00 dev-v210 nfssrv: [ID 624069 kern.notice] NOTICE: nfs_server: server is now quiesced; NFSv4 state has been preserved
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_svc_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_STOPPED
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_stop> for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, timeout <1800> seconds
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hastorageplus-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hastorageplus-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_stop> completed successfully for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, time used: 0% of timeout <1800 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_STOPPED
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hafoip-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hafoip-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_stop> for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 129.012.020.137:0, remote = 000.000.000.000:0, start = -2, end = 6
    Aug 1 11:32:00 dev-v210 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hafoip-res status on node dev-v210 change to R_FM_OFFLINE
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hafoip-res status msg on node dev-v210 change to <LogicalHostname offline.>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_stop> completed successfully for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_OFFLINE
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_POSTNET_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_postnet_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 584207 daemon.notice] Stopping lockd and statd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:32:01 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:32:01 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:33:51 dev-v210 nfssrv: [ID 760318 kern.notice] NOTICE: nfs_server: server was previously quiesced; existing NFSv4 state will be re-used
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-res status on node dev-v210 change to R_FM_OFFLINE
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-res status msg on node dev-v210 change to <Completed successfully.>
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_postnet_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 36% of timeout <300 seconds>
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_OFFLINE
    The delay seems to come with "Starting NFS daemon /usr/lib/nfs/statd." It appears to stop it and then start it again, and the start takes a couple of minutes.
    When the other node starts it up again we see a similar thing: starting statd takes a couple of minutes.
    Other than that it works fine - it feels like statd blocks on some sort of timeout?
    Would be good to get this failing over faster if possible!
    Uname reports "SunOS dev-v210 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Fire-V210". Not using veritas VM at all - this is all SVM on these machines.
    Darren

  • FWSM failover

    Guys,
    I have a doubt about failover with the FWSM.
    I have 2 Cisco 6500s with FWSM boards. They work as Active (Primary)/Standby (Secondary).
    Recently I had a problem with the Active unit, and the Standby changed to Active, OK.
    When the Primary returned, I noticed that the Secondary FWSM configuration had the line "no failover".
    I didn't understand why the Secondary changed this line, because before the problem this line was "failover".
    So I had to change this line back to "failover" and then everything normalized.
    Does anyone know why the Secondary FWSM changed the line from failover to no failover? Is this normal? Can I configure it so it doesn't change?
    Thank you!
    Anderson.

    Hi Anderson,
    The most common cause of this is if you have a different set of VLANs passed to the FWSMs. Check the output of 'show run | i firewall' on both 6500s and make sure the output matches exactly on both sides.
    -Mike
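    For example (the VLAN numbers below are illustrative only), the check passes when both chassis return identical lines:
     6500-A# show run | i firewall
     firewall module 3 vlan-group 1,2
     firewall vlan-group 1  10,20,1709
     firewall vlan-group 2  1100,1101
     6500-B# show run | i firewall
     firewall module 3 vlan-group 1,2
     firewall vlan-group 1  10,20,1709
     firewall vlan-group 2  1100,1101
    If the VLAN sets differ, the standby can detect the mismatch and disable failover on itself, which would explain the "no failover" line you found.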
