6509 Sup swap failover time

Hi,
I have a Catalyst 6509 (MLS) with Sup2/MSFC2, running either 12.1(23)E1 or 12.1(26)E1.
If I do a supervisor swap using "redundancy force-switchover", all line card interfaces go down, OSPF goes down as well, and then everything comes back up, which creates an outage for me.
Can anybody tell me whether this is expected behaviour? And if so, what is the takeover time, including bringing the interfaces and all routing processes back up?
Rgds
Chintan

Hi Chintan
I think the best way to avoid this is to configure NSF with SSO supervisor engine redundancy:
http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_guide_chapter09186a008027e4cd.html
Cisco NSF works with SSO to minimize the amount of time a network is unavailable to its users following a switchover while continuing to forward IP packets. The main purpose of NSF is to continue forwarding IP packets following a supervisor engine switchover.
Cisco NSF is supported by the BGP, OSPF, and IS-IS protocols for routing and is supported by Cisco Express Forwarding (CEF) for forwarding. The routing protocols have been enhanced with NSF-capability and awareness, which means that routers running these protocols can detect a switchover and take the necessary actions to continue forwarding network traffic and to recover route information from the peer devices.
But you will need a 12.2 IOS release to run this feature.
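For example, a minimal configuration sketch (illustrative only; exact command availability depends on your 12.2 release and supervisor hardware):

redundancy
 mode sso
!
router ospf 1
 nsf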
Hope this helps

Similar Messages

  • Can't access web site, super slow and times out!

    Hi all,
    I am using Server 2012 R2 with Hyper-V on an HP DL360 G6 server.
    The host OS has NIC1 connected to the VLAN1 switch (also on the Cisco SG300 switch).
    Another NIC, NIC2, connects to a Cisco SG300-28 switch (running in L3 mode), and that Cisco switch then connects to a SonicWALL NSA 2400.
    I created VLAN31 on the SonicWALL NSA 2400 and added it to the Cisco SG300 as a trunk port connecting both...
    The other ports on the SG300 are untouched, so the default running mode is trunk mode, and one of those ports connects NIC2 to the server...
    OK: if a Hyper-V VM does not have a VLAN enabled in Hyper-V Manager on 2012 R2, it can take DHCP from the SonicWALL NSA 2400 with no problems, can see other LAN (VLAN1) machines fine in both directions, and can get to the Internet too.
    But if I set VLAN31 in Hyper-V Manager on a VM, it can still take DHCP and get an IP from the SonicWALL for VLAN31, including DNS, gateway, etc...
    But... it can only ping the Internet. For example, ping www.yahoo.com works, but opening IE and entering www.yahoo.com is super, super slow and times out. Running CentOS 6.5 Linux, "yum update" does not work at all: timeouts, 0.2 KB/s, etc. Yet it can ping www.yahoo.com with no problems...
    And if Windows runs tracert www.yahoo.com, it takes over 10 s to see the next hop...
    I have double-checked that the SonicWALL NSA 2400 firewall policy allows all VLAN and LAN to WAN, and VLAN to VLAN too.
    What is wrong with the Hyper-V VLAN? Have I set the wrong switch port mode (access/trunk)?
    I am now out of ideas on how to troubleshoot this.
    Thank you.
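    For reference, a hedged PowerShell sketch of how the VLAN 31 tag is typically assigned to the VM's virtual NIC on Server 2012 R2 (the VM name "VM31" is a placeholder; this mirrors the setup described above rather than a verified fix). The physical SG300 port facing the Hyper-V external switch then needs to be a trunk that carries VLAN 31 alongside the untagged default VLAN:

    # Tag the VM's virtual NIC with VLAN 31 in access mode,
    # matching the VLAN ID set in Hyper-V Manager.
    Set-VMNetworkAdapterVlan -VMName "VM31" -Access -VlanId 31

    # Verify which VLAN mode/ID each VM network adapter currently has.
    Get-VMNetworkAdapterVlan -VMName "VM31"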

    Microsoft Windows [Version 6.3.9600]
    (c) 2013 Microsoft Corporation. All rights reserved.
    C:\Users\Administrator>ipconfig /all
    Windows IP Configuration
       Host Name . . . . . . . . . . . . : JimmyChan-G6-01
       Primary Dns Suffix  . . . . . . . :
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
    Ethernet adapter Ethernet 5:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Intel(R) PRO/1000 PT Dual Port Network Co
    nnection #2
       Physical Address. . . . . . . . . : 00-1B-78-5A-B2-81
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
    Ethernet adapter Ethernet 4:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : HP NC382T PCIe DP Multifunction Gigabit S
    erver Adapter #52
       Physical Address. . . . . . . . . : 00-26-55-23-1F-E2
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
    Ethernet adapter Ethernet 3:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : HP NC382i DP Multifunction Gigabit Server
     Adapter #49
       Physical Address. . . . . . . . . : 00-26-55-1A-26-F8
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::1de6:509d:b011:775a%14(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.218.157(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.218.1
       DHCPv6 IAID . . . . . . . . . . . : 436217429
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-D3-8A-8D-00-26-55-23-1F-E0
       DNS Servers . . . . . . . . . . . : 8.8.8.8
       NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Ethernet 2:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Intel(R) PRO/1000 PT Dual Port Network Co
    nnection
       Physical Address. . . . . . . . . : 00-1B-78-5A-B2-80
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::acb8:319f:2c75:f829%13(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.101.11(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 218110840
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-D3-8A-8D-00-26-55-23-1F-E0
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Ethernet:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : HP NC382T PCIe DP Multifunction Gigabit S
    erver Adapter #51
       Physical Address. . . . . . . . . : 00-26-55-23-1F-E0
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
    Tunnel adapter isatap.{A9915F4B-84C6-4168-8E25-909FD2105BD9}:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft ISATAP Adapter
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
    Tunnel adapter isatap.{E893BCF0-EA57-4E6D-8406-1B6ECF27A1C1}:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
    C:\Users\Administrator>
    Above is the Server 2012 R2 host (parent partition).
    But... there is actually still one more NIC, Ethernet 6, there. It is also an NC382i, a built-in HP DL360 G6 NIC. This card connects the server to the Cisco SG300-28 switch, and I made this card the one for the Hyper-V switch (external)...
    I don't know why running ipconfig /all does not show this card above...
    If you need a screen capture, let me know...
    THX

  • Failover time.

    hi,
    I have got a problem with failover time.
    My environment:
    One cluster: two WebLogic Server 5.1 SP4 instances running on Sun Solaris. The cluster uses in-memory replication.
    The web server is Apache running on Sun Solaris. The Apache bridge is set up, and weblogic.conf reads:
    WeblogicCluster 10.2.2.20:7001,10.2.2.21:7001
    ConnectTimeoutSecs 10
    ConnectRetrySecs 5
    StatPath true
    HungServerRecoverSecs 30:100:120
    Everything starts fine. Both WebLogic servers say they have joined the cluster... and the application works fine. When one WebLogic server is forced to shut down, failover takes place fine.
    The problem occurs when the machine running the WebLogic server with the first entry in the weblogic.conf file (10.2.2.20) is unplugged from the network: failover then takes about three minutes.
    Could someone help me reduce this time? Is there any property that has to be set in the weblogic.conf or weblogic.properties file?
    Thanks in advance
    Arun
              

    HungServerRecoverSecs:
    This setting takes care of hung or unresponsive servers in the cluster. The plug-in waits HungServerRecoverSecs for the server to respond and then declares that server dead, failing over to the next server. The minimum value for this setting is 10 and the maximum is 600; the default is 300. It should be set to a very large value: if it is less than the time your servlets take to process, you will see unexpected results.
    Try reducing HungServerRecoverSecs. But remember that if your application processing takes a long time, you will be in trouble, since the plug-in will keep failing over to other servers in the cluster and you will be thrashing the servers.
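    For example, a hedged weblogic.conf sketch (the HungServerRecoverSecs value below is only an illustration; pick something comfortably larger than your longest servlet processing time):

    WeblogicCluster 10.2.2.20:7001,10.2.2.21:7001
    ConnectTimeoutSecs 10
    ConnectRetrySecs 5
    StatPath true
    HungServerRecoverSecs 60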
    - Prasad
              

  • What are typical failover times for application X on Sun Cluster

    Our company does not yet have any hands-on experience with clustering anything on Solaris, although we do with Veritas and Microsoft. My experience with MS is that it is as close to seamless (instantaneous) as possible. Veritas clustering takes a little bit longer to activate the standbys. A new application we are bringing in house soon runs on Sun Cluster (it is some BEA Tuxedo/WebLogic/Oracle monster). They claim the time it takes to flip from the active node to the standby node is ~30 minutes. This seems a bit insane to us since they are calling this "HA". Is this type of failover time typical in Sun land? Thanks for any numbers or references.

    This is a hard question to answer because it depends on the cluster agent/application.
    On one hand you may have a simple Sun Cluster application that fails over in seconds because it has to do a limited amount of work (umount here, mount there, plumb network interface, etc) to actually failover.
    On the other hand these operations may, depending on the application, take longer than another application due to the very nature of that application.
    An Apache web server failover may take 10-15 seconds but an Oracle failover may take longer. There are many variables that control what happens from the time that a node failure is detected to the time that an application appears on another cluster node.
    If the failover time is 30 minutes I would ask your vendor why that is exactly.
    Not in a confrontational way, but as an "I don't get how this is high availability" question, since the assumption is that up to 30 minutes could elapse from the time your application goes down to it coming back on another node.
    A better solution might be a different application vendor (I know, I know) or a scalable application that can run on more than one cluster node at a time.
    The logic with the scalable approach is that if a failover takes 30 minutes or so to complete, failover becomes an expensive operation, so I would rather have my application use multiple nodes at once than eat a 30-minute failover if one node dies in a two-node cluster:
    serverA > 30-minute failover > serverB
    seems less desirable than
    serverA, serverB, serverC, etc. concurrently providing access to the application, so that failover only happens when we get down to a handful of nodes.
    Either one is probably more desirable than having an application outage(?)

  • Would Super Duper!/Time Machine function w/ Little Disk - MacBook 2.0

    RE: Would Super Duper!/Time Machine function w/ Little Disk - MacBook 2.0
    Thanks to all who read on...
    The situation at hand is this...
    I have two LaCie Little Disk drives, 120 GB and 250 GB, that I once used to make FireWire clones from an iBook G4 to back up my information, applications, etc.
    1.0_How would I translate such operations to a substituted MacBook 2.0 Aluminum?
    1.1_It seems the MacBook doesn't have any FireWire ports, nor is there any reliable source stating that a T-100-to-Firewire adapter would work with OS 10.5.6...?
    THAT being said, the USB ports do recognize the built-in LaCie Hi-Speed 2.0 extractable USB connector. (See topic: Little disk on a hub with Macbook?)
    I have given SuperDuper! a try at making a Leopard (10.5.6) clone over this Hi-Speed USB 2.0 connection, and it seems to have made the backup, although I haven't actually used this clone yet.
    2.0_How can I take advantage of Leopard's Time Machine instead of SuperDuper!?
    2.1_Whilst still possibly incorporating the USB/FireWire LaCie Little Disks? (As once sold on THIS very site)

    I'll agree with the previous two posts. With TimeMachine, you always have your most current and previous versions of data backed up. With SuperDuper (or CarbonCopyCloner), your data is only as current as the last time you ran a backup. The major benefit of SuperDuper (or CCC) is that you can create a bootable backup. If you need to restore your system from a TimeMachine backup, you'll need to start your system from your install DVD, which will allow you to restore your system from your TimeMachine backup. I think the biggest benefit to having an external clone is in case your internal drive fails. Since you can't boot from a TimeMachine backup, if you have a hard drive failure, you're out of luck until you get that drive replaced. If you have an external clone, you can simply boot up from that, which will allow you to continue working until you can get your internal drive replaced. As has been mentioned, both have their benefits, and using both to complement each other is your best option. Since TimeMachine is part of OS X and both SuperDuper and CarbonCopyCloner are free for full clones (actually, CCC is completely free for all functionality now), there's really no reason not to use both.

  • VIP failover time

    I have configured a critical service (ap-kal-pinglist) for VIP redundant failover. The default freq, maxfail and retry freq are 5, 3, 5, so I think the failover time should be 5 + 5*3*2 = 35 s. But the virtual router's state changed from "master" to "backup" around 5 seconds after the connection was lost.
    Can anyone help me understand this?

    Service sw1-up-down connects to interface e2, going down in 15 sec.
    Service sw2-up-down connects to interface e3, going down in 4 sec?
    JAN 14 02:38:41 5/1 3857 NETMAN-2: Generic:LINK DOWN for e2
    JAN 14 02:39:57 5/1 3858 NETMAN-2: Generic:LINK DOWN for e3
    JAN 14 02:39:57 5/1 3859 VRRP-0: VrrpTx: Failed on Ipv4FindInterface
    JAN 14 02:40:11 5/1 3860 NETMAN-2: Enterprise:Service Transition:sw2-up-down -> down
    JAN 14 02:40:11 5/1 3861 NETMAN-2: Enterprise:Service Transition:sw1-up-down -> down

  • Failover time using BFD

    Hi Champs,
    we have configured BFD in a multihoming scenario with the BGP routing protocol. The timer configuration is "bfd interval 100 min_rx 100 multiplier 5".
    Failover from the first ISP to the second ISP takes 30 sec, and failover from the second ISP back to the first takes more than 1 min. Can you suggest a reason for the different failover times, and how can I get an equal failover time from both ISPs? How is convergence time calculated in a BGP + BFD scenario?
    Regards
    V

    Vicky,
    A simple topology diagram would help us better understand the scenario. Do you have both ISPs terminated on the same router or on different routers?
    How many prefixes are you learning? The full Internet table or a few prefixes?
    Accordingly, you can consider BGP PIC or best-external to speed up the convergence.
    -Nagendra
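    For context, a hedged IOS sketch of how these pieces typically fit together (the interface, addresses and AS numbers are placeholders). With "bfd interval 100 min_rx 100 multiplier 5", BFD detects the link or peer failure in roughly 100 ms x 5 = 500 ms; the remaining seconds are BGP withdrawing the failed ISP's prefixes, recomputing best paths and reprogramming the FIB, which is why the prefix count and features like BGP PIC / best-external matter:

    interface GigabitEthernet0/0
     description Link to ISP1
     bfd interval 100 min_rx 100 multiplier 5
    !
    router bgp 64512
     neighbor 203.0.113.1 remote-as 64513
     neighbor 203.0.113.1 fall-over bfd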

  • 2540 / RDAC path failover time

    Hi,
    I have a RHEL 5.3 server with two single port HBAs. These connect to a Brocade 300 switch and are zoned to two controllers on a 2540. Each HBA is zoned to see each controller. RDAC is used as the multipathing driver.
    When testing the solution, if I pull the cable from the active path between the HBA and the switch, it takes 60 seconds before the path fails over to the second HBA. No controller failover is taking place on the array - the path already exists through the brocade between the preferred array controller and the second HBA. After 60 seconds disk I/O continues to the original controller.
    Is this normal? Is there a way of reducing the failover time? I had a look at the /etc/mpp.conf variables, but there is nothing obvious there that would be causing this delay.
    Thanks

    Thanks Hugh,
    I forgot to mention that we were using QLogic HBAs, so our issue was a bit different...
    To resolve our problem: since we had 2x2FC HBA cards in each server, we needed to configure zoning on the Brocade switch to ensure that each HBA port only saw one of the two array controllers (previously both controllers were visible to each HBA port, which was breaking some RDAC rule). Also, we upgraded the QLogic drivers using qlinstall -i before installing RDAC (the QLogic drivers that come with RHEL 5.3 are pretty old, it seems).
    Anyway, after these changes path failovers were working as expected and our timeout value of 60 sec for the Oracle OCFS2 cluster was not exceeded.
    We actually ended up having to increase the OCFS2 timeout from 60 to 120 seconds because another test case failed: it was taking more than 60 sec for a controller to fail over (simulated by placing the active controller offline from the Service Advisor). We are not sure if this time is expected or not... anyway, we have a service request open for this.
    Thanks again,
    Trev

  • Optimize rac failover time?

    I have a 2-node RAC and failover is taking 4 minutes. Please advise on some tips/documents/links that show how to optimize the RAC failover time.
    [email protected]

    Hi
    Could you provide some more information about what it is you are trying to achieve? I assume you are talking about the time it takes for clients to start connecting to the available instance on the second node; could you clarify this?
    There are SQL*Net parameters that can be set; you can also make shadow connections with the PRECONNECT method in the FAILOVER_MODE section of your tnsnames.ora on the clients.
    Have you set both of your hosts as preferred in the service configuration on the RAC cluster? The impact of a failure will be less, as approximately half of your connections will be unaffected when an instance fails.
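    For illustration, a hedged tnsnames.ora sketch of TAF with PRECONNECT (host names, service name and aliases are placeholders; the BACKUP alias has to exist as its own entry pointing at the other instance):

    SALES_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = sales)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = PRECONNECT)
            (BACKUP = SALES_PRECONNECT))))

    SALES_PRECONNECT =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = sales)))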
    Cheers
    Peter

  • FWSM Failover times

    Hi Folks
    I have two 6509s with FWSMs in them. They are configured in active/standby failover with default values.
    The 6500s are OSPF routers as well. Everything is redundant: HSRP, FWSM, etc.
    When we reboot one of the 6500s, it takes approximately 45 seconds for the standby FWSM to become active.
    Is this normal? Can the time be shortened?
    any comments appreciated.

    Hi,
    The initial 15-second detection time can be reduced to 3 seconds by tuning the failover polltime and holdtime as follows:
    "failover polltime unit 1 holdtime 3"
    Also keep in mind that after a switchover the new active unit has to establish neighbor relationships with its neighboring routers; the standby never participates in the OSPF process, so in short the new active unit has to re-establish adjacencies.
    Hope that helps.
    Thanks,
    Varun

  • Fwsm failover times in real crash

    Hi,
    I have two Catalyst 6500s in VSS and two FWSM service modules.
    How fast will the FWSM switch over to the backup firewall after the active firewall crashes or loses power?
    Sent from Cisco Technical Support iPad App

    Hi,
    The initial 15-second detection time can be reduced to 3 seconds by tuning the failover polltime and holdtime as follows:
    "failover polltime unit 1 holdtime 3"
    Also keep in mind that after a switchover the new active unit has to establish neighbor relationships with its neighboring routers; the standby never participates in the OSPF process, so in short the new active unit has to re-establish adjacencies.
    Hope that helps.
    Thanks,
    Varun

  • RAC failover time problem!

    I am trying TAF (Transparent Application Failover) on RAC (9.0.1.3 and 9.2.0.1) and I have the same problem on both. When I play with "shutdown abort", the failover is fast (about 5-7 sec). When I power off the node, the failover still works, but it takes too much time (about 3 minutes). Is there any parameter (TCP/IP or Oracle Net timeout, or a keepalive parameter) that helps?
    Thanks: Robert Gasz
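    One angle that often comes up for the powered-off-node case (a hedged sketch, not a verified fix for this exact setup): the client usually sits on dead TCP connections until the OS gives up on them, so shortening the OS TCP keepalive on the client and enabling keepalives on the Oracle Net connection (the (ENABLE=BROKEN) clause in the tnsnames.ora DESCRIPTION) can reduce the detection time. The value below is an illustration only:

    # Assumption: the client runs Linux. Probe idle connections after 60 s
    # instead of the default 7200 s, so sockets to the powered-off node
    # are declared dead sooner.
    sysctl -w net.ipv4.tcp_keepalive_time=60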

    Can you confirm that you are able to set up RAC with 9.2.0 on Linux?
    Did you use the files downloadable from technet?
    I have problems joining the cluster from the second node (either one) when the first is up (either one).
    I didn't have this problem with the 9.0.1.x version.
    What release of Linux are you using?
    Are you using raw devices, LVM, or something else for your raw partitions?
    Thanks in advance.
    Bye,
    Gianluca

  • HA nfs failover time? [SC3.1 2005Q4]

    Just built a test cluster to play with for a project (a V210, a V240 and a 3310). All appears to be working fine and we have a couple of NFS services running on it. One question however :-)
    How long should it take to fail over a simple NFS resource group? It's currently taking something like 1 min 45 sec to fail over, and the scswitch command doesn't return for over 4 min 30 sec. Is that normal? (It probably is; I just thought NFS would migrate faster than this for some reason :))
    Also, why does the scswitch command take so much longer to return? The service has failed over and started fine, yet it still takes a couple more minutes to return a prompt. Is it waiting for successful probes or something (which I guess makes sense...)?
    cheers,
    Darren
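    (For reference, a hedged sketch of the generic Sun Cluster 3.1 switchover command being timed here; the resource group and target node names are taken from the log below.)

    scswitch -z -g nfs-rg1 -h dev-v240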

    Failing over from one machine (dev-v210) to the other (dev-v240):
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg1 state on node dev-v210 change to RG_PENDING_OFFLINE
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_MON_STOPPING
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_monitor_stop> for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_monitor_stop> for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, timeout <90> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_monitor_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_monitor_stop> completed successfully for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, time used: 0% of timeout <90 seconds>
    Aug 1 11:31:59 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_monitor_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_ONLINE_UNMON
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_svc_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 584207 daemon.notice] Stopping nfsd and mountd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_svc_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:32:00 dev-v210 nfssrv: [ID 624069 kern.notice] NOTICE: nfs_server: server is now quiesced; NFSv4 state has been preserved
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_svc_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_STOPPED
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_stop> for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, timeout <1800> seconds
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hastorageplus-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hastorageplus-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_stop> completed successfully for resource <nfs1-hastorageplus-res>, resource group <nfs-rg1>, time used: 0% of timeout <1800 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hastorageplus-res state on node dev-v210 change to R_STOPPED
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hafoip-res status on node dev-v210 change to R_FM_UNKNOWN
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hafoip-res status msg on node dev-v210 change to <Stopping>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_stop> for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 129.012.020.137:0, remote = 000.000.000.000:0, start = -2, end = 6
    Aug 1 11:32:00 dev-v210 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-hafoip-res status on node dev-v210 change to R_FM_OFFLINE
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-hafoip-res status msg on node dev-v210 change to <LogicalHostname offline.>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_stop> completed successfully for resource <nfs1-hafoip-res>, resource group <nfs-rg1>, time used: 0% of timeout <300 seconds>
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-hafoip-res state on node dev-v210 change to R_OFFLINE
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_POSTNET_STOPPING
    Aug 1 11:32:00 dev-v210 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_postnet_stop> for resource <nfs1-res>, resource group <nfs-rg1>, timeout <300> seconds
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 584207 daemon.notice] Stopping lockd and statd.
    Aug 1 11:32:00 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:32:01 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 948424 daemon.notice] Stopping NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:32:01 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/statd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/lockd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/mountd.
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 530938 daemon.notice] Starting NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:33:51 dev-v210 nfssrv: [ID 760318 kern.notice] NOTICE: nfs_server: server was previously quiesced; existing NFSv4 state will be re-used
    Aug 1 11:33:51 dev-v210 SC[SUNW.nfs:3.1,nfs-rg1,nfs1-res,nfs_postnet_stop]: [ID 906922 daemon.notice] Started NFS daemon /usr/lib/nfs/nfsd.
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nfs1-res status on node dev-v210 change to R_FM_OFFLINE
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nfs1-res status msg on node dev-v210 change to <Completed successfully.>
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <nfs_postnet_stop> completed successfully for resource <nfs1-res>, resource group <nfs-rg1>, time used: 36% of timeout <300 seconds>
    Aug 1 11:33:51 dev-v210 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nfs1-res state on node dev-v210 change to R_OFFLINE
    The delay seems to come with "Starting NFS daemon /usr/lib/nfs/statd." It appears to stop it and then start it again, and the starting takes a couple of minutes.
    When the other node starts it up again, we see a similar thing: starting statd takes a couple of minutes.
    Other than that it works fine. It feels like statd is blocking on some sort of timeout...?
    Would be good to get this failing over faster if possible!
    Uname reports "SunOS dev-v210 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Fire-V210". Not using veritas VM at all - this is all SVM on these machines.
    Darren

  • RAC Active Active cluster failover time

    Hi,
    In a RAC active-active cluster, how long does it take to fail over to the surviving instance?
    As per the docs, I understand that rollback is done just for the SELECT statements and not others. Is that correct?

    RAC is an active-active cluster situation by design.
    A failover from a session from a stopped/crashed instance to a surviving one can be implemented in several ways.
    The most common way to do failover is using TAF, Transparent Application Failover, which is implemented on the client (using settings in the tnsnames.ora file)
    When an instance of a RAC cluster crashes, the surviving instances (actually the elected master instance) detect that the instance has crashed and recover it using its online redo log files. Transactions that were current in that instance are rolled back. The time this takes depends on the activity in the database, and thus on the amount of redo to recover.

  • Customize SUP Reboot countdown time for a particular collection of computers

    SCCM 2007 SP2 R3. We deploy to a clinical environment, and while many workstations can be updated and rebooted at any time (generally our maintenance windows run from 1800 to 0600, and we have a 60-minute reboot countdown set at the site server on the agent), we have a special subset of wireless computers on roll-around medicine carts that need to be treated differently, including a 120-minute countdown.
    Is there a way to set the reboot countdown for SUP for the computers in these collections to a different time than the default, or any way to do this outside of the site server controlling things via the agent?
    Thanks.

    First, note that there is no reboot countdown specific to just updates; there is just a single reboot countdown for any reboot initiated by ConfigMgr.
    I don't remember when they added it (R2 I think), but you can override this value on a collection by collection basis. Simply open the properties dialog of the desired collection and it's on one of the tabs (sorry, don't have a 2007 console in front of me
    to tell you the exact place).
    Jason | http://blog.configmgrftw.com
