VPC failover

We have two Nexus 5548UPs running Layer 2. I have set up vPC between both switches and have a few questions about how vPC failover works. Below are partial snippets of what I have configured for vPC. During testing I reload nx-1, and what I see is that the vPC peer-link goes down as expected, but all vPC port-channels go into a failed state on nx-2 until nx-1 is back online; once vPC re-forms and is functioning, everything looks good again. I then reload nx-2 and the vPC port-channels again go into a failed state. Am I missing something in my configuration?
NX-1
feature tacacs+
feature udld
feature lacp
feature vpc
feature fex
feature vtp
cfs eth distribute
no ip domain-lookup
vtp mode transparent
vpc domain 300
role priority 200
peer-keepalive destination 10.100.5.12 source 10.100.5.11
int po12
description NETAPPA
switchport mode trunk
vpc 12
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type edge trunk
int po13
description NETAPPB
switchport mode trunk
vpc 13
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type edge trunk
int po250
description nx-2
switchport mode trunk
vpc peer-link
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type network
int e1/27
description vpc peer link
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 250 mode active
int e1/28
description vpc peer link
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 250 mode active
int e1/1
description netappa:e1a
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 12 mode active
int e1/2
description netappb:e1a
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 13 mode active
NX-2
feature tacacs+
feature udld
feature lacp
feature vpc
feature fex
feature vtp
cfs eth distribute
no ip domain-lookup
vtp mode transparent
vpc domain 300
role priority 300
peer-keepalive destination 10.100.5.11 source 10.100.5.12
int po12
description NETAPPA
switchport mode trunk
vpc 12
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type edge trunk
int po13
description NETAPPB
switchport mode trunk
vpc 13
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type edge trunk
int po250
description nx-1
switchport mode trunk
vpc peer-link
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
spanning-tree port type network
int e1/27
description vpc peer link
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 250 mode active
int e1/28
description vpc peer link
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 250 mode active
int e1/1
description netappa:e1b
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 12 mode active
int e1/2
description netappb:e1b
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 5,6,7,8
channel-group 13 mode active

What NX-OS version are you running? You could be hitting this bug:
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtw76636
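If it is not that bug, it is worth confirming that the peer-keepalive stays up through the reload and that the surviving switch is allowed to keep its vPCs up. A minimal sketch of checks and optional knobs, assuming the keepalive rides mgmt0 in the management VRF (which is the default) and that your release supports delay restore and auto-recovery; the timer values are only illustrative:
show vpc
show vpc peer-keepalive
show vpc consistency-parameters global
vpc domain 300
  peer-keepalive destination 10.100.5.12 source 10.100.5.11 vrf management
  delay restore 150
  auto-recovery reload-delay 240
!--- delay restore holds vPC legs down briefly after a reload; auto-recovery lets a lone peer bring vPCs up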

Similar Messages

  • 5548P and 5548UP in same VPC pair?

    Can I create a vPC pair with one 5548P and one 5548UP? I don't have any 5548Ps in the lab to test with.
    Thanks, Jim

    The primary difference between the P and the UP is that the UP can do unified fabric on all ports, while the P can only do so on the expansion module. So I don't understand why you'd want to mix a P and a UP. Say you have eth1/20 configured as an FCoE port on the UP switch. If a vPC failover occurs, eth1/20, which is a fixed port on the P, won't be able to do FCoE, so you will have a service disruption.
    That being said, I haven't done vPC between a P and a UP so I don't know for sure. But I have a strong feeling it won't be supported for the above reason.

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I have been testing some failover scenarios with four Nexus 7000 switches, each with an M2 and an F2 card. Each Nexus has two supervisor modules.
    I have 3 VDCs: Admin, F2 and M2.
    All ports on the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPCs are connected on the M2 cards and configured in the M2 VDC.
    We have two Nexus switches representing each "site".
    In one site we have vPC domain "100".
    The vPC peer link is connected on ports E1/3 and E1/4 in port-channel 100.
    The peer-keepalive is configured to use the management ports. These are patched from both sups into our 3750s (this will eventually be on an out-of-band management switch).
    Please see the diagram.
    There are two vPCs, 1 & 2, connected at each site, which represent the virtual port-channels that connect back to a pair of 3750Xs (the Layer 2 switch icons in the diagram).
    There is also a third vPC that connects the four Nexus switches together (po172).
    We are stretching VLAN 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages from link failures, module failures, switch failures, sup failures, etc.
    ONLY the management VLANs (100, 101) are allowed on the port-channel between the 3750s, so VLAN 900 spanning tree shouldn't have to make this decision.
    We are only concerned about Layer 2 for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of that port-channel, as it is the peer link, and apparently tracking upstream interfaces plus the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these three interfaces are up? Or is line-protocol purely Layer 1, so that vPC isn't downing these member ports at Layer 2 when it sees a local vPC domain failure, which would make the track fail?
    I see most people monitoring upstream Layer 3 ports that connect back to a core. What about what we are doing: monitoring upstream (the 3750s) and downstream Layer 2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all three of these to be down, for example if the local M2 card failed, so that the keepalive would tell the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
    We saw minimal outages using this design. When reloading the M2 modules we usually lost 1-3 pings between the laptops in the different sites across the stretched VLAN. Obviously there were no outages when breaking any single link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The VLAN that you are testing will have a blocked link on the port-channel between the two 3750s if the root is on the Nexus vPC pair.
    Logically your topology is like this:
            Nexus pair
           /          \
    3750-1--------------3750-2
    Since you have this triangle setup, one of the links will be in a blocking state for any VLAN configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you need to track all the interfaces.
    JayaKrishna

  • Layer 3 peering over VPC+

    Hi, we are doing a customer deployment in which 2 x N7Ks are FabricPath-enabled and run vPC+ to all the devices that are dual-attached to them. We need to connect the ASAs to them, and the customer wants to do dynamic Layer 3 peering (not static routes).
    I have yet to do this in a lab environment, but will the ASAs see the 2 x N7Ks as two different routing peers (the same as if you connected them to a vPC domain)?
    What would be the best way to interconnect the ASAs with the N7Ks?

    I am afraid this is an unsupported design and may lead to traffic loss when packets need to be switched via the peer link between the two N7Ks.
    A simpler design would be to use Layer 2 links between the N7Ks and the ASA, with VLAN interfaces on both N7Ks, and then peer the ASA with both of them. Assuming you use two ASAs in active/standby, you still have redundancy if the single link to the active device goes down.
    Oh, one more thing: do some failover testing with the ASA and the dynamic routing protocol. If you use OSPF, get ready for a disappointing surprise.
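    Purely as an illustration of that Layer 2 design (the VLAN, addressing and OSPF process number below are made up, not taken from this thread), the N7K side could look roughly like this, with the ASA peering against the SVI on each switch:
    feature interface-vlan
    feature ospf
    vlan 10
      name ASA-PEERING
    interface Vlan10
      no shutdown
      ip address 10.1.10.2/24
      ip router ospf 1 area 0.0.0.0
    !--- Use 10.1.10.3/24 on the second N7K; the ASA inside interface would peer from 10.1.10.1
    router ospf 1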

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions I hope you can help me get answers to.
    The diagram below is the configuration we are looking to deploy; it is this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
    The blades inserted in the UCS chassis have Intel dual-port cards, so they do not support full failover.
    The questions I have are:
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for the peer-keepalive monitoring, so what is going to happen if the vPC breaks because:
         - one of the 6500s goes down
              - STP?
              - What is going to happen with the EtherChannels on the remaining 6500?
         - the management interface goes down for any other reason
              - which one is going to be the primary Nexus?
    Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
    Any help is appreciated.
    Devices
    · 2 Cisco Catalyst 6500s with two WS-SUP720-3B each (no VSS)
    · 2 Cisco Nexus 5010
    · 2 Cisco UCS 6120xp
    · 2 UCS chassis
         - 4 Cisco B200-M1 blades (2 per chassis)
              - Dual-port 10Gb Intel card (1 per blade)
    vPC Configuration on Nexus 5000
    TACSWN01:
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.10
    role priority 10
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    int ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts the interfaces in Po50
    int port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as peer-link for vPC
    int ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates the interfaces with Po51, connected to UCS6120xp-A
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 with Po51
    int ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates the interfaces with Po52, connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 with Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    int ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates the interfaces with Po61, connected to Cat6506-01
    int port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 with Po61
    int ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates the interfaces with Po62, connected to Cat6506-02
    int port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 with Po62
    TACSWN02:
    feature vpc
    vpc domain 5
    reload restore
    reload restore delay 300
    peer-keepalive destination 10.11.3.9
    role priority 20
    !--- Enables vPC, defines the vPC domain and the peer for keepalive
    int ethernet 1/9-10
    channel-group 50 mode active
    !--- Puts the interfaces in Po50
    int port-channel 50
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link
    !--- Po50 configured as peer-link for vPC
    int ethernet 1/17-18
    description UCS6120-A
    switchport mode trunk
    channel-group 51 mode active
    !--- Associates the interfaces with Po51, connected to UCS6120xp-A
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 with Po51
    int ethernet 1/19-20
    description UCS6120-B
    switchport mode trunk
    channel-group 52 mode active
    !--- Associates the interfaces with Po52, connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 with Po52
    !----- CONFIGURATION for connection to Catalyst 6506
    int ethernet 1/1-3
    description Cat6506-01
    switchport mode trunk
    channel-group 61 mode active
    !--- Associates the interfaces with Po61, connected to Cat6506-01
    int port-channel 61
    switchport mode trunk
    vpc 61
    !--- Associates vPC 61 with Po61
    int ethernet 1/4-6
    description Cat6506-02
    switchport mode trunk
    channel-group 62 mode active
    !--- Associates the interfaces with Po62, connected to Cat6506-02
    int port-channel 62
    switchport mode trunk
    vpc 62
    !--- Associates vPC 62 with Po62
    vPC Verification
    show vpc consistency-parameters
    !--- Shows the compatibility parameters
    show feature
    !--- Use it to verify that vpc and lacp features are enabled.
    show vpc brief
    !--- Displays information about vPC Domain
    EtherChannel configuration on the TAC 6500s
    TACSWC01:
    interface range GigabitEthernet2/38 - 43
    description TACSWN01 (Po61 vPC61)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 61 mode active
    TACSWC02:
    interface range GigabitEthernet2/38 - 43
    description TACSWN02 (Po62 vPC62)
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    no ip address
    channel-group 62 mode active

    ihernandez81,
    Between the c1-r1 & c1-r2 there are no L2 links, ditto with d6-s1 & d6-s2.  We did have a routed link just to allow orphan traffic.
    All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port-channels 203 & 204 carry the exact same VLANs.
    The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203 with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between two data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    If you got on any 5K you would see two port-channels, 203 & 204, going to each 6500; again, when one pair went to VSS, Po204 went away.
    I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack, how would you hook up a 3750 stack to two 6500s, and if you did, why would you run an L2 link between the 6500s?
    For us, using four 10G ports between the 6509s took ports that were too expensive (we had 6704s), so we used the 5Ks.
    Our blocking link was on one of the links between site 1 & site 2. If we did not have WAN connectivity there would have been no blocking or loops.
    Caution: if you go with 7Ks, beware of the inability to do L2/L3 over vPCs.
    Better?
    One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity while migrating things, they tend to work, unless they really break.

  • Link failover on Nexus 5K

    We have a Dell R730 server hooked up to two N5Ks and bundled using LACP (vPC); however, when one of the NICs on the server is disabled, it takes about 30 seconds to fail over to the second link in the bundle. Any suggestions as to what the reason might be, or what could be done to fix this? Other servers connected in the same manner have no issues and fail over sub-second.
    Thanks

    Thanks Stephen. From that, it sounds to me that if I enable jumbo frames system-wide on the 5K, only devices that are configured for them on their end will use jumbo frames, and devices connected to the switch that are configured for MTU 1500 will still use 1500.

  • Nexus 7k VPC connecting to Juniper MX series?

    Hi.
    I'm in a situation where I need to link my Nexus 7K core (two 7010s running multiple vPCs) to a pair of Juniper MX-240 switch/routers.
    Both platforms are capable of a form of vPC: Juniper calls it MC-LAG (Multi-chassis Link Aggregation), and Cisco obviously calls it vPC.
    What I need to find out is whether a vPC across the two-Nexus system will be compatible with an MC-LAG on the Juniper setup. I have vPCs working fine to other Cisco devices.
    As far as I can tell from reading the standards I can find, it *should* work, but since I'm working in production I'm very wary of making a change that would cause an issue here.
    Has anyone done this or come across this before? Does anyone have any insight into whether it's possible?
    Thanks.

    Answering my own question for reference of anyone else who might need to know.
    It *is* possible, but you only get an active/active LACP connection to an EX series switch running in a virtual chassis configuration.
    Juniper's implementation of MC-LAG on non-EX devices only supports active/standby mode: one port of the vPC on the Nexus pair will go immediately into hot-standby mode and stay that way.
    You have to fiddle with the LACP priorities to get any form of failover working; otherwise, the only way I could find to get the second leg of the LACP bundle to go active was to shut down the primary on the Nexus, then shut down and no shut the port on the second Nexus.
    To an EX series switch, LACP "just worked".
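    For the priority fiddling, a rough sketch of the Nexus-side knobs (the values below are illustrative only, and the Juniper side has to be set to complementary values):
    lacp system-priority 4096
    !--- The lower system priority wins the LACP election
    interface ethernet 1/10
      lacp port-priority 100
    !--- Prefer this member when the partner only brings one link active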

  • ASA FW VPC to N7K with FP enabled

    Hi,
    We are doing some testing of FabricPath (FP), vPC and some other NX-OS features on the Nexus platform.
    The ASA firewall connects to 2 x N7K with vPC (2 x 10GbE).
    Logically, we set it up like this:
    N7K --- VLAN10 --- FW ---- VLAN12 ---- PC
    The N7K has a static route to VLAN12 via the FW.
    We did some basic testing from the N7K: we can ping the front interface of the firewall on VLAN10, but we cannot ping the PC on VLAN12.
    When we ping from the N7K, the packet does not reach the PC, or even the FW (based on the logs and a trace on the FW).
    When we debug ICMP on the N7K and ping from the PC, I can see the traffic and the N7K replies, but the FW does not receive any packets.
    Any idea where else I should check?

    Hi Marvin,
    I will collect the "show interface" output and send it, but for now I have the following "show run interface" output:
    N7K-1:
    interface Ethernet3/19
      description ### ASA Failover , connected to N7k-2 ###
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 5,6
      speed 1000
      no shutdown
    N7K-2:
    interface Ethernet3/19
      description ### ASA Failover , connected to N7k-1 ###
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 5,6
      speed 1000
      no shutdown
    ASAs:
    interface Ethernet3/5
      description ### Connected to ASA-1 ###
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 5,6
      speed 1000
    interface Ethernet3/5
      description ### Connected to ASA-2 ###
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 5,6
      speed 1000
      no shutdown
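    In the meantime, a few generic NX-OS checks that usually help narrow down where the ping dies (assuming VLAN 10 is the firewall-facing VLAN, as in the topology above):
    show vpc brief
    show port-channel summary
    show mac address-table vlan 10
    show ip arp
    show ip route static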
    Regards

  • In Oracle RAC, if a node is evicted while a SELECT query is still fetching data, how does failover to another node happen internally?

    In Oracle RAC, if a user issues a SELECT query and the node serving it is evicted while the data is being fetched, how does the failover to another node happen internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrators Guide, the section on Transparent Application Failover.

  • Reporting Services as a generic service in a failover cluster group?

    There is some confusion on whether or not Microsoft will support a Reporting Services deployment on a failover cluster using scale-out, and adding the Reporting Services service as a generic service in a cluster group to achieve active-passive high
    availability.
    A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
    http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
    This article (http://technet.microsoft.com/en-us/library/bb630402.aspx) on the other hand states: "Failover clustering is supported only for the report server database; you
    cannot run the Report Server service as part of a failover cluster."
    This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
    Best Regards,
    Peter Wretmo

    Hi Peter,
    Thanks for your posting.
    As Lukasz said in the
    blog, failover clustering with SSRS is possible. However, during the failover there is some time during which users will receive errors when accessing SSRS since the network names will resolve to a computer where the SSRS service is in the process of starting.
    Besides, there are several considerations and manual steps involved on your part before configuring the failover clustering with SSRS service:
    Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server.  If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause
    a significant failover impact to the entire environment.
    SSRS fails over independently of SQL Server.
    If SSRS is running, it is going to do work on behalf of the overall deployment, so it will be active. To make SSRS passive, stop the SSRS service on all passive cluster nodes.
    So, SSRS is designed to achieve High Availability through the Scale-Out deployment. Though a failover clustered SSRS deployment is achievable, it is not the best option for achieving High Availability with Reporting Services.
    Regards,
    Mike Yin
    TechNet Community Support

  • What is the solution for NAT failover with 2 ISPs?

    I currently have leased-line links to two ISPs for internet connectivity. I separate user traffic by access list, so that www goes to ISP1 and mail and other protocols go to ISP2. Let's say the link to ISP1 goes down: I need www traffic to fail over to ISP2, and vice versa.
    The problem is the ACL on the NAT statement.
    Say you configure it like this:
    access-list 101 permit tcp any any eq www --> www traffic to ISP1
    access-list 101 permit tcp any any eq smtp --> backup for mail traffic when the link to ISP2 is down
    access-list 102 permit tcp any any eq smtp --> mail traffic to ISP2
    access-list 102 permit tcp any any eq www --> backup for www traffic to go to ISP2
    ip nat inside source list 101 interface s0 overload
    ip nat inside source list 102 interface s1 overload
    In this case, the links to ISP1 and ISP2 are both up.
    When you apply these ACLs to the NAT statements, NAT will process each statement in order (if I'm incorrect, please correct me), so mail traffic will also match ACL 101 and be NATed with the IP of ISP1 only.
    Please advise a solution for this.
    TIA

    Hi,
    If you have two serial links connecting to two different service providers, then you can try this:
    access-list 101 permit tcp any any eq www
    access-list 102 permit tcp any any eq smtp
    route-map isp1 permit 10
    match ip address 101
    set interface s0
    route-map isp2 permit 10
    match ip address 102
    set interface s1
    ip nat inside source route-map isp1 interface s0 overload
    ip nat inside source route-map isp2 interface s1 overload
    ip nat inside source list 103 interface s0 overload
    ip nat inside source list 104 interface s1 overload
    ip route 0.0.0.0 0.0.0.0 s0
    ip route 0.0.0.0 0.0.0.0 s1 100
    If either link fails, traffic will automatically prefer the other serial interface.
    I have not tried this config; I just worked it out on logic. Please go through it and try it if possible.
    Please see the Note 2 column at:
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080093fca.shtml#related
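    One more thing worth keeping in mind: the floating static route above only takes over if the local serial interface actually goes down. If an ISP can fail further upstream while your interface stays up, reachability tracking is usually layered on top. A rough sketch under that assumption, with a placeholder probe target (4.2.2.2) and object numbers; the exact ip sla syntax varies by IOS release:
    ip sla 1
     icmp-echo 4.2.2.2 source-interface s0
     frequency 10
    ip sla schedule 1 life forever start-time now
    track 1 ip sla 1 reachability
    ip route 0.0.0.0 0.0.0.0 s0 track 1
    ip route 0.0.0.0 0.0.0.0 s1 100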
    Hope it helps
    regards
    vanesh k

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know about...) and if you're fine with your storage being a single
    point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was
    useful when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both exist, so the scenario is pointless: just export your existing shared storage without
    any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • No data in ecc variables in failover mode

    Hi all,
    got a problem with custom enterprise data layout when CAD is connected to secondary node.
    With "set enterprise..." I set a special layout for CAD enterprise data in my script. This works fine as long as CAD is connected to primary node. When CAD is connected to secondary node there is no data in expanded call variales and CAD displays default layout. Changing default layout and writing the content to predefined variables works fine. So it seems only extended call variables are not working in failover mode.
    Any ideas?
    Br
    Sven
    (UCCX v8.5)

    Hi Sven,
    Looks like TAC helped fix the issue. Previously we had rebooted both servers without success. TAC suggested restarting the Desktop Enterprise Service on both servers (I did the Pub first, then the Sub). I've verified that ECC variables are being sent to CAD correctly.
    The root cause might have been a network outage we had a week ago. We were using the Desktop Workflow Administrator at the time of the outage.
    Kyle

  • Difference between scalable and failover cluster

    Difference between scalable and fail over cluster

    A scalable cluster is usually associated with HPC clusters, but some might argue that Oracle RAC is this type of cluster. The workload can be divided up and sent to many compute nodes. It is usually used for a vectored workload.
    A failover cluster is one where a standby system or systems are available to take over the workload when needed. It is usually used for scalar workloads.

  • Back to Back vPC - Why is it not possible?

    Good Evening!
    I'm studying for CCDP and am currently sitting on page 271 for those of you that have the official book (642-874). 
    The topology is similar to the one in the book.
    If I understand correctly, in an active/active FEX design two Cisco Nexus 5000s plug into two Cisco Nexus 2000s, which in turn plug into the server. There is a two-way vPC between the 2000s and the 5000s (it doesn't look like that's shown in the picture, though). However, because there is a vPC between the FEXs and the Nexus 5000s, you cannot have a vPC between the FEXs and the servers. Why is this? Any clarification at a conceptual level of how an active/active FEX configuration works is also appreciated. I've read the section, but since I haven't had hands-on time with any of this equipment I'm having some trouble conceptualizing everything. Thanks for your time.
    Grant

    Question:
    =======
     However, because there is a vPC between the FEXs and the Nexus 5000 you cannot have a vPC between the FEXs and the servers. Why is this?
    Answer: Actually you can have a vPC between the FEX and the servers.
    https://communities.cisco.com/thread/21567?tstart=0
    2)
    good document on VPC:
    http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/configuration_guide_c07-543563.html
    Video on the same:
    http://www.ine.com/all-access-pass/training/playlist/ccie-data-center-nexus/vpc---active-100121202.html
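    Purely as an illustration of that (the FEX numbers, interface numbers and VLAN below are made up, and this assumes 5500-series switches on a release that supports enhanced vPC), a host-facing port-channel through two dual-homed FEXes would be configured identically on both 5Ks:
    interface ethernet 101/1/1
      description server NIC1 via FEX 101
      channel-group 10 mode active
    interface ethernet 102/1/1
      description server NIC2 via FEX 102
      channel-group 10 mode active
    interface port-channel 10
      switchport access vlan 50
      spanning-tree port type edge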
    HTH
