Nexus 7000 Switch Fabric Redundancy with FAB-2?

Hi all,
I know that the N7K does not have N+1 switch fabric redundancy: all five switch fabric modules are required to provide full throughput. Does this change with FAB-2 modules? In other words, with the new modules, does the N7K have N+1 fabric redundancy?
Thanks in advance.
Dumlu

Whether you have N+1 fabric redundancy depends on the type of I/O module.
If you are using the M1-32 (80 Gbps per slot) and you have 3x FAB-1 (46 Gbps each), that already gives you N+1.
The same story applies to FAB-2. If you are using F1-series modules and have 4x FAB-2, you have N+1. However, if you are using F2-series line cards with all five FAB-2 installed, you do not have N+1, because the F2 needs 480 Gbps per slot and all five FAB-2 together provide 550 Gbps.
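To make the arithmetic explicit, using only the per-module figures quoted above (46 Gbps per slot per FAB-1, 110 Gbps per slot per FAB-2):

    M1-32 needs  80 Gbps/slot:  2x FAB-1 =  92 Gbps >  80  ->  N+1 with three fabrics installed
    F2    needs 480 Gbps/slot:  4x FAB-2 = 440 Gbps < 480  ->  no N+1 even with five fabrics installed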
HTH,
jerry

Similar Messages

  • Nexus 7000 - unexpected shutdown of vPC-Ports during reload of the primary vPC Switch

    Dear Community,
    We experienced an unusual behavior of two Nexus 7000 switches within a vPC domain.
    According to the attached sketch, we have four N7Ks in two data centers - two Nexus 7Ks are in a vPC domain for each data center.
    Both data centers are connected via a Multilayer-vPC.
We had to reload one of these switches, and I expected the other N7K in the vPC domain to continue forwarding over its vPC member ports.
Instead, all vPC ports were disabled on the secondary switch until the reload of the first N7K (vPC role: primary) had finished.
    Logging on Switch B:
    20:11:51 <Switch B> %VPC-2-VPC_SUSP_ALL_VPC: Peer-link going down, suspending all vPCs on secondary
    20:12:01 <Switch B> %VPC-2-PEER_KEEP_ALIVE_RECV_FAIL: In domain 1, VPC peer keep-alive receive has failed
    In case of a Peer-link failure, I would expect this behavior if the other switch is still reachable via the Peer-Keepalive-Link (via the Mgmt-Port), but since we reloaded the whole switch, the vPCs should continue forwarding. 
    Could this be a bug or are there any timers to be tuned?
    All N7K switches are running on NX-OS 6.2(8)
    Switch A:
    vpc domain 1
      peer-switch
      role priority 2048
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-B>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Switch B:
    vpc domain 1
      peer-switch
      role priority 1024
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-A>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Best regards

    Problem solved:
During the reload of the Nexus 7K, the line cards were powered off slightly earlier than the Mgmt interface. As a result, the secondary Nexus 7K received at least one vPC peer-keepalive message while the peer link was already down. To avoid a split-brain scenario, it suspended its vPC member ports.
We now use dedicated interfaces on the line cards for the vPC peer-keepalive link, so a reload of one N7K no longer results in a total network outage.
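A minimal sketch of that change (a dedicated front-panel link in its own VRF; the interface number, VRF name and addressing are examples, not taken from the thread):

    vrf context vpc-keepalive
    interface Ethernet3/48
      no switchport
      vrf member vpc-keepalive
      ip address 10.255.255.1/30
      no shutdown
    vpc domain 1
      peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive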

Challenge: spanning-tree control between a Dell M6220 switch and a stack of two Cisco 3750s (two links; the stack behaves like one switch for redundancy)

Hello,
I have a spanning-tree problem when I connect two links from a Dell M6220 switch (which also carries blades for virtual machines) to two Cisco 3750 switches joined in a stack (so they behave like one switch, with a single management IP, for redundancy).
The Dell blade switch runs Rapid STP and the 3750s run PVST; Cisco says this is not a problem, it only takes longer to build the tree.
These are the options I am considering trying on Sunday:
Does spanning tree need a common native VLAN to exchange BPDUs (switchport trunk native vlan 250)?
Is it better to configure spanning-tree guard root on both 3750 ports to keep the Dell from becoming the spanning-tree root? (See the sketch after the config below.)
Is it better to set spanning-tree port-priority on the Dell switch ports?
Could you help me control the root? Do you think another solution would be better? Thanks!
     CONFIG WITH PROBLEM
    ======================
3750 (the two ports belong to two 3750s connected with a stack cable, as shown in show run):
    interface GigabitEthernet2/0/28
     description VIRTUAL SNMP2
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 4,13,88,250
     switchport mode trunk
     switchport nonegotiate
     logging event trunk-status
     shutdown
    interface GigabitEthernet1/0/43
     description VIRTUAL SNMP1
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 4,13,88,250
     switchport mode trunk
     switchport nonegotiate
     shutdown
Dell M6220 (it is a single switch):
    interface Gi3/0/19
    switchport mode trunk
    switchport trunk allowed vlan 4,13,88,250
    exit
    interface Gi4/0/19
    switchport mode trunk
    switchport trunk allowed vlan 4,13,88,250
    exit
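As a starting point for the root-control question above, a minimal sketch on the 3750 stack (the priority value is an example; the VLAN list matches the trunks above, and the ports would also need to be un-shut):

    spanning-tree vlan 4,13,88,250 priority 4096
    interface range GigabitEthernet1/0/43 , GigabitEthernet2/0/28
     spanning-tree guard root
     no shutdown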

FYI for Catalyst users - here is the equivalent config for the SG-300. VLAN 1 is required on the allowed list on the Catalyst side (3xxx/4xxx/6xxx).
    In this example:
    VLANS - Voice on 188, data on 57, management on 56.
    conf t
    hostname XXX-VOICE-SWXX
    no passwords complexity enable
    username xxxx priv 15 password XXXXX
    enable password xxxxxx
    ip ssh server
    ip telnet server
    crypto key generate rsa
    macro auto disabled
    voice vlan state auto-enabled !(otherwise one switch controls your voice vlan….)
    vlan 56,57,188
    voice vlan id 188
    int vlan 56
    ip address 10.230.56.12 255.255.255.0
    int vlan1
    no ip add dhcp
    ip default-gateway 10.230.56.1
    interface range GE1 - 2
    switchport mode trunk
    channel-group 1 mode auto
    int range fa1 - 24
    switchport mode trunk
    switchport trunk allowed vlan add 188
    switchport trunk native vlan 57
    qos advanced
    qos advanced ports-trusted
    exit
    int Po1
    switchport trunk allowed vlan add 56,57,188
    switchport trunk native vlan 1
    do sh interfaces switchport po1
!CATALYST SIDE
!Must explicitly allow VLAN 1 (this is not normal for Catalysts) or spanning tree will not work, even though it is the native VLAN on both sides.
    interface Port-channel1
    switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,56,57,188
    switchport mode trunk

  • %ARP-3-DUP_VADDR_SRC_IP on two Nexus 7000 using HSRP

    Hi,
I am receiving the error %ARP-3-DUP_VADDR_SRC_IP on two Nexus 7000 switches that are configured with HSRP. I only see this error when the Nexus performs a failover to the HSRP standby unit. I personally think this can be safely ignored, but wanted to get another opinion.
I can generate the error when I initiate a failover of several SVIs that are configured for HSRP. I do not see the error when a failover does not happen.
I haven't been able to find any documentation for the Nexus on this error.
I have found documentation on this error for Catalyst switches, and it seems to indicate a loop in the network. I can confirm that there are no loops in the network.
    Has anyone else seen this happen on a Nexus?  Any links to documentation would be great too. 
    Thanks

You have a duplicate IP address on some host connected to port-channel 10;
probably an access-layer switch is connected to that port channel.
Try to find the port where this host is connected on the access-layer switch:
show mac address-table | include ac8f
And don't forget to rate the post.
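A hedged sketch of chasing it down from the Nexus side (the ac8f fragment comes from the thread; the full MAC is not shown, so treat <full-mac> as a placeholder):

    show ip arp | include ac8f                        ! which IP addresses are claiming that MAC
    show mac address-table address <full-mac>         ! which interface / port channel it is learned on
    show mac address-table interface port-channel 10  ! everything learned on the suspect port channel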

  • Ciscoworks 2.6 and Nexus 7000 issues

Running LMS 2.6 with RME version 4.0.6 and DFM 2.0.13.
We keep getting false alerts in DFM on the temperature of our Nexus 7000 switches. The alert says that the high-temperature threshold is 45C and that it is being exceeded at 46C. What bothers me is that the switch itself reports a threshold of around 100C or more. Any ideas as to why DFM would be picking up a threshold so far off the mark?
Also, in regards to RME, I cannot pull configs from the Nexus 7000s. The check box in "archive config" is grayed out so that I can't check it. I downloaded the device packages for the 7000 into RME, but it will not pull configs. Is this not supported under our version of RME, or would there be some other reason I can't do this?
    Thanks for any assistance with these issues!

UPDATE:
I fixed the RME config-pull issue. I thought I had previously downloaded the Nexus device packages so that RME could work with them, but upon checking again it looks like I just didn't have them installed. Got that piece fixed, and now I can pull configs from the switches just fine.
Still having problems with the temperature readings in DFM not accurately reflecting what is actually on the switches. Any suggestions as to where to start hunting down this issue are greatly appreciated. Thanks!

  • FCoE using Brocade cards CNA1020 and Cisco Nexus 5548 switches

    All,
    I have the following configuration and problem that I am not sure how to fix:
I have three Dell R910 servers with 1 TB of memory, and each has two dual-port Brocade 1020 CNA cards. I am using a distributed switch for the VM network and a second distributed switch for vMotion, with two of the 10G ports configured in each distributed switch using IP Hash. The management network is configured on a standard switch with two 1G ports.
On the Nexus side, we have two Nexus 5548 switches connected together with a trunk. We have two vPCs configured to each ESX host, each consisting of two 10G ports with one port going to each switch. The vPC is configured for static LAG.
What I am seeing is that after a few hours the virtual machines are no longer accessible over the network. Pinging the VM fails, and from the VM console pinging the gateway fails as well, but pinging another virtual machine on the same host and VLAN works, so traffic is passing through the ESX backplane. If I reboot the ESX host, things work again for another few hours or so, and then the problem repeats.
The version of vSphere I am using is ESXi 4.1.
Please assist; I am stuck.
    Thanks...

Here is the link to the Nexus and Brocade interoperability matrix:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix7.html#wp313498
This table shows the models that have been tested and verified.
However, I do not see the Brocade 5300 listed in the table. It could be that interoperability has simply not been tested by both vendors for that particular model.
  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a Dual DC Upgrade. It is pretty standard design, customer requires a L2 extension between the DC for Vmotion etc. Customer would like to leverage certain features of the Nexus product suite, including:
    Trust Sec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
As always, cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting back from Customer seems to be mirrored in Cisco's technology roadmap. This relates specifically to the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise Customers ask the question of why they need to have the 7000 and 5000 in their topologies as many of the features they need are supported in both platforms and their environments will never scale to meet such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
The Nexus 7000 only supports the 2000 on the M-series I/O modules; can FCoE be implemented on a 2000 connected to a 7000 using the M-series I/O module?
Is the F-series I/O module the only I/O module that supports FCoE?
Are there any plans to introduce native FC support on the Nexus 7000?
Are there any plans to introduce full fabric support (230 Gbps) to the M-series I/O module?
Are there any plans to introduce FabricPath to the M-series I/O module?
Are there any plans to introduce L3 support to the F-series I/O module?
Is the entire 2000 series allocated to a single VDC, or can individual 2000-series ports be allocated to a VDC?
Is TrustSec only supported on multi-hop DCI links when using the ASR with an EoMPLS pseudowire?
Are there any plans to introduce TrustSec and VDC support on the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
The only I/O card that cannot coexist with other cards in the same VDC is the F2, due to its specific hardware implementation.
    All other cards can be mixed.
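As a hedged illustration of that restriction (the VDC name and interface range are examples, not from the thread), F2 ports are typically placed in a VDC limited to the F2 module type:

    vdc F2-VDC
      limit-resource module-type f2
      allocate interface Ethernet4/1-48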
Regarding the fabric versions: Fabric-2 provides much higher throughput than Fabric-1.
    So in order to get full speed from F2/M2 modules you will need Fab-2 modules.
    Fab2 modules won't give any advantages to M1/F1 modules.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago, curious if anything has changed?

    Hi Jenny,
    I think the answer will depend on what you mean by is FEX supported with vPC?
When connecting a FEX to the Nexus 7000, you're able to run vPC from the host interfaces of a pair of FEXes to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram on the post Nexus 7000 Fex Supported/Not Supported Topologies.
What you're not able to do is run vPC on the FEX network interfaces that connect up to the Nexus 7000, i.e., dual-home the FEX to two Nexus 7000s. This is shown in illustrations 8 and 9 under the FEX topologies not supported on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
The view is that when connecting a FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEXes, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is backed by a fully redundant Nexus 7000 (supervisors, fabrics, power, I/O modules, etc.), availability is limited by the single FEX, so dual-homing does not increase availability.
    Regards
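For context, the supported host-vPC topology described above maps to roughly this configuration (a minimal sketch; the FEX, interface, port-channel, vPC and VLAN numbers are placeholders):

    ! On N7K-1 (N7K-2 mirrors this on its own FEX)
    interface Ethernet101/1/1
      switchport access vlan 10
      channel-group 20 mode active
    interface port-channel 20
      switchport access vlan 10
      vpc 20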

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
There are two Nexus 7000s connected with a vPC peer link. Each Nexus 7000 has a FEX attached to it.
The server has two connections, one going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
R(A)               R(S)
 |                   |
7K --peer link--    7K
 |                   |
FEX                FEX
Server connected to both FEXes
The question is: we have two routers, one connected to each Nexus 7K, running HSRP (one active, one standby). How do I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both switches can forward packets.) Will the traffic go to the secondary switch and then via the peer link to the active switch and then to the active router? (From what I have read, packets from end hosts that cross the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and form a vPC; otherwise, those ports become orphan ports, which rely on the vPC peer link.
    HTH
    Jay Ocampo
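For reference, a minimal HSRP sketch on the vPC pair (the VLAN, addresses and priority are placeholders; with vPC, both the HSRP active and standby N7K forward traffic sent to the virtual MAC):

    feature hsrp
    feature interface-vlan
    interface Vlan10
      no shutdown
      ip address 10.1.10.2/24    ! .3 on the peer
      hsrp 10
        priority 110             ! leave the peer at the default of 100
        ip 10.1.10.1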

  • Broadcom LiveLink : Receiving MAC flaps with Cisco Nexus 7000

We are migrating from two Nortel 8600s running VRRP at the distribution layer to Cisco Nexus 7Ks using HSRP. We have servers connected to two 3750G switches, which then connect to the Nexus pair (previously the 8600s). As soon as we connected the 3750s to the Nexus switches and moved the gateway to the Nexus, LiveLink forced all the servers to alternate traffic between NIC1 and NIC2.
Since LiveLink is a teaming application, it uses a virtual MAC for NIC1 and NIC2, and the virtual MAC associated with the IP address moves to the active link.
LiveLink checks the availability of the gateway by polling it out of each interface with an ARP request.
The problem does not show up in our Cisco VSS environment, nor with Nortel's VRRP. I tried running VRRP on the Nexus, but no joy.
    Anyone know of a bug that could cause this issue?

Unfortunately, we have LiveLink enabled on most of our Windows servers in our data centers. One of my colleagues sent me this bug ID. I'm not sure if this is the cause, but it's worth trying. We will update NX-OS (currently on 5.1.1) next week and see if that fixes the problem.
    •CSCtl85080
    Symptom: Incomplete Address Resolution Protocol (ARP) entries are observed on a Cisco Nexus 7000 Series switch, along with partial packet loss and a memory leak.
    Conditions: This symptom might be seen when ARP packets have a nonstandard size (that is, greater than 64 bytes).
    Workaround: This issue is resolved in 5.1.3.

  • FCoE with Cisco Nexus 5548 switches and VMware ESXi 4.1

Can someone share with me what needs to be set up on the Cisco Nexus side to work with VMware in the following scenario?
Two servers, each with two dual-port FCoE cards, with two ports connected to two Nexus 5548 switches that are clustered together. We want to team the ports together on the VMware side using IP Hash, so what should be done on the Cisco side for this to work?
Thanks...
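For reference, a minimal sketch of a matching switch-side setup (assuming the two 5548s form a vPC pair; IP-hash teaming without LACP generally corresponds to a static port channel, i.e. channel-group mode on; all interface, port-channel and vPC numbers are placeholders):

    ! On each Nexus 5548 (one member link per switch toward the same host)
    interface Ethernet1/1
      switchport mode trunk
      channel-group 30 mode on
    interface port-channel 30
      switchport mode trunk
      vpc 30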

Andres,
The Cisco roadmap for the 5010 and 5020 doesn't include extending the current total of 12 FEXes. The 5548 and 5596 will support more (16) per 55xx, and with the 7K will support up to 32 FEXes.
Documentation has been spotty on this subject, because the term "5K" suggests that all 5000-series switches will support the extended FEX counts, which is not the case; only the 55xx will support more than 12 FEXes. Maybe in the future the terminology should distinguish the 5000 series from the 5500 series, as there are several differences and advancements between the two.

  • Dell Servers with Nexus 7000 + Nexus 2000 extenders

    << Original post by smunzani. Answered by Robert. Moving from Document section to Discussions>>
    Team,
I would like to use some of the existing Dell servers in a new network design of Nexus 7000 + Nexus 2000 extenders. What are my options for FEC to the hosts? All references to the M81KR that I found on CCO relate to the UCS product only.
    What's best option for following setup?
    N7K(Aggregation Layer) -- N2K(Extenders) -- Dell servers
    Need 10G to the servers due to dense population of the VMs. The customer is not up for dumping recently purchased dell boxes in favor of UCS. Customer VMware license is Enterprise Edition.
    Thanks in advance.

To answer your question, the M81KR-VIC is a mezzanine card for UCS blades only. For Cisco rack servers there is a PCIe version, called the P81. Both are made for Cisco servers only, due to the integration with server management and the virtual interface functionality.
More information on it here:
http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/data_sheet_c78-558230.html
    Regards,
    Robert

Very low transfer speed issue on Sun 4270 M2 servers connected to a Nexus 5548 switch over 10Gb fiber.

Hi,
I have two Sun 4270 M2 servers connected to a Nexus 5548 switch over 10Gb fiber cards. I am getting throughput of just 60 MB per second when transferring a 5 GB file between the two servers, which is about the same speed I used to get on a 1Gb network. Please suggest how to improve the transfer speed. On the servers, ports eth4 and eth5 are bonded in bond0 with mode=1 (active-backup). The server environment will be used for OVS 2.2.2.
Below are the details of the network configuration on the servers. Quick help would be highly appreciated.
    [root@host1 network-scripts]# ifconfig eth4
    eth4      Link encap:Ethernet  HWaddr 90:E2:BA:0E:22:4C
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
              RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
              TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2492781394 (2.3 GiB)  TX bytes:3911207623 (3.6 GiB)
    [root@host1 network-scripts]# ifconfig eth5
    eth5      Link encap:Ethernet  HWaddr 90:E2:BA:0E:22:4C
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
              RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3916644 (3.7 MiB)  TX bytes:0 (0.0 b)
    [root@host1 network-scripts]# ethtool eth4
    Settings for eth4:
            Supported ports: [ FIBRE ]
            Supported link modes:   1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes:  1000baseT/Full
                                    10000baseT/Full
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: FIBRE
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
            Link detected: yes
    [root@host1 network-scripts]# ethtool eth5
    Settings for eth5:
            Supported ports: [ FIBRE ]
            Supported link modes:   1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes:  1000baseT/Full
                                    10000baseT/Full
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: FIBRE
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
            Link detected: yes
    [root@host1 network-scripts]#
    [root@host1 network-scripts]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth4
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth4
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 90:e2:ba:0e:22:4c
    Slave Interface: eth5
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 90:e2:ba:0e:22:4d
    [root@host1 network-scripts]# modinfo ixgbe | grep ver
    filename:       /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
    version:        3.9.17-NAPI
    description:    Intel(R) 10 Gigabit PCI Express Network Driver
    srcversion:     31C6EB13C4FA6749DF3BDF5
    vermagic:       2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
    [root@host1 network-scripts]#brctl show
    bridge name     bridge id               STP enabled     interfaces
    vlan301         8000.90e2ba0e224c       no              bond0.301
    vlan302         8000.90e2ba0e224c       no              vif1.0
                                                            bond0.302
    vlan303         8000.90e2ba0e224c       no              bond0.303
    vlan304         8000.90e2ba0e224c       no              bond0.304
    [root@host2 test]# ifconfig eth5
    eth5      Link encap:Ethernet  HWaddr 90:E2:BA:0F:C3:15
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
              RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
              TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:190977431 (182.1 MiB)  TX bytes:3114347186 (2.9 GiB)
    [root@host2 network-scripts]# ifconfig eth4
    eth4      Link encap:Ethernet  HWaddr 90:E2:BA:0F:C3:15
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
              RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
              TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4982317 (4.7 MiB)  TX bytes:80029 (78.1 KiB)
    [root@host2 test]#
    [root@host2 network-scripts]# ethtool eth4
    Settings for eth4:
            Supported ports: [ FIBRE ]
            Supported link modes:   1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes:  1000baseT/Full
                                    10000baseT/Full
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: FIBRE
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
            Link detected: yes
    [root@host2 test]# ethtool eth5
    Settings for eth5:
            Supported ports: [ FIBRE ]
            Supported link modes:   1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes:  1000baseT/Full
                                    10000baseT/Full
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: FIBRE
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
            Link detected: yes
    [root@host2 network-scripts]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth5
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth5
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 90:e2:ba:0f:c3:14
    Slave Interface: eth4
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 90:e2:ba:0f:c3:15
    [root@host2 network-scripts]# modinfo ixgbe | grep ver
    filename:       /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
    version:        3.9.17-NAPI
    description:    Intel(R) 10 Gigabit PCI Express Network Driver
    srcversion:     31C6EB13C4FA6749DF3BDF5
    vermagic:       2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
    [root@host2 network-scripts]#brctl show
    bridge name     bridge id               STP enabled     interfaces
    vlan301         8000.90e2ba0fc315       no              bond0.301
    vlan302         8000.90e2ba0fc315       no              bond0.302
    vlan303         8000.90e2ba0fc315       no              bond0.303
vlan304         8000.90e2ba0fc315       no              vif1.0
                                                        bond0.304
    Thanks....
    Jay

    Hi,
Thanks for the reply, but the RX error count keeps increasing, and the transfer speed between the two servers is at most 60 MB/s on the 10Gb fiber card. I am also getting the same speed when I transfer data from a server to the storage over the 10Gb card. The servers and storage are connected through the Nexus 5548 switch.
    #ifconfig eth5
    eth5      Link encap:Ethernet  HWaddr 90:E2:BA:0E:22:4C
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
              RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
              TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:624978785 (596.0 MiB)  TX bytes:2897603160 (2.6 GiB)
    JP
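Given the MTU 9000 bonds and the incrementing RX frame errors, one thing worth ruling out is whether jumbo frames are enabled end-to-end on the 5548 (a hedged sketch, not a diagnosis; on the 5548 jumbo MTU is applied through a network-qos policy, and the policy name and interface below are examples):

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo
    ! Then watch the server-facing switch ports for input errors / CRC / giants:
    show interface ethernet 1/1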

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
Please help me in answering the following questions (UCSM 1.4(xx), 2x UCS 6140XP, 2x Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
Q2. If vPC is to be configured on the Nexus 7000, is it COMPULSORY to configure a port channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO port channel within UCS versus HAVING a port channel where vPC is concerned? // The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy; vPC would be preferred.
Q3. If vPC is to be configured on the Nexus 7000, I understand there is a limitation of confining to ONLY 1 vSphere NIC teaming load-balancing algorithm, i.e. Route Based on IP Hash. Is it correct? // Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "Route based on originating virtual port ID".
Again, what is the pro and con here with regard to application behavior where Layer 2 or 3 is concerned? Or what are the best practices? // UCS operates at L2; routing (L3) should be performed northbound.
    Cheers,
    David Jarzynka

  • Ask the Expert: Basic Introduction and Troubleshooting on Cisco Nexus 7000 NX-OS Virtual Device Context

    With Vignesh R. P.
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions of Cisco expert Vignesh R. P. about the Cisco® Nexus 7000 Series Switches and support for the Cisco NX-OS Software platform.
    The Cisco® Nexus 7000 Series Switches introduce support for the Cisco NX-OS Software platform, a new class of operating system designed for data centers. Based on the Cisco MDS 9000 SAN-OS platform, Cisco NX-OS introduces support for virtual device contexts (VDCs), which allows the switches to be virtualized at the device level. Each configured VDC presents itself as a unique device to connected users within the framework of that physical switch. The VDC runs as a separate logical entity within the switch, maintaining its own unique set of running software processes, having its own configuration, and being managed by a separate administrator.
    Vignesh R. P. is a customer support engineer in the Cisco High Touch Technical Support center in Bangalore, India, supporting Cisco's major service provider customers in routing and MPLS technologies. His areas of expertise include routing, switching, and MPLS. Previously at Cisco he worked as a network consulting engineer for enterprise customers. He has been in the networking industry for 8 years and holds CCIE certification in the Routing & Switching and Service Provider tracks.
    Remember to use the rating system to let Vignesh know if you have received an adequate response. 
Vignesh might not be able to answer every question due to the volume expected during this event. Remember that you can continue the conversation in the Data Center sub-community discussion forum shortly after the event. This event lasts through January 18, 2013. Visit this forum often to view responses to your questions and the questions of other community members.
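For readers new to VDCs, creating one and switching into it looks roughly like this (a minimal sketch run from the default VDC; the VDC name and interface range are examples):

    vdc Aggregation
      allocate interface Ethernet3/1-8
    switchto vdc Aggregation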

Hi Vignesh,
Is there any limitation to connecting an N2K directly to the N7K?
If I have an F2 10G card and another F2 1G card and I want to create 3 VDCs:
VDC1 = DC core
VDC2 = Aggregation
VDC3 = Campus core
Do we need to add a link between the different VDCs?
Thanks
