Enabling FCoE on N7K-F132XP-15

I ran into a problem where I cannot license FCoE on an N7K-F132XP-15 module. The license file is installed properly, but I still get "ERROR:  Cannot obtain license, line card is not supported". The problem seems to be related to the hardware version (1.0) of this particular module. Does anyone have experience with this?
switch(config)# show module
Mod  Ports  Module-Type                         Model              Status
1    32     10 Gbps Ethernet Module             N7K-M132XP-12      ok
2    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
5    0      Supervisor module-1X                N7K-SUP1           active *
Mod  Sw              Hw
1    6.0(4)          1.8
2    6.0(4)          1.0
5    6.0(4)          1.4
Mod  MAC-Address(es)                         Serial-Num
1    d0-d0-fd-f1-5d-00 to d0-d0-fd-f1-5d-24  JAF1429CJCR
2    f8-66-f2-e4-d5-4c to f8-66-f2-e4-d5-90  JAF1704BFTD
5    00-24-f7-1d-c3-08 to 00-24-f7-1d-c3-10  JAF1317BDBF
Mod  Online Diag Status
1    Pass
2    Pass
==================================================================================
switch(config)# show license usage
Feature                      Ins  Lic   Status Expiry Date Comments
                                 Count
MPLS_PKG                      No    -   Unused             -
STORAGE-ENT                   No    -   Unused             -
ENTERPRISE_PKG                No    -   Unused             -
FCOE-N7K-F132XP               Yes   1   Unused 30 Jan 2014 -
ENHANCED_LAYER2_PKG           No    -   Unused             -
SCALABLE_SERVICES_PKG         No    -   Unused             -
TRANSPORT_SERVICES_PKG        No    -   Unused             -
LAN_ADVANCED_SERVICES_PKG     Yes   -   Unused Never       license missing
LAN_ENTERPRISE_SERVICES_PKG   Yes   -   Unused Never       license missing
=======================================================================================
switch(config)# install feature-set fcoe
feature set is installed already(0x40aa0011)
switch(config)# li
license   line
switch(config)# license fcoe module 2
ERROR:  Cannot obtain license, line card is not supported
switch(config)#

Padma, I think you solved the problem. The "license fcoe module 2 force" command took effect (see output below). The "force" option is hidden, so I wouldn't have guessed it on my own. I hope that from here on the card will be able to handle FCoE configuration. If you know of any other pitfalls with this particular hardware revision, please share.
n7k# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
Software
  BIOS:      version 3.22.0
  kickstart: version 6.0(4)
  system:    version 6.0(4)
  BIOS compile time:       02/20/10
  kickstart image file is: bootflash:///n7000-s1-kickstart.6.0.4.bin
  kickstart compile time:  12/25/2020 12:00:00 [06/22/2012 19:26:16]
  system image file is:    bootflash:///n7000-s1-dk9.6.0.4.bin
  system compile time:     6/6/2012 18:00:00 [06/22/2012 21:03:20]
Hardware
  cisco Nexus7000 C7010 (10 Slot) Chassis ("Supervisor module-1X")
  Intel(R) Xeon(R) CPU         with 8245320 kB of memory.
  Processor Board ID JAF1330AHKM
  Device name: n7k
  bootflash:    2030616 kB
  slot0:        2075246 kB (expansion flash)
Kernel uptime is 0 day(s), 1 hour(s), 36 minute(s), 46 second(s)
Last reset
  Reason: Unknown
  System version: 6.0(4)
  Service:
plugin
  Core Plugin, Ethernet Plugin
CMP (Module 5) ok
CMP Software
  CMP BIOS version:        02.01.05
  CMP Image version:       5.1(1) [build 5.0(0.66)]
  CMP BIOS compile time:   7/13/2008 19:44:27
  CMP Image compile time:  11/29/2010 12:00:00
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
n7k# attach module 2
module-2# show hardware internal dev-version
Name                     InstanceNum      Version
Orion Fwding Driver              1           3.0
Orion Fwding Driver              2           3.0
Orion Fwding Driver              3           3.0
Orion Fwding Driver              4           3.0
Orion Fwding Driver              5           3.0
Orion Fwding Driver              6           3.0
Orion Fwding Driver              7           3.0
Orion Fwding Driver              8           3.0
Orion Fwding Driver              9           3.0
Orion Fwding Driver             10           3.0
Orion Fwding Driver             11           3.0
Orion Fwding Driver             12           3.0
Orion Fwding Driver             13           3.0
Orion Fwding Driver             14           3.0
Orion Fwding Driver             15           3.0
Orion Fwding Driver             16           3.0
PHY                              1          56778642.1291
PHY                              2          56778642.1291
PHY                              3          56778642.1291
PHY                              4          56778642.1291
PHY                              5          56778642.1291
PHY                              6          56778642.1291
PHY                              7          56778642.1291
PHY                              8          56778642.1291
PHY                              9          56778642.1291
PHY                             10          56778642.1291
PHY                             11          56778642.1291
PHY                             12          56778642.1291
PHY                             13          56778642.1291
PHY                             14          56778642.1291
PHY                             15          56778642.1291
PHY                             16          56778642.1291
Santa-Cruz-Module                1           0.4
Santa-Cruz-Module                2           0.4
Falcon                           1           2.0
IO FPGA                          1          0.045
PM FPGA                          1          1.001
BIOS version                       v1.10.17(04/25/11)
Alternate BIOS version             v1.10.17(04/25/11)
module-2# 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
n7k(config)# license fcoe module 2
ERROR:  Cannot obtain license, line card is not supported
n7k(config)# license fcoe module 2 force
n7k(config)# show license usage
Feature                      Ins  Lic   Status Expiry Date Comments
                                 Count
MPLS_PKG                      No    -   Unused             -
STORAGE-ENT                   No    -   Unused             Grace 119D 23H
ENTERPRISE_PKG                No    -   Unused             Grace 119D 22H
FCOE-N7K-F132XP               No    0   Rsrved             Grace 81D 3H
ENHANCED_LAYER2_PKG           Yes   -   In use Never       license missing
SCALABLE_SERVICES_PKG         No    -   Unused             -
TRANSPORT_SERVICES_PKG        Yes   -   Unused Never       license missing
LAN_ADVANCED_SERVICES_PKG     Yes   -   In use Never       license missing
LAN_ENTERPRISE_SERVICES_PKG   Yes   -   Unused Never       license missing
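
With the license forced onto the module, the rest of the FCoE bring-up on the F1 card follows the usual NX-OS storage-VDC workflow. The outline below is only a sketch, not output from this switch; the storage VDC name, the VSAN/VLAN numbers, and the interface numbers are placeholders.
conf t
  install feature-set fcoe
  vdc fcoe-vdc type storage
    allocate shared interface ethernet 2/1-2
switchto vdc fcoe-vdc
conf t
  feature-set fcoe
  vsan database
    vsan 100
  vlan 100
    fcoe vsan 100
  interface vfc21
    bind interface ethernet 2/1
    no shutdown
  vsan database
    vsan 100 interface vfc21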

Similar Messages

  • Nexus 7K question regarding N7K-F132XP-15

    Dear expert,
    I work in a Cisco lab doing performance demos on mobile packet core solutions. We have a Nexus 7000 10-slot chassis with three 32-port 10G modules (N7K-M132XP-12L). Since the N7K-M132XP-12L limits total bandwidth to 80G, we want to get several of the new 32-port 1G/10G F modules (N7K-F132XP-15). I have a few questions though:
    1. Can the N7K-F132XP-15 and N7K-M132XP-12L co-exist in the same chassis? My feeling is "yes" but I want to confirm.
    2. The N7K-F132XP-15 can have 230G of bandwidth to the switch fabric given five fabric modules installed; how is the 230G distributed to each group of 8 ports (or 4 ports, maybe)? I know that for the N7K-M132XP-12L a group of 4 ports shares 10G; is there any such limitation for the N7K-F132XP-15?
    3. The N7K-F132XP-15 appears to be less expensive than the N7K-M132XP-12L even though it has more bandwidth; do I lose anything compared to the N7K-M132XP-12L?
    Thanks a lot
    Ning Chen
    Technical Marketing Engineer
    Cisco

    Hi Ning
    I can help out on a few, so to answer:
    1) Yes, you can have both N7K-F132XP-15 and N7K-M132XP-12L in the same chassis
    2) I'm not sure on this one, will have to look into some specs on it.  Perhaps someone else knows, feel free to chime in
    3) Keep in mind the N7K-F132XP-15 (F1 module) is not really comparable to the N7K-M132XP-12L (M1 module).  F1 modules are L2 only, so no routing is done on the line card itself.  F1 can do FabricPath, whereas M1 cannot.
    Depending on what you want to do with the module, the M1 may be a better choice.  If you are looking at pure L2 then the F1 is fine, and you can even take advantage of FabricPath; however, if you want to do any L3 you need the M1 modules.
    Hope that helps
    Chad
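    If you do go the F1 route for a pure L2 design and want FabricPath, the enablement is roughly the sketch below. The VLAN range and interfaces are placeholders, not taken from Ning's setup, and FabricPath needs the Enhanced Layer 2 license.
    conf t
      install feature-set fabricpath
      feature-set fabricpath
      vlan 10-20
        mode fabricpath
      interface ethernet 2/1-4
        switchport mode fabricpath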

  • N7K F1 TCAM limitations ("ERROR: Hardware programming failed. Reason: Tcam will be over used, please enable bank chaining and/or turn off atomic update")

    Platform/versions:
    # sh ver
      kickstart: version 6.2(2)
      system:    version 6.2(2)
    # sh mod
    Mod  Ports  Module-Type                         Model              Status
    1    32     10 Gbps Ethernet Module             N7K-M132XP-12      ok
    2    48     1000 Mbps Optical Ethernet Module   N7K-M148GS-11      ok
    3    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
    4    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
    5    0      Supervisor Module-1X                N7K-SUP1           ha-standby
    6    0      Supervisor Module-1X                N7K-SUP1           active *
    I recently tried to add a couple of "ip dhcp relay address" statements to an SVI and received the following error message:
    ERROR: Hardware programming failed. Reason: Tcam will be over used, please enable bank chaining and/or turn off atomic update
    Studying this page, I was able to determine that I seemed to be hitting a 50% TCAM utilization limit on the F1 modules, which prevented atomic updates:
    # show hardware access-list resource entries module 3
             ACL Hardware Resource Utilization (Module 3)
                              Instance  1, Ingress
    TCAM: 530 valid entries   494 free entries
    IPv6 TCAM: 8 valid entries   248 free entries
                              Used    Free    Percent
                                              Utilization
    TCAM                      530     494     51.75 
    I was able to work around it by disabling atomic updates:
    hardware access-list update default-result permit
    no hardware access-list update atomic
    I understand that with this config I am theoretically allowing ACL traffic during updates that shouldn't be allowed (alternatively I could drop it), but that's not really my primary concern here.
    First of all, I need to understand why adding DHCP relays apparently affects my TCAM entry resources.
    Second, I need to understand whether there are other implications of disabling atomic updates, such as during ISSU.
    Third, what are my options - if any - for planning the usage of the apparently relatively scarce resources on the F1 modules?

    Could be CSCua13121:
    Symptom:
    Hosts on certain VLANs will not get an IP address from DHCP.
    Conditions:
    M1/F1 chassis with a FabricPath VLAN and atomic update enabled.
    On such a system, if an SVI configured with DHCP relay is bounced (shut down and brought back up), a DHCP relay issue may result.
    Workaround:
    Disable atomic update and bounce the VLAN. Disabling atomic update may cause packet drops.
    Except that ought to be fixed in 6.2(2).
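    For reference, here is the workaround from the original post in one place, plus the command that restores the default once TCAM pressure is relieved or after moving to a fixed release. Treat it as a sketch; the module number is simply the one from the post.
    show hardware access-list resource entries module 3
    conf t
      hardware access-list update default-result permit
      no hardware access-list update atomic
      ! to go back to the default behaviour later:
      hardware access-list update atomic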

  • Some problem about FCoE with N7000

    Hello everyone!
         I'm a network guy and I think I'm new to storage. I'm configuring an FCoE link on a Nexus 7000, but something is wrong and I need some help from you!
    ==============================================================================================
         Problem:
             1. I can build a storage VDC, but I can't enable "feature fcoe"; "feature-set fcoe" is okay.
             2. When I check the interface status, it shows:
    xxxxxxxx# show int e3/39
    Ethernet3/39 is down (Internal-Fail errDisable, fcoe_mgr: ethernet interface module is not licen)
    ===============================================================================================
         My software version is 6.1.2 and the line card is N7K-F248XP-25E. Here is my license usage:
    xxxxxxxx# show license usage
    Feature                      Ins  Lic   Status Expiry Date Comments
                                     Count
    MPLS_PKG                      Yes   -   Unused Never       -
    STORAGE-ENT                   Yes   -   Unused Never       -
    VDC_LICENSES                  No    0   Unused             -
    ENTERPRISE_PKG                No    -   Unused             -
    FCOE-N7K-F132XP               No    0   Unused             -
    FCOE-N7K-F248XP               No    0   Unused             -
    ENHANCED_LAYER2_PKG           Yes   -   In use Never       -
    SCALABLE_SERVICES_PKG         Yes   -   Unused Never       -
    TRANSPORT_SERVICES_PKG        Yes   -   Unused Never       -
    LAN_ADVANCED_SERVICES_PKG     Yes   -   In use Never       -
    LAN_ENTERPRISE_SERVICES_PKG   Yes   -   Unused Never       -
    =================================================================================================
    Is it a license problem?  Please help me, thanks very much!

    Please confirm the following:
    1. On the switch, make sure that the fcoe feature-set is installed:
    install feature-set fcoe
    2. Make sure that you have allowed the fcoe feature-set for the storage VDC:
    conf t
    vdc <storage-vdc-name>
    allow feature-set fcoe
    3. Make sure that the module you are using is licensed for FCoE:
    conf t
    license fcoe module <slot>
    Hope this helps.
    -Ganesh
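    A quick way to check each of those points afterwards (a sketch only; the interface is the one from the original post):
    show feature-set
    show vdc membership
    show license usage
    show interface ethernet 3/39 brief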

  • Nexus 7000 fcoe expert advice

    Hi,
    I have one Nexus 7000 with an FCoE-capable blade, the N7K-F132XP-15.
    I want to deploy this in the lab. I created an FCoE port for test purposes, which is working, but some things are not very clear to me. I am listing them below.
    How do I create the interface membership for the storage VDC? Right now I created a shared interface with the default VDC, but can I create the interface as dedicated? What is the difference between a shared and a dedicated interface?
    Also, on my shared interface, while configuring the FCoE port it did not allow me to configure priority flow control settings. See the error below.
    fcoe-dvt(config-if)# priority-flow-control mode auto
    ERROR: pfc config not allowed on shared interface (0x1a000000)
    fcoe-dvt(config-if)#
    On the Nexus 5020, we have the priority flow control setting as "auto" for each FCoE interface. In the Nexus 7000 case, I don't know what form priority flow control takes or how it is applied.
    Any white paper on Nexus 7000 FCoE would be greatly appreciated.
    Thanks,

    Marko,
    Yes, it is supported. Here is a link to the interoperability matrix for storage; you can find CNA compatibility
    there as well. It is difficult to find on cisco.com because it has been folded in with the MDS documentation.
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix1.html
    Thanks,
    Bill
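    On the shared vs. dedicated question: the error above suggests PFC can only be configured on interfaces dedicated to the storage VDC, not on ones shared with the Ethernet VDC. Both allocation forms are entered from the default/admin VDC, roughly as in the sketch below; the VDC name and interface numbers are placeholders.
    conf t
      vdc fcoe-dvt-storage
        allocate interface ethernet 2/1-2
        allocate shared interface ethernet 2/3-4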

  • FCOE design

    I currently have a Nexus 5010 with 10 Gb hosts and fiber-attached storage at location A. At location B, I have an additional Nexus 5010 with a 10 Gb host with a CNA that will be using FCoE to connect to storage.
    I'd like the host at location B to connect via FCoE to the fiber-attached storage at location A. What configuration steps do I need to take to get location A's storage visible over FCoE? As I see it, I need to enable FCoE, allow the FCoE VLAN, and tie that VLAN to the VSAN. At this point should location A be able to see B's storage? Thanks

    Over here they describe a multihop FCoE solution:
    http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/tech_eval/mFCoEwp.html
    However, they still use dedicated links for the FCoE traffic from the N5K towards the N7Ks and a dedicated vPC for the normal LAN traffic. They even use a dedicated VDC for storage and normal traffic.
    Therefore I am not sure whether it is supported to mix the Ethernet traffic with the FCoE traffic on one link.
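    The per-switch plumbing described in the question (enable FCoE, carry the FCoE VLAN, map it to a VSAN, bind a vfc) would look roughly like the sketch below on a 5010. The VLAN/VSAN numbers and interfaces are placeholders, and the multihop/dedicated-link questions above still apply to the path between sites.
    feature fcoe
    vsan database
      vsan 200
    vlan 200
      fcoe vsan 200
    interface ethernet 1/10
      switchport mode trunk
      switchport trunk allowed vlan 1,200
    interface vfc10
      bind interface ethernet 1/10
      no shutdown
    vsan database
      vsan 200 interface vfc10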

  • Change VDC type on n7k sup1

    Hi.
    I have a pair of Nexus 7010 with sup1 running 6.2(8)
    There is default VDC only:
    # sh vdc detail
    Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3
    vdc id: 1
    vdc name: nex7010-12
    vdc state: active
    vdc mac address: 40:55:39:23:1e:c1
    vdc ha policy: RELOAD
    vdc dual-sup ha policy: SWITCHOVER
    vdc boot Order: 1
    vdc create time: Tue Dec 20 11:23:37 2011
    vdc reload count: 0
    vdc uptime: 989 day(s), 22 hour(s), 10 minute(s), 34 second(s)
    vdc restart count: 0
    vdc type: Ethernet
    vdc supported linecards: m1 f1 m1xl m2xl
    I want to add an N7K-F248XP-25E line card. So as I understand it, I should enter
    (config)# vdc nex7010-11 id 1
    (config-vdc)# limit-resource module-type m1 f1 m1xl m2xl f2e
    And then insert the line card. Is this right?
    So my questions: Is this operation disruptive to traffic forwarding? Do I lose the allocated interfaces and/or limit-resources after changing the VDC type and need to reconfigure them?

    Hi insharie,
    thanks for the fast answer :)
    My config:
     Model        
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-SUP1     
     N7K-SUP1     
     N7K-M108X2-12L
     N7K-M108X2-12L
     N7K-F132XP-15
    So there are no F2 line cards.
    I want to add an F2e line card, the N7K-F248XP-25E.
    In "Cisco Nexus 7000 Series NX-OS VDC Configuration Guide":
    "Beginning with Cisco NX-OS Release 6.2(2), F2e module type can exist with M1, M1XL, and M2XL module types in the same VDC."
    And as I understand it, the F2e will be working in proxy mode.
    "When you enter the limit-resource module-type command and it changes the F2e mode between the old VDC type and the new VDC type, you are prompted to enter the rebind interface command"
    In the table "Table 7 VDC Types Impacted By F2e Proxy Mode" I do not see my case:
    Old VDC: M1,F1 --> New VDC: M1,F1,F2e
    and I cannot figure out the impact.
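    Putting the quotes from the guide together, the change itself is just the limit-resource line, followed by a check that the VDC now lists f2e; whether you are prompted for the rebind step depends on the F2e proxy-mode transition, which is exactly the part the table does not cover for this case. Sketch only, using the VDC name from the show output above.
    conf t
      vdc nex7010-12 id 1
        limit-resource module-type m1 f1 m1xl m2xl f2e
    show vdc detail
    ! "vdc supported linecards" should now include f2e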

  • N7K-F248XP-25E 32k Route limit

    We have a Nexus 7009 with the N7K-F248XP-25E; the F2 is limited to 32K routes. How can this be bypassed so that we can use the 7K limit of 5.2 million routes? Does anyone know if it can be done?
    Thanks.

    No, I don't see a way to increase it. Please find the details below:
    Nexus 7700 F3-Series 12-Port 100 Gigabit Ethernet Module
    Layer 2 / Layer 3
    12 x 100 Gbps (CPAK)
    192 x wire-rate 100 Gigabit Ethernet ports are supported in a Nexus 7718 chassis
    1.8 billion packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    1.2 Tbps of data throughput (2.4 Tbps full-duplex)
    7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F312CK-26
    Nexus 7700 F3-Series 24-Port 40 Gigabit Ethernet Module
    Layer 2 / Layer 3
    24 x 40 Gbps (QSFP+)
    384 x wire-rate 40 Gigabit Ethernet ports in a Nexus 7718 chassis
    1.44 billion packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    960 Gbps data throughput (1.92 Tbps full-duplex)
    7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F324FQ-25
    Nexus 7700 F3-Series 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7718 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7706, 7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F348XP-23
    Nexus 7700 F2-Series Enhanced 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7718 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7706, 7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F248XP-23E
    NX-OS 6.2.2+
    Nexus 7000 F3-Series 6-Port 100 Gigabit Ethernet Module
    Layer 2 / Layer 3
    6 x 100 Gbps (CPAK)
    96 x wire-rate 100 Gigabit Ethernet ports in a single Nexus 7018 chassis
    900 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    600 Gbps data throughput (1.2 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F306CK-25
    Nexus 7000 F3-Series 12-Port 40 Gigabit Ethernet Module
    Layer 2 / Layer 3
    12 x 40 Gbps (QSFP+)
    192 x wire-rate 40 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F312FQ-25
    Nexus 7000 F2-Series Enhanced 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XP-25E
    NX-OS 6.1.2+
    Nexus 7000 F2-Series Enhanced 48-Port 1 and 10GBASE-T Ethernet Copper Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (RJ-45)
    768 x wire-rate 10GBase-T Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XT-25E
    NX-OS 6.1.2+
    Nexus 7000 F2-Series 48-Port 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XP-25
    NX-OS 6.0+
    Nexus 7000 F1-Series 32-Port 1 and 10 Gigabit Ethernet Module
    Layer 2
    32 x 1/10 Gbps (SFP and SFP+)
    512 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    480 million packets per second layer 2 forwarding
    320 Gbps data throughput (640 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    256K MAC addresses
    16K VLANs per module
    N7K-F132XP-15
    NX-OS 5.1.1
    Nexus 7000 M2-Series 24-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    24 x 10 Gbps (SFP+)
    384 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M224XP-23L
    NX-OS 6.1+
    Nexus 7000 M2-Series 6-Port 40 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    6 x 40 Gbps (QSFP+)
    96 x wire-rate 40 Gigabit Ethernet and 10 Gbps ports in a Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M206FQ-23L
    NX-OS 6.1+
    Nexus 7000 M2-Series 2-Port 100 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    2 x 100 Gbps (CFP)
    32 x wire-rate 100 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps of data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M202CF-22L
    NX-OS 6.1+
    Nexus 7000 M1-Series 8-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    8 x 10 Gbps (X2)
    128 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M108X2-12L
    NX-OS 5.0+
    Nexus 7000 M1-Series 32-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    32 x 10 Gbps (SFP+)
    512 x 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    4:1 oversubscription
    N7K-M132XP-12L
    NX-OS 5.1+
    Nexus 7000 M1-Series 48-Port Gigabit Ethernet Modules with XL Option
    Layer 2 / Layer 3
    48 x 1 Gbps (SFP/RJ45)
    768 x Gigabit Ethernet (SFP) or 10/100/1000 (RJ45) ports in Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    46 Gbps of data throughput (92 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    2 ingress and 4 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    Some oversubscription
    N7K-M148GS-11L
    NX-OS 5.0 (SFP) / 5.1+ (RJ45)
    Nexus 7000 M1-Series 32-Port 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    32 x 10 Gbps (SFP+)
    512 x 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    128K FIB routes
    4:1 oversubscription
    N7K-M132XP-12
    NX-OS 4.0+
    Nexus 7000 M1-Series 48-Port Gigabit Ethernet Modules
    Layer 2 / Layer 3
    48 x 1 Gbps (SFP/RJ45)
    768 x 10/100/1000 RJ45 or Gigabit Ethernet (SFP) ports in Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    46 Gbps of data throughput (92 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    2 ingress and 4 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    128K FIB routes
    Some oversubscription
    N7K-M148GT-11
    NX-OS 4.1+ (SFP) / 4.0+ (RJ45)

  • N7K with F1 VDC, F2 VDC and Storage VDC - Configuration/Support Help

    Hi there,
    We currently have an N7k running version 6.2.(6)  with a default ethernet VDC and a storage VDC.  We are running F1 modules (N7K-F132XP-15) and share the interfaces across to the storage VDC.
    We recently purchased an F2 module to connect our HP Bladecenter Chassis via a B22 FEX module.  We were told by our account team that this was supported in our environment.  I created a third VDC for the F2 module and now want to share those ports into the storage VDC.  
    When I try to add support for the F2 card in the storage VDC I get this error:
    nyc-nx02(config-vdc)# limit-resource module-type f1 f2
    This will cause all ports of unallowed types to be removed from this vdc. Continue (y/n)? [yes] 
    ERROR: Not all listed line card types are supported in a storage vdc on the current switch
    nyc-nx02(config-vdc)#
    I understand that the F1 and F2 modules are not supported in the same VDC, but can they be shared in a storage VDC on the same switch?  Before I continue troubleshooting I want to make sure this is supported and is best practice for this setup.
    Thanks for your help

    I think what you need is shared ports (access) split between the storage VDC and the Ethernet VDC?
    see attachment
    and
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_nx-os_cfg/vdc_overview.html#wp1087325

  • New N7K-F248XP-25

    Ciao,
    In my previous post I understood that this line card would be officially released on the 18th... and today I discovered something new in the published data sheet.
    "This extremely comprehensive set of Layer 2 and  Layer 3  functions makes this module ideal for data center networks,  where  density, performance, and continuous system operation are  critical."
    Performance
    • 720-mpps Layer 2 and Layer 3 forwarding capacity for both IPv4 and IPv6 packets
    The DS is here:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    Is this new line card, the N7K-F248XP-25, an L2/L3 card like the M series?
    And, according to the Dynamic Configuration Tool it's stated:
    "A separate VDC is needed when deploying the N7K-F248XP-25 modules in a  chassis that also contains other families of modules (i.e. N7K-M1xxx and  N7K-F1xxx).The VDC feature is enabled by the N7K-ADV1K9 license. A  separate VDC is NOT required when the chassis contains only  N7K-F248XP-25 modules."
    And NX-OS version 6 is also required...
    Any ideas??
    Ciao e grazie!
    Luciano

    Hi Robert,
    The F2 module is L2 and L3 capable.  It is quite a step up from the F1, which has no L3 capabilities, and adding FEX support to the F2 is a great improvement.  As far as F2 vs M1, the M1 still has larger L3 tables and can hold more routes in hardware, as well as run OTV.  So the two types of modules are for different locations in your data center; the M1 is a great fit for your routing edge because of this.
    Luciano,
    For fabric 2 on 7010 and 7018 you must have NX-OS 6. 
    As far as the new VDC requirement for F2: since the Nexus 7000 modules all have their own forwarding engines, they must all run in a compatible mode.  Think of it like mixing a 6500 with a Sup2 and a module with a DFC3... you can't do it.
    So instead of just saying your chassis can only contain F2 or F1/M1, we are able to separate the "generations" by VDCs within the same chassis.
    Chad
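    In practice the "separate VDC for F2" requirement just means carving out a VDC restricted to F2 modules, roughly as sketched below. The VDC name and interface range are placeholders, and creating the extra VDC needs the VDC license mentioned in the configuration tool quote.
    conf t
      vdc F2-VDC
        limit-resource module-type f2
        allocate interface ethernet 3/1-48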

  • Nexus 5K FCoE Licences

    Hi ,
    I need the following clarification from anyone who is familiar with the Nexus switch family, since these things are a bit confusing.
    1) If only one N55-8P-SSK9 (Nexus 5500 Storage License, 8 Ports) license is installed in a Nexus 5596UP, does this enable FCoE capability for the entire switch or only for 8 ports?
    2) If a 2232PP is connected to the 5596UP, do all the downlink ports connected to the 2232PP count as FCoE?
    Thanks
    Rajitha

    Hello Rajitha,
    Please see the table for the N55-8P-SSK9 (Nexus 5500 Storage License, 8 Ports) in the data sheet linked below.
    1) For more info on this, please refer to the link below:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
    2) If a 2232PP is connected to the 5596UP, then yes, all the downlink ports connected to the 2232PP count as FCoE.
    Thanks,
    Gaurav.

  • NPV and FCoE-NPV

    Hi,
    If I wish to enable NPV on a Nexus 5548 (so it can integrate with a 3rd party SAN switch), do I also need to enable FCoE-NPV?
    Enable NPV:
    switch(config)# npv enable
    Enable FCoE:
    switch(config)# feature fcoe
    FCoE-NPV:
    switch(config)# feature fcoe-npv
    That is, to use 'npv enable', do I need to use 'feature fcoe-npv'?
    I am trying to use FC to connect to a legacy SAN, while using FCoE to connect to servers in UCS. I'm not sure if I understand correctly, but I don't think I need FCoE-NPV for UCS, though I do need to globally enable NPV for the Brocade switch.
    One last thing... I've read that when enabling NPV the switch will erase the config and reboot. Why is this? Or does this only happen if the switch already has SAN config?
    Thank you

    Thank you. Here is some more information on our environment.
    Our legacy system consists of Dell rack servers (the ones that run ESX have HBAs), IBM storage, and 2x Brocade 300 SAN switches. The Brocades are not connected to each other, so they run as separate fabrics. Most FC devices have dual HBAs, so they can connect to both fabrics.
    Our new environment (still being built) uses Cisco UCS B series blades, and Nexus 5548 switching. For storage we use NetApp (NFS and CIFS, no FC/FCoE/iSCSI).
    We would like to connect the legacy and new environments, so one backup system can be used. I'd like to use FC for our new tape library, and continue using FC for our legacy. I'd like to be able to present storage to ESXi VMs running on the UCS blades.
    After doing some research, I think that making the 5Ks the 'core' fabric would be best, which means putting the Brocades into AG mode (NPV). The FC ports on the 5Ks would be in NPIV mode.
    As I understand it, the Fabric Interconnects (in end-host mode) are also in NPV. Does this mean I need to use 'feature fcoe' on the 5548s, but not 'feature fcoe-npv'?
    Does this plan make sense? Unfortunately I'm a networking guy, not a storage guy, so some of this may be way off.
    Thanks again for your help
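    If the 5548s do end up as the fabric core as described (Brocades in AG/NPV mode, Fabric Interconnects in end-host mode), the core-facing pieces on the 5548 side are NPIV plus the FC/FCoE features rather than 'npv enable'. A minimal sketch, with placeholder interface numbers and no claim that this is a verified design:
    feature fcoe
    feature npiv
    interface fc2/1
      switchport mode F
      no shutdown
    interface vfc101
      bind interface ethernet 1/1
      switchport mode F
      no shutdown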

  • How to implement FCoE interface on OL 6.3

    Hi everybody,
    I am trying to enable FCoE on a server with Oracle Linux 6.3, but I am having some trouble...
    The server is a UCS C240-M3 with a UCS VIC 1225 (Virtual Interface Card). By default, my Linux has 6 network interfaces configured with DHCP (eth0 -> eth5). They correspond to a 4-port Gigabit PCI card plus 2 virtual NICs from the VIC 1225 adapter. The VIC 1225 is supposed to be used for both networking and FCoE, thanks to VLANs.
    Linux displays two FCoE devices with lspci:
    [root@localhost /]# lspci | grep -i fcoe
    0d:00.0 Fibre Channel: Cisco Systems Inc VIC FCoE HBA (rev a2)
    0e:00.0 Fibre Channel: Cisco Systems Inc VIC FCoE HBA (rev a2)
    And it appears that the drivers are properly installed:
    [root@localhost ~]$ dmesg | grep fnic
    fnic: Cisco FCoE HBA Driver, ver 1.5.0.1
    fnic 0000:0d:00.0: PCI INT A -> GSI 46 (level, low) -> IRQ 46
    fnic 0000:0d:00.0: setting latency timer to 64
    fnic 0000:0d:00.0: irq 75 for MSI/MSI-X
    fnic 0000:0d:00.0: irq 76 for MSI/MSI-X
    fnic 0000:0d:00.0: irq 77 for MSI/MSI-X
    fnic 0000:0d:00.0: irq 78 for MSI/MSI-X
    scsi1 : fnic
    fnic 0000:0e:00.0: PCI INT A -> GSI 40 (level, low) -> IRQ 40
    fnic 0000:0e:00.0: setting latency timer to 64
    fnic 0000:0e:00.0: irq 79 for MSI/MSI-X
    fnic 0000:0e:00.0: irq 80 for MSI/MSI-X
    fnic 0000:0e:00.0: irq 81 for MSI/MSI-X
    fnic 0000:0e:00.0: irq 82 for MSI/MSI-X
    scsi2 : fnic
    How can I enable FCoE on my Linux system? I found some help on the Internet, and the examples show how to use an ethN interface, etc. I don't know how one normally identifies the interface (eth*) to use with FCoE. Is the FCoE interface considered a "network" interface by Linux, with a script generated in /etc/sysconfig/network-scripts? Am I supposed to see additional interfaces (eth6 and eth7) and use them for FCoE?
    Thanks

    OK, but in my case, even though the physical Cisco card is a CNA device (it has one link to the FCoE switch and one link to the Ethernet network), the operating system doesn't see such a device, since on the UCS we have configured:
    - Two virtual interfaces for Ethernet traffic
    - Two virtual interfaces for FCoE traffic
    Here we have the same behavior as a virtual machine connected to virtual devices: to the operating system, the two virtual devices configured on the UCS side for FCoE appear as dedicated physical FCoE devices.
    So I don't need to configure an ethX device for both Ethernet and FCoE, because the FCoE and Ethernet traffic are separated.

  • N7K-D132XP-15 module

    Is there an N7K-D132XP-15 module? I have seen this module referenced in many Cisco presentations. Is this a typo that should read N7K-F132XP-15, or a new module expected to be announced?

    Hello,
    So each fabric module on the Nexus 7000 supplies 46 Gbps per slot for every module you add:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/Data_Sheet_C78-437760.html
    The modules you are using dictate how many fabric modules you need.  For example, if you are using an N7K-M132XP-12L:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-605622.html
    it has an 80 Gbps backplane connection, so to get the full capability of the module you need 2 fabric modules.  You would likely want 3 for N+1 redundancy.
    However, if you are using the N7K-F132XP-15, it can make use of 230 Gbps on the fabric, so for full capacity you would want 5 fabric modules:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-605622.html
    Overall, just check the data sheet for the modules you are using; by looking at the switch fabric interface speed you should be able to plan out the number of fabric modules you'll need for your cards.
    Hope that helps
    Chad
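    As a rough sanity check using the numbers above (each Fabric-1 module adds 46 Gbps per slot):
    N7K-M132XP-12L:  80 Gbps / 46 Gbps per fabric module  -> 2 fabric modules (3 for N+1)
    N7K-F132XP-15:  230 Gbps / 46 Gbps per fabric module  -> 5 fabric modules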

  • While configuring speed 1000 I am getting an error: sw2-storage-vdc(config-if)# speed 1000 ERROR: Ethernet2/6: Configuration does not match the port capability.

    storage-vdc(config-if)# show module
    Mod  Ports  Module-Type                         Model              Status
    2    32     1/10 Gbps Ethernet Module           N7K-F132XP-15      ok
    sw1-gd78(config-if)# sh module
    Mod  Ports  Module-Type                         Model              Status
    2    48     1/2/4/8 Gbps FC Module              DS-X9248-96K9      ok
    4    8      10 Gbps FCoE Module                 DS-X9708-K9        ok
    7    0      Supervisor/Fabric-2a                DS-X9530-SF2AK9    active *
    8    0      Supervisor/Fabric-2a                DS-X9530-SF2AK9    ha-standby
    10   22     4x1GE IPS, 18x1/2/4Gbps FC Module   DS-X9304-18K9      ok
    Mod  Sw              Hw      World-Wide-Name(s) (WWN)
    2    5.2(2)          1.1     20:41:00:0d:ec:fb:8a:00 to 20:70:00:0d:ec:fb:8a:00
    4    5.2(2)          0.107   --
    7    5.2(2)          1.8     --
    8    5.2(2)          1.8     --
    10   5.2(2)          1.3     22:41:00:0d:ec:fb:8a:00 to 22:52:00:0d:ec:fb:8a:00
    sw1-gd78(config-if)# sh run int ethernet4/6
    !Command: show running-config interface Ethernet4/6
    !Time: Mon Feb 20 22:56:12 2012
    version 5.2(2)
    interface Ethernet4/6
      no shutdown
    sw1-gd78(config-if)# no shut
    sw1-gd78(config-if)# speed 1000
    ERROR: Ethernet4/6: Configuration does not match the port capability.
    sw1-gd72# sh int ethernet4/6 capabilities
    Ethernet4/6
      Model:                 DS-X9708-K9
      Type (SFP capable):    10Gbase-SR
      Speed:                 1000,10000
      Duplex:                full
      Trunk encap. type:     802.1Q
      Channel:               yes
      Broadcast suppression: percentage(0-100)
      Flowcontrol:           rx-(off/on/desired),tx-(off/on/desired)
      Rate mode:             dedicated
      QOS scheduling:        rx-(2q4t),tx-(1p3q4t)
      CoS rewrite:           yes
      ToS rewrite:           yes
      SPAN:                  yes
      UDLD:                  yes
      Link Debounce:         yes
      Link Debounce Time:    yes
      MDIX:                  no
      Port Group Members:    none
      TDR capable:           no
      FabricPath capable:    yes
      Port mode:             Switched
    sw1-gd72# sh int ethernet4/6 transceiver details
    Ethernet4/6
        transceiver is present
        type is 10Gbase-SR
        name is CISCO-FINISAR
        part number is FTLX8571D3BCL-CS
        revision is C
        serial number is FNS12090EMJ
        nominal bitrate is 10300 MBit/sec
        Link length supported for 50/125um OM2 fiber is 82 m
        Link length supported for 50/125um OM3 fiber is 300 m
        Link length supported for 62.5/125um fiber is 26 m
        cisco id is --
        cisco extended id number is 4
               SFP Detail Diagnostics Information (internal calibration)
                                         Alarms                  Warnings
                                    High        Low         High          Low
      Temperature   36.21 C        75.00 C     -5.00 C     70.00 C        0.00 C
      Voltage        3.29 V         3.63 V      2.97 V      3.46 V        3.13 V
      Current        8.11 mA       11.80 mA     4.00 mA    10.80 mA       5.00 mA
      Tx Power       -2.65 dBm       1.49 dBm  -11.30 dBm   -1.50 dBm     -7.30 dBm
      Rx Power       -2.21 dBm       1.99 dBm  -13.97 dBm   -1.00 dBm     -9.91 dBm
      Transmit Fault Count = 0
      Note: ++  high-alarm; +  high-warning; --  low-alarm; -  low-warning

    Ankit,
    You are trying to set speed 1000 on a 10G SFP:
    type is 10Gbase-SR
    You will need to insert a 1 Gig SFP, and then you will be able to set the speed.
    Also, I noticed that you first posted about interface 2/6, but the output you gave me is for 4/6. Are you sure you're on the right interface?
