N7K-F248XP-25 VDC licensing

Hi All,
My clients want two
N7K-F248XP-25
and one N7K-M148GS-11.
How many N7K-ADV1K9 licenses must I order?
One for the M1 and two for the F2s,
or
one for the M1 and one for the F2s?
My chassis is:
N7K-C7010-BUN2-R

Hi Robert,
The F2 module is L2 and L3 capable. It is quite a step up from the F1, which has no L3 capabilities, and adding FEX support on the F2 is a great improvement. As for F2 vs. M1, the M1 still has larger L3 tables and can hold more routes in hardware, as well as run OTV. So the two module types are meant for different locations in your data center; the M1 is a great fit for your routing edge.
Luciano,
For Fabric-2 modules on the 7010 and 7018 you must run NX-OS 6.
As for the new VDC requirement for F2: since the Nexus 7000's modules all have their own forwarding engines, they must all run in a compatible mode. Think of it like mixing a 6500 with a Sup2 and a module with a DFC3...you can't do it.
So instead of just saying your chassis can only contain F2 or F1/M1, we are able to separate the "generations" by VDC within the same chassis.
Chad
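On the original licensing question: to the best of my knowledge the Advanced Services package (N7K-ADV1K9) that enables VDCs is licensed per chassis, not per module, so a single license per 7010 should cover both module types; please confirm with your account team. A quick sketch of how to verify on the switch (standard NX-OS commands; the .lic file name below is hypothetical):

```
! Check which license packages are installed and in use on this chassis
switch# show license usage
! Install the Advanced Services (VDC) license from a file copied to bootflash
switch# install license bootflash:adv_services.lic
```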

Similar Messages

  • Nexus 7009 does not show the N7K-F248XP-25 modules' Ethernet ports in sh run

    Hi everyone,
    I have a question...
    I am going to install two Nexus 7009 switches with three N7K-F248XP-25 modules in each one, and I am planning to create 3 VDCs. But at initial configuration the system does not show the Ethernet ports of these modules, even though with show inventory and show module I can see that the modules are recognized and their status is OK. Is there something I have to do before starting to configure these modules? Enable some feature or license in order to see the ports in the show running-config CLI?

    You can enable F1, F2, M1, M1XL, and M2XL Series modules. There are no restrictions on the mix of module types allowed by the system module-type command; it allows a mix of F1, F2, M1, M1XL, and M2XL Series modules in the VDC.
    Note: The limit-resource module-type command controls the restrictions on the module types that can be mixed in the VDC.
    Note: Use the system module-type f2 command to allow F2E Series modules into a VDC. The ports from F2 and F2E Series modules can be allocated like any other ports.
    Note: The modules that you do not enable must not be powered on after you configure this feature and enter yes. An error message will force you to manually disable these modules before proceeding. This prevents major disruption and service issues within a VDC.
    The no form of this command resets the configuration to allow all module types.
    Hope this helps.
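As a concrete sketch of the above (the VDC name and port range are hypothetical; the limit-resource module-type syntax is the one shown elsewhere in this thread):

```
! Create a VDC restricted to F2 modules and allocate the F2 ports to it
switch(config)# vdc DC-F2 id 2
switch(config-vdc)# limit-resource module-type f2
This will cause all ports of unallowed types to be removed from this vdc. Continue (y/n)? [yes] yes
switch(config-vdc)# allocate interface ethernet 3/1-48
```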

  • New N7K-F248XP-25

    Ciao,
    in my previous post I understood that this linecard would be officially released on the 18th ... and today I discovered something new in the published data sheet (DS):
    "This extremely comprehensive set of Layer 2 and Layer 3 functions makes this module ideal for data center networks, where density, performance, and continuous system operation are critical."
    Performance
    • 720-mpps Layer 2 and Layer 3 forwarding capacity for both IPv4 and IPv6 packets
    The DS is here:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    Is this new N7K-F248XP-25 linecard L2/L3 capable like the M-Series?
    And, according to the Dynamic Configuration Tool, it states:
    "A separate VDC is needed when deploying the N7K-F248XP-25 modules in a chassis that also contains other families of modules (i.e. N7K-M1xxx and N7K-F1xxx). The VDC feature is enabled by the N7K-ADV1K9 license. A separate VDC is NOT required when the chassis contains only N7K-F248XP-25 modules."
    NX-OS Version 6 is also required ...
    Any ideas?
    Ciao, and thanks!
    Luciano


  • N7K-F248XP-25E 32k Route limit

    We have a Nexus 7009 with the N7K-F248XP-25E. The F2E is limited to 32K routes; how can this be bypassed so we can use the 7K's limit of 5.2 million routes? Does anyone know if it can be done?
    Thanks.

    No, I don't see a way to increase it. Please find the details below:
    Nexus 7700 F3-Series 12-Port 100 Gigabit Ethernet Module
    Layer 2 / Layer 3
    12 x 100 Gbps (CPAK)
    192 x wire-rate 100 Gigabit Ethernet ports are supported in a Nexus 7718 chassis
    1.8 billion packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    1.2 Tbps of data throughput (2.4 Tbps full-duplex)
    7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F312CK-26
    Nexus 7700 F3-Series 24-Port 40 Gigabit Ethernet Module
    Layer 2 / Layer 3
    24 x 40 Gbps (QSFP+)
    384 x wire-rate 40 Gigabit Ethernet ports in a Nexus 7718 chassis
    1.44 billion packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    960 Gbps data throughput (1.92 Tbps full-duplex)
    7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F324FQ-25
    Nexus 7700 F3-Series 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7718 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7706, 7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F348XP-23
    Nexus 7700 F2-Series Enhanced 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7718 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7706, 7710 and 7718 chassis
    Supervisor Engine 2E
    Fabric-2 Module (7010 and 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N77-F248XP-23E
    NX-OS 6.2.2+
    Nexus 7000 F3-Series 6-Port 100 Gigabit Ethernet Module
    Layer 2 / Layer 3
    6 x 100 Gbps (CPAK)
    96 x wire-rate 100 Gigabit Ethernet ports in a single Nexus 7018 chassis
    900 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    600 Gbps data throughput (1.2 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F306CK-25
    Nexus 7000 F3-Series 12-Port 40 Gigabit Ethernet Module
    Layer 2 / Layer 3
    12 x 40 Gbps (QSFP+)
    192 x wire-rate 40 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    64K MAC addresses
    4096 VLANs per VDC
    64K IPv4 routes
    32K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F312FQ-25
    Nexus 7000 F2-Series Enhanced 48-Port Fiber 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XP-25E
    NX-OS 6.1.2+
    Nexus 7000 F2-Series Enhanced 48-Port 1 and 10GBASE-T Ethernet Copper Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (RJ-45)
    768 x wire-rate 10GBase-T Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XT-25E
    NX-OS 6.1.2+
    Nexus 7000 F2-Series 48-Port 1 and 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    48 x 1/10 Gbps (SFP and SFP+)
    768 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    720 million packets per second layer 2 and layer 3 IPv4 and IPv6 forwarding
    480 Gbps data throughput (960 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 4 egress queues per port
    Jumbo Frame Support
    16K MAC addresses
    4096 VLANs per VDC
    32K IPv4 routes
    16K IPv6 routes
    Switch on Chip (SoC) ASIC
    N7K-F248XP-25
    NX-OS 6.0+
    Nexus 7000 F1-Series 32-Port 1 and 10 Gigabit Ethernet Module
    Layer 2
    32 x 1/10 Gbps (SFP and SFP+)
    512 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    480 million packets per second layer 2 forwarding
    320 Gbps data throughput (640 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    4 ingress and 8 egress queues per port
    Jumbo Frame Support
    256K MAC addresses
    16K VLANs per module
    N7K-F132XP-15
    NX-OS 5.1.1
    Nexus 7000 M2-Series 24-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    24 x 10 Gbps (SFP+)
    384 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M224XP-23L
    NX-OS 6.1+
    Nexus 7000 M2-Series 6-Port 40 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    6 x 40 Gbps (QSFP+)
    96 x wire-rate 40 Gigabit Ethernet and 10 Gbps ports in a Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M206FQ-23L
    NX-OS 6.1+
    Nexus 7000 M2-Series 2-Port 100 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    2 x 100 Gbps (CFP)
    32 x wire-rate 100 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    550 Gbps of data throughput (1.1 Tbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M202CF-22L
    NX-OS 6.1+
    Nexus 7000 M1-Series 8-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    8 x 10 Gbps (X2)
    128 x wire-rate 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    120 million packets per second layer 2 and layer 3 IPv4 forwarding and 60 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    N7K-M108X2-12L
    NX-OS 5.0+
    Nexus 7000 M1-Series 32-Port 10 Gigabit Ethernet Module with XL Option
    Layer 2 / Layer 3
    32 x 10 Gbps (SFP+)
    512 x 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    4:1 oversubscription
    N7K-M132XP-12L
    NX-OS 5.1+
    Nexus 7000 M1-Series 48-Port Gigabit Ethernet Modules with XL Option
    Layer 2 / Layer 3
    48 x 1 Gbps (SFP/RJ45)
    768 x Gigabit Ethernet (SFP) or 10/100/1000 (RJ45) ports in Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    46 Gbps of data throughput (92 Gbps full-duplex)
    7004, 7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    2 ingress and 4 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    1M IPv4 routes
    350K IPv6 routes
    Some oversubscription
    N7K-M148GS-11L
    NX-OS 5.0 (SFP) / 5.1+ (RJ45)
    Nexus 7000 M1-Series 32-Port 10 Gigabit Ethernet Module
    Layer 2 / Layer 3
    32 x 10 Gbps (SFP+)
    512 x 10 Gigabit Ethernet ports in a single Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    80 Gbps of data throughput (160 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    8 ingress and 8 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    128K FIB routes
    4:1 oversubscription
    N7K-M132XP-12
    NX-OS 4.0+
    Nexus 7000 M1-Series 48-Port Gigabit Ethernet Modules
    Layer 2 / Layer 3
    48 x 1 Gbps (SFP/RJ45)
    768 x 10/100/1000 RJ45 or Gigabit Ethernet (SFP) ports in Nexus 7018 chassis
    60 million packets per second layer 2 and layer 3 IPv4 forwarding and 30 mpps IPv6
    46 Gbps of data throughput (92 Gbps full-duplex)
    7009, 7010 and 7018 chassis
    Supervisor Engine 1, 2 and 2E
    Fabric-1 Module (7010 and 7018 chassis), Fabric-2 Module (7009, 7010, 7018 chassis)
    2 ingress and 4 egress queues per port
    Jumbo Frame Support
    128K MAC addresses
    4096 VLANs per VDC
    128K FIB routes
    Some oversubscription
    N7K-M148GT-11
    NX-OS 4.1+ (SFP) / 4.0+ (RJ45)

  • Does the F2 linecard (N7k-F248XP-25) on Nexus 7010 support Layer 3?

    Hi All,
    I am sure that F1 linecards on the Nexus weren't able to support L3 functionality, so my query is: does the F2 linecard (N7K-F248XP-25) on the Nexus 7010 support Layer 3?
    Regards,
    Mayank

    Hi, I know that this is resolved, but I have an F2E card:
    Model:                 N7K-F248XP-25E
    Type (SFP capable):    1000base-SX
    and I cannot configure an interface as L3:
    NX7K-1-VDC-3T-S1-L3FP(config)# interface ethernet 7/2
    NX7K-1-VDC-3T-S1-L3FP(config-if)# no switchport
    ERROR: Ethernet7/2: requested config change not allowed
    What's the problem?
    Software
      BIOS:      version 2.12.0
      kickstart: version 6.2(2)
      system:    version 6.2(2)
      BIOS compile time:       05/29/2013
      kickstart image file is: bootflash:///n7000-s2-kickstart-npe.6.2.2.bin
      kickstart compile time:  7/9/2013 20:00:00 [08/22/2013 04:51:27]
      system image file is:    bootflash:///n7000-s2-dk9.6.2.2.bin
      system compile time:     7/9/2013 20:00:00 [08/22/2013 08:07:03]
    Hardware
      cisco Nexus7000 C7010 (10 Slot) Chassis ("Supervisor Module-2")
      Intel(R) Xeon(R) CPU         with 12224956 kB of memory.
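Not a definitive answer, but a first diagnostic step would be to confirm which module types the current VDC allows, since an F2E port in a VDC whose type does not permit L3 on that card will reject no switchport. Standard NX-OS show commands:

```
! Show each VDC's type and supported line cards
switch# show vdc detail
! Confirm which VDC the F2E ports are allocated to
switch# show vdc membership
```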

  • How to do routing on N7K-F248XP-25E (Nexus 7010) ?

    Hi all,
    Please educate me on the following scenario: I have a Nexus 7010 with 2 L3 modules, N7K-M132XP-12L and N7K-M148GT-11L. Now, to add more ports for end devices, I am adding the N7K-F248XP-25E module, which I believe is for Layer 2 switching only. Is there a way to do routing on these L2 modules without having to go to the L3 modules? Thanks for all the help.

    Is there a way to do routing on these L2 modules without having to go to the L3 modules ?
    No.  If you have an M1/M2 card and routing is enabled, the F2E card will "step down" and do Layer 2 work.  All Layer 3 work will be done by the M1/M2 card.
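A sketch of what such a mixed VDC might look like on NX-OS 6.2(2) or later, where the F2E is admitted alongside M-Series cards and proxies its Layer 3 traffic to them (the VDC name is hypothetical; syntax per the VDC configuration guide quoted elsewhere in this thread):

```
! Allow M-Series and F2E cards in the same VDC; the F2E runs in proxy mode
switch(config)# vdc PROD id 2
switch(config-vdc)# limit-resource module-type m1 m1xl m2xl f2e
```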

  • 10Gig Copper SFP for Nexus F Series module (N7K-F248XP-25E)

    Does Cisco have a 10-Gigabit copper SFP+ that is supported on the Nexus platform in the F-Series module (N7K-F248XP-25E)?

    Leo,
    Thank you for the answer. You mentioned the 10-Gigabit copper module in your reply (part# N7K-F248XT-25E), but I already have the fiber module (part# N7K-F248XP-25E). I just need two 10-Gigabit copper ports for my storage and can't afford to buy a whole new copper module. My question was whether there is any 10-Gigabit copper SFP+ supported in the F-Series fiber module (part# N7K-F248XP-25E), not a Twinax/DAC cable, since my distance is longer and Twinax won't work.

  • N7K with F1 VDC, F2 VDC and Storage VDC - Configuration/Support Help

    Hi there,
    We currently have an N7K running version 6.2(6) with a default Ethernet VDC and a storage VDC. We are running F1 modules (N7K-F132XP-15) and share the interfaces across to the storage VDC.
    We recently purchased an F2 module to connect our HP BladeCenter chassis via a B22 FEX module. We were told by our account team that this was supported in our environment. I created a third VDC for the F2 module and now want to share those ports into the storage VDC.
    When I try to add support for the F2 card in the storage VDC, I get this error:
    nyc-nx02(config-vdc)# limit-resource module-type f1 f2
    This will cause all ports of unallowed types to be removed from this vdc. Continue (y/n)? [yes] 
    ERROR: Not all listed line card types are supported in a storage vdc on the current switch
    nyc-nx02(config-vdc)#
    I understand that the F1 and F2 modules are not supported in the same VDC, but can they be shared in a storage VDC on the same switch? Before I continue troubleshooting, I want to make sure this is the supported and best-practice setup.
    Thanks for your help

    I think what you need is shared (access) ports split between the storage VDC and the Ethernet VDC?
    See the attachment and:
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_nx-os_cfg/vdc_overview.html#wp1087325
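For reference, interfaces are shared into a storage VDC with the allocate shared interface command under the storage VDC's configuration context; a minimal sketch (VDC name and port range hypothetical):

```
! Share FCoE-capable ports into the storage VDC alongside the Ethernet VDC
switch(config)# vdc fcoe-vdc type storage
switch(config-vdc)# allocate shared interface ethernet 4/1-4
```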

  • Change VDC type on n7k sup1

    Hi.
    I have a pair of Nexus 7010 switches with Sup1 running 6.2(8).
    There is only the default VDC:
    # sh vdc detail
    Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3
    vdc id: 1
    vdc name: nex7010-12
    vdc state: active
    vdc mac address: 40:55:39:23:1e:c1
    vdc ha policy: RELOAD
    vdc dual-sup ha policy: SWITCHOVER
    vdc boot Order: 1
    vdc create time: Tue Dec 20 11:23:37 2011
    vdc reload count: 0
    vdc uptime: 989 day(s), 22 hour(s), 10 minute(s), 34 second(s)
    vdc restart count: 0
    vdc type: Ethernet
    vdc supported linecards: m1 f1 m1xl m2xl
    I want to add an N7K-F248XP-25E line card. As I understand it, I should enter:
    (config)# vdc nex7010-11 id 1
    (config-vdc)# limit-resource module-type m1 f1 m1xl m2xl f2e
    and then insert the line card. Is this right?
    So my questions: Is this operation disruptive to traffic forwarding? Do I lose the allocated interfaces and/or limit-resource settings after changing the VDC type, and do I need to re-enter them?

    Hi insharie,
    thanks for the fast answer :)
    My config:
     Model        
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-M148GT-11
     N7K-SUP1     
     N7K-SUP1     
     N7K-M108X2-12L
     N7K-M108X2-12L
     N7K-F132XP-15
    So there are no F2 line cards.
    I want to add an F2E line card, the N7K-F248XP-25E.
    The "Cisco Nexus 7000 Series NX-OS VDC Configuration Guide" says:
    "Beginning with Cisco NX-OS Release 6.2(2), F2e module type can exist with M1, M1XL, and M2XL module types in the same VDC."
    As I understand it, the F2e will be working in proxy mode:
    "When you enter the limit-resource module-type command and it changes the F2e mode between the old VDC type and the new VDC type, you are prompted to enter the rebind interface command."
    In "Table 7 VDC Types Impacted By F2e Proxy Mode" I do not see my case:
    Old VDC: M1, F1 --> New VDC: M1, F1, F2e
    so I cannot figure out the impact.
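If I read the guide correctly, the sequence would look roughly like the sketch below; the rebind interface step is what re-programs the F2e ports for the new mode. This is paraphrased from the quoted guide, not tested, and the exact prompts may differ:

```
switch(config)# vdc nex7010-12 id 1
switch(config-vdc)# limit-resource module-type m1 f1 m1xl m2xl f2e
This will cause all ports of unallowed types to be removed from this vdc. Continue (y/n)? [yes] yes
! The guide says you are then prompted to rebind the F2e interfaces
switch(config-vdc)# rebind interface
```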

  • VDC configuration error

    Hi,
    I have a Nexus 7018 with NX-OS 6.0, 2 x N7K-M148GT-11L (copper, 1 Gig) cards, and 1 x N7K-F248XP-25 (10 Gig SFP) card.
    I created VDC TEST and was able to allocate all interfaces of the N7K-M148GT-11L (copper, 1 Gig) cards, but I am unable to allocate any interface of the N7K-F248XP-25 (10 Gig SFP) card. While trying to allocate, I get the error below:
    "ERROR: 1 or more interfaces are from a module of type not supported by this vdc"
    Can someone help me understand this and let me know how to resolve it?
    Best Regards,
    Ashok

    Hello guys,
    Can I have F2 and M2 modules in the same VDC?
    Or can F2 work only with other F2 modules in the same VDC?
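The plain F2 (N7K-F248XP-25) cannot share a VDC with M-Series cards, so the usual workaround on NX-OS 6.0 is a dedicated F2-only VDC; a minimal sketch (VDC name and slot number hypothetical):

```
! Create an F2-only VDC and allocate the F2 card's ports to it
switch(config)# vdc TEST-F2 id 3
switch(config-vdc)# limit-resource module-type f2
switch(config-vdc)# allocate interface ethernet 9/1-48
```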

  • Nexus N7K Sup 1 replacement to SUP2E: rollback plan and EPLD upgrade time

    Dear Expert,
    My customer has an N7K with SUP1 in an HA pair, as shown below, and I need to replace the two SUP1s (6.1.1) with SUP2Es running 6.1.2.
    Per the Cisco documentation, an EPLD upgrade is required when going from SUP1 to SUP2E with Fabric-2 modules, but there is no guidance on how long the EPLD upgrade takes.
    Mod  Ports  Module-Type                         Model              Status
    1    0      Supervisor module-1X                N7K-SUP1           active *
    2    0      Supervisor module-1X                N7K-SUP1           ha-standby
    3    48     1000 Mbps Optical Ethernet XL Modul N7K-M148GS-11L     ok
    4    48     1/10 Gbps Ethernet Module           N7K-F248XP-25      ok
    5    24     10 Gbps Ethernet Module             N7K-M224XP-23L     ok
    Mod  Sw              Hw
    1    6.1(1)          3.0     
    2    6.1(1)          3.0     
    3    6.1(1)          1.3     
    4    6.1(1)          1.0     
    5    6.1(1)          2.0 
    1. Can I do this activity with no downtime?
    As per the Cisco hardware installation guide, it seems I should replace both SUPs at the same time, but if I can replace them separately, please let me know.
    2. How many minutes are required to upgrade the EPLDs with the above hardware inventory?
    I assumed about 1 hour to 90 minutes (max); is that correct?
    3. If the upgrade fails, I need to roll back from SUP2E (S2, 6.1.2) to the original SUP1 (S1, 6.1.1).
    Are the steps below correct for the rollback?
    - Turn off the power to the switch.
    - For each Supervisor 2E module installed in the switch, remove the module and replace it with a Supervisor 1 module.
    - Power on and verify that all modules are fine; use show version to confirm the rollback to S1 6.1.1.
    - Run 'install all epld bootflash:n7000-s1-epld.6.1.1a.img parallel' after rebooting with the original SUP1s.
    - Verify configuration and status.
    4. When I downgrade to n7000-s1-epld-6.1.1a, will only the line cards be downgraded, or will the SUP, fan, and power module EPLDs also be downgraded?
    If so, how many minutes will be required with the previous hardware inventory?
    I assumed about 40 to 60 minutes (max); is that correct?
    5. In the S1-6.1.1a to S2-6.1.2a upgrade, will the N2K FEX software be upgraded as well?
    I need to report this downtime by tomorrow morning, so I seek your advice urgently.
    Please share your experience with me; thanks in advance.
    Jason.

    Hey Jason,
    Regarding your queries:
    1. Can I have no downtime for this activity? Can I replace the SUPs separately instead of both at the same time? - Replacing supervisors will incur downtime, as you need to physically move hardware in and out of the chassis.
    2. How many minutes are required for the EPLD upgrade with the above hardware inventory? - Once the supervisor is upgraded, you may upgrade all the modules in parallel, so considering the boot time of all these modules your estimate is within a safe limit.
    3. If the upgrade fails, are the rollback steps from SUP2E (S2, 6.1.2) to the original SUP1 (S1, 6.1.1) correct? - Indeed, they are correct.
    4. When downgrading to n7000-s1-epld-6.1.1a, is only the line card downgraded, or are the SUP, fan, and power module EPLDs downgraded as well, and how long will it take? - Check this:
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/epld/epld_rn_6-1.html#wp93995. Everything will be downgraded, so again add the boot time to the downgrade time; I think with this inventory 60 minutes is manageable.
    5. From S1-6.1.1a to S2-6.1.2a, will the N2K FEX software be upgraded as well? - Yes, it will be upgraded.
    I also suggest the following links for a smooth upgrade:
    Replacing Supervisor 1 Modules with Supervisor 2 or Supervisor 2E Modules:http://www.cisco.com/c/en/us/td/docs/switches/datacenter/hw/nexus7000/installation/guide/n7k_hig_book/n7k_replacing.html#pgfId-1410535
    Cisco Nexus 7000 Series FPGA/EPLD Upgrade Release Notes, Release 6.1: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/epld/epld_rn_6-1.html
    HTH.
    Regards,
    RS.
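The EPLD steps referenced above can be sketched as follows (module number and image file name depend on your inventory and target release; run during a maintenance window):

```
! Compare the running vs. available EPLD versions for a module before upgrading
switch# show version module 4 epld
! Upgrade the EPLDs on all modules in parallel (image name per target release)
switch# install all epld bootflash:n7000-s2-epld.6.1.2.img parallel
```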

  • N7k - Domain ID for double-sided vPC topology

    I'm planning the configuration for 2 N7Ks using the following network diagram:

    Hello Clark, thanks, but this is not my case. I have 2 N7Ks with 2 VDCs on each, so I'll create a vPC from the core switches and another vPC from the aggregation switches, and my question is about the vPC domain ID for each configured vPC.
    What I have found so far is that each vPC pair has to have a different vPC domain ID.
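That matches the usual double-sided vPC guidance: each vPC pair gets its own domain ID so the two pairs derive different LACP system IDs. A minimal sketch (domain numbers hypothetical):

```
! Core pair: same domain ID on both core VDCs
vpc domain 10
! Aggregation pair: a different domain ID on both aggregation VDCs
vpc domain 20
```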

  • Welcome to the Solutions and Architectures Data Center & Virtualization Community

    Welcome to the Solutions and Architectures Data Center & Virtualization Community. We encourage everyone to share their knowledge and start conversations related to Data Center and Virtualization solutions and architectures. All topics are welcome, including Servers (Unified Computing), Data Center Security, Data Center Switching, Data Center Management and Automation, Storage Networking, Application Networking Services, and solutions to solve business problems.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    Cheers,
    Dan Bruhn 


  • Welcome to Solutions and Architectures Borderless Networks Community

    Welcome to the Solutions and Architectures Borderless Networks Community. We encourage everyone to share their knowledge and start conversations related to Borderless solutions and architectures. All topics are welcome, including Switches, Routers, Security, Wireless, Cloud and System Management, WAN Optimization, and solutions to solve business problems.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    Cheers,
    Dan Bruhn       


  • Welcome to the Solutions and Architectures Collaboration Community

    Welcome to the Solutions and Architectures Collaboration Community. We encourage everyone to share their knowledge and start conversations related to Collaboration solutions and architectures. All topics are welcome, including Collaboration Applications, Customer Collaboration, Telepresence, Unified Communications, and solutions to solve business problems.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    Cheers,
    Dan Bruhn

