Nexus 7000 and VDCs

Hello,
We have two Nexus 7010 chassis populated with F2e and M1 (1 Gig) cards. I created two VDCs on each: VDC11 on chassis 1 and VDC12 on chassis 2 containing F2e ports, and VDC21 on chassis 1 and VDC22 on chassis 2 containing M1 ports. I have a keepalive and vPC peer link between VDC11 and VDC12, and that's working fine. I connected VDC21 to VDC11 and VDC22 to VDC12 with crossover cables (Layer 2 trunk ports). I plan on having servers connected to VDC21 and VDC22 (dual-homed). Since F2e and M1 cards can't mix in the same VDC, I am having an issue: without a vPC peer link between VDC21 and VDC22 I cannot dual-home my servers. Is there a workaround to this issue until Cisco updates the OS to support both M1 and F2e ports in the same VDC?
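For reference, a minimal sketch of the keepalive/peer-link pairing described above between VDC11 and VDC12 (interface numbers, domain ID, and keepalive addresses are illustrative, not the poster's actual values):
    ! on VDC11; mirror on VDC12 with the keepalive addresses swapped
    feature vpc
    vpc domain 10
      peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management
    interface port-channel 1
      switchport
      switchport mode trunk
      vpc peer-link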
Thanks

Hey,
Here is a document I have that should help answer your question:
http://d2zmdbbm9feqrf.cloudfront.net/2012/usa/pdf/BRKDCT-2121.pdf
Best regards!

Similar Messages

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm, Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect an Ethernet uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say, 2 10G ports from Fabric Interconnect 1 to one Nexus 7000 and a similar connection from Fabric Interconnect 2 to the other Nexus 7000, can I still configure vPC, and is it a validated design? If it is, what are the pros and cons versus having 2 connections from each Fabric Interconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured on the Nexus 7000, is it COMPULSORY to configure a port channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what are the pros and cons of having NO port channel within UCS versus having a port channel where vPC is concerned?
    Q3. If vPC is to be configured on the Nexus 7000, I understand there is a limitation confining us to ONLY one vSphere NIC Teaming load-balancing algorithm, i.e. Route Based on IP Hash. Is that correct?
    Again, what are the pros and cons here with regard to application behaviour where Layer 2 or 3 is concerned? Or what are the BEST PRACTICES?
    I would really appreciate it if someone could help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect an Ethernet uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say, 2 10G ports from Fabric Interconnect 1 to one Nexus 7000 and a similar connection from Fabric Interconnect 2 to the other Nexus 7000, can I still configure vPC, and is it a validated design? If it is, what are the pros and cons versus having 2 connections from each Fabric Interconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bow-tie the uplinks to two 7Ks or 5Ks.
    Q2. If vPC is to be configured on the Nexus 7000, is it COMPULSORY to configure a port channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what are the pros and cons of having NO port channel within UCS versus having a port channel where vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy. vPC would be preferred.
    Q3. If vPC is to be configured on the Nexus 7000, I understand there is a limitation confining us to ONLY one vSphere NIC Teaming load-balancing algorithm, i.e. Route Based on IP Hash. Is that correct? //Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "route based on originating virtual port ID".
    Again, what are the pros and cons here with regard to application behaviour where Layer 2 or 3 is concerned? Or what are the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
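    As a rough reference, the N7K-side vPC toward the Fabric Interconnect uplinks could look like the sketch below (domain ID, keepalive addresses, port-channel number, and interfaces are all illustrative, not taken from this thread):
        feature lacp
        feature vpc
        vpc domain 20
          peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management
        interface ethernet 1/1-2
          switchport mode trunk
          channel-group 11 mode active
        interface port-channel 11
          switchport mode trunk
          vpc 11
    The same port-channel/vPC pair goes on the second N7K, with the FI side set up as a matching uplink port channel in UCSM.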
    Cheers,
    David Jarzynka

  • Migration from Nexus 7000 without VDC to VDC

    Hi all
    I am working on a DataCenter architecture where we would like to implement Nexus 7000.
    For the time being there is only one "context", but we may take the opportunity to implement VDCs later on.
    I was not able to find a clear answer on the following :
      Can we add the VDC license and configure a new VDC on a Nexus 7000 currently running without VDCs?
      I suppose this is possible, but does the whole configuration need to change, or can a VDC be added without any interruption to the current environment?
    Thanks in advance !

    Hello
    To have VDC support on the N7K you will require the following license:
    LAN_ADVANCED_SERVICES_PKG
    To configure a new VDC you need to run:
    Nexus(config)# vdc <vdc-name>
    This will create a new VDC that is separate from the current one. It shouldn't affect the production environment, since separate processes are started for the new VDC.
    Then you can allocate some interfaces to it and configure them.
    But you need to be careful to allocate only unused interfaces, and not to add configuration that exceeds the available resources.
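    For example, a minimal sequence could look like this (the VDC name and interface range are illustrative):
        N7K(config)# vdc PROD
        N7K(config-vdc)# allocate interface ethernet 2/1-4
        N7K(config-vdc)# exit
        N7K# switchto vdc PROD
    Note that NX-OS warns you when allocating interfaces, because any configuration on the moved ports is removed from the source VDC.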
    Here is a very good explanation of what a VDC is and how it works:
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/White_Paper_Tech_Overview_Virtual_Device_Contexts.html
    And here is VDC config guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_nx-os_cfg.html
    HTH,
    Alex

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago; curious if anything has changed?

    Hi Jenny,
    I think the answer will depend on what you mean by "is FEX supported with vPC".
    When connecting a FEX to the Nexus 7000 you're able to run vPC from the Host Interfaces of a pair of FEX to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram in the post Nexus 7000 Fex Supported/Not Supported Topologies.
    What you're not able to do is run vPC on the FEX Network Interfaces that connect up to the Nexus 7000, i.e., dual-homing the FEX to two Nexus 7000. This is shown in illustrations 8 and 9 under the FEX topologies not supported on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
    The view is that when connecting a FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEX, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is supported by a fully redundant Nexus 7000 (i.e., SEs, fabrics, power, I/O modules etc.), the availability is limited by the single FEX, and so dual-homing does not increase availability.
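    As a sketch of the supported case, a host vPC across a pair of single-homed FEX is just a port channel on the FEX host interfaces of each Nexus 7000 with a matching vpc number (FEX, port, and channel numbers are illustrative):
        ! on each N7K, toward the dual-NIC server (FEX numbering differs per switch)
        interface ethernet 101/1/1
          switchport
          switchport mode access
          channel-group 10 mode active
        interface port-channel 10
          vpc 10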
    Regards

  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a dual-DC upgrade. It is a pretty standard design; the customer requires an L2 extension between the DCs for vMotion etc. The customer would like to leverage certain features of the Nexus product suite, including:
    TrustSec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
    As always, cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting back from Customer seems to be mirrored in Cisco's technology roadmap. This relates specifically to the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise Customers ask the question of why they need to have the 7000 and 5000 in their topologies as many of the features they need are supported in both platforms and their environments will never scale to meet such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
    The Nexus 7000 only supports the 2000 on the M-series I/O modules; can FCoE be implemented on a 2000 connected to a 7000 using the M-series I/O module?
    Is the F-series I/O module the only I/O module that supports FCoE?
    Are there any plans to introduce native FC support on the Nexus 7000?
    Are there any plans to introduce full fabric support (230 Gbps) to the M-series I/O module?
    Are there any plans to introduce FabricPath to the M-series I/O module?
    Are there any plans to introduce L3 support to the F-series I/O module?
    Is the entire 2000 series allocated to a single VDC, or can individual 2000-series ports be allocated to a VDC?
    Is TrustSec only supported on multi-hop DCI links when using the ASR with an EoMPLS pseudowire?
    Are there any plans to introduce TrustSec and VDC to the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
    The only I/O card which cannot co-exist with other cards in the same VDC is the F2, due to its specific hardware implementation.
    All other cards can be mixed.
    Regarding the fabric versions: Fabric-2 gives much higher throughput than Fabric-1.
    So in order to get full speed from F2/M2 modules you will need Fab-2 modules.
    Fab-2 modules won't give any advantage to M1/F1 modules.
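    As an aside, which module types a VDC will accept is controlled per VDC with the limit-resource command; for example (VDC name illustrative):
        N7K(config)# vdc AGG
        N7K(config-vdc)# limit-resource module-type f2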
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex

  • Requirements for delay and bandwidth when using OTV on Nexus 7000 between two data centers separated by 25 miles?

    We have two Nexus 7000, and I need to use them with OTV between two data centers separated by 25 miles, but I don't know the optimal bandwidth and delay (ms) values for the extended VLAN IDs (production and DAG replication) for a Microsoft Exchange environment. Can somebody please tell me which values are required to operate OTV in optimal conditions in this case? We have about 35,000 users who will use that email platform. Thanks a lot for your comments. Regards.

  • Nexus 7000 Platform Logging

    Hello,
    We recently had a power supply failure in one of our Nexus 7000s, and I noticed that the syslog for the Platform is only present in the default VDC, and not in any of the other VDCs syslogs. Is this by design, or is there a logging level I can turn up in another VDC to capture this log? Thanks for any input
    syslog from default VDC -
    2013 Mar 18 23:10:34  %PLATFORM-2-PS_CAPACITY_CHANGE: Power supply PS3 changed its capacity, possibly due to power cable removal/insertion (Serial number xxxxxxxx)
    Nothing appears in the VDC where I would like to get the logging.
    Default VDC logging level:
    xxx7K02# show log level platform
    Facility        Default Severity        Current Session Severity
    platform                5                       5
    0(emergencies)          1(alerts)       2(critical)
    3(errors)               4(warnings)     5(notifications)
    6(information)          7(debugging)
    xxx7K02#
    Logging level from the specific VDC where we have management tools:
    xxx-LOW# show log level platform
    Facility        Default Severity        Current Session Severity
    platform                5                       5
    0(emergencies)          1(alerts)       2(critical)
    3(errors)               4(warnings)     5(notifications)
    6(information)          7(debugging)
    xxx-LOW#

    Hello Carl,
    What version of code are you running on your Nexus 7k?
    The expected behavior is:
    "When a hardware issue occurs, syslog messages are sent to all VDCs."
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_mgmt.html#wp1170241
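    If you want to change what severity of platform messages a given VDC records locally, the facility level can be raised per VDC; a sketch (level 6 = informational, shown purely for illustration):
        xxx-LOW(config)# logging level platform 6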
    Dave

  • Ask the Expert: Basic Introduction and Troubleshooting on Cisco Nexus 7000 NX-OS Virtual Device Context

    With Vignesh R. P.
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions of Cisco expert Vignesh R. P. about the Cisco® Nexus 7000 Series Switches and support for the Cisco NX-OS Software platform.
    The Cisco® Nexus 7000 Series Switches introduce support for the Cisco NX-OS Software platform, a new class of operating system designed for data centers. Based on the Cisco MDS 9000 SAN-OS platform, Cisco NX-OS introduces support for virtual device contexts (VDCs), which allows the switches to be virtualized at the device level. Each configured VDC presents itself as a unique device to connected users within the framework of that physical switch. The VDC runs as a separate logical entity within the switch, maintaining its own unique set of running software processes, having its own configuration, and being managed by a separate administrator.
    Vignesh R. P. is a customer support engineer in the Cisco High Touch Technical Support center in Bangalore, India, supporting Cisco's major service provider customers in routing and MPLS technologies. His areas of expertise include routing, switching, and MPLS. Previously at Cisco he worked as a network consulting engineer for enterprise customers. He has been in the networking industry for 8 years and holds CCIE certification in the Routing & Switching and Service Provider tracks.
    Remember to use the rating system to let Vignesh know if you have received an adequate response. 
    Vignesh might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through January 18, 2013. Visit this forum often to view responses to your questions and the questions of other community members.

    Hi Vignesh
    Is there any limitation on connecting an N2K directly to the N7K?
    If I have one F2 card (10G) and another F2 card (1G) and I want to create 3 VDCs:
    VDC1 = DC core
    VDC2 = Aggregation
    VDC3 = Campus core
    Do we need to add a link between the different VDCs?
    Thanks

  • FCoE and multiple Nexus 5000s and a 7000 core

    Hi
    I have a customer who is looking at four Nexus 5020s to start with and more in the future uplinked to a Nexus 7000 core.
    Am I right in thinking that whilst data traffic will be able to reach hosts connected to a different Nexus 5020 via the 7000 core, FCoE traffic will not?
    If so, what is the recommended way of rolling out pods of 5020s for VMware servers with converged network adapters so that they can all access the SAN?
    Regards
    Pat

  • Ciscoworks 2.6 and Nexus 7000 issues

    Running LMS 2.6 with RME version 4.0.6, and DFM 2.0.13.
    We keep getting false alerts in DFM on the temperature in our Nexus 7000 switches. The alert says that the high-temperature threshold is 45C and that it's being exceeded at 46C. The thing that bothers me is that the actual switch reports the threshold as around 100C or more. Any ideas as to why DFM would be picking up a temperature so far off the mark?
    Also, in regard to RME, I cannot pull configs from the Nexus 7000s. The check box in "archive config" is grayed out so that I can't check it. I downloaded the device packages for the 7000 into RME, but it will not pull configs. Is this not supported under our version of RME, or would there be some other reason that I can't do this?
    Thanks for any assistance with these issues!

    UPDATE:
    I fixed the RME config pull issue. I thought I had previously downloaded the Nexus device packages so that RME could work with them, but upon checking again, it looks like I just didn't have them installed. Got that piece fixed, and now I can pull configs from the switches just fine.
    Still having problems with the temperature reading in DFM not accurately reflecting what is actually on the switches. Any suggestions as to where to start hunting down the issue for this are greatly appreciated. Thanks!

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000s connected with a vPC peer link. Each of the Nexus 7000s has a FEX attached to it.
    The server has two connections going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
    R(A)                 R(S)
     |                    |
    7K --- Peer Link --- 7K
     |                    |
    FEX                  FEX
    Server connected to both FEX
    The question is: we have two routers connected, one to each of the Nexus 7Ks, running HSRP (one active and one standby). How can I configure HSRP on the Nexus switches, and how will the traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both of them can forward packets.) Will the traffic go to the secondary switch and then via the peer link to the active switch and then to the active router? (From what I have read, packets from end hosts that go via the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
    If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
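    For the HSRP piece itself, on a vPC pair it is configured as standard HSRP on the SVI of both peers; a minimal sketch (VLAN, addresses, and group number are illustrative):
        feature hsrp
        feature interface-vlan
        interface Vlan100
          no shutdown
          ip address 10.1.100.2/24
          ! the peer uses 10.1.100.3/24 and a lower priority
          hsrp 100
            priority 110
            ip 10.1.100.1
    With vPC, both the active and the standby peer forward traffic destined to the HSRP virtual MAC, which is the behavior the original poster alludes to.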
    HTH
    Jay Ocampo

  • Nexus 7000 - Fabric Failure and VOQ

    I have been doing some research on the Nexus 7K, and from what I am reading the following occurs:
    1. Fabric module failure - causes all traffic sent across that fabric module's crossbar to be lost
    2. VOQ - protects against lack of buffer availability on the egress interface
    Neither of these provides reliable transmission over the crossbar or acknowledgement of data crossing the crossbar fabric.
    So my question is: if I have storage traffic (unicast-based FCIP) crossing the fabric when a fabric module fails, is my understanding correct that those frames are lost on the portion of the fabric controlled by the failed fabric module?
    Even though the main fabric itself is intact for other traffic, this still means that I have loss in what is supposed to be a system built for zero loss to support storage traffic.
    Am I way off here, or is this accurate?
    Thanks.

    Thanks for the response. From what I have read, the control plane and data plane are completely isolated in the Nexus 7K. The supervisor modules handle the control plane and the central arbiter, and the fabric modules handle the VOQs and the crossbar communication.
    It works like this, as I understand it:
    1. A packet arrives at the ingress of a line card and is passed to the port ASIC.
    2. The port ASIC does its thing and forwards the packet to the replication engine.
    3. The replication engine passes the packet on to the L2 and L3 forwarding engines - they do their dance and pass the packet on to the fabric engine.
    4. The fabric engine and VOQ manager consult the central arbiter to get credits to send traffic on the fabric.
    5. The central arbiter checks the egress line card to ensure buffer space is available. If it is available, it grants credit to the fabric engine and VOQ engine to send the packet on the fabric.
    The fabric crossbar bandwidth is determined by the number of fabric modules installed (1 FM = 23 Gbps x 2). When 2 or more FMs are installed to create more fabric bandwidth, forwarding across the fabric for unicast traffic acts like an EtherChannel and performs some sort of hashing algorithm to send the packet across the fabric.
    Let's say you have a 9216-byte packet and 3 fabric modules installed. From what I am reading, the packet would be broken up into 4 packets of around 2304 bytes each (I think they might be 2460, I can't recall) and passed across the fabric.
    So you have 1 large packet, fragmented across the fabric cards, sent to the destination I/O card.
    While it is in transit, let's say one of the fabric modules in the load-balancing group dies. My understanding is that the traffic on that trace goes with it.
    The traffic is lost in this case, since there is no acknowledgement of traffic sent across the fabric. I would think that in a high-bandwidth situation this could be a lot of traffic, considering the speeds we are talking about here.
    Is this a possibility, or am I missing some redundancy here that will protect the traffic that would be lost crossing the fabric?
    Is this the case on the 6500 as well for traffic crossing the fabric?
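    One way to watch per-fabric-module load while testing such a failure scenario is the fabric utilization view on the N7K (output omitted here, since readings vary per system):
        N7K# show hardware fabric-utilization detail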
    Thanks in advance.
    Mike

  • Nexus 7000 supervisor replacement

    I'm trying to get my head around how to replace a supervisor module on a Nexus 7000 with a single supervisor. The setup has the default VDC and one other defined. So if a sup were faulty, what would be the best way to handle this? I have the default VDC config and the other VDC config on a TFTP server. What's the easiest and fastest way to handle this? In the default VDC, add an address, copy in the default VDC config, and then, once that's in, copy in the other VDC config file? I'm just used to IOS, where you normally had a single file: you got the box on the air enough to copy the config file into startup and reloaded. Hope this makes sense. I tried to read some of the docs, but it's still not clear what exactly needs to be done. Thanks for any help...

    That makes sense.
    1. Restore the default VDC config.
    2. Create your second VDC.
    3. Restore the second VDC config.
    Don't forget to have a backup of any license files that you may have purchased, for example MPLS.
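    A rough CLI sketch of those steps (server address, file names, and VDC name are all illustrative):
        ! from the default VDC, once mgmt0 is reachable again
        switch# copy tftp://192.0.2.10/default-vdc.cfg running-config vrf management
        switch# copy running-config startup-config
        ! recreate the second VDC, switch into it, then restore its config
        switch(config)# vdc MYVDC
        switch# switchto vdc MYVDC
        switch-MYVDC# copy tftp://192.0.2.10/myvdc.cfg running-config vrf management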

  • Nexus 7000 ERSPAN

    I have a question about ERSPAN on the Nexus 7000.
    I set up the necessary configuration for an ERSPAN source on my 7000s, with a destination on a Catalyst 6500. I went through all the standard ERSPAN configuration to see if there is a difference between Nexus and Catalyst.
    I saw that there is a command "monitor erspan origin ip-address <ip> global" that needs to be entered to make ERSPAN work correctly. I tried entering this on the VDC that has the source. I received a message that the origin command must be run only from the default VDC and wasn't allowed where I was trying to run it.
    So, I entered the command in the admin VDC, using the origin as the loopback address on the VDC with the actual source, and the ERSPAN immediately started working.
    Can someone explain what this command is doing, why it's needed and the flow of the ERSPAN traffic?
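    For reference, the split described above looks roughly like this (addresses, session number, and ERSPAN ID are illustrative):
        ! in the default/admin VDC; 10.1.1.1 stands in for the loopback of the source VDC
        monitor erspan origin ip-address 10.1.1.1 global
        ! in the VDC that holds the source
        monitor session 1 type erspan-source
          erspan-id 100
          vrf default
          destination ip 10.2.2.2
          source interface ethernet 1/1 both
          no shut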
    Thanks.

    All, can I please add another question: can we have ERSPAN on a Nexus 7K set up with a source session in one VDC and a destination in another VDC on the same N7K?
    Unfortunately I couldn't find any documentation from Cisco regarding ERSPAN on the Nexus 7K with multiple VDCs.
    Any help is much appreciated

  • NEXUS 7000 xcvrInval

    Hi,
    I have some Nexus 7000s with FET-10Gs in xcvrInval status
    Eth7/33          N5k-S1-3T-1/3   xcvrInval trunk     auto    auto    Fabric Exte
    and some other FET-10G with notconn status
    Eth7/8           FEX-101         notconnec 1         auto    auto    Fabric Exte
    If I swap the positions of the two FET-10Gs, the port status doesn't change:
    FET-10G from 7/8 to 7/33
    FET-10G from 7/33 to 7/8
    7/33 holds xcvrInval status
    7/8 holds notconnec status
    I have reconfigured from a defaulted interface with the same results.
    Below you'll find the same serial number in different ports; the difference is in the current and power readings when it is xcvrInval versus when it is notconnec.
    What can I do to get the FET-10G in e7/33 validated?
    sh interface e7/33   transceiver details
    Ethernet7/33
          transceiver is present
          type is Fabric Extender Transceiver
          name is CISCO-FINISAR  
          part number is FTLX8570D3BCL-C2
          revision is A  
          serial number is FNS17201TE5    
          nominal bitrate is 10300 MBit/sec
          Link length supported for 62.5/125um fiber is 10 m
          Link length supported for 50/125um OM3 fiber is 100 m
          cisco id is --
          cisco extended id number is 4
          cisco part number is 10-2566-02
          cisco product id is FET-10G            
          cisco vendor id is V02
          number of lanes 1
                 SFP Detail Diagnostics Information (internal calibration)
                    Current              Alarms                  Warnings
                    Measurement     High        Low         High          Low
        Temperature   19.30 C        75.00 C      5.00 C     70.00 C       10.00 C
        Voltage        3.31 V         3.63 V      2.97 V      3.46 V        3.13 V
        Current        0.06 mA  --     11.80 mA     4.00 mA    10.80 mA       5.00 mA
        Tx Power          N/A        22.69 dBm    8.69 dBm     18.69 dBm     12.69 dBm
        Rx Power          N/A        22.99 dBm    6.09 dBm     18.99 dBm     10.09 dBm
        Transmit Fault Count = 0
        Note: ++  high-alarm; +  high-warning; --  low-alarm; -  low-warning
    now in slot 7/8
    Ethernet7/8
          transceiver is present
          type is Fabric Extender Transceiver
          name is CISCO-FINISAR  
          part number is FTLX8570D3BCL-C2
          revision is A  
          serial number is FNS17201TE5    
          nominal bitrate is 10300 MBit/sec
          Link length supported for 62.5/125um fiber is 10 m
          Link length supported for 50/125um OM3 fiber is 100 m
          cisco id is --
          cisco extended id number is 4
          cisco part number is 10-2566-02
          cisco product id is FET-10G            
          cisco vendor id is V02
          number of lanes 1
                 SFP Detail Diagnostics Information (internal calibration)
                    Current              Alarms                  Warnings
                    Measurement     High        Low         High          Low
        Temperature   23.17 C        75.00 C      5.00 C     70.00 C       10.00 C
        Voltage        3.30 V         3.63 V      2.97 V      3.46 V        3.13 V
        Current        7.50 mA       11.80 mA     4.00 mA    10.80 mA       5.00 mA
        Tx Power      17.65 dBm      22.69 dBm    8.69 dBm     18.69 dBm     12.69 dBm
        Rx Power     -12.21 dBm --   22.99 dBm    6.09 dBm     18.99 dBm     10.09 dBm
        Transmit Fault Count = 0
        Note: ++  high-alarm; +  high-warning; --  low-alarm; -  low-warning
    NX7K-1-VDC-3T-S1-L2FP# sh int e7/33
    Ethernet7/33 is down (Transceiver validation failed)
    admin state is up, Dedicated Interface
      Belongs to Po51
      Hardware: 1000/10000 Ethernet, address: 8478.ac23.6cec (bia 8478.ac23.6cec)
      Description: N5k-S1-3T-1/3
      MTU bytes (CoS values):  MTU  1500(0-2,4-7) bytes  MTU  2112(3) bytes
      BW 10000000 Kbit, DLY 10 usec, reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, medium is broadcast
      Port mode is trunk
    auto-speed, auto-duplex, media type is 10G
      Beacon is turned off
      Auto-Negotiation is turned on
      Input flow-control is off, output flow-control is off
      Auto-mdix is turned on
      Rate mode is dedicated
      Switchport monitor is off
      EtherType is 0x8100
      EEE (efficient-ethernet) : n/a
      Last link flapped never
      Last clearing of "show interface" counters 07:22:09
      0 interface resets
      Load-Interval #1: 30 seconds
        30 seconds input rate 0 bits/sec, 0 packets/sec
        30 seconds output rate 0 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        300 seconds input rate 0 bits/sec, 0 packets/sec
        300 seconds output rate 0 bits/sec, 0 packets/sec
      RX
        88 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC/FCS  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        88 unicast packets  0 multicast packets  0 broadcast packets
        0 output packets  0 bytes
        0 jumbo packets
        0 output error  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble  0 output discard
        0 Tx pause
    NX7K-1-VDC-3T-S1-L2FP# sh int e7/8
    Ethernet7/8 is down (Link not connected)
    admin state is up, Dedicated Interface
      Belongs to Po101
      Hardware: 1000/10000 Ethernet, address: 8478.ac23.6cd3 (bia 8478.ac23.6cd3)
      Description: FEX-101
      MTU bytes (CoS values):  MTU  1500(0-2,4-7) bytes  MTU  2112(3) bytes
      BW 10000000 Kbit, DLY 10 usec, reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, medium is p2p
      Port mode is fex-fabric
    auto-speed, auto-duplex, media type is 10G
      Beacon is turned off
      Auto-Negotiation is turned on
      Input flow-control is off, output flow-control is off
      Auto-mdix is turned on
      Rate mode is dedicated
      Switchport monitor is off
      EtherType is 0x8100
      EEE (efficient-ethernet) : n/a
      Last link flapped 5week(s) 1day(s)
      Last clearing of "show interface" counters never
      0 interface resets
      Load-Interval #1: 30 seconds
        30 seconds input rate 0 bits/sec, 0 packets/sec
        30 seconds output rate 0 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        300 seconds input rate 0 bits/sec, 0 packets/sec
        300 seconds output rate 0 bits/sec, 0 packets/sec
      RX
        10588 unicast packets  0 multicast packets  0 broadcast packets
        4 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC/FCS  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        10588 unicast packets  1 multicast packets  0 broadcast packets
        4 output packets  5688 bytes
        0 jumbo packets
        0 output error  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble  0 output discard
        0 Tx pause

    Hi Ans,
    You are right. I defaulted the port again and configured it with switchport mode fex-fabric, and now the FET-10G is validated:
    NX7K-1-VDC-3T-S1-L2FP(config-if)#  description FEX-101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   switchport
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   switchport mode fex-fabric
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   fex associate 101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   medium p2p
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   channel-group 101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   no shutdown
    NX7K-1-VDC-3T-S1-L2FP(config-if)#
    NX7K-1-VDC-3T-S1-L2FP(config-if)# sh int e7/33 status
    Port             Name            Status    Vlan      Duplex  Speed   Type
    Eth7/33          FEX-101         notconnec 1         auto    auto    Fabric Exte
    NX7K-1-VDC-3T-S1-L2FP(config-if)#
    Thanks for your help, and have a nice weekend.
    Atte,
    EF
