Nexus 7000 - Moving vPC keep alive

We have two Nexus 7010 switches running a vPC domain between them.  On 7010B, the peer keepalive (in the mgmt VRF) is connected to a 3560 (3560B), *and* that 3560B also has a data connection back to the same 7010B.  Everything is fine with that setup.
On our second switch, 7010A, the peer keepalive link is likewise connected to a corresponding 3560A switch.  However, that 3560A has no data connection back to 7010A.
I want to move the 3560A's uplink from where it is now to 7010A, which will break the keepalive while the cable moves.  However, I will not be breaking the vPC peer link, as it is a pair of 10G connections between the two 7010 switches.
I have read that the vPC won't come up unless the peer keepalive is present, but it wasn't clear what happens when the keepalive link is taken down momentarily.  Moving the cable would be quick, but I know the MAC table will need to update, since 7010B will now see the keepalive across its peer link instead of from some other direction.
Can I take the peer keepalive link down, provided the peer link stays up?
We are running kickstart and system version 5.0(3).
Thanks!
/alan

To your direct question: as long as the peer link stays up, a brief loss of the keepalive raises alerts but does not suspend existing vPCs; the keepalive mainly matters when the peer link fails or at role election.
The peer keepalive runs over IP on UDP port 3200, with a 1-second interval and a 5-second timeout.
It is not a requirement to have the peer-keepalive destination IP in the same subnet, but if it is not, you need to make sure it is routed properly and that the routed infrastructure carrying the keepalives satisfies the timing above. Since the peer keepalive is UDP, delivery is not reliable, so no single event on that IP infrastructure should be able to cause keepalive packets to be lost.
One recommendation I heard in the past was to use your management ports for the peer keepalive. But one problem happens during ISSU with dual supervisors: each supervisor reboots, and after the upgrade the active and standby roles end up switched. So if you did not connect both management ports (one from each supervisor) to your management network, you will lose keepalives during the software upgrade, because a supervisor switchover occurs and the other management port becomes active.
The second recommendation is therefore to create a dedicated peer-keepalive VRF so that it has its own address space. If you have an M1 1-Gig card in each switch, connect one cable between the switches, assign IP addresses (e.g., 1.1.1.1/30 and 1.1.1.2/30), and put the interfaces in the peer-keepalive VRF. With this setup you do not lose peer keepalives during ISSU, because the line cards do not need to reboot, and your peer-keepalive UDP traffic does not depend on any other switch or router.
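As a rough sketch of that second recommendation (the interface, VRF name, and addresses here are assumptions, not from the original post), one switch could look like this, with the mirrored addresses on the peer:
! interface and addressing are examples only - use a free M1 port
vrf context vpc-keepalive
interface Ethernet3/48
  vrf member vpc-keepalive
  ip address 1.1.1.1/30
  no shutdown
vpc domain 1
  peer-keepalive destination 1.1.1.2 source 1.1.1.1 vrf vpc-keepalive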

Similar Messages

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000 connected with VPC Peer link. Each of the Nexus 7000 has a FEX attached to it.
The server has two connections, one going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
R(A)                 R(S)
 |                     |
7K --- Peer Link --- 7K
 |                     |
FEX                  FEX
    Server connected to both FEX
The question is: we have two routers, one connected to each Nexus 7K, running HSRP (one active, one standby). How can I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both of them can forward packets.) Will the traffic go to the secondary switch and then via the peer link to the active switch and then to the active router? (From what I've read, packets from end hosts that cross the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it (a sketch follows below). However, if those will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
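For illustration only (the port and VLAN numbers are hypothetical), the dedicated non-vPC trunk could look like:
! example values - pick an unused port and the non-vPC VLAN in question
interface Ethernet1/10
  description inter-switch trunk for non-vPC VLANs
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 999
  no shutdown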
    HTH
    Jay Ocampo

  • VPC Keep-alive link in F1 series Linecards

    Hi.
Can we use N7K F1 line cards for the vPC keepalive link?
That is, configure a Layer 2 port channel and a point-to-point VLAN interface?
    thanks

    Hi,
    This is supported, but as per page 28 of the Best Practices for Virtual Port Channels (vPC) on Cisco Nexus 7000 Series Switches:
    Note: If you are using a pure Cisco Nexus F1 Series system or VDC (that is, only F1 line cards used in the chassis or only F1 ports in the VDC), the peer-keepalive link can be formed with mgmt0 interface or 10-Gigabit Ethernet front panel port. In the latter case, use the management command under the SVI to enable it for inband management (otherwise, the SVI is brought down because no M1 modules exist in the system or VDC).
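As a sketch of that note (the VLAN number and addressing are assumptions), the keepalive SVI in an F1-only VDC might be configured as follows; the management command keeps the SVI usable for inband management despite the absence of M1 modules:
! example values only
feature interface-vlan
interface Vlan99
  management
  ip address 10.99.99.1/30
  no shutdown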
    It's probably worth noting the recommendations as to how the peer-keepalive link should be configured:
    Strong Recommendations:
    When building a vPC peer-keepalive link, use the following in descending order of preference:
1. Dedicated link(s) (a 1-Gigabit Ethernet port is enough) configured as L3. A port channel with 2 x 1G ports is even better.
    2. Mgmt0 interface (along with management traffic)
    3. As a last resort, route the peer-keepalive link over the Layer 3 infrastructure
    Regards

  • Log configuration changes to syslog on Nexus 7000?

    I need to be able to log any configuration changes to syslog on our Nexus switches. On IOS this is easy with the archive commands, but I'm a little stuck trying to do this on our Nexus gear. On the IOS gear I run the commands:
    archive
    log config
    logging enable
    logging size 100
    hidekeys
    notify syslog
    How do I do the equivalent on NX-OS?

Cisco NX-OS can log configuration change events along with the individual changes when AAA command accounting is enabled.
    With command accounting enabled, all CLI commands entered, including configuration commands, are logged to the configured AAA server. Using this information, a forensic trail for configuration change events along with the individual commands entered for those changes can be recorded and reviewed.
    Because of this capability, it is strongly advised that AAA command accounting be enabled and configured.
    Refer to the “TACACS+ Command Accounting” section of this document for more information.
The Nexus 7000, by default, keeps a local accounting log of all the configuration commands entered on the device; you can view this with the 'show accounting log' command.
In NX-OS, we changed the way logging works. We keep a local accounting log of all the configuration changes ("show accounting log"), but if you want to send those logs to a server, it must be done through a TACACS+ server. Please see the documentation below:
    Configuring AAA on Nexus
    TACACS command accounting
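A minimal sketch of the TACACS+ command accounting setup referred to above (the server address, key, and group name are placeholders):
! placeholder server address and key
feature tacacs+
tacacs-server host 192.0.2.10 key Myt@csKey
aaa group server tacacs+ TAC-LOGGING
  server 192.0.2.10
  use-vrf management
aaa accounting default group TAC-LOGGING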
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**

  • Nexus 7000 - unexpected shutdown of vPC-Ports during reload of the primary vPC Switch

    Dear Community,
    We experienced an unusual behavior of two Nexus 7000 switches within a vPC domain.
    According to the attached sketch, we have four N7Ks in two data centers - two Nexus 7Ks are in a vPC domain for each data center.
    Both data centers are connected via a Multilayer-vPC.
    We had to reload one of these switches and I expected the other N7K in this vPC domain to continue forwarding over its vPC-Member-ports.
Actually, all vPC member ports were disabled on the secondary switch until the reload of the first N7K (vPC role: primary) had finished.
    Logging on Switch B:
    20:11:51 <Switch B> %VPC-2-VPC_SUSP_ALL_VPC: Peer-link going down, suspending all vPCs on secondary
    20:12:01 <Switch B> %VPC-2-PEER_KEEP_ALIVE_RECV_FAIL: In domain 1, VPC peer keep-alive receive has failed
In the case of a peer-link failure, I would expect this behavior if the other switch were still reachable via the peer-keepalive link (via the mgmt port), but since we reloaded the whole switch, the vPCs should have continued forwarding.
    Could this be a bug or are there any timers to be tuned?
    All N7K switches are running on NX-OS 6.2(8)
    Switch A:
    vpc domain 1
      peer-switch
      role priority 2048
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-B>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Switch B:
    vpc domain 1
      peer-switch
      role priority 1024
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-A>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Best regards

    Problem solved:
During the reload of the Nexus 7K, the line cards were powered off slightly earlier than the mgmt interface. As a result, the secondary Nexus 7K received at least one vPC peer-keepalive message while the peer link was already powered off. To avoid a split-brain scenario, it shut down its vPC member ports.
We are now using dedicated interfaces on the line cards for the vPC peer-keepalive link, so a reload of one N7K no longer results in a total network outage.

  • Nexus 7000 vPC modification - avoiding type1 inconsistencies

    Hi Everyone,
    I need to configure some features on a pair of Nexus 7000's running 4.2(6) - one of them is Root Guard.
    I am aware that when I enable Root Guard on the first vPC peer, the vPC will go into suspended state until I configure the other vPC peer identically.
    This is causing me a big service disruption headache as I need to do this for a whole Data Centre.
I see that on the Nexus 5K you can use port-profiles, which seem to enable config synchronisation across vPC peers - so I assume the vPC would stay up, due to both peers receiving the config at exactly the same time - but this feature is not available on the Nexus 7K.
Does anybody know for sure whether creating a scheduled job to run at the same time on both vPC peers with identical config content - i.e. applying Root Guard to the vPC - would prevent the vPC from going into the suspended state?
    If not, do you know of any other ways to prevent vPC going into suspend?
    Thanks in advance for any advice!
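For reference, the kind of scheduled job the poster describes could be sketched as below (the job name, interface, and start time are hypothetical; this only illustrates the mechanism, not a confirmation that it avoids the type-1 suspension):
feature scheduler
! example job - apply Root Guard to a vPC member port-channel
scheduler job name ADD-ROOTGUARD
  configure terminal ; interface port-channel10 ; spanning-tree guard root
end-job
scheduler schedule name ROOTGUARD-ONCE
  job name ADD-ROOTGUARD
  time start 2012:01:15:23:00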

    Hi Raj,
thank you for your response.
We have vPC between Core and Aggregation (all 7Ks) and from Aggregation to Access (5Ks) - vPC down from Core all the way to Access and also up all the way from Access to Core.
So from an STP point of view, the topology is a single switch at Core, Aggregation and Access - so no loops.
I agree this limits the potential for trouble if, for example, a switch is plugged into the access layer by mistake - but the customer is adamant they want it (Root Guard).
    Thanks,
    Oswaldo

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago, curious if anything has changed?

    Hi Jenny,
I think the answer will depend on what you mean by "is FEX supported with vPC?"
When connecting a FEX to the Nexus 7000, you're able to run vPC from the host interfaces of a pair of FEXes to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram in the post Nexus 7000 Fex Supported/Not Supported Topologies.
What you're not able to do is run vPC on the FEX network interfaces that connect up to the Nexus 7000, i.e., dual-homing the FEX to two Nexus 7000s. This is shown in illustrations 8 and 9 under the FEX topologies not supported on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
The view is that when connecting a FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEXes, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is hosted by a fully redundant Nexus 7000 (i.e., supervisor engines, fabrics, power, I/O modules, etc.), availability is limited by the single FEX, and so dual-homing does not increase availability.
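As an illustration of the supported topology, the host-facing vPC across a pair of FEXes might look like the following on each N7K (the FEX, port, VLAN, and vPC numbers are assumptions):
! example values - one FEX host interface per parent switch
interface Ethernet101/1/1
  switchport access vlan 10
  channel-group 100 mode active
interface port-channel100
  switchport access vlan 10
  vpc 100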
    Regards

  • Which port to use for the peer-keep alive

    Hi All,
    We have 2 Nexus 6001s in our data center.
The management port of each 6001 is connected to the other, and this link is used as the peer keepalive link.
My colleague is suggesting that we use one of the inline data ports as the keepalive link instead.
Can you please advise on the pros and cons of using the management port versus an inline port as the keepalive link, and the best practice to follow in this case?
    Thanks,
    Pete

    Hi Pete,
Here are the best recommendations, in order of preference (a keepalive config sketch follows the list).
1. Use mgmt0 (along with management traffic)
     * Pros: you completely separate the vPC peer-keepalive link into another VRF (management), so it does not mingle with the data or global VRF.
     * Cons: the vPC PKL is dependent on the OOB management switch.
2. Use dedicated 1G/10GE front-panel ports.
    * Pros: this can be a direct link between the N6K pair, not dependent on other boxes.
    * Cons: you need extra SFPs for the vPC PKL, and the PKL traffic joins the global VRF.
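For option 1, the keepalive over mgmt0 amounts to a single line per switch (the domain ID and addresses below are placeholders):
vpc domain 10
  ! placeholder addresses - the mgmt0 IPs of the two N6Ks
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management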
    HTH
    Jay Ocampo

  • Nexus 7000 and VDCs

    Hello,
We have two Nexus 7010 chassis populated with F2e and M1 (1-Gig) cards. I created two VDCs on each: VDC11 on chassis 1 and VDC12 on chassis 2 containing F2e ports, and VDC21 on chassis 1 and VDC22 on chassis 2 containing M1 ports. I have a keepalive and vPC peer link between VDC11 and VDC12, and that's working fine. I connected VDC21 to VDC11 and VDC22 to VDC12 with crossover cables (Layer 2 trunk ports). I plan on having servers dual-homed to VDC21 and VDC22. Since F2e and M1 cards can't mix in the same VDC, I am having an issue: without a vPC peer link between VDC21 and VDC22, I cannot dual-home my servers. Is there a workaround to this issue until Cisco updates the OS to support both M1 and F2e ports in the same VDC?
    Thanks

Wey,
Here is a document I found that should help with what you're asking:
http://d2zmdbbm9feqrf.cloudfront.net/2012/usa/pdf/BRKDCT-2121.pdf
Best regards!

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as the FHRP, rapid-pvst, and OSPF as the IGP. Furthermore, the Cat6500s run MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise, the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
In preparing for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming etherchannels will run in "normal" mode, to avoid any problems with BA. For now, only L2 will be used on the N7Ks, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
The questions arise when migrating Layer 3 functionality, incl. HSRP. As I understand it, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs and assigning the same virtual IP, only decrementing the priority to a value below the current standby router. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
From a routing perspective, I'm thinking of configuring OSPF/BGP etc. similar to the 6500s, only tweaking the metrics (cost, local-pref, etc.) to constrain forwarding to the 6500s, and subsequently migrating both routing and FHRP at the same time - maybe not in a big-bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present this seems like a valid approach to me, but maybe someone has experience with this (good or bad), so I'm hoping someone has some insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

In a normal scenario, yes. But not in vPC. HSRP is a bit different in a vPC environment: even though an SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
I suggest you set up the SVIs on the N7K but leave them in the down state. When you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "ready" I mean the spanning-tree root is at the N7K, along with all the L3 northbound links (toward the core).
I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, we had one SVI where HSRP would not establish between the C6K and the N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 seconds to 1 minute per SVI, but it was less painful and wasted less time, because we didn't need to figure out what was wrong or chase NX-OS bugs.
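A sketch of the pre-staged (shut down) N7K SVI described above; the VLAN, addressing, group number, and priority are assumptions:
feature hsrp
interface Vlan100
  ! keep shut until ready to take over as gateway
  shutdown
  ip address 10.100.0.3/24
  hsrp 100
    preempt
    priority 90
    ip 10.100.0.1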
    HTH,
    jerry

  • Dell Servers with Nexus 7000 + Nexus 2000 extenders

    << Original post by smunzani. Answered by Robert. Moving from Document section to Discussions>>
    Team,
I would like to use some of the existing Dell servers in a new network design of Nexus 7000 + Nexus 2000 extenders. What are my options for FEC to the hosts? All references to the M81KR I found on CCO relate to the UCS product only.
    What's best option for following setup?
    N7K(Aggregation Layer) -- N2K(Extenders) -- Dell servers
We need 10G to the servers due to the dense population of VMs. The customer is not up for dumping recently purchased Dell boxes in favor of UCS. The customer's VMware license is Enterprise Edition.
    Thanks in advance.

To answer your question, the M81KR VIC is a mezzanine card for UCS blades only.  For Cisco rack servers there is a PCIe version, called the P81.  These are both made for Cisco servers only, due to their integration with server management and virtual interface functionality.
More information on it here:
http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/data_sheet_c78-558230.html
    Regards,
    Robert

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
Please help me answer the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet uplinks from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say, 2 10G ports from Fabric Interconnect 1 to one Nexus 7000 and a similar connection from Fabric Interconnect 2 to the other Nexus 7000, can I still configure vPC, and is that a validated design? If it is, what are the pros and cons versus having 2 connections from each Fabric Interconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bow-tie the uplinks to (2) 7Ks or 5Ks.
Q2. If vPC is to be configured on the Nexus 7000, is it COMPULSORY to configure a Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what are the pros and cons of HAVING NO Port Channel within UCS versus HAVING a Port Channel where vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy. vPC would be preferred.
Q3. If vPC is to be configured on the Nexus 7000, I understand there is a limitation confining you to ONLY 1 vSphere NIC Teaming load-balancing algorithm, i.e. Route Based on IP Hash. Is that correct? //Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "route based on originating port ID".
Again, what are the pros and cons here with regard to application behaviour where Layer 2 or 3 is concerned? Or what are the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
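On the N7K side, the bow-tie uplinks from the fabric interconnects would typically terminate in a vPC such as the following sketch (the interface, port-channel, and vPC numbers are assumptions):
! example values - one member link per FI on each N7K
interface Ethernet1/1
  description uplink from FI-A
  switchport mode trunk
  channel-group 201 mode active
interface port-channel201
  switchport mode trunk
  vpc 201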
    Cheers,
    David Jarzynka

  • Ciscoworks 2.6 and Nexus 7000 issues

    Running LMS 2.6 with RME version 4.0.6, and DFM 2.0.13.
We keep getting false alerts in DFM about the temperature of our Nexus 7000 switches. The alert says the high-temperature threshold is 45C and that it's being exceeded at 46C. What bothers me is that the switch itself reads the threshold as around 100C or more. Any ideas why DFM would be picking up a temperature so far off the mark?
Also, in regards to RME, I cannot pull configs from the Nexus 7000s. The check box in "archive config" is grayed out so I can't check it. I downloaded the device packages for the 7000 into RME, but it will not pull configs. Is this not supported under our version of RME, or would there be some other reason I can't do this?
    Thanks for any assistance with these issues!

    UPDATE:
I fixed the RME config pull issue. I thought I had previously downloaded the Nexus device packages so that RME could work with them, but upon checking again it looks like I just didn't have them installed. Got that piece fixed, and now I can pull configs from the switches just fine.
    Still having problems with the temperature reading in DFM not accurately reflecting what is actually on the switches. Any suggestions as to where to start hunting down the issue for this are greatly appreciated. Thanks!

  • Netflow Nexus 7000

    Hi all,
A few months ago I configured NetFlow on a Nexus 7000 with NX-OS version 6.0.2.
    This was my config:
    flow exporter Fluke_NetflowTracker
      description export netflow to Fluke_NetflowTracker
      destination x.x.x.x use-vrf management
      transport udp 2055
      source mgmt0
      version 9
    flow exporter Fluke_Optiview
      description export netflow to Fluke_Optiview
  destination x.x.x.x
  transport udp 2055
      source Vlanx
      version 9
    flow monitor MonitorTrafficToFluke
      record netflow-original
      exporter Fluke_NetflowTracker
      exporter Fluke_Optiview
This flow monitor was activated on some SVIs: "ip flow monitor MonitorTrafficToFluke input".
Recently we upgraded NX-OS to version 6.1.3. NetFlow keeps working, but the syntax of the NetFlow configuration has changed: now you have to add a sampler as well.
    So I have created the following sampler.
    sampler NetFlow-Sampler
      description Netflow Sampler
      mode 1 out-of 1000
When I try to update the current configuration with the sampler, I can't adapt or remove the existing NetFlow configuration on the SVI.
    NK7(config-if)# no ip flow monitor MonitorTrafficToFluke input
    ERROR: A sampler must be configured for an interface on an F2 card
    NK7(config-if)# ip flow monitor MonitorTrafficToFluke input sampler NetFlow-Sampler
    An additional 1:100 sampler, over the configured sampler is applicable for F2 ports
    Error: Sampler can not be changed on Interface Vlanx. Remove flow monitor first.
    ERROR: Command has failed
How do I update or remove the existing configuration on the SVI?
    I want the config to be "ip flow monitor MonitorTrafficToFluke input sampler NetFlow-Sampler"
    Thank you,
    Best Regards,
    Joris

    Hi Joris,
Try removing the feature with 'no feature netflow' and re-applying the whole configuration. Since it's an F2 card, config changes are not supported until 6.2(2); the only way is to remove the configs using 'no feature netflow' and re-apply them (see the sketch below).
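A sketch of that workaround using the names from the original post (the order of operations is the point; adjust to your config):
! removing the feature clears the existing NetFlow config
no feature netflow
feature netflow
! re-create the exporters, monitor and sampler from the original post, then:
interface Vlanx
  ip flow monitor MonitorTrafficToFluke input sampler NetFlow-Sampler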
    Thanks,
    Richard.
    *Rate if its useful

  • Nexus 7000-Error Message

    Hi
We have two Nexus switches configured as the network core, with HSRP configured between them. The access switches are connected with dual 10G links to both core switches, with vPC configured on the Nexus pair. In both core switches a 10G module is used for uplink termination. On one of the core switches we get the following errors for this 10G module:
Module-1 reported minor temperature alarm. Sensor=20 Temperature=101 MinThreshold=100
2011 Dec 22 08:10:19 CORE-SEC %PLATFORM-2-MOD_TEMPOK: Module-1 recovered from minor temperature alarm. Sensor=20 Temperature=99 MinThreshold=100
This happens even though the room temperature is 23 degrees C, whereas per the Nexus documentation the allowed operating temperature is 0º to 40ºC (32º to 104ºF).
show module
Mod  Ports  Module-Type                       Model           Status
1    8      10 Gbps Ethernet XL Module        N7K-M108X2-12L  ok
2    32     1/10 Gbps Ethernet Module         N7K-F132XP-15   ok
3    48     10/100/1000 Mbps Ethernet XL Mod  N7K-M148GT-11L  ok
5    0      Supervisor module-1X              N7K-SUP1        active *
As per the Nexus module documentation, the allowed temperature for module 1 is 0-40 degrees, whereas the actual room temperature is 23 degrees. Below is the exception message for module 1:
    exception information --- exception instance 1 ----
    Module Slot Number: 1
    Device Id         : 49
    Device Name       : Temperature-sensor
    Device Errorcode : 0xc3114203
    Device ID         : 49 (0x31)
    Device Instance   : 20 (0x14)
    Dev Type (HW/SW) : 02 (0x02)
    ErrNum (devInfo) : 03 (0x03)
    System Errorcode : 0x4038001e Module recovered from minor temperature alarm
    Error Type       : Minor error
    PhyPortLayer     :
    Port(s) Affected :
    DSAP             : 39 (0x27)
UUID             : 24 (0x18)
The same module exists in the second Nexus 7000, which is in the same data center, but it is not raising this alarm.
Can anyone please advise? Software details are below:
Software
  BIOS:      version 3.22.0
  kickstart: version 5.1(3)
  system:    version 5.1(3)
  BIOS compile time:       02/20/10
  kickstart image file is: bootflash:///n7000-s1-kickstart.5.1.3.bin
  kickstart compile time:  12/25/2020 12:00:00 [03/11/2011 07:42:56]
  system image file is:    bootflash:///n7000-s1-dk9.5.1.3.bin
  system compile time:     1/21/2011 19:00:00 [03/11/2011 08:37:35]

    Hi Sameer
A temperature alarm means that one particular sensor on the line card warmed up to 101 degrees.
This can be caused by a damaged sensor or by problems with cooling in that particular part of the chassis.
    You can check temperature on the module using following command:
    show environment temperature module 1
Try moving the module to another slot. If the issue reoccurs, open a TAC case.
    HTH,
    Alex
