NTP on Nexus 5020

Hello,
Is it possible to configure a Nexus as an NTP broadcast client?
NX-OS:
kickstart.4.1.3.N2.1a.bin
Thanks

Hello
NTP broadcast client is not supported in any NX-OS release; see:
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCsv33349
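If you need the switch to sync time, unicast client mode is the supported alternative. A minimal sketch (server address is hypothetical; the use-vrf option is available on releases that support it):
ntp server 192.0.2.1 prefer use-vrf management
Then verify with "show ntp peer-status".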
Thanks
-Prashanth

Similar Messages

  • NTP Error Logs on Nexus 5K [ ntpd[4746]: ntp:time reset +0.279670 s]

    Hi Team,
    We are running almost 10 Nexus 5Ks in our DC, and we are currently seeing the same error log on all of them:
    " ntpd[4746]: ntp:time reset +0.279670 s "
    Is this a serious error, or just an informational time reset?
    Please check and let me know if you need any other logs from our switches.
    Thanks....
    Regards,
    Senthil

    Hi Senthilkumar
    It is just an informational message stating that the NTP time has been reset. The reset originates from the NTP server and is communicated to all of the devices syncing to that server, so there is nothing to act on.
    HTH,
    Alex

  • Nexus 5000 as NTP client

    We run 6509 core routers as NTP servers to other IOS routers/switches & servers of several OS flavours.
    All good.
    Recently added some Nexus 5000s and cannot get them to lock.
    No firewalls or ACLs in the path
    6509 (1 of 4) state:
    LNPSQ01CORR01>sh ntp ass
          address         ref clock     st  when  poll reach  delay  offset    disp
    + 10.0.1.2         131.188.3.220     2   223  1024  377     0.5   -6.23     0.7
    +~130.149.17.21    .PPS.             1   885  1024  377    33.7   -0.26     0.8
    *~138.96.64.10     .GPS.             1   680  1024  377    22.7   -2.15     1.0
    +~129.6.15.29      .ACTS.            1   720  1024  377    84.9   -3.37     0.6
    +~129.6.15.28      .ACTS.            1   855  1024  377    84.8   -3.30     2.3
    * master (synced), # master (unsynced), + selected, - candidate, ~ configured
    Nexus state:
    BL01R01B10SRVS01# sh ntp peer-status
    Total peers : 4
    * - selected for sync, + -  peer mode(active),
    - - peer mode(passive), = - polled in client mode
        remote               local              st  poll  reach   delay
    =10.0.1.1               10.0.201.11            16   64       0   0.00000
    =10.0.1.2               10.0.201.11            16   64       0   0.00000
    =10.0.1.3               10.0.201.11            16   64       0   0.00000
    =10.0.1.4               10.0.201.11            16   64       0   0.00000
    Nexus config:
    ntp distribute
    ntp server 10.0.1.1
    ntp server 10.0.1.2
    ntp server 10.0.1.3
    ntp server 10.0.1.4
    ntp source 10.0.201.11
    ntp commit
    interface mgmt0
      ip address 10.0.201.11/24
    vrf context management
      ip route 0.0.0.0/0 10.0.201.254
    Reachability to the NTP source...
    BL01R01B10SRVS01# ping 10.0.1.1 vrf management source 10.0.201.11
    PING 10.0.1.1 (10.0.1.1) from 10.0.201.11: 56 data bytes
    64 bytes from 10.0.1.1: icmp_seq=0 ttl=253 time=3.487 ms
    64 bytes from 10.0.1.1: icmp_seq=1 ttl=253 time=4.02 ms
    64 bytes from 10.0.1.1: icmp_seq=2 ttl=253 time=3.959 ms
    64 bytes from 10.0.1.1: icmp_seq=3 ttl=253 time=4.053 ms
    64 bytes from 10.0.1.1: icmp_seq=4 ttl=253 time=4.093 ms
    --- 10.0.1.1 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 3.487/3.922/4.093 ms
    BL01R01B10SRVS01#
    Are we missing some NTP or management VRF setup on the Nexus 5Ks?
    Thanks
    Rob Spain
    UK

    I have multiple 5020s, 5548s, and 5596s, and they all experience this same problem. Mind you, I run strictly Layer 2; I don't even have feature interface-vlan enabled. I tried "ntp server X.X.X.X use-vrf management" as well as "clock protocol ntp". Neither helped.
    I was told by TAC that there is a bug (sorry, I do not have the ID), but basically NTP will not work over the management VRF. The only way I got NTP to work was by enabling feature interface-vlan and adding a VLAN interface with an IP, then retrieving NTP through that interface.
    I upgraded to 5.2(1) in hopes that this would fix the issue, but it did not.
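    For reference, a minimal sketch of the SVI workaround described above (VLAN ID and addresses are hypothetical):
    feature interface-vlan
    interface Vlan100
      ip address 10.0.100.11/24
      no shutdown
    ntp server 10.0.1.1
    ntp source 10.0.100.11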

  • Cisco Nexus 7K as NTP server

    I want to configure a Cisco Nexus as an NTP server so that it can provide an NTP source to other network devices. I am planning to apply the following configuration on my Cisco Nexus 7K:
    ntp master 2
    ntp server 0.europe.pool.ntp.org prefer
    ntp server 1.europe.pool.ntp.org
    ntp server 2.europe.pool.ntp.org
    ntp server 3.europe.pool.ntp.org
    ntp source-interface mgmt0
    Is there anything else I need to configure, and are there any security concerns if I allow the NTP port through my firewall to the Nexus 7K core switches?
    Thanks for your help

    Consult these secure NTP recommendations from Team Cymru:
    http://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html
    Don't forget to rate all helpful posts.
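    Along those lines, a hedged sketch of restricting who may sync from the 7K, using a named NX-OS ACL with ntp access-group (ACL name and prefix are hypothetical):
    ip access-list NTP-CLIENTS
      permit udp 10.0.0.0/8 any eq 123
    ntp access-group serve-only NTP-CLIENTS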

  • NTP Nexus 5548s

    I know that with the Nexus switches we must use the management port and the management VRF for services such as NTP, SNMP, etc. I have this configured on my 5548 and it still will not sync with NTP.
    ntp server 10.1.0.63 prefer use-vrf management
    ntp server 10.1.0.65 use-vrf management
    ntp peer 10.100.4.156 use-vrf management
    ntp source 10.100.4.155
    I verified connectivity to the NTP server by pinging it:
    5548-A# ping 10.1.0.63 vrf management
    PING 10.1.0.63 (10.1.0.63): 56 data bytes
    64 bytes from 10.1.0.63: icmp_seq=0 ttl=126 time=1.095 ms
    64 bytes from 10.1.0.63: icmp_seq=1 ttl=126 time=0.657 ms
    64 bytes from 10.1.0.63: icmp_seq=2 ttl=126 time=1.242 ms
    64 bytes from 10.1.0.63: icmp_seq=3 ttl=126 time=0.783 ms
    64 bytes from 10.1.0.63: icmp_seq=4 ttl=126 time=0.735 ms
    Maybe it is working and I don't know it. When I run "show ntp status", I get this:
    5548-A# show ntp status
    Distribution : Enabled
    Last operational state: Commit operation successful
    5548-A#
    5548-A#
    5548-A# show ntp peers
      Peer IP Address               Serv/Peer
      10.100.4.156                  Peer (configured)
      10.1.0.63                     Server (configured)
      10.1.0.65                     Server (configured)
    5548-A#
    I am used to seeing a different output from show ntp status; it is not telling me that it is synchronized. Please tell me what I am missing here.
    Bruce

    Hi Bruce,
    This looks good. I admit NTP isn't as easy to verify on NX-OS as it is on IOS, and this indeed causes confusion.
    In your case you see the * for 10.1.0.63 in show ntp peer-status, so the device is synced to this source. You can verify with:
    # show ntp statistics peer ipaddr x.x.x.x
    using the above IP.
    Here is a brief explanation:
    * - selected for sync: the device is synchronized to this source.
    + - peer mode (active): the mode in which a host has configured itself to poll a time server that it might synchronize with. In this mode, the host also allows itself to be polled by that time server.
    - - peer mode (passive): the mode in which a time server is polled by a host that has configured itself in "symmetric active mode". In this mode, the time server can also poll that host.
    = - polled in client mode: the mode in which a host polls a time server that it might synchronize with, but it will not respond to polls from that time server.
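    For reference, a synchronized entry in show ntp peer-status would look something like this (values purely illustrative):
        remote               local              st  poll  reach   delay
    *10.1.0.63              10.100.4.155         2   64     377   0.00107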
    I hope the above helps, but this looks as if you are up and running at this point.
    Thanks,
    /gary

  • Configuring NTP on the nexus 7010

    Hi All,
    I'm a little confused about how to configure NTP on the Nexus 7010. I have an admin VDC and four working VDCs. I read that you can only configure NTP in the admin VDC, but the commands are also available in the other VDCs. As the admin VDC is set up purely as an admin VDC, it doesn't allow any commands other than those used to configure the other VDCs.
    If I configure 'clock protocol ntp vdc X' from the admin VDC conf t CLI, it doesn't appear to apply that command to the individual VDCs. If I try that command on each of the VDCs, I get an error message. If I do a 'show ntp peer-status', I get a message stating 'the clock is not controlled by ntp', with an explanation telling me to use the 'clock protocol ntp vdc X' command; however, as already explained, that doesn't appear to work. I'm running NX-OS 6.2.
    Any help, documentation etc would be greatly appreciated.

    An update:
    I have now configured NTP. The 'clock protocol ntp vdc X' command is accessed from the conf t CLI within the admin VDC. From my reading of the NX-OS 6.2 documentation you should be able to run NTP in multiple VDCs; however, this does not appear to be the case. I've configured the admin VDC as the NTP master.
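    For anyone hitting the same wall, a minimal sketch of that arrangement, entered from the admin VDC conf t CLI (VDC ID, stratum, and server address are hypothetical):
    clock protocol ntp vdc 1
    ntp master 2
    ntp server 192.0.2.1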

  • Error on Nexus 7K series: "operation failed. the fabric is already locked"

    I am getting the following error on a Nexus 7K series switch while removing NTP commands (no ntp server 10.101.1.1): "operation failed. the fabric is already locked". Please help.
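    In case it helps others: with ntp distribute enabled, an uncommitted change held by another session keeps the CFS fabric locked, and the error usually clears once that pending session is committed or discarded. A hedged sketch of checking and releasing the lock (assuming CFS distribution of NTP is in use):
    show ntp session status
    Then either "ntp commit" to push the pending changes or "ntp abort" to discard them and release the lock.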


  • Can't archive nexus 5010 configs (LMS 3.2)

    Using LMS 3.2 I can't archive configs from our Nexus 5010 (4.2(1)N1(1)) devices. I have installed the necessary device package Nexus.RME431.v2-4.zip. The following error occurs:
    *** Device Details for gf02na50050p ***
    Protocol ==> Unknown / Not Applicable
    Selected Protocols with order ==> SSH,SCP,TFTP,RCP,Telnet
    Execution Result:
    RUNNING
    CM0151 PRIMARY RUNNING Config fetch failed for gf02na50050p Cause: Failed to fetch config using TFTP. Failed to establish TELNET connection to 10.92.170.214 - Cause: Connection refused.
    Action: Check if protocol is supported by device and required device package is installed. Check device credentials. Increase timeout value, if required.
    Telnet is disabled on the device; SSH is enabled.
    With only SSH selected in RME, the following error occurs:
    *** Device Details for gf02na50050p ***
    Protocol ==> Unknown / Not Applicable
    Selected Protocols with order ==> SSH
    Execution Result:
    RUNNING
    CM0151 PRIMARY RUNNING Config fetch failed for gf02na50050p Cause: Failed to get the start tag-Building Configuration ... in the configuration. Action: Check if protocol is supported by device and required device package is installed. Check device credentials. Increase timeout value, if required.
    Thanks
    Igor

    RME 4.3.1; Windows platform.
    With one customer I have the same issue with a Nexus 5020. The software version is:
    Software
      BIOS:      version 1.3.0 [last: ]
      loader:    version N/A
      kickstart: version 4.2(1)N2(1)
      system:    version 4.2(1)N2(1)
      power-seq: version v1.2
      BIOS compile time:       09/08/09 [last: ]
      kickstart image file is: bootflash:///n5000-uk9-kickstart.4.2.1.N2.1.bin
      kickstart compile time:  7/28/2010 18:00:00 [07/29/2010 03:10:19]
      system image file is:    bootflash:/n5000-uk9.4.2.1.N2.1.bin
      system compile time:     7/28/2010 18:00:00 [07/29/2010 07:18:12]
    Hardware
      cisco Nexus5020 Chassis ("40x10GE/Supervisor")
      Intel(R) Celeron(R) M CPU    with 2074284 kB of memory.
    ===============
    following RME packages are installed:
        SharedNetshowSS      1.1.2      SharedNetshowSS device package
        SharedSwimMDS9000    1.6.3      SharedSwimMDS9000 device package
        SharedInventoryMDS   1.5.1      SharedInventoryMDS device package
        SharedDcmaSS         2.2.2      SharedDcmaSS device package
        Nexus                2.4        Nexus device package
    ===============
    >>> I see the following problems with the SSH instrumentation of the Nexus platform when looking into the debug file. The file I got is more or less the same as the one Marc posted.
    The customer does not have a banner configured, so the standard login procedure shows the following when the login is done manually. RME automatically interprets the first line after "login as:" as a banner...
    login as: Nexus
    Nexus 5000 Switch
    Using keyboard-interactive authentication.
    Password:
    Cisco Nexus Operating System (NX-OS) Software
    TAC support: http://www.cisco.com/tac
    Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
    The copyrights to certain works contained in this software are
    owned by other third parties and used and distributed under
    license. Certain components of this software are licensed under
    the GNU General Public License (GPL) version 2.0 or the GNU
    Lesser General Public License (LGPL) Version 2.1. A copy of each
    such license is available at
    http://www.opensource.org/licenses/gpl-2.0.php and
    http://www.opensource.org/licenses/lgpl-2.1.php
    devicename#
    >>> and RME wants to take the last line as the prompt, resulting in a non-fatal Java exception:
    [line 1471 ff.]
    [ Mon Aug 30  08:58:01 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,debug,31,Learning prompt: sA[0] == 'http://www.opensource.org/licenses/lgpl-2.1.php'
    [ Mon Aug 30  08:58:01 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,debug,31,Learning prompt: sA[1] == ''
    [ Mon Aug 30  08:58:01 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,printStackTrace,51,stacktracecom.cisco.nm.lib.cmdsvc.CmdSvcException: Prompt learning failed: 'http://www.opensource.org/licenses/lgpl-2.1.php' && '' do not match.
        at com.cisco.nm.lib.cmdsvc.Session.tune(Session.java:904)
        at com.cisco.nm.lib.cmdsvc.Session.tune(Session.java:833)
        at com.cisco.nm.lib.cmdsvc.AuthHandler.connect(AuthHandler.java:267)
        at com.cisco.nm.lib.cmdsvc.OpConnect.invoke(OpConnect.java:56)
        at com.cisco.nm.lib.cmdsvc.SessionContext.invoke(SessionContext.java:299)
    >> OK, RME proceeds, but the SSH implementation (SharedDcmaSS?) cannot extract the config, and a Java exception occurs which leads to "Closing the session":
    [ Mon Aug 30  08:58:06 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,debug,31,Returning from Session.send('show startup-config
    [ Mon Aug 30  08:58:06 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.transport.cmdsvc.LogAdapter,debug,31,in trimPrompt(), prompt == ''
    [ Mon Aug 30  08:58:06 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.CliOperator,fetchConfig,490,Failed to get the start tag-Building Configuration ... in the configuration.com.cisco.nm.xms.xdi.ags.config.ConfigTransportException: Failed to get the start tag-Building Configuration ... in the configuration.
        at com.cisco.nm.xms.xdi.pkgs.SharedDcmaSS.transport.SSCliOperator.extractConfig2Buffer(SSCliOperator.java:176)
        at com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.CliOperator.fetchConfig(CliOperator.java:436)
        at com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.CliOperator.fetchConfig(CliOperator.java:510)
        at com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.SimpleFetchOperation.performOperation(SimpleFetchOperation.java:61)
        at com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.ConfigOperation.doConfigOperation(ConfigOperation.java:111)
        at com.cisco.nm.xms.xdi.pkgs.SharedDcmaSS.transport.SSConfigOperator.fetchConfig(SSConfigOperator.java:65)
        at com.cisco.nm.rmeng.dcma.configmanager.ConfigManager.updateArchiveForDevice(ConfigManager.java:1315)
        at com.cisco.nm.rmeng.dcma.configmanager.ConfigManager.performCollection(ConfigManager.java:3291)
        at com.cisco.nm.rmeng.dcma.configmanager.CfgUpdateThread.run(CfgUpdateThread.java:27)
    [ Mon Aug 30  08:58:06 CEST 2010 ],DEBUG,[Thread-38],com.cisco.nm.xms.xdi.pkgs.LibDcma.persistor.SimpleFetchOperation,performOperation,62,FetchStatus - FAILURE for Protocol SSH for device x.x.x.x
    >> As Telnet and TFTP are not allowed, the config cannot be archived (the customer reports no problems with Telnet...).
    The config itself starts and ends with the following lines:
    devicename# show startup-config
    !Command: show startup-config
    !Time: Mon Sep  6 16:41:02 2010
    !Startup config saved at: Mon Aug 16 15:41:39 2010
    version 4.2(1)N2(1)
    no feature telnet
    no telnet server enable
    no feature http-server
    cfs eth distribute
    feature udld
    feature interface-vlan
    feature lacp
    feature vpc
    feature lldp
    feature vtp
    interface mgmt0
      shutdown force
      shutdown force
      no snmp trap link-status
      no snmp trap link-status
    clock timezone MEZ 1 0
    clock summer-time MEZS 5 sun mar 02:00 5 sun oct 03:00 60
    line console
    boot kickstart bootflash:/n5000-uk9-kickstart.4.2.1.N2.1.bin
    boot system bootflash:/n5000-uk9.4.2.1.N2.1.bin
    ip route 0.0.0.0/0 172.16.4.1
    vtp mode transparent
    vtp domain sap
    logging server x.x.x.x 7 use-vrf default
    logging server y.y.y.y 7 use-vrf default
    devicename#
    To me this looks like a bug in the SSH implementation in RME, but I cannot find a bug ID on CCO.

  • NTP on Nexus5k and 3560

    I have begun moving NTP from our 6500 to 4 Nexus 5Ks as part of a core upgrade. The Nexus switches will act as our internal NTP servers for all switches. Any switches that are on the same VLAN as the Nexus have no issues syncing NTP from them. However, any switch whose NTP traffic has to be routed to the Nexus shows the time source as insane.
    The configuration on our Nexus switches is as follows (the Nexus are .11, .12, .13 and .14):
    ntp peer 172.24.1.12
    ntp peer 172.24.1.13
    ntp peer 172.24.1.14
    ntp server 192.43.244.18
    clock timezone CST -6 0
    clock summer-time CDT 2 Sun Mar 2:00 1 Sun Nov 2:00 60
    Here is the configuration on one of our 3560's:
    clock timezone CST -6
    clock summer-time CDT recurring
    ntp server 172.24.1.11
    ntp server 172.24.1.13
    ntp server 172.24.1.12
    ntp server 172.24.1.14
    This same configuration worked when the switches were configured as NTP Peers to our 6500 (172.24.1.1).  The ip for the 6500 has been moved to an HSRP address across the Nexus so I have pointed the switches at the individual IP for each Nexus.
    Here is a debug ntp packet output from one of the 3560s:
    .Mar  7 17:21:22: NTP: xmit packet to 172.24.1.11:
    .Mar  7 17:21:22:  leap 3, mode 3, version 3, stratum 0, ppoll 64
    .Mar  7 17:21:22:  rtdel 2445 (141.678), rtdsp C804D (12501.175), refid AC180101
    (172.24.1.1)
    .Mar  7 17:21:22:  ref D2F4A4F5.9CBFA919 (06:32:53.612 CST6 Sun Feb 26 2012)
    .Mar  7 17:21:22:  org 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
    .Mar  7 17:21:22:  rec 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
    .Mar  7 17:21:22:  xmt D3021792.8D0B8963 (11:21:22.550 CST6 Wed Mar 7 2012)

    Thanks for your reply.
    My issue may be a little different from the one you encountered. In my configuration I am able to get some, but not all, SVIs on Nexus 5548s to function as NTP servers.
    I have two Nexus 5548 vPC peers (N5K-1 and N5K-2) configured for HSRP and as NTP servers. A downstream 2960S switch stack (STK-7) is the NTP client. STK-7 is connected to N5K-1 and N5K-2 with one physical link each, bundled into a port channel (multi-chassis EtherChannel on the STK-7 stack, vPC on the 5548 peers).
    When the STK-7 NTP client is configured with NTP server IP addresses on the same network as the switch stack (10.3.0.0 in my diagram, not reproduced here), all possible IP addresses work: the "real" IP addresses of each SVI on the 5548s (10.3.0.111 & 10.3.0.112) as well as the HSRP IP address (10.3.0.1).
    When the STK-7 NTP client is configured with NTP server IP addresses on a different network than the switch stack (10.10.0.0), only the "real" IP address of the SVI on the 5548 to which the EtherChannel load-balancing mechanism directs the client-to-server NTP traffic (N5K-2) works. NTP server 10.10.0.112 is reported as sane, but NTP servers 10.10.0.111 and 10.10.0.1 are reported as insane.
    I am concerned that the issue is related to my vPC configuration.
    Cisco TAC has indicated that this behavior is normal.

  • DCNM 6.1.1a and Nexus 5020

    Hi all,
    I've got a problem that I am not able to figure out.
    I've got two Nexus 5020s running:
    Software
      system:    version 5.0(2)N2(1)
      system image file is:    bootflash:/n5000-uk9.5.0.2.N2.1.bin
    Hardware
      cisco Nexus5020 Chassis ("40x10GE/Supervisor")
    I use Nagios for management and so far everything is working fine. A couple of days ago I installed DCNM 6.1.1a on a Red Hat Linux box to test the app and see what I can do with the Nexus. The problem is that every time I work with the DCNM app, the CPU goes up a lot.
    The show process cpu history output is summarized here, as the ASCII graphs did not survive posting: over the last 60 seconds the average CPU sits around 10% with one spike to 100%; over the last 60 minutes the per-minute maximum is pinned at 100% while the average stays around 10-20%; and over the last 72 hours maximums of roughly 70-100% recur while the average remains near 10%.
    Could SNMP be the cause of this high CPU? How could I confirm it? Maybe it is a bug, but I could not find anything.
    Any ideas or comments are welcome!
    Thanks in advance,
    Fernando

    Just to close this thread: we saw that the average CPU utilization is 10%. The CPU spikes might be due to the polling interval or some cosmetic output. There are no output errors, no discards, and no traffic loss.
    Thanks,
    Fernando
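    For anyone wanting to confirm whether SNMP polling is behind the spikes, a minimal check from the switch CLI (assuming the SNMP agent appears as snmpd in the process list):
    show system resources
    show processes cpu | include snmpd
    If snmpd tops the per-process CPU while DCNM is polling and drops when DCNM is stopped, the polling is the likely cause.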

  • Can we make Nexus7009 as NTP server

    Hi 
    Can we make a Nexus 7009 an NTP server? Basically we are creating one MGMT VLAN with HSRP configured between the two chassis, and we want to use the HSRP IP as the NTP server IP.
    Is it possible, and if so, what would the NTP configuration be?

    Hello
    I have never done this on a Nexus, but you can try the following.
    Are you going to point the 7K to an external authoritative time server and have your LAN clients point to the 7K? That is, do you still want this 7K to be authoritative NTP for others even if it is not itself synchronised to an external time source?
    access-list 10 permit 192.168.10.0 0.0.0.255 (internal LAN NTP clients)
    access-list 10 permit host 172.16.10.1 (external time source)
    ntp access-group peer 10
    ntp server 172.16.10.1
    ntp source-interface x.x.x.x (management interface)
    LAN client (192.168.10.100):
    ntp server <7K IP> prefer
    Regards,
    Paul
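    One caveat: the ACL syntax above is IOS-style; NX-OS uses named IP ACLs. A hedged adaptation (ACL name, prefixes, and addresses are hypothetical):
    ip access-list NTP-ACCESS
      permit udp host 172.16.10.1 any eq 123
      permit udp 192.168.10.0/24 any eq 123
    ntp access-group peer NTP-ACCESS
    ntp server 172.16.10.1
    ntp source-interface mgmt0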

  • Nexus 1KV TACACS+ Not Working

    I have been trying to get my Nexus 1KV working with AAA/TACACS+ and I'm stumped.
    The short version is that I see where the issue is, but can't seem to resolve it.
    When I try to log in using TACACS, it fails.  The ACS server reports InvalidPassword.
    The CLI on the Nexus shows:
    2011 Sep  9 16:37:13 NY_nexus1000v %TACACS-3-TACACS_ERROR_MESSAGE: All servers failed to respond
    2011 Sep  9 16:37:14 NY_nexus1000v %AUTHPRIV-3-SYSTEM_MSG: pam_aaa:Authentication failed for user gtopf from 192.168.20.151 - sshd[15675]
    2011 Sep  9 16:37:23 NY_nexus1000v %DAEMON-3-SYSTEM_MSG: error: PAM: Authentication failure for illegal user gtopf from 192.168.20.151 - sshd[15672]
    And an AAA test from the nexus fails.
    I have good connectivity between the two boxes, I can ping, and obviously the failed login showing on ACS shows that it's talking, but it's just not working.
    My config is below (Ethernet port configs omitted):
    !Command: show running-config
    !Time: Fri Sep  9 16:45:49 2011
    version 4.2(1)SV1(4a)
    no feature telnet
    feature tacacs+
    feature lacp
    username admin password 5 $1$Q50UpgN/$4eu39QmZHLTf3FAkwwdOF1  role network-admin
    banner motd #Nexus 1000v Switch#
    ssh key rsa 2048
    ip domain-lookup
    ip domain-lookup
    ip name-server 192.168.20.10
    tacacs-server timeout 30
    tacacs-server host 192.168.20.30 key 7 "j3gp0"
    aaa group server tacacs+ TacServer
        server 192.168.20.30
        deadtime 15
        use-vrf management
        source-interface mgmt0
    hostname NY_nexus1000v
    ntp server 192.168.20.10
    aaa authentication login default group TacServer
    aaa authentication login console group TacServer
    aaa authentication login error-enable
    tacacs-server directed-request
    vrf context management
      ip route 0.0.0.0/0 192.168.240.1
    vlan 1,20,40,240
    lacp offload
    port-channel load-balance ethernet source-mac
    port-profile default max-ports 32
    port-profile type ethernet Unused_Or_Quarantine_Uplink
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type vethernet Unused_Or_Quarantine_Veth
      vmware port-group
      shutdown
      description Port-group created for Nexus1000V internal usage. Do not use.
      state enabled
    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 20,40,240
      channel-group auto mode active
      no shutdown
      system vlan 240
      description "System profile for critical ports"
      state enabled
    port-profile type vethernet data20
      vmware port-group
      switchport mode access
      switchport access vlan 20
      no shutdown
      description "Data profile for VM traffic 20 VLAN"
      state enabled
    port-profile type vethernet data40
      vmware port-group
      switchport mode access
      switchport access vlan 40
      no shutdown
      description "Data profile for VM traffic 40 VLAN"
      state enabled
    port-profile type vethernet data240
      vmware port-group
      switchport mode access
      switchport access vlan 240
      no shutdown
      description "Data profile for VM traffic 240 VLAN"
      state enabled
    port-profile type vethernet system-upilnk
      description "Uplink profile for VM traffic"
    vdc NY_nexus1000v id 1
      limit-resource vlan minimum 16 maximum 2049
      limit-resource monitor-session minimum 0 maximum 2
      limit-resource vrf minimum 16 maximum 8192
      limit-resource port-channel minimum 0 maximum 768
      limit-resource u4route-mem minimum 32 maximum 32
      limit-resource u6route-mem minimum 16 maximum 16
      limit-resource m4route-mem minimum 58 maximum 58
      limit-resource m6route-mem minimum 8 maximum 8
    interface port-channel1
      inherit port-profile system-uplink
      vem 3
    interface port-channel2
      inherit port-profile system-uplink
      vem 4
    interface port-channel3
      inherit port-profile system-uplink
      vem 5
    interface port-channel4
      inherit port-profile system-uplink
      vem 6
    interface mgmt0
      ip address 192.168.240.10/24
    interface control0
    line console
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-1
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin sup-2
    boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin sup-2
    svs-domain
      domain id 500
      control vlan 240
      packet vlan 240
      svs mode L2 
    svs connection vcenter
      protocol vmware-vim
      remote ip address 192.168.20.127 port 80
      vmware dvs uuid "52 8b 1d 50 44 9d d7 1f-b6 25 76 f1 f7 97 d8 5e" datacenter-name 28th St Datacenter
      max-ports 8192
      connect
    vsn type vsg global
      tcp state-checks
    vnm-policy-agent
      registration-ip 0.0.0.0
      shared-secret **********
      log-level

    FYI...
    I was able to get TACACS+ auth working using the commands in the Original Post (without the two additional suggestions) as follows...
    1000v# conf t
    1000v(config)# feature tacacs+
    1000v(config)# tacacs-server host 192.168.1.1 key 0
    1000v(config)# aaa group server tacacs+ TacServer
    1000v(config-tacacs+)# server 192.168.1.1
    1000v(config-tacacs+)# use-vrf management
    1000v(config-tacacs+)# source-interface mgmt 0
    1000v(config-tacacs+)# aaa authentication login default group TacServer local
    1000v(config)# aaa authentication login error-enable
    1000v(config)# tacacs-server directed-request
    I guess the OP had some other problem (perhaps an incorrect shared secret?).
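    For anyone else chasing this, NX-OS can also exercise the AAA server group directly from the CLI before trying a real login (username and password are hypothetical):
    test aaa group TacServer testuser testpassword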

  • How to config N5K as NTP server

    I am testing the N5K-as-NTP-server feature. I have found a CLI command on the N7K, ntp master, but I have not found a similar command on the N5K. I am running NX-OS 5.2.1.N(1).
    Any config example would be greatly appreciated.
    Thanks,
    gy


  • Nexus 7010 mgmt0 useage opinion

    As a Senior Network Engineer I have entered into a bit of a debate with our Architect about the use of the mgmt0 interfaces on the Nexus 7010 switch (dual sups, M2 and F2 linecards).
    I would like to know the opinion of the Cisco support network.
    I believe the mgmt0 interface should be left alone for control plane traffic and out-of-band management access (i.e. SSH) only. At the moment I have made a subnet for all VDCs, with the mgmt0 (vrf management) of each sitting in a common subnet. The physical mgmt0 interfaces from both sups are connected to a management hand-off switch. The mgmt0s also serve as our control plane for vPCs; the vPC peer-link, however, uses the main interfaces of the linecards.
    The opinions:
    - The Architect thinks we should use the mgmt0 interfaces for SNMP, NTP, TACACS, NetFlow analysis, and switch management.
    - However, I think we should use a traditional loopback to perform these functions via the linecards. The mgmt0 should only be used if the usual switch access has failed.
    My basis:
    The loopback never goes down and uses multiple paths (the OOB hand-off switch could fail, closing off switch management access completely). The mgmt0 should be used as a last resort of management access to the CMP.
    Thoughts please - Cheers
    Thoughts please - Cheers

    I see your point about wanting to mitigate the impact of losing the OOB switch. I don't think the mgmt0 interface going down is considered the level of failure that will trigger a supervisor switchover, though. That's the way I read the Nexus 7000 HA whitepaper (and what I've seen based on some limited experience with taking apart a 7K pair).
    So no, the 7K can't send you an SNMP trap or syslog message if its configured management path is offline. Mitigation of that could be via your NMS polling the devices' mgmt0 addresses: no response = trouble in paradise. The investigation step would be to log into the 7Ks using the loopback IP and local authentication, since your TACACS source interface (mgmt0) is offline, and go from there.
    For the handful I've built (mostly 5K setups) I go for a Catalyst 3K switch with dual power supplies as the OOB switch. Once one of those is set up and seen not to be DOA, it's generally going to stay up until someone goes in and unplugs it or initiates a system reload.
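    For what it's worth, a minimal sketch of the loopback-based arrangement under discussion, with local authentication as the fallback (interface, addresses, and group name are hypothetical):
    interface loopback0
      ip address 10.255.255.1/32
    ip tacacs source-interface loopback0
    aaa authentication login default group TACSERVER local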

  • DCNM5.2 as NX1010 VSB: How to setup system time or NTP?

    Hello,
    We installed the DCNM 5.2.2c Virtual Service Blade on a Nexus 1010, but we are not able to find how to set the system time or NTP, either from the GUI or from the NX1010 CLI. Do we need to touch the virtual appliance itself for the time settings?
    Steffen

