Nexus 7000 Platform Logging

Hello,
We recently had a power supply failure in one of our Nexus 7000s, and I noticed that the Platform syslog is only present in the default VDC and not in any of the other VDCs' syslogs. Is this by design, or is there a logging level I can turn up in another VDC to capture this log? Thanks for any input.
syslog from default VDC -
2013 Mar 18 23:10:34  %PLATFORM-2-PS_CAPACITY_CHANGE: Power supply PS3 changed its capacity. possibly due to power cable removal/insertion (Serial number xxxxxxxx)
nothing in the VDC where I would like to get the logging
default VDC logging level -
xxx7K02# show log level platform
Facility        Default Severity        Current Session Severity
platform                5                       5
0(emergencies)          1(alerts)       2(critical)
3(errors)               4(warnings)     5(notifications)
6(information)          7(debugging)
xxx7K02#
Logging from the specific VDC where we have our management tools -
xxx-LOW# show log level platform
Facility        Default Severity        Current Session Severity
platform                5                       5
0(emergencies)          1(alerts)       2(critical)
3(errors)               4(warnings)     5(notifications)
6(information)          7(debugging)
xxx-LOW#
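For reference, raising the platform facility severity inside a VDC would be something along these lines (just a sketch; severity 7 is only an example value):
switchto vdc xxx-LOW
configure terminal
  logging level platform 7
  end
show logging level platform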

Hello Carl,
What version of code are you running on your Nexus 7k?
The expected behavior is:
"When a hardware issue occurs, syslog messages are sent to all VDCs."
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/virtual_device_context/configuration/guide/vdc_mgmt.html#wp1170241
Dave

Similar Messages

  • Log configuration changes to syslog on Nexus 7000?

    I need to be able to log any configuration changes to syslog on our Nexus switches. On IOS this is easy with the archive commands, but I'm a little stuck trying to do this on our Nexus gear. On the IOS gear I run the commands:
    archive
    log config
    logging enable
    logging size 100
    hidekeys
    notify syslog
    How do I do the equivalent on NX-OS?

    Cisco NX-OS can log configuration change events along with the individual changes when AAA command accounting is enabled.
    With command accounting enabled, all CLI commands entered, including configuration commands, are logged to the configured AAA server. Using this information, a forensic trail for configuration change events along with the individual commands entered for those changes can be recorded and reviewed.
    Because of this capability, it is strongly advised that AAA command accounting be enabled and configured.
    Refer to the “TACACS+ Command Accounting” section of this document for more information.
    The Nexus 7000, by default, keeps a local accounting log of all the configuration commands entered on the device; you can view this with the 'show accounting log' command.
    In NX-OS, we changed the way logging works. We keep a local accounting log of all the configuration changes ("show accounting log"), but if you want to send those logs to a server, it must be done through a TACACS server. Please see the below documentation:
    Configuring AAA on Nexus
    TACACS command accounting
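    A minimal sketch of what TACACS+ command accounting might look like on NX-OS (server IP, key and group name are placeholders, and exact syntax can vary by release):
    feature tacacs+
    tacacs-server host 192.0.2.10 key "mykey"
    aaa group server tacacs+ TacServer
      server 192.0.2.10
      use-vrf management
    aaa accounting default group TacServer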
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**

  • Ask the Expert: Basic Introduction and Troubleshooting on Cisco Nexus 7000 NX-OS Virtual Device Context

    With Vignesh R. P.
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions of Cisco expert Vignesh R. P. about the Cisco® Nexus 7000 Series Switches and support for the Cisco NX-OS Software platform.
    The Cisco® Nexus 7000 Series Switches introduce support for the Cisco NX-OS Software platform, a new class of operating system designed for data centers. Based on the Cisco MDS 9000 SAN-OS platform, Cisco NX-OS introduces support for virtual device contexts (VDCs), which allows the switches to be virtualized at the device level. Each configured VDC presents itself as a unique device to connected users within the framework of that physical switch. The VDC runs as a separate logical entity within the switch, maintaining its own unique set of running software processes, having its own configuration, and being managed by a separate administrator.
    Vignesh R. P. is a customer support engineer in the Cisco High Touch Technical Support center in Bangalore, India, supporting Cisco's major service provider customers in routing and MPLS technologies. His areas of expertise include routing, switching, and MPLS. Previously at Cisco he worked as a network consulting engineer for enterprise customers. He has been in the networking industry for 8 years and holds CCIE certification in the Routing & Switching and Service Provider tracks.
    Remember to use the rating system to let Vignesh know if you have received an adequate response. 
    Vignesh might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through January 18, 2013. Visit this forum often to view responses to your questions and the questions of other community members.

    Hi Vignesh
    Is there any limitation on connecting an N2K directly to the N7K?
    If I have an F2 card (10G) and another F2 card (1G) and I want to create 3 VDCs:
    VDC1=DC-Core
    VDC2=Aggregation
    VDC3=Campus core
    Do we need to add a link between the different VDCs?
    thanks

  • Nexus 7000 - unexpected shutdown of vPC-Ports during reload of the primary vPC Switch

    Dear Community,
    We experienced an unusual behavior of two Nexus 7000 switches within a vPC domain.
    According to the attached sketch, we have four N7Ks in two data centers - two Nexus 7Ks are in a vPC domain for each data center.
    Both data centers are connected via a Multilayer-vPC.
    We had to reload one of these switches and I expected the other N7K in this vPC domain to continue forwarding over its vPC-Member-ports.
    Actually, all vPC ports have been disabled on the secondary switch until the reload of the first N7K (vPC-Role: primary) finished.
    Logging on Switch B:
    20:11:51 <Switch B> %VPC-2-VPC_SUSP_ALL_VPC: Peer-link going down, suspending all vPCs on secondary
    20:12:01 <Switch B> %VPC-2-PEER_KEEP_ALIVE_RECV_FAIL: In domain 1, VPC peer keep-alive receive has failed
    In case of a Peer-link failure, I would expect this behavior if the other switch is still reachable via the Peer-Keepalive-Link (via the Mgmt-Port), but since we reloaded the whole switch, the vPCs should continue forwarding. 
    Could this be a bug or are there any timers to be tuned?
    All N7K switches are running on NX-OS 6.2(8)
    Switch A:
    vpc domain 1
      peer-switch
      role priority 2048
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-B>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Switch B:
    vpc domain 1
      peer-switch
      role priority 1024
      system-priority 1024
      peer-keepalive destination <Mgmt-IP-Switch-A>
      delay restore 360
      peer-gateway
      auto-recovery reload-delay 360
      ip arp synchronize
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan <x-y>
      spanning-tree port type network
      vpc peer-link
    Best regards

    Problem solved:
    During the reload of the Nexus 7K, the linecards were powered off a short time earlier than the Mgmt-Interface. As a result of this behavior, the secondary Nexus 7K received at least one vPC-Peer-Keepalive message while its peer-link was already powered off. To avoid a split-brain scenario, the vPC member ports were shut down.
    Now we are using dedicated interfaces on the linecards for the VPC-Peer-Keepalive-Link and a reload of one N7K won't result in a total network outage any more.
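    For reference, moving the peer-keepalive onto dedicated linecard ports would look roughly like this (a sketch only; interface, VRF name and addresses are placeholders):
    vrf context vpc-keepalive
    interface Ethernet3/48
      no switchport
      vrf member vpc-keepalive
      ip address 10.255.255.1/30
      no shutdown
    vpc domain 1
      peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive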

  • Requirements about delay and bandwith for using OTV in Nexus 7000 between two data centers separated 25 miles?

    We have two Nexus 7000s, and I need to use them with OTV between two data centers separated by 25 miles, but I don't know the optimal values for bandwidth and delay (ms) for the extended VLAN IDs (production and DAG replication) for a Microsoft Exchange environment. Can somebody please tell me which values are required to operate OTV in optimal conditions in this case? We have about 35 000 users that will use that email platform. Thanks a lot for your comments. Regards.

  • Nexus 7000-Error Message

    Hi
    We have 2 Nexus switches configured in the network as core, with HSRP configured between them. The access switches are connected with dual 10G links to both core switches, with vPC configured on the Nexus. In both core switches a 10G module is used for uplink termination. In one of the core switches we get the following error for this 10G module:
    Module-1 reported minor temperature alarm. Sensor=20 Temperature=101 MinThreshold=100
    2011 Dec 22 08:10:19 CORE-SEC %PLATFORM-2-MOD_TEMPOK: Module-1 recovered from minor temperature alarm. Sensor=20 Temperature=99 MinThreshold=100
    We still get this error even though the room temperature is 23 degrees, whereas per the Nexus documentation the allowed room temperature is 0-40 degrees (operating temperature: 32º to 104ºF / 0º to 40ºC).
    show module
    Mod  Ports  Module-Type                       Model            Status
    1    8      10 Gbps Ethernet XL Module        N7K-M108X2-12L   ok
    2    32     1/10 Gbps Ethernet Module         N7K-F132XP-15    ok
    3    48     10/100/1000 Mbps Ethernet XL Mod  N7K-M148GT-11L   ok
    5    0      Supervisor module-1X              N7K-SUP1         active *
    As per the Nexus module documentation, the allowed temperature for module 1 is 0-40 degrees, whereas the actual room temperature is 23 degrees. Below is the exception message for module 1:
    exception information --- exception instance 1 ----
    Module Slot Number: 1
    Device Id         : 49
    Device Name       : Temperature-sensor
    Device Errorcode : 0xc3114203
    Device ID         : 49 (0x31)
    Device Instance   : 20 (0x14)
    Dev Type (HW/SW) : 02 (0x02)
    ErrNum (devInfo) : 03 (0x03)
    System Errorcode : 0x4038001e Module recovered from minor temperature alarm
    Error Type       : Minor error
    PhyPortLayer     :
    Port(s) Affected :
    DSAP             : 39 (0x27)
    UUID             : 24 (0x18)
    The same module exists in the second Nexus 7000, which is in the same datacenter, but it is not getting this alarm.
    Can anyone please advise? Software details are below:
    Software
      BIOS:      version 3.22.0
      kickstart: version 5.1(3)
      system:    version 5.1(3)
      BIOS compile time:       02/20/10
      kickstart image file is: bootflash:///n7000-s1-kickstart.5.1.3.bin
      kickstart compile time:  12/25/2020 12:00:00 [03/11/2011 07:42:56]
      system image file is:    bootflash:///n7000-s1-dk9.5.1.3.bin
      system compile time:     1/21/2011 19:00:00 [03/11/2011 08:37:35]

    Hi Sameer
    A temperature alarm means that one particular sensor on the linecard warmed up to 101 degrees.
    This can be caused by a damaged sensor or by problems with cooling in that particular part of the chassis.
    You can check the temperature on the module using the following command:
    show environment temperature module 1
    Try to move the module to another slot. If the issue reoccurs, open a TAC case.
    HTH,
    Alex

  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a Dual DC Upgrade. It is pretty standard design, customer requires a L2 extension between the DC for Vmotion etc. Customer would like to leverage certain features of the Nexus product suite, including:
    Trust Sec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
    As always cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting back from Customer seems to be mirrored in Cisco's technology roadmap. This relates specifically to the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise Customers ask the question of why they need to have the 7000 and 5000 in their topologies as many of the features they need are supported in both platforms and their environments will never scale to meet such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
    The Nexus 7000 only supports the 2000 on the M series I/O Modules; can FCOE be implemented on a 2000 connected to a 7000 using the M series I/O Module?
    Is the F Series I/O Module the only I/O Module that supports FCOE?
    Are there any plans to introduce the native FC support on the Nexus 7000?
    Are there any plans to introduce full fabric support (230 Gbps) to the M series I/O module?
    Are there any plans to introduce Fabric path to the M series I/O module?
    Are there any plans to introduce L3 support to the F series I/O Module?
    Is the entire 2000 series allocated to a single VDC or can individual 2000 series ports be allocated to a VDC?
    Is TrustSec only supported on multi-hop DCI links when using the ASR with an EoMPLS pseudowire?
    Are there any plans to introduce TrustSec and VDCs to the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
    The only I/O card which cannot co-exist with other cards in the same VDC is the F2, due to its specific hardware implementation.
    All other cards can be mixed.
    Regarding the fabric versions: Fabric-2 gives much higher throughput compared with Fabric-1.
    So in order to get full speed from F2/M2 modules you will need Fab-2 modules.
    Fab2 modules won't give any advantages to M1/F1 modules.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex

  • AAA problems Nexus 7000 %AUTHPRIV-3-SYSTEM_MSG: Unable to create temporary user

    Hi,
    I'm having problems getting our Nexus 7000 to authenticate users from our Windows domain. If I set up a user within the ACS server and use the CiscoSecure database for password authentication it works fine.
    In the logs on the nexus I receive the following messages when logging on using my windows account.
    %AUTHPRIV-3-SYSTEM_MSG: Unable to create temporary user 16894. Error 0x404a0036  - login[20923]
    %AUTHPRIV-3-SYSTEM_MSG: pam_aaa:Authentication failed for user 16894 from 10.128.45.44 - login[20923]
    We can log on to all other Cisco OS devices using Windows domain accounts; it's just the Nexus.
    Any help much appreciated.
    Thanks
    Darren

    No errors; the authentication on the ACS shows as passed. The problem is that I get an access denied message from the Nexus switch.

  • High process in nexus 7000

    Hello,
    My name is Benjamin and I have problems with my Nexus 7000. It has high CPU usage, which I think is not normal. What do you think?
    # sh process cpu sort
    PID    Runtime(ms)  Invoked   uSecs  1Sec    Process
    8259      1848785  56524183     32   27.6%  in.dcos-telnetd
    4717          231        96   2413   24.7%  netstack
    3536    402542882  64927941   6199    3.0%  platform
    4573    501774551  35371572  14185    1.0%  xbar_driver_usd
    4714          107        22   4871    1.0%  arp
        1       179754   5381666     33    0.0%  init
        2            2       300      9    0.0%  kthreadd
        3         3342    559942      5    0.0%  migration/0
        4      1936854  444724651      4    0.0%  ksoftirqd/0
        5       143477   2220884     64    0.0%  watchdog/0
        6         2042    349180      5    0.0%  migration/1
        7      1452663  372943404      3    0.0%  ksoftirqd/1
         1      111    111 11 1         1
        907878660006976000800707766999960776799987777777777678687773
        603310880008399000100504278989780308288903490180025795804831
    100 **      ***    *** ** *    **** *    ***
    90 **      *** *  *** ** *    *##* *    ***             *
    80 ** * *  *** ** *#***#**    *##* *    ###*  *  *   * ** * *
    70 ##*************##**##*******##*******###*******************
    60 ###########################################################
    50 ###########################################################
    40 ###########################################################
    30 ###########################################################*
    20 ###########################################################*
    10 ############################################################
        0....5....1....1....2....2....3....3....4....4....5....5....
                  0    5    0    5    0    5    0    5    0    5
                   CPU% per minute (last 60 minutes)
                  * = maximum CPU%   # = average CPU%

    I solved my issue; it was a bug:
    Some of the telnet sessions do not get cleared with recursive telnet
    Bug: CSCtk56774
    Workaround: issue the "clear user admin" command
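    For anyone hitting the same bug, checking for and clearing the stuck sessions goes along these lines (a sketch only; the username is just an example):
    show users
    show processes cpu sort | include telnet
    clear user admin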
    Regards

  • Virtualized Lab Infrastructure - 3560G connecting to a Nexus 7000 - Help!

    Hi all,
    I've been struggling with the configuration for my small environment for a week or so now, and being a Cisco beginner, I'm worried about going down the wrong path, so I'm hoping someone on here would be able to help with my lab configuration.
    As you can see from the graphic, I have been allocated VLANs 16-22 for my use on the Nexus 7000. There are lots of other VLANs in use on the Nexus, by other groups, most of which are routable between one another. VLAN 99 is used for switch management, and VLAN 11 is where the Domain Controller, DHCP and Windows Deployment Server reside for the lab domain. Servers across different VLANs use this DC/DHCP/WDS set of servers. These VLANs route out to the internet successfully.
    I have been allocated eth 3/26 on the Nexus, as my uplink connection to my own ToR 3560G. All of my servers, of which there are around 8 in total, are connected to the 3560. I have enabled IP routing on the 3560, and created VLANs 18-22, providing an IP on each. This config has been assigned to all 48 gigabit ports on the 3560 (using the commands in the graphic), and each Windows Server 2012 R2 Hyper-V host connects to the 3560 via 4 x 1GbE connections. On each Hyper-V host, the 4 x 1GbE ports are teamed, and a Hyper-V vSwitch is bound to that team. I then assign the VLAN ID at the vNIC level.
    Routing between the VLANs is currently working fine - as a test, I can put 2 of the servers on different VLANs, each with their respective VLAN default gateway, and they can ping between one another.
    My challenge is, I'm not quite sure what i need to do for the following:
    1) How should I configure the uplink gi 0/52 on the 3560 to enable my VLANs to reach the internet?
    2) How should I configure eth 3/26 on the Nexus?
    3) I need to ensure that the 3560 is also on the management VLAN 99 so it can be managed successfully.
    4) I do not want to route to VLAN 11, as i intend to have my own domain (DC/DNS/DHCP/WDS)
    Any help or guidance you can provide would be much appreciated!
    Thanks!
    Matt

    Hi again Jon,
    OK, been battling with it a little more.
    Here's the config for the 3560:
    Current configuration : 11643 bytes
    version 12.2
    no service pad
    service timestamps debug uptime
    service timestamps log uptime
    no service password-encryption
    hostname CSP_DX_Cluster
    no aaa new-model
    vtp mode transparent
    ip subnet-zero
    ip routing
    no file verify auto
    spanning-tree mode pvst
    spanning-tree extend system-id
    vlan internal allocation policy ascending
    vlan 16,18-23,99
    interface GigabitEthernet0/1
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 18
    switchport trunk allowed vlan 18-22
    switchport mode trunk
    spanning-tree portfast trunk
    <same through interface GigabitEthernet0/48>
    interface GigabitEthernet0/52
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 16,99
    switchport mode trunk
    interface Vlan1
    no ip address
    interface Vlan16
    ip address 10.0.6.2 255.255.255.252
    interface Vlan18
    ip address 10.0.8.1 255.255.255.0
    interface Vlan19
    ip address 10.0.9.1 255.255.255.0
    interface Vlan20
    ip address 10.0.12.1 255.255.255.0
    interface Vlan21
    no ip address
    interface Vlan22
    ip address 10.0.14.1 255.255.255.0
    interface Vlan99
    ip address 10.0.99.87 255.255.255.0
    ip classless
    ip route 0.0.0.0 0.0.0.0 10.0.6.1
    ip http server
    control-plane
    !
    end
    At the Nexus end, the port connecting to the 3560 is configured as:
    interface Ethernet3/26
      description DX_3560_uplink
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 16,99
      no shutdown
    Now, the problem I'm currently having is that on the 3560, things route fine between VLANs. However, from a server within one of the VLANs, say 18, trying to ping the 3560's default gateway fails. I can ping 10.0.6.2, which is the 3560 end of VLAN 16, but I can't get over to 10.0.6.1 and beyond. I suspect it relates to what you said about "the only thing missing is you also need routes on the Nexus switch for the IP subnets on your 3560 and the next hop IP would be 10.0.6.2 ie the vlan 16 SVI IP on the 3560"
    I suspect that, in layman's terms (my terms!), the Nexus simply doesn't know about the networks 10.0.8.1 (VLAN 18), 10.0.9.1 (VLAN 19) and so on.
    So, I need routes on my Nexus to fix this. The problem is, I'm not quite sure what that looks like.
    Would it be:
    ip route 10.0.8.0 255.255.255.0 10.0.6.2
    ip route 10.0.9.0 255.255.255.0 10.0.6.2
    and so on?
    To give a bit of history: prior to me creating VLANs 18-22 on the 3560, all of the VLANs originally existed on the Nexus. Everything routed fine out to the internet for all of the VLANs (with the same subnet settings that I have configured, i.e. 10.0.8.x for VLAN 18, etc.), so I'm presuming that once I get the Nexus to understand that the IP subnets live on the 3560, traffic should flow successfully to the internet.
    Should.... :-)
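    (Side note on the route syntax asked about above: NX-OS takes static routes in prefix/length form rather than with an IOS-style mask, so a rough sketch for the Nexus end might be the following, assuming the Nexus holds 10.0.6.1 on a VLAN 16 SVI - treat it as an illustration only.)
    feature interface-vlan
    interface Vlan16
      ip address 10.0.6.1/30
      no shutdown
    ip route 10.0.8.0/24 10.0.6.2
    ip route 10.0.9.0/24 10.0.6.2
    ip route 10.0.12.0/24 10.0.6.2
    ip route 10.0.14.0/24 10.0.6.2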

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
    I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as FHRP, rapid-pvst, and OSPF as IGP. Furthermore, the Cat6500s utilize MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise, the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
    In preparing for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s, the upcoming etherchannels will run in "normal" mode, trying to avoid any problems with BA this way. For now, only L2 will be utilized on the N7K, as we're awaiting the 5.2 release, which includes mpls/l3vpn. But all servers/blade switches will be migrated prior to that.
    The questions arise, when migrating Layer3 functionality, incl. hsrp. As per my understanding, hsrp in nxos has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peerlink. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVI's on the N7K's, configuring them with unused ip's and assign the same virtual ip, only decrementing the prio to a value below the current standby-router. Also spanning-tree prio will, if necessary, be modified to better align with hsrp.
    From a routing perspective, I'm thinking of configuring ospf/bgp etc. similar to that of the 6500's, only tweaking the metrics (cost, localpref etc) to constrain forwarding on the 6500's and subsequently migrate both routing and FHRP at the same time. Maybe not in a big bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present, for me this seems like a valid approach, but maybe someone has experience with this (good/bad), so I'm hoping someone has some insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

    In a normal scenario, yes. But not in vPC. HSRP is a bit different in the vPC environment. Even though the SVI is not the HSRP primary, it will still forward traffic. Please see the below white paper.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    I suggest you set up the SVIs on the N7K but leave them in the down state. Until you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and turn up the N7K SVIs. When I say "you are ready", it means the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
    I had a customer who did the same thing that you are trying to do - to avoid downtime. However, out of the 50+ SVIs, we had 1 SVI where HSRP would not establish between the C6K and N7K, so we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec - 1 min for each SVI, but it was less painful and wasted less time because we didn't need to figure out what was wrong or chase any NX-OS bugs.
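    For reference, the "set up the SVIs but leave them down" step would look roughly like this on the N7K (a sketch only; VLAN number, addresses, group and priority are placeholders):
    feature hsrp
    feature interface-vlan
    interface Vlan100
      shutdown
      ip address 10.1.100.3/24
      hsrp 100
        priority 90
        ip 10.1.100.1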
    HTH,
    jerry

  • Dell Servers with Nexus 7000 + Nexus 2000 extenders

    << Original post by smunzani. Answered by Robert. Moving from Document section to Discussions>>
    Team,
    I would like to use some of the existing Dell Servers for new network design of Nexus 7000 + Nexus 2000 extenders. What are my options for FEC to the hosts? All references of M81KR I found on CCO are related to UCS product only.
    What's the best option for the following setup?
    N7K(Aggregation Layer) -- N2K(Extenders) -- Dell servers
    Need 10G to the servers due to dense population of the VMs. The customer is not up for dumping recently purchased dell boxes in favor of UCS. Customer VMware license is Enterprise Edition.
    Thanks in advance.

    To answer your question, the M81KR-VIC is a mezzanine card for UCS blades only. For Cisco rack servers there is a PCIe version, which is called the P81. These are both made for Cisco servers only due to the integration with server management and virtual interface functionality.
    http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/data_sheet_c78-558230.html
    More information on it here:
    Regards,
    Robert

  • LMS 4.2.2 Interface utilisation on Nexus 7000

    Hi All,
    I'm trying to poll some interfaces for their utilization on a nexus 7000 through LMS 4.2.2.
    When I create a poller for the specific instances, the LMS recognises the instances, but after activating the poller I get the error "No Such Instance - The specified instance is not available".
    No info is displayed when I generate an interface utilization report for the specific nexus.
    When I activate the automonitor for interface utilization, the interfaces on the nexus are polled.
    On the cisco website there are some features listed which LMS does not support on the Nexus 7000, but polling is not in that list (neither in the supported feature list).
    Any tips?
    Thanks for your help.
    Joris

    Any Idea..??

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pro of a port channel would be both bandwidth and redundancy. vPC would be preferred.
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load-balance behavior at the default of "route by portID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES? //UCS can perform L2 but northbound should be performing L3.
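    For completeness, the N7K side of a vPC-attached port channel toward a fabric interconnect would be along these lines (a sketch only; port-channel/vPC numbers and interfaces are placeholders):
    feature vpc
    feature lacp
    interface port-channel101
      switchport
      switchport mode trunk
      vpc 101
    interface Ethernet1/1-2
      switchport
      switchport mode trunk
      channel-group 101 mode active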
    Cheers,
    David Jarzynka

  • Privilege Level for Tacacs Account in Nexus 7000

    Hi,
    I have configured TACACS (ACS 4.2) on the Nexus 7000 (as mentioned below) and it works fine, but unlike IOS (6509) it doesn't prompt that you are in user exec mode (>) and then need to type enable and a password for full privilege.
    On the N7K, when I enter "configure terminal", it won't allow me to access other commands.
    How do I log in to privilege level 15 after authenticating from TACACS?
    (config)# show running-config tacacs+
    tacacs-server key 7 "xxxxx"
    tacacs-server host x.x.x.x key 7 "xxxx"
    aaa group server tacacs+ TacServer
        server x.x.x.x (same ip as tacacs-server host)
        use-vrf management
        source-interface Vlan2
    (config)# show running-config aaa
    aaa authentication login default group TacServer
    aaa authentication login console local
    aaa user default-role
    Below are the commands currently accessible in "configure terminal":
    (config)# ?
      no        Negate a command or set its defaults
      username  Configure user information.
      end       Go to exec mode
      exit      Exit from command interpreter
    isb.n7k-dcn-agg-1-sw(config)#

    Hi Jan.nielsen
    The issue is resolved, but in another way.
    I had found the same resolution too, using the custom attribute command, but the custom attribute option for the shell command wasn't available in ACS v4.2. So after enabling shell for the users, clicking Exec --> Shell Exec, and enabling privilege level 15 in the same box of shell options, it started working without any extra commands.
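    For anyone on a TACACS+ server that does support custom attributes, the usual NX-OS approach is to push a role rather than a privilege level, roughly along these lines (a sketch of the attribute only; the role name is just an example):
    shell:roles="network-admin"
    The result can then be checked on the switch with "show user-account".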
