Dell Servers with Nexus 7000 + Nexus 2000 extenders

<< Original post by smunzani. Answered by Robert. Moving from Document section to Discussions>>
Team,
I would like to use some of our existing Dell servers in a new network design of Nexus 7000 + Nexus 2000 extenders. What are my options for FEC to the hosts? All references to the M81KR that I found on CCO relate to the UCS product only.
What's the best option for the following setup?
N7K(Aggregation Layer) -- N2K(Extenders) -- Dell servers
We need 10G to the servers due to the dense VM population. The customer is not willing to dump the recently purchased Dell boxes in favor of UCS. The customer's VMware license is Enterprise Edition.
Thanks in advance.

To answer your question, the M81KR VIC is a mezzanine card for UCS blades only. For Cisco rack servers there is a PCIe version called the P81. Both are made for Cisco servers only, because of their integration with server management and the virtual interface functionality.
More information on it here:
http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/data_sheet_c78-558230.html
Regards,
Robert

Similar Messages

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago; I'm curious whether anything has changed.

    Hi Jenny,
    I think the answer will depend on what you mean by "is FEX supported with vPC?"
    When connecting a FEX to the Nexus 7000 you're able to run vPC from the Host Interfaces of a pair of FEX to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram in the post Nexus 7000 Fex Supported/Not Supported Topologies.
    What you're not able to do is run vPC on the FEX Network Interfaces that connect up to the Nexus 7000, i.e., dual-homing the FEX to two Nexus 7000s. This is shown in illustrations 8 and 9 under the FEX topologies not supported on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
    The view is that when connecting FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEXes, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is supported by a fully redundant Nexus 7000, i.e., supervisors, fabrics, power, I/O modules etc., the availability is limited by the single FEX and so dual-homing does not increase availability. (A configuration sketch of the supported host vPC topology follows at the end of this post.)
    Regards
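    For reference, a minimal configuration sketch of the supported host vPC topology (one FEX single-homed to each N7K, server dual-attached with LACP). The interface, port-channel and vPC numbers here are examples only, and an existing vPC domain/peer link between the two N7Ks is assumed; the equivalent configuration goes on both N7Ks against their own FEX host interface:
    interface Ethernet101/1/1
      description server NIC (FEX 101 on this N7K, FEX 102 on the peer)
      switchport
      switchport mode access
      channel-group 20 mode active
    interface port-channel20
      switchport
      switchport mode access
      vpc 20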

  • Nexus 7000 w/ 2000 FEX - Interface Status When N7K has power issues

    I have multiple nexus 2Ks connected to two N7Ks with my servers connecting to multiple N2Ks accordingly with dual NIC failover capability.
    What happens, or what state do the N2K interfaces go into, if one of my N7Ks loses power, given that the N2K is effectively still receiving power?

    Hi Ans,
    You are right. I defaulted the port and reconfigured it with switchport mode fex-fabric, and now the FET-10G is validated:
    NX7K-1-VDC-3T-S1-L2FP(config-if)#  description FEX-101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   switchport
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   switchport mode fex-fabric
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   fex associate 101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   medium p2p
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   channel-group 101
    NX7K-1-VDC-3T-S1-L2FP(config-if)#   no shutdown
    NX7K-1-VDC-3T-S1-L2FP(config-if)#
    NX7K-1-VDC-3T-S1-L2FP(config-if)# sh int e7/33 status
    Port             Name            Status    Vlan      Duplex  Speed   Type
    Eth7/33          FEX-101         notconnec 1         auto    auto    Fabric Exte
    NX7K-1-VDC-3T-S1-L2FP(config-if)#
    Thanks for your help, and have a nice weekend.
    Atte,
    EF
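    For what it's worth, once the fabric interfaces are in the port-channel you can watch the FEX itself come up with a couple of commands along these lines (FEX 101 as in the output above):
    show fex
    show fex 101 detail
    These show whether FEX 101 has been discovered, is downloading its image, or is registered as Online with the parent switch.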

  • How many Nexus 7000, Nexus 5000 and 2000 can a DCNM deployment support?

    Hi Everyone,
    Good Day! I would like to ask how many Nexus boxes a single DCNM deployment can support, given that we have the recommended server specifications?
    Thanks and Regards,
    Albert

    Hi Lucien,
    I have 2 pairs of Nexus switches in my setup, as follows.
    The first pair connection as below
    ==========================
    Nexus 1 (configured with vpc 1)----- 2 connections------ 6500 catalyst sw1(po 9)
    Nexus 2 (configured with vpc 1) ----- 2 connections------ 6500 catalyst sw2 (po9)
    po2 on nexus pair for vpc peer link
    Spanning tree on Nexus 1
    po1     root fwd1      p2p peer stp
    po2    desg fwd 1   vpc peer link
    Spanning tree on Nexus 2
    po1      altn blk1     p2p peer stp
    po2      root fwd1   vpc peer link
    The second pair connection
    =====================
    Nexus3 (configured with vpc 20 ) ------ 1 connection ------- 6500 catalyst sw1 (po20)
                  (configured with vpc 30) ------- 1 connection ------- 6500 catalyst sw2 (po30)
    Nexus4 (configured with vpc 20) ----- 1connection ---- 6500 catalyst sw1 (po20)(stp guard root)
                  (configured with vpc 30) ----- 1 connection ----6500 catalyst sw2 (po30)(stp guard root)
    po1 on nexus pair for vpc peer link
    Spanning tree on Nexus 3
    po1      desg fwd1        vpc peer link
    po20    root fwd1           p2p peer stp
    po30    altn blk1            p2p peer stp
    Spanning tree on Nexus 4
    po1      root fwd1           vpc peer link
    po20    root fwd 1          p2p peer stp
    po30    altn blk1               p2p peer stp
    Problem Observed :  High Ping response
    Source server on 1st pair of switches  ; Destination server on 2nd pair of switches
    Ping response from 1st pair of switches to destination server : normal (between 1 to 3 ms)
    Ping response from the 2nd pair of switches to the source server: jumping from 3 ms to 100+ ms.
    There are no errors or packet drops on any of the above ports, and I cannot understand why the ping response is high for connections from the second pair.

  • R12 on Dell servers ???

    Hi,
    How is Oracle apps R12 on dell servers with EMC storage?
    Until now I have always used HP or Sun servers. Now they are thinking of an R12 upgrade with Dell servers.
    Any input is appreciated.

    Just get comfortable with Linux and you will be fine. I would go with 64-bit; otherwise you will have a 1.7 GB SGA limitation on 32-bit Linux.
    Oracle Applications Installation and Upgrade Notes Release 12 (12.0) for Linux (64-bit)
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=416305.1
    Oracle Applications Installation and Upgrade Notes Release 12 (12.0) for Linux (32-bit)
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=402310.1
    Oracle Applications Installation and Upgrade Notes Release 12 (12.0) for Solaris Operating System (SPARC)
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=402312.1

  • WYSE terminals, 802.1x, Cisco 3560, Dell Servers

    1) We are having an issue with our WYSE terminals where users can attempt to log in before the device is 802.1x authenticated.
    2) We have Dell servers with Broadcom Gigabit NICs. To run at 1000 Full, the card has to be configured for Auto, so I would assume that the switch should be set for auto, going by best practice. However, outside vendors are saying that they have read that the switch port should be locked at 1000 Full. Please share your thoughts.

    Gigabit ports will auto-negotiate flow control, but they should be set to 1000 Mb statically if you require gigabit port speed.
    Do not set your ports to auto; force them to 1 Gb (standard practice, and my preference).
    The earliest implementations of gigabit ports did not support 'auto' as a speed type, so 1000 Mb was your only option.
    Today's Cisco switches can set 'auto' for gigabit speeds, but I personally stay away from that and use static settings for gigabit ports, negotiating at most the flow control.
    I think it is good practice to statically set your routed ports, server ports, etc., but the ability to auto-negotiate gigabit is in today's switches, so it should be possible for you to set the port to 'auto' and have it work as expected. A sample static port configuration follows.
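    If you do decide to pin the switch side, a minimal IOS sketch for a 3560/3560G access port facing a server NIC would look like this (interface and VLAN numbers are examples only; whatever you choose, make sure the NIC driver settings match, since a speed/duplex mismatch is worse than either option):
    interface GigabitEthernet0/10
    description Dell server NIC
    switchport mode access
    switchport access vlan 10
    speed 1000
    duplex full
    spanning-tree portfast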

  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a dual data center upgrade. It is a pretty standard design; the customer requires an L2 extension between the DCs for vMotion etc. The customer would like to leverage certain features of the Nexus product suite, including:
    Trust Sec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
    As always, cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting back from Customer seems to be mirrored in Cisco's technology roadmap. This relates specifically to the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise Customers ask the question of why they need to have the 7000 and 5000 in their topologies as many of the features they need are supported in both platforms and their environments will never scale to meet such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
    The Nexus 7000 only supports the 2000 on the M-series I/O modules; can FCoE be implemented on a 2000 connected to a 7000 using the M-series I/O module?
    Is the F-series I/O module the only I/O module that supports FCoE?
    Are there any plans to introduce native FC support on the Nexus 7000?
    Are there any plans to introduce full fabric support (230 Gbps) to the M-series I/O module?
    Are there any plans to introduce FabricPath to the M-series I/O module?
    Are there any plans to introduce L3 support to the F-series I/O module?
    Is the entire 2000 series allocated to a single VDC, or can individual 2000 series ports be allocated to a VDC?
    Is TrustSec only supported on multi-hop DCI links when using the ASR with an EoMPLS pseudowire?
    Are there any plans to introduce TrustSec and VDC to the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
    The only I/O card which cannot co-exist with other cards in the same VDC is the F2, due to its specific hardware implementation. (See the sketch after this reply.)
    All other cards can be mixed.
    Regarding the fabric versions: Fabric-2 gives much higher throughput compared with Fabric-1.
    So in order to get full speed from F2/M2 modules you will need Fabric-2 modules.
    Fabric-2 modules won't give any advantage to M1/F1 modules.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex
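    As a rough illustration of the F2 restriction Alex describes, F2 line cards usually end up in a VDC of their own, along these lines (the VDC name and interface range are examples only; check the VDC configuration guide for your release for the exact module-type and interface-range syntax):
    vdc F2-AGG
      limit-resource module-type f2
      allocate interface Ethernet3/1-8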

  • Broadcom LiveLink : Receiving MAC flaps with Cisco Nexus 7000

    We are migrating from using two Nortel 8600's running VRRP at the distribution to Cisco Nexus 7K's using HSRP.  So we have a server connected to two 3750G switches which then connect to the Nexi (previously the 8600's).  As soon as we connected the 3750's to the Nexus and moved the gateway to Nexus, LiveLink forces all the servers to alternate traffic between NIC1 and NIC2. 
    Since LiveLink is a teaming application, it uses virtual mac for nic1 and nic2, but the virtual mac associated with the IP address moves to the active link.
    LiveLink is used to check the availability of the gateway by polling the gateway out of each interface using an ARP request.
    The problem does not exhibit itself in our Cisco VSS environment, and with Nortel's VRRP.  I tried running VRRP on the Nexus but no joy.
    Anyone know of a bug that could cause this issue?

    Unfortunately we have LiveLink enabled on most of our Windows servers in our data centers. One of my colleagues sent me this bug. I'm not sure if this is the cause, but it's worth trying. We will update NX-OS (currently on 5.1.1) next week and see if that fixes the problem.
    •CSCtl85080
    Symptom: Incomplete Address Resolution Protocol (ARP) entries are observed on a Cisco Nexus 7000 Series switch, along with partial packet loss and a memory leak.
    Conditions: This symptom might be seen when ARP packets have a nonstandard size (that is, greater than 64 bytes).
    Workaround: This issue is resolved in 5.1.3.

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000 connected with VPC Peer link. Each of the Nexus 7000 has a FEX attached to it.
    The server has two connections going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
    R(A)              R(S)
    |                     |
    7K Peer Link 7K
    |                     |
    FEX              FEX
    Server connected to both FEX
    The question is: we have two routers, one connected to each of the Nexus 7Ks, running HSRP between them (one active and one standby). How can I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both of them can forward packets.) Will the traffic go to the secondary switch and then via the peer link to the active switch and then to the active router? (From what I have read, packets from end hosts that cross the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
    If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and form a vPC; otherwise, configure those ports as orphan ports, which will rely on the vPC peer link. A basic HSRP sketch for the vPC pair follows below.
    HTH
    Jay Ocampo
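    For completeness, a bare-bones HSRP sketch for an SVI on the vPC pair might look like the following on each N7K (VLAN, addresses and priority are examples only; the peer uses its own real IP and a lower priority, and with vPC both peers forward traffic sent to the virtual MAC regardless of HSRP state):
    feature hsrp
    feature interface-vlan
    interface Vlan10
      no shutdown
      ip address 10.1.10.2/24
      hsrp 10
        preempt
        priority 110
        ip 10.1.10.1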

  • Virtualized Lab Infrastructure - 3560G connecting to a Nexus 7000 - Help!

    Hi all,
    I've been struggling with the configuration for my small environment for a week or so now, and being a Cisco beginner, I'm worried about going down the wrong path, so I'm hoping someone on here would be able to help with my lab configuration.
    As you can see from the graphic, I have been allocated VLANs 16-22 for my use on the Nexus 7000. There are lots of other VLANs in use on the Nexus by other groups, most of which are routable between one another. VLAN 99 is used for switch management, and VLAN 11 is where the Domain Controller, DHCP and Windows Deployment Server reside for the lab domain. Servers across different VLANs use this DC/DHCP/WDS set of servers. These VLANs route out to the internet successfully.
    I have been allocated eth 3/26 on the Nexus, as my uplink connection to my own ToR 3560G. All of my servers, of which there are around 8 in total, are connected to the 3560. I have enabled IP routing on the 3560, and created VLANs 18-22, providing an IP on each. This config has been assigned to all 48 gigabit ports on the 3560 (using the commands in the graphic), and each Windows Server 2012 R2 Hyper-V host connects to the 3560 via 4 x 1GbE connections. On each Hyper-V host, the 4 x 1GbE ports are teamed, and a Hyper-V vSwitch is bound to that team. I then assign the VLAN ID at the vNIC level.
    Routing between the VLANs is currently working fine. As a test, I can put 2 of the servers on different VLANs, each with their respective VLAN default gateway, and they can ping one another.
    My challenge is, I'm not quite sure what i need to do for the following:
    1) How should I configure the uplink gi 0/52 on the 3560 to enable my VLANs to reach the internet?
    2) How should I configure eth 3/26 on the Nexus?
    3) I need to ensure that the 3560 is also on the management VLAN 99 so it can be managed successfully.
    4) I do not want to route to VLAN 11, as I intend to have my own domain (DC/DNS/DHCP/WDS)
    Any help or guidance you can provide would be much appreciated!
    Thanks!
    Matt

    Hi again Jon,
    OK, been battling with it a little more.
    Here's the config for the 3560:
    Current configuration : 11643 bytes
    version 12.2
    no service pad
    service timestamps debug uptime
    service timestamps log uptime
    no service password-encryption
    hostname CSP_DX_Cluster
    no aaa new-model
    vtp mode transparent
    ip subnet-zero
    ip routing
    no file verify auto
    spanning-tree mode pvst
    spanning-tree extend system-id
    vlan internal allocation policy ascending
    vlan 16,18-23,99
    interface GigabitEthernet0/1
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 18
    switchport trunk allowed vlan 18-22
    switchport mode trunk
    spanning-tree portfast trunk
    <same through interface GigabitEthernet0/48>
    interface GigabitEthernet0/52
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 16,99
    switchport mode trunk
    interface Vlan1
    no ip address
    interface Vlan16
    ip address 10.0.6.2 255.255.255.252
    interface Vlan18
    ip address 10.0.8.1 255.255.255.0
    interface Vlan19
    ip address 10.0.9.1 255.255.255.0
    interface Vlan20
    ip address 10.0.12.1 255.255.255.0
    interface Vlan21
    no ip address
    interface Vlan22
    ip address 10.0.14.1 255.255.255.0
    interface Vlan99
    ip address 10.0.99.87 255.255.255.0
    ip classless
    ip route 0.0.0.0 0.0.0.0 10.0.6.1
    ip http server
    control-plane
    l
    end
    At the Nexus end, the port connecting to the 3560 is configured as:
    interface Ethernet3/26
      description DX_3560_uplink
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 16,99
      no shutdown
    Now, the problem I'm currently having is that on the 3560, things route fine between VLANs. However, from a server within one of the VLANs, say 18, trying to ping the 3560's default gateway fails. I can ping 10.0.6.2, which is the 3560 end of VLAN 16, but I can't get over to 10.0.6.1 and beyond. I suspect it's related to what you said about "the only thing missing is you also need routes on the Nexus switch for the IP subnets on your 3560 and the next hop IP would be 10.0.6.2 ie the vlan 16 SVI IP on the 3560".
    In layman's terms (my terms!), I suspect the Nexus simply doesn't know about the networks 10.0.8.1 (VLAN 18), 10.0.9.1 (VLAN 19) and so on.
    So, I need routes on my Nexus to fix this. The problem is, I'm not quite sure what that looks like.
    Would it be:
    ip route 10.0.8.0 255.255.255.0 10.0.6.2
    ip route 10.0.9.0 255.255.255.0 10.0.6.2 and so on?
    To give a bit of history, prior to me creating VLANs 18-22 on the 3560, all the VLANs originally existed on the Nexus. Everything routed fine out to the internet for all of the VLANs (with the same subnet settings that I have configured, i.e. 10.0.8.x for VLAN 18 etc.), so I'm presuming that once I get the Nexus to understand that those IP subnets now live on the 3560, traffic should flow successfully to the internet (a rough Nexus-side sketch follows below).
    Should.... :-)
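    A minimal sketch of the Nexus side, assuming the VLAN 16 point-to-point SVI (10.0.6.1/30) lives on the N7K and the subnets are the ones shown above. Note that NX-OS static routes take a prefix length rather than a dotted mask, and SVIs require the interface-vlan feature:
    feature interface-vlan
    vlan 16
    interface Vlan16
      no shutdown
      ip address 10.0.6.1/30
    ip route 10.0.8.0/24 10.0.6.2
    ip route 10.0.9.0/24 10.0.6.2
    ip route 10.0.12.0/24 10.0.6.2
    ip route 10.0.14.0/24 10.0.6.2
    If 10.0.6.1 actually sits on a router upstream of the Nexus, the same static routes (or redistribution into the IGP) would need to go there instead.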

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
    I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as the FHRP, Rapid PVST+, and OSPF as the IGP. Furthermore, the Cat6500s run MPLS/L3VPN with BGP for two thirds of the VLANs. Otherwise, the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
    In preparing for the migration, VTP will be discontinued and the VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge Assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming EtherChannels will run in "normal" mode, to avoid any problems with BA that way. For now, only L2 will be used on the N7Ks, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
    The questions arise when migrating the Layer 3 functionality, incl. HSRP. As I understand it, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs, assigning the same virtual IP, and only decrementing the priority to a value below the current standby router's. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
    From a routing perspective, I'm thinking of configuring ospf/bgp etc. similar to that of the 6500's, only tweaking the metrics (cost, localpref etc) to constrain forwarding on the 6500's and subsequently migrate both routing and FHRP at the same time. Maybe not in a big bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present, for me this seems like a valid approach, but maybe someone has experience with this (good/bad), so I'm hoping someone has some insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

    In a normal scenario, yes. But not in vPC. HSRP is a bit different in the vPC environment. Even though the SVI is not the HSRP primary, it will still forward traffic. Please see the below white paper.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    I would suggest setting up the SVIs on the N7K but leaving them in the down state (a rough sketch of a staged SVI follows below). Once you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "you are ready" I mean the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
    I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, we had one SVI for which HSRP would not establish between the C6K and the N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec to 1 min per SVI, but it was less painful and wasted less time, because we did not need to figure out what was wrong or chase any NX-OS bugs.
    HTH,
    jerry
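    A rough sketch of the staged-SVI approach described above, as it might look on the N7K while the C6K is still the live gateway (VLAN, addresses and priority are examples only; the SVI stays shut down and the priority stays below the C6K's until cutover):
    feature hsrp
    feature interface-vlan
    interface Vlan100
      shutdown
      ip address 10.10.100.3/24
      hsrp 100
        priority 90
        ip 10.10.100.1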

  • Ask the Expert: Basic Introduction and Troubleshooting on Cisco Nexus 7000 NX-OS Virtual Device Context

    With Vignesh R. P.
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions of Cisco expert Vignesh R. P. about the Cisco® Nexus 7000 Series Switches and support for the Cisco NX-OS Software platform.
    The Cisco® Nexus 7000 Series Switches introduce support for the Cisco NX-OS Software platform, a new class of operating system designed for data centers. Based on the Cisco MDS 9000 SAN-OS platform, Cisco NX-OS introduces support for virtual device contexts (VDCs), which allows the switches to be virtualized at the device level. Each configured VDC presents itself as a unique device to connected users within the framework of that physical switch. The VDC runs as a separate logical entity within the switch, maintaining its own unique set of running software processes, having its own configuration, and being managed by a separate administrator.
    Vignesh R. P. is a customer support engineer in the Cisco High Touch Technical Support center in Bangalore, India, supporting Cisco's major service provider customers in routing and MPLS technologies. His areas of expertise include routing, switching, and MPLS. Previously at Cisco he worked as a network consulting engineer for enterprise customers. He has been in the networking industry for 8 years and holds CCIE certification in the Routing & Switching and Service Provider tracks.
    Remember to use the rating system to let Vignesh know if you have received an adequate response. 
    Vignesh might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through January 18, 2013. Visit this forum often to view responses to your questions and the questions of other community members.

    Hi Vignesh
    Is there any limitation on connecting a N2K directly to the N7K?
    If I have one F2 card (10G) and another F2 card (1G) and I want to create 3 VDCs:
    VDC1=DC-Core
    VDC2=Aggregation
    VDC3=Campus core
    do we need to add a link between the different VDCs?
    thanks
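    On the last point: as the VDC description above implies, each VDC behaves as an independent switch, so traffic between the DC-Core, Aggregation and Campus-core VDCs does need external links (cables between ports owned by the different VDCs); there is no internal software path between VDCs. A rough sketch of carving out one VDC and moving into it (the name and interface range are examples only):
    vdc Aggregation
      allocate interface Ethernet3/1-8
    switchto vdc Aggregation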

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy. vPC would be preferred (a rough N7K-side sketch follows below).
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "route by port ID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES? UCS can perform L2 but Northbound should be performing L3.
    Cheers,
    David Jarzynka
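    A rough N7K-side sketch of the vPC toward one Fabric Interconnect (domain ID, keepalive addresses, port-channel and interface numbers are all examples; the mirror-image configuration goes on the second N7K, and the matching port-channel is enabled on the UCS uplink ports in UCSM):
    feature vpc
    feature lacp
    vpc domain 1
      peer-keepalive destination 192.0.2.2 source 192.0.2.1
    interface port-channel1
      switchport
      switchport mode trunk
      vpc peer-link
    interface port-channel11
      switchport
      switchport mode trunk
      vpc 11
    interface Ethernet1/1-2
      switchport
      switchport mode trunk
      channel-group 11 mode active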

  • Query Nexus 7000 Enviroment Status

    Hi,
    I am trying to figure out how to query a Nexus 7010 chassis about its environment. For our IOS switches we use SNMP with OID 1.3.6.1.4.1.9.9.13.1 and the related sub-OIDs, but this does not work on the Nexus 7010 running version 5.1. Is querying this information not supported, or is there another OID?

    Hi,
    Check out the link below for the Nexus 7000 MIB reference (and see the snmpwalk sketch after this reply):
    http://www.cisco.com/en/US/docs/switches/datacenter/sw/mib/quickreference/Cisco_Nexus_7000_Series_NX-0S_MIB_Quick_Reference_chapter1.html#con_40545
    Hope to Help !!
    Ganesh.H
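    As a quick sanity check of what the box will answer over SNMP, something along these lines can be used once a community is configured on the N7K (community string, address and OID are placeholders; pick the exact environmental OIDs from the MIB reference above, since the 1.3.6.1.4.1.9.9.13 tree polled on IOS is the one the question says the N7K does not answer):
    snmpwalk -v2c -c <community> <n7k-mgmt-ip> 1.3.6.1.2.1.47.1.1.1.1.2
    The OID 1.3.6.1.2.1.47.1.1.1.1.2 is the standard ENTITY-MIB entPhysicalDescr column, which should list the chassis, modules, fans and power supplies if SNMP is reachable at all.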

  • Rule based span on Nexus 7000

    Hi all,
    I'm trying to configure rule-based SPAN on my Nexus 7000.
    I want to monitor some VLANs, but limit the traffic going to my monitor station by using a frame-type ipv4 filter.
    The link below explains how to configure it, but my Nexus doesn't recognise the command "mode extended".
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/nx-os/system_management/configuration/guide/sm_nx_os_cg/sm_14span.html#wp1286697
    Am I missing something? I'm running version 6.1.3.
    Thanks,
    Joris
    NEXUS(config)# monitor session 1
    NEXUS(config-monitor)# mode extended
                                       ^
    % Invalid command at '^' marker.
    NEXUS(config-monitor)# mode ?
    *** No matching command found in current mode, matching in (exec) mode ***
      connect  Notify system on modem connection
      restart  Reenabling modem port

    Hi Joris,
    Rule-based SPAN filtering was not introduced until NX-OS 6.2, so it will not be available to you on NX-OS 6.1(3). (A plain VLAN SPAN sketch that does work on 6.1 follows below.)
    See the section SPAN in the NX-OS 6.2 release notes.
    Regards
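    In the meantime, plain VLAN SPAN (without the frame-type filter) is available on 6.1; a minimal sketch, with the VLAN and destination interface as examples only (on the Nexus 7000 the destination port is also expected to be in switchport monitor mode):
    interface ethernet 1/10
      switchport
      switchport monitor
    monitor session 1
      source vlan 10 rx
      destination interface ethernet 1/10
      no shut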

Maybe you are looking for

  • Generated JNLP and closing the original browser window

    Hi All, This is my first post so bear with me. Here is my problem. I am using Web Start for an application. I need to check to see if web start is installed or not. If it is NOT I need to auto-install if it is I need to call a servlet that dynamicall

  • Error while exporting : ORA-24801: illegal parameter value in OCI lob funct

    hello, I am doing an export on a 10.2.0.4 , solaris machine. The table i am doing an export has a blob & clob. During the export I get the error : EXP-00056: ORACLE error 24801 encountered ORA-24801: illegal parameter value in OCI lob function In met

  • Unable to configure/build using studio12u1 compiler

    I am trying this on sol10 host: Solaris 10 10/08 s10s_u6wos_07b SPARC Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 27 October 2008 Here is my script that I run to start configure: #!/bin/ksh pw

  • CD disk drive noisy, will not burn CD

    On my laptop, MacBook Pro  - - -  While in iTunes I cannot burn CDs if I have been listening to music for any period of time. I can successfully burn a CD only when I initially launch iTunes. Usually I am listening to music, create playlists and want

  • Airport Express Light Out

    I came home today to find myself disconnected from my Airport Express. When I went to check on it the light was neither green nor orange nor flashing. It was just off. I tried moving it to another outlet to test the outlets operability to no solution