UCS Uplinks and vPC
Hello,
I am trying to build virtual port channels on two Nexus 5548UPs and two UCS 6248 FIs. With standalone links (without vPC), the communication between some ESX servers and my network works. When I build vPCs on the N5Ks, all port channels (on the N5Ks and the FIs) come up.
The port channels are pinned to the vNICs and everything looks fine, but there is no communication between my ESX hosts and my network. My configuration is like this:
Hi Roberts,
Are the VLANs allowed on the peer link? If the servers work with standalone links, then the trunks are fine; but if the VLANs are not allowed on the peer link, the vPC will stay up but traffic won't pass.
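As a quick sanity check, the peer-link trunk on each N5K might look like this minimal sketch (the port-channel number and VLANs are assumptions, adjust to your setup):

```
! On each Nexus 5548UP: make sure the ESX VLANs are allowed on the peer link
interface port-channel10
  description vPC peer-link
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  vpc peer-link

! Then verify
show vpc consistency-parameters global
show interface trunk
```

show interface trunk confirms which VLANs are actually forwarding on the peer link and on the vPC member ports.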
Similar Messages
-
Create port channel between UCS-FI and MDS 9124 (F Mode)
Dear Team,
We were trying to create a port channel between the UCS FI and an MDS 9124,
but the port channel does not become active in F mode on the MDS 9124.
FI is in FC End Host Mode
We have enabled FC uplink trunking on FI
We have enabled NPIV on MDS
We have enabled trunk on MDS
FI and MDS in default VSAN
To check, we changed the FI to FC switching mode and the port channels became active, but in E mode.
When we enabled FC uplink trunking on the FI in FC switching mode, the port channels became active in TE mode.
But in both of the above cases, show flogi database shows only the WWPNs of the SAN, not any from the FI.
How do we achieve this?
We have read that there is no need to change to FC switching mode and that we should keep FC end-host mode.
So how do we achieve a port channel in F mode between the MDS and the FI (mode showing as NProxy)?
Does this have anything to do with the MDS NX-OS version? (https://supportforums.cisco.com/thread/2179129)
If yes, how do we upgrade? The port licenses came along with the device, and we do not have any PAC/PAK or license file.
Also, we have seen two files available for download (m9100-s2ek9-kickstart-mz.5.2.8b.bin and m9100-s2ek9-mz.5.2.8b.bin); which do we use?
Thanks and Regards
Jose
Hi Jo Bo,
What version of software is your MDS running?
On your UCS, run connect nxos and show interface brief, and look at the MAC addresses.
It is possible that you are hitting the bug below. If this is the case, you might need to upgrade the firmware on your MDS.
Add MAC OUI "002a6a", "8c604f", "00defb" for 5k/UCS-FI
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCty04686
Symptom:
Nexus switch unable to connect to any other Nexus or Cisco switch in NPV mode with an F port-channel. The issue might be seen in earlier 5.1 releases like 5.1(3)N1(1a), but not in the latest 5.1(3)N2(1c) release. The issue is also seen in 5.2(1)N1(1) and 6.0(2)N1(1) and later releases.
Conditions:
Nexus configured for SAN port channels or NPIV trunking mode. Nexus connected to UCS via a regular F port channel, with UCS in NPV mode. NPV edge switch: port WWN OUI from a UCS FI or other Cisco-manufactured switch: xx:xx:00:2a:6a:xx:xx:xx or xx:xx:8c:60:4f:xx:xx:xx.
Workaround:
Turn off trunking mode on the Nexus 5k TF port (the issue does not happen with a standard F port), or remove the SAN port-channel config.
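Concretely, checking for the affected OUIs and applying the workaround might look like this sketch (the san-port-channel number is an assumption):

```
! On the UCS FI: check the MAC/WWPN OUIs
connect nxos
show interface brief    ! look for OUIs 00:2a:6a, 8c:60:4f, 00:de:fb

! On the Nexus 5k: fall back from TF to a standard F port
interface san-port-channel 103
  switchport trunk mode off
```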
Further Problem Description:
To verify the issue, please collect show flogi internal event-history errors. Each time the port login is attempted, the OLS, NOS, and LRR counters will increment. This can be determined via the following output: show port internal info all, show port internal event-history errors. -
Nexus 1000v UCS Manager and Cisco UCS M81KR
Hello everyone
I am confused about how the integration between the N1K and UCS Manager works:
First question:
If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, is the data flow between them managed by the upstream switch (in this case the UCS Fabric Interconnect)?
I created an Ethernet uplink port-profile on the N1K in switchport mode access (VLAN 100), and I created a vEthernet port-profile for the VMs in switchport mode access (VLAN 100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs are). I also created VLAN 100 (the same as in the N1K).
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
Third question: How does VLAN tagging work on the Fabric Interconnect and in the N1K?
I tried to read different documents, but I did not understand.
Thanks
This document may help...
Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, want to talk to each other, is the data flow between them managed by the upstream switch (in this case the UCS Fabric Interconnect)?
- Yes. Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network would need to bridge the VLAN between the two physical NICs.
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as the native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
- The N1K port profiles are switchport access, making the traffic untagged. This corresponds to the native VLAN in UCS. If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
Third question: How does VLAN tagging work on the Fabric Interconnect and in the N1K?
- All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively (untagged). In the N1K, you will want to leave your vEthernet port profiles as 'switchport mode access'. For your Ethernet profiles, you will want 'switchport mode trunk'. Use an unused VLAN as the native VLAN. All production VLANs will be passed from the N1K to UCS as tagged VLANs.
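Putting those points together, the two N1K port profiles might look like this sketch (VLAN 100 for the VMs and VLAN 999 as the unused native VLAN are assumptions):

```
! Uplink profile: trunk, unused native VLAN, production VLANs tagged
port-profile type ethernet UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 100,999
  no shutdown
  state enabled

! VM-facing profile: plain access port
port-profile type vethernet VM-VLAN100
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

The matching UCS vNIC then needs VLAN 100 allowed as a tagged VLAN; only the (unused) native VLAN is passed untagged.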
Thank You,
Dan Laden
PDI Helpdesk
http://www.cisco.com/go/pdihelpdesk -
New deployment of a pair of 4500X in VSS mode and Cisco UCS.
FI-A has 1 10G link to each 4500X
FI-B has 1 10G link to each 4500X
How should the ports and port channels on the 4500X be configured for UCS uplinks?
Hi Reed,
In the end you will just create two port channels, one to each FI.
This is the documentation to create etherchannel on 4500X.
http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.2/15.02SG/configuration/guide/channel.html#wp1020670
The interfaces "Ten 1/1" of each 4500X will be part of the first EtherChannel, and the interfaces "Ten 1/2" will be in the second. (This is just a representation, not the real interface numbers.)
Remember to use mode active (LACP) for the EtherChannel, because LACP is enabled by default on the Fabric Interconnects.
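On each half of the 4500X VSS, that might look like this sketch (the interface numbers are representative, as noted above):

```
! Port channel 1 -> FI-A: one 10G link from each VSS chassis
interface TenGigabitEthernet1/1/1
 switchport mode trunk
 channel-group 1 mode active   ! LACP, matching the FI default
interface TenGigabitEthernet2/1/1
 switchport mode trunk
 channel-group 1 mode active

! Repeat with channel-group 2 for the links to FI-B
```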
Richard -
We're getting ready to implement a UCS B Series. I have a question about the 6248UP FI. I've read and used the emulator for UCS Manager and noticed you can configure FI ports as Appliance ports for storage. Is this limited to a certain storage protocol or vendor? We use Dell EQ and plan to connect the EQ 6510X to it. Just wondering if that is supported and if upstream iSCSI traffic would be able to access the storage?
The 6248UPs will be uplinked to a pair of Catalyst 4506-E running VSS. We have a IBM blade chassis and the servers would need to be able to access the EQ6510X too. I'm assuming i just need to trunk the iSCSI VLAN to the 6248UP and the servers would have access?
Thanks!
Hi Cowetac,
Yes. Not all storage arrays are supported by UCS, but your Dell EqualLogic is supported. If you have any questions about the compatibility of other storage arrays, you can take a look at the UCS Storage Interoperability Matrix (see link below).
UCS Storage Interoperability Matrix (Table 10-2 UCS-B Storage Support Matrix)
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.html
I'm assuming i just need to trunk the iSCSI VLAN to the 6248UP and the servers would have access?
It depends. If you are going to use an appliance port on UCS to connect directly to the storage, you can only use a single VLAN.
If you are connecting via the Ethernet uplinks through a switch, then you would have to set your switch's port as a trunk. -
Hi everyone,
I used a GLC-T to connect a UCS 6120 and a Cisco 2960S (RJ45), and I have an "SFP validation failed" problem.
I set the speed to 1G, and then I get the error "link not connected" even though I have connected the network cable.
UCS6120-A(nxos)# show interface e1/7
Ethernet1/7 is down (SFP validation failed)
Hardware: 1000/10000 Ethernet, address: 0005.73d2.934e (bia 0005.73d2.934e)
Description: U: Uplink
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA
Port mode is trunk
auto-duplex, 1000 Mb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped never
Last clearing of "show interface" counters never
30 seconds input rate 0 bits/sec, 0 bytes/sec, 0 packets/sec
30 seconds output rate 0 bits/sec, 0 bytes/sec, 0 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
RX
0 unicast packets 0 multicast packets 0 broadcast packets
0 input packets 0 bytes
-
Nexus 5548UP - HSRP and vPC, tracking required?
Hi,
We've got two Nexus 5548UPs that are vPC and HSRP peers.
I've had some feedback that I should incorporate the tracking function to shut the vPC down in the case of a Layer 3 problem; the thing is, I'm not sure it's required. I can see this article recommends implementing tracking when your L2 peer link and L3 interfaces are on the same module (which they are in my case): http://www.cisco.com/en/US/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
But in this article it says not to use tracking.. http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/design_guide_c07-625857.pdf
Has anyone got any real-world experience and can offer some feedback? I don't mind putting it in; I just want to understand why.
Thanks,
Nick.
Hi Nick,
There are two kinds of tracking that can be used in a Nexus environment:
HSRP tracking and vPC tracking.
When using one line card for the vPC peer link, vPC object tracking is recommended.
HSRP tracking is used to track the L3 uplinks to the core.
Using vPC with HSRP/VRRP object tracking may lead to traffic blackholing if the object tracking is triggered.
It's better to use a separate L3 inter-switch link instead of using HSRP tracking.
Hope this helps. -
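For reference, vPC object tracking might be configured roughly like this sketch (the track numbers and interfaces are assumptions):

```
! Track the peer link and uplinks that sit on the shared line card
track 1 interface port-channel10 line-protocol   ! vPC peer-link
track 2 interface Ethernet1/1 line-protocol      ! L3 uplink
track 10 list boolean or
  object 1
  object 2

vpc domain 1
  track 10   ! suspend vPC member ports only if all tracked objects go down
```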
Port-Channel issue between UCS FI and MDS 9222i switch
Hi
I have a problem with the port channel between the UCS FI and the MDS switch. When MDS-A is powered down, the port channel fails, but the UCS blade vHBA does not detect the failure of the port channel on the UCS FI and leaves the vHBA online. However, if there is no port channel between the FI and the MDS, it works fine.
UCS version
System version: 2.0(2q)
FI - Cisco UCS 6248 Series Fabric Interconnect ("O2 32X10GE/Modular Universal Platform Supervisor")
Software
BIOS: version 3.5.0
loader: version N/A
kickstart: version 5.0(3)N2(2.02q)
system: version 5.0(3)N2(2.02q)
power-seq: Module 1: version v1.0
Module 3: version v2.0
uC: version v1.2.0.1
SFP uC: Module 1: v1.0.0.0
MDS 9222i
Software
BIOS: version 1.0.19
loader: version N/A
kickstart: version 5.0(8)
system: version 5.0(8)
Here is the config from MDS switch
Interface Vsan Admin Admin Status SFP Oper Oper Port
Mode Trunk Mode Speed Channel
Mode (Gbps)
fc1/1 103 auto on trunking swl TF 4 10
fc1/2 103 auto on trunking swl TF 4 10
fc1/9 103 auto on trunking swl TF 4 10
fc1/10 103 auto on trunking swl TF 4 10
This is from FI.
Interface Vsan Admin Admin Status SFP Oper Oper Port
Mode Trunk Mode Speed Channel
Mode (Gbps)
fc1/29 103 NP on trunking swl TNP 4 103
fc1/30 103 NP on trunking swl TNP 4 103
fc1/31 103 NP on trunking swl TNP 4 103
fc1/32 103 NP on trunking swl TNP 4 103
Any thoughts on this?
Sultan,
This is a recently found issue and is fixed in UCSM version 2.0(3a).
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCua88227
which was duped to CSCtz21585
It happens only when the following conditions are met:
FI in end-host mode
FC uplinks configured for port channel + trunking
Certain link failure events (such as abrupt power loss of the upstream MDS switch)
Padma -
Mixing BE6000 UCS Server and "normal" UCS server in the same deployment
Hello,
I have been handed a project which has one high density BE6000 UCS server and a separate UCS C220 M3 server. The latter server was included to host a MediaSense call recording system but this will only use 2 of the available 8 vCPUs on the UCS C220 M3 server.
The total number of users is 200.
I want to implement a resilient system and so would like deploy two servers in a cluster for each of the following applications:
CUCM 10.5
Unity Connection 10.5
IM & Presence 10.5
As well as these applications there will be UCCX 10.6 and Cisco Unified Attendant Console Advanced (10.5) but these will be deployed as single servers.
Looking at the UCS servers they have capacity for me to split the CUCM/CUC/IMP clusters between them.
I cannot see any technical reason why this will not work but do not want to be caught by any Cisco support policies.
If I were to implement the system in this way would there be any issues with the deployment or getting support from TAC.
The separate UCS C220 M3 server has 8 x 8GB RAM (64GB total) and 8 x 300GB HDDs plus a quad-port Ethernet card.
James, first of all I am not sure I understand your query in detail. Do you mean you have two UCS C220 M3 servers, and one of them is currently running BE6000?
Having said that, the key here is to carefully plan your deployment against the capacity of the servers you have.
E.g. deploying 200 users, using the following OVAs:
UCS 220 M3 server 1: (using default TRC ie 8vCPU with 8GB per vCPU)
Publisher (2500 cucm OVA): 1vCPU (6GB RAM), 80GB HD
IMP-Publisher (1,000 OVA) :1vCPU(2GB RAM), 80GB HD
CUC publisher (1,000-user OVA): 2 vCPU (6GB RAM), 160GB HD (NB: 1 vCPU is reserved for ESXi)
UCCX-Master: (300 agent OVA): 2 vCPU (8GB) and 292GB HDD
With this placement you have used a total of 22GB RAM and 612GB HDD.
A breakdown is shown below. Server: C220 M3S TRC#1 (Medium)

Application  Application Long Name / Release                              VM Name                vCPU  vRAM  vDisk
CUCM         Unified Communications Manager 10.x                          CallCtrl: 2,500 users  1     4     80
IM&P         IM & Presence 10.x                                           1,000 users            1     2     80
CUC          Unity Connection 10.0                                        1,000 users            1     4     160
ESXi         (reservation for Unity Connection)                           -                      1*    -     -
CUCCX        Cisco Unified Contact Center Express / Unified IP IVR 10.x   Main: 300 agents       2     8     292
ESXi         VMware vSphere ESXi 5.5                                      -                      -     4**   -
* Note: This is a 1 physical CPU core per host regardless of the number of Cisco Unity Connection (CUC) VMs.
** Note: This is 4GB physical RAM per host. -
Cisco UCS Blades and vSphere DPM
I followed this guide:
https://supportforums.cisco.com/docs/DOC-8582
And it worked, but I have 2 problems.
1) The blade being put into standby starts back up immediately, almost like a reboot, and doesn't stay in standby mode.
2) The blade being put into standby mode has faults on all vFabrics etc. because it is "off".
Any suggestions?
Jim
Hi Rob and Jim,
How did you guys progress with this one? I am having the same issue and be interested to know the solution.
My environment is vSphere 5.0 with B230 M2 and DPM is not yet enabled. UCS Manager and CIMC are running 2.0(1s).
On testing standby from the vSphere client, the blade reboots automatically (no power down). A shutdown command gives the same result (a reboot) together with a host connection failure alert.
Any help is appreciated.
Thanks,
Noli -
Difference between Port Channel and VPc
Hi Friends,
Could you please provide the difference between Port Channel and VPC.
Regards,
Zaheer
Read :)
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-685753.html
Virtual PortChannel Technology
Virtual PortChannels (vPCs) allow links that are physically connected to two different Cisco® switches to appear to a third downstream device to be coming from a single device and as part of a single PortChannel. The third device can be a switch, a server, or any other networking device that supports IEEE 802.3ad PortChannels.
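A minimal vPC configuration on a pair of Nexus switches might look like this sketch (the domain number, addresses, and port-channel numbers are all examples):

```
! On each vPC peer
feature vpc
vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1  ! other peer's mgmt IP

interface port-channel10
  switchport mode trunk
  vpc peer-link            ! link between the two peers

interface port-channel20
  switchport mode trunk
  vpc 20                   ! member port channel toward the downstream device
```

The downstream device just configures a regular (preferably LACP) port channel; it sees the two peers as one switch.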
Cisco NX-OS Software vPCs and Cisco Catalyst® Virtual Switching Systems (VSS) are similar technologies. For Cisco EtherChannel technology, the term “multichassis EtherChannel” (MCEC) refers to either technology interchangeably.
vPC allows the creation of Layer 2 PortChannels that span two switches. At the time of this writing, vPC is implemented on the Cisco Nexus® 7000 and 5000 Series platforms (with or without Cisco Nexus 2000 Series Fabric Extenders). -
Difference between vpc peer-switch and vpc+
Hi, I would like to understand the difference between vPC peer-switch (used with vPC) and vPC+ (used with FabricPath), since both are intended to achieve the same thing, i.e. making the two Nexus switches look like one logical switch to another device connected to them.
Hi,
vPC+ overcomes a problem we would have when connecting non FabricPath devices to the FabricPath cloud in a resilient way using port-channels.
If you look at the first diagram below, when Host A sends traffic to Host B, it can be sent over either link of the port-channel and so can take the path via either S100 or S200. The problem with this is that, from the perspective of the MAC address table on S300, the MAC address of Host A would constantly flap between being sourced from S100 and S200.
What happens with vPC+ is that S100 and S200 create an emulated switch that effectively becomes the point of connection of the port-channel interface. This is seen in the second diagram as S1000. Now when traffic is sent from Host A it is always seen as originating from S1000 so there's no longer any MAC flapping.
Hope that helps.
Regards -
Hi,
I am confused about the concepts of 802.11n uplink and downlink. I would be grateful if you could enlighten me.
1. Do the uplink and downlink coexist and cooperate, similar to full duplex on an Ethernet switch?
2. The max throughput of 802.11n is 540M; why do some documents mention 200M?
rdgs
rdgs,
To answer your questions and hopefully provide some clarity to the information out there regarding 802.11n technology.
>1. Do the uplink and downlink coexist and cooperate, similar to full duplex on an Ethernet switch?
All wireless has an uplink and a downlink in the same manner as a hub does, and 802.11n is no different (an exception to this will be clarified below). So why are they shown separately in many spec sheets and marketing material? Because it was discovered that some manufacturers' access points had a slower uplink speed than downlink speed. This issue is well documented across many different brands, and it is for that reason that some now market the two directions separately.
>2. The max throughput of 802.11n is 540M; why do some documents mention 200M?
This discrepancy is easy to explain. First, let's remember that 802.11n can run in two different modes: 20MHz channels or 40MHz channels.
Basically, a wireless access point is a hub: it has shared bandwidth, and that is no different with 802.11n on a 20MHz channel. But if you enable 40MHz channels with channel bonding, you get a higher data rate, since transmission and reception are spread across the 40MHz span. It is not full duplex like a switch. The other factor in throughput can be the M-Drive or Client Link piece, which means the access point can dynamically adjust transmit time, power level, and antenna to optimize the signal to a particular client. The number of clients that any given access point will support varies by model and manufacturer. Right now Cisco is the only one that truly has M-Drive, although others have something similar.
Hope this helps. -
UCS Backup and Import Specifics
We are opening up a new data center that will mirror our current one for DR purposes. We have three UCS domains in our current DC and will have three in the new one. If I take an ALL Config backup of my current UCS domains and import that backup to the corresponding domain in the new DC, what actually comes across to the new FI's? The Cisco docs I have been looking at could use some more clarification when they describe what comes across in the system and logical portions of the all config import to a different FI. I don't want to impact my current UCS domains, so I am looking to clarify that so it does not happen.
Hi
I hope I understand your problem: you would like to clone the UCS domains from the existing DC to the DR site.
However, taking a full backup (with "preserve identities" selected), doing an initial setup on the DR site, and then importing the backup will create duplicate pools (MAC, UUID, ...) and policies. This will give you overlapping MACs, UUIDs, ...!
Q. In your existing UCS domains, did you make sure that e.g. your MAC pools are not overlapping?
Q. Did you consider using UCS Central?
Q. Do you plan to use global service profiles and therefore global pools, and to allow failover to the DR site?
Have a look at https://supportforums.cisco.com/discussion/12259771/wwnn-and-wwpn-pools
Walter. -
I'm new to datacenter design, and I've recently taken over a data center setup. I'm trying to determine if my vPCs and port channels are set up correctly. I've attached a drawing of the network setup. It seems pretty straightforward:
NIC teamed server with each NIC going to a separate FEX.
The FEXes are straight-through to the 5596s.
Using LACP on the switches and servers.
NX-OS 5.0(3)
My questions are: 1) Is the FEX number only locally significant to the 5596 it's attached to? Should this number be the same or different on each 5596, or does it not matter? I haven't been able to find anything specifying what this is supposed to be for this setup.
2) How can I verify the vPCs/portchannels are working properly?
Doing a 'show vpc' shows up/success on both switches.
'show port-channel summary' shows my channel as SU and my port as P.
Doing a 'show interface pXXX' shows my port-channel and vPC if I run the command on both 5596s, but each only shows one member in the channel, which is the locally connected interface. I would think I'm supposed to see both interfaces channeled together here.
If I take one interface down, I don't get any messages on the other switch that the other side of the port channel has dropped. It also seems like traffic isn't being distributed equally.
I hope this makes sense. I've been trying to figure this environment out for weeks. Just when I think I've got it, it doesn't act the way I'm expecting it to.
1) It does not matter if you have the same FEX number on both switches. However, it will be a problem when you decide to implement enhanced vPC, where both FEXes connect to both 5596s and the servers also connect to both FEXes.
2) show vpc is the perfect command to verify. Another useful command is show vpc consistency-parameters vpc 1902. If you have a port down on either switch, you will see a failed consistency parameter here.
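A quick verification sequence along those lines (the vPC number is an example):

```
show vpc                                   ! peer status, keepalive, per-vPC state
show vpc consistency-parameters vpc 1902   ! type-1/type-2 mismatch checks
show port-channel summary                  ! expect SU on the Po, P on the member
show lacp neighbor                         ! confirm the server is negotiating LACP
```

Note that each 5596 only ever lists its local member link; the pairing of the two links into one logical channel is what 'show vpc' itself reports.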