Nexus 5548 to UCS C-Series network connectivity
Hi There,
I am new to the Nexus world; my background is mainly on the telecom side. I have a situation where a vendor would like to deploy two Cisco UCS C-Series servers for a voice deployment.
Each UCS C240 M3 server has 4 NICs: two NICs, bonded with 802.1Q, will connect to the primary Nexus 5K switch, and the other two, also bonded, will connect to the secondary Nexus switch. We have a vPC domain and no FEXes, so we have to connect these two servers directly to the Nexus 5Ks.
My question: is it possible for teamed NICs to connect to two different Nexus switches?
Can anyone guide me on how I can achieve this design? See attached.
Thanks Much
It will also depend on the server's network settings, such as the OS, the software switch flavor, and the NIC teaming option on the server.
- If you run ESXi, check out the following KB article:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
- If you use the Nexus 1000V (N1Kv), you can use LACP or static port-channeling, but be sure to make it consistent on the upstream switches (a sketch follows below).
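If a single LACP team is split across the two 5548s, vPC makes the pair look like one switch to the server. A minimal NX-OS sketch of that switch side, assuming LACP teaming on the server, with hypothetical interface and VLAN numbers; the same port-channel and vpc numbers must be configured on both peers:
! On each vPC peer (one team member lands on each 5548):
interface Ethernet1/10
  description UCS-C240-M3 teamed NIC
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  channel-group 10 mode active
interface port-channel10
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  vpc 10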
Hope it helps.
Michael
Similar Messages
-
Cisco UCS C210M2-VCD2 network connectivity
Hi,
I am going to install three C210 M2 servers and I am quite new to the UCS world. As per the design, I am supposed to install:
4 x VMs (CUCM, CUC, Presence, UCCX), Primary or Publisher, on UCS 01
4 x VMs (CUCM, CUC, Presence, UCCX), Subscriber or Failover, on UCS 02
2 x VMs (MeetingPlace) on UCS 03
I have a couple of 3750-X access switches for network connectivity. My question is how I should configure the network, or the NIC cards, on each UCS server. As far as I know, every server has 2 x 10 Gig NICs and 1 x NIC for integrated management. I am confused about how many NICs each VM will have and how to configure all of these virtual machine NICs on the 2 physical NICs per UCS server. Any suggestions, please?
Regards,
Asif
Asif,
A Nexus 1000V would be helpful here, but if that's not an option and you are using the standard vSwitch, then both onboard NICs should be used as uplinks in a single vSwitch for redundancy.
Here are some QoS suggestions that might help you also.
Redundant physical LAN interfaces are recommended:
- One or two pairs of teamed NICs for UC VM traffic. One pair is sufficient on a C210 due to the low load per VM.
- One pair of NICs (teamed or dedicated) for VMware-specific traffic (management, vMotion, VMware HA, etc.).
- Side note: the NIC teaming pair can be split across the motherboard and a PCIe card if the UCS C-Series model supports it, which protects against PCIe card failure.
Once we know exactly which vSwitch you are using, as the last post asked, we may be able to pinpoint more, but the above is based on the standard vSwitch; a short esxcli sketch follows.
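To make the single-vSwitch recommendation concrete, a minimal sketch using esxcli (ESXi 5.x-style syntax; the vSwitch and vmnic names are hypothetical):
# Add both onboard NICs as uplinks to one standard vSwitch:
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
# Make both uplinks active so either NIC can carry the traffic:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1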
Hope this helps. -
I know that there are advantages to using the vNIC capabilities of the VIC M81KR along with the 2100 I/O Module uplinked to the 6100 Series Fabric Interconnect.
But if the customer is not convinced of the management advantages of the 6100 and UCS Manager, and does not like the idea of paying a license fee to turn on the 10 Gig ports on the 6100 (which essentially doubles the price of the 6100 or more), could we instead do it the old-fashioned way and put a 10 Gig adapter in each blade, then wire them into a 5548?
I know that it is ugly, but what would they lose by doing this?
Thanks.
Richard,
Just to add to what Louis mentioned: though the blades have individual mezzanine cards, there are no chassis switches you could uplink into your 5500s, nor the software to manage the IO modules. The chassis IO modules are specifically built to work with the fabric interconnects.
With each FI purchased you get the first 8 ports included. The sweet spot for most deployments usually involves 2 uplinks per chassis; this provides enough included ports for 3 chassis (24 blades) with two ports left for uplinks. Pricing the FIs per port beyond the included 8 helps offset the overall cost of the system at initial purchase; we actually take a loss overall until roughly the halfway mark of purchased ports is reached. Compare this with competitors: when a new chassis is purchased you would also need to purchase a pair of management modules (iLO/DRAC/RAS), two or more SAN switch modules, and two or more Ethernet modules, which makes scaling additional compute power extremely expensive. With a new UCS chassis, all you need are two very inexpensive IO modules, which provide everything required to connect the chassis into your already-wired infrastructure (the fabric interconnects). Yes, you will need to ensure you have enough licensed ports on the FIs to meet your chassis and uplink requirements, but you are still looking at far smaller scaling investments in the long run.
The management benefits are one of the key value-adds UCS offers over its competitors. I would be a little concerned about why your customer does not see the value in a single management platform for such a wide scope of devices. Also, being able to maintain policies for firmware versions, QoS levels, consistent network configurations, BIOS, boot order, etc. ensures consistency and security across your compute environment. Appreciating the management advantages requires understanding the UCS differentiators.
There is a reason why many vendors are now following Cisco's lead in a "unified" approach to data center infrastructure: it is more efficient, easier to manage, and scales well beyond traditional designs.
Regards,
Robert -
Nexus 5548UP FCoE to C-Series UCS
OK, here is the scenario:
Nexus 5548 switch ---- FEX 2232 ---- UCS C-Series (C220)
snippets of the configuration:
vlan 1100
  fcoe vsan 1100
vsan database
  vsan 1100
fex 100
  description 2232-A
  fcoe
interface ethernet100/1/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,1100
interface vfc100
  bind interface ethernet100/1/1
  switchport trunk allowed vsan 1100
  no shut
So with the configuration above, the vfc interface won't trunk unless I do the following:
vsan database
  vsan 1100 interface vfc100
I found this document that describes mapping VSANs to VLANs, with an example configuration (page 8).
Each fc and/or vfc interface must be a member of a VSAN; by default, they all belong initially to VSAN 1.
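To confirm the fix took effect, a minimal sketch using standard NX-OS show commands (interface and VSAN numbers as in the thread):
show vsan membership             ! vfc100 should now be listed under vsan 1100
show interface vfc100            ! trunking state and allowed VSANs
show flogi database vsan 1100    ! fabric logins from the CNA once the vfc is up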
-
Connecting IBM v7000 to Nexus 5548
IBM V7000 with Nexus 5548UP and Nexus 4000 design/implementation guide
Hi Guys
I have a question regarding connecting an IBM v7000 directly to a Nexus 5548.
CAN WE DO THIS?
Our current setup is IBM v7000 -> MDS 9124 -> Nexus 5548.
But our MDS 9124s are out of warranty now and we need to take them out of production, and the only way we can do that is to connect our IBM v7000 fibre ports directly to our Nexus 5548.
Can someone please point me in the right direction, any knowledge base articles, etc.?
Thanks Heaps
Sid
Dear prkrishn,
I am working on the Data Center Solution between two Data Center, details underneath
DC 1 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
DC 2 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
With the above-mentioned setup, can I configure FCIP between DC1 and DC2 using the Nexus 7009s? Or do I need an FCIP-capable switch such as an IBM SAN switch (SAN06-BR)? I was wondering whether I can configure FCIP on the Nexus 7009 DC switches.
Hoping for your kind response at the earliest.
Kind Regards,
Arnold -
Hi,
I have 2 Sun 4270 M2 servers connected to a Nexus 5548 switch over 10Gb fiber cards. I am getting throughput of just 60 MB per second while transferring a 5GB file between the 2 servers; I used to get similar speed on a 1Gb network. Please suggest how to improve the transfer speed. On the servers, ports ETH4 and ETH5 are bonded in bond0 with mode=1. The server environment will be used for OVS 2.2.2.
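One detail worth noting in the setup described above: bonding mode=1 is active-backup, so only one of the two 10Gb slaves carries traffic at any time. That alone does not explain 60 MB/s on a single 10Gb link, but if the intent is to use both links, the bond would need to move to 802.3ad. A hedged sketch of RHEL 5-era bonding driver options (the file contents are an assumption, and the switch side must present a matching LACP port-channel, or a vPC if the slaves land on different 5548s):
# /etc/modprobe.conf (hypothetical)
alias bond0 bonding
options bond0 mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4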
Below are the details of the network configuration on the servers. Quick help will be highly appreciated.
[root@host1 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2492781394 (2.3 GiB) TX bytes:3911207623 (3.6 GiB)
[root@host1 network-scripts]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3916644 (3.7 MiB) TX bytes:0 (0.0 b)
[root@host1 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]#
[root@host1 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4c
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4d
[root@host1 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host1 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0e224c no bond0.301
vlan302 8000.90e2ba0e224c no vif1.0
bond0.302
vlan303 8000.90e2ba0e224c no bond0.303
vlan304 8000.90e2ba0e224c no bond0.304
[root@host2 test]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190977431 (182.1 MiB) TX bytes:3114347186 (2.9 GiB)
[root@host2 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4982317 (4.7 MiB) TX bytes:80029 (78.1 KiB)
[root@host2 test]#
[root@host2 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 test]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:14
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:15
[root@host2 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host2 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0fc315 no bond0.301
vlan302 8000.90e2ba0fc315 no bond0.302
vlan303 8000.90e2ba0fc315 no bond0.303
vlan304 8000.90e2ba0fc315 no vif1.0
bond0.304
Thanks....
Jay
Hi,
Thanks for the reply, but the RX error count keeps increasing and the transfer speed between the 2 servers tops out at 60 MB/s on the 10Gb fiber card. I am getting the same speed to storage as well when I try to transfer data from a server to storage over the 10Gb card. The servers and storage are connected through the Nexus 5548 switch.
#ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:624978785 (596.0 MiB) TX bytes:2897603160 (2.6 GiB)
JP -
Servers connected to Nexus 5548 only getting 200 Mbps of throughput
Servers connected to the Nexus 5K were only getting 100 Mbps of throughput, so I disabled flow-control receive on all the ports. After this we are getting 200 Mbps. The servers are connected through 10 Gig ports. Could you please suggest why the throughput is still so low? We should get at least 1 Gbps of throughput.
Hi Adam,
I think we probably need a little more information to go on. Can you answer the following?
What type of servers and NICs?
What OS are you running on the servers?
What cables do you have from the servers to the switch?
Are the two servers in the same subnet or is the traffic between them routed?
If routed, is that in the Nexus 5548 or some other router?
How are you testing throughput?
Presumably you're not seeing any errors on the switch ports that the servers are connected to?
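On the throughput-testing point, a host-to-host test that takes disk I/O out of the picture is useful. A minimal iperf sketch, assuming iperf is installed on both servers (the address is hypothetical):
# On server B:
iperf -s
# On server A, a 30-second TCP test with 4 parallel streams toward server B:
iperf -c 10.0.0.2 -t 30 -P 4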
Regards -
Connecting a Nexus 5548 1 Gig interface at 100 Mbps
Hi,
I have a 5548 that I need to connect to a firewall that supports 100 Mbps only.
Can I configure the interface speed on a Nexus 5548 interface (GLC-T) to 100 Mbps in order to connect it to the firewall?
Regards,
Sabih
Hi Sabih,
The interfaces on a Nexus 5548 cannot be configured to run at 100 Mbps.
If you wish to connect to the firewall via a 100 Mbps connection, you will need to make use of a Fabric Extender (Nexus 2000) that supports 100 Mbps.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
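Once a 100 Mbps-capable FEX (for example a Nexus 2248TP) is attached, the host-facing port can be pinned to 100 Mbps. A minimal sketch with hypothetical FEX and interface numbers:
interface Ethernet100/1/1
  description firewall-100Mbps-link
  switchport mode access
  speed 100
  duplex full
  no shutdown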
Thanks,
Michael -
Connectivity issue between Nexus 5548 and VNX 5300
Hi All,
I am doing a lab setup where I want to connect a Nexus 5548UP directly to VNX 5300 storage. Physical connectivity is established between the switch and the storage, but on the Nexus the status of the port shows "linkFailure". I tried matching the port mode (auto, F) and speed, but the port always shows "linkFailure".
The connectivity from the Nexus to the VNX is FC.
Can anyone suggest the root cause or any troubleshooting steps?
Regards,
Abhilash
LinkFailure might be a GUI status.
show interface fc x/y might say:
Link failure or not connected
The physical layer link is not operational.
This means the switch is not detecting light, so the physical layer is suspect: the cable and the lasers (SFPs, HBAs, or whatever adapter the VNX uses). It could also mean you need to bring the interface up from the VNX side.
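A minimal troubleshooting sketch on the Nexus side (the interface number is hypothetical; VNX front-end ports normally log in as N_Ports, so the switch port would run in F mode):
show interface fc1/29            ! look for "Link failure or not connected"
interface fc1/29
  switchport mode F              ! fixed F mode rather than auto
  switchport speed 8000          ! try pinning the speed if auto-negotiation fails
  no shutdown
-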
Just like HP servers, does a Cisco UCS C-Series standalone server support network fault tolerance? If yes, where can I find this option?
Windows: 2k8 R2 -
UCS FI 6248 to Nexus 5548 SAN port-channel - not working
Hi all,
I'm sure I am missing something fairly obvious and stupid but I need several sets of eyes and help.
Here is the scenario:
I want to be able to create SAN port-channels between the FI and the Nexus. I don't need to trunk yet, as I can't even get the channel to come up.
UCS FI 6248:
Interfaces fc1/31-32
Nexus 5548
interfaces fc2/15-16
The FI is in end-host mode and the Nexus is running in NPIV mode with the fport-channel-trunk feature enabled.
I'll paste the relevant configurations below.
Nexus 5548:
NX5KA(config)# show feature | include enabled
fcoe 1 enabled
fex 1 enabled
fport-channel-trunk 1 enabled
hsrp_engine 1 enabled
interface-vlan 1 enabled
lacp 1 enabled
lldp 1 enabled
npiv 1 enabled
sshServer 1 enabled
vpc 1 enabled
interface san-port-channel 133
  channel mode active
  no switchport trunk allowed vsan all
  switchport trunk mode off
interface fc2/15
  switchport trunk mode off
  channel-group 133 force
  no shutdown
interface fc2/16
  switchport trunk mode off
  channel-group 133 force
  no shutdown
NX5KA# show vsan membership
vsan 1 interfaces:
fc2/13 fc2/14
vsan 133 interfaces:
fc2/15 fc2/16 san-port-channel 133
vsan 4079(evfp_isolated_vsan) interfaces:
vsan 4094(isolated_vsan) interfaces:
NX5KA# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc2/15(D) fc2/16(D)
UCS Fabric Interconnect outputs:
UCS-FI-A-A(nxos)# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc1/31(D) fc1/32(D)
UCS-FI-A-A(nxos)#
UCS-FI-A-A(nxos)# show run int fc1/31-32
!Command: show running-config interface fc1/31-32
!Time: Fri Dec 20 22:58:51 2013
version 5.2(3)N2(2.21b)
interface fc1/31
  switchport mode NP
  channel-group 133 force
  no shutdown
interface fc1/32
  switchport mode NP
  channel-group 133 force
  no shutdown
UCS-FI-A-A(nxos)#
UCS-FI-A-A(nxos)# show run int san-port-channel 133
!Command: show running-config interface san-port-channel 133
!Time: Fri Dec 20 22:59:09 2013
version 5.2(3)N2(2.21b)
interface san-port-channel 133
  channel mode active
  switchport mode NP

!Command: show running-config interface san-port-channel 133
!Time: Sat May 16 04:59:07 2009
version 5.1(3)N1(1)
interface san-port-channel 133
  channel mode active
  switchport mode F
  switchport trunk mode off
I changed it as you suggested...
Followed the order of operations for "no shut":
Nexus FC -> Nexus SAN-PC -> FI FC -> FI SAN-PC.
It didn't work:
NX5KA(config-if)# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
Group Port- Type Protocol Member Ports
Channel
133 San-po133 FC PCP (D) FC fc2/15(D) fc2/16(D)
NX5KA(config-if)#
Here is the output as you requested:
NX5KA(config-if)# show int san-port-channel 133
san-port-channel 133 is down (No operational members)
Hardware is Fibre Channel
Port WWN is 24:85:00:2a:6a:5a:81:00
Admin port mode is F, trunk mode is off
snmp link state traps are enabled
Port vsan is 133
1 minute input rate 1256 bits/sec, 157 bytes/sec, 0 frames/sec
1 minute output rate 248 bits/sec, 31 bytes/sec, 0 frames/sec
3966 frames input, 615568 bytes
0 discards, 0 errors
0 CRC, 0 unknown class
0 too long, 0 too short
2956 frames output, 143624 bytes
0 discards, 0 errors
46 input OLS, 41 LRR, 73 NOS, 0 loop inits
257 output OLS, 189 LRR, 219 NOS, 0 loop inits
last clearing of "show interface" counters never
Member[1] : fc2/15
Member[2] : fc2/16
NX5KA(config-if)#
NX5KA(config-if)# show int brief
Interface Vsan Admin Admin Status SFP Oper Oper Port
Mode Trunk Mode Speed Channel
Mode (Gbps)
fc2/13 1 auto on sfpAbsent -- -- --
fc2/14 1 auto on sfpAbsent -- -- --
fc2/15 133 F off init swl -- 133
fc2/16 133 F off init swl -- 133
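When an F-port channel toward an NPV device sits in init like this, a few standard NX-OS checks usually narrow it down. A minimal sketch, using the interface and VSAN numbers from the thread:
! On the Nexus 5548 (NPIV core):
show interface fc2/15            ! exact down reason for each member port
show flogi database vsan 133     ! any fabric logins once members come up
! On the FI (NPV edge), from the NX-OS shell:
show npv status                  ! state of the NP uplinks toward the core
-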
Hi
I am using pro2kxp.exe, the Intel(R) PRO/1000 CT Network Connection package, which not only installs the driver but also includes a network connection diagnostics utility.
The driver date as shown in Device Manager is 29/08/2003 and the driver version is 7.2.19.0. Does anyone know if there is a later version of this utility (including driver)?
File version as shown by right-clicking the .exe is: 4.0.100.1124.
Regards
Dave
Hi shrince,
That's the one and just what I was looking for.
Many thanks m8.
Regards
Brave01Heart -
Ask the Expert: Cisco UCS B-Series Latest Version New Features
Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco UCS Manager 2.2(1) release, which delivers several important features and major enhancements in the fabric, compute, and operational areas. Some of these features include fabric scaling, VLANs, VIFs, IGMP groups, network endpoints, unidirectional link detection (UDLD) support, support for virtual machine queue (VMQ), direct connect C-Series to FI without FEX, direct KVM access, and several other features.
Teclus Dsouza is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has over 15 years of total IT experience. He has worked across different technologies and a wide range of data center products. He is an expert in Cisco Nexus 1000V and Cisco UCS products. He has more than 6 years of experience on VMware virtualization products.
Chetan Parik is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has seven years of total experience. He has worked on a wide range of Cisco data center products such as Cisco UCS and Cisco Nexus 1000V. He also has five years of experience on VMware virtualization products.
Remember to use the rating system to let Teclus and Chetan know if you have received an adequate response.
Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under the subcommunity Unified Computing, shortly after the event. This event lasts through May 9, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
Hi Jackson,
Yes, it is possible. Connect the storage array to the fabric interconnects using two 10Gb links per storage processor. Connect each SP to both fabric interconnects and configure the ports on the fabric interconnects as "Appliance" ports from UCSM.
For more information on how to connect NetApp storage using other protocols like iSCSI or FCoE, please check the URL below.
http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html
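If you prefer the UCSM CLI over the GUI, an appliance port can also be created there. A hedged sketch (the slot and port numbers are hypothetical, and the exact scopes may vary by UCSM version):
UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 20
UCS-A /eth-storage/fabric/interface* # commit-buffer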
Regards
Teclus Dsouza -
I have two Nexus 5548s configured as vPC peers and two UCS 6120s configured in a cluster. The 5548s are connected via trunks to two Catalyst 6500s that are in a VSS configuration. My question: I have the UCS 6120 management 0 ports connected to the 5Ks, 6120-A mgmt0 to 5K-1 and 6120-B mgmt0 to 5K-2. Every time I fail the UCS 6120-A (primary), I lose ping to the cluster IP and can't manage UCS 6120-B at all, even though it is pingable. Below are snippets of how the management ports are configured on the 5Ks. Any ideas?
nx5k-1
interface Ethernet1/13
  description 6120-A:MGMT0
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100
  speed 1000
  channel-group 12 mode active
interface port-channel12
  description 6120:A:MGMT
  switchport mode trunk
  vpc 12
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100
  spanning-tree port type network
  speed 1000
nx5k-2
interface Ethernet1/13
  description 6120-B:MGMT0
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100
  speed 1000
  channel-group 12 mode active
interface port-channel12
  description 6120:B:MGMT
  switchport mode trunk
  vpc 12
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100
  spanning-tree port type network
  speed 1000
You need to configure UCS management failover monitoring:
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101010.html#concept_8EFB3986365C4F69A6C3B9BBC14D16FE
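As a hedged side note on the 5K snippets above: the FI mgmt0 port does not run LACP, so a port-channel in mode active facing it may never bundle, and a vPC normally implies both member links go to the same end device. If each mgmt0 is treated as a plain single link, a simpler per-switch sketch (interface number as in the thread) would be:
interface Ethernet1/13
  description 6120-A:MGMT0
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100
  speed 1000
  spanning-tree port type edge trunk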
Sent from Cisco Technical Support iPad App -
C220 M4 w/ VIC1225 and Nexus 5548
I have four UCS C220 M4 servers that each have a VIC 1225 and are connected to a Nexus 5548. I have set the switch ports to trunk and have set the appropriate VLANs in ESX and the port groups. I can ping the gateway and other hosts on the network, but for some reason I can't ping any of the other C220 M4 servers that have a VIC 1225. All of the C220s can ping other devices on the network, just not each other. Is there some setting I need to modify on the VIC or the switch? Currently the VIC is running in Classical Ethernet mode.
There is no special configuration needed to make connectivity work on the rack servers; I assume these servers are in standalone mode and not integrated with UCSM.
What L3 device is doing your inter-VLAN routing? Is it the N5K these servers are connected to, or is another device doing this job? If it is the N5K, are all the servers in the same network segment? I mean, is there a single SVI or multiple? Can your Nexus 5K ping each of the servers?
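A couple of switch-side checks can help answer those questions. A minimal sketch (the VLAN number is hypothetical):
show mac address-table vlan 10   ! are all four servers learned on the expected ports?
show interface status            ! link and trunk state of the server-facing ports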
-Kenny