Connecting NEXUS 5548 1gig interface to 100mbps
Hi,
I have a 5548 that I need to connect to a firewall that supports 100 Mbps only.
Can I configure the interface speed on a Nexus 5548 interface (with a GLC-T transceiver) to 100 Mbps in order to connect it to the firewall?
Regards,
Sabih
Hi Sabih,
The interfaces on a Nexus 5548 cannot be configured as 100 Mbps.
If you wish to connect to the firewall via a 100 Mbps connection, you will need to make use of a Fabric Extender (Nexus 2000) that supports 100 Mbps.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
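For reference, a 100 Mbps connection via a Fabric Extender might look roughly like the sketch below (the FEX number, interface numbers, and VLAN are assumptions for illustration only):

```
! on the 5548: define the FEX and its fabric uplink
fex 100
  pinning max-links 1
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
! FEX host port facing the firewall, forced to 100/full
interface ethernet 100/1/1
  switchport access vlan 10
  speed 100
  duplex full
```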
Thanks,
Michael
Similar Messages
-
Connecting Nexus 5548 to Catalyst 6500 VS S720 - 10 G
good day,
Could anyone out there please assist me with basic connectivity/configuration of the two devices so that they can communicate, e.g. be able to ping each other's management interfaces.
Nexus Configuration:
vrf context management
ip route 0.0.0.0/0 10.200.1.4
vlan 1
interface mgmt0
ip address 10.200.1.2/16
Catalyst 6500:
interface Vlan1
description Nexus
ip address 10.200.1.4 255.255.0.0
interface TenGigabitEthernet5/4
switchport
Note: I am able to see all the devices through the show cdp neighbors command. Please assist.
Nexus# sh ip int mgmt0
IP Interface Status for VRF "management"(2)
mgmt0, Interface status: protocol-up/link-up/admin-up, iod: 2,
IP address: 10.13.37.201, IP subnet: 10.13.37.128/25
IP broadcast address: 255.255.255.255
IP multicast groups locally joined: none
IP MTU: 1500 bytes (using link MTU)
IP primary address route-preference: 0, tag: 0
IP proxy ARP : disabled
IP Local Proxy ARP : disabled
IP multicast routing: disabled
IP icmp redirects: enabled
IP directed-broadcast: disabled
IP icmp unreachables (except port): disabled
IP icmp port-unreachable: enabled
IP unicast reverse path forwarding: none
IP load sharing: none
IP interface statistics last reset: never
IP interface software stats: (sent/received/forwarded/originated/consumed)
Unicast packets : 0/83401/0/20/20
Unicast bytes : 0/8083606/0/1680/1680
Multicast packets : 0/18518/0/0/0
Multicast bytes : 0/3120875/0/0/0
Broadcast packets : 0/285/0/0/0
Broadcast bytes : 0/98090/0/0/0
Labeled packets : 0/0/0/0/0
Labeled bytes : 0/0/0/0/0
Nexus# sh cdp nei
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
S - Switch, H - Host, I - IGMP, r - Repeater,
V - VoIP-Phone, D - Remotely-Managed-Device,
s - Supports-STP-Dispute
Device-ID Local Intrfce Hldtme Capability Platform Port ID
3560 mgmt0 178 S I WS-C3560-24PS Fas0/23
6500 Eth1/32 135 R S I WS-C6509-E Ten5/4
Nexus# ping 10.13.37.201 vrf management
PING 10.13.37.201 (10.13.37.201): 56 data bytes
64 bytes from 10.13.37.201: icmp_seq=0 ttl=255 time=0.278 ms
64 bytes from 10.13.37.201: icmp_seq=1 ttl=255 time=0.174 ms
64 bytes from 10.13.37.201: icmp_seq=2 ttl=255 time=0.169 ms
64 bytes from 10.13.37.201: icmp_seq=3 ttl=255 time=0.165 ms
64 bytes from 10.13.37.201: icmp_seq=4 ttl=255 time=0.165 ms
--- 10.13.37.201 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.165/0.19/0.278 ms
Nexus# ping 10.13.37.202
PING 10.13.37.202 (10.13.37.202): 56 data bytes
ping: sendto 10.13.37.202 64 chars, No route to host
Request 0 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 1 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 2 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 3 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 4 timed out
--- 10.13.37.202 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
Nexus# ping 10.13.37.203
PING 10.13.37.203 (10.13.37.203): 56 data bytes
ping: sendto 10.13.37.203 64 chars, No route to host
Request 0 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 1 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 2 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 3 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 4 timed out
--- 10.13.37.203 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
3560#ping 10.13.37.201
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.13.37.201, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Note: Now I want to be able to ping the Nexus (10.13.37.201) from the 6509 (10.13.37.203), and also to ping both the 3560 (10.13.37.202) and the 6509 (10.13.37.203) from the Nexus. How can I do that? I can ping the Nexus from the 3560 as shown above. -
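Regarding the question above: since mgmt0 lives in the management VRF, the usual approach is to give the 6509 an SVI address in the same subnet as mgmt0 and point the Nexus management VRF default route at it. A sketch using the addresses from the outputs above (verify against your design before applying):

```
! Nexus
vrf context management
  ip route 0.0.0.0/0 10.13.37.203
! Catalyst 6509
interface Vlan1
  ip address 10.13.37.203 255.255.255.128
```

And from the Nexus, remember to ping with the VRF keyword, e.g. "ping 10.13.37.203 vrf management", just like the successful ping shown above.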
Nexus 5548 fex interface configuration in vPC scenario
Hello Forum Team!
I have a pair of Nexus 5548UP configured as a vPC domain active-standby couple with six Nexus 2248TP Fex's in a dual-homed scenario (each fex is connected to each 5548UP). At the same time, both 5548UP are dual-homed to a pair of VSS Catalyst 6509.
I have a couple of questions regarding performance and configuration:
1. When configuring a fex interface, I've noticed that if the configuration is not the same or replicated on both 5548UPs, the fex interface won't come up; is this normal behaviour? For now I am manually replicating the configuration on both 5548UPs (not using config-sync).
2. When performing ICMP tests from the 2248TP to the VSS, the response time is around 1ms to 2ms; from the 5548UP to the VSS pair is approximately 0.5ms. Is that normal delay response time from the 2248TP? I was expecting less response time from the 2248TP (the fex's have 20 Gigs of uplink bandwidth to the 5548UP pair and the 5548UP pair have 40 gig's of bandwidth to the VSS pair).
Thanks in advance, team!
Hi,
1. When configuring a fex interface, I've noticed that if the configuration is not the same or replicated on both 5548UP, the fex interface won't come up; is this normal behaviour?
Yes, that is normal behavior. Since your FEXes are connected to BOTH 5Ks, you have to configure both 5Ks in order for the FEX to come up. If you connect your FEX physically to only one 5K, then you only configure that one 5K, but in your case you need to configure both 5Ks exactly the same.
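As an illustration, the configuration that would have to be identical on both 5Ks for a dual-homed FEX might look like this (a sketch; the FEX and port-channel numbers are assumptions):

```
! identical on both 5548UPs
fex 101
  pinning max-links 1
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
```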
2. When performing ICMP tests from the 2248TP to the VSS, the response time is around 1ms to 2ms; from the 5548UP to the VSS pair is approximately 0.5ms. Is that normal delay response time from the 2248TP? I was expecting less response time from the 2248TP (the fex's have 20 Gigs of uplink bandwidth to the 5548UP pair and the 5548UP pair have 40 gig's of bandwidth to the VSS pair).
I have never tested this, but since the 2K needs to go to the 5K for every packet, including traffic to another host on the same 2K, I would expect the delay to be slightly higher than when you ping directly from the 5K.
HTH -
Connectivity Issue between Nexus 5548 to VNX 5300
Hi All,
I am doing a lab setup where I want to connect a Nexus 5548UP to VNX 5300 storage directly. The physical connectivity is established between the switch and the storage, but on the Nexus the port status shows "linkFailure". I tried matching the port mode (like Auto, F) and speed, but the port always shows "linkFailure".
The connectivity from Nexus to VNX is FC.
Can anyone suggest the root cause or any troubleshooting steps?
Regards,
Abhilash
LinkFailure might be a GUI status.
show interface fcx/y might say,
Link failure or not connected
The physical layer link is not operational.
This means the switch is not detecting light, so the physical layer is the cable and lasers (SFPs, HBAs, or whatever adapter the VNX uses). It could also mean you need to bring the interface up from the VNX side. -
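For the linkFailure above, a few 5548-side commands that may help narrow it down (the interface number is an assumption, and the fixed mode/speed values are examples rather than a recommendation):

```
show interface fc1/1
show interface fc1/1 transceiver
! try a fixed mode/speed instead of auto-negotiation
interface fc1/1
  switchport mode F
  switchport speed 8000
  no shutdown
```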
Hi,
I have 2 SUN 4270 M2 servers connected to a Nexus 5548 switch over 10Gb fiber cards. I am getting throughput of just 60 MB per second while transferring a 5Gb file between the 2 servers. I used to get similar speed on a 1Gb network as well. Please suggest how to improve the transfer speed. On the servers, ports ETH4 and ETH5 are bonded in bond0 with mode=1. The server environment will be used for OVS 2.2.2.
Below are the details of the network configuration on the servers. Quick help will be highly appreciated.
[root@host1 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2492781394 (2.3 GiB) TX bytes:3911207623 (3.6 GiB)
[root@host1 network-scripts]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3916644 (3.7 MiB) TX bytes:0 (0.0 b)
[root@host1 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]#
[root@host1 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4c
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4d
[root@host1 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host1 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0e224c no bond0.301
vlan302 8000.90e2ba0e224c no vif1.0
bond0.302
vlan303 8000.90e2ba0e224c no bond0.303
vlan304 8000.90e2ba0e224c no bond0.304
[root@host2 test]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190977431 (182.1 MiB) TX bytes:3114347186 (2.9 GiB)
[root@host2 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4982317 (4.7 MiB) TX bytes:80029 (78.1 KiB)
[root@host2 test]#
[root@host2 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 test]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:14
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:15
[root@host2 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host2 network-scripts]#brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0fc315 no bond0.301
vlan302 8000.90e2ba0fc315 no bond0.302
vlan303 8000.90e2ba0fc315 no bond0.303
vlan304 8000.90e2ba0fc315 no vif1.0
bond0.304
Thanks....
Jay
Hi,
Thanks for the reply, but the RX error count keeps increasing and the transfer speed between the 2 servers is at most 60 MB/s on the 10Gb FC card. Even on storage, I get the same speed when I try to transfer data from server to storage on the 10Gb FC card. The servers and storage are connected through the Nexus 5548 switch.
#ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:624978785 (596.0 MiB) TX bytes:2897603160 (2.6 GiB)
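Given the frame/RX errors above and the MTU 9000 on the server side, one thing worth checking is whether jumbo frames are enabled on the 5548; by default the switch MTU is 1500, so 9000-byte frames would be dropped. A sketch of the usual system-wide jumbo configuration on a 5548:

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```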
JP -
Servers connected to Nexus 5548 only getting 200 Mbps of throughput
Servers connected to the Nexus 5K were only getting 100 Mbps of throughput, so I disabled flow-control receive on all the ports. After this we are getting 200 Mbps. The servers are connected through 10 gig ports. Could you guys please suggest why the throughput is still low? We should be getting at least 1 Gbps of throughput.
Hi Adam,
I think we probably need a little more information to go on. Can you answer the following?
What type of servers and NICs?
What OS are you running on the servers?
What cables do you have from the servers to the switch?
Are the two servers in the same subnet or is the traffic between them routed?
If routed, is that in the Nexus 5548 or some other router?
How are you testing throughput?
Presumably you're not seeing any errors on the switch ports that the servers are connected to?
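For that last point, the switch-side counters can be checked per port (interface number assumed):

```
show interface ethernet 1/1 counters errors
show queuing interface ethernet 1/1
```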
Regards -
I'm new to the Nexus line and I was just wondering: do I need to be cautious about connecting the mgmt0 interface to a production network, as far as spanning-tree or changing the priority of the root bridge goes? I know that when bringing up a new switch you always want to be careful with the configuration, along with what the switch will be used for.
I don't think it would cause a problem, especially if the port is configured for its own VLAN, but I just wanted to be sure.
Cheers,
No, you don't need to worry about spanning-tree on the mgmt0 port. It is just like a host port and it is in its own VRF.
HTH -
Connecting IBM v7000 to Nexus 5548
20-Sep-2012 16:51 (in response to feisalb)
IBM V7000 with Nexus 5548UP and Nexus 4000 Design/Implementation guide
Hi Guys
I have a question in regards to connecting IBM v7000 directly to Nexus5548.
CAN WE DO THIS?
Our current setup is IBM v7000 -> MDS 9124 -> Nexus 5548.
But our MDS 9124s are out of warranty now and we need to take them out of production. The only way we can do this is if we connect our IBM v7000 fibre ports directly to our Nexus 5548.
Can someone please point me in the right direction - any knowledge base articles etc.
Thanks Heaps
Sid
Dear prkrishn,
I am working on a Data Center solution between two Data Centers; details below.
DC 1 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
DC 2 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
With the above-mentioned setup, can I configure FCIP between DC1 & DC2 using the Nexus 7009? Or do I need an FCIP-capable switch such as the IBM SAN switch (SAN06-BR)? I was wondering if I can configure FCIP on the Nexus 7009 DC switch.
Hoping for your kind response at the earliest.
Kind Regards,
Arnold -
Telephony Issues on Nexus 5548
Dear Viewers,
I have Nexus 5548 devices in one of my client data centers, and I have one 3750 switch to which all of the Avaya voice servers connect.
The 3750 switch was initially connected through a L2 Link to a 6509 catalyst switch and the telephony applications were working correctly.
The problem arises when I move this 3750 Layer 2 link to a Nexus 5548 (NX-OS version 5.1(3)N1) switch. All telephony calls coming from the outside (external calls) stop working as required, but internal calls work as usual.
What is odd is that when I migrate this L2 link back to the 6509 switch, everything works as usual. This is just a Layer 2 connection and I am wondering why it does not work.
The VLAN is allowed on all relevant trunks. I also deactivated IGMP snooping on this voice VLAN on the Nexus 5548 thinking it would help, but in vain.
Any ideas and suggestions are welcome.
regards.
Alain
This is my RADIUS config on a 5K:
radius-server timeout 7
radius-server host 10.28.42.20 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
radius-server host 10.28.42.21 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
aaa group server radius Radius-Servers
server 10.28.42.20
server 10.28.42.21
aaa authentication login default group Radius-Servers
ip radius source-interface Vlan1
aaa authentication login default fallback error local
And it is currently working. On the RADIUS server I also had to do the following to make users admins once logged in:
https://supportforums.cisco.com/document/137181/nexus-integration-admin-access-free-radius -
Hi All,
I have issues with Nexus 5548 1Gb ports. They go down after some time with the error "Link not connected" while the links are connected. When I move the connections to other ports they work, but after a while they go down again with the same error. I can confirm that a port that is currently down was working and is still physically connected. Has anyone seen this error before?
Kindly see the output from the interface thats currently down below:
VNX_NEXUS# sho interface eth1/11
Ethernet1/11 is down (Link not connected)
Hardware: 1000/10000 Ethernet, address: 002a.6a71.1f92 (bia 002a.6a71.1f92)
Description: Link_to_EMC_RPA3
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA
Port mode is access
auto-duplex, 1000 Mb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 2d23h
Last clearing of "show interface" counters 5w6d
30 seconds input rate 0 bits/sec, 0 packets/sec
30 seconds output rate 0 bits/sec, 0 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
RX
43384443 unicast packets 30 multicast packets 1496 broadcast packets
43385969 input packets 7837558138 bytes
0 jumbo packets 0 storm suppression bytes
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
TX
56587244 unicast packets 3937125 multicast packets 1487058 broadcast packets
62011427 output packets 14141808286 bytes
0 jumbo packets
0 output errors 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause
18 interface resets
Hi Leo,
What confuses me is that the connection was working. I used the correct cable and port type as well. How do I resolve this? -
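One thing to check for the flapping ports above: the hardware line reads 1000/10000 Ethernet but "media type is 10G", so if a 1 Gb transceiver such as a GLC-T is in use, the port speed usually has to be set explicitly (a sketch, using the interface from the output):

```
interface ethernet 1/11
  speed 1000
  duplex full
```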
Nexus 5548 not responding to snmp
I've got a Nexus 5548 running 6.0(2)N2(3). It does not use the mgmt interface or management vrf. It's using a vlan interface for all my management access.
I have a simple snmp config set up:
snmp-server community mystring
My SNMP server is directly connected (no firewalls, no acls). I can ping my nexus from the SNMP host, but can't get SNMP replies.
I've done an SNMP debug and nothing happens when I run an snmpwalk. I also checked "show snmp", and it shows no SNMP input packets.
Could this have something to do with trying to use the management vrf? Or something simple I'm missing?
Thanks
Ha wow -- "sh run snmp" pointed me to the problem. There was this command:
no snmp-server protocol enable
That must be a default; I never entered it. Anyway, 'snmp-server protocol enable' fixed it. I should have caught that, although an hour with TAC also didn't catch it, hehe.
Thanks! -
Hi,
We want to upgrade our pair of Nexus 5548s from NX-OS version 5.0(3)N1(1c) to the new 5.1(3)N2(1a). We would like to use the ISSU procedure. But when we execute the command "show spanning-tree issu-impact" we get the following output:
No Active Topology change Found!
Criteria 1 PASSED !!
No Ports with BA Enabled Found!
Criteria 2 PASSED!!
List of all the Non-Edge Ports
Port VLAN Role Sts Tree Type Instance
Ethernet2/8 1803 Desg FWD PVRST 1803
VLAN 1803 is only used for the peer-keepalive link and it only exists on these two Nexus switches. So one of the two Nexus switches needs to be the STP root. That causes the ports in that VLAN to be in designated-forwarding state, which is not supported for ISSU:
sh run int e2/8
!Command: show running-config interface Ethernet2/8
!Time: Fri Jun 8 17:04:33 2012
version 5.0(3)N1(1c)
interface Ethernet2/8
switchport access vlan 1803
speed 1000
That is the only port that belongs to that VLAN and it is directly connected to the other Nexus 5548. So the only way we see to avoid this port being in designated-forwarding state is to apply the "no spanning-tree vlan 1803" command. Would that be a problem?
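For reference, the two alternatives being weighed would look like this (only a sketch of the commands under discussion, not a recommendation either way):

```
! option 1: disable STP on the keepalive VLAN
no spanning-tree vlan 1803
! option 2: mark the inter-switch port as an edge port
interface ethernet 2/8
  spanning-tree port type edge
```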
We imagine that introducing "spanning-tree port type edge" would not be a good idea, would it?
Thank you very much for your help!
Josu
Hi,
Reviewing all the prerequisites for the ISSU, we have seen the following:
ISSU and Layer 3
Cisco Nexus 5500 Platform switches support Layer 3 functionality. However, the system cannot be upgraded with the ISSU process (non-disruptive upgrade) when Layer 3 is enabled. It is required to unconfigure all Layer 3 features to be able to upgrade in a non-disruptive way with an ISSU.
We have the interface-vlan feature enabled. But it is only used for two interfaces:
- interface-vlan 510 --> It is only used in order connect to the switch
- interface-vlan 1803 --> The one used for the keepalive
We could administratively shut down interface-vlan 510. But we could not do so with interface-vlan 1803, since it is used for the keepalive. If we execute "no feature interface-vlan", would the keepalive stop working?
When we execute the "sh install all impact ..." command, the Nexus does not say anything about this feature. Is it really recommended to disable it? Is that needed for the ISSU procedure?
Thank you very much in advance!!
JOSU -
Nexus 5548 and Define static route to forward traffic to Catalyst 4500
Dear Experts,
Need your technical assistance with static routing between the Nexus 5548 and Catalyst 4500.
I connected both Nexus 5548s to the Catalyst 4500 as individual trunk ports because there is HSRP on the Catalyst 4500. So I just took 1 port from each Nexus 5548 and made it a trunk with the core switch (also made each port on each switch a trunk). I changed the speed on the Nexus to 1000 because the line card on the Catalyst 4500 side is 1G RJ45.
*Here is the Config on Nexus 5548 to make port a Trunk:*
N5548-A/ N5548-B
Interface Ethernet1/3
Switchport mode trunk
Speed 1000
I added a static route on both Nexus switches for the core HSRP IP: *ip route 0.0.0.0/0 10.10.150.39 (virtual HSRP IP)*
But I am not able to ping the core switch HSRP IP from the N5548 console. Is there any further configuration needed to enable routing or ping?
Please suggest.
Hello,
Please see the attached config for both Nexus 5548s. I don't have a Catalyst 4500, but below is the simple config I applied:
Both Catalyst 4500
interface gig 3/48
switchport mode trunk
switchport trunk encap dot1q
On Nexus 5548 Port 1/3 is trunk
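One thing worth verifying, stated as an assumption since the full attached config isn't shown here: a static route on a 5548 only takes effect if the switch has a Layer 3 interface in the next-hop subnet, which means enabling the interface-vlan feature and creating an SVI, e.g. (the VLAN ID and address are placeholders):

```
feature interface-vlan
vlan 150
interface vlan 150
  ip address 10.10.150.41/24
  no shutdown
ip route 0.0.0.0/0 10.10.150.39
```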
Thanks,
Jehan -
FCoE with Cisco Nexus 5548 switches and VMware ESXi 4.1
Can someone share with me what needs to be setup on the Cisco Nexus side to work with VMware in the following scenario?
Two servers, each with a dual-port FCoE card, with the two ports connected to two Nexus 5548 switches that are clustered together. We want to team the ports together on the VMware side using IP Hash, so what should be done on the Cisco side for this to work?
Thanks...
Andres,
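For the teaming question itself: IP-hash teaming on the VMware side generally pairs with a static (mode on) port-channel on the switch side, which across two clustered 5548s means a vPC. A sketch with assumed IDs (the vPC peering itself is assumed to be configured already):

```
! same on both 5548s
interface port-channel 20
  switchport mode trunk
  vpc 20
interface ethernet 1/10
  switchport mode trunk
  channel-group 20 mode on
```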
The Cisco road map for the 5010 and 5020 doesn't include extending the current total (12) FEX capability. The 5548 and 5596 will support more (16) per 55xx, and with the 7K will support up to 32 FEXes.
Documentation has been spotty on this subject, because the term 5K suggests that all 5000 series switches will support extended FEX counts, which is not the case; only the 55xx will support more than 12 FEXes. Maybe in the future the terminology should distinguish the 5000 series and 5500 series Nexus; there are several differences and advancements between the two series. -
Command to see transmit queuing drops on Nexus 5548
Hello, the 10G links in our core are getting rather congested, as seen in MRTG graphs. Is there any command on the Nexus 5548 to show transmit queuing drops on a given interface?
You could use "show queuing interface eth 1/1".
It shows output similar to the below:
NEXUS-1# show queuing interface ethernet 1/1
Ethernet1/1 queuing information:
TX Queuing
qos-group sched-type oper-bandwidth
0 WRR 100
RX Queuing
qos-group 0
q-size: 470080, HW MTU: 9216 (9216 configured)
drop-type: drop, xon: 0, xoff: 470080
Statistics:
Pkts received over the port : 222434
Ucast pkts sent to the cross-bar : 199674
Mcast pkts sent to the cross-bar : 22760
Ucast pkts received from the cross-bar : 101087
Pkts sent to the port : 145083
Pkts discarded on ingress : 0
Per-priority-pause status : Rx (Active), Tx (Inactive)
Total Multicast crossbar statistics:
Mcast pkts received from the cross-bar : 43996