Connecting Nexus 5548 to Catalyst 6500 VS-S720-10G
Good day,
Could anyone out there please assist me with basic connectivity/configuration of these two devices, so that they can communicate, e.g. be able to ping each other's management interfaces?
Nexus Configuration:
vrf context management
ip route 0.0.0.0/0 10.200.1.4
vlan 1
interface mgmt0
ip address 10.200.1.2/16
Catalyst 6500:
interface Vlan1
description Nexus
ip address 10.200.1.4 255.255.0.0
interface TenGigabitEthernet5/4
switchport
Note: I am able to see all the devices with the "sh cdp nei" command. Please assist.
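For the two management interfaces to ping each other, VLAN 1 has to be carried end to end between the 6500's SVI and the path toward mgmt0. A minimal sketch for the 6500 side, assuming Ten5/4 should be a VLAN 1 access port (an assumption; the port could equally be a trunk carrying VLAN 1):

```
! Catalyst 6500 - hypothetical config for the port toward the Nexus
interface TenGigabitEthernet5/4
 switchport
 switchport mode access
 switchport access vlan 1
 no shutdown
!
interface Vlan1
 ip address 10.200.1.4 255.255.0.0
 no shutdown
```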
Nexus# sh ip int mgmt0
IP Interface Status for VRF "management"(2)
mgmt0, Interface status: protocol-up/link-up/admin-up, iod: 2,
IP address: 10.13.37.201, IP subnet: 10.13.37.128/25
IP broadcast address: 255.255.255.255
IP multicast groups locally joined: none
IP MTU: 1500 bytes (using link MTU)
IP primary address route-preference: 0, tag: 0
IP proxy ARP : disabled
IP Local Proxy ARP : disabled
IP multicast routing: disabled
IP icmp redirects: enabled
IP directed-broadcast: disabled
IP icmp unreachables (except port): disabled
IP icmp port-unreachable: enabled
IP unicast reverse path forwarding: none
IP load sharing: none
IP interface statistics last reset: never
IP interface software stats: (sent/received/forwarded/originated/consumed)
Unicast packets : 0/83401/0/20/20
Unicast bytes : 0/8083606/0/1680/1680
Multicast packets : 0/18518/0/0/0
Multicast bytes : 0/3120875/0/0/0
Broadcast packets : 0/285/0/0/0
Broadcast bytes : 0/98090/0/0/0
Labeled packets : 0/0/0/0/0
Labeled bytes : 0/0/0/0/0
Nexus# sh cdp nei
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
S - Switch, H - Host, I - IGMP, r - Repeater,
V - VoIP-Phone, D - Remotely-Managed-Device,
s - Supports-STP-Dispute
Device-ID Local Intrfce Hldtme Capability Platform Port ID
3560 mgmt0 178 S I WS-C3560-24PS Fas0/23
6500 Eth1/32 135 R S I WS-C6509-E Ten5/4
Nexus# ping 10.13.37.201 vrf management
PING 10.13.37.201 (10.13.37.201): 56 data bytes
64 bytes from 10.13.37.201: icmp_seq=0 ttl=255 time=0.278 ms
64 bytes from 10.13.37.201: icmp_seq=1 ttl=255 time=0.174 ms
64 bytes from 10.13.37.201: icmp_seq=2 ttl=255 time=0.169 ms
64 bytes from 10.13.37.201: icmp_seq=3 ttl=255 time=0.165 ms
64 bytes from 10.13.37.201: icmp_seq=4 ttl=255 time=0.165 ms
--- 10.13.37.201 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.165/0.19/0.278 ms
Nexus# ping 10.13.37.202
PING 10.13.37.202 (10.13.37.202): 56 data bytes
ping: sendto 10.13.37.202 64 chars, No route to host
Request 0 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 1 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 2 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 3 timed out
ping: sendto 10.13.37.202 64 chars, No route to host
Request 4 timed out
--- 10.13.37.202 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
Nexus# ping 10.13.37.203
PING 10.13.37.203 (10.13.37.203): 56 data bytes
ping: sendto 10.13.37.203 64 chars, No route to host
Request 0 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 1 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 2 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 3 timed out
ping: sendto 10.13.37.203 64 chars, No route to host
Request 4 timed out
--- 10.13.37.203 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
3560#ping 10.13.37.201
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.13.37.201, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Note: Now I want to be able to ping the Nexus (10.13.37.201) from the 6509 (10.13.37.203), and also to ping both the 3560 (10.13.37.202) and the 6509 (10.13.37.203) from the Nexus. How can I do that? I can ping the Nexus from the 3560 as shown above.
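One detail visible in the output above: the ping that succeeds (to 10.13.37.201) specifies "vrf management", while the failing pings to .202 and .203 do not. On NX-OS, mgmt0 lives in the management VRF, and a plain "ping" is sourced from the default VRF, which has no route to that subnet, hence "No route to host". Assuming the 3560 and 6509 SVIs are up in the same 10.13.37.128/25 subnet, the pings should work once the VRF is specified:

```
Nexus# ping 10.13.37.202 vrf management
Nexus# ping 10.13.37.203 vrf management
```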
Similar Messages
-
Connecting Nexus 5548 1 Gig interface to 100 Mbps
Hi,
I have a 5548 that I need to connect to a firewall that supports 100 Mbps only.
Can I configure the interface speed on a Nexus 5548 interface (GLC-T) to 100 Mbps in order to connect it to the firewall?
Regards,
Sabih

Hi Sabih,
The interfaces on a Nexus 5548 can NOT be configured as 100 Mbps.
If you wish to connect to the firewall via a 100 Mbps connection, you will need to make use of a Fabric Extender (Nexus 2000) that supports 100 Mbps.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
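For reference, on a 100 Mbps-capable fabric extender such as the Nexus 2248TP, the speed would be set on the FEX host port. A sketch with hypothetical FEX/interface numbers and VLAN:

```
interface Ethernet100/1/1
  description Link to firewall (100 Mbps)
  switchport access vlan 10
  speed 100
  duplex full
```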
Thanks,
Michael -
VPC on Nexus 5000 with Catalyst 6500 (no VSS)
Hi, I'm pretty new to the Nexus and UCS world, so I have many questions and I hope you can help me get some answers.
The diagram below shows the configuration we are looking to deploy. We chose this design because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
Our blades inserted on the UCS chassis have INTEL dual port cards, so they do not support full failover.
The questions I have are:
- Is this my best deployment choice?
- vPC relies heavily on the management interface on the Nexus 5000 for peer keepalive monitoring, so what is going to happen if the vPC breaks because:
- one of the 6500s goes down?
- STP?
- What is going to happen to the EtherChannels on the remaining 6500?
- the management interface goes down for any other reason?
- which one is going to be the primary Nexus?
Below is the list of devices involved and the configuration for the Nexus 5000 and 6500.
Any help is appreciated.
Devices
· 2 Cisco Catalyst with two WS-SUP720-3B each (no VSS)
· 2 Cisco Nexus 5010
· 2 Cisco UCS 6120xp
· 2 UCS Chassis
- 4 Cisco B200-M1 blades (2 each chassis)
- Dual 10Gb Intel card (1 per blade)
vPC Configuration on Nexus 5000
TACSWN01
TACSWN02
feature vpc
vpc domain 5
reload restore
reload restore delay 300
Peer-keepalive destination 10.11.3.10
role priority 10
!--- Enables vPC, define vPC domain and peer for keep alive
int ethernet 1/9-10
channel-group 50 mode active
!--- Put Interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as Peer-Link for vPC
inter ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates interfaces to Po51 connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
inter ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates interfaces to Po52 connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!----- CONFIGURATION for Connection to Catalyst 6506
Int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associate interfaces to Po61 connected to Cat6506-01
Int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
Int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associate interfaces to Po62 connected to Cat6506-02
Int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
feature vpc
vpc domain 5
reload restore
reload restore delay 300
Peer-keepalive destination 10.11.3.9
role priority 20
!--- Enables vPC, define vPC domain and peer for keep alive
int ethernet 1/9-10
channel-group 50 mode active
!--- Put Interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as Peer-Link for vPC
inter ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates interfaces to Po51 connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
inter ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates interfaces to Po52 connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!----- CONFIGURATION for Connection to Catalyst 6506
Int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associate interfaces to Po61 connected to Cat6506-01
Int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
Int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associate interfaces to Po62 connected to Cat6506-02
Int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
vPC Verification
show vpc consistency-parameters
!--- show compatibility parameters
show feature
!--- Use it to verify that the vpc and lacp features are enabled.
show vpc brief
!--- Displays information about vPC Domain
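Note that the peer-keepalive lines in the configs above specify only a destination. On the Nexus 5000 the keepalive normally runs over mgmt0 in the management VRF; a fuller sketch for TACSWN01 (the source address is inferred from its peer's configured destination):

```
feature vpc
vpc domain 5
  role priority 10
  peer-keepalive destination 10.11.3.10 source 10.11.3.9 vrf management
  reload restore delay 300
```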
Etherchannel configuration on TAC 6500s
TACSWC01
TACSWC02
interface range GigabitEthernet2/38 - 43
description TACSWN01 (Po61 vPC61)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 61 mode active
interface range GigabitEthernet2/38 - 43
description TACSWN02 (Po62 vPC62)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 62 mode active

ihernandez81,
Between the c1-r1 & c1-r2 there are no L2 links, ditto with d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.
All the c1r1 & c1-r2 HSRP communications ( we use GLBP as well ) go from c1-r1 to c1-r2 via the hosp-n5k-s1 & hosp-n5k-s2. Port channels 203 & 204 carry the exact same vlans.
The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have po203, with 4 x 10 Gb links going to the 5Ks (2 from each VSS member to each 5K).
As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis - which worked quite well.
If you got on any 5K you would see 2 port channels - 203 & 204 - going to each 6500, again when one pair went to VSS po204 went away.
I know, I know, they are not the same things ... but if you view the 5Ks like a 3750 stack ... how would you hook up a 3750 stack to 2 6500s, and if you did, why would you run an L2 link between the 6500s?
For us, using 4 x 10G ports between the 6509s took ports that were too expensive - we had 6704s - so we used the 5Ks.
Our blocking link was on one of the links between site1 & site2. If we did not have WAN connectivity, there would have been no blocking or loops.
Caution ... if you go with 7Ks, beware of the inability to do L2/L3 via vPCs.
Better?
One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity while migrating things, they tend to work - unless they really break -
Catalyst 6500 VS-S720-10G and VRF Capacity
Hi,
I have a 6500 with a VS-S720-10G. The datasheet says 1024 VRFs, each populated with up to 700 routes/VRF for MPLS; MPLS in hardware to enable use of Layer 3 VPNs and EoMPLS tunneling; up to 1024 VRFs with a total of up to 256,000 routes per system.
I am configuring 70 VRFs with 883 routes using VRF-lite.
Will it support this number of routes?
Regards

With the VRF-lite deployment you described, are you planning to run any dynamic routing protocols, or are all the routes static? If you are using dynamic routing for these VRF-lite instances, I would be worried about the number of IGP instances needed. However, maybe someone else has run a high number of VRF-lite / IGP instances like that and could share their experiences.
Another concern with a 70-VRF deployment using VRF-lite is the operational overhead, especially if you are running end-to-end VRF-lite. The Path Isolation Design Guide recommends, as a rule of thumb, no more than 10-15 VRFs when doing end-to-end VRF-lite.
http://www.cisco.com/en/US/docs/solutions/Enterprise/Network_Virtualization/PathIsol.pdf
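For reference, a minimal VRF-lite fragment on a 6500 looks like the following (VRF name, VLAN, and addresses are hypothetical):

```
ip vrf CUST-A
 rd 65000:1
!
interface GigabitEthernet1/1.10
 encapsulation dot1Q 10
 ip vrf forwarding CUST-A
 ip address 192.0.2.1 255.255.255.0
!
ip route vrf CUST-A 0.0.0.0 0.0.0.0 192.0.2.254
```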
Good luck,
Matt -
Nexus 5548 and Define static route to forward traffic to Catalyst 4500
Dear Experts,
I need your technical assistance with static routing between a Nexus 5548 pair and a Catalyst 4500.
I connected both Nexus 5548s to the Catalyst 4500 as individual trunk ports, because there is HSRP on the Catalyst 4500. So I took one port from each Nexus 5548 and made it a trunk to the core switch (also making each corresponding switch port a trunk). I changed the speed on the Nexus to 1000 because the line card on the Catalyst 4500 side is 1G RJ45.
*Here is the Config on Nexus 5548 to make port a Trunk:*
N5548-A/ N5548-B
Interface Ethernet1/3
Switchport mode trunk
Speed 1000
I added the static route on both Nexus switches for the core HSRP IP: *ip route 0.0.0.0/0 10.10.150.39 (virtual HSRP IP)*
But I am not able to ping from the N5548 console to the core switch HSRP IP. Is there any further configuration needed to enable routing or ping?
Please suggest.

Hello,
Please see the attached config for both Nexus 5548s. I don't have a Catalyst 4500, but below is the simple config that I applied:
Both Catalyst 4500
interface gig 3/48
switchport mode trunk
switchport trunk encap dot1q
On Nexus 5548 Port 1/3 is trunk
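One thing worth checking: a static route on the Nexus 5548 is only usable if the switch has a Layer 3 interface in the next-hop subnet; a trunk port by itself is Layer 2 only. A sketch, assuming the HSRP subnet is 10.10.150.0/24 carried on VLAN 150 (the VLAN ID, mask, and SVI address are assumptions):

```
feature interface-vlan
vlan 150
interface Vlan150
  no shutdown
  ip address 10.10.150.10/24
ip route 0.0.0.0/0 10.10.150.39
```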
Thanks,
Jehan -
I have an LC/APC fiber patch cord infrastructure and I want to connect it to Cisco Catalyst 6500 & Cisco Access 3750 switches. What type of transceiver should be used?
I read a note on the Cisco website stating the following for Cisco SFP+ transceivers:
Note: "Only connections with patch cords with PC or UPC connectors are supported. Patch cords with APC connectors are not supported. All cables and cable assemblies used must be compliant with the standards specified in the standards section."

Thank you, but my question is that I have a single-mode fiber patch cord with an LC/APC connector, while Cisco states in a note to use only LC/PC or LC/UPC connectors with SFP+ transceivers.
So what type of transceiver should I use to connect an LC/APC patch cord to Cisco switches? Is there another type, or can SFP+ still be used? -
Connectivity issue between Nexus 5548 and VNX 5300
Hi All,
I am doing a lab setup where I want to connect a Nexus 5548UP directly to a VNX 5300 storage array. The physical connectivity is established between switch and storage, but on the Nexus the status of the port shows "linkFailure". I tried matching the port mode (like Auto, F) and speed, but the port always shows "linkFailure".
The connectivity from Nexus to VNX is FC.
Anyone can suggest the root cause or any troubleshooting steps.
Regards,
Abhilash

LinkFailure might be a GUI status.
show interface fcx/y might say,
Link failure or not connected
The physical layer link is not operational.
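A few standard NX-OS commands that can help narrow down where the failure sits (the interface number is an example):

```
show interface fc1/1
! link state and the failure reason
show interface fc1/1 transceiver
! SFP type and supported speeds
show flogi database
! whether the array has performed a fabric login
```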
This means the switch is not detecting light, so the physical layer is suspect: the cable and lasers (SFPs, HBAs, or whatever adapter the VNX uses). It could also mean you need to bring the interface up from the VNX side. -
Replacement catalyst 6500 switches under redundancy environment
Hi everyone,
I plan to replace our old core Catalyst 6500 switches with new ones for the purpose of reinforcement.
Currently, two core Catalyst 6500 switches are working in a redundant environment.
There are many Catalyst 6500 switches acting as distribution switches connected to each core Catalyst 6500 switch, as attached.
I think there are two ways to replace core catalyst 6500 switches.
[One]
Replace one core Catalyst 6500 switch first; then, one week later, replace the other core Catalyst 6500 switch. All traffic will be handled by the remaining core Catalyst 6500 switch automatically via EIGRP routing during the replacement.
Advantage:
The other core Catalyst 6500 switch continues operating even if the replacement fails.
Disadvantage:
The two core Catalyst 6500 switches will run different software versions (CatOS, MSFC IOS) for one week.
Problems might occur due to this.
[Two]
Replacing both core catalyst 6500 switches at the same time.
Advantage:
The replacement will be finished all at once.
Disadvantage:
If the replacement fails, the whole network goes down, which causes a critical situation.
I have to complete the replacement successfully, so I would like good information about this, such as best practices, case studies, and so on.
Your information would be greatly appreciated.
Your information would be greatly appreciated.
Best regards,

Hi,
If I were you, I would go for option 1.
This option gives you time to observe the traffic pattern, time for the network and EIGRP to stabilize, and a chance to check for any issues on the IOS side.
It also gives you a time frame to work out any issue that happens within the week, and to watch for any incompatibility issues.
HTH, Please rate if it does.
-amit singh -
Connect Nexus 5548UP-L3 to Catalyst 3750G-24T-E Layer 3 Switch
Please help!
Could anyone out there please assist me with basic configuration between a Nexus switch and a Catalyst switch, so that devices connected to the Catalyst switch can talk to devices connected to the Nexus switch and vice versa? In my current setup all servers on VLAN 40 are connected to Catalyst Switch A as shown in the diagram below, and all desktops and other peripherals are connected to Catalyst Switch B. I am required to implement/add a new Nexus 5548 switch that will in the future replace Switch A. For now I just need to connect both switches together and start moving the servers from Switch A to the Nexus switch.
The current network setup is shown as per diagram below:
SWITCH A – this is a Layer 3 switch. All servers are connected to this switch on VLAN 40.
SWITCH B – all desktops, VoIP telephones, and printers are connected to this switch. This switch is also a Layer 3 switch.
I have connected the Nexus 5548UP and SWITCH A (3750G) using the GLC-T= 1000BASE-T SFP transceiver module for Category 5 copper wire. The new network is shown in the diagram below:
Below is the configuration I have created in both Switches:
SWITCH A - 3750G
interface Vlan40
description ** Server VLAN **
ip address 10.144.40.2 255.255.255.128
ip helper-address 10.144.40.39
ip helper-address 10.144.40.40
interface Vlan122
description connection to N5K-C5548UP Switch mgmt0
ip address 172.16.0.1 255.255.255.128
no ip redirects
interface Port-channel1
description UpLink to N5K-C5548UP Switch e1/1-2
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,30,40,100,101,122
switchport mode trunk
interface GigabitEthernet1/0/3
description **Connected to server A**
switchport access vlan 40
no mdix auto
spanning-tree portfast
interface GigabitEthernet1/0/20
description connection to N5K-C5548UP Switch mgmt0
switchport access vlan 122
switchport mode access
spanning-tree portfast
interface GigabitEthernet1/0/23
description UpLink to N5K-C5548UP Switch e1/1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,30,40,100,101,122
switchport mode trunk
channel-group 1 mode active
interface GigabitEthernet1/0/24
description UpLink to N5K-C5548UP Switch e1/2
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,30,40,100,101,122
switchport mode trunk
channel-group 1 mode active
N5K-C5548UP Switch
feature interface-vlan
feature lacp
feature dhcp
feature lldp
vrf context management
ip route 0.0.0.0/0 172.16.0.1
vlan 1
vlan 100
service dhcp
ip dhcp relay
interface Vlan1
no shutdown
interface Vlan40
description ** Server VLAN **
no shutdown
ip address 10.144.40.3/25
ip dhcp relay address 10.144.40.39
ip dhcp relay address 10.144.40.40
interface port-channel1
description ** Trunk Link to Switch A g1/0/23-24 **
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
interface Ethernet1/1
description ** Trunk Link to Switch A g1/0/23**
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
channel-group 1 mode active
interface Ethernet1/2
description ** Trunk Link to Switch A g1/0/24**
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
channel-group 1 mode active
interface Ethernet1/3
description **Connected to server B**
switchport access vlan 40
speed 1000
interface mgmt0
description connection to Switch A g2/0/20
no ip redirects
ip address 172.16.0.2/25
I get a successful response from Server A when I ping the N5K-C5548UP switch VLAN 40 interface (10.144.40.3). But if I try to ping from Server A to Server B or vice versa, the ping fails. From the N5K-C5548UP I can successfully ping either Server A or Server B. What am I doing wrong here? Is there any additional configuration that I need to add on the Nexus switch? Please help. Thank you.

No, no secret, aukhadiev.
I made a mistake without realising it, and interface e1/3 was showing "Interface Ethernet1/3 is down (Inactive)". After spending some time trying to figure out what was wrong with that interface or the switch, it turned out that I had forgotten to add vlan 40. Now the config looks like this:
N5K-C5548UP Switch
feature interface-vlan
feature lacp
feature dhcp
feature lldp
vrf context management
ip route 0.0.0.0/0 172.16.0.1
vlan 1
vlan 40
vlan 100
service dhcp
ip dhcp relay
interface Vlan1
no shutdown
interface Vlan40
description ** Server VLAN **
no shutdown
ip address 10.144.40.3/25
ip dhcp relay address 10.144.40.39
ip dhcp relay address 10.144.40.40
interface port-channel1
description ** Trunk Link to Switch A g1/0/23-24 **
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
interface Ethernet1/1
description ** Trunk Link to Switch A g1/0/23**
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
channel-group 1 mode active
interface Ethernet1/2
description ** Trunk Link to Switch A g1/0/24**
switchport mode trunk
switchport trunk allowed vlan 1,30,40,100-101,122
speed 1000
channel-group 1 mode active
interface Ethernet1/3
description **Connected to server B**
switchport access vlan 40
speed 1000
interface mgmt0
description connection to Switch A g2/0/20
no ip redirects
ip address 172.16.0.2/25
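With vlan 40 now defined, the fix can be confirmed with a few standard NX-OS show commands:

```
show vlan brief
show interface trunk
show port-channel summary
```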
Thank you,
JN -
Servers connected to Nexus 5548 only getting 200 Mbps of throughput
Servers connected to the Nexus 5k were only getting 100 Mbps of throughput, so I disabled flow-control receive on all the ports. After this we are getting 200 Mbps. The servers are connected through 10 Gig ports. Could you guys please suggest why the throughput is still so low? We should be getting at least 1 Gbps of throughput.
Hi Adam,
I think we probably need a little more information to go on. Can you answer the following?
What type of servers and NICs?
What OS are you running on the servers?
What cables do you have from the servers to the switch?
Are the two servers in the same subnet or is the traffic between them routed?
If routed, is that in the Nexus 5548 or some other router?
How are you testing throughput?
Presumably you're not seeing any errors on the switch ports that the servers are connected to?
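On the switch side, a few standard NX-OS commands worth capturing while a transfer test runs (the interface number is an example):

```
show interface ethernet 1/1
show interface ethernet 1/1 counters errors
show queuing interface ethernet 1/1
show interface flowcontrol
```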
Regards -
Connecting IBM v7000 to Nexus 5548
20-Sep-2012 16:51 (in response to feisalb)
IBM V7000 with Nexus 5548UP and Nexus 4000 Design/Implementation Guide
Hi Guys
I have a question regarding connecting an IBM v7000 directly to a Nexus 5548.
CAN WE DO THIS?
Our current setup is IBM v7000 -> MDS 9124 -> Nexus 5548.
But our MDS 9124s are out of warranty now and we need to take them out of production. The only way we can do this is to connect our IBM v7000 fibre ports directly to our Nexus 5548.
Can someone please point me in the right direction - any knowledge base articles etc.
Thanks heaps,
Sid

Dear prkrishn,
I am working on a Data Center solution between two Data Centers, details underneath:
DC 1 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
DC 2 Site
1. 2 x DC Core Switch (Nexus 7009) - will be used for Servers, Application, IBM V7000, Database, etc.
2. 2 x Network Core Switch (Cisco 6509) - Handle Campus and Inter Building connection.
3. IBM V7000 (SAN)
With the above-mentioned setup, can I configure FCIP between DC1 & DC2 using the Nexus 7009? Or do I need an FCIP-capable switch such as the IBM SAN switch (SAN06-BR)? I was wondering if I can configure FCIP on the Nexus 7009 DC switch.
Hoping for your kind response at the earliest.
Kind Regards,
Arnold -
Two Nexus 5020 vPC etherchannel with Two Catalyst 6500 VSS
Hi,
we are fighting with a 40 Gbps EtherChannel between 2 Nexus 5000s and 2 Catalyst 6500s, but the EtherChannel never comes up. Here is the config:
NK5-1
interface port-channel30
description Trunk hacia VSS 6500
switchport mode trunk
vpc 30
switchport trunk allowed vlan 50-54
speed 10000
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30
NK5-2
interface port-channel30
description Trunk hacia VSS 6500
switchport mode trunk
vpc 30
switchport trunk allowed vlan 50-54
speed 10000
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
Catalyst 6500 VSS
interface Port-channel30
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
interface TenGigabitEthernet2/1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet2/1/3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet1/1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet1/1/3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
The "Show vpc 30" is as follows
N5K-2# sh vpc 30
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* success success -
But the "Show vpc Consistency-parameters vpc 30" is
N5K-2# sh vpc consistency-parameters vpc 30
Legend:
Type 1 : vPC will be suspended in case of mismatch
Name Type Local Value Peer Value
Shut Lan 1 No No
STP Port Type 1 Default Default
STP Port Guard 1 None None
STP MST Simulate PVST 1 Default Default
mode 1 on -
Speed 1 10 Gb/s -
Duplex 1 full -
Port Mode 1 trunk -
Native Vlan 1 1 -
MTU 1 1500 -
Allowed VLANs - 50-54 50-54
Local suspended VLANs - - -
We will appreciate any advice.
Thank you very much for your time...
Jose

Hi Lucien,
here is the "show vpc brief"
N5K-2# sh vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 5
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : secondary
Number of vPCs configured : 2
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
vPC Peer-link status
id Port Status Active vlans
1 Po5 up 50-54
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* success success -
31 Po31 down* failed Consistency Check Not -
Performed
*************************************************************************+
*************************************************************************+
N5K-1# sh vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 5
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 2
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
vPC Peer-link status
id Port Status Active vlans
1 Po5 up 50-54
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* failed Consistency Check Not -
Performed
31 Po31 down* failed Consistency Check Not -
Performed
I have changed the LACP mode on both devices to active:
On Nexus N5K-1/-2
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30 mode active
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30 mode active
On Catalyst 6500
interface TenGigabitEthernet2/1/2-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
switchport mode trunk
channel-protocol lacp
channel-group 30 mode active
interface TenGigabitEthernet1/1/2-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
switchport mode trunk
channel-protocol lacp
channel-group 30 mode active
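After changing both sides to LACP active, the bundle can be re-checked with these standard commands:

```
show port-channel summary
show vpc 30
show vpc consistency-parameters vpc 30
show lacp neighbor
```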
Thanks for your time.
Jose -
Catalyst 6500 - Nexus 7000 migration
Hello,
I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as FHRP, Rapid PVST+, and OSPF as IGP. Furthermore, the Cat6500s run MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise, the topology is quite standard, with a number of 6500 and CBS3020/3120 switches as server access.
In preparing for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming EtherChannels will run in "normal" mode, to avoid any problems with BA. For now, only L2 will be utilized on the N7K, as we are awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
The questions arise when migrating Layer 3 functionality, including HSRP. As I understand it, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs, assigning the same virtual IP, and decrementing the priority to a value below the current standby router. Spanning-tree priority will also be modified, if necessary, to better align with HSRP.
From a routing perspective, I'm thinking of configuring OSPF/BGP etc. similarly to the 6500s, only tweaking the metrics (cost, local preference, etc.) to constrain forwarding to the 6500s, and subsequently migrating both routing and FHRP at the same time; maybe not in a big-bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present this seems like a valid approach to me, but maybe someone has experience with this (good or bad), so I'm hoping someone has some insight they would like to share.
Topology drawing is attached.
Thanks
/Ulrich

In a normal scenario, yes. But not in vPC. HSRP is a bit different in a vPC environment. Even though the SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
I would suggest you set up the SVIs on the N7K but leave them in the down state. When you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "ready", I mean the spanning-tree root is on the N7K, along with all the L3 northbound links (toward the core).
I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, there was one SVI for which HSRP would not establish between the C6K and the N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec to 1 min for each SVI, but it was less painful and wasted less time, because we didn't need to figure out what was wrong or chase NX-OS bugs.
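The sliding approach described in the question would look roughly like this on the N7K side (VLAN, addresses, and priority are hypothetical; the virtual IP matches the existing C6K group, and the SVI stays shut until cutover as suggested above):

```
feature interface-vlan
feature hsrp
interface Vlan100
  shutdown
  ip address 10.1.100.4/24
  hsrp 1
    priority 90
    ip 10.1.100.1
```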
HTH,
jerry -
Hi,
I have 2 SUN 4270 M2 servers connected to a Nexus 5548 switch over 10Gb fiber cards. I am getting a transfer rate of only 60 MB per second when copying a 5 GB file between the 2 servers, roughly the same speed I used to get on the 1Gb network. Please suggest how to improve the transfer speed. On the servers, ports eth4 and eth5 are bonded in bond0 with mode=1. The server environment will be used for OVS 2.2.2.
Below are the details of the network configuration on the servers. Quick help would be highly appreciated.
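Two things stand out before reading the output below: bonding mode=1 is active-backup, so only one 10G link carries traffic at a time; and the interfaces run MTU 9000, which the Nexus 5548 does not forward by default. If jumbo frames are in use end to end, the switch needs a network-qos policy along these lines (a sketch of the usual NX-OS jumbo-MTU configuration, not taken from this thread):

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

A jumbo-MTU mismatch between host and switch can show up as drops or errors rather than an obvious misconfiguration, so it is worth ruling out first.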
[root@host1 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:5648589 errors:215 dropped:0 overruns:0 frame:215
TX packets:3741680 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2492781394 (2.3 GiB) TX bytes:3911207623 (3.6 GiB)
[root@host1 network-scripts]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:52961 errors:215 dropped:0 overruns:0 frame:215
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3916644 (3.7 MiB) TX bytes:0 (0.0 b)
[root@host1 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host1 network-scripts]#
[root@host1 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4c
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0e:22:4d
[root@host1 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host1 network-scripts]# brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0e224c no bond0.301
vlan302 8000.90e2ba0e224c no vif1.0
bond0.302
vlan303 8000.90e2ba0e224c no bond0.303
vlan304 8000.90e2ba0e224c no bond0.304
[root@host2 test]# ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:4416730 errors:215 dropped:0 overruns:0 frame:215
TX packets:2617152 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190977431 (182.1 MiB) TX bytes:3114347186 (2.9 GiB)
[root@host2 network-scripts]# ifconfig eth4
eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:C3:15
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:28616 errors:3 dropped:0 overruns:0 frame:3
TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4982317 (4.7 MiB) TX bytes:80029 (78.1 KiB)
[root@host2 test]#
[root@host2 network-scripts]# ethtool eth4
Settings for eth4:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 test]# ethtool eth5
Settings for eth5:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
10000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
[root@host2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:14
Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:0f:c3:15
[root@host2 network-scripts]# modinfo ixgbe | grep ver
filename: /lib/modules/2.6.18-128.2.1.4.44.el5xen/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.9.17-NAPI
description: Intel(R) 10 Gigabit PCI Express Network Driver
srcversion: 31C6EB13C4FA6749DF3BDF5
vermagic: 2.6.18-128.2.1.4.44.el5xen SMP mod_unload Xen 686 REGPARM 4KSTACKS gcc-4.1
[root@host2 network-scripts]# brctl show
bridge name bridge id STP enabled interfaces
vlan301 8000.90e2ba0fc315 no bond0.301
vlan302 8000.90e2ba0fc315 no bond0.302
vlan303 8000.90e2ba0fc315 no bond0.303
vlan304 8000.90e2ba0fc315 no vif1.0
bond0.304
Thanks....
Jay

Hi,
Thanks for the reply, but the RX error count keeps increasing, and the transfer speed between the 2 servers is at most 60 MB/s on the 10Gb fiber card. Even to storage I get the same speed when I transfer data from a server to the storage over 10Gb. The servers and storage are connected through the Nexus 5548 switch.
#ifconfig eth5
eth5 Link encap:Ethernet HWaddr 90:E2:BA:0E:22:4C
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:21187303 errors:1330 dropped:0 overruns:0 frame:1330
TX packets:17805543 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:624978785 (596.0 MiB) TX bytes:2897603160 (2.6 GiB)
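For scale: 60 MB/s is well under what a single 10G link should deliver, and the frame-error rate in the counters above is tiny, so the errors alone are unlikely to explain the throughput (disk or application limits are more plausible). A quick back-of-envelope check using the numbers quoted above:

```shell
# 60 MB/s expressed as line rate on a 10 Gbit/s link (taking 1 MB = 10^6 bytes)
awk 'BEGIN { printf "%.2f Gbit/s\n", 60 * 8 / 1000 }'
# frame-error rate from the eth5 counters above (1330 errors / 21187303 RX packets)
awk 'BEGIN { printf "%.4f%%\n", 1330 / 21187303 * 100 }'
```

That is under 5% of the link's capacity and an error rate below one hundredth of a percent, which points away from the physical link itself.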
JP -
Telephony Issues on Nexus 5548
Dear Viewers,
I have Nexus 5548 devices in one of my client data centers and i have one 3750 switch to which all of these Avaya voice servers connect.
The 3750 switch was initially connected through a L2 Link to a 6509 catalyst switch and the telephony applications were working correctly.
The problem arises when I move this 3750 Layer 2 link to a Nexus 5548 (NX-OS version 5.1(3)N1). All telephony calls coming from the outside (external calls) stop working as required, but internal calls work as usual.
What is odd is that when I move this L2 link back to the 6509 switch, everything works again. This is just a Layer 2 connection, and I am wondering why it does not work.
The VLAN is allowed on all relevant trunks. I also disabled IGMP snooping on the voice VLAN on the Nexus 5548, thinking it would help, but to no avail.
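A few hedged things worth checking on the 5548 while the link is cut over (VLAN 200 here stands in for the actual voice VLAN, which the post does not name):

```
show vlan id 200
show interface trunk
show spanning-tree vlan 200
show ip igmp snooping vlan 200
show mac address-table vlan 200
```

Comparing the MAC table and spanning-tree state on the 5548 against the working 6509 during a failed external call can narrow down whether the traffic is being flooded, filtered, or never arriving at all.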
Any ideas and suggestions are welcome.
regards.
Alain

This is my RADIUS config on a 5K:
radius-server timeout 7
radius-server host 10.28.42.20 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
radius-server host 10.28.42.21 key 7 "Password" auth-port 1645 acct-port 1646 authentication accounting
aaa group server radius Radius-Servers
server 10.28.42.20
server 10.28.42.21
aaa authentication login default group Radius-Servers
ip radius source-interface Vlan1
aaa authentication login default fallback error local
And it is currently working. On the RADIUS server I also had to do the following to give the users admin rights once logged in:
https://supportforums.cisco.com/document/137181/nexus-integration-admin-access-free-radius
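In short, NX-OS expects the RADIUS server to return the user's role in a Cisco AV-pair; in FreeRADIUS that looks roughly like this (the username, password, and file layout below are illustrative, not from the linked document):

```
# users file entry (FreeRADIUS sketch; nxadmin/secret are placeholders)
nxadmin  Cleartext-Password := "secret"
         cisco-avpair = "shell:roles=\"network-admin\""
```

Without a role attribute, the login may succeed but land the user in a restricted role instead of network-admin.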