6500 Egress Policing & Etherchannels
Hi,
Am I correct in thinking that it is not possible to configure egress policing on an EtherChannel on a 6500 VS720? I can't find any documentation that says it is not supported... Thanks,
Pete
Hi Jon
Many thanks
I'm using VLAN-based QoS because later I'll add more VLANs to the configuration; this is just an initial test to see how to use the QoS functions on the 6500, which we'll later use with more VLANs. Essentially the port that is currently gi1/1 may later be a trunk port with 10+ VLANs bound to it (with associated VLAN interfaces on the 6500).
Data arriving from the server to the 6500 most likely won't have any DSCP markings, or at least not valid ones.
allvoip is currently simplified to just ICMP traffic for testing, so it looks like this:
class-map match-all allvoip
match access-group 100
access-list 100 permit icmp any any
What is concerning me is that when I have a continuous ping running, I get deltas in the ping times when other data is downloading via gi1/1 (which makes me think the strict-priority queue isn't quite right).
If I can ask: if I wanted to rate-limit the data on vlan6 (say, limit it to 10 Mbit/s) and still also mark DSCP to enable allocation to the different egress queues, do you have any suggestions? I can use a police statement in the policy-map classes, but I don't really want to police each class separately,
kind of like
vlan 6's entire capacity policed to 10 Mbit/s
then inside that
allvoip marked EF (and then assigned CoS 5 and the 1p priority queue)
etc etc
cheers
Mark
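One way to sketch the "police the whole VLAN, mark per class" idea on the PFC is a shared named aggregate policer; all classes referencing the same named policer share one 10 Mbit/s token bucket while each class still sets its own DSCP. The policer name and burst value here are illustrative and untested:
mls qos aggregate-policer VLAN6-10M 10000000 312500 conform-action transmit exceed-action drop
policy-map QosVoice
 class allvoip
  set ip dscp ef
  police aggregate VLAN6-10M
 class class-default
  set ip dscp af21
  police aggregate VLAN6-10M
interface Vlan6
 service-policy input QosVoice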
Similar Messages
-
6500 Sup 2T Etherchannel DSCP marking
Good Morning,
We are in the middle of a CUCM deployment and on the 6500 I need to set a DSCP or COS value on egress for the CUCM servers. So far I have not found the correct way to set the DSCP. I have attempted to create a service policy and apply it to the physical ports (and I tried the etherchannel just for kicks) and I get the following error:
Policy can not be installed because interface GigabitEthernet2/12 is a member of Port-channel
MQC features are not supported for this interface
How do I correctly set the DSCP value to EF on egress on these ports or the port-channel?
Thank you in advance for your assistance.
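On a Sup 2T the MQC policy generally has to go on the logical port-channel rather than its member ports. A minimal sketch, assuming the CUCM servers sit behind a hypothetical Po10 (server address, ACL, and names are illustrative); an input policy on the server-facing bundle marks the traffic coming from the servers so it carries EF on egress elsewhere:
ip access-list extended CUCM-SERVERS
 permit ip host 10.10.10.10 any
class-map match-all CUCM
 match access-group name CUCM-SERVERS
policy-map MARK-CUCM
 class CUCM
  set dscp ef
interface Port-channel10
 service-policy input MARK-CUCM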
Justin -
5508 WLC-6500 Series Switch Etherchannel
Hi,
I have a 5508 controller connected to a 6500 VSS pair. Below are the port-channel configuration and the port configuration. I am just wondering whether we still have to configure a load-balancing method, as Cisco recommends "port-channel load-balance src-dst-ip" as best practice.
Is this still applicable for the 5508 controller-to-6500 uplink, given that the etherchannel is an L2 etherchannel?
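For reference, the load-balance method is a single global hash on the 6500 (global to both VSS chassis) and applies regardless of whether the channel is L2 or L3, since the hash can still read the IP headers inside the frames. A sketch of setting and checking it:
port-channel load-balance src-dst-ip
show etherchannel load-balance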
Port Channel Config:
interface Port-channel1
description To 5508 WLC
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 9
switchport trunk allowed vlan 10,11,12
switchport mode trunk
mls qos trust dscp
end
Interface Config:
interface GigabitEthernet1/1/42
description To 5508 WLC
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 9
switchport trunk allowed vlan 10,11,12
switchport mode trunk
wrr-queue bandwidth 5 25 70
wrr-queue queue-limit 5 25 40
wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 3 50 60 70 80 90 100 100 100
wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 3 60 70 80 90 100 100 100 100
wrr-queue cos-map 1 1 1
wrr-queue cos-map 2 1 0
wrr-queue cos-map 3 1 4
wrr-queue cos-map 3 2 2
wrr-queue cos-map 3 3 3
wrr-queue cos-map 3 4 6
wrr-queue cos-map 3 5 7
mls qos trust dscp
channel-group 1 mode on
end
Hello,
Please check to following link regarding load balancing between 5508 and WLC 6500:
http://www.learnios.com/viewtopic.php?f=5&t=34555 -
Cisco 6500 Egress Queueing Query (is it working correctly?)
hi all
We have a test setup in our lab that we are working with and we believe we must be missing something.
The kit is a 6500, Sup 720B, WS-X6748-48-GE-TX line cards.
We are attempting to implement PFC QoS to ensure uninterrupted throughput of real-time data (dscp=EF).
Our basic configuration
mls qos map cos-dscp 0 8 16 24 32 46 48 56
mls qos
policy-map QosVoice
class allvoip
set ip dscp ef
class class-default
set ip dscp af21
interface GigabitEthernet1/1
switchport
switchport access vlan 6
switchport mode access
mls qos vlan-based
mls qos trust dscp
interface Vlan6
ip address 10.0.2.149 255.255.255.0
service-policy input QosVoice
In our setup we are passing data from a far-end interface into the 6500 and through to VLAN interface 6 (and out port gi1/1).
If my understanding is correct, data coming to the switch from vlan6 and out via gi1/1 will pass through QosVoice (and I confirm with Wireshark that this is happening); the allvoip class is marking dscp=ef (per the packet capture).
My concern though is that this data is not being placed into the strict priority egress queue on the 6748 line card.
"show queueing interface gi1/1" suggests it is (see end of post).
However, when I do some real-world testing, I truly don't believe that traffic in class "allvoip" is really getting the strict priority I would expect. For example, I temporarily placed ICMP traffic into this class, and when doing large downloads the ping times were impacted (changing from <1 ms to between 3 and 6 ms); with a strict queue I would expect those ping packets to stay totally stable, independent of traffic in other queues.
Can I get any input or feedback on our setup?
regards
Mark
Interface GigabitEthernet1/1 queueing strategy: Weighted Round-Robin
Port QoS is enabled
Trust boundary disabled
Trust state: trust DSCP
Extend trust state: not trusted [COS = 0]
Default COS is 0
Queueing Mode In Tx direction: mode-cos
Transmit queues [type = 1p3q8t]:
Queue Id Scheduling Num of thresholds
01 WRR 08
02 WRR 08
03 WRR 08
04 Priority 01
queue thresh cos-map
1 1 0
1 2 1
1 3
1 4
1 5
1 6
1 7
1 8
2 1 2
2 2 3 4
2 3
2 4
2 5
2 6
2 7
2 8
3 1 6 7
3 2
3 3
3 4
3 5
3 6
3 7
3 8
4 1 5
-
Two Nexus 5020 vPC etherchannel with Two Catalyst 6500 VSS
Hi,
We are fighting with a 40 Gbps etherchannel between two Nexus 5000s and two Catalyst 6500s, but the etherchannel never comes up. Here is the config:
NK5-1
interface port-channel30
description Trunk hacia VSS 6500
switchport mode trunk
vpc 30
switchport trunk allowed vlan 50-54
speed 10000
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30
NK5-2
interface port-channel30
description Trunk hacia VSS 6500
switchport mode trunk
vpc 30
switchport trunk allowed vlan 50-54
speed 10000
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
beacon
channel-group 30
Catalyst 6500 VSS
interface Port-channel30
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
interface TenGigabitEthernet2/1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet2/1/3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet1/1/2
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
interface TenGigabitEthernet1/1/3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
channel-protocol lacp
channel-group 30 mode passive
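One thing worth checking in the configs above: the N5K member ports use plain "channel-group 30" (mode on, i.e. no LACP), while the 6500 side runs LACP passive; on↔passive never negotiates a bundle, which is also what the consistency output below suggests ("mode ... on ... -"). A sketch of matched modes, active on both ends (you may need to remove the existing channel-group from the members first):
! N5K side (both peers)
interface Ethernet1/3
  channel-group 30 mode active
! 6500 VSS side (each member port)
interface TenGigabitEthernet2/1/2
 channel-group 30 mode active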
The "show vpc 30" output is as follows:
N5K-2# sh vpc 30
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* success success -
But the "show vpc consistency-parameters vpc 30" output is:
N5K-2# sh vpc consistency-parameters vpc 30
Legend:
Type 1 : vPC will be suspended in case of mismatch
Name Type Local Value Peer Value
Shut Lan 1 No No
STP Port Type 1 Default Default
STP Port Guard 1 None None
STP MST Simulate PVST 1 Default Default
mode 1 on -
Speed 1 10 Gb/s -
Duplex 1 full -
Port Mode 1 trunk -
Native Vlan 1 1 -
MTU 1 1500 -
Allowed VLANs - 50-54 50-54
Local suspended VLANs - - -
We will appreciate any advice,
Thank you very much for your time...
Jose
Hi Lucien,
here is the "show vpc brief"
N5K-2# sh vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 5
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : secondary
Number of vPCs configured : 2
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
vPC Peer-link status
id Port Status Active vlans
1 Po5 up 50-54
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* success success -
31 Po31 down* failed Consistency Check Not -
Performed
*************************************************************************+
*************************************************************************+
N5K-1# sh vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 5
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 2
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
vPC Peer-link status
id Port Status Active vlans
1 Po5 up 50-54
vPC status
id Port Status Consistency Reason Active vlans
30 Po30 down* failed Consistency Check Not -
Performed
31 Po31 down* failed Consistency Check Not -
Performed
I have changed the LACP mode on both devices to active:
On Nexus N5K-1/-2
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30 mode active
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 50-54
channel-group 30 mode active
On Catalyst 6500
interface TenGigabitEthernet2/1/2-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
switchport mode trunk
channel-protocol lacp
channel-group 30 mode active
interface TenGigabitEthernet1/1/2-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 50-54
switchport mode trunk
channel-protocol lacp
channel-group 30 mode active
Thanks for your time.
Jose -
EtherChannel Across Multiple Slots
The examples I have seen for EtherChannel always bundle multiple ports on the same slot. It seems that bundling ports across multiple slots would increase the resiliency, for example when linking two core switches together. I assume this would allow the link to stay up if the card in a given slot failed. Is this supported? If so, are there any concerns?
Bundling across slots is indeed done to increase resiliency.
Concerns? Yes: on some chassis, different cards may have different QoS architectures, and you may have to configure the platform to ignore this. Also, you can channel across different media (fiber and copper).
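If I remember the 6500 CLI correctly, the platform knob for bundling ports whose cards have different queueing structures is the channel consistency check; treat this as an assumption and verify it on your software release:
interface Port-channel1
 no mls qos channel-consistency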
At least on a 6500 VSS, ingress traffic will use an egress link on the same 6500 chassis (this is to avoid transiting the VSL). -
Catalyst 6500 - Nexus 7000 migration
Hello,
I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as the FHRP, rapid-pvst, and OSPF as the IGP. Furthermore, the Cat6500s run MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise, the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
In preparation for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming etherchannels will run in "normal" mode, to avoid any problems with BA. For now, only L2 will be utilized on the N7Ks, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
The questions arise when migrating Layer 3 functionality, incl. HSRP. As per my understanding, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer-link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs and assigning the same virtual IP, only decrementing the priority to a value below the current standby router. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
From a routing perspective, I'm thinking of configuring ospf/bgp etc. similar to that of the 6500's, only tweaking the metrics (cost, localpref etc) to constrain forwarding on the 6500's and subsequently migrate both routing and FHRP at the same time. Maybe not in a big bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present, for me this seems like a valid approach, but maybe someone has experience with this (good/bad), so I'm hoping someone has some insight they would like to share.
Topology drawing is attached.
Thanks
/Ulrich
In a normal scenario, yes, but not in vPC. HSRP is a bit different in the vPC environment: even though an SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
I suggest you set up the SVIs on the N7K but leave them in the down state. When you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "ready", I mean the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, we had one SVI where HSRP would not establish between the C6K and N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec to 1 min per SVI, but it was less painful and wasted less time, because we didn't need to figure out what was wrong or chase NX-OS bugs.
HTH,
jerry -
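The staged approach described above might look roughly like this on the N7K; the VLAN, addresses, and priority are illustrative, and the SVI stays shut until cutover, when you shut the C6K SVI and "no shutdown" this one:
feature hsrp
feature interface-vlan
interface Vlan100
  shutdown
  ip address 10.1.100.3/24
  hsrp 100
    ip 10.1.100.1
    priority 90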
Etherchannel between 2 cat 6500 switches
Folks,
I have two 6500s at the core. They are connected to each other using two gig ports bundled together. Is this configuration right? Do I need to do anything else to make sure the trunks are stable and do not cause any spanning-tree issues?
interface GigabitEthernet6/5
no ip address
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 1 mode on
interface GigabitEthernet6/6
no ip address
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 1 mode on
The configuration is technically correct, although we recommend using channel-group desirable as well as trunk desirable. This enables the underlying negotiation protocols and functions as an integrity check, ensuring that the partner ports are transmitting and receiving properly. This is detailed in the best-practices document:
Trunking (DTP):
http://www.cisco.com/en/US/products/hw/switches/ps700/products_white_paper09186a00801b49a4.shtml#cg4
Etherchannel (PAgP):
http://www.cisco.com/en/US/products/hw/switches/ps700/products_white_paper09186a00801b49a4.shtml#cg6
All that aside, this is ultimately a design guideline. Leaving it as it is will function as a trunked etherchannel properly.
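For reference, the negotiated variant recommended above would look roughly like this on each member port (a sketch, not a verified config):
interface GigabitEthernet6/5
 switchport
 switchport trunk encapsulation dot1q
 switchport mode dynamic desirable
 channel-group 1 mode desirable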
HTH,
Bobby -
6500 - Etherchannel with HP and ESX
I have a couple of questions that I can't find the answer to (which surprised me).
Do all NICs in a team need to be plugged into the same blade (switch), or is it technically a single stacked switch?
HP server NIC teaming and ESX NIC teaming docs give different hash methods (see links below).
How do you reconcile this, or does it matter? Should you specify a load-balancing method per switch module and plug all NICs in a team into that specific switch?
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004048
http://www.cisco.com/c/en/us/support/docs/lan-switching/etherchannel/98469-ios-etherchannel.html
Thank you for any responses,
JT
JT,
One other thing you should look into: sometimes you don't need port-channels toward the ESX hosts, because they use NIC teaming, and when one logical NIC fails the traffic simply shifts to the other logical NIC without losing a ping packet. The same thing happened when we disconnected a physical link. We did this testing with Dell servers, 2Ks, and 6Ks; the 6Ks do not run any port-channel, just a simple trunk, and it works really well. Now, you are using 6500 and HP and it may all be different, but just FYI, in case you want to test it.
HTH -
Cross switch etherchannel config between two 6500 and 3750
Dear All,
I would like to design the network and have run into a problem. My network has one 3750 and two 6500s. I would like to set up an etherchannel from the 3750 (two uplink ports together), with one link to the first 6500 and the other link to the second 6500, and one trunk between the 6500s for redundancy.
I tried to use PAgP (auto/desirable, on/on), but a channel-misconfig error occurred and the etherchannel stays in suspended or standalone state.
Can anybody suggest/recommend a method for this case?
Thanks
Unfortunately, you cannot create an etherchannel from one device to two different devices. For example, from the 3750 you have gig 1/0/1 and gig 1/0/2: gig 1/0/1 connects to port 1/1 of switch A and gig 1/0/2 connects to port 1/2 of switch B. You can NOT create an etherchannel on the 3750 that combines gig 1/0/1 and gig 1/0/2 into a bigger pipe; that is not what etherchannel is designed to do.
However, if you have gig 1/0/1 and gig 1/0/2 on the 3750 connecting to ports 1/1 and 1/2 of switch A, you can create a channel on both devices for a bigger pipe (4 Gbps at full duplex). And if, on that same 3750, an additional gig 1/0/3 and gig 1/0/4 connect to ports 1/1 and 1/2 of switch B, you can create another, separate channel combining gig 1/0/3 and gig 1/0/4 with switch B's ports 1/1 and 1/2. That scenario is totally acceptable.
I hope that helps clear up channeling.
In your described scenario, channeling is not what you are asking for; it's STP, and you really do not need to do anything, as STP is enabled by default. Maybe you just need to make sure the root is where you want it to be, and that is configurable. With your looped physical topology, STP will prevent a loop from forming and will give you the redundancy you seek: when one link fails, the port blocked by STP will go to forwarding once STP detects that it should.
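Pinning the root as suggested is a one-liner per VLAN range on whichever 6500 should be root (the VLAN range here is illustrative):
spanning-tree vlan 1-100 root primary
! or set an explicit low priority:
spanning-tree vlan 1-100 priority 4096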
Please rate helpful posts. -
EtherChannel load-balance on Catalyst 6500 running CatOS
I know EtherChannel load balancing can use either MAC addresses, IP addresses, or the TCP port numbers.
1. Can I configure it to make sure every port under the same channel group has the same traffic utilization?
2. If one of the etherchannel's physical ports carries more traffic than its physical bandwidth, why can't the switch use another etherchannel physical port to share the load?
Normally it will balance the traffic fairly well. There is no way to make sure each link has exactly the same utilization. You can look at how it is load-balanced and make a change, say from MAC to IP address, if it looks like you aren't getting the balance you want. It would be very rare to fill one port on the channel without filling the rest almost the same; an exception would be if most of your traffic is headed to one place, like a certain server. Even then, if you used IP addresses in both directions as the load-balance input, I think it would balance out pretty well. If it got to the point where one link was almost full, you would have to think about adding another port to the channel. This is a really good page: http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml
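On CatOS the hash is changed with the "set port channel ... distribution" command; this syntax is from memory, so verify it on your release:
set port channel all distribution ip both
show port channel info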
-
Etherchannel showing down (SD) and ports are in "I" stand alone state
Hi,
A NetApp server is connected to the 6500 switch via a trunk.
I configured a port-channel but it is showing as down. Take a look at the output below:
interface Port-channel248
description Netapp-server-1 po248
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 903
switchport mode trunk
switchport nonegotiate
no ip address
no shut
interface GigabitEthernet3/33
description server-1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 903
switchport mode trunk
switchport nonegotiate
no ip address
speed 1000
udld port aggressive
spanning-tree portfast
channel-group 248 mode active
no shut
interface GigabitEthernet4/33
description cnndcfasp002a-e5d
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 903
switchport mode trunk
switchport nonegotiate
no ip address
speed 1000
udld port aggressive
spanning-tree portfast
channel-group 248 mode active
no shut
Switch-6500#sh etherchannel summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
Number of channel-groups in use: 5
Number of aggregators: 5
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
248 Po248(SD) LACP Gi3/33(I) Gi4/33(I)
#sh etherchannel detail
Group: 248
Group state = L2
Ports: 2 Maxports = 16
Port-channels: 1 Max Port-channels = 16
Protocol: LACP
Minimum Links: 0
Ports in the group:
Port: Gi3/33
Port state = Up Sngl-port-Bndl Mstr Not-in-Bndl
Channel group = 248 Mode = Active Gcchange = -
Port-channel = null GC = - Pseudo port-channel = Po248
Port index = 0 Load = 0x00 Protocol = LACP
Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.
Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi3/33 SA indep 32768 0xF8 0xF8 0x321 0x7D
Age of the port in the current state: 0d:02h:04m:58s
Port: Gi4/33
Port state = Up Sngl-port-Bndl Mstr Not-in-Bndl
Channel group = 248 Mode = Active Gcchange = -
Port-channel = null GC = - Pseudo port-channel = Po248
Port index = 0 Load = 0x00 Protocol = LACP
Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.
Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi4/33 SA indep 32768 0xF8 0xF8 0x421 0x7D
Age of the port in the current state: 0d:02h:04m:58s
Port-channels in the group:
Port-channel: Po248 (Primary Aggregator)
Age of the Port-channel = 7d:16h:30m:16s
Logical slot/port = 14/3 Number of ports = 0
Port state = Port-channel Ag-Not-Inuse
Protocol = LACP
Can anyone please let me know what the issue is here?
Thanks
Gautham
Exactly, the 6500 config is fine. Probably the NetApp is not set to active or passive; if it's just ON, that won't work.
"show lacp 248 neighbor" will show whether you have a neighbor and whether the LACP ID is the same on both ports:
Core1#sh lacp 2 neighbor
Flags: S - Device is requesting Slow LACPDUs
F - Device is requesting Fast LACPDUs
A - Device is in Active mode P - Device is in Passive mode
Channel group 2 neighbors
Partner's information:
Partner Partner LACP Partner Partner Partner Partner Partner
Port Flags State Port Priority Admin Key Oper Key Port Number Port State
Gi1/7/10 SA bndl 32768 0x0 0x1 0x11A 0x3D
Gi2/7/10 SA bndl 32768 0x0 0x1 0x31D 0x3D
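If the NetApp end turns out to be a static multimode VIF (no LACP at all), a hedged fallback is static bundling on the switch side as well; note that mode on and LACP cannot mix in one bundle, so remove the existing channel-group from the members first:
interface GigabitEthernet3/33
 no channel-group 248
 channel-group 248 mode on
interface GigabitEthernet4/33
 no channel-group 248
 channel-group 248 mode on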
cheers -
ASA Service Module on 6500: monitoring console session
We have 6500 with ASA Service Module
On the 6500, how can we configure things so that if someone logs in to the ASA Service Module and reboots the firewall, we get logs of it in the switch's syslog?
Thanks for the help
I hate to answer my own posts, but here it is. TAC tells us there are two choices to make this work. Apparently the way that worked on an ISR and ISR G2 does not work on the 4000 series routers. I guess that's progress.
Option 1: use a physical cable to connect one of the router's interfaces to one of the etherswitch's interfaces and treat it just as if the etherswitch were a separate physical switch. I'm sure there is a use case for that, but I'll not cover it here.
Option 2. Use the "service instance" feature on the router's internal interface to bind it to a new "BDI" virtual interface on the router. This is what we'll do.
On our router, Ethernet-Internal 1/0/0 maps to Gi0/18 on the etherswitch, all internal to the box. The router will be 10.0.0.1 and the switch will be 10.0.0.2.
Router:
interface Ethernet-Internal 1/0/0
service instance 1 ethernet
encapsulation dot1q 50
rewrite ingress tag pop 1
interface BDI 1
mtu 9216
ip address 10.0.0.1 255.255.255.0
Switch:
interface Gi0/18
switchport trunk allowed vlan 50
switchport mode trunk
vlan 50
name Egress vlan
interface vlan 50
ip address 10.0.0.2 255.255.255.0
ip route 0.0.0.0 0.0.0.0 10.0.0.1
Then there are a million ways to design and configure the switch as a normal 3560X switch but that's beyond the scope of my question. -
Pvlan (promiscuous port) not permitted on etherchannel
This is probably an often-asked question: I've read in the 7K v5.0(2) NX-OS docs, the 6500 12.2SXF/SXH IOS docs, and the 5K docs that the pvlan feature cannot co-exist with etherchanneling on the same uplink. This would presumably include promiscuous trunk ports.
This seems so counter-productive and self-defeating, especially where the 7K vPC feature is configured, requiring LACP etherchannels from the single switch into both 7Ks.
What am I missing here? Is there a workaround short of new dedicated links? The cost of the 10G optics for that approach is prohibitive. Is it possibly on the roadmap for some future upgrade?
Maybe I could configure an unconditional port-channel with no protocol and no negotiation? I'm waiting on the vDC licensing to let me direct-connect the lab 5Ks.
Thank you,
Ken Johnson
Required info:
system image file is: bootflash:///n5000-uk9.6.0.2.N2.2.bin
system compile time: 10/4/2013 12:00:00 [10/04/2013 22:23:49]
interface port-channel4001
description SERVER1
switchport mode trunk
switchport access vlan 201
switchport trunk native vlan 201
storm-control broadcast level 1.00
storm-control multicast level 1.00
storm-control unicast level 1.00
vpc 4001
interface Ethernet107/1/47
description SRV1
no lldp transmit
no cdp enable
switchport mode trunk
switchport access vlan 201
switchport trunk native vlan 201
storm-control broadcast level 1.00
storm-control multicast level 1.00
storm-control unicast level 1.00
channel-group 4001 mode active
no shutdown -
VPC on Nexus 5000 with Catalyst 6500 (no VSS)
Hi, I'm pretty new on the Nexus and UCS world so I have some many questions I hope you can help on getting some answers.
The diagram below shows the configuration we are looking to deploy. It is that way because we do not have VSS on the 6500 switches, so we cannot create a single etherchannel to the 6500s.
Our blades inserted on the UCS chassis have INTEL dual port cards, so they do not support full failover.
Questions I have are.
- Is this my best deployment choice?
- vPC depends heavily on the management interface on the Nexus 5000 for keep-alive peer monitoring, so what is going to happen if the vPC breaks due to:
- one of the 6500 goes down
- STP?
- What is going to happen to the etherchannels on the remaining 6500?
- the Management interface goes down for any other reason
- which one is going to be the primary NEXUS?
Below is the list of devices involved and the configuration for the Nexus 5000 and 6500.
Any help is appreciated.
Devices
· 2 Cisco Catalyst with two WS-SUP720-3B each (no VSS)
· 2 Cisco Nexus 5010
· 2 Cisco UCS 6120xp
· 2 UCS Chassis
- 4 Cisco B200-M1 blades (2 each chassis)
- Dual 10Gb Intel card (1 per blade)
vPC Configuration on Nexus 5000
TACSWN01
feature vpc
vpc domain 5
reload restore
reload restore delay 300
peer-keepalive destination 10.11.3.10
role priority 10
!--- Enables vPC, define vPC domain and peer for keep alive
int ethernet 1/9-10
channel-group 50 mode active
!--- Put Interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as Peer-Link for vPC
int ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates interfaces to Po51 connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
int ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates interfaces to Po52 connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!----- CONFIGURATION for Connection to Catalyst 6506
int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associates interfaces to Po61 connected to Cat6506-01
int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associates interfaces to Po62 connected to Cat6506-02
int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
TACSWN02
feature vpc
vpc domain 5
reload restore
reload restore delay 300
peer-keepalive destination 10.11.3.9
role priority 20
!--- Enables vPC, define vPC domain and peer for keep alive
int ethernet 1/9-10
channel-group 50 mode active
!--- Put Interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as Peer-Link for vPC
int ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates interfaces to Po51 connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
int ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates interfaces to Po52 connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!----- CONFIGURATION for Connection to Catalyst 6506
int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associates interfaces to Po61 connected to Cat6506-01
int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associates interfaces to Po62 connected to Cat6506-02
int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
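Regarding the keepalive concern in the questions above: the peer-keepalive lines in these configs specify only a destination. A common hardening step is to pin the source address and VRF explicitly so the keepalive always uses the out-of-band management path and never the peer-link. A sketch, assuming 10.11.3.9/10.11.3.10 are the mgmt0 addresses:

vpc domain 5
peer-keepalive destination 10.11.3.10 source 10.11.3.9 vrf management
!--- on TACSWN01; mirror the source/destination addresses on TACSWN02

As I understand vPC behavior, if the peer-link fails while the keepalive is still up, the secondary peer (the one with the higher role priority value) suspends its vPC member ports to avoid a split-brain, which is why keeping the keepalive on an independent path matters.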
vPC Verification
show vpc consistency-parameters
!--- Shows compatibility parameters
show feature
!--- Use it to verify that the vpc and lacp features are enabled.
show vpc brief
!--- Displays information about the vPC domain
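A few more standard NX-OS show commands that may be worth adding to the verification list:

show vpc peer-keepalive
!--- Displays keepalive status and the source/destination/VRF in use
show vpc role
!--- Displays which peer is primary/secondary and the role priorities
show port-channel summary
!--- Confirms member ports are bundled (flagged P)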
Etherchannel configuration on TAC 6500s
TACSWC01
interface range GigabitEthernet2/38 - 43
description TACSWN01 (Po61 vPC61)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 61 mode active
TACSWC02
interface range GigabitEthernet2/38 - 43
description TACSWN02 (Po62 vPC62)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 62 mode active
ihernandez81,
Between c1-r1 & c1-r2 there are no L2 links; ditto with d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.
All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port channels 203 & 204 carry exactly the same VLANs.
The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203 with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).
As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
If you got on any 5K you would see 2 port channels, 203 & 204, going to each 6500; again, when one pair went to VSS, Po204 went away.
I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack: how would you hook up a 3750 stack to 2 6500s, and if you did, why would you run an L2 link between the 6500s?
For us, using 4 10G ports between the 6509s took ports that were too expensive (we had 6704s), so we used the 5Ks.
Our blocking link was on one of the links between site1 & site2. If we did not have WAN connectivity there would have been no blocking or loops.
Caution: if you go with 7Ks, beware of the inability to do L2/L3 over vPCs.
Better?
One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity, things you are migrating tend to keep working, unless they really break.