Cisco 4500 High CPU: GaliosObflFilesys process
Hi All,
We are using a Cisco 4510R-E switch with two WS-X45-SUP6-E supervisors in SSO.
The IOS running on the chassis is cat4500e-ipbasek9-mz.122-54.SG1.bin.
For the past 3-4 days we have been observing high CPU on this chassis, and the process consuming the most CPU is the GaliosObflFilesys process.
The output is here:
Yogesh_Cisco4500#sh processes cpu sorted
CPU utilization for five seconds: 59%/0%; one minute: 65%; five minutes: 65%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
80 3582290369 198427228 18053 47.67% 50.69% 50.72% 0 GaliosObflFilesy
Apart from this, we are getting the below error messages continuously on the console:
384665: Mar 27 08:41:46.933 IST: %C4K_SWITCHINGENGINEMAN-4-TCAMINTERRUPT: (Suppressed 380 times)flCam0 aPErr interrupt. errAddr: 0x1FD9 dPErr: 0 mPErr: 1 valid: 1
384666: Mar 27 08:41:54.538 IST: %C4K_L3HWFORWARDING-4-FLTCAMPARITYERROR: (Suppressed 766 times)FL Tcam Perr with no FwdEntry Hw index: 8152 Hw entry: Sw entry:
384667: Mar 27 08:42:16.972 IST: %C4K_SWITCHINGENGINEMAN-4-TCAMINTERRUPT: (Suppressed 384 times)flCam0 aPErr interrupt. errAddr: 0x1FD9 dPErr: 0 mPErr: 1 valid: 1
384668: Mar 27 08:42:47.006 IST: %C4K_SWITCHINGENGINEMAN-4-TCAMINTERRUPT: (Suppressed 393 times)flCam0 aPErr interrupt. errAddr: 0x1FD9 dPErr: 0 mPErr: 1 valid: 1
384669: Mar 27 08:42:54.571 IST: %C4K_L3HWFORWARDING-4-FLTCAMPARITYERROR: (Suppressed 763 times)FL Tcam Perr with no FwdEntry Hw index: 8152 Hw entry: Sw entry:
We also found that the error messages are generated by the SUP in slot 5, which is in the Active state:
Yogesh_Cisco4500#sh platform software obfl module all
1 : no obfl storage
2 : no obfl storage
3 : no obfl storage
4 : no obfl storage
5 : slot-5: messages=95200748 int-logged=80533419 int-dropped=0 dirty=yes
5 : slot-5: version=1 sector: size=4096 written=47182041 dirty=21
6 : remote supervisor
7 : no obfl storage
8 : no obfl storage
9 : no obfl storage
10 : no obfl storage
We found bug CSCsv17545, but it should not apply in our case since it is resolved in 12.2(54)SG1.
We want to know whether this is a software issue or a hardware issue, and how to isolate it.
Hi Rizwan,
Based on our observation, either the software is unable to recover from these errors (the HW index and error address in the messages are constant), or the SUP engine has a hardware problem (the errors are logged on the active SUP engine only). You can isolate the issue with the method below:
Remove the SUP and reinsert it, or if you have a spare SUP, replace the active SUP engine.
If the same errors are still observed, upgrade the image; if not, replace the SUP engine.
In our case, the issue was resolved after removing and reinserting the SUP engine.
The CPU returned to normal as well.
Additionally, you can set the boot-up diagnostics level to complete so that a hardware problem on the active SUP is caught during boot-up, before you pull the active SUP engine.
Regards,
YSR.
Similar Messages
-
4500 High CPU due to "K2CpuMan"
I am facing high CPU on one of my 4507R-E chassis. It has a WS-X45-SUP6-E supervisor, and all of the line cards are WS-X4648-RJ45V-E.
My current IOS is 12.2(54)SG1.
I have followed the below document.
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-4000-series-switches/65591-cat4500-high-cpu.html
So far I have not run a CPU SPAN or taken debug outputs, and I cannot do so due to the sensitivity of the applications on the servers terminating on this switch; even milliseconds of downtime must be avoided.
I am pasting the rest of the output below. Please suggest possible causes; we are not using DAI, DHCP snooping, or IGMP.
------------------ SHOW PROC CPU ------------------
CPU utilization for five seconds: 81%/1%; one minute: 63%; five minutes: 61%
60 5656124073704026290 152 54.15% 55.05% 54.58% 0 Cat4k Mgmt LoPri
------------------ SHOW PLATFORM HEALTH ------------------
%CPU %CPU RunTimeMax Priority Average %CPU Total
Target Actual Target Actual Fg Bg 5Sec Min Hour CPU
K5CpuMan Review 30.00 60.02 30 34 100 500 62 74 62 32140:33
K2CpuMan Review: the process that performs software packet forwarding. If you see high CPU utilization due to this process, investigate the packets that hit the CPU with the show platform cpu packet statistics command.
---------------------SHOW PLATFORM CPU PACKET STATISTICS-----------------------------
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Unknown 0 0 0 0 0
Esmp 0 0 0 0 0
Input ACL fwd(snooping) 2715401932 1734 1920 1703 1667
ACL fwd(snooping): packets that are processed by DHCP snooping, dynamic ARP inspection, or IGMP snooping.
I have found the root cause. I took a CPU SPAN, and right after that I found a server that was consistently sending multicast packets, causing the high CPU. After disabling the related service on the server, the CPU returned to normal.
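For reference, picking the busiest queue out of a show platform cpu packet statistics dump can also be scripted. This is a rough sketch, not an official tool; the sample text is trimmed from the output above:

```python
# Rough sketch: find the busiest CPU queue in "show platform cpu packet
# statistics" output by its 5-second average. Queue names may contain
# spaces, so the five numeric columns are split off the right-hand side.

SAMPLE = """\
Queue                    Total        5 sec avg 1 min avg 5 min avg 1 hour avg
Unknown                  0            0         0         0         0
Esmp                     0            0         0         0         0
Input ACL fwd(snooping)  2715401932   1734      1920      1703      1667
"""

def busiest_queue(output):
    """Return (queue_name, five_sec_avg) for the most active queue."""
    best = (None, -1)
    for line in output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) < 6:
            continue
        name = " ".join(fields[:-5])          # name is everything before the counters
        five_sec = int(fields[-4])            # columns: Total, 5s, 1m, 5m, 1h
        if five_sec > best[1]:
            best = (name, five_sec)
    return best
```

Run against the full dump, this points straight at the Input ACL fwd(snooping) queue, matching the CPU SPAN finding.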
-
Hello, friends.
There is a Cisco 2801 (C2801-ADVENTERPRISEK9_IVS-M), Version 15.1(4)M7.
Telephones are connected via SCCP. There is one trunk to a SIP provider.
For about two weeks there were no problems. Since yesterday, the error log shows:
Jun 25 17:31:07.019: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=190) is rejected.
Jun 25 17:31:11.799: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=191) is rejected.
Jun 25 17:31:28.443: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=192) is rejected.
Jun 25 17:33:16.403: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=195) is rejected.
Jun 25 17:34:12.059: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=197) is rejected.
Jun 25 17:35:53.335: %IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization (97/100).
Call (callID=198) is rejected.
DC(config)# do show proc cpu sort
CPU utilization for five seconds: 98%/54%; one minute: 94%; five minutes: 96%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
127 4265972 410671 10387 31.06% 28.33% 26.47% 0 IP Input
7 1073688 374129 2869 6.96% 6.33% 5.85% 0 Pool Manager
84 122676 17048 7195 2.56% 0.79% 0.72% 0 Skinny Msg Serve
383 89324 17955 4974 0.95% 0.69% 0.66% 0 IP NAT Ager
64 8928 273 32703 0.79% 0.09% 0.06% 0 Per-minute Jobs
287 16476 5739 2870 0.31% 0.08% 0.07% 0 Crypto PAS Proc
103 88684 349633 253 0.31% 0.42% 0.37% 0 Ethernet Msec Ti
173 12576 5995 2097 0.31% 0.09% 0.07% 0 TCP Timer
172 3588 110 32618 0.31% 0.03% 0.00% 0 Licensing Auto U
318 23992 3777 6352 0.23% 0.10% 0.05% 194 SSH Process
How can I diagnose where the problem is?
The ACL on the external interface:
Extended IP access list FW-OUT
10 permit icmp any any (1279 matches)
20 permit tcp any host 92.63.108.115 eq 22 (1599 matches)
30 permit gre host 217.197.126.52 host 92.63.108.115
40 permit esp host 217.197.126.52 host 92.63.108.115 (3616549 matches)
50 permit udp host 217.197.126.52 host 92.63.108.115 eq isakmp (3 matches)
60 permit esp any host 92.63.108.115 (1505 matches)
70 permit udp any host 92.63.108.115 eq isakmp (26 matches)
80 permit udp any host 92.63.108.115 eq non500-isakmp (28062 matches)
90 permit tcp any host 92.63.108.115 eq www
100 permit tcp any host 92.63.108.115 eq ftp ftp-data
110 permit gre host 46.165.197.108 host 92.63.108.115 (39348 matches)
120 permit tcp any host 92.63.108.115 eq 1723
130 permit gre any host 92.63.108.115
140 permit ip host 217.150.198.44 host 92.63.108.115
150 permit ip host 178.63.96.3 host 92.63.108.115
160 permit ip host 78.46.95.118 host 92.63.108.115
170 permit ip host 176.9.145.115 host 92.63.108.115 (79 matches)
180 permit ip host 176.9.85.133 host 92.63.108.115
190 permit ip host 5.9.108.25 host 92.63.108.115
200 permit ip host 89.249.23.194 host 92.63.108.115 (156 matches)
210 permit ip host 46.4.53.86 host 92.63.108.115
220 permit ip host 5.9.84.165 host 92.63.108.115
230 permit ip host 144.76.42.108 host 92.63.108.115 (141 matches)
240 permit ip host 178.16.26.122 host 92.63.108.115
250 permit ip host 178.16.26.124 host 92.63.108.115
260 permit ip host 178.63.96.28 host 92.63.108.115
350 deny ip any any (805 matches)
Hi,
1) Looking at the error message decoder, the alarm generates the following output:
%IVR-3-LOW_CPU_RESOURCE: IVR: System experiencing high cpu utilization ([dec]/100). Call (callID=[dec]) is rejected.\n
The system does not have enough CPU resources available to accept a new call.
Recommended Action: Ensure that the call setup rate is within the supported capacity of this gateway.
Related documents: no specific documents apply to this error message.
2) Have a read of this link regarding high CPU in the IP Input process:
http://www.cisco.com/c/en/us/support/docs/routers/7500-series-routers/41160-highcpu-ip-input.html
Regards
Alex -
Cisco asa high cpu - 90% -100%
Hi All,
Recently we observed constant high CPU on an ASA firewall running version 8.2.5, around 80% utilization. The process consuming the most CPU is the tmatch compile thread, at around 60%. Do you recommend downgrading to 8.2.3, or is this an open bug in 8.2.5 (bug ID CSCtw75734)?
regards
SecIT
We do have multiple object groups. By getting the number of access-list elements, do you mean that the more access-list elements there are, the higher the CPU and memory utilization? I actually faced a similar issue a few months ago on a PIX firewall, where CPU/memory went high due to too many ACL elements, so I reduced them by deleting object groups and access-list entries. I thought the ASA was different and had no such practical limit on the number of objects and ACL entries.
-
Hi,
We are observing high CPU on one of our backbone switches.
We enabled an EEM script; the capture is below.
But we are not able to pinpoint the issue.
event manager applet high_cpu
event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.3.1 get-type exact entry-op gt entry-val "65" poll-interval 5 action 1.01 syslog msg "------HIGH CPU DETECTED----, CPU: $_snmp_oid_val %"
action 1.02 cli command "enable"
action 1.03 cli command "term len 0"
action 1.04 cli command "debug platform packet all receive buffer"
action 1.05 cli command "show platform health | redirect slot0:high_cpu1"
action 1.06 cli command "show proc cpu sort | redirect slot0:high_cpu2"
action 1.07 cli command "show platform cpu packet statistics | redirect slot0:high_cpu3"
action 1.08 cli command "show platform cpu packet buffered | redirect slot0:high_cpu4"
action 1.09 cli command "show platform health | redirect slot0:high_cpu5"
action 1.10 cli command "show proc cpu sort | redirect slot0:high_cpu6"
action 1.11 cli command "show platform cpu packet statistics | redirect slot0:high_cpu7"
action 1.12 cli command "show platform cpu packet buffered | redirect slot0:high_cpu8"
action 1.13 cli command "show clock | redirect slot0:high_cpu9"
action 1.14 cli command "undebug all"
action 1.15 cli command "conf t"
action 1.16 cli command "no event
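The applet above polls the CPU OID every 5 seconds and fires the capture commands once the value crosses 65. For what it's worth, that trigger logic can be prototyped off-box; this is a sketch only, where get_cpu_percent and run_command are hypothetical stand-ins for an SNMP query and a CLI session, not real APIs:

```python
# Sketch of the EEM applet's trigger logic. get_cpu_percent and run_command
# are hypothetical callables supplied by the caller (e.g. wrapping an SNMP
# GET of the CPU OID and an SSH session); they are not real library calls.
import time

THRESHOLD = 65                     # matches entry-val "65" in the applet
CAPTURE_COMMANDS = [
    "show platform health",
    "show proc cpu sort",
    "show platform cpu packet statistics",
    "show platform cpu packet buffered",
]

def check_and_capture(get_cpu_percent, run_command, delay=5):
    """Poll once; on high CPU, take two rounds of captures and return them."""
    cpu = get_cpu_percent()
    if cpu <= THRESHOLD:
        return None
    snapshots = []
    for _ in range(2):             # two passes, like actions 1.05-1.12
        for cmd in CAPTURE_COMMANDS:
            snapshots.append((cmd, run_command(cmd)))
        time.sleep(delay)
    return cpu, snapshots
```

Collecting the data from an external host this way also sidesteps the vty sessions that each on-box redirect consumes.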
Switch version: cat4500e-entservicesk9-mz.122-50.SG3.bin
Model: WS-C4900M 3 slot switch
Please suggest.
Br/Subhojit
Hi,
I guess each redirect command uses a vty session, and that is why we might see this issue. I had another look at the logs.
I see lots of SA Miss entries, i.e. the Source Address Miss counter is increasing. Have you verified whether there are any topology changes (show spanning-tree detail | in ieee|exec|from|occurred)?
switch##more slot0:high_cpu2 3
RkiosSysPacketMan:
Total 5 sec avg 1 min avg 5 min avg 1 hour avg
641603851 50 9 0 0
Packets Dropped In Processing by CPU event
Event Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Sa Miss 641599177 50 9 0 0
L3 Receive 2 0 0 0 0
Input Acl Fwd 4669 0 0 0 0
Sw Packet for Bridge 3 0 0 0 0
Thanks & Regards,
Karthick Murugan
CCIE#39285 -
sh run command causing high CPU / SSH process
HI
When we execute the sh run command on a 45xx or 65xx switch, the CPU momentarily spikes above 85%, and the SSH process takes most of the CPU cycles.
Is this normal IOS behavior, or something we need to check?
4500#sh processes cpu sorted | exc 0.00
CPU utilization for five seconds: 88%/1%; one minute: 41%; five minutes: 36%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
146 4136 871 4748 59.27% 6.05% 1.30% 1 SSH Process
48 16866873922152633381 0 14.55% 14.98% 15.01% 0 Cat4k Mgmt HiPri
103 16552592322880818592 0 6.31% 6.57% 6.59% 0 Spanning Tree
6500#sh processes cpu sorted | exc 0.00
CPU utilization for five seconds: 82%/6%; one minute: 17%; five minutes: 9%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
66 7408 4713 1571 70.47% 9.15% 1.95% 1 SSH Process
273 2680382442700124902 0 1.67% 1.13% 1.02% 0 IP Input
267 176292656 43570676 4046 1.27% 1.09% 1.09% 0 CDP Protocol
12 56465396 138235369 408 0.47% 0.35% 0.32% 0 ARP Input
Br/Subhojit
Hi,
Our configuration is fairly large.
Apart from when we run the command, CPU is low.
Is there a way to fine-tune this?
I found one command, but I am not sure about its impact; please advise:
parser config cache interface
Br/Subhojit -
6500/SUP32 High CPU load & process switching troubles.
Hi!
At one of our nodes we use a Catalyst 6500 with a Sup32 running software 12.2(33)SXH5. There is a problem with high CPU load due to interrupts:
CPU utilization for five seconds: 56%/45%; one minute: 57%; five minutes: 55%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
8 67138180 7538435 8906 7.19% 5.74% 5.70% 0 ARP Input
100 17392892 145853 119253 1.35% 1.90% 1.95% 0 HC Counter Timer
89 23849784 388043 61462 0.87% 1.06% 1.11% 0 Compute load avg
196 7210696 749626 9619 0.55% 0.87% 0.85% 0 CEF: IPv4 proces
138 3505932 17454349 200 0.31% 0.44% 0.38% 0 ADJ resolve proc
136 7986672 29865853 267 0.23% 0.35% 0.34% 0 IP Input
181 2713632 106885 25388 0.07% 0.13% 0.11% 0 IPC LC Message H
I set up a SPAN session on the RP and found ordinary client network traffic (HTTP, etc.) that should be hardware CEF switched.
It is obvious that this traffic is being process switched:
Interface information:
Interface IBC0/0(idb 0x454A98F0)
Hardware is Mistral IBC (revision 5)
0 minute rx rate 231062000 bits/sec, 40653 packets/sec
0 minute tx rate 230981000 bits/sec, 40542 packets/sec
768441545 packets input, 763672035960 bytes
0 broadcasts received
763863117 packets output, 763298719016 bytes
2 broadcasts sent
0 Inband input packet drops
0 Bridge Packet loopback drops
0 Rx packets dropped with Multicast MAC and Unicast IP
767110946 Packets CEF Switched, 1279 Packets Fast Switched
0 Packets SLB Switched, 0 Packets CWAN Switched
Potential/Actual paks copied to process level 1309036/1308996 (40 dropped, 40 spd drops)
514919227 inband interrupts
71265 transmit ring cleanups
514994756 ibl inputs
71265 total tx interrupts set
71265 tx ints due to packets outstanding
0 tx ints due low free buffers in pool
0 tx ints due to application setting
tx dma done batch size=32
buffers free minimum before tx int=4
mistral ran out of tx descriptors 0 times
mistral tx interrupt inconsisteny occured 0 times
Label switched pkts dropped: 0
Xconnect pkts processed: 0, dropped: 0
IBC resets = 2; last at 14:15:03.733 msd Tue Mar 29 2011
-=-
--sup32-TEST###sh ip cef switching statistics
Reason Drop Punt Punt2Host
RP LES Packet destined for us 0 100098 0
RP LES No adjacency 7609 0 0
RP LES Incomplete adjacency 979470 0 313
RP LES TTL expired 0 0 4511
RP LES Discard 7811 0 0
RP LES Features 3127605 0 288855
RP LES Unclassified reason 833 0 0
RP LES Neighbor resolution req 507057 58 0
RP LES Total 4630385 100156 293679
All Total 4630385 100156 293679
I read the document https://supportforums.cisco.com/docs/DOC-14086, but I could not work out on what basis the traffic is being process switched.
Some logs:
sup32-TEST###sh cef not-cef-switched
% Command accepted but obsolete, see 'show (ip|ipv6) cef switching statistics [feature]'
IPv4 CEF Packets passed on to next switching layer
Slot No_adj No_encap Unsupp'ted Redirect Receive Options Access Frag
RP 0 0 293175 0 99949 0 288300 0
5/0 0 0 0 0 0 0 0 0
or:
sup32-TEST###sh ip cef switching statistics
Reason Drop Punt Punt2Host
RP LES Packet destined for us 0 100098 0
RP LES No adjacency 7609 0 0
RP LES Incomplete adjacency 979470 0 313
RP LES TTL expired 0 0 4511
RP LES Discard 7811 0 0
RP LES Features 3127605 0 288855
RP LES Unclassified reason 833 0 0
RP LES Neighbor resolution req 507057 58 0
RP LES Total 4630385 100156 293679
All Total 4630385 100156 293679
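One way to use the statistics dumps above: capture them some minutes apart and diff the counters, since the reason that keeps growing is the one punting your traffic. A rough parser sketch, with the column layout assumed from the output above:

```python
# Rough sketch: diff two "sh ip cef switching statistics" snapshots and
# report which reasons are still incrementing. The last three columns are
# Drop, Punt and Punt2Host; the reason name is everything before them.

def parse_stats(output):
    stats = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        try:
            counters = [int(f) for f in fields[-3:]]
        except ValueError:
            continue                      # header or prose line
        stats[" ".join(fields[:-3])] = sum(counters)
    return stats

def growing_reasons(before, after):
    """Return {reason: delta} for every reason that grew between snapshots."""
    a, b = parse_stats(before), parse_stats(after)
    return {r: b[r] - a[r] for r in b if r in a and b[r] > a[r]}
```

In the dumps above, the Features and Incomplete adjacency counters would be the first ones to watch between captures.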
The setup at this site is simple enough:
an uplink (Vlan24, 2 Gb EtherChannel) and about 3500 SVIs with the IP unnumbered feature and static routes with a /32 mask for customers, for example:
interface Vlan101
ip unnumbered Loopback0
ip access-group 110 in
end
ip route 192.168.100.101 255.255.255.255 Vlan 101
interface Loopback0
ip address 192.168.100.1 255.255.255.0
ip address 192.168.101.1 255.255.255.0
etc...
The default route is learned via EIGRP.
The essence of my question: how can I determine why this traffic hits the CPU?
I would be grateful for any ideas and am ready to provide any diagnostic data.
Thanks for your help.
Regards,
Dmitry
L3Aging is related to NDE (NetFlow Data Export). Check for any anomalies.
Also, on a side note, I could not help noticing that you are running 6.1(1b). You might want to go to the latest 6.4(x), which is general deployment, to pick up security vulnerability fixes, diagnostic enhancements, and other bug fixes.
PS: Remember to rate useful posts. -
High-CPU process slows down after a period of time
Hi,
I am running a commercial finite element solver. Our typical installations are on Windows Server 2008, but we have now been forced to move to Windows Server 2012.
When we launch the solver, which may use up to 20 cores, CPU usage remains consistent for about 40 minutes to an hour, then drops to the usage of only one core without the job having completed; it then finishes several hours later.
We already tried setting power management to High Performance, but that didn't solve it.
Are there any suggestions on where to look?
Thanks
Raffaello
Hi,
First, please make sure your hardware supports Windows Server 2012. I am not sure what a "finite element commercial solver" is, so I will treat it as just another piece of software. Is there any compatibility issue between the software and the new OS?
Process Monitor may help diagnose the issue.
http://technet.microsoft.com/en-us/sysinternals/bb896653
Hope this helps. -
Cisco 3750 High CPU troubleshooting
I am trying to familiarize myself with troubleshooting on the Cat 3750 platform, and I've come across the theory of how the CPU works. Here are a few points:
- The CPU has 16 queues.
- The depth of the CPU queues cannot be modified.
- Each queue reserves buffering for a specific packet type.
- The hardware (e.g., the port ASIC) will drop on queue congestion.
- Overload on one CPU queue should not affect the other queues.
- A lot of packets in a specific queue may be normal.
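A toy model may make those points concrete. The depth value here is made up for illustration; the real per-queue depths are fixed in hardware and not publicly documented:

```python
# Toy model of the CPU queue behavior described above: a fixed number of
# queues, each with a fixed depth and dedicated to one packet type; the
# hardware drops on a full queue, and one congested queue does not affect
# the others.

class CpuQueues:
    def __init__(self, n_queues=16, depth=64):
        self.depth = depth
        self.queues = [[] for _ in range(n_queues)]
        self.drops = [0] * n_queues

    def enqueue(self, queue_id, packet):
        """Return True if accepted, False if the queue was full (tail drop)."""
        q = self.queues[queue_id]
        if len(q) >= self.depth:           # full: the port ASIC drops it
            self.drops[queue_id] += 1
            return False
        q.append(packet)
        return True
```

Flooding queue 0 in this model fills only queue 0's drop counter, mirroring the isolation property in the notes above.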
I also have the output of the "show controllers cpu-interface" command, which shows the number of packets hitting each queue. What the document did not explain (maybe because it is proprietary) is: are these queues like QoS queues, where a specific amount of resources is allotted to each queue?
Thanks in Advance !!!
Regards
Umesh
We have upgraded to version c3750e-universalk9-mz.150-2.SE7.bin, but even after the upgrade we are facing the same issue.
I repeat: the most stable IOS for the 3560X/3750X is 12.2(55)SE10.
Did you find any abnormalities in the logs?
I did not see anything useful in the files attached.
High CPU on Catalyst 4500 switch, version 12.2(53)SG5
I need help understanding the following output.
%CPU %CPU RunTimeMax Priority Average %CPU Total
Target Actual Target Actual Fg Bg 5Sec Min Hour CPU
GalChassisVp-review 3.00 0.10 10 45 100 500 0 0 0 701:22
K2AclCamMan hw stats 3.00 1.48 10 5 100 500 0 0 0 3265:20
K2AclCamMan kx stats 1.00 0.10 10 5 100 500 0 0 0 1917:38
K2L2 Address Table R 2.00 72.93 12 5 100 500 95 87 66 458521:03
K2PortMan Review 3.00 4.73 15 11 100 500 5 5 4 28199:47
K2Fib Consistency Ch 1.00 0.19 5 2 100 500 0 0 0 1082:02
K2FibPbr route map r 2.00 0.21 20 5 100 500 0 0 0 2031:50
K2QosDblMan Rate DBL 2.00 0.11 7 1 100 500 0 0 0 662:26
K2 VlanStatsMan Revi 2.00 0.39 15 2 100 500 0 0 0 2062:11
K2PacketBufMonitor-P 3.00 2.76 10 1 100 500 2 2 2 13968:10
RkiosPortMan Port Re 2.00 0.30 12 29 100 500 0 0 0 1744:23
RkiosIpPbr IrmPort R 2.00 0.22 10 3 100 500 0 0 0 342:05
Grandprix 1-1 Stub R 2.00 0.17 15 5 100 500 0 0 0 1244:55
Grandprix 1-2 Stub R 2.00 0.14 15 5 100 500 0 0 0 866:57
Grandprix 1-3 Stub R 2.00 0.13 15 5 100 500 0 0 0 840:36
Grandprix 1-4 Stub R 2.00 0.16 15 5 100 500 0 0 0 846:55
Grandprix 1-5 Stub R 2.00 0.14 15 5 100 500 0 0 0 855:31
Grandprix 1-6 Stub R 2.00 0.15 15 124 100 500 0 0 0 1051:18
Grandprix 1-1 Stats 2.00 0.64 4 2 100 500 0 0 0 3151:58
Grandprix 1-2 Stats 2.00 0.48 4 2 100 500 0 0 0 675:30
Grandprix 1-3 Stats 2.00 0.20 4 2 100 500 0 0 0 567:34
Grandprix 1-6 Stats 2.00 0.35 4 2 100 500 0 0 0 2072:59
%CPU Totals 241.20 101.19
Allocation ceiling Current allocation
kbytes % in use kbytes % in use
Linecard 1's Store 258.00 62% 161.12 100%
TSM objects ------------------ ------------------
PacketInfoItem 859.37 0% 0.64 0%
VbufNodes1600 55.50 0% 5.20 0%
VbufNodes400 288.00 0% 1.12 50%
PacketBufRaw 25219.50 100% 25219.50 100%
PacketBufRawJumbo 184.29 100% 184.29 100%
Packet 2079.42 0% 54.10 0%
PimPhyports 1054.68 5% 54.84 100%
PimPorts 906.25 11% 105.12 100%
PimModules 162.00 0% 0.63 100%
PimChassis 38.37 6% 2.39 100%
EbmVlans 5088.00 1% 67.07 100%
EbmPorts 344.00 11% 38.63 100%
IrmVrfs 315.00 0% 2.46 50%
IrmFibEntries 12288.00 0% 0.75 6%
IrmMfibEntryMemMan 6656.00 0% 0.10 100%
Acl 1536.00 0% 1.87 100%
Ace24 9215.85 0% 4.92 100%
Ace48 15359.76 0% 0.35 100%
AclListNode 256.00 0% 0.30 100%
AclClassifierActionL 1152.00 0% 1.40 87%
CommandTables 48.00 22% 10.59 100%
K2FibVrfs 152.00 0% 1.18 50%
K2TxPacketInfo 256.00 0% 0.25 0%
Packets Dropped In Hardware By CPU Subport (txQueueNotAvail)
CPU Subport TxQueue 0 TxQueue 1 TxQueue 2 TxQueue 3
1 0 0 0 822
2 0 7538177 0 0
RkiosSysPacketMan:
Packet allocation failures: 0
Packet Buffer(Software Common) allocation failures: 0
Packet Buffer(Software ESMP) allocation failures: 0
Packet Buffer(Software EOBC) allocation failures: 0
Packet Buffer(Software SupToSup) allocation failures: 0
IOS Packet Buffer Wrapper allocation failures: 0
Packets Dropped In Processing Overall
Total 5 sec avg 1 min avg 5 min avg 1 hour avg
881 0 0 0 0
Packets Dropped In Processing by CPU event
Event Total 5 sec avg 1 min avg 5 min avg 1 hour avg
SA Miss 874 0 0 0 0
Packets Dropped In Processing by Priority
Priority Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Normal 7 0 0 0 0
Medium 881 0 0 0 0
Packets Dropped In Processing by Reason
Reason Total 5 sec avg 1 min avg 5 min avg 1 hour avg
SrcAddrTableFilt 87 0 0 0 0
STPDrop 771 0 0 0 0
L2DstDrop 21 0 0 0 0
NoDstPorts 2 0 0 0 0
Total packet queues 16
Packets Received by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
L2/L3Control 7269264609 236 268 214 205
Host Learning 3617387874 38 49 38 53
L3 Fwd Low 21851501 0 0 0 0
L2 Fwd Low 6786759 0 0 0 0
L3 Rx High 1582 0 0 0 0
L3 Rx Low 10306 0 0 0 0
Packets Dropped by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
L2/L3Control 73992 0 0 0 0
Host Learning 7561249 0 0 0 0
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
L2/L3Control 7269264609 236 268 214 205
PacketBufRaw 25219.50 100% 25219.50 100%
PacketBufRawJumbo 184.29 100% 184.29 100%
Both of these seem a bit high, don't you think?
If you have any idea why this is happening, or whether this is normal, please share.
Thanks
Hi,
This could be related to an L2 issue such as a loop. Note that the K2L2 Address Table R process has a target of 2.00% but is currently running at 72.93%, way over the expected target; this can be an indication of host/MAC flapping. The Host Learning queue and the L2/L3 Control queue counts also seem very high.
You can enable the mac address-table notification mac-move command to see whether the logs show any flapping. Also, below is a link that is very helpful when dealing with high CPU issues on the 4500.
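A quick off-box way to spot that kind of outlier is to screen the show platform health table for processes running far above their target. A rough sketch, with column positions assumed from the output pasted above:

```python
# Rough sketch: flag "show platform health" processes whose Actual %CPU is
# well above their Target. The process name can contain spaces, so the ten
# trailing columns (Target, Actual, RunTimeMax Target/Actual, Fg, Bg, 5Sec,
# Min, Hour, Total) are split off the right-hand side.

def over_target(output, factor=2.0):
    flagged = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) < 11:
            continue
        try:
            target, actual = float(fields[-10]), float(fields[-9])
        except ValueError:
            continue                        # header or non-process line
        if target > 0 and actual > factor * target:
            flagged.append((" ".join(fields[:-10]), target, actual))
    return flagged
```

Run on the output above, the only process flagged is K2L2 Address Table R, which is exactly the MAC-flapping suspect.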
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-4000-series-switches/65591-cat4500-high-cpu.html
Hope this helps. -
High CPU due to dispatch unit in cisco ASA 5540
Hi, any suggestions would help. We are seeing high CPU due to the Dispatch Unit process on a Cisco ASA 5540:
ciscoasa# sh processes cpu-usage
PC Thread 5Sec 1Min 5Min Process
0805520c ad5afdf8 0.0% 0.0% 0.0% block_diag
081a8d34 ad5afa08 82.6% 82.1% 82.3% Dispatch Unit
083b6c05 ad5af618 0.0% 0.0% 0.0% CF OIR
08a60aa0 ad5af420 0.0% 0.0% 0.0% lina_int
08069f06 ad5aee38 0.0% 0.0% 0.0% Reload Control Thread
08072196 ad5aec40 0.0% 0.0% 0.0% aaa
08c76f3d ad5aea48 0.0% 0.0% 0.0% UserFromCert Thread
080a6f36 ad5ae658 0.0% 0.0% 0.0% CMGR Server Process
080a7445 ad5ae460 0.0% 0.0% 0.0% CMGR Timer Process
081a815c ad5ada88 0.0% 0.0% 0.0% dbgtrace
0844d75c ad5ad2a8 0.0% 0.0% 0.0% 557mcfix
0844d57e ad5ad0b0 0.0% 0.0% 0.0% 557statspoll
08c76f3d ad5abef8 0.0% 0.0% 0.0% netfs_thread_init
09319755 ad5ab520 0.0% 0.0% 0.0% Chunk Manager
088e3f0e ad5ab328 0.0% 0.0% 0.0% PIX Garbage Collector
088d72d4 ad5ab130 0.0% 0.0% 0.0% IP Address Assign
08ab1cd6 ad5aaf38 0.0% 0.0% 0.0% QoS Support Module
08953cbf ad5aad40 0.0% 0.0% 0.0% Client Update Task
093698fa ad5aab48 0.0% 0.0% 0.0% Checkheaps
08ab6205 ad5aa560 0.0% 0.0% 0.0% Quack process
08b0dd52 ad5aa368 0.0% 0.0% 0.0% Session Manager
08c227d5 ad5a9f78 0.0% 0.0% 0.0% uauth
08bbf615 ad5a9d80 0.0% 0.0% 0.0% Uauth_Proxy
08bf5cbe ad5a9798 0.0% 0.0% 0.0% SSL
08c20766 ad5a95a0 0.0% 0.0% 0.0% SMTP
081c0b4a ad5a93a8 0.0% 0.0% 0.0% Logger
08c19908 ad5a91b0 0.0% 0.0% 0.0% Syslog Retry Thread
08c1346e ad5a8fb8 0.0% 0.0% 0.0% Thread Logger
08e47c82 ad5a81f0 0.0% 0.0% 0.0% vpnlb_thread
08f0f055 ad5a7a10 0.0% 0.0% 0.0% pci_nt_bridge
0827a43d ad5a7620 0.0% 0.0% 0.0% TLS Proxy Inspector
08b279f3 ad5a7428 0.0% 0.0% 0.0% emweb/cifs_timer
086a0217 ad5a7230 0.0% 0.0% 0.0% netfs_mount_handler
08535408 ad5a7038 0.0% 0.0% 0.0% arp_timer
0853d18c ad5a6e40 0.0% 0.0% 0.0% arp_forward_thread
085ad295 ad5a6c48 0.0% 0.0% 0.0% Lic TMR
08c257b1 ad5a6a50 0.0% 0.0% 0.0% tcp_fast
08c28910 ad5a6858 0.0% 0.0% 0.0% tcp_slow
08c53f79 ad5a6660 0.0% 0.0% 0.0% udp_timer
080fe008 ad5a6468 0.0% 0.0% 0.0% CTCP Timer process
08df6853 ad5a6270 0.0% 0.0% 0.0% L2TP data daemon
08df7623 ad5a6078 0.0% 0.0% 0.0% L2TP mgmt daemon
08de39b8 ad5a5e80 0.0% 0.0% 0.0% ppp_timer_thread
08e48157 ad5a5c88 0.0% 0.0% 0.0% vpnlb_timer_thread
081153ff ad5a5a90 0.0% 0.0% 0.0% IPsec message handler
081296cc ad5a5898 0.0% 0.0% 0.0% CTM message handler
089b2bd9 ad5a56a0 0.0% 0.0% 0.0% NAT security-level reconfiguration
08ae1ba8 ad5a54a8 0.0% 0.0% 0.0% ICMP event handler
I want exact troubleshooting guidance:
(1) Steps to follow.
(2) Required configuration.
(3) Any good suggestions.
(4) Any tools to troubleshoot with.
Suggestions are welcome.
Hello,
NMS is probably not the right community to troubleshoot this; you may want to move it to the Security group (Security > Firewalling).
In the meantime, I have some details for you to check, though I am not a security/ASA expert.
The Dispatch Unit is a process that runs continually on single-core ASAs (models 5505, 5510, 5520, 5540, 5550). It takes packets off the interface driver and passes them to the ASA SoftNP for further processing; it also performs the reverse process.
To determine whether the Dispatch Unit process is using the majority of the CPU time, use the commands show cpu usage and show process cpu-usage sorted non-zero.
show cpu usage (and show cpu usage detail) will show the usage of the ASA CPU cores:
ASA# show cpu usage
CPU utilization for 5 seconds = 0%; 1 minute: 1%; 5 minutes: 0%
show process cpu-usage sorted non-zero displays a sorted list of processes by CPU usage.
In the example below, the Dispatch Unit process has used about 50 percent of the CPU over the last 5 seconds:
ASA# show process cpu-usage sorted non-zero
0x0827e731 0xc85c5bf4 50.5% 50.4% 50.3% Dispatch Unit
0x0888d0dc 0xc85b76b4 2.3% 5.3% 5.5% esw_stats
0x090b0155 0xc859ae40 1.5% 0.4% 0.1% ssh
0x0878d2de 0xc85b22c8 0.1% 0.1% 0.1% ARP Thread
0x088c8ad5 0xc85b1268 0.1% 0.1% 0.1% MFIB
0x08cdd5cc 0xc85b4fd0 0.1% 0.1% 0.1% update_cpu_usage
If Dispatch Unit is listed as a top consumer of CPU, then narrow down what might be causing the process to be so active.
Most cases of high CPU utilization occur because the Dispatch Unit process is busy. Common causes include:
Oversubscription
Routing loops
Host with a high number of connections
Excessive system logs
Unequal traffic distribution
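As a first screening step, the show process cpu-usage table can be reduced to its non-idle rows programmatically. A rough sketch, with the column layout assumed from the output above:

```python
# Rough sketch: list ASA processes above a CPU floor from "show processes
# cpu-usage" output. Each row is: PC, Thread, 5Sec%, 1Min%, 5Min%, name;
# the name is everything after the three percentage columns.

def top_processes(output, min_pct=1.0):
    procs = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) < 6 or not fields[2].endswith("%"):
            continue                      # header or malformed line
        five_sec = float(fields[2].rstrip("%"))
        if five_sec >= min_pct:
            procs.append((" ".join(fields[5:]), five_sec))
    return sorted(procs, key=lambda p: p[1], reverse=True)
```

On the output pasted in the question, this reduces to Dispatch Unit alone at over 82 percent, which fits the single-core pattern described above.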
More troubleshooting details can be shared by the ASA members of the community.
HTH
-Thanks
Vinod -
Hi all,
I'm seeing high CPU usage on one of my Cisco 3845 routers.
It works as an IP-to-IP gateway, and the CPU is quite high even though the total number of calls is only around 100-200.
I checked CPU usage with "show process cpu sort", and it looks like some "hidden" processes are consuming CPU.
For example, total CPU is 41%, of which 25% is due to interrupts, so CPU utilization at the process level should be 41 - 25 = 16%.
But as shown below, the listed processes don't consume that much CPU, only around 7%.
Please advise on this case. Any help is highly appreciated.
Thank you.
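The arithmetic in the question can be written down directly. Note that the listing is filtered with "| ex 0.00%", so many small processes are hidden; the unaccounted share is spread across them. A trivial sketch:

```python
# Sketch of the interrupt vs. process-level arithmetic: in "41%/25%", the
# first number is total CPU and the second is time spent in interrupt
# context. Process-level load is the difference; whatever the listed
# processes do not account for is spread over processes hidden by the
# output filter.

def process_level(total_pct, interrupt_pct):
    """CPU spent at process level = total minus interrupt time."""
    return total_pct - interrupt_pct

def unaccounted(total_pct, interrupt_pct, listed_pcts):
    """Process-level CPU not explained by the processes shown in the output."""
    return process_level(total_pct, interrupt_pct) - sum(listed_pcts)
```

With total 41% and interrupts 25%, process level is 16%; if the visible rows sum to about 7%, roughly 9% is hidden below the filter rather than truly missing.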
3845-GW#show process cpu sort | ex 0.00% 0.00% 0.00%
CPU utilization for five seconds: 41%/25%; one minute: 46%; five minutes: 47%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
382 6619708 1473171 4493 1.59% 1.81% 1.92% 0 CCSIP_SPI_CONTRO
141 4228940 10181955 415 1.35% 1.51% 1.57% 0 IP Input
65 2450824 163102 15026 1.19% 1.16% 1.17% 0 Per-Second Jobs
370 2702292 3709512 728 0.87% 0.88% 0.88% 0 VOIP_RTCP
224 321680 245640 1309 0.47% 0.49% 0.50% 0 AFW_application_
112 93940 18093506 5 0.39% 0.31% 0.32% 0 Ethernet Msec Ti
384 1058280 1553567 681 0.23% 0.28% 0.30% 0 CCSIP_UDP_SOCKET
2 18148 32905 551 0.07% 0.03% 0.02% 0 Load Meter
137 35644 4657843 7 0.07% 0.04% 0.05% 0 IPAM Manager
189 206392 267959 770 0.07% 0.05% 0.07% 0 TCP Protocols
30 30792 198554 155 0.07% 0.01% 0.00% 0 ARP Input
368 145456 176151 825 0.07% 0.04% 0.05% 0 CC-API_VCM
28 9628 32759 293 0.00% 0.01% 0.00% 0 Environmental mo
48 221352 37922 5837 0.00% 0.11% 0.11% 0 Net Background
63 16728 32924 508 0.00% 0.01% 0.00% 0 Compute load avg
64 72080 2781 25918 0.00% 0.01% 0.00% 0 Per-minute Jobs
6 371644 29792 12474 0.00% 0.14% 0.12% 0 Check heaps
176 12216 240288 50 0.00% 0.01% 0.00% 0 CEF: IPv4 proces
284 36416 4929826 7 0.00% 0.02% 0.01% 0 MMON MENG
307 12168 806151 15 0.00% 0.01% 0.00% 0 Atheros LED Ctro
335 35300 19755 1786 0.00% 3.16% 1.00% 708 Virtual Exec
3845-GW#sh int g0/0
GigabitEthernet0/0 is up, line protocol is up
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 1Gbps, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/2/56803 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1551000 bits/sec, 5751 packets/sec
5 minute output rate 4207000 bits/sec, 7643 packets/sec
925128804 packets input, 939078510 bytes, 0 no buffer
Received 62732 broadcasts (0 IP multicasts)
0 runts, 0 giants, 2 throttles
2 input errors, 0 CRC, 0 frame, 2 overrun, 0 ignored
0 watchdog, 3763438515 multicast, 0 pause input
1472816545 packets output, 3214770103 bytes, 0 underruns
0 output errors, 2067720191 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 2281155551 late collision, 0 deferred
2 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
3845-GW#sh int g0/1
GigabitEthernet0/1 is up, line protocol is up
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 1Gbps, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/30335 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1684000 bits/sec, 7697 packets/sec
5 minute output rate 3372000 bits/sec, 5632 packets/sec
1484558664 packets input, 2383177786 bytes, 0 no buffer
Received 208998 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
2 input errors, 0 CRC, 0 frame, 2 overrun, 0 ignored
0 watchdog, 3060386282 multicast, 0 pause input
903478941 packets output, 2814588854 bytes, 0 underruns
0 output errors, 2910776303 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 4157448025 late collision, 0 deferred
2 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
Has this been something that just recently started happening, or have you had this issue for a while? Have you installed any new programs recently?
You may want to download Glary Utilities, which is free software (they will ask you if you want to go Pro, just say no; the free version works very well). There is a module for startup management. You can go in and disable things that start with the computer. I would advise unchecking Adobe, Java, QuickTime, printers, etc. -- anything that doesn't REALLY need to start with the computer. The nice thing with Glary is that you can restart the computer, and if you find that you need one of the programs to start with Windows, you can go back in and enable it again.
The Celeron 925 processor in your computer is a decent entry-level processor, but if there are too many programs running in the background, it can bog down quickly. I would also recommend downloading and running Malwarebytes Anti-Malware, to be sure that there is nothing malicious running in the background.
Qosmio X875 i7-3630QM, 32GB RAM, OCZ SSD Qosmio X505 i7-920XM, PM55, 16GB RAM, OCZ SSD
Satellite Pro L350 T9900, GM45, 8GB RAM , Intel 320 SSD (my baby) Satellite L655 i7-620M, HM55, 8GB RAM, Intel 710 SSD (travel system) -
High CPU usage in cisco 7613 with rsp720-3cxl
Hi everybody,
Our Cisco 7613 carries about 4.5 Gbps of IP traffic (Tx/Rx) in total, and we run OSPF with another Cisco cloud for routing. I have listed some of our router show outputs below. What is your opinion on our high CPU usage? Is it in the normal range for the listed cards and modules? How can I tune the RSP720 and the SIP-200/400/600 for better performance?
Also, why is our interrupt rate so high? And one more thing: the sum of the 5Sec values in the individual rows does not equal the five-second CPU utilization of 50%.
show proc cpu sor
CPU utilization for five seconds: 50%/46%; one minute: 54%; five minutes: 59%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
8 196795220 12640741 15568 1.51% 0.38% 0.26% 0 Check heaps
224 1048610528 4169501364 0 1.19% 1.45% 1.44% 0 IP Input
13 374006320 3155162661 0 0.23% 0.26% 0.24% 0 ARP Input
217 119862004 985030884 121 0.15% 0.32% 0.25% 0 ADJ resolve proc
185 537716 1825736183 0 0.07% 0.03% 0.02% 0 ACE Tunnel Task
260 1550992 2983272818 0 0.07% 0.13% 0.15% 0 Ethernet Msec Ti
305 38186336 58050485 657 0.07% 0.02% 0.00% 0 XDR mcast
34 67208 11707798 5 0.07% 0.00% 0.00% 0 IPC Loadometer
27 232776 57160812 4 0.07% 0.01% 0.00% 0 IPC Periodic Tim
325 17539200 92894502 188 0.07% 0.15% 0.15% 0 CEF: IPv4 proces
195 7406636 43782487 169 0.07% 0.00% 0.00% 0 esw_vlan_stat_pr
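On the question of why the per-process 5Sec values do not add up to 50%: in IOS the header line reports two numbers, total CPU and the portion spent at interrupt level, and the per-process rows only account for the process-level remainder. As a rough annotated reading of the output above (my interpretation, not an official breakdown):

```
CPU utilization for five seconds: 50%/46%
!                          total --^   ^-- spent in interrupt context (fast/CEF-switched traffic)
! process-level share = 50% - 46% = 4%, which is roughly what the per-process rows sum to
```

A high interrupt percentage like 46% usually means the CPU is busy switching traffic in software rather than running any one process.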
show ip route summ
IP routing table name is default (0x0)
IP routing table maximum-paths is 32
Route Source Networks Subnets Replicates Overhead Memory (bytes)
static 1 120 0 7620 20812
connected 0 313 0 18860 53836
ospf 98 17 4892 0 589020 863984
Intra-area: 89 Inter-area: 383 External-1: 0 External-2: 0
NSSA External-1: 0 NSSA External-2: 4437
bgp 12880 0 1 0 60 172
External: 1 Internal: 0 Local: 0
ospf 410 0 269 0 16220 47344
Intra-area: 1 Inter-area: 0 External-1: 0 External-2: 268
NSSA External-1: 0 NSSA External-2: 0
internal 137 260544
Total 155 5595 0 631780 1246692
sh module
Mod Ports Card Type Model Serial No.
1 0 4-subslot SPA Interface Processor-200 7600-SIP-200
2 0 4-subslot SPA Interface Processor-400 7600-SIP-400
3 24 CEF720 24 port 1000mb SFP WS-X6724-SFP
6 1 1-subslot SPA Interface Processor-600 7600-SIP-600
7 2 Route Switch Processor 720 (Active) RSP720-3CXL-GE
8 2 Route Switch Processor 720 (Cold) RSP720-3CXL-GE
show ver
System image file is "bootdisk:c7600rsp72043-adventerprisek9-mz.122-33.SRE2.bin"
1 SIP-200 controller .
1 SIP-400 controller (1 Channelized OC3/STM-1).
1 SIP-600 controller (1 TenGigabitEthernet).
2 Virtual Ethernet interfaces
28 Gigabit Ethernet interfaces
1 Ten Gigabit Ethernet interface
1 Channelized STM-1 port
show int vlan 1
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 2d23h
Input queue: 0/75/2886/1830 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 2380531000 bits/sec, 287383 packets/sec
5 minute output rate 422133000 bits/sec, 254113 packets/sec
L2 Switched: ucast: 1200869468 pkt, 101172643240 bytes - mcast: 253599 pkt, 78873415 bytes
L3 in Switched: ucast: 60947040633 pkt, 68919665115039 bytes - mcast: 0 pkt, 0 bytes mcast
L3 out Switched: ucast: 52594517004 pkt, 9869168832783 bytes mcast: 0 pkt, 0 bytes
62147839148 packets input, 69016175499764 bytes, 0 no buffer
Received 257634 broadcasts (0 IP multicasts)
0 runts, 0 giants, 15 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
53647248858 packets output, 10292998021217 bytes, 0 underruns
0 output errors, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
Thank you for your hints and reply.
These are our show ibc outputs, taken at a 1-minute interval:
Interface information:
Interface IBC0/0
5 minute rx rate 20045000 bits/sec, 30183 packets/sec
5 minute tx rate 47394000 bits/sec, 60212 packets/sec
19879272237 packets input, 4006174536193 bytes
19835355282 broadcasts received
19808585787 packets output, 3981305571968 bytes
90548 broadcasts sent
0 Bridge Packet loopback drops
19756362091 Packets CEF Switched, 1320184 Packets Fast Switched
0 Packets SLB Switched, 0 Packets CWAN Switched
Label switched pkts dropped: 0 Pkts dropped during dma: 339549
Invalid pkts dropped: 0 Pkts dropped(not cwan consumed): 0
IPSEC pkts dropped: 635184
Xconnect pkts processed: 0, dropped: 0
Xconnect pkt reflection drops: 0
Total paks copied for process level 0
Total short paks sent in route cache 2605317676
Total throttle drops 265338 Input queue drops 5831090
total spd packets classified (120217214 low, 174503 medium, 3073 high)
total spd packets dropped (339549 low, 0 medium, 0 high)
spd prio pkts allowed in due to selective throttling (0 med, 0 high)
IBC resets = 1; last at 23:52:49.004 Sat Jan 19 2013
Driver Level Counters: (Cumulative, Zeroed only at Reset)
Frames Bytes
Rx(0) 26537712 3421085217
Rx(1) 3449063135 2838813650
Tx(0) 3390340306 2016620276
Input Drop Frame Count
Rx0 = 0 Rx1 = 2488435
Per Queue Receive Errors:
FRME OFLW BUFE NOENP DISCRD DISABLE BADCOUNT
Rx0 0 0 0 0 0 0 0
Rx1 0 0 0 3633 0 0 0
Tx Errors/State:
One Collision Error = 0 More Collisions = 0
No Encap Error = 0 Deferred Error = 0
Loss Carrier Error = 0 Late Collision Error = 0
Excessive Collisions = 0 Buffer Error = 0
Tx Freeze Count = 0 Tx Intrpt Serv timeout= 1
Tx Flow State = FLOW_ON
Tx Flow Off Count = 0 Tx Flow On Count = 0
Counters collected at Idb:
Is input throttled = 0 Throttle Count = 0
Rx Resource Errors = 0 Input Drops = 2488435
Input Errors = 194243
Output Drops = 0 Giants/Runts = 0/0
Dma Mem Error = 0 Input Overrun = 0
Hash match table for multicast (in use 0, maximum 64 entries):
show ibc
Interface information:
Interface IBC0/0
5 minute rx rate 20194000 bits/sec, 30412 packets/sec
5 minute tx rate 47753000 bits/sec, 60663 packets/sec
19891125514 packets input, 4007158118761 bytes
19847185365 broadcasts received
19820407164 packets output, 3982279276274 bytes
90576 broadcasts sent
0 Bridge Packet loopback drops
19768178233 Packets CEF Switched, 1321008 Packets Fast Switched
0 Packets SLB Switched, 0 Packets CWAN Switched
Label switched pkts dropped: 0 Pkts dropped during dma: 339549
Invalid pkts dropped: 0 Pkts dropped(not cwan consumed): 0
IPSEC pkts dropped: 635574
Xconnect pkts processed: 0, dropped: 0
Xconnect pkt reflection drops: 0
Total paks copied for process level 0
Total short paks sent in route cache 2606549061
Total throttle drops 265338 Input queue drops 5831090
total spd packets classified (120252754 low, 174531 medium, 3074 high)
total spd packets dropped (339549 low, 0 medium, 0 high)
spd prio pkts allowed in due to selective throttling (0 med, 0 high)
IBC resets = 1; last at 23:52:49.004 Sat Jan 19 2013
Driver Level Counters: (Cumulative, Zeroed only at Reset)
Frames Bytes
Rx(0) 26550723 3422835145
Rx(1) 3461063605 176652699
Tx(0) 3402319442 3368513724
Input Drop Frame Count
Rx0 = 0 Rx1 = 2490155
Per Queue Receive Errors:
FRME OFLW BUFE NOENP DISCRD DISABLE BADCOUNT
Rx0 0 0 0 0 0 0 0
Rx1 0 0 0 3633 0 0 0
Tx Errors/State:
One Collision Error = 0 More Collisions = 0
No Encap Error = 0 Deferred Error = 0
Loss Carrier Error = 0 Late Collision Error = 0
Excessive Collisions = 0 Buffer Error = 0
Tx Freeze Count = 0 Tx Intrpt Serv timeout= 1
Tx Flow State = FLOW_ON
Tx Flow Off Count = 0 Tx Flow On Count = 0
Counters collected at Idb:
Is input throttled = 0 Throttle Count = 0
Rx Resource Errors = 0 Input Drops = 2490155
Input Errors = 194358
Output Drops = 0 Giants/Runts = 0/0
Dma Mem Error = 0 Input Overrun = 0
Hash match table for multicast (in use 0, maximum 64 entries):
And sorry, one more question: what is your take on the sum of the 5Sec values in the individual rows not equaling the five-second CPU utilization of 50%? -
Process Snmp (PDU Dispatcher) causes High CPU value
Hello,
I have a Catalyst 6000 switch. I noticed that after the execution of the inventory job in LMS, once the SNMP queries reached the equipment, the CPU utilization hit 98-99% due to the process PDU DISPATCHER (what is it?).
I also have a script running on the switch to watch the different logs. For example, when the CPU is high, the message below appears:
%SNMP-3-INPUT_QFULL_ERR: Packet dropped due to input
%SNMP-3-INPUT_QFULL_ERR: Packet dropped due to input
%SNMP-3-INPUT_QFULL_ERR: Packet dropped due to input
Could you give me any idea or help?
Thanks in advance.
This message is normally generated when too many SNMP requests overrun the buffer allocated for such requests. There are a number of things that can cause this, as well as a number of known issues that can be worked around using SNMP views. Without reviewing show logging and the other information available in a show tech, it is hard to say what the best method to resolve this is. You should probably open a TAC SR to troubleshoot the issue further...
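As a hypothetical illustration of the SNMP-view workaround mentioned above (the view name, community string, and excluded subtree here are placeholders; which subtree to exclude depends on which queries are overloading the agent):

```
! Define a view that excludes an expensive MIB subtree (ip.21 = ipRouteTable)
snmp-server view CUTDOWN iso included
snmp-server view CUTDOWN ip.21 excluded
! Attach the view to the polling community so the NMS queries skip that subtree
snmp-server community public view CUTDOWN ro
```

Excluding the route table is a common example because walking it on a box with a large routing table can keep the SNMP engine busy for a long time.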
Sent from Cisco Technical Support iPad App -
Hi,
We have a Cisco 4507R-E and we see high CPU even at night, when there is not much traffic on the switch.
Here are some of the CPU command outputs.
sh proc cpu sort
69 52873350 108228799 488 13.35% 10.93% 10.60% 0 Cat4k Mgmt LoPri
68 62281851 265686819 234 7.19% 6.88% 7.03% 0 Cat4k Mgmt HiPri
sh platform health
K5L3Unicast Adj Tabl 2.00 14.56 15 7 100 500 15 15 8 607:36
sh platform cpu packet stat
Event Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Sa Miss 1618486 8 1 0 0
Input Acl Fwd 1 0 0 0 0
Packets Dropped In Processing by Priority
Priority Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Medium 1618486 8 1 0 0
High 1 0 0 0 0
Packets Dropped In Processing by Reason
Reason Total 5 sec avg 1 min avg 5 min avg 1 hour avg
STPDrop 3044 0 0 0 0
Tx Mode Drop 1615443 8 1 0 0
Total packet queues 64
Packets Received by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
Esmp 169650262 97 92 94 87
Input ACL fwd(snooping) 3438 0 0 0 0
Host Learning 1618477 9 1 0 0
L2 Control 8684569 5 3 2 0
Ttl Expired 12559 0 0 0 0
Adj SameIf Fail 23 0 0 0 0
Bfd 1614 0 0 0 0
L2 router to CPU, 7 29932895 12 7 10 11
L3 Glean, 7 65336474 20 20 25 23
L3 Receive, 7 3202292 2 3 1 0
Packets Dropped by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
L3 Glean, 7 84216 0 0 0 0
I cannot find anything about the process K5L3Unicast Adj Tabl on the internet.
Hi
The platform process "K5L3Unicast Adj Tabl" that causes the high CPU usage is active when a new MAC address has been learned and the adjacency table is rewritten. This takes place when the switch receives an unknown source MAC address: the frame is forwarded to the CPU for MAC address learning.
It is possible that a device connected to a switch port is sending frames with lots of random source MAC addresses, massively forcing the switch to spend all of its CPU capacity just processing the new MAC addresses.
So the next step is to check the interface statistics, looking for the highest traffic rate. Use the link below to troubleshoot this issue and find the top-offender interface.
http://www.cisco.com/en/US/products/hw/switches/ps663/products_tech_note09186a00804cef15.shtml#trouble
Hope this helps.
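As a rough sketch of one way to chase this down on a Catalyst 4500 (command availability varies by IOS release, and the interface name below is a placeholder):

```
! Capture a sample of CPU-bound packets into a buffer, then inspect it to see
! which source MACs and ports are generating the Host Learning hits
debug platform packet all receive buffer
show platform cpu packet buffered
! Check per-port MAC counts to spot a port learning an abnormal number of addresses
show mac address-table count
show mac address-table interface GigabitEthernet1/1
```

Remember to turn the debug off afterwards with undebug all; even buffered platform debugs add load on an already busy CPU.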