Could high "Total Output Drops" on one interface on a 3560G, be caused by faulty hardware on another interface?

Hi All,
I have been trying to diagnose an issue we have been having with packet loss on video calls (which I think we may now have resolved, as the problem lay elsewhere). In the process we have trialled some equipment from PathView, and this seems to have created a new problem.
We have a standalone 3560G switch which connects into a provider's 3750G as part of an MPLS network. There is a single uplink from the 3560 to the 3750 (at 1 Gbps), and whilst I can manage the 3560, I have no access to the provider's switch. Our 3560 has a fairly vanilla config on it with no QoS enabled.
There are only a few ports used on the 3560, mainly for Cisco VCS (Video Conferencing Servers) and a PathView traffic analysis device. The VCS devices are used to funnel videoconferencing traffic across the MPLS network into another institution's network. The PathView device can send traffic bursts (albeit relatively small compared with the bandwidth available) across the same route as the VC traffic to an opposing device; however, I have also disabled all of these paths for the moment.
I can run multiple VC calls which utilise the VCS devices, so traffic is routing into the relevant organisations and everything is good. In fact, I have 5 x 2 Mbps calls in progress now and there are 0 (or very, very few) errors.
However, I have actually shut down the port (Gi0/3) connected to the PathView device for the moment. If I re-enable it, I start to see a lot of errors on the VC calls, and the Total Output Drops on the UPLINK interface (Gi0/23) starts rising rapidly. As soon as I shut down the PathView port again (Gi0/3), the errors stop and all returns to normal.
I have read that issues on the output queue are often attributed to a congested network/interface, but I don't believe that is the case in this instance. My 5 VC calls would only come to 10 Mbps, which is way short of the 1000 Mbps available. Even the PathView device only issues bursts up to 2 Mbps, and with the paths actually disabled even this shouldn't be happening, so only a small amount of management traffic should be flowing. Still, as soon as I enable the port, problems start.
So, is it possible that the switch port, the cable, or the PathView device itself is actually faulty and causing such errors? Has anyone seen anything like this?
Cheers
Chris

"As far as I know, such drops shouldn't be caused by faulty hardware, but if the hardware is really faulty, you would need to involve TAC."
Ok, thanks.
"BTW, all the other interfaces, which have the low bandwidth rates you describe, are physically running at low bandwidth settings on the interface, e.g. 10 Mbps?  If not, you can have short transient micro-bursts which can cause drops.  This can happen even when average bandwidth utilization is low.  (NB: if these other ports' average utilization is so low, if not already doing so, you could run the ports at 10 Mbps too.)"
No. All ports on the switch connect to devices with 1 Gb capable interfaces. They have been left to auto-negotiate and have negotiated 1000/full. The bandwidth described is more with regard to the actual data throughput of a call. Technically, the VCS devices are licensed to handle 50 simultaneous calls of up to 4 Mbps, so could potentially require a bandwidth of 200 Mbps, although it is unlikely that we will see this amount of traffic.
"Also, even if you have physically low bandwidth ingress, with a high bandwidth egress, and even if the egress's bandwidth is more than the aggregate of all the ingress, you can still have drops caused by concurrent arrivals."
In general, the ingress and the egress should be similar. Think of this as a stub network - one path in and out (via Gi0/23). The VCS act as a kind of proxy/router for video traffic, simply terminating inbound legs and generating a next-hop outbound leg. The traffic coming into the VCS should be the same as the traffic going out.
There will of course be certain management traffic, but this will be relatively low volume, and of course the PathView traffic analyser can generate a burst of UDP packets to simulate voice traffic.
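To illustrate the micro-burst and concurrent-arrival point raised above, here is a minimal, hypothetical sketch (the buffer depth and drain rate are illustrative assumptions, not 3560 datasheet figures) of how a FIFO egress queue tail-drops under bursty arrivals even when the average rate is far below line rate:

```python
# Hypothetical sketch: tail drops on a FIFO egress queue when low *average*
# ingress arrives in short micro-bursts. Numbers are illustrative only.
from collections import deque

QUEUE_LIMIT = 64          # assumed egress buffer depth, in frames
DRAIN_PER_TICK = 2        # frames the egress port can serialize per tick

def simulate(arrivals_per_tick):
    q, drops = deque(), 0
    for burst in arrivals_per_tick:
        for _ in range(burst):
            if len(q) >= QUEUE_LIMIT:
                drops += 1          # tail drop: queue full
            else:
                q.append(1)
        for _ in range(min(DRAIN_PER_TICK, len(q))):
            q.popleft()
    return drops

# Same average rate (~1 frame/tick), very different drop behaviour:
smooth = [1] * 1000                      # steady trickle
bursty = ([200] + [0] * 199) * 5         # micro-bursts with long idle gaps
print(simulate(smooth), simulate(bursty))   # prints: 0 680
```

The steady trickle never fills the queue; the bursts overflow it every time, despite identical average utilization.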
"Some other "gotchas" include, you mention you don't have QoS configured, but you're sure QoS is disabled too?"
Yes.
switch#show mls qos
QoS is disabled
QoS ip packet dscp rewrite is enabled
I can't see a lot of point enabling QoS on this particular switch. Pretty much all of the traffic passing through it will be QoS-tagged at the same level, so it would ALL be prioritised.
Indeed, running a test overnight with these multiple calls live and the PathView port shut down resulted in 0 Total Output Drops. Each leg did suffer a handful of dropped packets end-to-end, but I think I can live with 100 packets dropped in 10 million during a 12-hour period (and this, I suspect, will be somewhere else on the network).
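For reference, the end-to-end loss quoted above works out as:

```python
# Sanity check of the figure quoted above: ~100 drops in 10 million
# packets over the 12-hour overnight test.
drops, total = 100, 10_000_000
loss_pct = drops / total * 100
print(f"{loss_pct:.4f}% loss")   # prints: 0.0010% loss
```

That is well inside what most real-time video tolerates.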
"Lastly, Cisco has documented, at least for the 3750X, that uplink ports have the same buffer RAM resources as 24 copper edge ports.  Assuming the earlier series are similar, there might be benefit to moving your uplink, now on g0/23, to an uplink port (if your 3560G has them)."
Unfortunately, no can do. We are limited to the built-in ports on the switch as we have no SFP modules installed.
Apologies about the formatting - this is yet another thing that has been broken in these new forums. It looks a lot better in the Reply window than it does in this normal view.

Similar Messages

  • 3750ME Total output drops, OutDiscards

    Hi,
    I am testing a 3750ME switch as L2 device with iperf and Agilent router tester. I have a physical loop on 2 fastethernet ports - one port is access in vlan A and the other is access in vlan B. On the switch uplink both vlans are allowed. The test traffic comes from the uplink via vlan A, loops to vlan B via the physical loop and then goes back via vlan B through the uplink.
    I have tested a lot of Cisco switches in this way and had no issues until now. Now I have 18 OutDiscards (Total output drops) on one of the fastethernet interfaces, connected via the physical loop.
    The IOS is 12.2(44)SE1. I've read the release notes for this IOS, at
    http://www.cisco.com/en/US/docs/switches/metro/catalyst3750m/software/release/12.2_44_se/release/notes/OL14631.html
    where it says:
    CSCsj53001
    The Total output drops field in the show interfaces privileged EXEC command output now displays accurate ASIC drops.
    so the counters are correct.
    I generate less than 5 Mbps of duplex traffic, so the switch should not be overloaded.
    Do you have any idea why I get these 18 output errors?
    Regards,
    Mladen

    Please run a more definitive test - clear the counters and generate much more traffic, like 100 Mbps (full port speed, if you're not using the uplinks on the 3750ME).
    Also, be sure the port is in "switchport" mode, because there can be an issue with MAC addresses when the switch is routing.
    Is your test setup pure L2, without L3?

  • Increasing Total Output Drops number

    I have an autonomous Cisco AP1242 running on channel 11 (best channel avail) with only one client associated.
    Signal Strength and Channel Utilization look good.
    By design this client is constantly sending UDP/multicast packets, so I had to disable IGMP snooping on the AP. However, I have noticed data dropouts and have been able to correlate them by running the command:
    show interface dot11radio 0
    Every time I run the above command the Total Output Drops count increases:
    Dot11Radio0 is up, line protocol is up
      Hardware is 802.11G Radio, address is 001c.b0eb.eb70 (bia 001c.b0eb.eb70)
      MTU 1500 bytes, BW 54000 Kbit, DLY 1000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input 00:00:00, output 00:00:00, output hang never
      Last clearing of "show interface" counters 00:37:46
      Input queue: 0/1127/0/0 (size/max/drops/flushes); Total output drops: 3178
      Queueing strategy: fifo
      Output queue: 0/30 (size/max)
      5 minute input rate 43000 bits/sec, 14 packets/sec
      5 minute output rate 92000 bits/sec, 17 packets/sec
         29799 packets input, 12551639 bytes, 0 no buffer
         Received 17376 broadcasts, 0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 input packets with dribble condition detected
         41308 packets output, 25121942 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier
         0 output buffer failures, 0 output buffers swapped out
    I cleared the statistics and ran the command after a few minutes.
    Any ideas what could be causing packets to be dropped?
    QOS is disabled on the AP.
    Thanks

    Hi,
    There is only one wireless client.
    Just took a 5 min Wireshark reading and it giving the following:
    Packets: 2286
    Avg. packets/sec: 7.729
    Avg packet size: 671.527 bytes
    Avg bytes/sec: 5190.457
    I am new to this. Is the above considered high volume for one client?
    I just compared wired vs wireless captures... I am only losing packets on the wireless medium.
    When you say that the radio may not have enough buffer... are you referring to the wireless adapter or the Access Point?
    Thanks

  • OID value for Total output drops

    Hi, we have a Cisco C7200P router at work running IOS 12.4(12.2r)T, and we monitor it using Zenoss 3.1. We want to be able to capture the total output drops for a Gigabit Ethernet interface. I created a custom monitoring template and I added the following data source:
    Name: cieIfOutputQueueDrops
    OID: 1.3.6.1.4.1.9.9.276.1.1.1.1.11
    The total output drops as viewed via the CLI are as follows:
    Input queue: 0/75/1335749/399902 (size/max/drops/flushes); Total output drops: 53882894
    However the graph on Zenoss reports a completely different value of ~360M. Here is the output of snmpwalk:
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.1 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.2 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.3 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.4 = Counter32: 363270064
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.5 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.6 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.7 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.12 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.13 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.14 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.15 = Counter32: 653008
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.26 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.125 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.139 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.140 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.194 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.196 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.254 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.288 = Counter32: 0
    The value it returns is incorrect. I would appreciate some assistance.

    Did you try using ifOutDiscards (.1.3.6.1.2.1.2.2.1.19)? These are counted as output drops, as shown in the show interfaces command.
    It shows the number of outbound packets which were chosen to be discarded even though no errors had been detected to prevent their being transmitted. One possible reason for discarding such a packet could be to free up buffer space.
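If you graph ifOutDiscards (or any Counter32 OID) yourself, the poller has to take the difference between successive samples and allow for 32-bit wraparound, or the graph shows nonsense values. A minimal sketch (the sample values below are illustrative only, not from this router):

```python
# Hypothetical sketch: delta between two polls of a Counter32 such as
# ifOutDiscards (.1.3.6.1.2.1.2.2.1.19), handling 32-bit wraparound.
COUNTER32_MAX = 2**32

def counter_delta(prev, curr):
    """Discards counted between two polls of a Counter32."""
    if curr >= prev:
        return curr - prev
    return COUNTER32_MAX - prev + curr   # counter wrapped since last poll

print(counter_delta(363_270_064, 363_271_000))   # prints: 936
print(counter_delta(4_294_967_000, 500))         # prints: 796 (wrapped)
```

A mismatch between the CLI total and the graphed value can also simply mean the two counters were last cleared at different times.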
    For more details on interface couter please check following document :
    SNMP Counters: Frequently Asked Questions
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**

  • Total output drops & dot1dBridgeEventsV2

    I am seeing a lot of "Total output drops:" on the LAN/WAN router. Does anyone have any documents that explain the cause of "Total output drops" and what it is?
    Also, I am getting a lot of traps in the LAN, but I can't find documents that explain the event "dot1dBridgeEventsV2". Can you guys point me to a document that explains these events?
    Thanks

    Total output drops is the number of packets in the output queue that have been dropped because of a full queue. Check out the following link for troubleshooting input queue drops and output queue drops :
    http://www.cisco.com/warp/public/63/queue_drops.html

  • Total output drops

    Hi people
    I have a problem with dropped packets on many interfaces of a WS-C6506-E (R7000) (s72033-ipservices_wan-mz.122-33.SXJ3.bin).
    In the attached picture you can see this;
    most of the interfaces have servers connected.
    What could be the cause of this, and what troubleshooting can I do to find the source of the problem?
    Thanks

    Thanks for your response aninchat
    Yes, these drops increment in bursts and then stay stable for a while, just as you say 
    Below are the outputs of show counters, show queueing, show module, and show interface vlan 30
    #show counters interface g1/21
    64 bit counters:
     0.                      rxHCTotalPkts = 361202
     1.                      txHCTotalPkts = 1411690021
     2.                    rxHCUnicastPkts = 361200
     3.                    txHCUnicastPkts = 410142854
     4.                  rxHCMulticastPkts = 0
     5.                  txHCMulticastPkts = 900686865
     6.                  rxHCBroadcastPkts = 2
     7.                  txHCBroadcastPkts = 100860302
     8.                         rxHCOctets = 33528526
     9.                         txHCOctets = 374407450833
    10.                 rxTxHCPkts64Octets = 678400385
    11.            rxTxHCPkts65to127Octets = 352093183
    12.           rxTxHCPkts128to255Octets = 157537314
    13.           rxTxHCPkts256to511Octets = 35132219
    14.          rxTxHCpkts512to1023Octets = 21865647
    15.         rxTxHCpkts1024to1518Octets = 167022475
    16.                    txHCTrunkFrames = 0
    17.                    rxHCTrunkFrames = 0
    18.                     rxHCDropEvents = 0
    32 bit counters:
     0.                   rxCRCAlignErrors = 0
     1.                   rxUndersizedPkts = 0
     2.                    rxOversizedPkts = 0
     3.                     rxFragmentPkts = 0
     4.                          rxJabbers = 0
     5.                       txCollisions = 0
     6.                         ifInErrors = 0
     7.                        ifOutErrors = 0
     8.                       ifInDiscards = 0
     9.                  ifInUnknownProtos = 0
    10.                      ifOutDiscards = 79993129
    11.            txDelayExceededDiscards = 0
    12.                              txCRC = 0
    13.                         linkChange = 1
    14.                   wrongEncapFrames = 0
    All Port Counters
     1.                          InPackets = 361204
     2.                           InOctets = 33528702
     3.                        InUcastPkts = 361202
     4.                        InMcastPkts = 0
     5.                        InBcastPkts = 2
     6.                         OutPackets = 1411691426
     7.                          OutOctets = 374408006548
     8.                       OutUcastPkts = 410143207
     9.                       OutMcastPkts = 900687369
    10.                       OutBcastPkts = 100860850
    11.                           AlignErr = 0
    12.                             FCSErr = 0
    13.                            XmitErr = 0
    14.                             RcvErr = 0
    15.                          UnderSize = 0
    16.                          SingleCol = 0
    17.                           MultiCol = 0
    18.                            LateCol = 0
    19.                       ExcessiveCol = 0
    20.                       CarrierSense = 0
    21.                              Runts = 0
    22.                             Giants = 0
    23.                         InDiscards = 0
    24.                        OutDiscards = 79993129
    25.                           InErrors = 0
    26.                          OutErrors = 0
    27.                    InUnknownProtos = 0
    28.                              txCRC = 0
    29.                      TrunkFramesTx = 0
    30.                      TrunkFramesRx = 0
    31.                         WrongEncap = 0
    32.     Broadcast_suppression_discards = 0
    33.     Multicast_suppression_discards = 0
    34.       Unicast_suppression_discards = 0
    35.                 rxTxHCPkts64Octets = 678400875
    36.            rxTxHCPkts65to127Octets = 352093605
    37.           rxTxHCPkts128to255Octets = 157537435
    38.           rxTxHCPkts256to511Octets = 35132277
    39.          rxTxHCpkts512to1023Octets = 21865661
    40.         rxTxHCpkts1024to1518Octets = 167022777
    41.                         DropEvents = 0
    42.                     CRCAlignErrors = 0
    43.                     UndersizedPkts = 0
    44.                      OversizedPkts = 0
    45.                       FragmentPkts = 0
    46.                            Jabbers = 0
    47.                         Collisions = 0
    48.              DelayExceededDiscards = 0
    49.                        bpduOutlost = 0
    50.                        qos0Outlost = 79993129
    51.                        qos1Outlost = 0
    52.                        qos2Outlost = 0
    53.                        qos3Outlost = 0
    54.                        qos4Outlost = 0
    55.                        qos5Outlost = 0
    56.                        qos6Outlost = 0
    57.                        qos7Outlost = 0
    58.                        qos8Outlost = 0
    59.                        qos9Outlost = 0
    60.                       qos10Outlost = 0
    61.                       qos11Outlost = 0
    62.                       qos12Outlost = 0
    63.                       qos13Outlost = 0
    64.                       qos14Outlost = 0
    65.                       qos15Outlost = 0
    66.                       qos16Outlost = 0
    67.                       qos17Outlost = 0
    68.                       qos18Outlost = 0
    69.                       qos19Outlost = 0
    70.                       qos20Outlost = 0
    71.                       qos21Outlost = 0
    72.                       qos22Outlost = 0
    73.                       qos23Outlost = 0
    74.                       qos24Outlost = 0
    75.                       qos25Outlost = 0
    76.                       qos26Outlost = 0
    77.                       qos27Outlost = 0
    78.                       qos28Outlost = 0
    79.                       qos29Outlost = 0
    80.                       qos30Outlost = 0
    81.                       qos31Outlost = 0
    82.                    bpduCbicOutlost = 0
    83.                    qos0CbicOutlost = 0
    84.                    qos1CbicOutlost = 0
    85.                    qos2CbicOutlost = 0
    86.                    qos3CbicOutlost = 0
    87.                         bpduInlost = 0
    88.                         qos0Inlost = 0
    89.                         qos1Inlost = 0
    90.                         qos2Inlost = 0
    91.                         qos3Inlost = 0
    92.                         qos4Inlost = 0
    93.                         qos5Inlost = 0
    94.                         qos6Inlost = 0
    95.                         qos7Inlost = 0
    96.                         qos8Inlost = 0
    97.                         qos9Inlost = 0
    98.                        qos10Inlost = 0
    99.                        qos11Inlost = 0
    100.                        qos12Inlost = 0
    101.                        qos13Inlost = 0
    102.                        qos14Inlost = 0
    103.                        qos15Inlost = 0
    104.                        qos16Inlost = 0
    105.                        qos17Inlost = 0
    106.                        qos18Inlost = 0
    107.                        qos19Inlost = 0
    108.                        qos20Inlost = 0
    109.                        qos21Inlost = 0
    110.                        qos22Inlost = 0
    111.                        qos23Inlost = 0
    112.                        qos24Inlost = 0
    113.                        qos25Inlost = 0
    114.                        qos26Inlost = 0
    115.                        qos27Inlost = 0
    116.                        qos28Inlost = 0
    117.                        qos29Inlost = 0
    118.                        qos30Inlost = 0
    119.                        qos31Inlost = 0
    120.                         pqueInlost = 0
    121.                           Overruns = 0
    122.                           maxIndex = 0
    show queueing interface g1/21
    Interface GigabitEthernet1/21 queueing strategy:  Weighted Round-Robin
      Port QoS is enabled
    Trust boundary disabled
      Port is untrusted
      Extend trust state: not trusted [COS = 0]
      Default COS is 0
        Queueing Mode In Tx direction: mode-cos
        Transmit queues [type = 1p3q8t]:
        Queue Id    Scheduling  Num of thresholds
           01         WRR                 08
           02         WRR                 08
           03         WRR                 08
           04         Priority            01
        WRR bandwidth ratios:  100[queue 1] 150[queue 2] 200[queue 3]
        queue-limit ratios:     50[queue 1]  20[queue 2]  15[queue 3]  15[Pri Queue]
        queue tail-drop-thresholds
        1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        queue random-detect-min-thresholds
          1    40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
          2    40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
          3    70[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
        queue random-detect-max-thresholds
          1    70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
          2    70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
          3    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        WRED disabled queues:
        queue thresh cos-map
        1     1      0
        1     2      1
        1     3
        1     4
        1     5
        1     6
        1     7
        1     8
        2     1      2
        2     2      3 4
        2     3
        2     4
        2     5
        2     6
        2     7
        2     8
        3     1      6 7
        3     2
        3     3
        3     4
        3     5
        3     6
        3     7
        3     8
        4     1      5
        Queueing Mode In Rx direction: mode-cos
        Receive queues [type = 1q2t]:
        Queue Id    Scheduling  Num of thresholds
           1         Standard            2
        queue tail-drop-thresholds
        1     100[1] 100[2]
        queue thresh cos-map
        1     1      0 1 2 3 4 5 6 7
        1     2
      Packets dropped on Transmit:
        BPDU packets:  0
        queue thresh             dropped  [cos-map]
        1     1                 5001508  [0 ]
        1     2                       0  [1 ]
        2     1                       0  [2 ]
        2     2                       0  [3 4 ]
        3     1                       0  [6 7 ]
        4     1                       0  [5 ]
      Packets dropped on Receive:
        BPDU packets:  0
        queue thresh              dropped  [cos-map]
        1     1                       0  [0 1 2 3 4 5 6 7 ]
    show module
    Mod Ports Card Type                              Model
      1   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      2   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      3   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      5    5  Supervisor Engine 720 10GE (Active)    VS-S720-10G
    interface Vlan30 (CORE 1)
     ip address x.x.x.x a.b.c.d secondary
     ip address x.x.x.x a.b.c.d secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x
     no ip redirects
     no ip proxy-arp
     glbp 30 ip x.x.x.x
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 priority 105
    interface Vlan30 (core 2)
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x
     no ip redirects
     no ip proxy-arp
     glbp 30 ip x.x.x.x
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
    end
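For reference, the 1p3q8t cos-map printed in the show queueing output above can be read as a simple lookup. The transmit drops fall in queue 1 / threshold 1, which is where CoS 0 maps - consistent with qos0Outlost being the only non-zero per-CoS drop counter (the two counters were last cleared at different times, so the absolute values differ):

```python
# Lookup built from the "queue thresh cos-map" table in the show queueing
# output above (1p3q8t: three WRR queues plus one strict-priority queue).
COS_MAP = {  # cos -> (tx queue, threshold)
    0: (1, 1), 1: (1, 2),
    2: (2, 1), 3: (2, 2), 4: (2, 2),
    6: (3, 1), 7: (3, 1),
    5: (4, 1),            # queue 4 is the strict-priority queue
}

# All transmit drops shown land in queue 1 / threshold 1, i.e. CoS 0
# (untrusted port, default CoS 0 -> all traffic in the best-effort queue).
print(COS_MAP[0])   # prints: (1, 1)
```

Since the port is untrusted with default CoS 0, everything competes for queue 1's 50% queue-limit share, which is where congestion shows up first.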

  • WAAS interface - Input queue: output drops

    I'm seeing total output drops increment every now and then. We are using a 3750E stacked switch configured for WCCP L2 forward and return. Anyone know why I'm seeing output drops on the WAAS-connected interface? The WAAS interfaces are set up as standby. The model is 7371...
    interface GigabitEthernet1/0/4
    description ****WAAS1 GIG 1/0****
    switchport access vlan 738
    mls qos trust dscp
    spanning-tree portfast
    end
    GigabitEthernet1/0/4 is up, line protocol is up (connected)
    Hardware is Gigabit Ethernet, address is 0022.be97.9804 (bia 0022.be97.9804)
    Description: ****WAAS1 GIG 1/0****
    MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
    Encapsulation ARPA, loopback not set
    Keepalive set (10 sec)
    Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
    input flow-control is off, output flow-control is unsupported
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input 00:00:03, output 00:00:00, output hang never
    Last clearing of "show interface" counters never
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 281
    Queueing strategy: fifo
    Output queue: 0/40 (size/max)
    5 minute input rate 5967000 bits/sec, 1691 packets/sec
    5 minute output rate 5785000 bits/sec, 1606 packets/sec
    9301822868 packets input, 3537902554734 bytes, 0 no buffer
    Received 179580 broadcasts (172889 multicasts)
    0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 172889 multicast, 0 pause input
    0 input packets with dribble condition detected
    7661948806 packets output, 2639805900461 bytes, 0 underruns
    0 output errors, 0 collisions, 5 interface resets
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 PAUSE output
    0 output buffer failures, 0 output buffers swapped out

    It looks like this could be related:
    CSCtf27580 Ethernet interface input queue wedge from broadcast/uniGRE traffic
    Is there any GRE traffic going through this AP?
    The workarounds are:
    Reboot the APs to bring them back up for the time being.
    OR
    go back to 6.0.188.0 code on the WLC.
    OR
    Route GRE traffic away from the APs.
    It appears that it definitely exists in your code:
    12.4(21a)JHA          12.4(21a)JA01          006.000(196.000)

  • DMVPN in Cisco 3945 output drop in tunnel interface

    I configured DMVPN on a Cisco 3945 and checked the tunnel interface. I found that I have output drops. How can I remove those output drops? I have already set the ip mtu to 1400.
    CORE-ROUTER#sh int tunnel 20
    Tunnel20 is up, line protocol is up
      Hardware is Tunnel
      Description: <Voice Tunneling to HO>
      Internet address is 172.15.X.X/X
      MTU 17878 bytes, BW 1024 Kbit/sec, DLY 50000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation TUNNEL, loopback not set
      Keepalive not set
      Tunnel source 10.15.X.X (GigabitEthernet0/1)
       Tunnel Subblocks:
          src-track:
             Tunnel20 source tracking subblock associated with GigabitEthernet0/1
              Set of tunnels with source GigabitEthernet0/1, 1 member (includes iterators), on interface <OK>
      Tunnel protocol/transport multi-GRE/IP
        Key 0x3EA, sequencing disabled
        Checksumming of packets disabled
      Tunnel TTL 255, Fast tunneling enabled
      Tunnel transport MTU 1438 bytes
      Tunnel transmit bandwidth 8000 (kbps)
      Tunnel receive bandwidth 8000 (kbps)
      Tunnel protection via IPSec (profile "tunnel_protection_profile_2")
      Last input 00:00:01, output never, output hang never
     --More--           Last clearing of "show interface" counters never
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 7487
      Queueing strategy: fifo
      Output queue: 0/0 (size/max)
      30 second input rate 0 bits/sec, 0 packets/sec
      30 second output rate 0 bits/sec, 0 packets/sec
         48007 packets input, 4315254 bytes, 0 no buffer
         Received 0 broadcasts (0 IP multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         42804 packets output, 4638561 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 output buffer failures, 0 output buffers swapped out
    interface Tunnel20
     description <Bayantel Voice tunneling>
     bandwidth 30720
     ip address 172.15.X.X 255.255.255.128
     no ip redirects
     ip mtu 1400
     no ip next-hop-self eigrp 20
     no ip split-horizon eigrp 20
     ip nhrp authentication 0r1x@IT
     ip nhrp map multicast dynamic
     ip nhrp network-id 1002
     ip nhrp holdtime 300
     ip tcp adjust-mss 1360
     tunnel source FastEthernet0/0/1
     tunnel mode gre multipoint
     tunnel key 1002
     tunnel protection ipsec profile tunnel_protection_profile_2 shared
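One way to see why ip mtu 1400 is paired with ip tcp adjust-mss 1360 in the config above: the MSS must leave room for the inner 20-byte IP and 20-byte TCP headers, while 1400 itself leaves headroom below a 1500-byte link MTU for the GRE-over-IPsec encapsulation overhead:

```python
# Arithmetic behind the tunnel config above: the TCP MSS is the tunnel's
# IP MTU minus the inner IP and TCP headers.
IP_HDR, TCP_HDR = 20, 20
tunnel_ip_mtu = 1400          # "ip mtu 1400" on Tunnel20
mss = tunnel_ip_mtu - IP_HDR - TCP_HDR
print(mss)   # prints: 1360, matching "ip tcp adjust-mss 1360"
```

Output drops on a tunnel interface can still occur for other reasons (e.g. the configured tunnel bandwidth being exceeded or crypto backpressure), so correct MTU/MSS alone does not guarantee a zero counter.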

    Hi,
    Thanks for the input. If the radio is sending out the packet but the client did not receive it, no output drop should be seen, since the packet was sent out, right?
    From my understanding, output drops relate to a congested interface: the outgoing interface cannot keep up with the rate packets are coming in and thus drops them. What I don't understand is that the input and output rates have not reached the limit yet. Also, the input queue is seeing packet drops even though the input queue is empty.
    Any idea?

  • High CPU Usage / Dropped Packets - Switch Blade WS-CBS3120X-S

    Hi all,
    I have a couple of 3120 blade switches working in an active-standby model (HSRP) on a new site deployment. There are roughly 20 other sites working on the same model without issues, but on this one we are seeing high CPU usage. The traffic going through the platform is 600 Mbps (at peak), and in this case we have 40% CPU usage. Traffic should be close to 3 Gbps. When we tried to send the whole traffic through the platform, the active switch began to drop packets on the majority of interfaces.
    When we analyze the CPU usage, a process called "HL3U bkgrd proce" always has the highest CPU use, but we do not know what it concerns. We do not know if it is caused by the PBRs configured; it should not matter. As I mentioned, there are other sites working fine that have always had the same number of PBRs.
    Could you guys help us? Any idea what is causing the high usage? Is there a special debug we could perform to diagnose the issue? Also, we have seen high interrupt CPU usage (9% in this case).
    Find attached the whole diagnosis outputs.
    Thanks for your assistance guys.
    Cheers,
    Juan Pablo
    bog-sib-INT-rtr-1#show processes cpu sorted 5sec
    CPU utilization for five seconds: 30%/9%; one minute: 25%; five minutes: 23%
    PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
    157   140004809   107071220       1307 14.24% 10.19%  9.01%   0 HL3U bkgrd proce
    119     6860957     1519183       4516  0.79%  0.59%  0.53%   0 hpm counter proc
    166     2511492      302802       8294  0.15%  0.15%  0.15%   0 HQM Stack Proces
    199     4182906    15255882        274  0.15%  0.21%  0.20%   0 IP Input        
    357      237531      782101        303  0.15%  0.03%  0.00%   0 IP SNMP         
    186         101         148        682  0.15%  0.09%  0.02%   1 Virtual Exec    
    242       63071     2330717         27  0.15%  0.02%  0.00%   0 CEF: IPv4 proces
      12      163754      620353        263  0.15%  0.01%  0.00%   0 ARP Input       
       9           0           2          0  0.00%  0.00%  0.00%   0 License Client N
       8          41        1827         22  0.00%  0.00%  0.00%   0 WATCH_AFS       
      11          50           4      12500  0.00%  0.00%  0.00%   0 Image License br
       7           0           2          0  0.00%  0.00%  0.00%   0 Timers          
    bog-sib-INT-rtr-1#sh ip cef summary
    IPv4 CEF is enabled for distributed and running
    VRF Default
    119 prefixes (119/0 fwd/non-fwd)
    Table id 0x0
    Database epoch:        2 (119 entries at this epoch)
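    For anyone reading that header line: "30%/9%" means 30% total CPU over five seconds, of which 9% was spent at interrupt level (traffic punted to the CPU for software switching); the rest is scheduled IOS processes such as HL3U bkgrd proce. A small Python sketch that splits the header (the line is copied from the output above):

```python
import re

header = "CPU utilization for five seconds: 30%/9%; one minute: 25%; five minutes: 23%"

# "total/interrupt": total 5-second CPU, and the share of it spent at
# interrupt level handling punted packets.
m = re.search(r"five seconds: (\d+)%/(\d+)%", header)
total, interrupt = map(int, m.groups())
process_level = total - interrupt

print(f"interrupt-level (punted traffic): {interrupt}%")
print(f"process-level (IOS processes):    {process_level}%")
```

    A sustained interrupt percentage is usually a sign of traffic being software-switched rather than handled in hardware, which fits the PBR suspicion above.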

    Hi Leolaohoo,
    I had not tried that one before!
    1). IOS version (It was recently updated)
    bog-sib-INT-rtr-1#sh ver
    Cisco IOS Software, CBS31X0 Software (CBS31X0-UNIVERSALK9-M), Version 12.2(58)SE1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2011 by Cisco Systems, Inc.
    Compiled Thu 05-May-11 04:08 by prod_rel_team
    ROM: Bootstrap program is CBS31X0 boot loader
    BOOTLDR: CBS31X0 Boot Loader (CBS31X0-HBOOT-M) Version 12.2(0.0.951)SE3, CISCO DEVELOPMENT TEST VERSION
    bog-sib-INT-rtr-1 uptime is 2 weeks, 3 days, 17 hours, 14 minutes
    System returned to ROM by power-on
    System restarted at 00:59:27 UTC Sat Jun 9 2012
    System image file is "flash:cbs31x0-universalk9-mz.122-58.SE1.bin"
    2). Which interface do you want to see? Do you want to see all of them? This switch has 16 interfaces that connect servers, plus one going to our client. Below is the state of each kind of interface:
    Interface to Client (Bearer)
    TenGigabitEthernet1/0/1 is up, line protocol is up (connected)
      Hardware is Ten Gigabit Ethernet, address is 001f.275d.d81b (bia 001f.275d.d81b)
      Description: BearerNContent_Aggregrate
      MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
         reliability 255/255, txload 10/255, rxload 14/255
      Encapsulation ARPA, loopback not set
      Keepalive not set
      Full-duplex, 10Gb/s, link type is auto, media type is 10GBase-LR
      input flow-control is off, output flow-control is unsupported
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input 00:00:00, output 2w3d, output hang never
      Last clearing of "show interface" counters 07:07:56
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
      Queueing strategy: fifo
      Output queue: 0/40 (size/max)
      5 minute input rate 562469000 bits/sec, 83641 packets/sec
      5 minute output rate 430500000 bits/sec, 73141 packets/sec
         2020563158 packets input, 1739897855828 bytes, 0 no buffer
         Received 13257 broadcasts (13257 multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 watchdog, 13257 multicast, 0 pause input
         0 input packets with dribble condition detected
         1745065310 packets output, 1347244137726 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier, 0 pause output
         0 output buffer failures, 0 output buffers swapped out
    Interface to Server
    GigabitEthernet1/0/8 is up, line protocol is up (connected)
      Hardware is Gigabit Ethernet, address is 001f.275d.d808 (bia 001f.275d.d808)
      Description: bog-15
      MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
         reliability 255/255, txload 15/255, rxload 12/255
      Encapsulation ARPA, loopback not set
      Keepalive set (10 sec)
      Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseX
      input flow-control is off, output flow-control is unsupported
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input never, output 00:00:17, output hang never
      Last clearing of "show interface" counters 07:09:12
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 19418
      Queueing strategy: fifo
      Output queue: 0/40 (size/max)
      5 minute input rate 47705000 bits/sec, 7155 packets/sec
      5 minute output rate 58897000 bits/sec, 8011 packets/sec
         178178750 packets input, 153802177226 bytes, 0 no buffer
         Received 4091 broadcasts (0 multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 watchdog, 0 multicast, 0 pause input
         0 input packets with dribble condition detected
         212233312 packets output, 206621942776 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier, 0 pause output
         0 output buffer failures, 0 output buffers swapped out
    Thanks for your help. I am losing my hair over this issue.
    Cheers,
    Juan P.
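    As a quick sanity check on the Gi1/0/8 output above: the tx/rxload fields are just the 5-minute rates scaled to 255ths of the configured BW, and the numbers shown are internally consistent. A short Python check (all values copied from that output):

```python
bw_bps = 1_000_000 * 1000          # "BW 1000000 Kbit/sec" on Gi1/0/8
txload, rxload = 15, 12            # "txload 15/255, rxload 12/255"

# load = round(rate / BW * 255), so each load unit is BW/255 bps.
tx_rate = 58_897_000               # "5 minute output rate" from the same output
rx_rate = 47_705_000               # "5 minute input rate"

assert round(tx_rate / bw_bps * 255) == txload
assert round(rx_rate / bw_bps * 255) == rxload
print("load counters agree with the 5-minute rates")
```

    So the interface is running around 5-6% of its gigabit capacity, which makes the 19418 output drops look like microbursting or a counter problem rather than sustained congestion.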

  • Output Drop by RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT

    Hello!
    How can I determine the reason for these output drops?
    >sh inter tenGigE 0/0/0/6              
    Fri Nov  2 15:26:05.358 MSK
    TenGigE0/0/0/6 is up, line protocol is up
      Interface state transitions: 11
      Hardware is TenGigE, address is 108c.cf1d.f326 (bia 108c.cf1d.f326)
      Layer 1 Transport Mode is LAN
      Description: To_XXX
      Internet address is 10.1.11.77/30
      MTU 9194 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)
         reliability 255/255, txload 2/255, rxload 5/255
      Encapsulation ARPA,
      Full-duplex, 10000Mb/s, LR, link type is force-up
      output flow control is off, input flow control is off
      loopback not set,
      ARP type ARPA, ARP timeout 04:00:00
      Last input 00:00:00, output 00:00:00
      Last clearing of "show interface" counters 50w1d
      30 second input rate 218575000 bits/sec, 41199 packets/sec
      30 second output rate 115545000 bits/sec, 30555 packets/sec
         481020016118 packets input, 287815762466192 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         Received 29 broadcast packets, 39255653 multicast packets
                  0 runts, 17 giants, 0 throttles, 0 parity
         17 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         368901547057 packets output, 180820085800502 bytes, 28931652 total output drops
         Output 5 broadcast packets, 39284266 multicast packets
         0 output errors, 0 underruns, 0 applique, 0 resets
         0 output buffer failures, 0 output buffers swapped out
         10 carrier transitions
    >show controllers np counters np7  location 0/0/CPU0 | i DROP
    Fri Nov  2 15:27:03.815 MSK
      31  PARSE_INGRESS_DROP_CNT                                849353           0
      32  PARSE_EGRESS_DROP_CNT                                1236171           0
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3636654813         293
      37  MODIFY_EGRESS_DROP_CNT                                   669           0
      84  RESOLVE_AGE_NOMAC_DROP_CNT                                 1           0
      85  RESOLVE_AGE_MAC_STATIC_DROP_CNT                    187392316           8
    371  MPLS_PLU_DROP_PKT                                          1           0
    468  RESOLVE_VPLS_SPLIT_HORIZON_DROP_CNT                 28931887           6
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3293536501         272
    481  RESOLVE_L2_EGR_PW_UIDB_MISS_DROP_CNT                       4           0
    491  RESOLVE_VPLS_EGR_PW_FLOOD_UIDB_DOWN_DROP_CNT                 1           0
    499  RESOLVE_MAC_NOTIFY_CTRL_DROP_CNT                   313463638          16
    500  RESOLVE_MAC_DELETE_CTRL_DROP_CNT                     1591242           0
    622  EGR_DHCP_PW_UNTRUSTED_DROP                           1236171           0
    Input drops from RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT were discussed at https://supportforums.cisco.com/thread/2099283.
    But how can we apply that to output drops?

    The last column of "show controllers np counters np7  location 0/0/CPU0 | i DROP" is in pps. So we see 293 pps on RESOLVE_EGRESS_DROP_CNT and 0 pps on RESOLVE_INGRESS_DROP_CNT. Therefore RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT is part of RESOLVE_EGRESS_DROP_CNT, isn't it?
    Also, the egress_drop counters are increasing, but the ingress_drop counters are not:
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637707596         149
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294483194         129
    And one minute later:
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637718845         156
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294492975         135
    Also, no new input drops appear in "sh inter":
    sh inter tenGigE 0/0/0/6 | i drops
    Fri Nov  2 16:57:39.828 MSK
         481200652943 packets input, 287931866783215 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         369034005321 packets output, 180881208804090 bytes, 28963679 total output drops
    One minute later:
    sh inter tenGigE 0/0/0/6 | i drops
    Fri Nov  2 16:59:23.441 MSK
         481203274011 packets input, 287933491017363 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         369035900847 packets output, 180882007120600 bytes, 28964280 total output drops
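    One way to correlate these is to take two timed snapshots and compare per-second deltas of the interface output drops against each NP counter. Note the NP counters cover every interface on that NP, so the rates need not match exactly, but the reflection-filter counter should track the interface drops if it is the cause. A sketch using the figures quoted above:

```python
# Two snapshots of the interface "total output drops" counter, taken from
# the "sh inter ... | i drops" outputs above, 104 seconds apart.
t0, drops0 = 0,   28_963_679       # Fri Nov 2 16:57:39
t1, drops1 = 104, 28_964_280       # Fri Nov 2 16:59:23

# Two snapshots of the NP reflection-filter counter (about one minute apart).
np0, np1, np_interval = 3_294_483_194, 3_294_492_975, 60

intf_pps = (drops1 - drops0) / (t1 - t0)
np_pps = (np1 - np0) / np_interval

print(f"interface output drops:             {intf_pps:.1f} pps")
print(f"reflection-filter drops (whole NP): {np_pps:.1f} pps")
```

    Here the NP-wide counter (~163 pps) is rising much faster than this one interface's output drops (~5.8 pps), which suggests other interfaces on the same NP contribute most of the reflection-filter drops.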

  • Output drops on cisco link connecting to F5 Loadbalancer's management port

    On a connection like the one below:
    Cisco 6509: gi x/y <<-->> F5 BIGIP LTM: mgmt (Management Port)
    We observed incrementing packet drops on the F5 BIGIP mgmt interface.
    Also, at the Cisco end, incrementing output drops were observed.
    A tcpdump (packet capture) on the F5 BIGIP's mgmt port shows broadcast/multicast packets, including the HSRP hellos, being received from the Cisco device. It is expected behaviour that the F5 will reject any packets it cannot understand (including CDP, HSRP and other broadcasts), and this causes the packet-drop counter on the F5 BIGIP's mgmt port to increase. (F5 TAC acknowledged this behaviour.)
    Will this cause the output-drop counter on the Cisco interface to increase as well?
    Note: on the Cisco interface I do not see any other errors, and utilisation on the link is very low.
    Thanks
    Sudheer Nair

    Hi, this is probably late, but the software counters for output drops on these types of switches (3750s, blade switches) are not reliable.
    What you need to check is "show platform port-asic statistics drop", which gives you the hardware counters for a reliable per-interface drop count.
    https://tools.cisco.com/bugsearch/bug/CSCtq86186/?reffering_site=dumpcr
    Switch stack shows incorrect values for output drops/discards in "show interfaces". For example:
    --- show interfaces ---
    GigabitEthernet2/0/5 is up, line protocol is up (connected)
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4294967163
    Conditions:
    This is seen on Stackable switches running 12.2(58)SE or later.
    Workaround:
    None.
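    The tell-tale value in that bug output (4294967163) is 2^32 − 133, i.e. a small negative delta printed as an unsigned 32-bit counter. A quick Python check:

```python
bogus = 4_294_967_163              # drop count from the bug report above

# Interpret the unsigned 32-bit counter as a signed 32-bit integer:
signed = bogus - 2**32 if bogus >= 2**31 else bogus
print(signed)   # -133: a small negative value wrapped around
```

    Any "total output drops" figure within a few thousand of 4294967295 is almost certainly this underflow rather than real congestion, which is another reason to trust the port-ASIC hardware counters instead.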

  • High output drops

    I am using an Osprey 450e capture card to stream live video using FMLE 3. Midway into the stream the output video stutters, sticking for a few seconds at a time, and the output drops (fps) reach over 4300 over a 90-minute period!
    I am streaming 200k video and 48k audio [mono].

    You are experiencing high frame drops because of a lack of video bitrate: the encoder is not able to accommodate all the frames in 200 kbps. Please try increasing your Video Bitrate value and let me know if it works.
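    To put rough numbers on that: at 200 kbps and a typical 30 fps, the encoder has well under a kilobyte per frame, so it drops frames rather than blow the bitrate budget. A quick Python calculation (the frame rate is an assumption; the post does not state it):

```python
video_kbps = 200                   # video bitrate from the post
fps = 30                           # assumed frame rate (not stated in the post)

bits_per_frame = video_kbps * 1000 / fps
print(f"budget per frame: {bits_per_frame / 8:.0f} bytes")
```

    Roughly 833 bytes per frame is tight for live video at any reasonable resolution, so raising the bitrate (or lowering resolution/frame rate) is the usual fix.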

  • The digital output drop to 0

    Hi,
    I have this circuit to control the direction of a brushless DC motor.
    To release the brake, the BRAKE line should be connected to BLACK; this happens after I drive D0 and D3 high, and this works fine.
    To move backwards, we likewise have to connect BRAKE + BACK + BLACK, so D0 + D1 + D3 + D4 should be high; this also works fine.
    When I want to move forward, it should be BRAKE + BACK + FRW + BLACK, so all six pins should be high, but in this situation all the pins go low.
    Any suggestions?
    The modules I have are a 9403 and a cDAQ 9188.

    How much current do you need for those optocouplers?  If it is any more than about 10 mA each, then this actually makes sense.
    The 9403 has over-current protection built in.  When you reach a certain total output current (the spec claims 64 mA), the OCP kicks in and turns all of the outputs to high impedance.  I would recommend finding some digital buffers that can supply more current for your optocouplers.  I frequently use the 74ACT245SC.
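    The arithmetic behind that: with the 9403's total output-current protection at roughly 64 mA, six simultaneously-high lines driving optocoupler LEDs at, say, 15 mA each would trip it. A quick Python sketch (the 15 mA per-LED figure is an assumed example, not a datasheet value):

```python
ocp_limit_ma = 64                  # 9403 total output current before OCP (per the reply)
led_current_ma = 15                # assumed optocoupler LED current (example value)
channels_high = 6                  # the forward case drives all six outputs high

total = led_current_ma * channels_high
verdict = "OCP trips, all outputs go high-impedance" if total > ocp_limit_ma else "OK"
print(f"demand {total} mA vs limit {ocp_limit_ma} mA -> {verdict}")
```

    That also explains why the two- and four-output cases work: only the six-output combination pushes the total demand past the protection threshold.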

  • 3G drops to one bar accessing internet

    Hi all,
         I posted this in another forum but thought I would post here to see if someone from Verizon might be monitoring and could help us.  We got our Fascinate in December and we have always seemed to have a buggy 3G connection.  A few things going on:  one, we have never seen full signal strength on our phone, ever.  I think the most we have seen is 3 bars.  But more importantly, we have a strange thing happening when trying to access any Internet applications (browser, Facebook, etc).  As soon as we click the icon, the bars drop down to either 1 or none.  I understand that sometimes the strength indicator isn't necessarily always accurate...but half the time our pages are loading slow or not loading at all.  It feels like for whatever reason the signal is indeed dropping as soon as we try to connect.  I have a blackberry through verizon for work and sit it right next to the Fascinate and show full strength where the Fascinate shows maybe a bar or two.  This happened before the 2.2 update and is still happening after the update.
    Verizon had us reset the phone a couple of days ago and I have done the *228 a few times.  We still have the issue.  They are sending us a replacement phone.  I thought I had seen postings from others who had issues and were able to get tech support to do something around resetting their "data connection" (whatever that means).
    Any ideas what might be causing the drop to one or no signal bars with internet access?  If I am paying $30 for this a month, I should be able to access it whenever I want at decent speeds.

    Thank you for your inquiry. The signal strength is based on the area you are in and the phone itself. Normally, resetting the data connection won't increase signal. It will help if you are getting no data at all in an area you should. It is not a known issue for the Fascinate to get significantly lower signal or drop bars when going to the internet. In this case it does sound like this is an equipment issue. Are you seeing this issue with the replacement phone? 

  • I am trying to drag and drop one page of a .pdf into another .pdf in Acrobat Reader.  I used to be able to drag and drop from one .pdf to another.

    I am trying to drag and drop one page of a .pdf into another .pdf in Acrobat Reader.  I used to be able to drag and drop from one .pdf to another.

    If you could drag and drop pages before, it wasn't in Reader. You no doubt had Adobe Acrobat (Pro or Standard) which shouldn't be confused with Adobe Acrobat Reader. They recently added Acrobat to the name of Adobe Reader so the confusion about which product you had and/or have is understandable.
