WAAS interface - Input queue: output drops

I'm seeing Total output drops increment every now and then. We are using a 3750E switch stack configured for WCCP L2 forwarding and return. Does anyone know why I'm seeing output drops on the WAAS-connected interface? The WAAS interfaces are set up as standby. The model is 7371...
interface GigabitEthernet1/0/4
description ****WAAS1 GIG 1/0****
switchport access vlan 738
mls qos trust dscp
spanning-tree portfast
end
GigabitEthernet1/0/4 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 0022.be97.9804 (bia 0022.be97.9804)
Description: ****WAAS1 GIG 1/0****
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:03, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 281
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 5967000 bits/sec, 1691 packets/sec
5 minute output rate 5785000 bits/sec, 1606 packets/sec
9301822868 packets input, 3537902554734 bytes, 0 no buffer
Received 179580 broadcasts (172889 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 172889 multicast, 0 pause input
0 input packets with dribble condition detected
7661948806 packets output, 2639805900461 bytes, 0 underruns
0 output errors, 0 collisions, 5 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
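Before chasing those 281 drops, it is worth checking what fraction of traffic they represent. A quick back-of-the-envelope check (the counter values are copied from the output above; the calculation itself is not from the thread):

```python
# Back-of-the-envelope drop rate from the "show interface" counters above.
total_output_drops = 281
packets_output = 7_661_948_806

drop_rate = total_output_drops / (packets_output + total_output_drops)
print(f"{drop_rate:.2e} of output packets dropped")   # roughly 3.7e-08
```

At roughly 4 drops per 100 million packets, this looks like occasional micro-bursting rather than sustained congestion.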


Similar Messages

  • Ethernet interface input queue 81/80 (size/max)

    Hello,
    Has anyone had the problem where the Gigabit interface of a C1140-K9W7 (or C1135)
    sometimes "hangs" due to queue problems (from what I understood, it was queue problems)?
    I've got this AP, a C1140-K9W7 with IOS 12.4(21a)JA1, and noticed it was not processing
    any input packets at the Gigabit interface. The drop count was 0, but strangely the input queue
    information showed a size of 81 with a max of 80... it looks to me like the queue-processing
    code hung somewhere.
    The interface output is OK, however (the AP is sending ARP requests).
    I've done some searching but was not able to find any information about it. I also followed the
    steps in [1] to try to troubleshoot what was causing it, with no success. The IP traffic listing
    shows that the interface is receiving packets, but they aren't being processed and "aren't"
    being dropped either (at least the drop count is 0).
    If I reboot the AP it works OK. I can still access the console (via serial) and the AP is
    still in that state, in case there's any suggested procedure.
    Thanks for your time.
    Jean Mousinho
    [1] http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note09186a0080094791.shtml

    It looks like this could be related:
    CSCtf27580 Ethernet interface input queue wedge from broadcast/uniGRE traffic
    Is there any GRE traffic going through this AP?
    The workarounds are:
    Reboot the APs to bring them back up for the time being.
    OR
    Go back to 6.0.188.0 code on the WLC.
    OR
    Route GRE traffic away from the APs.
    It appears the bug definitely exists in your code versions:
    12.4(21a)JHA          12.4(21a)JA01          006.000(196.000)

  • Input Queue drops

    Hi guys, I am seeing a lot of input queue drops on one of our remote routers, which has an IPsec-protected GRE tunnel towards our main branch:
     Input queue: 0/75/168173/8 (size/max/drops/flushes); Total output drops: 0
     Throttle count         37
     Drops         RP    1770332         SP          0
    This is the output of an interface connecting to ISP
    The other thing I would like to mention is that we have some users who connect via Cisco VPN Client to other sites over an already established IPsec/GRE tunnel; could that be a reason for the drops?
    Do I need to alter the MTU to allow for the additional GRE overhead of the second tunnel?
    The current tunnel interface settings are:
    ip mtu 1476
    ip tcp adjust-mss 1380
    Many thanks

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    Input queue drops can be an indication of process switching (the CPU being unable to keep up with the offered load).
    If it's just short bursts, increasing the input queue depth might mitigate them.
    Have you reviewed Cisco's documents on troubleshooting input queue drops?
    BTW, if your GRE tunnel is "protected" (IPsec), the MTU is likely too large, as may be your MSS adjust.
    If you're doing plain GRE, your MSS adjust is too small.
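    The MTU/MSS arithmetic behind that advice can be sketched as follows. The overhead figures are assumed typical values, not from the original post; actual IPsec overhead depends on the transform set and tunnel vs. transport mode:

```python
# Ballpark GRE and GRE-over-IPsec overhead math. The ESP figure is an
# assumed typical value (ESP with padding, tunnel mode); the real
# overhead varies with the transform set.
PHYS_MTU = 1500
GRE_OVERHEAD = 24        # 20-byte outer IP header + 4-byte GRE header
IPSEC_OVERHEAD = 56      # assumed ESP overhead, varies in practice

# Plain GRE tunnel
gre_mtu = PHYS_MTU - GRE_OVERHEAD          # 1476
gre_mss = gre_mtu - 40                     # minus 20 IP + 20 TCP -> 1436

# GRE protected by IPsec: leave room for ESP as well
prot_mtu = PHYS_MTU - GRE_OVERHEAD - IPSEC_OVERHEAD   # 1420
prot_mss = prot_mtu - 40                              # 1380

print(f"plain GRE:  ip mtu {gre_mtu}, ip tcp adjust-mss {gre_mss}")
print(f"GRE+IPsec:  ip mtu {prot_mtu}, ip tcp adjust-mss {prot_mss}")
```

    Under these assumptions, the poster's `ip mtu 1476` matches the plain-GRE case while `ip tcp adjust-mss 1380` matches the protected case, which is exactly the mismatch the reply points out.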

  • ME 2600X input queue drops

    We have started to install the ME2600X as an access switch for FTTH.
    Trunk ports are configured with REP and service instances.
    These interfaces face Cat 4500X switches with REP edge ports. Northbound is a 6880X VSS, which is connected to the legacy network consisting of a couple of Cat6500s plus loads of Catalyst switches.
    We see loads of input queue drops on the ME2600X trunk interfaces. Even if I limit the allowed VLANs from the 4500X towards the ME2600X, the number of dropped packets is still about half the number of packets received on the interface.
    Captured traffic going out of the Cat4500X towards the ME2600X showed mostly what I suspect is REP traffic. "show mac traffic interface" shows that all packets dropped are destined for the "RP".
    We do not have clients on any ports yet, so all traffic is inbound to the switch.
    I need info and help troubleshooting this. What are the criteria for drops, and how do I find out what is dropped on this model?

    Config of REP port except the service instances
    interface TenGigabitEthernet0/45
     description TRAMAN-STH-02
     no ip address
     carrier-delay msec 200
     rep segment 1
     no keepalive
     soak link notification 10
     ip dhcp snooping trust
     l2protocol peer cdp lacp
     l2protocol forward stp vtp dtp pagp dot1x
    Here is an example of the number of drops versus input:
    Switch#show int te0/45
      Input queue: 0/75/337/0 (size/max/drops/flushes); Total output drops: 0
      5 minute input rate 59000 bits/sec, 10 packets/sec
    2659 packets input, 1976424 bytes, 0 no buffer
         Received 745 broadcasts (0 IP multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 watchdog, 0 multicast, 0 pause input
    Switch#show int te0/45 summ
     *: interface is up
     IHQ: pkts in input hold queue     IQD: pkts dropped from input queue
     OHQ: pkts in output hold queue    OQD: pkts dropped from output queue
     RXBS: rx rate (bits/sec)          RXPS: rx rate (pkts/sec)
     TXBS: tx rate (bits/sec)          TXPS: tx rate (pkts/sec)
     TRTL: throttle count
      Interface                   IHQ       IQD       OHQ       OQD      RXBS      RXPS      TXBS      TXPS      TRTL
    * Te0/45                        0       533         0         0     64000        16      3000         3         0
    switch#show int te0/45 switching
    TenGigabitEthernet0/45 TRAMAN-STH-02
              Throttle count          0
                       Drops         RP    5491108         SP          0
                 SPD Flushes       Fast          0        SSE          0
                 SPD Aggress       Fast          0
                SPD Priority     Inputs          0      Drops          0
        Protocol  CDP
              Switching path    Pkts In   Chars In   Pkts Out  Chars Out
                     Process      17267    8029155      19179    7690779
                Cache misses          0          -          -          -
                        Fast          0          0          0          0
                   Auton/SSE          0          0          0          0
        Protocol  Other
              Switching path    Pkts In   Chars In   Pkts Out  Chars Out
                     Process          0          0    1150382  109509702
                Cache misses          0          -          -          -
                        Fast          0          0          0          0
                   Auton/SSE          0          0          0          0
        NOTE: all counts are cumulative and reset only after a reload.

  • DMVPN in Cisco 3945 output drop in tunnel interface

    I configured DMVPN on a Cisco 3945 and checked the tunnel interface. I found that I have output drops. How can I get rid of them? I have already set the ip mtu to 1400.
    CORE-ROUTER#sh int tunnel 20
    Tunnel20 is up, line protocol is up
      Hardware is Tunnel
      Description: <Voice Tunneling to HO>
      Internet address is 172.15.X.X./X
      MTU 17878 bytes, BW 1024 Kbit/sec, DLY 50000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation TUNNEL, loopback not set
      Keepalive not set
      Tunnel source 10.15.X.X (GigabitEthernet0/1)
       Tunnel Subblocks:
          src-track:
             Tunnel20 source tracking subblock associated with GigabitEthernet0/1
              Set of tunnels with source GigabitEthernet0/1, 1 member (includes iterators), on interface <OK>
      Tunnel protocol/transport multi-GRE/IP
        Key 0x3EA, sequencing disabled
        Checksumming of packets disabled
      Tunnel TTL 255, Fast tunneling enabled
      Tunnel transport MTU 1438 bytes
      Tunnel transmit bandwidth 8000 (kbps)
      Tunnel receive bandwidth 8000 (kbps)
      Tunnel protection via IPSec (profile "tunnel_protection_profile_2")
      Last input 00:00:01, output never, output hang never
     --More--           Last clearing of "show interface" counters never
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 7487
      Queueing strategy: fifo
      Output queue: 0/0 (size/max)
      30 second input rate 0 bits/sec, 0 packets/sec
      30 second output rate 0 bits/sec, 0 packets/sec
         48007 packets input, 4315254 bytes, 0 no buffer
         Received 0 broadcasts (0 IP multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         42804 packets output, 4638561 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 output buffer failures, 0 output buffers swapped out
    interface Tunnel20
     description <Bayantel Voice tunneling>
     bandwidth 30720
     ip address 172.15.X.X 255.255.255.128
     no ip redirects
     ip mtu 1400
     no ip next-hop-self eigrp 20
     no ip split-horizon eigrp 20
     ip nhrp authentication 0r1x@IT
     ip nhrp map multicast dynamic
     ip nhrp network-id 1002
     ip nhrp holdtime 300
     ip tcp adjust-mss 1360
     tunnel source FastEthernet0/0/1
     tunnel mode gre multipoint
     tunnel key 1002
     tunnel protection ipsec profile tunnel_protection_profile_2 shared

    Hi,
    Thanks for the input. If the radio is sending out the packet but the client did not receive it, no output drop should be seen, since the packet was sent out, right?
    From my understanding, output drops are related to a congested interface: the outgoing interface cannot keep up with the rate at which packets are coming in and thus drops them. What I don't understand is that neither the input nor the output rate has reached its limit yet. Also, the input queue is showing packet drops even though the queue is empty.
    Any ideas?

  • Could high "Total Output Drops" on one interface on a 3560G, be caused by faulty hardware on another interface?

    Hi All,
    I have been trying to diagnose an issue we have been having with packet loss on video calls (which I think we may now have resolved, as the problem lay elsewhere), but in the process we have trialled some equipment from PathView, and this seems to have created a new problem.
    We have a standalone 3560G switch which connects into a provider's 3750G as part of an MPLS network. There is a single uplink to the 3750 from the 3560 (@ 1Gbps), and whilst I can manage the 3560, I have no access to the provider's switch. Our 3560 has a fairly vanilla config on it with no QoS enabled.
    There are only a few ports used on the 3560, mainly for Cisco VCS (Video Conferencing Servers) and a PathView traffic analysis device. The VCS devices are used to funnel videoconferencing traffic across the MPLS network into another institution's network. The PathView device can be used to send traffic bursts (albeit relatively small compared with the bandwidth that is available) across the same route as the VC traffic to an opposing device; however, I have also disabled all of these paths for the moment.
    I can run multiple VC calls which utilise the VCS devices, so traffic is routing into the relevant organisations and everything is good. In fact, I have 5 x 2Mb calls in progress now and there are 0 (or very, very few) errors.
    However, I have actually shut down the port (Gi0/3) connected to the PathView device for the moment. If I re-enable it, I start to see a lot of errors on the VC calls, and the Total Output Drops on the UPLINK interface (Gi0/23) start rising rapidly. As soon as I shut down the PathView port (Gi0/3) again, the errors stop and all returns to normal.
    I have read that issues on the output queue are often attributed to a congested network/interface, but I don't believe that is the case in this instance. My 5 VC calls only amount to 10Mbps, which is way short of the 1000Mbps available. Even the PathView device only issues bursts of up to 2Mbps, and with the paths actually disabled even that shouldn't be happening, so only a small amount of management traffic should be flowing. Still, as soon as I enable the port, problems start.
    So, is it possible that the port on the switch, the cable, or the PathView device itself is actually faulty and causing such errors? Has anyone seen anything like this?
    Cheers
    Chris

    "As far as I know, such drops shouldn't be caused by faulty hardware, but if the hardware is really faulty, you would need to involve TAC."
    Ok, thanks.
    "BTW, all the other interfaces, which have the low bandwidth rates you describe, are physically running at low bandwidth settings on the interface, e.g. 10 Mbps?  If not, you can have short transient micro-bursts which can cause drops.  This can happen even when average bandwidth utilization is low.  (NB: if these other ports' average utilization is so low, if not already doing so, you could run the ports at 10 Mbps too.)"
    No. All ports on the switch connect to devices with 1Gb-capable interfaces. They have been left to auto-negotiate and have negotiated at 1000/full. The bandwidth described is more with regard to the actual data throughput of a call. Technically, the VCS devices are licensed to handle 50 simultaneous calls of up to 4Mbps, so they could potentially require a bandwidth of 200Mbps, although it is unlikely that we will see this amount of traffic.
    "Also, even if you have physically low bandwidth ingress, with a high bandwidth egress, and even if the egress's bandwidth is more than the aggregate of all the ingress, you can still have drops caused by concurrent arrivals."
    In general, the ingress and the egress should be similar. Think of this as a stub network: one path in and out (via Gi0/23). The VCS act as a kind of proxy/router for video traffic, simply terminating inbound legs and generating a next-hop outbound leg. The traffic coming in to the VCS should be the same as the traffic going out.
    There will of course be certain management traffic, but this will be relatively low volume, and of course the PathView traffic analyser can generate a burst of UDP packets to simulate voice traffic.
    "Some other "gotchas" include, you mention you don't have QoS configured, but you're sure QoS is disabled too?"
    Yes.
    switch#show mls qos
    QoS is disabled
    QoS ip packet dscp rewrite is enabled
    I can't see a lot of point enabling QoS on this particular switch. Pretty much all of the traffic passing through it will be QoS-tagged at the same level; therefore it is ALL equally prioritised.
    Indeed, running a test overnight with these multiple calls live and the PathView port shut down resulted in 0 Total Output Drops. Each leg did suffer a handful of dropped packets end-to-end, but I think I can live with 100 packets dropped out of 10 million during a 12-hour period (and this, I suspect, will be somewhere else on the network).
    "Lastly, Cisco has documented, at least for the 3750X, that uplink ports have the same buffer RAM resources as 24 copper edge ports.  Assuming the earlier series are similar, there might be benefit to moving your uplink, now on g0/23, to an uplink port (if your 3560G has them)."
    Unfortunately, no can do. We are limited to the built-in ports on the switch, as we have no SFP modules installed.
    Apologies about the formatting - this is yet another thing that has been broken in these new forums. It looks a lot better in the Reply window than it does in this normal view.

  • How to get input and output using math interface toolkit

    Hi,
    I am fairly new to LabVIEW and I am trying to convert my LabVIEW code
    into MATLAB MEX files using the Math Interface Toolkit. I can't see any
    input or output terminals when I try to convert the code to MEX files,
    even though my VI has plenty of inputs and outputs that should be
    available during conversion.
    Just to cross-check, I made another VI in which I fed an
    array of data into an FFT and output it to an array again. I tried to
    convert this code to MEX files but was still not able to see any input
    or output terminals, which makes me believe that I must be doing
    something wrong at a very basic level, and in spite of trying really
    hard for some days now I have not been able to figure out what it might be.
    So please help.
    I am attaching the basic VI that I created, along with the link that I followed for converting LabVIEW code to MEX files.
    http://zone.ni.com/devzone/conceptd.nsf/webmain/EEFA8F98491D04C586256E490002F100
    I am using LabVIEW 7.1.
    Thanks
    Attachments:
    test.vi ‏17 KB

    Yes, you've made a very basic mistake. You have front panel controls and indicators, but none of them are connected to the VI's connector pane. Right-click on the VI's icon and select "Show Connector". Use the wiring tool to select a connection there and then select a control or indicator. Use the online help and look up the topic "connector panes". There are some sub-topics on how to assign, confirm, delete, etc.

  • Increasing Total Output Drops number

    I have an autonomous Cisco AP1242 running on channel 11 (best channel avail) with only one client associated.
    Signal Strength and Channel Utilization look good.
    By design this client is constantly sending UDP/multicast packets, so I had to disable IGMP snooping on the AP. However, I have noticed data dropouts and have been able to correlate them by running the command:
    show interface dot11radio 0
    Every time I run the above command, the Total Output Drops counter increases:
    Dot11Radio0 is up, line protocol is up
      Hardware is 802.11G Radio, address is 001c.b0eb.eb70 (bia 001c.b0eb.eb70)
      MTU 1500 bytes, BW 54000 Kbit, DLY 1000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input 00:00:00, output 00:00:00, output hang never
      Last clearing of "show interface" counters 00:37:46
      Input queue: 0/1127/0/0 (size/max/drops/flushes); Total output drops: 3178
      Queueing strategy: fifo
      Output queue: 0/30 (size/max)
      5 minute input rate 43000 bits/sec, 14 packets/sec
      5 minute output rate 92000 bits/sec, 17 packets/sec
         29799 packets input, 12551639 bytes, 0 no buffer
         Received 17376 broadcasts, 0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 input packets with dribble condition detected
         41308 packets output, 25121942 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier
         0 output buffer failures, 0 output buffers swapped out
    I cleared the statistics and ran the command after a few minutes.
    Any ideas what could be causing packets to be dropped?
    QOS is disabled on the AP.
    Thanks

    Hi,
    There is only one wireless client.
    Just took a 5-minute Wireshark reading and it is giving the following:
    Packets: 2286
    Avg. packets/sec: 7.729
    Avg packet size: 671.527 bytes
    Avg bytes/sec: 5190.457
    I am new to this. Is the above considered high volume for one client?
    I just compared wired vs wireless captures... I am only losing packets on the wireless medium.
    When you say that the radio may not have enough buffer... are you referring to the wireless adapter or the Access Point?
    Thanks

  • Nexus 7k input queuing

    On our 7Ks we run our interfaces in dedicated rather than shared mode. Since we are running in dedicated mode, does one need to be concerned with the input queuing policy, or can we just let the egress policy take care of the queuing?
      Service-policy (queuing) input:   default-in-policy
        SNMP Policy Index:  301990105
        Class-map (queuing):   in-q1 (match-any)
          queue-limit percent 50
          bandwidth percent 80
          queue dropped pkts : 0
        Class-map (queuing):   in-q-default (match-any)
          queue-limit percent 50
          bandwidth percent 20
          queue dropped pkts : 0

    Hi,
    Please check the output of the command "show hardware internal interface indiscard-stats front-port x".
    Support for Granular Input Packet Discard Information
    Beginning with Cisco NX-OS Release 5.0(3)U2(1), you can get more detailed information on what specific condition led to an input discard on a given interface. Use the show hardware internal interface indiscard-stats front-port x command to determine the condition that could potentially be responsible for the input discards seen on port eth1/x. The switch output shows the discards for IPv4, STP, input policy, ACL-specific discards, generic receive drops, and VLAN-related discards.

  • Output drops on cisco link connecting to F5 Loadbalancer's management port

    On a connection like below:
    Cisco 6509: gi x/y <<-->> F5 BIGIP LTM: mgmt (Management Port)
    We observed incrementing packet drops on the F5 BIG-IP mgmt interface.
    Also, at the Cisco end, incrementing output drops were observed.
    A tcpdump (packet capture) on the F5 BIG-IP's mgmt port shows broadcast/multicast packets, including HSRP hellos, being received from the Cisco device. It is expected behaviour that F5 will reject any packets it can't understand (including CDP, HSRP and other broadcasts), and this will cause the packet drop counter of the F5 BIG-IP's mgmt port to increase. (F5 TAC acknowledged this behaviour.)
    Will this cause the output drop counter on the Cisco interface to increase as well?
    Note: On the Cisco interface, I do not see any other errors; also, utilisation on the link is very minimal.
    Thanks
    Sudheer Nair

    Hi, this is probably late, but the software counters for output drops on these types of switches (3750s, blade switches) are not reliable.
    What you need to check is "show platform port-asic statistics drop" for a reliable drop counter on an interface. This will give you the hardware counters.
    https://tools.cisco.com/bugsearch/bug/CSCtq86186/?reffering_site=dumpcr
    Switch stacks show incorrect values for output drops/discards
    in show interfaces. For example:
    --- show interfaces ---
    GigabitEthernet2/0/5 is up, line protocol is up (connected)
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4294967163
    Conditions:
    This is seen on Stackable switches running 12.2(58)SE or later.
    Workaround:
    None.
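    The bogus figure in the bug example is itself telling: 4294967163 is what a 32-bit counter shows after decrementing slightly below zero. A small sketch (illustrative, not from the bug report):

```python
# 4294967163 from the bug example, reinterpreted as a signed 32-bit
# integer: the counter apparently went slightly negative and wrapped.
import ctypes

bogus = 4294967163
as_signed = ctypes.c_int32(bogus).value
print(as_signed)   # -133
```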

  • Show Interface Input Errors

    When doing a show interface - what do Input Errors indicate?

    Ankur,
    Here is the output from show int... The only thing incrementing is the input errors, but what type of problem does that indicate?
    JSC-1#sh int gigabitEthernet 11/1
    GigabitEthernet11/1 is up, line protocol is up (connected)
    Hardware is C6k 1000Mb 802.3, address is 0012.0092.a260 (bia 0012.0092.a260
    Description: GL1
    MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
    Encapsulation ARPA, loopback not set
    Full-duplex, 100Mb/s
    input flow-control is off, output flow-control is off
    Clock mode is auto
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input never, output 00:00:35, output hang never
    Last clearing of "show interface" counters 1d03h
    Input queue: 0/2000/41/0 (size/max/drops/flushes); Total output drops: 0
    Queueing strategy: fifo
    Output queue: 0/40 (size/max)
    5 minute input rate 645000 bits/sec, 66 packets/sec
    5 minute output rate 305000 bits/sec, 63 packets/sec
    1152632 packets input, 910547914 bytes, 0 no buffer
    Received 2068 broadcasts (1650 multicast)
    0 runts, 0 giants, 0 throttles
    41 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 0 multicast, 0 pause input
    0 input packets with dribble condition detected
    1917609 packets output, 964325920 bytes, 0 underruns
    0 output errors, 0 collisions, 0 interface resets
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 PAUSE output
    0 output buffer failures, 0 output buffers swapped out

  • OID value for Total output drops

    Hi, we have a Cisco C7200P router at work running IOS 12.4(12.2r)T, and we monitor it using Zenoss 3.1. We want to be able to capture the total output drops for a Gigabit Ethernet interface. I created a custom monitoring template and added the following data source:
    Name: cieIfOutputQueueDrops
    OID: 1.3.6.1.4.1.9.9.276.1.1.1.1.11
    The total output drops as viewed via the CLI are as follows:
    Input queue: 0/75/1335749/399902 (size/max/drops/flushes); Total output drops: 53882894
    However, the graph on Zenoss reports a completely different value of ~360M. Here is the output of snmpwalk:
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.1 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.2 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.3 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.4 = Counter32: 363270064
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.5 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.6 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.7 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.12 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.13 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.14 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.15 = Counter32: 653008
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.26 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.125 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.139 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.140 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.194 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.196 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.254 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.288 = Counter32: 0
    The value it returns is incorrect. I would appreciate some assistance.

    Did you try using ifOutDiscards (.1.3.6.1.2.1.2.2.1.19)? These are counted as output drops, as shown in the show interfaces command.
    It shows the number of outbound packets which were chosen to be discarded, even though no errors had been detected, to prevent their being transmitted. One possible reason for discarding such a packet could be to free up buffer space.
    For more details on interface counters, please check the following document:
    SNMP Counters: Frequently Asked Questions
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**
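    If you do graph ifOutDiscards yourself, remember it is a Counter32 and wraps at 2^32; deltas between polls should be taken modulo 2^32 so a wrap doesn't show up as a huge negative spike. A minimal sketch (function name and sample values are illustrative):

```python
# Counter32 values wrap at 2**32. Taking the delta modulo 2**32 keeps a
# single wrap between polls from producing a bogus sample.
COUNTER32_MOD = 2 ** 32

def counter_delta(old: int, new: int) -> int:
    """Increase between two Counter32 samples, tolerating one wrap."""
    return (new - old) % COUNTER32_MOD

print(counter_delta(1000, 1500))        # normal increase -> 500
print(counter_delta(4294967290, 10))    # wrapped between polls -> 16
```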

  • Input and output on same device, producer/consumer structure

    Hello interested people,
    I have a question about using the same device for both digital inputs
    and outputs. I have written a simple program of one while loop
    that continuously polls the device, processes, and requests. I
    have addressed the device using two DAQmx Assistants, and I have attached
    them with their error in/out cluster terminals to provide data flow and
    eliminate the chance of addressing the devices at the same time (which
    produces an error). Now I want to change this program structure
    to a producer/consumer loop foundation with a state machine.
    In this design, I will have the DI in the producer loop and the DO in
    the consumer loop, under one of the states. I can't simply
    connect the error in/out ports in this configuration, so my question is
    how to avoid the error caused by addressing the same device
    simultaneously with two different tasks (input and output)? I
    have attached two VIs: the "One Loop" VI is the original configuration
    (simplified), and the Producer-Consumer VI is a nonsensical program
    that simply represents the desired configuration. (I don't need
    any comments on the programming of this VI; it is only an example to
    illustrate the problem.)
    I am thinking about bundling the input data and the error cluster, both
    from the PXI 6528 DI, into one cluster, queueing that up, and
    unbundling the dequeued elements for some kind of data flow between
    the producer loop and the "Request" state of the consumer loop.
    Is this the right approach, or am I barking up the wrong tree?
    Thanks
    Attachments:
    One Loop DO DI.vi ‏102 KB
    Producer-Consumer DI-DO.vi ‏106 KB

    Hello,
    It sounds to me like you really have two modes:
    1. user interface actions determine execution
    2. user interface is locked, and execution is automated.
    I think it would make sense to use the producer consumer for an architecture.  Basically you would do the following:
    1. program the producer to handle the user interface as you normally would.
    2. provide one additional event case in the producer which would be your "automated handling" case.  In that case, you could put a state machine which could run until whatever conditions were met to put your program back in "user interface mode".
    Keep in mind that you can use custom USER EVENTS to programmatically generate events, i.e. you can trigger the start of your "automated handling" from anywhere in your code at virtually any time.
    I think this would allow you to take advantage of the producer consumer architecture in its intended spirit, while integrating an automated routine.
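    In rough text form (Python, names hypothetical), the "automated handling" event case with an embedded state machine might look like this:

    ```python
    def run_state_machine(max_steps):
        """Tiny state machine run from inside the 'automate' event case."""
        state, log = "init", []
        for _ in range(max_steps):
            log.append(state)
            if state == "init":
                state = "acquire"
            elif state == "acquire":
                state = "done"
            elif state == "done":
                break                  # exit condition met: back to UI mode
        return log

    def handle_event(event):
        if event == "automate":        # the extra "automated handling" case
            return run_state_machine(10)
        return ["ui:" + event]         # normal user-interface handling

    trace = []
    for ev in ["click", "automate", "click"]:
        trace.extend(handle_event(ev))
    ```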
    I hope this helps!
    Best Regards,
    JLS
    Sixclear

  • Output Drop by RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT

    Hello!
    How can I determine the reason for the output drops?
    >sh inter tenGigE 0/0/0/6              
    Fri Nov  2 15:26:05.358 MSK
    TenGigE0/0/0/6 is up, line protocol is up
      Interface state transitions: 11
      Hardware is TenGigE, address is 108c.cf1d.f326 (bia 108c.cf1d.f326)
      Layer 1 Transport Mode is LAN
      Description: To_XXX
      Internet address is 10.1.11.77/30
      MTU 9194 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)
         reliability 255/255, txload 2/255, rxload 5/255
      Encapsulation ARPA,
      Full-duplex, 10000Mb/s, LR, link type is force-up
      output flow control is off, input flow control is off
      loopback not set,
      ARP type ARPA, ARP timeout 04:00:00
      Last input 00:00:00, output 00:00:00
      Last clearing of "show interface" counters 50w1d
      30 second input rate 218575000 bits/sec, 41199 packets/sec
      30 second output rate 115545000 bits/sec, 30555 packets/sec
         481020016118 packets input, 287815762466192 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         Received 29 broadcast packets, 39255653 multicast packets
                  0 runts, 17 giants, 0 throttles, 0 parity
         17 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         368901547057 packets output, 180820085800502 bytes, 28931652 total output drops
         Output 5 broadcast packets, 39284266 multicast packets
         0 output errors, 0 underruns, 0 applique, 0 resets
         0 output buffer failures, 0 output buffers swapped out
         10 carrier transitions
    >show controllers np counters np7  location 0/0/CPU0 | i DROP
    Fri Nov  2 15:27:03.815 MSK
      31  PARSE_INGRESS_DROP_CNT                                849353           0
      32  PARSE_EGRESS_DROP_CNT                                1236171           0
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3636654813         293
      37  MODIFY_EGRESS_DROP_CNT                                   669           0
      84  RESOLVE_AGE_NOMAC_DROP_CNT                                 1           0
      85  RESOLVE_AGE_MAC_STATIC_DROP_CNT                    187392316           8
    371  MPLS_PLU_DROP_PKT                                          1           0
    468  RESOLVE_VPLS_SPLIT_HORIZON_DROP_CNT                 28931887           6
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3293536501         272
    481  RESOLVE_L2_EGR_PW_UIDB_MISS_DROP_CNT                       4           0
    491  RESOLVE_VPLS_EGR_PW_FLOOD_UIDB_DOWN_DROP_CNT                 1           0
    499  RESOLVE_MAC_NOTIFY_CTRL_DROP_CNT                   313463638          16
    500  RESOLVE_MAC_DELETE_CTRL_DROP_CNT                     1591242           0
    622  EGR_DHCP_PW_UNTRUSTED_DROP                           1236171           0
    Input drops caused by RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT were discussed at https://supportforums.cisco.com/thread/2099283
    But how can we apply that to output drops?

    The last column of "show controllers np counters np7  location 0/0/CPU0 | i DROP" is a rate in pps. So we see 293 pps on
    RESOLVE_EGRESS_DROP_CNT and 0 pps on RESOLVE_INGRESS_DROP_CNT. Therefore RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT must be part of RESOLVE_EGRESS_DROP_CNT, mustn't it?
    Also, the egress_drop counters are increasing, but the ingress_drop counters are not:
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637707596         149
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294483194         129
    And one minute later:
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637718845         156
    469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294492975         135
    Also, there are no new input drops in "sh inter":
    sh inter tenGigE 0/0/0/6 | i drops
    Fri Nov  2 16:57:39.828 MSK
         481200652943 packets input, 287931866783215 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         369034005321 packets output, 180881208804090 bytes, 28963679 total output drops
    One minute later:
    sh inter tenGigE 0/0/0/6 | i drops
    Fri Nov  2 16:59:23.441 MSK
         481203274011 packets input, 287933491017363 bytes, 876403 total input drops
         0 drops for unrecognized upper-level protocol
         369035900847 packets output, 180882007120600 bytes, 28964280 total output drops
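    One way to confirm which NP counter tracks the interface output drops is to diff two samples of the counter output and compare the deltas. A small Python sketch of that idea (parsing simplified; the sample lines are taken from the two snapshots above):

    ```python
    def parse_counters(text):
        """Parse 'show controllers np counters'-style lines into {name: count}."""
        counters = {}
        for line in text.strip().splitlines():
            parts = line.split()
            # columns: index, NAME, cumulative count, rate (pps)
            if len(parts) >= 3 and parts[2].isdigit():
                counters[parts[1]] = int(parts[2])
        return counters

    sample1 = """
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637707596         149
     469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294483194         129
    """
    sample2 = """
      33  RESOLVE_INGRESS_DROP_CNT                              868559           0
      34  RESOLVE_EGRESS_DROP_CNT                           3637718845         156
     469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294492975         135
    """

    before, after = parse_counters(sample1), parse_counters(sample2)
    deltas = {name: after[name] - before[name] for name in before}
    # Ingress is flat while egress and the reflection-filter counter both climb,
    # supporting the reading that the reflection drops are counted on egress.
    ```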

  • ASR9K Interface Input Errors?

    All,
    I got a ticket today regarding the error count on one of the TenGigE interfaces on one of our ASR9Ks. Looking at the interface, I noticed quite a few input errors on the port. I cleared the counters and monitored, and when I checked the port again the input errors were incrementing once more. On the smaller Cisco switches you can usually run the "show interface gig 1/1/1 counters errors" command; on the ASRs you cannot. I am wondering if someone could explain what the input errors could mean. A media issue? An incorrect setting? I'm not sure what I need to be looking at, since I have little time on the ASRs at this point. Thanks.
    Regards,
    Mark       

    Here you are:
    RP/0/RSP1/CPU0:rx-cssclabqa-b217-3-core#sh int tenGigE 0/0/0/30
    Thu Jan 23 09:45:23.702 CST
    TenGigE0/0/0/30 is up, line protocol is up
      Interface state transitions: 5
      Hardware is TenGigE, address is 8478.ac2b.8ce6 (bia 8478.ac2b.8ce6)
      Layer 1 Transport Mode is LAN
      Internet address is Unknown
      MTU 1514 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)
         reliability 248/255, txload 0/255, rxload 0/255
      Encapsulation ARPA,
      Full-duplex, 10000Mb/s, link type is force-up
      output flow control is off, input flow control is off
      loopback not set,
      Last input 00:00:08, output never
      Last clearing of "show interface" counters 20:39:33
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 0 bits/sec, 0 packets/sec
         1239 packets input, 341964 bytes, 0 total input drops
         0 drops for unrecognized upper-level protocol
         Received 0 broadcast packets, 1239 multicast packets
                  0 runts, 0 giants, 0 throttles, 0 parity
        409 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         0 packets output, 0 bytes, 0 total output drops
         Output 0 broadcast packets, 0 multicast packets
         0 output errors, 0 underruns, 0 applique, 0 resets
         0 output buffer failures, 0 output buffers swapped out
         0 carrier transitions
    Regards,
    Mark
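    Since IOS XR has no per-error counters command like the Catalyst one, a workable substitute is to scrape the error line from "show interfaces" output and watch it between samples. A hedged Python sketch of that (the regex matches the line format shown above; it is illustrative, not an official API):

    ```python
    import re

    def input_errors(show_int_output):
        """Pull input-error subcounters out of IOS XR 'show interfaces' text."""
        m = re.search(
            r"(\d+) input errors, (\d+) CRC, (\d+) frame, (\d+) overrun",
            show_int_output)
        return {k: int(v) for k, v in
                zip(("input_errors", "crc", "frame", "overrun"), m.groups())}

    sample = "409 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort"
    errs = input_errors(sample)
    # 409 aggregate input errors with CRC/frame/overrun all zero suggests the
    # errors are tallied by a subcounter not on this line (runts, giants, etc.)
    # or point at a physical-layer issue worth checking.
    ```

    Comparing two such samples a minute apart (as done in the thread above with the NP counters) shows whether the errors are still accumulating after a clear.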

Maybe you are looking for

  • Restricting the pl/sql error in report region(sql report)

    Hi, Is there any way to hiding pl/sql error in report region when we use generic column names*(Use Generic Column Names (parse query at runtime only)* ). and type is sql query else displaying alternative error message on that particular report region

  • Unable to install Windows 7 via Bootcamp

    I have an early 2008 Apple Mac Pro Dual 2.8 GHZ processors with 16 GB of RAM. I have a WD Veliciraptor 300 GB hard drive in Bay 1 and a WD 320 GB drive in bay two. After I got the Mac system set up with all of my programs/user accounts/software updat

  • I am having issues keeping WCS running...

    I have a WCS server that is an old WLSE box that I have installed Windows Server 2003 on to be able to run WCS. I was running 4.0.97.0 and it was running fine. Then one day, the application stopped running. I have since upgraded to 4.1.83.0 and insta

  • Problem with Cross Dissolve: The video cuts out during the transistion

    I'm encountering a problem with the Cross Dissolve transistion wherein during the period in which the transition should be happening the video instead drops out.  There's plenty of heads and tails available on both clips, but as soon as the timeline

  • Possible for slide show pictures to be bigger?

    Hey there, Got a photo web page made. I'd like for the slide show pictures to appear bigger than they currently are. Is it possible to change the size of them ? thanks, t