Output Drop by RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT

Hello!
How can I determine the reason for these output drops?
>sh inter tenGigE 0/0/0/6              
Fri Nov  2 15:26:05.358 MSK
TenGigE0/0/0/6 is up, line protocol is up
  Interface state transitions: 11
  Hardware is TenGigE, address is 108c.cf1d.f326 (bia 108c.cf1d.f326)
  Layer 1 Transport Mode is LAN
  Description: To_XXX
  Internet address is 10.1.11.77/30
  MTU 9194 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)
     reliability 255/255, txload 2/255, rxload 5/255
  Encapsulation ARPA,
  Full-duplex, 10000Mb/s, LR, link type is force-up
  output flow control is off, input flow control is off
  loopback not set,
  ARP type ARPA, ARP timeout 04:00:00
  Last input 00:00:00, output 00:00:00
  Last clearing of "show interface" counters 50w1d
  30 second input rate 218575000 bits/sec, 41199 packets/sec
  30 second output rate 115545000 bits/sec, 30555 packets/sec
     481020016118 packets input, 287815762466192 bytes, 876403 total input drops
     0 drops for unrecognized upper-level protocol
     Received 29 broadcast packets, 39255653 multicast packets
              0 runts, 17 giants, 0 throttles, 0 parity
     17 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     368901547057 packets output, 180820085800502 bytes, 28931652 total output drops
     Output 5 broadcast packets, 39284266 multicast packets
     0 output errors, 0 underruns, 0 applique, 0 resets
     0 output buffer failures, 0 output buffers swapped out
     10 carrier transitions
>show controllers np counters np7  location 0/0/CPU0 | i DROP
Fri Nov  2 15:27:03.815 MSK
  31  PARSE_INGRESS_DROP_CNT                                849353           0
  32  PARSE_EGRESS_DROP_CNT                                1236171           0
  33  RESOLVE_INGRESS_DROP_CNT                              868559           0
  34  RESOLVE_EGRESS_DROP_CNT                           3636654813         293
  37  MODIFY_EGRESS_DROP_CNT                                   669           0
  84  RESOLVE_AGE_NOMAC_DROP_CNT                                 1           0
  85  RESOLVE_AGE_MAC_STATIC_DROP_CNT                    187392316           8
371  MPLS_PLU_DROP_PKT                                          1           0
468  RESOLVE_VPLS_SPLIT_HORIZON_DROP_CNT                 28931887           6
469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3293536501         272
481  RESOLVE_L2_EGR_PW_UIDB_MISS_DROP_CNT                       4           0
491  RESOLVE_VPLS_EGR_PW_FLOOD_UIDB_DOWN_DROP_CNT                 1           0
499  RESOLVE_MAC_NOTIFY_CTRL_DROP_CNT                   313463638          16
500  RESOLVE_MAC_DELETE_CTRL_DROP_CNT                     1591242           0
622  EGR_DHCP_PW_UNTRUSTED_DROP                           1236171           0
Input drops caused by RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT were discussed at https://supportforums.cisco.com/thread/2099283, but how can we apply that to output drops?

The last column of "show controllers np counters np7 location 0/0/CPU0 | i DROP" is a rate in pps, so we see 293 pps for RESOLVE_EGRESS_DROP_CNT and 0 pps for RESOLVE_INGRESS_DROP_CNT. So RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT should be a part of RESOLVE_EGRESS_DROP_CNT, shouldn't it?
Also, the egress drop counters keep increasing, while the ingress drop counters do not:
  33  RESOLVE_INGRESS_DROP_CNT                              868559           0
  34  RESOLVE_EGRESS_DROP_CNT                           3637707596         149
469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294483194         129
And one minute later:
  33  RESOLVE_INGRESS_DROP_CNT                              868559           0
  34  RESOLVE_EGRESS_DROP_CNT                           3637718845         156
469  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT           3294492975         135
Also, there are no new input drops in "sh inter" (while the output drops keep increasing):
sh inter tenGigE 0/0/0/6 | i drops
Fri Nov  2 16:57:39.828 MSK
     481200652943 packets input, 287931866783215 bytes, 876403 total input drops
     0 drops for unrecognized upper-level protocol
     369034005321 packets output, 180881208804090 bytes, 28963679 total output drops
One minute later:
sh inter tenGigE 0/0/0/6 | i drops
Fri Nov  2 16:59:23.441 MSK
     481203274011 packets input, 287933491017363 bytes, 876403 total input drops
     0 drops for unrecognized upper-level protocol
     369035900847 packets output, 180882007120600 bytes, 28964280 total output drops
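In case it helps anyone reading later, my current plan for narrowing this down is roughly the following (I am treating the exact syntax as an assumption, since it differs between IOS XR releases and line cards):
show controllers np ports all location 0/0/CPU0
(confirm which NP actually serves TenGigE0/0/0/6)
show controllers np counters np7 location 0/0/CPU0
(watch RESOLVE_EGRESS_DROP_CNT and RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT over a few intervals)
monitor np counter RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT np7 count 5 location 0/0/CPU0
(if this release supports NP counter capture, grab a few sample packets that hit the counter to see which VPLS traffic is being reflected)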

Similar Messages

  • Increasing Total Output Drops number

    I have an autonomous Cisco AP1242 running on channel 11 (best channel avail) with only one client associated.
    Signal Strength and Channel Utilization look good.
    By design this client is constantly sending UDP/Multicast packets, so I had to disable IGMP Snooping on the AP. However, I have noticed data dropout and have been able to correlate it by running the command:
    show interface dot11radio 0
    Every time I run the above command, the Total Output Drops counter increases:
    Dot11Radio0 is up, line protocol is up
      Hardware is 802.11G Radio, address is 001c.b0eb.eb70 (bia 001c.b0eb.eb70)
      MTU 1500 bytes, BW 54000 Kbit, DLY 1000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input 00:00:00, output 00:00:00, output hang never
      Last clearing of "show interface" counters 00:37:46
      Input queue: 0/1127/0/0 (size/max/drops/flushes); Total output drops: 3178
      Queueing strategy: fifo
      Output queue: 0/30 (size/max)
      5 minute input rate 43000 bits/sec, 14 packets/sec
      5 minute output rate 92000 bits/sec, 17 packets/sec
         29799 packets input, 12551639 bytes, 0 no buffer
         Received 17376 broadcasts, 0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 input packets with dribble condition detected
         41308 packets output, 25121942 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier
         0 output buffer failures, 0 output buffers swapped out
    I cleared the statistics and ran the command after a few minutes.
    Any ideas what could be causing packets to be dropped?
    QOS is disabled on the AP.
    Thanks

    Hi,
    There is only one wireless client.
    Just took a 5 min Wireshark reading and it giving the following:
    Packets: 2286
    Avg. packets/sec: 7.729
    Avg packet size: 671.527 bytes
    Avg bytes/sec: 5190.457
    I am new to this. Is the above considered high volume for one client?
    I just compared wired vs wireless captures... I am only losing packets on the wireless medium.
    When you say that the radio may not have enough buffer... are you referring to the wireless adapter or the Access Point?
    Thanks

  • Output drops on cisco link connecting to F5 Loadbalancer's management port

    On a connection like below:
    Cisco 6509: gi x/y <<-->> F5 BIGIP LTM: mgmt (Management Port)
    We observed incrementing packet drops on the F5 BIGIP mgmt interface.
    Also, at the cisco end, incrementing output drops were observed.
    tcpdump (packet capture) on the F5 BIGIP's mgmt port shows broadcast/multicast packets, including the HSRP hellos, being received from the Cisco device. It is expected behaviour that F5 will reject any packets it can't understand (including CDP, HSRP and other broadcasts), and this will cause the packet drop counter of the F5 BIGIP's mgmt port to increase. (F5 TAC acknowledged this behaviour.)
    Will this cause the output drop counter on the Cisco interface to increase as well?
    Note: On the Cisco interface, I do not see any other errors, and utilisation on the link is very minimal.
    Thanks
    Sudheer Nair

    Hi, this is probably late, but the software counters for output drops on these types of switches (3750s, blade switches) are not reliable.
    What you need to check is "show platform port-asic statistics drop" for a reliable drop counter on an interface. This will give you the hardware counters.
    https://tools.cisco.com/bugsearch/bug/CSCtq86186/?reffering_site=dumpcr
    Switch stack shows incorrect values for output drops/discards
    on show interfaces. For e.g.,
    --- show interfaces ---
    GigabitEthernet2/0/5 is up, line protocol is up (connected)
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4294967163
    Conditions:
    This is seen on Stackable switches running 12.2(58)SE or later.
    Workaround:
    None.

  • OID value for Total output drops

    Hi, we have a Cisco C7200P router at work running IOS 12.4(12.2r)T, and we monitor it using Zenoss 3.1. We want to be able to capture the total output drops for a Gigabit Ethernet interface. I created a custom monitoring template and I added the following data source:
    Name: cieIfOutputQueueDrops
    OID: 1.3.6.1.4.1.9.9.276.1.1.1.1.11
    The total output drops as viewed via the CLI are as follows:
    Input queue: 0/75/1335749/399902 (size/max/drops/flushes); Total output drops: 53882894
    However the graph on Zenoss reports a completely different value of ~360M. Here is the output of snmpwalk:
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.1 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.2 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.3 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.4 = Counter32: 363270064
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.5 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.6 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.7 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.12 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.13 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.14 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.15 = Counter32: 653008
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.26 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.125 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.139 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.140 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.194 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.196 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.254 = Counter32: 0
    SNMPv2-SMI::enterprises.9.9.276.1.1.1.1.11.288 = Counter32: 0
    The value it returns is incorrect. I would appreciate some assistance.

    Did you try using ifOutDiscards (.1.3.6.1.2.1.2.2.1.19)? It is counted as output drops, as shown in the show interfaces command.
    It shows the number of outbound packets that were chosen to be discarded even though no errors had been detected that would prevent them being transmitted. One possible reason for discarding such a packet could be to free up buffer space.
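    If you want to cross-check it quickly with the net-snmp tools, something like the following works (the ".4" instance below is only an example - look up the real ifIndex of your Gigabit interface from ifDescr first):
    snmpwalk -v2c -c public <router> .1.3.6.1.2.1.2.2.1.2      # ifDescr: find the ifIndex of the interface
    snmpget -v2c -c public <router> .1.3.6.1.2.1.2.2.1.19.4    # ifOutDiscards for that ifIndex (4 is illustrative)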
    For more details on interface counters, please check the following document:
    SNMP Counters: Frequently Asked Questions
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**

  • ME 3800 output drops with Copper SFP

    We have installed a copper SFP (GLC-T) in an access port on an ME-3800 running 12.2(52r)EY2.  The port connects to an ONS-CE-100 copper line card on an FE port.  Both ports are set to auto-negotiate.  We see output drops on the interface.  We tried hard-setting the speed on both sides, but the drops persisted.  We tried hard-setting the duplex to full, but that made things worse.  On ports where we use optical SFPs we do not see these issues.
    Has anybody else run into this issue? Does the ME 3800 support auto-negotiation with the copper SFPs?  Any thoughts on this would be appreciated. 

    I'm not familiar with the ME series, but if you can "tune" interface egress buffer/queue sizes, increasing resources for bursts can often mitigate and/or eliminate egress drops.  Of course, this also can increase latency.
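    If the ME series exposes MQC egress policies (I can't confirm that it does, so take this as a sketch only, with illustrative values), that kind of tuning would look something like:
    policy-map EGRESS-BUFFER
     class class-default
      queue-limit 512 packets   ! deeper egress queue to absorb bursts - value is illustrative
    !
    interface GigabitEthernet0/1
     service-policy output EGRESS-BUFFER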

  • 3750ME Total output drops, OutDiscards

    Hi,
    I am testing a 3750ME switch as an L2 device with iperf and an Agilent router tester. I have a physical loop on 2 fastethernet ports - one port is access in vlan A and the other is access in vlan B. On the switch uplink both vlans are allowed. The test traffic comes from the uplink via vlan A, loops to vlan B via the physical loop and then goes back via vlan B through the uplink.
    I have tested a lot of Cisco switches in this way and had no issues until now. Now I have 18 OutDiscards (Total output drops) on one of the fastethernet interfaces, connected via the physical loop.
    The IOS is 12.2(44)SE1. I've read the release notes for this IOS, at
    http://www.cisco.com/en/US/docs/switches/metro/catalyst3750m/software/release/12.2_44_se/release/notes/OL14631.html
    where it says:
    CSCsj53001
    The Total- output-drops field in the show interfaces privileged EXEC command output now displays accurate ASIC drops.
    so the counters are correct.
    I generate less than 5 Mbps of duplex traffic, so the switch should not be overloaded.
    Do you have any idea why I get these 18 output drops?
    Regards,
    Mladen

    Please run a more definitive test - clear the counters and generate much more traffic, like 100 Mbps (full port speed, if you're not using the uplinks on the 3750ME).
    Also, be sure the port is in "switchport" mode, because there can be an issue with MAC addresses when the switch is routing.
    Is your test setup in pure L2? without L3?
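    For example, something along these lines with iperf (standard iperf2 options; the address and rate are just placeholders for your setup):
    # on the receiving host
    iperf -s -u -i 1
    # on the sending host: 100 Mbps of UDP for 60 seconds
    iperf -c 192.0.2.10 -u -b 100M -t 60 -i 1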

  • High output drops

    I am using an Osprey 450e capture card to stream live video using FMLE 3. Midway into the stream the output video looks like it is sticking for a few seconds, and the output Drops (fps) counter is over 4300 ... over a 90-minute period.
    I am streaming at 200k video and 48k audio [mono].

    You are experiencing high frame drops because of an insufficient video bitrate. The encoder is not able to accommodate all frames in 200 kbps. Please try increasing your Video Bitrate value and let me know if it works.

  • Total output drops & dot1dBridgeEventsV2

    I am seeing a lot of "Total output drops: " on the LAN/WAN router. Does anyone have any documents that explain the cause of "Total output drops" and what it is?
    Also I am getting a lot of traps in the LAN, but I can't find documents that explain the event "dot1dBridgeEventsV2". Can you guys guide me to a document where the events are explained?
    Thanks

    Total output drops is the number of packets in the output queue that have been dropped because of a full queue. Check out the following link for troubleshooting input queue drops and output queue drops:
    http://www.cisco.com/warp/public/63/queue_drops.html

  • Could high "Total Output Drops" on one interface on a 3560G, be caused by faulty hardware on another interface?

    Hi All,
    I have been trying to diagnose an issue we have been having with packet loss on video calls (which I think we may have now resolved, as the problem lay elsewhere), but in the process we have trialled some equipment from PathView and this seems to have created a new problem.
    We have a standalone 3560G switch which connects into a provider's 3750G as part of an MPLS network. There is a single uplink to the 3750 from the 3560 (@ 1Gbps) and whilst I can manage the 3560, I have no access to the provider's switch. Our 3560 has a fairly vanilla config on it with no QoS enabled.
    There are only a few ports used on the 3560, mainly for Cisco VCS (Video Conferencing Servers) and a PathView traffic analysis device. The VCS devices are used to funnel videoconferencing traffic across the MPLS network into another institution's network. The PathView device can be used to send traffic bursts (albeit relatively small compared with the bandwidth that is available) across the same route as the VC traffic to an opposing device; however, I have also disabled all of these paths for the moment.
    I can run multiple VC calls which utilise the VCS devices, so traffic is routing into the relevant organisations and everything is good. In fact, I have 5 x 2Mb calls in progress now and there are 0 (or very, very few) errors.
    However, I have actually shut down the port (Gi0/3) connected to the PathView device for the moment. If I re-enable it I start to see a lot of errors on the VC calls, and the Total Output Drops on the UPLINK interface (Gi0/23) start rising rapidly. As soon as I shut down the PathView port again (Gi0/3), the errors stop and all returns to normal.
    I have read that issues on the output queue are often attributed to a congested network/interface, but I don't believe that this is the case in this instance. My 5 VC calls would only come in at 10Mbps, so that is way short of the 1000Mbps available. Even the PathView device only issues bursts of up to 2Mbps, and with the paths actually disabled even this shouldn't be happening, so only a small amount of management traffic should be flowing. Still, as soon as I enable the port, problems start.
    So, is it possible that the port on the switch, the cable or the PathView device is actually faulty and causing such errors? Has anyone seen anything like this?
    Cheers
    Chris

    "As far as I know, such drops shouldn't be caused by faulty hardware, but if the hardware is really faulty, you would need to involve TAC."
    Ok, thanks.
    "BTW, all the other interfaces, which have the low bandwidth rates you describe, are physically running at low bandwidth settings on the interface, e.g. 10 Mbps?  If not, you can have short transient micros bursts which can cause drops.  This can happen even when average bandwidth utilization is low.  (NB: if these other ports average utilization is so low, if not already doing so, you could run the ports at 10 Mbps too.)"
    No. All ports on the switch connect to devices with 1Gb-capable interfaces. They have been left to auto-negotiate and have negotiated at 1000/full. The bandwidth described is more with regard to the actual data throughput of a call. Technically, the VCS devices are licensed to handle 50 simultaneous calls of up to 4Mbps, so they could potentially require a bandwidth of 200Mbps, although it is unlikely that we will see this amount of traffic.
    "Also, even if you have physically low bandwidth ingress, with a high bandwidth egress, and even if the egress's bandwidth is more than the aggregate of all the ingress, you can still have drops caused by concurrent arrivals."
    In general, the ingress and the egress should be similar. Think of this as a stub network - one path in and out (via Gi0/23). The VCS act as a kind of proxy/router for video traffic, simply terminating inbound legs and generating a next-hop outbound leg. The traffic coming in to the VCS should be the same as the traffic going out.
    There will of course be certain management traffic, but this will be relatively low volume, and of course the PathView traffic analyser can generate a burst of UDP packets to simulate voice traffic.
    "Some other "gotchas" include, you mention you don't have QoS configured, but you're sure QoS is disabled too?"
    Yes.
    switch#show mls qos
    QoS is disabled
    QoS ip packet dscp rewrite is enabled
    I can't see a lot of point enabling QoS on this particular switch. Pretty much all of the traffic passing through it will be QoS-tagged at the same level. Therefore it is ALL prioritised.
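    A side note from my digging, in case it helps anyone later: the per-queue egress drop counters on these switches only become meaningful once mls qos is globally enabled, for example:
    show mls qos interface gigabitEthernet 0/23 statistics
    Bear in mind that enabling QoS also changes the buffer carving on the 3560, so it isn't a change I would make casually.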
    Indeed, running a test overnight with these multiple calls live and the PathView port shut down resulted in 0 Total Output Drops. Each leg did suffer a handful of dropped packets end-to-end, but I think I can live with 100 packets dropped in 10 million during a 12-hour period (and this, I suspect, will be somewhere else on the network).
    "Lastly, Cisco has documented, at least for the 3750X, that uplink ports have the same buffer RAM resources as 24 copper edge ports.  Assuming the earlier series are similar, there might be benefit to moving your uplink, now on g0/23, to an uplink port (if your 3650G has them)."
    Unfortunately, no can do. We are limited to the built-in ports on the switch as we have no SFP modules installed.
    Apologies about the formatting - this is yet another thing that has been broken in these new forums. It looks a lot better in the Reply window than it does in this normal view.

  • DMVPN in Cisco 3945 output drop in tunnel interface

    I configured DMVPN on a Cisco 3945 and checked the tunnel interface. I found that I have output drops. How can I eliminate these output drops? I have already set the ip mtu to 1400.
    CORE-ROUTER#sh int tunnel 20
    Tunnel20 is up, line protocol is up
      Hardware is Tunnel
      Description: <Voice Tunneling to HO>
      Internet address is 172.15.X.X./X
      MTU 17878 bytes, BW 1024 Kbit/sec, DLY 50000 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation TUNNEL, loopback not set
      Keepalive not set
      Tunnel source 10.15.X.X (GigabitEthernet0/1)
       Tunnel Subblocks:
          src-track:
             Tunnel20 source tracking subblock associated with GigabitEthernet0/1
              Set of tunnels with source GigabitEthernet0/1, 1 member (includes iterators), on interface <OK>
      Tunnel protocol/transport multi-GRE/IP
        Key 0x3EA, sequencing disabled
        Checksumming of packets disabled
      Tunnel TTL 255, Fast tunneling enabled
      Tunnel transport MTU 1438 bytes
      Tunnel transmit bandwidth 8000 (kbps)
      Tunnel receive bandwidth 8000 (kbps)
      Tunnel protection via IPSec (profile "tunnel_protection_profile_2")
      Last input 00:00:01, output never, output hang never
     --More--           Last clearing of "show interface" counters never
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 7487
      Queueing strategy: fifo
      Output queue: 0/0 (size/max)
      30 second input rate 0 bits/sec, 0 packets/sec
      30 second output rate 0 bits/sec, 0 packets/sec
         48007 packets input, 4315254 bytes, 0 no buffer
         Received 0 broadcasts (0 IP multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
         42804 packets output, 4638561 bytes, 0 underruns
         0 output errors, 0 collisions, 0 interface resets
         0 unknown protocol drops
         0 output buffer failures, 0 output buffers swapped out
    interface Tunnel20
     description <Bayantel Voice tunneling>
     bandwidth 30720
     ip address 172.15.X.X 255.255.255.128
     no ip redirects
     ip mtu 1400
     no ip next-hop-self eigrp 20
     no ip split-horizon eigrp 20
     ip nhrp authentication 0r1x@IT
     ip nhrp map multicast dynamic
     ip nhrp network-id 1002
     ip nhrp holdtime 300
     ip tcp adjust-mss 1360
     tunnel source FastEthernet0/0/1
     tunnel mode gre multipoint
     tunnel key 1002
     tunnel protection ipsec profile tunnel_protection_profile_2 shared

    Hi,
    Thanks for the input. If the radio is sending out the packet but the client did not receive it, no output drop should be seen, since the packet was sent out, right?
    From my understanding, output drops are related to a congested interface: the outgoing interface cannot keep up with the rate at which packets are coming in and thus drops them. What I don't understand is that the input and output rates have not reached the limit yet. Also, the input queue is seeing packet drops even though the input queue is empty.
    Any idea?
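    One thing I am considering, if egress congestion really is the cause, is to shape the tunnel's output to the real upstream rate so that bursts are queued in a managed way instead of being tail-dropped. This is only a sketch - the 8 Mbps figure is illustrative and I still need to confirm that an output service-policy with shaping is supported on this tunnel setup:
    policy-map SHAPE-TO-UPSTREAM
     class class-default
      shape average 8000000   ! bps - set this to the real upstream rate
    !
    interface Tunnel20
     service-policy output SHAPE-TO-UPSTREAM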

  • WAAS interface - Input queue: output drops

    I'm seeing total output drops increment every now and then. We are using a 3750E stack switch and are configured for WCCP L2 forward and return. Does anyone know why I'm seeing output drops on the WAAS-connected interface? The WAAS interfaces are set up as standby. The model is 7371...
    interface GigabitEthernet1/0/4
    description ****WAAS1 GIG 1/0****
    switchport access vlan 738
    mls qos trust dscp
    spanning-tree portfast
    end
    GigabitEthernet1/0/4 is up, line protocol is up (connected)
    Hardware is Gigabit Ethernet, address is 0022.be97.9804 (bia 0022.be97.9804)
    Description: ****WAAS1 GIG 1/0****
    MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
    Encapsulation ARPA, loopback not set
    Keepalive set (10 sec)
    Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
    input flow-control is off, output flow-control is unsupported
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input 00:00:03, output 00:00:00, output hang never
    Last clearing of "show interface" counters never
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 281
    Queueing strategy: fifo
    Output queue: 0/40 (size/max)
    5 minute input rate 5967000 bits/sec, 1691 packets/sec
    5 minute output rate 5785000 bits/sec, 1606 packets/sec
    9301822868 packets input, 3537902554734 bytes, 0 no buffer
    Received 179580 broadcasts (172889 multicasts)
    0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 172889 multicast, 0 pause input
    0 input packets with dribble condition detected
    7661948806 packets output, 2639805900461 bytes, 0 underruns
    0 output errors, 0 collisions, 5 interface resets
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 PAUSE output
    0 output buffer failures, 0 output buffers swapped out

    It looks like this could be related:
    CSCtf27580 Ethernet interface input queue wedge from broadcast/uniGRE traffic
    Is there any GRE traffic going through this AP?
    The workarounds are:
    Reboot APs to bring APs back up for time being.
    OR
    go back to 6.0.188.0 code on WLC.
    OR
    Route GRE traffic away from AP's.
    It appears that it definitely exists in your code:
    12.4(21a)JHA          12.4(21a)JA01          006.000(196.000)

  • Core audio output drops out after upgrading boot HD

    All,
    I have a strange thing happening after I cloned and replaced my OS boot drive. I cloned my OS boot drive to another, larger drive. When I boot to this new drive and open projects (located on another dedicated HD), playing certain instrument elements causes all of the audio to drop out. It makes a pop, then another pop where the audio would tail out. So if I had a piano EXS instrument doing this and I play a note, I get a pop, then, where the audio would finish accessing the file, I would get another pop. This affects the entire output of all audio in the project. I don't think this affects iTunes, but I will check again tonight.
    In some cases, I have narrowed this down to a custom IR file loaded in Space designer. When I bypass this plug with a custom IR loaded, all audio will work fine, until I un-bypass this plug. If I reload this IR file, everything works fine.
    This now is happening on some of the Logic drumkits and EXS instruments and reloading the questionable instrument does not fix this issue. It seems like a possible database/pathing issue, but not sure.
    Has anyone seen behavior like this? Any way to fix?
    Some things I have tried on this new OS clone:
    Repair permissions
    Remove the logic plist file and reboot
    Run all of the maintenance/clean up scripts in Onyx
    BTW: This occurred with 8.0.1 and also 8.0.2
    If I physically re-install my original OS boot drive, these projects work fine.
    I am using Core Audio and am using the embedded audio interface. I also have a PCIx UAD card installed.

    This thread got lost with the 8.0.2 discussions.
    Has anyone run into this kind of behavior or any possible steps to recover?
    Thanks!
    Tad

  • All playback/output dropping out.

    I'm using an M-Audio mobilepre usb audio interface and keep experiencing the same problem in both Garageband and Logic Express. During playback, all sound will drop entirely. The signal is still there for all tracks (only 6 tracks so not too much going on considering this is a new imac). I have to unplug my audio interface, answer yes that an audio device has been removed, replug in the interface and it picks it up on the scan. Voila~ All is well again for a while, then it drops again. Any suggestions? Thanks!!!

    In fact, you can add markers directly in Mainstage - open up the playback plug-in, put your cursor exactly on the point of the waveform where you want your marker and control-click to open the pop-up menu from which you can add, rename and remove markers. Click/drag on the waveform to move to the required location.
    To sync, just set the tempo of each song at the set or patch level (others have noted problems getting everything to sync properly, so to avoid this problem I usually just create an audio file of my click track and load it up into a separate playback channel - this also makes it easier to route it separately to your FOH output).
    As for tempo change, well, I'm afraid MS2 doesn't support tempo change as such, although if you vary the tempo of your backings in PT and bounce down a stem, then that is just an audio file, and when MS2 plays it back it will play as recorded (ie with tempo changes). This is another good reason for using an audio file of a click track.
    Good luck.
    Message was edited by: littleeden

  • Total output drops

    Hi people
    I have a problem with dropped packets on many interfaces of a WS-C6506-E (R7000) (s72033-ipservices_wan-mz.122-33.SXJ3.bin).
    In the picture you can see that most of the interfaces have servers connected.
    What can be the cause of this, and what troubleshooting can I do to find the source of the problem?
    Thanks

    Thanks for your response aninchat.
    Yes, these drops increment in bursts and then stay stable for a while, just as you say.
    Below is the output of show counters, show queueing, show module, and show interface vlan 30.
    #show counters interface g1/21
    64 bit counters:
     0.                      rxHCTotalPkts = 361202
     1.                      txHCTotalPkts = 1411690021
     2.                    rxHCUnicastPkts = 361200
     3.                    txHCUnicastPkts = 410142854
     4.                  rxHCMulticastPkts = 0
     5.                  txHCMulticastPkts = 900686865
     6.                  rxHCBroadcastPkts = 2
     7.                  txHCBroadcastPkts = 100860302
     8.                         rxHCOctets = 33528526
     9.                         txHCOctets = 374407450833
    10.                 rxTxHCPkts64Octets = 678400385
    11.            rxTxHCPkts65to127Octets = 352093183
    12.           rxTxHCPkts128to255Octets = 157537314
    13.           rxTxHCPkts256to511Octets = 35132219
    14.          rxTxHCpkts512to1023Octets = 21865647
    15.         rxTxHCpkts1024to1518Octets = 167022475
    16.                    txHCTrunkFrames = 0
    17.                    rxHCTrunkFrames = 0
    18.                     rxHCDropEvents = 0
    32 bit counters:
     0.                   rxCRCAlignErrors = 0
     1.                   rxUndersizedPkts = 0
     2.                    rxOversizedPkts = 0
     3.                     rxFragmentPkts = 0
     4.                          rxJabbers = 0
     5.                       txCollisions = 0
     6.                         ifInErrors = 0
     7.                        ifOutErrors = 0
     8.                       ifInDiscards = 0
     9.                  ifInUnknownProtos = 0
    10.                      ifOutDiscards = 79993129
    11.            txDelayExceededDiscards = 0
    12.                              txCRC = 0
    13.                         linkChange = 1
    14.                   wrongEncapFrames = 0
    All Port Counters
     1.                          InPackets = 361204
     2.                           InOctets = 33528702
     3.                        InUcastPkts = 361202
     4.                        InMcastPkts = 0
     5.                        InBcastPkts = 2
     6.                         OutPackets = 1411691426
     7.                          OutOctets = 374408006548
     8.                       OutUcastPkts = 410143207
     9.                       OutMcastPkts = 900687369
    10.                       OutBcastPkts = 100860850
    11.                           AlignErr = 0
    12.                             FCSErr = 0
    13.                            XmitErr = 0
    14.                             RcvErr = 0
    15.                          UnderSize = 0
    16.                          SingleCol = 0
    17.                           MultiCol = 0
    18.                            LateCol = 0
    19.                       ExcessiveCol = 0
    20.                       CarrierSense = 0
    21.                              Runts = 0
    22.                             Giants = 0
    23.                         InDiscards = 0
    24.                        OutDiscards = 79993129
    25.                           InErrors = 0
    26.                          OutErrors = 0
    27.                    InUnknownProtos = 0
    28.                              txCRC = 0
    29.                      TrunkFramesTx = 0
    30.                      TrunkFramesRx = 0
    31.                         WrongEncap = 0
    32.     Broadcast_suppression_discards = 0
    33.     Multicast_suppression_discards = 0
    34.       Unicast_suppression_discards = 0
    35.                 rxTxHCPkts64Octets = 678400875
    36.            rxTxHCPkts65to127Octets = 352093605
    37.           rxTxHCPkts128to255Octets = 157537435
    38.           rxTxHCPkts256to511Octets = 35132277
    39.          rxTxHCpkts512to1023Octets = 21865661
    40.         rxTxHCpkts1024to1518Octets = 167022777
    41.                         DropEvents = 0
    42.                     CRCAlignErrors = 0
    43.                     UndersizedPkts = 0
    44.                      OversizedPkts = 0
    45.                       FragmentPkts = 0
    46.                            Jabbers = 0
    47.                         Collisions = 0
    48.              DelayExceededDiscards = 0
    49.                        bpduOutlost = 0
    50.                        qos0Outlost = 79993129
    51.                        qos1Outlost = 0
    52.                        qos2Outlost = 0
    53.                        qos3Outlost = 0
    54.                        qos4Outlost = 0
    55.                        qos5Outlost = 0
    56.                        qos6Outlost = 0
    57.                        qos7Outlost = 0
    58.                        qos8Outlost = 0
    59.                        qos9Outlost = 0
    60.                       qos10Outlost = 0
    61.                       qos11Outlost = 0
    62.                       qos12Outlost = 0
    63.                       qos13Outlost = 0
    64.                       qos14Outlost = 0
    65.                       qos15Outlost = 0
    66.                       qos16Outlost = 0
    67.                       qos17Outlost = 0
    68.                       qos18Outlost = 0
    69.                       qos19Outlost = 0
    70.                       qos20Outlost = 0
    71.                       qos21Outlost = 0
    72.                       qos22Outlost = 0
    73.                       qos23Outlost = 0
    74.                       qos24Outlost = 0
    75.                       qos25Outlost = 0
    76.                       qos26Outlost = 0
    77.                       qos27Outlost = 0
    78.                       qos28Outlost = 0
    79.                       qos29Outlost = 0
    80.                       qos30Outlost = 0
    81.                       qos31Outlost = 0
    82.                    bpduCbicOutlost = 0
    83.                    qos0CbicOutlost = 0
    84.                    qos1CbicOutlost = 0
    85.                    qos2CbicOutlost = 0
    86.                    qos3CbicOutlost = 0
    87.                         bpduInlost = 0
    88.                         qos0Inlost = 0
    89.                         qos1Inlost = 0
    90.                         qos2Inlost = 0
    91.                         qos3Inlost = 0
    92.                         qos4Inlost = 0
    93.                         qos5Inlost = 0
    94.                         qos6Inlost = 0
    95.                         qos7Inlost = 0
    96.                         qos8Inlost = 0
    97.                         qos9Inlost = 0
    98.                        qos10Inlost = 0
    99.                        qos11Inlost = 0
    100.                        qos12Inlost = 0
    101.                        qos13Inlost = 0
    102.                        qos14Inlost = 0
    103.                        qos15Inlost = 0
    104.                        qos16Inlost = 0
    105.                        qos17Inlost = 0
    106.                        qos18Inlost = 0
    107.                        qos19Inlost = 0
    108.                        qos20Inlost = 0
    109.                        qos21Inlost = 0
    110.                        qos22Inlost = 0
    111.                        qos23Inlost = 0
    112.                        qos24Inlost = 0
    113.                        qos25Inlost = 0
    114.                        qos26Inlost = 0
    115.                        qos27Inlost = 0
    116.                        qos28Inlost = 0
    117.                        qos29Inlost = 0
    118.                        qos30Inlost = 0
    119.                        qos31Inlost = 0
    120.                         pqueInlost = 0
    121.                           Overruns = 0
    122.                           maxIndex = 0
    show queueing interface g1/21
    Interface GigabitEthernet1/21 queueing strategy:  Weighted Round-Robin
      Port QoS is enabled
    Trust boundary disabled
      Port is untrusted
      Extend trust state: not trusted [COS = 0]
      Default COS is 0
        Queueing Mode In Tx direction: mode-cos
        Transmit queues [type = 1p3q8t]:
        Queue Id    Scheduling  Num of thresholds
           01         WRR                 08
           02         WRR                 08
           03         WRR                 08
           04         Priority            01
        WRR bandwidth ratios:  100[queue 1] 150[queue 2] 200[queue 3]
        queue-limit ratios:     50[queue 1]  20[queue 2]  15[queue 3]  15[Pri Queue]
        queue tail-drop-thresholds
        1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        queue random-detect-min-thresholds
          1    40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
          2    40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
          3    70[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
        queue random-detect-max-thresholds
          1    70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
          2    70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
          3    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
        WRED disabled queues:
        queue thresh cos-map
        1     1      0
        1     2      1
        1     3
        1     4
        1     5
        1     6
        1     7
        1     8
        2     1      2
        2     2      3 4
        2     3
        2     4
        2     5
        2     6
        2     7
        2     8
        3     1      6 7
        3     2
        3     3
        3     4
        3     5
        3     6
        3     7
        3     8
        4     1      5
        Queueing Mode In Rx direction: mode-cos
        Receive queues [type = 1q2t]:
        Queue Id    Scheduling  Num of thresholds
           1         Standard            2
        queue tail-drop-thresholds
        1     100[1] 100[2]
        queue thresh cos-map
        1     1      0 1 2 3 4 5 6 7
        1     2
      Packets dropped on Transmit:
        BPDU packets:  0
        queue thresh             dropped  [cos-map]
        1     1                 5001508  [0 ]
        1     2                       0  [1 ]
        2     1                       0  [2 ]
        2     2                       0  [3 4 ]
        3     1                       0  [6 7 ]
        4     1                       0  [5 ]
      Packets dropped on Receive:
        BPDU packets:  0
        queue thresh              dropped  [cos-map]
        1     1                       0  [0 1 2 3 4 5 6 7 ]
    show module
    Mod Ports Card Type                              Model
      1   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      2   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      3   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX
      5    5  Supervisor Engine 720 10GE (Active)    VS-S720-10G
    interface Vlan30 (CORE 1)
     ip address x.x.x.x a.b.c.d secondary
     ip address x.x.x.x a.b.c.d secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x
     no ip redirects
     no ip proxy-arp
     glbp 30 ip x.x.x.x
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 priority 105
    interface Vlan30 (core 2)
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x secondary
     ip address x.x.x.x x.x.x.x
     no ip redirects
     no ip proxy-arp
     glbp 30 ip x.x.x.x
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
     glbp 30 ip x.x.x.x secondary
    end

  • PCI-6723 voltage output drop observed with 8-channel continuous update at 100 kS/s

    Hi all,
    I am working with LabVIEW 7.1, NI-DAQmx 7.3 and a PCI-6723 card. I am generating 8 analog outputs simultaneously, and the voltages and frequency can be changed on the fly. The problem is that during run time I am observing a voltage drop of 1 mV to 4 mV, which is really bad for me. Out of the 8 channels, 6 are DC voltages, and for these channels I am also sampling at 100 kS/s. Is this really a problem, given that a DC voltage needs only 1 sample? If sampling is the problem, how can I change the sampling frequency or number of samples for individual channels? Please go through the attachment.
    THANKS AND REGARDS
    LABVIEW BOY
    Attachments:
    multipleanalog9.vi 154 KB

    Here is the modified vi. Hope it works or at least illustrates the idea.
    Attachments:
    multipleanalog9_modified_1.vi 140 KB
