High sync, low throughput - ongoing issue

Hello all - any ideas or help gratefully received.
Since 17th November I've had speed problems on my ADSL connection.  It started with a drop in speed, which I reported as a fault.  I received a fault number and spoke to customer support colleagues in India, who could not resolve it.  All the usual steps: test socket, filter change, resets.  It was escalated, even though I noticed the fault was reported as resolved on my BT profile.
My IP profile was reset and I was sent a new hub.  Initially the IP reset seemed to work - the hub made no difference.  A few days later it happened again.  So, I should have a download speed of around 8 Mbps but I get 1 Mbps.  I live opposite the exchange.
I raised it again and was given a new fault number around the 3rd December.  A week of calls every few days to and from India - they could see what I could see: high sync rate, low throughput.  The line shows the right speed, the IP profile shows the right speed, but throughput is stuck at 1 Mbps.  The fault was escalated and I also raised it with the complaints team.
On the 10th Dec an engineer went to the exchange (across the road) and I was called to say it was all fixed.  It wasn't.  On the 12th Dec an engineer came to the house.  No problem found.  PQ Test, Eclipse Test, APTS Test, BTAS Test, Ping Test - all passed.  Cabling in the house fine.  Changed wires, hubs etc - no change.  He was convinced it was the exchange, but he could not fix it.
Now dealing with helpful BT complaints handlers.  They are helpful in terms of contact, but there is still no resolution or detail, or even a sense that anyone knows what the problem is.  Since Thursday 12th I have been told every few days that BT Wholesale are 'investigating' and 'testing' and that I should wait 48 hours - most recently yesterday.
This has been going on too long - communication with BT is OK but they just can't fix it. Any ideas gratefully received.
Merry Christmas and kindest regards
Richie

Connection Information
Line state: Connected
Connection time: 4 days, 04:47:16
Downstream: 7.938 Mbps
Upstream: 448 Kbps
ADSL Settings
VPI/VCI: 0/38
Type: PPPoA
Modulation: G.992.1 Annex A
Latency type: Interleaved
Noise margin (Down/Up): 15.6 dB / 25.0 dB
Line attenuation (Down/Up): 0.4 dB / 0.5 dB
Output power (Down/Up): 7.9 dBm / 12.1 dBm
FEC Events (Down/Up): 29171 / 0
CRC Events (Down/Up): 31 / 0
Loss of Framing (Local/Remote): 0 / 0
Loss of Signal (Local/Remote): 0 / 0
Loss of Power (Local/Remote): 0 / 0
HEC Events (Down/Up): 47 / 0
Error Seconds (Local/Remote): 1048 / 5

Similar Messages

  • 2800s, AIM-VPN-SSL2, vrf aware IPSEC, high CPU low throughput

    We have a couple of new 2821s deployed across a fibre link; they were originally running 12.4 (non-T) versions using software encryption and we would get around 8 Mb/s throughput. After upgrading to a T release to use the installed AIM cards, we now see the AIM cards in use (show cry isakmp sa det shows the engine as AIM VPN), but we still get the same throughput and high CPU. Enabling CEF on the interface doubles throughput, but with the same high CPU. The only process I can see going high is IP Input. Is this because of VRF-aware IPsec - or any other suggestions?

    Hi Nick,
    I am having the same issue. We have a 2851 as an IPsec VPN headend with an AIM VPN module, but we are seeing high CPU usage (80%) with just 4-5 Mbps of traffic. I suspect I might have a NAT issue.
    We are currently running NAT, ZFW, and IPsec site-to-site VPN on the router.
    When I look at my zone firewall policy-map output, it shows all of my VPN traffic as process switched.
    Inspect
    Packet inspection statistics [process switch:fast switch]
    tcp packets: [14809800:0]
    udp packets: [145107:0]
    icmp packets: [20937:12]
    I have disabled the ZFW and still see high CPU, although it is a little lower.
    Packets are not fragmented, and CEF and fast switching look to be enabled. I am using a route-map for my no-NAT statements. That is the only thing I can think of now.
    I have tried IOS 12.4(20)T3,4 and 12.4(15)T9. Same results.
    Anyone have some ideas?

  • Two xmas presents from BT - higher sync, lower ip ...

    Hi there.
    After months of reasonably consistent 2.5 Mbit, I noticed my hub reconnected at 8.8 Mbit - wow! My noise margin has gone down from 20 to around 14. Perhaps the addition of xmas tree lights has acted as a noise-cancelling device. (Joke - the lights aren't always on.)
    Unfortunately this coincided with my IP profile being lowered from 2.0 (when I had 2.5 Mbit) to 1.0 (now that I have 8.8).
    I would imagine the BRAS dropped to 1.0 while my line was being run in for the 10-day period BT have mandated. Do I simply let it run its course for a further 7 days? Even though they say 10 days, does this normally equalise more quickly? I can't wait to start downloading 2 things!
    Connection information
    Line state
    Connected
    Connection time
    3 days, 0:25:12
    Downstream
    8,867 Kbps
    Upstream
    888 Kbps
    ADSL settings
    VPI/VCI
    0/38
    Type
    PPPoA
    Modulation
    ITU-T G.992.5
    Latency type
    Interleaved
    Noise margin (Down/Up)
    13.8 dB / 10.0 dB
    Line attenuation (Down/Up)
    33.5 dB / 16.2 dB
    Output power (Down/Up)
    0.0 dBm / 7.1 dBm
    Loss of Framing (Local)
    33
    Loss of Signal (Local)
    3
    Loss of Power (Local)
    0
    FEC Errors (Down/Up)
    713261 / 4294967169
    CRC Errors (Down/Up)
    1376 / 2147480000
    HEC Errors (Down/Up)
    nil / 0
    Error Seconds (Local)
    1995

    BT haven't informed me of any changes to my service level. 
    I am plugged into the master socket. This is internally routed into the test socket via the faceplate. I have no extensions. I had this master socket installed in July after months of various problems (any call to the BT number would reboot the hub) with the 1960s master socket that I found after I moved in last year.
    I'm pretty sure the line noise is outside of my flat. I cannot prove it but I'm pretty sure since I've turned everything off and this didn't change anything. I live in the Barbican complex London, which has some seriously old BT lines. I noted a week ago my usual noise of 20dB had dropped to around 13dB. And again this has been constant whether I switch everything off or not.
    I don't have a corded phone so I cannot check the quiet line test.
    Here are the results: ip profile - you can plainly see it's set to 1000. 
    What I reckon is that the week old drop in noise has triggered my "upgrade"...
    Test 1 comprises two tests.
    1. Best Effort Test: provides background information.
     Download speed achieved during the test was - 685 Kbps
     For your connection, the acceptable range of speeds is 400-1000 Kbps.
     Additional Information:
     Your DSL Connection Rate: 8867 Kbps (downstream), 888 Kbps (upstream)
     IP Profile for your line is - 1000 Kbps
    The throughput of Best Efforts (BE) classes achieved during the test is - 4.41:18.68:76.91 (SBE:NBE:PBE)
    These figures represent the ratio while simultaneously passing Sub BE, Normal BE and Priority BE marked traffic.
    The results of this test will vary depending on the way your ISP has decided to use these traffic classes.
    2. Upstream Test: provides background information.
     Upload speed achieved during the test was - 727 Kbps
     Additional Information:
     Upstream Rate IP profile on your line is - 888 Kbps
    We were unable to identify any performance problem with your service at this time.
    It is possible that any problem you are currently experiencing, or have previously experienced, may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
    If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.
    Please visit the FAQ section if you are unable to understand the test results.

  • CUA sync with child client issue for indirect role assignment.

    Hello Security experts,
    We have an indirect role assignment set up in our ECC environment. There is a synchronization issue from the parent CUA to the child client. The role assignments have been made, but they do not always reach the target system without manually syncing either the role or the ID's position. This has been an ongoing issue CUA has with any role or user from time to time. Any hints on fixing this issue? Please help.

    The whole idea of CUA is to manage your roles and users centrally; you can manage the roles/profiles by setting up the attributes for the CUA through the Central User Management console (transaction SCUM).
    CUA has its own pros: central repository, user sync, and a role provisioning strategy - global composites (consisting of individual child roles) or a distributed model (provisioning roles at the individual child systems) - plus a central user store and easy maintenance.
    On the other hand, change documents are always a concern (because CUA uses interface IDs, i.e. the RFC IDs, to push the IDocs from CUA to the child systems); during a system refresh, copied distribution models have to be deleted and re-created; system backups have to be defined per your distribution model; if password maintenance is defined as global, the child systems act as inactive nodes; and roles created in the child systems have to be read into CUA to establish a pointer to that system.
    It also depends on the number of systems in your landscape, so that you can calculate the overhead and then make a go/no-go decision on CUA.
    Overall, I consider CUA a good approach provided we streamline the process of provisioning and de-provisioning per the CUA standards.
    Rakesh

  • Identify meter reading is too high, too low, correct

    Hi all,
    I have a requirement where the customer will enter a meter reading online (Java portal).
    After that I need to check whether the meter reading is too high, too low, or correct,
    and whether the date of the meter reading is early, late, or correct.
    Is there any standard FM (function module) which I can use?
    Or do we have maximum/minimum values predefined for meter readings in any table?
    I have checked table TE121, but no use.
    Please guide me.
    Regards,
    sidh

    Hi,
    The way the others have suggested is correct: you can use the tolerance in config validation.
    You can enter a consumption range against a validation class with positive and negative deviation.
    Example:
    RESI               1 to 100                +ve deviation 300                   -ve deviation 30
    This validation works on the base period category.
    If the base period is the previous period, it will take the consumption of the previous month (else the previous period last year).
    Say the previous month had 30 days and 900 units were consumed, i.e. 30 units per day.
    Now we calculate the number of days to be assessed and multiply it by the per-day consumption; this yields the expected consumption. For example, if 28 days are assessed: 28 * 30 = 840 units.
    From the expected consumption you can calculate the min/max consumption depending on what you have defined in the tolerance:
    a 300% positive deviation means 400% in total: 840 * 4 = 3360 units max consumption;
    a 30% negative deviation means 70% in total (100 - 30): 840 * 0.7 = 588 units min consumption.
    In this range (588 to 3360 units) the consumption will not be implausible; otherwise the system will automatically make the reading implausible.
    So:
    Expected MR                   last reading + 840
    Upper limit of MR             last reading + 3360
    Lower limit of MR             last reading + 588
    I hope it will help in solving your issue.
    Regards
    rohjin
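    The arithmetic above can be sketched as a short routine (Python used purely for illustration - the function name and parameters are hypothetical, and the numbers are the example values from the reply, not values read from any SAP table):

```python
def meter_reading_limits(last_reading, per_day_consumption, assessed_days,
                         pos_deviation_pct, neg_deviation_pct):
    """Expected/min/max meter readings from a base-period consumption
    and the +/- deviations of the validation class."""
    expected = per_day_consumption * assessed_days
    max_units = expected * (1 + pos_deviation_pct / 100)  # +300% -> 400% of expected
    min_units = expected * (1 - neg_deviation_pct / 100)  # -30%  -> 70% of expected
    return {
        "expected_mr": last_reading + expected,
        "upper_limit_mr": last_reading + max_units,
        "lower_limit_mr": last_reading + min_units,
    }

# Example from the reply: 900 units over a 30-day month = 30 units/day,
# 28 assessed days, +300% / -30% deviations, last reading taken as 0.
limits = meter_reading_limits(0, 900 / 30, 28, 300, 30)
```

    Readings between the lower and upper limits would be plausible; anything outside would be flagged implausible.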

  • How can I extract high and low parts of a string that represents a 64bits decimal number?

    I want to extract the high and low parts to interpret them and convert to binary code, but with such a huge number (represented by a string) it isn't easy to extract the high and low parts from the string directly.

    LabVIEW can't handle a 64-bit integer. You will have to store it as two 32-bit integers. If you need exact math on those 64-bit integers, you will have to write your own routines to handle the carries and so on. If you just need reasonably good accuracy, convert both 32-bit integers to double, multiply the upper 32-bit number by 2^32 (also a double), and then add the lower 32-bit number to it.
    Good luck.
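    For reference, the split itself is just integer arithmetic. A sketch (in Python rather than LabVIEW, purely to show the math): parse the decimal string, take the upper and lower 32 bits, then optionally recombine as a double the way the reply describes - which is approximate for values beyond 2^53:

```python
def split_u64(decimal_string):
    """Split a decimal string holding a 64-bit unsigned value
    into its high and low 32-bit parts."""
    n = int(decimal_string)
    assert 0 <= n < 2**64, "value does not fit in 64 bits"
    high = n >> 32            # upper 32 bits
    low = n & 0xFFFFFFFF      # lower 32 bits
    return high, low

high, low = split_u64("1311768467463790320")   # 0x123456789ABCDEF0
approx = float(high) * 2.0**32 + float(low)    # the double-precision trick from the reply
```

    The double result is exact only up to 2^53; beyond that it carries a relative error on the order of 1e-16, which is what "pretty good accuracy" means here.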

  • Make USB-6001 digital output always high or low in C

    Hello all,
    I am new to the NI DAQ interface. I have a USB-6001 and I am trying to use this device to control some logic circuitry from C. What I want to do is:
    * set some digital output lines to high or low levels, and change their status when needed (in C).
    I have tested the device in NI MAX -> Test Panels and found that the device is able to do this. Then I tried to do it in C. I have checked the examples, and the function I should use is the one called "DAQmxWriteDigitalU32". I have problems understanding its input parameters. I tried something based on my own understanding, but it does not work as I expected. Here is one test I did:
    uInt32   data=1;
    int32 written;
    TaskHandle taskHandle=0;
    DAQmxErrChk (DAQmxCreateTask("",&taskHandle));
    DAQmxErrChk (DAQmxCreateDOChan(taskHandle,"Dev1/port0/line7","",DAQmx_Val_ChanForAllLines));
    DAQmxErrChk (DAQmxStartTask(taskHandle));
    DAQmxErrChk (DAQmxWriteDigitalU32(taskHandle,1,1,10.0,DAQmx_Val_GroupByChannel,&data,&written,NULL));
    taskHandle=0;
    DAQmxErrChk (DAQmxCreateTask("",&taskHandle));
    DAQmxErrChk (DAQmxCreateDOChan(taskHandle,"Dev1/port0/line0","",DAQmx_Val_ChanForAllLines));
    DAQmxErrChk (DAQmxStartTask(taskHandle));
    DAQmxErrChk (DAQmxWriteDigitalU32(taskHandle,1,1,10.0,DAQmx_Val_GroupByChannel,&data,&written,NULL));
    I want to simply set "Dev1/port0/line7" and "Dev1/port0/line0" to high level, but only "Dev1/port0/line0" responds. The second parameter of DAQmxWriteDigitalU32 corresponds to numSampsPerChan. If I replace it (currently 1) with a larger value, e.g. 100, I can see that "Dev1/port0/line7" sends out a number of 1s, then returns to 0. So I guess the problem is just that I do not understand all the parameters of DAQmxWriteDigitalU32 well. Can anyone please tell me how I can set a digital output line to 1 or 0?
    Thanks!
    Hongkun

    Hello,
    Here is a link explaining the inputs of the function:
    http://zone.ni.com/reference/en-XX/help/370471W-01/daqmxcfunc/daqmxwritedigitalu32/
    Also, here you can find a lot of examples in ANSI C:
    http://www.ni.com/example/6999/en/
    Randy @Rscd27@

  • What are high and low values in sharepoint 2013 user permissions?

    So I hit this API:
    http://win-a3q7ml82p8f/sharepoint_site/_api/web/roledefinitions/
    And got back different high and low values, but I am not clear on what they mean.
    For example:
    High: 176, Low: 138612833
    and
    High: 176, Low: 138612801
    So for different values of Low, how does it change the permissions?
    For 176 the binary is 10110000. So looking at this table here: http://www.dctmcontent.com/sharepoint/Articles/Permissions%20and%20Mask%20Values.aspx
    I can understand that 176 would mean the following set of permissions:
    DeleteVersions
    OpenItems
    ApproveItems
    But what's confusing me is that the user has the OpenItems permission but not the ViewListItems permission. Am I wrong in understanding this?
    Also, how does the value of Low change the overall user permissions?
    Note: I looked at this answer: http://social.msdn.microsoft.com/Forums/sharepoint/en-US/9d6df168-e8f5-4323-8c34-0646c03eff68/rest-api-what-are-high-and-low-in-effectivebasepermissions-and-getusereffectivepermissions?forum=sharepointdevelopment
    But honestly I can't understand what it means. Can someone help, please?

    Check this blog - it may explain it for you:
    http://jamestsai.net/Blog/post/Understand-SharePoint-Permissions---Part-1-SPBasePermissions-in-Hex2c-Decimal-and-Binary---The-Basics.aspx
    Please remember to mark your question as answered and vote helpful if this solves your problem. Thanks - WS MCITP (SharePoint 2010, 2013). Blog: http://wscheema.com/blog
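    In short, the High/Low pair is one 64-bit permission mask split into two 32-bit halves: Low holds bits 0-31 and High holds bits 32-63, so reading the value 176 from High against a table of low-word flags mixes up the two halves. A decoding sketch (Python for illustration; only a few well-known low-word SPBasePermissions flags are listed, not the full enumeration):

```python
# A few low-word SPBasePermissions flags (bit positions 0-31).
LOW_FLAGS = {
    "ViewListItems":   0x1,
    "AddListItems":    0x2,
    "EditListItems":   0x4,
    "DeleteListItems": 0x8,
    "ApproveItems":    0x10,
    "OpenItems":       0x20,
    "ViewVersions":    0x40,
    "DeleteVersions":  0x80,
}

def decode_permissions(high, low):
    """Combine the High/Low halves into one 64-bit mask and list
    which of the known low-word flags are set."""
    mask = (high << 32) | low
    granted = [name for name, bit in LOW_FLAGS.items() if mask & bit]
    return mask, granted

# Low value from the question: 138612833 is odd, so bit 0 (ViewListItems) IS set.
mask, granted = decode_permissions(176, 138612833)
```

    So the user in the question does have ViewListItems - it just lives in the Low word, while 176 describes permissions in bit positions 32 and above.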

  • I need to set high and low temperatur​es, which will set off a sound if passed.

    I'm doing this on a DaqBook 2k0. I split the channels up and converted them into °F. The readings are then shown on 15 bar graphs. I want to set this up on each channel (one at a time): I want to put the line of data (temperature) for each channel on a high/low alarm. When the high or low is passed, I want some kind of sound to be set off. I found that your samples used NI hardware, so they just linked into that. Using 3rd-party hardware, I need the alarm to read the data, not the device.
    -Thank You

    VinnyC,
    You should be able to use parts of the LabVIEW example programs, but you will need to replace the NI-DAQ VIs with VIs provided to you by your 3rd party hardware manufacturer. Please contact them for their LabVIEW driver and examples of how to use it. If you need help with non-DAQ programming to send an alarm, please post again in the LabVIEW category. Hope this helps. Have a great day!

  • AI Acquire Waveforms High and Low Limit

    Hey everybody,
    I am currently working with LabVIEW 7.1 and a DAQ card. I am collecting 4 channels and using AI Acquire Waveforms to do this. I want to have different high and low voltage limits for each channel, though. Is there a way to do this with AI Acquire Waveforms, or can I use something else?
    Thanks for the help!
    Devon

    Hey Devon,
    In addition to what David said, there is a great knowledgebase article here with more information about setting multiple ranges for analog input.  The Traditional DAQ VI, AI Acquire Waveforms, doesn't expose the ability to set multiple input ranges per channel, but there is a quick fix you can make to add it to the VI.  See below:
    The block diagram of AI Acquire Waveforms is as follows:
    As you can see, the exposed inputs of the VI are simply the first element in an array of clusters containing the high and low limits per channel.  To set the high and low limits for other channels, you simply need to add another element to the array for channels 1, 2, and 3.  See below:
    Here you can see I added one element; you need to add 2 more for your 4 channels.  If it turns out that you are using DAQmx and this doesn't apply, please refer to the knowledgebase article above for details and examples of how to set different ranges.
    Regards,
    Paul C.

  • TS3999 I see high and low temps in place of scheduled events. Why do I see weather instead of events? Is there a way to switch ?

    Anyone out there with a solution?  I don't know how it got there, but I now see high and low temps (68/45) in my Calendar view, and I don't know how to get back to only events.
    Thanks

    Check Settings > Privacy > Calendars.  You may have a weather app that is accessing your calendar and entering this information.  If so, turn it off and see if it stops.

  • Netflix movie resolution phases in and out of high and low.

    When I'm watching a Netflix movie, even on WiFi, the resolution phases in and out of high and low. Does anyone know what is causing this?

    Yes - your download speed is changing. If the download speed is not high enough for HD, it reverts to SD.

  • Custom high and low times for square waves

    Hi, is there a way to create a square wave that has an incremental HIGH pulse width and a fixed LOW pulse width?
    Sorry, I'm quite new to LabVIEW; any help is appreciated, thanks!

    Hi GBPC,
    My name is Jack and I work at National Instruments UK.
    I understand you are trying to output a square wave signal with a fixed low time and a variable high time in LabVIEW.
    The reply above works great if you want to use the signal internally. Alternatively, if you are trying to output the signal with a piece of internal or external data acquisition (DAQ) hardware, you will want to use LabVIEW's DAQmx tool palette.
    I have created a quick example below for you to look at that should help you come to a solution. It uses the DAQmx VIs to create and initialize a channel, and then runs it in a loop with controls to change the high and low times using a property node.
    Hope this helps
    Jack. W
    Applications Engineer
    National Instruments
    Attachments:
    Variable High Time Pulse Generator.png ‏37 KB
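    Independent of the DAQmx property-node approach in the attached example, the pattern logic itself is simple. A minimal sketch (Python, purely illustrative; the function name and sample counts are hypothetical) that builds successive periods with a fixed low time and a high time that grows by a fixed increment each cycle:

```python
def incremental_square_wave(cycles, high_start, high_increment, low_samples):
    """Build a digital sample pattern: each cycle is HIGH (1) for an
    incrementally growing number of samples, then LOW (0) for a fixed
    number of samples."""
    pattern = []
    high = high_start
    for _ in range(cycles):
        pattern.extend([1] * high)         # variable high time
        pattern.extend([0] * low_samples)  # fixed low time
        high += high_increment
    return pattern

# Three cycles with high times of 2, 3, 4 samples and a fixed low time of 4.
wave = incremental_square_wave(cycles=3, high_start=2, high_increment=1, low_samples=4)
```

    The resulting array is what you would hand to whatever output mechanism you use; with real hardware, the sample counts map to high/low times via the output sample clock rate.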

  • Changing high and low units while tangosol is running

    Hi,
    I've a 3-machine setup. On the first one I've WebLogic and Tangosol with local storage set to false.
    On the other two I have Tangosol running with local storage enabled, and I'm running a distributed service.
    Initially I've kept high units at 1000 and low units at 500. I would like to provide a screen to the admin user whereby he can change the high and low unit limits.
    I'm accessing the local cache using the following code:
    CacheService service = dealCacheCoarseGrained.getCacheService();
    Map mapReadWrite = ((DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager()).getBackingMap(bundle.getString("DistCache"));
    if (mapReadWrite instanceof ReadWriteBackingMap) {
        rdwMap = (ReadWriteBackingMap) mapReadWrite;
        Map mapLocal = rdwMap.getInternalCache();
        if (mapLocal instanceof LocalCache) {
            localCache = (LocalCache) mapLocal;
            localCache.setHighUnits(evictionDetails.getHighUnits());
            localCache.setLowUnits(evictionDetails.getLowUnits());
        }
    }
    But in the above case, as the Tangosol node operating within the WebLogic JVM has local storage set to false, the condition if (mapLocal instanceof LocalCache) { ... } is never fulfilled, and as a result I'm not able to make the changes from the GUI.
    Can I achieve my objective with this setup?
    Please help.
    Thanks
    Jigs

    Hi jigs,
    I would have the GUI kick off an Invocation task that uses the code that you have here, then send that Invocation task to all storage-enabled nodes. You can retrieve the set of storage-enabled nodes from DistributedCacheService.getStorageEnabledMembers().
         Later,
         Rob Misek
         Tangosol, Inc.

  • High, mid, low reference level calculation

    Hi,
    Does anyone know the equations for calculating the high, mid, low reference voltages?  I have set the reference voltages to 10%, 50% and 90%. I just don't know what the digitizer bases the level calculation on.  For example 1, the mid level is calculated as (Vtop+Vbase)*0.5.  Should it be (Vtop-Vbase)*0.5?  The high and low reference calculations also do not add up.  I was using a signal generator to generate the signals.
    I am using a PXI-5224 and niScope.lib version 3.5.
    Example 1
    3 MHz square wave, 3.3 Vpp
    ch0                              ch1
    Vpp=4.704506            Vpp=4.575151
    Vtop=3.304177           Vtop=3.304474
    Vbase=-0.003679       Vbase=-0.618361
    VHigh=3.304177          VHigh=3.304474
    VLow=-0.003679         VLow=-0.001788
    VMax=4.011691           VMax=3.956790
    VMin=-0.692815           VMin=-0.618361
    LowRef =0.327107 V    LowRef =-0.226078 V
    MidRef =1.650249 V     MidRef =1.343056 V
    HiRef =2.973391 V        HiRef =2.912190 V
    RiseTime =7.437264      RiseTime =141.967975
    FallTime =7.438606       FallTime =9.800559
    Example 2 
    ch0                                 ch1
    Vpp=4.612737               Vpp=4.580664
    Vtop=3.295510             Vtop=3.311956
    Vbase=-0.001876         Vbase=-0.016182
    VHigh=3.295510          VHigh=3.311956
    VLow=-0.001876          VLow=-0.016182
    VMax=3.971204            VMax=3.965059
    VMin=-0.641533            VMin=-0.615605
    LowRef =0.327863 V    LowRef =0.316632 V
    MidRef =1.646817 V      MidRef =1.647887 V
    HiRef =2.965772 V        HiRef =2.979142 V
    RiseTime =7.605729     RiseTime =7.745428
    FallTime =7.536156       FallTime =7.708538
    Message Edited by kkwong on 04-16-2009 03:56 AM

    Hi all,
    I think I messed up the calculation.  The reference voltages are correct.  It is (Vtop+Vbase)/2 for the mid reference level.  The low and high refs are also correct.
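    For anyone hitting the same confusion: the ch0 numbers in example 1 are consistent with each percentage reference being computed as Vbase plus that fraction of the top-to-base amplitude, so the 50% reference works out to (Vtop+Vbase)/2, not (Vtop-Vbase)/2. A quick check (Python, just reproducing the arithmetic from the example-1 ch0 values; the function name is illustrative):

```python
def reference_level(v_top, v_base, percent):
    """Reference voltage at `percent` of the Vtop-to-Vbase amplitude."""
    return v_base + (percent / 100.0) * (v_top - v_base)

# ch0 values from example 1
v_top, v_base = 3.304177, -0.003679
low_ref = reference_level(v_top, v_base, 10)   # reported LowRef: 0.327107 V
mid_ref = reference_level(v_top, v_base, 50)   # reported MidRef: 1.650249 V
hi_ref  = reference_level(v_top, v_base, 90)   # reported HiRef:  2.973391 V
```

    All three computed values match the digitizer's reported references to within rounding, which is what confirms the (Vtop+Vbase)/2 form for the mid level.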
