High cardinality and line item dimension

Hi,
Are high cardinality and line item dimension dependent on each other?
My understanding is that if the dimension is more than 10% of the size of the fact table, we go for a line item dimension, and that high cardinality should likewise be set only if the dimension is more than 10% of the fact table. By choosing a line item dimension, the fact table is linked directly to the SID table, as there is no dimension table. Does that mean that if I choose a line item dimension, I can't go for high cardinality, since there is no dimension table? Please let me know the relationship between the two.
Thank you
Sriya

When compared to a fact table, dimensions ideally have a small cardinality. However, there is an exception to this rule. For example, there are InfoCubes in which a characteristic Document is used, in which case almost every entry in the fact table is assigned to a different Document. This means that the dimension (or the associated dimension table) has almost as many entries as the fact table itself. We refer here to a degenerated dimension.
Generally, relational and multi-dimensional database systems have problems processing such dimensions efficiently. You can use the indicators line item and high cardinality to apply the following optimizations:
       1.      Line item: The dimension contains precisely one characteristic, so the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
○       When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
○       A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans (see the SQL sketch after this list).
Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
       2.      High cardinality: The dimension is expected to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select high cardinality for a dimension.
For example, a dimension containing Sales Document Number and Sales Organization can be set as high cardinality, since Sales Document Number will have a great many distinct values.
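To make the join simplification concrete, here is a rough SQL sketch; the table and column names are invented for illustration (loosely following BW's /BIC/ naming pattern), not taken from any real cube.
-- Normal dimension: the fact table holds a dimension ID, so reaching the
-- characteristic's SID table costs two joins.
SELECT s.DOC_NUMBER, SUM(f.AMOUNT)
FROM "/BIC/FSALES"  f
JOIN "/BIC/DSALES2" d ON d.DIMID = f.KEY_SALES2   -- dimension table
JOIN "/BIC/SDOCNR"  s ON s.SID   = d.SID_DOCNR    -- SID table
GROUP BY s.DOC_NUMBER;
-- Line item dimension: no dimension table exists; the fact table column holds
-- the SID directly, so one join (and the number range operation at load time)
-- disappears.
SELECT s.DOC_NUMBER, SUM(f.AMOUNT)
FROM "/BIC/FSALES" f
JOIN "/BIC/SDOCNR" s ON s.SID = f.KEY_SALES2
GROUP BY s.DOC_NUMBER;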
Hope this helps
Raja

Similar Messages

  • Differences between High cardinality and line item dimension

    Please Search the forum
    Friends,
Can anyone tell me the differences between High Cardinality and Line Item Dimension and their use?
Thanks in advance.
    Jose
    Edited by: Pravender on May 13, 2010 5:34 PM

Please search in SDN.

  • High cardinality and BTREE

    Hello,
Can anyone explain high cardinality, with an example?
And also, what are B-tree indexes in that context?
How would this help from a performance point of view, compared to a line item dimension?
    Looking for reply
    thanks

    Hi Guru,
High cardinality means that the dimension is expected to have a large number of instances/data sets (that is, a high cardinality). A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select high cardinality for a dimension.
This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case.
Additionally, you have the option of giving dimensions the High Cardinality indicator. One guideline says to switch this on when the dimension is larger than ten percent of the fact table; the system can then create B-tree indexes instead of bitmap indexes. A quick way to check the ratio against real tables is sketched below.
So whenever you have a high-instance dimension, mark it as High Cardinality while defining the dimensions.
You use Line Item when you have only one characteristic in your dimension. The system then does not create a dimension table; instead, the SID table of the characteristic takes on the role of the dimension table. This helps because, first, when we load transaction data no IDs are generated for the entries in the dimension table, and second, a table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler.
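To check the ten (or twenty) percent rule of thumb against real tables, a simple count comparison is enough; a minimal sketch, again with invented /BIC/ names:
-- Ratio of dimension table entries to fact table entries; a value of 0.10-0.20
-- or more suggests setting the high cardinality flag.
SELECT d.cnt / f.cnt AS dim_to_fact_ratio
FROM (SELECT COUNT(*) AS cnt FROM "/BIC/DSALES2") d,
     (SELECT COUNT(*) AS cnt FROM "/BIC/FSALES")  f;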
    Hope it helps.
    Thanks
    CK
    Message was edited by:
            Chitrarth Kastwar

  • Regarding High Cardinality

    Hi,
Can anyone give the difference between High Cardinality and Line Item Dimension?
    Regards
    YJ

    Hi YJ,
    Refer these links:
    Line Item Dimension
    Re: Line Item Dimension
    Cardinality
    Re: High Cardinality Flag
    Bye
    Dinesh

  • Any relation between indexes and high cardinality?

    HI,
What is the difference between an index and high cardinality?
And also, what is the difference between an index and a line item dimension?
    Thanks

    Hi,
    High Cardinality:
    Please Refer this link, especially the post from PB:
    line item dimension and high cardinality?
    Line Item Dimension:
    Please go through this link from SAP help for line item dimension
    http://help.sap.com/saphelp_nw04/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/content.htm
    Also in this thread the topic has been discussed
    Re: Line Item Dimension
    BI Index:
There are two types of indexes in BW on Oracle: bitmap and B-tree.
Bitmap indexes are created by default on each dimension column of a fact table.
Setting the high cardinality flag for a dimension usually affects query performance if the dimension is used in a query.
You can change the bitmap index on the fact table dimension column to a B-tree index by setting the high cardinality flag; it is not necessary to delete the data from the InfoCube to do this. A sketch of the difference follows below.
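As a sketch of what that flag changes on the database side, in Oracle DDL terms; the table and index names here are invented (real BW index names follow a similar "~0xx" pattern):
-- Default: a bitmap index on the fact table's dimension key column.
CREATE BITMAP INDEX "/BIC/FSALES~040" ON "/BIC/FSALES" (KEY_SALES2);
-- With the high cardinality flag set, an ordinary B-tree index is used instead.
DROP INDEX "/BIC/FSALES~040";
CREATE INDEX "/BIC/FSALES~040" ON "/BIC/FSALES" (KEY_SALES2);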
    Refer:
    Re: Bitmap vs BTree
    How to create B-Tree and Bitmap index in SAP
    Re: Cardinality
Line Item Dimension
    Hope it helps...
    Cheers,
    Habeeb

  • SSAS Tabular. MDX slow when reporting high cardinality columns.

Even with small fact tables (~20 million rows), MDX is extremely slow when there are high-cardinality columns in the body of the report.
For example, the DAX query is sub-second:
EVALUATE
SUMMARIZE (
    CALCULATETABLE (
        'Posted Entry',
        'Cost Centre'[COST_CENTRE_ID] = "981224",
        'Vendor'[VENDOR_NU] = "100001",
        'Posted Entry'[DR_CR] = "S"
    ),
    'Posted Entry'[DOCUMENT_ID],
    'Posted Entry'[DOCUMENT_LINE_DS],
    'Posted Entry'[TAX_CODE_ID],
    "Posted Amount", [GL Amount],
    "Document Count", [Document Count],
    "Record Count", [Row Count],
    "Document Line Count", [Document Line Count],
    "Vendor Count", [Vendor Count]
)
ORDER BY 'Posted Entry'[GL Amount] DESC
The MDX equivalent takes 1 minute 13 seconds:
SELECT
  { [Measures].[Document Count], [Measures].[Document Line Count], [Measures].[GL Amount], [Measures].[Row Count], [Measures].[Vendor Count] } ON COLUMNS,
  NON EMPTY [Posted Entry].[DOCUMENT_ID_LINE].[DOCUMENT_ID_LINE].ALLMEMBERS
          * [Posted Entry].[DOCUMENT_LINE_DS].[DOCUMENT_LINE_DS].ALLMEMBERS
          * [Posted Entry].[TAX_CODE_ID].[TAX_CODE_ID].ALLMEMBERS ON ROWS
FROM [Scrambled Posted Entry]
WHERE ( [Cost Centre].[COST_CENTRE_ID].&[981224], [Vendor].[VENDOR_NU].&[100001], { [Posted Entry].[DR_CR].&[S] } )
I've tried this under 2012 SP1 and it is still a problem. The slow MDX happens when there is a high-cardinality column on the rows and the selection is done on joined tables. DAX performs well; MDX doesn't. Using client-generated MDX or bigger fact tables makes the situation worse.
Is there a "go fast" switch for MDX in Tabular models?

    Hi,
There are only 50 rows returned, and the MDX is still slow even when you only return a couple of rows.
It comes down to DAX producing much more efficient queries against the engine.
For the DAX query: after a number of reference queries in the trace, the main VertiPaq SE query is
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID],
    SUM([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[POSTING_ENTRY_AMT])
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    WHERE
     ([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0273185857', 'COUOXKCZKKU:CKZTCO CCU YCOT
    XY UUKUO ZTC', 'P0'), ('0272325356', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0271408149', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT BUY', 'P0'), ('0273174968', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0273785256', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
    YOBT 3ZT JXO', 'P0'), ('0273967993', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272435413', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0273785417', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0'), ('0272791529', 'ZOUYOWU ZOCO CLU:Y/WTC-KC
    YOBT 7.3ZT JXO', 'P0'), ('0270592030', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 89.3Z JXO', 'P0')...[49 total tuples, not all displayed]};
showing a CPU time of 312 and a duration of 156. It looks like it has constructed an IN clause for every row it is retrieving.
The total for the DAX query from the profiler is 889 CPU time and a duration of 1669.
For the MDX: after a number of reference queries in the trace, the expensive VertiPaq SE query is
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    WHERE
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S';
showing a CPU time of 49213 and a duration of 25818.
It looks like it is only filtering by the debit/credit indicator, which will match half the fact table.
After that it fires some tuple-based queries (similar to the MDX, but with cross joins):
    SELECT
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]
    FROM [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1]
    LEFT OUTER JOIN [Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[VENDOR_NU]=[Vendor_6b7b13d5-69b8-48dd-b7dc-14bcacb6b641].[VENDOR_NU]
    LEFT OUTER JOIN [Cost Centre_f181022d-ef5c-474a-9871-51a30095a864] ON [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[COST_CENTRE_ID]=[Cost Centre_f181022d-ef5c-474a-9871-51a30095a864].[COST_CENTRE_ID]
    WHERE
    [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DR_CR] = 'S' VAND
    ([Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_ID_LINE], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[DOCUMENT_LINE_DS], [Posted Entry_053caf72-f8ab-4675-bc0b-237ff9ba35e1].[TAX_CODE_ID]) IN {('0271068437/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZTC', 'P0'), ('0272510444/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0272606954/1', null, 'P0'), ('0273967993/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KCB', 'P0'), ('0272325356/1', 'ZXOBWUB ZOOOUBL CCBW ZTOKKUB:YKB 9T KOD', 'P0'), ('0272325518/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KUW', 'P0'), ('0273231318/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 7.3ZT ZWB', 'P0'), ('0273967504/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT KBW', 'P0'), ('0274055644/1', 'YBUCC OBUC YTT OYX:OD 5.3F81.3ZT TOZUT', 'P5'), ('0272435413/1', 'ZOUYOWU ZOCO CLU:Y/WTC-KC YOBT 3ZT BUY', 'P0')...[49 total tuples, not all displayed]};
This query takes 671 CPU and a duration of 234; more expensive than the most expensive part of the DAX query, but still insignificant compared to the expensive part of the MDX.
The total for the MDX query from the profiler is 47206 CPU time and a duration of 73024.
To me the problem looks like this: the MDX fires a very expensive query against the fact table that filters by only one element, and then goes about refining the set later on.

  • If we check only High Cardinality?

    Hello all
If we check only high cardinality in the dimension assignment, then the dimension table will still exist (I hope so).
Up to how many characteristics can we assign this cardinality?
Is anything mandatory here: when we select high cardinality, should we also select Line Item?
    many thanks
    balaji

hi pizzaman
thanks for the info you have given.
In your statement you said "when just High Cardinality is selected for a dimension, a B-tree index is created instead of a bitmap index", but if we check only line item, which index is created? (Is it also a B-tree index in that case?)
If both line item and high cardinality are checked, which index is created?
Many Thanks
    balaji

  • High cardinality

My cube's E fact table has 214,510 entries and the Z sales order line item dimension's SID table has 1,438,296 entries. Should the 'high cardinality' setting be marked or unmarked in this situation?

When compared to a fact table, dimensions ideally have a small cardinality. However, there is an exception to this rule. For example, there are InfoCubes in which a characteristic document is used, in which case almost every entry in the fact table is assigned to a different document. This means that the dimension (or the associated dimension table) has almost as many entries as the fact table itself. We refer here to a degenerated dimension. In BW 2.0 this was also known as a line item dimension, in which case the characteristic responsible for the high cardinality was seen as a line item. Generally, relational and multi-dimensional database systems have problems processing such dimensions efficiently. You can use the indicators line item and high cardinality to apply the following optimizations.
High cardinality means that the dimension is expected to have a large number of instances. This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select high cardinality for a dimension.
    Message was edited by:
            Benjamin Edwards

  • How to draw a graph of tree structure (using shapes and lines)?

    Hello,
I tried to search for a solution in the forum, and I see people asking and replying with solutions to similar situations, but I don't find what I am looking for. Also, I have never worked with graphs before.
So, my problem is: I need a function that takes a string with a tree structure, as in automata or tree graphs, and displays the nodes in tree form. "Tree" is not the important part; what is important is that each object should be displayed as a node, with lines connecting them. Please see the image below (with three possible options):
So, basically, the tree structure could be like X(a, X(a,b), c), where X(a,b) is a sub-tree of the higher-level X. The program knows the parent-child relationship, so this function only needs to display those elements graphically.
I pass the string in the form of a 2D array showing the hierarchy (to simplify).
In the image, I am showing three possible options for displaying the tree. The third option eliminates the circles and rectangles, if that simplifies things.
I hope I explained clearly.
    Thanks ahead!
    Vaibhav

I would start drawing from the top. The nodes will be the easy part.
Begin with the root node centered horizontally in the drawing area and against its top edge. The second row of nodes would be located vertically (as in my example) 1.5x the node size below the first one, and either distributed horizontally across the available drawing area or at a fixed distance, like 1.5x again, or some other distance you define (a rough sketch of the coordinate math follows below).
The tricky part will be drawing the lines, since they need to run between the edges of nodes. This is where the high-school geometry might come in.
Keep us posted on what you come up with. Extra points for coming up with a solution that will automatically resize itself to fit the available drawing area! (I've already given you all the clues for how to do that too.)
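For the coordinate arithmetic alone (the drawing itself would still be done in LabVIEW), here is a rough sketch that computes one position per node from a hypothetical parent-child nodes(id, parent) table, assuming 40-pixel nodes; the simplification is that siblings are spread at a fixed spacing per level rather than centered under their parent:
-- y grows by 1.5 x the node size per level; x spreads nodes within each level.
WITH RECURSIVE tree(id, parent, depth) AS (
    SELECT id, parent, 0 FROM nodes WHERE parent IS NULL
    UNION ALL
    SELECT n.id, n.parent, t.depth + 1
    FROM nodes n JOIN tree t ON n.parent = t.id
)
SELECT id,
       ROW_NUMBER() OVER (PARTITION BY depth ORDER BY id) * 60 AS x,
       depth * 60 AS y   -- 1.5 x 40-pixel node size
FROM tree;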
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • My DAQ card PCI-6025E's digital ports PA, PB, PC are in a high state and cannot be changed even in the MAX 2.1 test panel, and continuously output 5 volts even when set for output.

My digital ports PA, PB, and PC are in a high state and cannot be set low, even when they are configured as outputs.
    Thanks

The MAX utility is the closest to the driver level and will eliminate software configuration as a possible issue. Please disconnect anything that you have connected to the DIO lines.
Use the MAX test panel for your DIO lines, configure them as outputs, and set them to a low state. Use a multimeter (DMM) to observe the line state. If a line is still high, then you may have a problem with your hardware. If this is the case, I advise calling National Instruments support to investigate a possible RMA (repair).
    Best Regards,
    Justin Britten
    Applications Engineer
    National Instruments

  • High Cardinality Flag

If a dimension has only the High Cardinality flag checked, I understand that the index will be a B-tree. However, to determine whether this setting is correct, I want to check the cardinality, i.e.
Number of Distinct Values / Number of Records
Should this be done using the number of records in the dimension table, and not the fact table? Is that correct? Thanks

You're right: for a fact table of 8.5 million rows, you would NOT want a dimension with only 6,000 values to be marked high cardinality.
The approach of calculating the dimension's size relative to the fact table is fine. The challenge in the initial design of dimensions is that, without expert domain knowledge, it is difficult to figure out how big a dimension will be until you have built the cube and loaded data. Unless you can analyze the data from R/3 in some way, you have to go through a load-and-review process.
Yes, every Dim ID is a distinct value by design. What you are trying to avoid is putting together characteristics that by themselves each have low cardinality, but have no relationship to one another, so that when put in the same dimension they result in a large dimension. For example:
Let's take your existing dimension with 6,000 values (6,000 different combinations of the characteristics currently in the dimension), and add another characteristic that has 1,000 distinct values by itself.
Adding this characteristic to this dimension could result in no new dimension rows if the new characteristic is directly related to an existing characteristic(s).
For example, say you were adding a characteristic called Region, which is nothing more than a concatenation of Division and Business Area: the dimension still has only 6,000 values. (When you have relationships like this, where one characteristic is a piece of another, you would want them in the same dimension.)
Or say you were adding a characteristic that has no relationship to any of the existing characteristics, such as a Posting Date, and each occurrence of the 6,000 dimension combinations has all 1,000 posting dates associated with it. Now your dimension table is 6,000 * 1,000 = 6,000,000 rows! Now your dimension IDs would be considered to have high cardinality. The answer in this design, however, is NOT to set this dimension to high cardinality, but rather to put Posting Date in its own dimension. A way to check the combinations up front is sketched below.
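One way to estimate that blow-up before committing to a dimension layout is to count distinct combinations in the source data; a sketch with made-up table and column names:
-- Compare this count with the current dimension's row count (6,000 above):
-- if adding posting_date multiplies it toward 6,000,000, the new
-- characteristic belongs in its own dimension.
SELECT COUNT(*) AS combined_rows
FROM (SELECT DISTINCT division, business_area, posting_date
      FROM source_data);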
    Hope this helps a little.

  • High cardinality flag viewer

    hello all,
I'm looking for a way to see all the InfoCubes that have the high cardinality flag set in the system.
Is there a transaction I can call or a program I can run to see this?

    Hi Jason
Use RSDDIMEV - View of dimensions with texts.
In the selection, enter "X" for the high cardinality field and execute.
    hope this helps
    PBI

  • Benchmark for High Cardinality

In the link below, SAP uses 20% as a benchmark figure for high cardinality:
http://help.sap.com/saphelp_nw04/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/content.htm
Whereas in the link below, Oracle (we are using 9.0.2.x) uses 1% as a benchmark for high cardinality:
http://www.lc.leidenuniv.nl/awcourse/oracle/server.920/a96524/toc.htm
Why is there such a stark difference between the benchmark values, and which is the correct benchmark to consider?
Thank you for any help offered

I'm not sure that you are comparing apples to apples.
SAP is referring to a high-cardinality dimension in its statement, not the single column that Oracle's doc is talking about. Oracle's doc also mentions that the advantage is greatest when the ratio of the number of distinct values to the number of rows is around 1%. Remember, both SAP's and Oracle's statements are general rules of thumb.
Let's use Oracle's example:
- Table with 1 million rows
- Col A has 10,000 distinct values, or 1%.
But now, let's talk about a dimension that consists of two characteristics, Col A and Col B.
- Col A has 10,000 distinct values as before (low cardinality as per Oracle)
- Col B has 100 distinct values (very low cardinality per Oracle)
Now what happens if, for every value of Col A, at least one row exists with every Col B value? The number of rows in your dimension table is 10,000 x 100 = 1,000,000, or 100% of the fact table. A sketch of how to measure this is below.
Dimension tables can grow very quickly when you have characteristics that individually have low cardinality, but when combined, result in a dimension that does not.
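A sketch of how you might measure both views of cardinality on real data (table and column names invented):
-- Oracle's 1% rule looks at single columns; the dimension's size depends on
-- how the columns combine.
SELECT (SELECT COUNT(DISTINCT col_a) FROM fact_source) AS distinct_a,   -- ~10,000
       (SELECT COUNT(DISTINCT col_b) FROM fact_source) AS distinct_b,   -- ~100
       (SELECT COUNT(*)
          FROM (SELECT DISTINCT col_a, col_b FROM fact_source)) AS distinct_ab
FROM dual;  -- distinct_ab can approach 1,000,000 when A and B are unrelated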
Hope this makes sense and clears things up a bit.
    Harry -
You have had several posts related to cardinality. I would suggest that rather than closing out each question and posting another, you just post a follow-up question in the original thread (assuming it's related). I think this creates a more educational thread for other forum members to follow.
    Cheers,
    Pizzaman

  • MSI R9 280x 3GB games crash and lines down my screen

I have been using the MSI R9 280x for some time now, possibly 7 months, and all was good until the other day: playing CSGO, my screen went to a set of lines running down it, with the last sound I heard looping over and over.
Only a hard restart of my PC would solve the issue; this also occurred when playing Wolfenstein: The New Order and Skyrim (it tends to happen in high-intensity games).
I have cleaned out my GPU and all of my fans, and have tried to play these games on both the older drivers and the latest drivers, and still the problem occurs.
It manages to last maybe 5 minutes when I have my fans on 100%. I haven't overclocked or underclocked my GPU but still have the issue. I believe it could be a GPU problem, as I tested my GPU with FurMark: as soon as I started a 1080p benchmark, my PC froze instantly, with lines all down my screen again.
PC Specs:
CPU: AMD FX8320 Black Edition 8-core 3.5 GHz
CPU Cooler: H100i water cooler
GPU: MSI Radeon R9 280x 3GB
Motherboard: Gigabyte GA-990FXA-UD5
PSU: Modular CX750M ATX power supply - Bronze
RAM: (4x2) 8GB Kingston 1333MHz DDR3
PC Case: Corsair Air 540 (2 basic fans at the front, 120mm Corsair at the back)
PLEASE, I really need help on this, thank you!

Thank you for the advice.
I ran into a second problem this morning: on start-up, or even when loading Google Chrome, the same problem occurred, with lines all down my screen yet again.
I am currently using my old GPU (Radeon HD 7770) and I am able to run CSGO, Wolfenstein, etc. on high intensity without my PC crashing or this problem occurring, leading me to believe that it must be my GPU.
My R9 280x runs at around 45C when idle, and sometimes that's even when my fans are running at 75-100% speed. Looking online, I found that the average idle temp should be around 34C; something is wrong here :(
I'm not sure if there are any other measurements you need me to take to make sure that it is 100% my GPU. But looking at it as it stands, unfortunately it is.
Any more ideas or suggestions would be really helpful, thank you!

  • High Pings and lag in games.

The problem I have is with gaming. Early in the morning and towards lunchtime things are usually fine, and late at night into the early hours is usually OK as well, but as soon as peak times roll in, my online play is impossible due to high pings and very bad lag. Any idea what could be causing my problem, please?
Steps I have taken to date: contacted BT and was passed on to tier 2 tech support, who then ran a noise check on my line, which lasted 24 hours and seems to have left my line with a higher noise value than before.
Tried a different ethernet cable and a different microfilter, unplugged everything from the socket except for my Home Hub 2, turned off my firewall and virus checker, reset my computer, and closed all unused programs. Tried a friend's Home Hub 2, tried a laptop; all have the same result, with high pings and lag in games.
Also, BT quote my estimated line speed as 17 Mb.
ADSL line status
Line state: Connected
Connection time: 4 days, 22:09:10
Downstream: 8,128 Kbps
Upstream: 445 Kbps
ADSL settings
VPI/VCI: 0/38
Type: PPPoA
Modulation: ITU-T G.992.5
Latency type: Interleaved
Noise margin (Down/Up): 15.6 dB / 30.2 dB
Line attenuation (Down/Up): 24.0 dB / 14.1 dB
Output power (Down/Up): 0.0 dBm / 10.9 dBm
Loss of Framing (Local): 0
Loss of Signal (Local): 0
Loss of Power (Local): 0
FEC Errors (Down/Up): 0 / 0
CRC Errors (Down/Up): 3334 / N/A
HEC Errors (Down/Up): N/A / 0
Error Seconds (Local): 2211

Test1 comprises two tests.
1. Best Effort Test: provides background information.
Download speed: 6592 Kbps (test range 0-7150 Kbps; max achievable speed 7150 Kbps)
Download speed achieved during the test was 6592 Kbps.
For your connection, the acceptable range of speeds is 2000-7150 Kbps.
Additional information:
Your DSL connection rate: 8128 Kbps (downstream), 445 Kbps (upstream)
IP profile for your line is 7170 Kbps.
2. Upstream Test: provides background information.
Upload speed: 363 Kbps (test range 0-445 Kbps; max achievable speed 445 Kbps)
Upload speed achieved during the test was 363 Kbps.
Additional information:
Upstream rate IP profile on your line is 445 Kbps.
