Grant bandwidth (CBWFQ?)

Hi Team,
I am new to the community, so greetings to all!
My question is about shaping the traffic of a class while also granting it a minimum bandwidth.
We have a 7206-VXR, so the physical interfaces are Gigabit Ethernet, but our SP only grants us 200 Mbps.
Shaping works fine, but the bandwidth command never kicks in because, from the router's point of view, there is no congestion.
I think this is a common problem and maybe I am trying to solve it the wrong way.
Any suggestion is appreciated
Thanks in advance
Denis

Hi Denis,
Shaping with the 'shape' CLI defines the maximum rate for a class, whereas the 'bandwidth' CLI defines the minimum bandwidth guarantee for that class under congestion.
For example,
policy-map test
 class A
  shape average percent 20
 class B
  bandwidth percent 10
 class C
  bandwidth percent 30
- Traffic through class A will *always* get shaped to 20% of intf bandwidth. Doesn't matter whether the interface is congested or not.
- Traffic through class B and Class C will get a *minimum* guarantee of 10 and 30% of intf bandwidth when the interface is congested. When the interface is not congested, then there is really no need for a queueing policy (~ bandwidth cli) and the queueing part of the policy won't kick in. So, class B / class C can basically use the entire intf bandwidth if there is no traffic through the other (bandwidth) classes. However, should the physical interface be congested (tx_ring is full) then queueing kicks in and each class is given a minimum guarantee that's configured. The remaining bandwidth is shared amongst the bandwidth classes in the ratio of guarantees.
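For Denis's specific setup (Gigabit interface, 200 Mbps from the SP), the usual way to make the bandwidth guarantees take effect is hierarchical shaping: shape everything to the provider rate in a parent policy and attach the bandwidth classes as a child policy, so the child sees congestion at 200 Mbps rather than at 1 Gbps. A rough, untested sketch (class names and percentages are only illustrative):
policy-map CHILD
 class B
  bandwidth percent 10
 class C
  bandwidth percent 30
 class class-default
  fair-queue
policy-map PARENT
 class class-default
  ! shape all traffic down to the 200 Mbps the SP actually delivers
  shape average 200000000
  service-policy CHILD
interface GigabitEthernet0/1
 service-policy output PARENT
With this, the child's bandwidth statements start sharing out the 200 Mbps as soon as the offered load exceeds the shaper, even though the physical interface itself is never congested.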
- Abhi

Similar Messages

  • ASR9K: bandwidth and bandwidth remaining cannot be used together. How do we grant a quota and assign the remaining bandwidth equally?

    Hi everyone
    The problem should be trivial. We want to grant a quota to specific classes and split the remaining available bandwidth equally among all the requesting classes. Let's clarify with an example:
    Class 7 ==> priority queue, level 1 with police 20%
    Class 5 ==> priority queue, level 2 with police 40%
    Class 6 ==> CIR 12%
    Class 3 ==> CIR 11%
    Class 2 ==> CIR 8%
    Class 1 ==> CIR 5%
    Class 0 ==> CIR 4%
    To simplify, let's suppose that there is no traffic in classes 7 and 5 and that all the remaining classes are generating traffic at a rate of 300 Mbps each. The outgoing interface is 1G, so congestion occurs. We want each of classes 6, 3, 2, 1 and 0 to receive its granted value (respectively 120M, 110M, 80M, 50M and 40M, for a total of 400M) and the remaining available bandwidth (600M) to be assigned equally, i.e. 120M to each class.
    The IOS-XR 5.2.2 documentation suggests that this should be the default behavior, but when we run the policy shown below, what we get is a weighted assignment of the remaining quota.
    The policy used is the following:
    policy-map TEST-POLICY
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth percent 12
     class qos3
      bandwidth percent 11
     class qos2
      bandwidth percent 8
     class qos1
      bandwidth percent 5
     class qos0
      bandwidth percent 4
     class class-default
     end-policy-map
    The IOS-XR 5.2.2 documentation states that both "bandwidth percent" and "bandwidth remaining percent" can be used in the same class (which could be a way to force the requested behavior), but using both generates the following error:
    !!% Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy: InPlace Modify Error: Policy TEST-POLICY: 'qos-ea' detected the 'warning' condition 'Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy'
    How can the problem be solved? Maybe with hierarchical QoS, putting the granted quota in the parent policy and "bandwidth remaining percent 20" in the child?

    Hi everyone
    just to provide my contribution: the hierarchical QoS policy does balance the remaining bandwidth after granting the requested bandwidth (see the policy implemented below). However, for the priority queues, the policer quota is granted, but when more flows are sent they appear to be unbalanced among themselves. So the problem of having both priority queues served (balanced between flows) AND the remaining bandwidth distributed equally remains open ...
    policy-map TEST-POLICY-parent
     class qos6
      service-policy TEST-POLICY-child
      bandwidth percent 12
     class qos3
      service-policy TEST-POLICY-child
      bandwidth percent 11
     class qos2
      service-policy TEST-POLICY-child
      bandwidth percent 8
     class qos1
      service-policy TEST-POLICY-child
      bandwidth percent 5
     class qos0
      service-policy TEST-POLICY-child
      bandwidth percent 4
     class class-default
      service-policy TEST-POLICY-child
     end-policy-map
    policy-map TEST-POLICY-child
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth remaining percent 20
     class qos3
      bandwidth remaining percent 20
     class qos2
      bandwidth remaining percent 20
     class qos1
      bandwidth remaining percent 20
     class qos0
      bandwidth remaining percent 20
     class class-default
     end-policy-map

  • Bandwidth allocation | default class | CBWFQ

    Hi everybody
    Let's say we have a 100 Mbps circuit, and max-reserved-bandwidth is set so that the full 100 Mbps can be allocated. We make the following allocations:
    class A
     bandwidth 20
    class B
     bandwidth 60
    class class-default
    1) We did not make any bandwidth allocation for the default class. Assuming we are congested (i.e., class A and class B are active), what is the maximum bandwidth class-default can use?
    2) Let's say we are congested (class A, class B), but there is no traffic in the default class. How will the unused 20 Mbps be distributed between class A and class B?
    ++++++++++++++++++++++++++++++++++
    I am getting confusing answers. For example:
    From one of the blogs (don't want to pick on the author, so I won't name it):
    "You'll want to configure a bandwidth command under the class class-default. Otherwise, IOS will divide any unallocated bandwidth equally among all classes; this can result in class-default having a very small amount of bandwidth."
    The Cisco QoS documentation says:
    http://www.cisco.com/en/US/docs/ios/qos/command/reference/qos_cr.pdf
    From the above link:
    "The following output from the show policy-map interface command on serial interface 3/2 shows that 500 kbps of bandwidth is guaranteed for the class named voice1. The classes named class1 and class2 receive 50 percent and 25 percent of the remaining bandwidth, respectively. Any unallocated bandwidth is divided proportionally among class1, class2, and any best-effort traffic classes."
    Which one is the true statement?
    If the Cisco documentation is correct, then what proportion of the unallocated bandwidth is given to the default class, given that no bandwidth percentage is configured under the default class?
    Thanks

    #2 If there's just traffic for classes A and B, they will proportionally share it 20:60 or 1:3 (or 25%:75%).  (NB: the former is assuming they want "more".  If not, actual usage might not reflect bandwidth allocation.  For example, if class B only used/wanted 50% of link, class A could obtain the other 50%.)
    #1 I recall finding some blog that really went deep into what class-default gets when you don't explicitly allocate bandwidth. I also recall it may have been IOS version dependent, and dependent on whether FIFO or FQ was defined in class-default (this was also pre-HQF).
    It was very complicated, and IMO, best avoided by defining bandwidth in class-default, if class-default usage, relative to other defined classes, is important to your QoS policy.
    Generally, if something isn't clearly documented as expected behavior, I avoid relying on "discovered" actual behavior, because it might change with next IOS release.
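    For example, a minimal sketch of making class-default's share explicit, along the lines suggested above (percentages are only illustrative):
    policy-map EXAMPLE
     class A
      bandwidth percent 20
     class B
      bandwidth percent 60
     class class-default
      bandwidth percent 20
    That way class-default's share relative to the other classes is whatever you configured, rather than whatever a given platform or IOS version decides to give an unallocated class-default.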

  • Questions on CBWFQ

    Hi everybody,
    CBWFQ combines ideas from custom queuing (with a small enhancement, i.e., a percentage of interface bandwidth is used rather than byte counts) and weighted fair queuing, using class maps and policy maps.
    Here is my confusion.
    In custom queuing we have queues with byte counts, and the scheduler uses a round-robin method to empty them. For example:
    HTTP traffic is assigned to q1 with byte count 2000
    FTP traffic is assigned to q2 with byte count 200
    q1 will be serviced first until its byte count reaches zero, then the scheduler moves to q2 and services it until q2's byte count reaches zero.
    Here we have control over which traffic goes to q1, and thereby we can control which traffic gets serviced before the others.
    Now if we compare this with CBWFQ, specifically the custom-queuing-like portion of CBWFQ, we do not have this control.
    For example:
    class-map HTTP
     match protocol http
    class-map FTP
     match protocol ftp
    policy-map NEE
     class HTTP
      bandwidth percent 10
     class FTP
      bandwidth percent 20
    interface Serial0/0
     service-policy output NEE
    Above, we are saying HTTP traffic will get up to 10 percent of the bandwidth of s0/0, and FTP will get 20 percent.
    But which queue will be served first, the HTTP queue or the FTP queue? (My understanding is that in CBWFQ we still follow a round-robin algorithm to empty the queues, which means q1 will be serviced before q2. With CBWFQ we do not know which queue will be mapped to HTTP or FTP in my example.)
    thanks.
    have a great weekend

    "Above, we are saying HTTP traffic will get up to 10 percent of the bandwidth of s0/0, and FTP will get 20 percent."
    Yea, the policy says that, but that's only true if CBWFQ is active, all the classes are active, all the class allocations add to 100%, and all the classes are trying to exceed their allocations.
    "But which queue will be served first, the HTTP queue or the FTP queue? (My understanding is that in CBWFQ we still follow a round-robin algorithm to empty the queues, which means q1 will be serviced before q2. With CBWFQ we do not know which queue will be mapped to HTTP or FTP in my example.)"
    Actually, CBWFQ for non-LLQ classes is very similar to custom queuing. Queues are serviced to try to maintain the ratios between them. In your example, FTP will be dequeued such that it transmits twice as much data as HTTP, because of the 20 and 10 percentages, i.e., the 2:1 ratio between the two classes.
    If you wanted to know, from an "even start", which class would transmit first, I don't know, and as it isn't documented, I wouldn't count on it always being the same between platforms and/or IOS versions.
    As it's packets that are transmitted, during short time intervals, if there is a large difference between the packet sizes for HTTP vs. FTP, the 2:1 ratio won't be exact, but it should average out over a longer time interval.

  • Bandwidth allocation per vrf

    Hello,
    In my lab I have 3 sites, each with 3 VRFs configured. A diagram is attached. I would like to configure a fixed bandwidth for each VRF: the central VRF should have 768 kbps and the other ones should have 256 kbps each.
    What options do I have to achieve this?
    Thanks a lot in advance
    Alex

    Hi Alex
    Since you have already policed the bandwidth at the access, would there be any excess bandwidth that could leak past this policing?
    Besides, ideally you would configure your core with a standard LLQ+CBWFQ config and give priority to voice. In production you will have multiple customers, and you can't have such a bandwidth restriction in place.
    Also, no, you cannot police bandwidth in the core per VRF. At the same time I can think of a non-conventional way of doing it using TE, but that is a very bad way of doing it.
    Sent from Cisco Technical Support Android App

  • QoS deterioration with increase in bandwidth utilisation

    We are an MPLS service provider providing MPLS services in India. We notice that voice quality deteriorates as soon as the bandwidth utilisation of a customer link goes above 60% of the link bandwidth, particularly with FTP. We have implemented QoS properly. I am told that QoS is ineffective if bandwidth utilisation increases above 60% of the link bandwidth, and that the customer should be advised to increase the bandwidth. Is this true? Please help.

    Hi
    AFAIK, if you configure strict priority for your voice traffic (i.e., LLQ for voice), it shouldn't be affected at all regardless of bandwidth utilisation, since LLQ reserves a particular amount of bandwidth for your VoIP traffic, which can be configured manually.
    You do this in your policy-map configuration; I hope you already have LLQ in place for voice, otherwise I would suggest looking into that and trying it out.
    And when there is no congestion, that is, the hardware queue is ample enough to serve your traffic, the (manually configured) software queues are bypassed; if there is congestion, the software queues kick in.
    So in a typical customer network with VoIP and other traffic, I would go with LLQ for VoIP and CBWFQ for the other traffic, classified by DSCP / IP precedence values according to the traffic patterns.
    To go further, you can also use LFI to fragment large packets so that your VoIP packets don't get stuck behind them.
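    As a rough sketch of that kind of policy (class names, DSCP values and percentages are illustrative assumptions, not a tested config):
    class-map match-any VOICE
     match dscp ef
    class-map match-any BUSINESS-DATA
     match dscp af21
    policy-map WAN-EDGE
     class VOICE
      priority percent 30
     class BUSINESS-DATA
      bandwidth percent 40
     class class-default
      fair-queue
    interface Serial0/0
     service-policy output WAN-EDGE
    With LLQ in place and the priority class sized above the actual voice load, voice should stay clean even at high link utilisation.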
    regds

  • Questions about the bandwidth.

    Let me explain my existing environment as follows:
    - I have two 2801 routers with C2801-SPSERVICESK9-M IOS
    - Each router has a four-port FXS card connected to the phones
    - The bandwidth between these routers is 256 kbps (leased line)
    My questions are as follows:
    1. How much bandwidth does a call consume (per call)?
    2. To ensure that VoIP does not exceed the maximum bandwidth of the leased line, and to ensure that the priority of VoIP is always higher than the priority of data, I will use RSVP and CBWFQ with IP precedence. Are these features suitable for this situation?
    Thanks!! : )

    Hi,
    regarding bandwidth consumption, it depends on the codec you are going to use.
    Of course, G.729 uses a minimal amount of bandwidth, and that is also reflected in the quality.
    If you use some kind of RTP header compression (cRTP), that can reduce the packet size drastically, so you will be able to use the bandwidth much more effectively.
    Regarding the priority part, you can always use LLQ to give high priority to your voice traffic and deploy CBWFQ for your data traffic.
    If you want sample configs, I would suggest searching cisco.com for the same; if you need further assistance, do revert.
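    As a very rough rule of thumb (assuming G.729 at 50 packets per second over a PPP leased line), one call takes about 26.4 kbps with full IP/UDP/RTP headers, or roughly 11-12 kbps with cRTP. A minimal, untested LLQ sketch for the 256 kbps line, sized for two G.729 calls (names and values are illustrative assumptions):
    class-map match-any VOIP
     match ip precedence 5
    policy-map LL-256K
     class VOIP
      ! ~56 kbps covers two G.729 calls without cRTP
      priority 56
     class class-default
      fair-queue
    interface Serial0/0
     bandwidth 256
     ip rtp header-compression
     service-policy output LL-256K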
    regds

  • Ask a CBWFQ and LLQ question

    We configure QoS on an MPLS network, using LLQ within CBWFQ. I have a question.
    If I configure LLQ for 1 Mbps on a 2 Mbps E1 line, business data at 0.5 Mbps (CBWFQ), and the default class at 0.5 Mbps (CBWFQ), can the LLQ class burst to 2 Mbps if the line is not congested? Or can the business data or default class burst to 2 Mbps when the line is not congested?
    If the line is congested, can the LLQ, business, and default classes get their guaranteed bandwidth?
    I see that when I configure LLQ, I can use the burst parameter:
    priority bandwidth burst
    When I need LLQ to burst to 2 Mbps, do I need to configure this burst parameter, or can I get the burst without configuring it?
    thank you!
    Tom

    (Yet another explanation.)
    "if i config LLQ for bandwidth 1m on 2M E1 line, and business data 0.5M(CBWFQ),defautl class 0.5M(CBWFQ). can i config this llq burst to 2M if line does not congest."
    LLQ should allow up to the full 2 Mbps, if there's no congestion. If there's congestion, LLQ will police itself at the rate specified. More on bursting, below.
    "or can i config business data or default class burst to 2M when line does not congest."
    Either the business data or default classes should also be able to obtain full bandwidth, if there's no congestion. Excess demand will queue the excess packets.
    "IF line congest. can llq and business,default class get gurantee bandwith?"
    Yes, if there's congestion, LLQ will police its traffic, i.e. immediately drop excess, other classes will queue excess traffic; will drop if queues overflow (and/or WRED drop).
    "i find when i config llq,i can use burst parameter:
    priority bandwidth burst
    when i need llq to burst to 2m, Does i need config this burst parameter. or i don't need config this burst parameter, i can get burst too."
    If you specify burst, you're really adjusting the time interval the policer operates over. I believe the default is based on 200 ms, so if there's more than 25,000 bytes (1 Mbps * 0.2 sec / 8 bits/byte), the excess will be dropped. (All the non-dropped bytes will transmit at full rate.)
    When to adjust burst? If your traffic's long term average is under your LLQ bandwidth setting, but short term traffic rate is very variable (e.g. many video streams), you should increase the burst size to avoid dropping packets.
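    As an illustration, a minimal sketch of the E1 policy with an explicit burst on the priority class (values are only examples; the burst shown is simply double the ~25,000-byte default derived above):
    policy-map E1-QOS
     class VOICE
      ! 1 Mbps priority, burst raised to 50,000 bytes
      priority 1000 50000
     class BUSINESS
      bandwidth 500
     class class-default
      bandwidth 500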

  • Internet bandwidth management using cisco devices

    Hi all
    I have a need to maintain an internal SLA with some of the clients that are sitting on our premises. They want reassurance that, of the total 16 Mbps leased internet link we have to the cloud, a minimum of 4 Mbps is assured to them at all times. I have a web gateway / proxy appliance that doesn't have such functionality.
    I also have ASA and 2800 routers at the edge. Can I do this using them in any way? What are my other 3rd party options?

    "When you define the class 'cust1', where do we pick the 'cust1' value from and the class contituents? "
    That would be defined in a "class-map". The attributes you can match on are, depending on the platform, an enhancement of ACLs (although ACLs can be part of it). There is lots of information on the Cisco site; look for CBWFQ.
    "Can I use my Windows Radius 802.1x (IAS) integratiion with my Router to define the AD groups within my router classes? "
    Don't believe so, unless you can somehow have AD groups tag packets. (Might be possible at the host level.)
    "Unfortunately, the clients are spread much beyong the value of 4, they are about 50 and very dynamically changing within my AD groups."
    As in the prior answer, the AD relationship can be an issue. On the issue of 50 groups, you can define that many classes with CBWFQ (more, actually), but the reason I noted 4 is that beyond 4 groups you can't guarantee all 50 groups 4 Mbps each unless you have 50x that bandwidth. You might instead guarantee each group an equal share of the bandwidth.
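    For what it's worth, a minimal sketch of guaranteeing one such client 4 Mbps on a 2800 WAN interface (the ACL number, subnet and interface are hypothetical placeholders):
    access-list 110 permit ip 10.1.1.0 0.0.0.255 any
    class-map match-all CUST1
     match access-group 110
    policy-map INTERNET-EDGE
     class CUST1
      bandwidth 4000
     class class-default
      fair-queue
    interface FastEthernet0/0
     service-policy output INTERNET-EDGE
    As in the thread at the top of this page, if the physical interface is faster than the 16 Mbps service, this would normally be nested under a parent shaper set to 16 Mbps so that the guarantee actually has congestion to act on.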

  • CBWFQ - How is a class's unused bandwidth allocated during congestion

    Hi.
    I've been reading the forum and the net trying to get a straight answer on this and can't seem to find anything concrete.
    So basically, I have a CBWFQ setup with three classes, AF3x (55%), AF2x (30%) and class-default (15%), applied outgoing on a WAN interface.
    policy-map CBWFQ-POLICY
     class Af3x
      bandwidth percent 55
      random-detect
     class Af2x
      bandwidth percent 30
      random-detect
     class class-default
      bandwidth percent 15
      random-detect
    Now I get that in normal operation all classes can burst anyway, and that during congestion traffic in a specific class is guaranteed its bandwidth.
    But how is unused bandwidth from a class allocated during congestion? For example, say the link is congested primarily with AF2x and default traffic but there is spare capacity in AF3x; for this example, say 30% of AF3x is still available.
    Will the router split it evenly, 15% to AF2x and 15% to default, or will it split it proportionally based on the original bandwidth allocations, i.e., 20% to AF2x and 10% to default?
    Thanks

    What actually happens is the scheduler assigns weights to classes based on the bandwidth specification.  Class flows get their relative bandwidth based on the weight.  So, for example, if your class AF3x was using no bandwidth, and AF2x and class-default wanted all the bandwidth, they would split it 2:1 (30:15).

  • RV042 bandwidth management issues

    I am trying to set up my router to grant HTTP traffic a minimum bandwidth of, for example, 5,000 kbit/s (if there is any HTTP traffic).
    So I set the HTTP min. rate to 5,000 while I set the NNTP min. rate to 1 (see the enclosed screenshot).
    However, when I run NNTP downloads on several connections (e.g., 10), my single HTTP download never goes above 1,000 kbit/s. Without any other connections I reach 8,000 kbit/s.
    By the way, I am using a single 12 Mbit/s line.
    What is wrong?
    Thank you for your help!
    Otto

    I am looking at your settings and it seems that the router is functioning properly. There are also a lot of other factors that you may not be considering. If you do a speed test, what speeds are you getting? If you take the router out of the path and do the same test, what download speeds do you get?
    Also, I see that when you have only the single download running, you expect to get the maximum bandwidth you have selected. If you download the same file without the router in place, do you get the same outcome? If it is different, how much difference is there? Most of the time, if your maximum download speed is 10 to 12 Mbit/s, you will get roughly a third of that when downloading files; it also depends on the device at the other end.
    What were you getting before you created any of these rules?
    Thanks
    Q

  • CBWFQ with Q-in-Q (ME-3600X-24TS-M)

    Hi!
    I wonder if it's possible to mark inbound traffic on the L2 interfaces and then use CBWFQ on the Q-in-Q trunks for bandwidth management, or do you need routed interfaces between the two ME3600 switches to be able to use CBWFQ?

    Hi Juan,
    Yes that is correct.
    You can now enable the additional TenGig ports with cmd: “sdm prefer 4” (for two additional 10GE ports) or “sdm prefer 3” (for 1 additional 10GE port).
    adam

  • CBWFQ and Priority Q Scheduling with IOS

    All,
    I have a question in regards to scheduling in QoS.
    I have 2 priority queues below (both priority queues go into one queue, we believe), and 3 CBWFQ classes.
    The question is, how are these queues scheduled? I know that the priority queues will be emptied before moving on to the CBWFQ classes.
    On the CBWFQ side, how are these scheduled: in a round-robin way, or does the amount of service a queue gets increase with the size of its configured bandwidth statement (like custom queuing, where a bigger queue in terms of bytes equates to more time spent emptying that queue before moving on to the next one)?
    policy-map carrier_cos
     class carrier_EF
      priority
      police 1605000 8000 8000 conform-action transmit exceed-action drop
     class carrier_AF4
      priority
      police 1530000 71500 71500 conform-action transmit exceed-action drop
     class carrier_AF1
      bandwidth 6120
     class carrier_bulk
      bandwidth 5480
     class class-default
      bandwidth 15265
    Sounds to me like CBWFQ is a mixture of priority queuing and custom queuing.
    Many kind regards for you help with this question.
    Ken

    Thanks for that.
    So, say I have a 1 Mbps circuit with CBWFQ running:
    CBWFQ 1 = bandwidth 600
    CBWFQ 2 = bandwidth 200
    CBWFQ 3 = bandwidth 200 (class class-default)
    Assume the interface is congested. How does the scheduler work?
    Is it that, within a time period (let's say 1 second), it will spend 600 ms servicing Q1, then move on to Q2 and spend 200 ms on that queue, before moving on to Q3 where it will spend 200 ms servicing that queue, and then go back to Q1?
    Correct me if I'm wrong, but is that the same as custom queuing?
    One second is a long time, so what time interval is actually used?
    Remember, it is a 1 Mbps circuit.
    Kindest regards, and many thx.
    Ken

  • CBWFQ & IPSec VPN

    Hello,
    We have an IPSec tunnel established between our office and another site using 2 ASA 5510s running 8.0(3).
    We have a T1 connecting these sites. I want to be able to use CBWFQ on the serial interfaces of the routers. How can I "copy" the DSCP value into the IP header of the ESP packet on the ASA, if the DSCP is set on the ingress interface of the ASA? I want certain VPN traffic to be placed into different queues on the serial interfaces. I see there is a "qos pre-classify" command that exists for routers. Does the ASA have something similar? If not, what can I do?
    Thanks!

    I agree with Farrukh.
    According to the Cisco SRND:
    "In Cisco AVVID solutions, the IP Phone and gateways provide the capability to set the ToS byte so routers can make the appropriate QoS decision. However, most data applications do not set the ToS byte, and queuing decisions must be based on other fields of the IP header, including source/destination IP address, port numbers, and protocol. Once the original IP packet is encrypted by IPSec, fields other than the ToS byte, such as port numbers, protocol and source/destination IP address fields, are no longer in clear text and cannot match an output service policy. QoS Pre-Classify is a Cisco IOS software feature to allow fancy queuing, CBWFQ/WFQ, at the output interface to match on these other fields in the original IP header, even after the original IP header is encrypted."
    However, on the ASA you can use matching in the class map based on the VPN tunnel-group that you have; in that case you can play with priority or bandwidth limitation.
    Check the following link:
    PIX/ASA 7.x and Later: Bandwidth Management(Rate Limit) Using QoS Policies
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a008084de0c.shtml
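    Since the ToS/DSCP byte is preserved in the outer ESP header (as the SRND excerpt above implies), another option is to classify purely on DSCP on the routers' serial interfaces. A minimal, untested sketch (class names and DSCP values are assumptions):
    class-map match-any VPN-VOICE
     match dscp ef
    policy-map T1-EDGE
     class VPN-VOICE
      priority percent 30
     class class-default
      fair-queue
    interface Serial0/0
     service-policy output T1-EDGE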
    Good luck.
    Please rate if helpful.

  • Virtual Switch Bandwidth

    Running Windows Server 2012 DTC, and creating an LACP NIC team of two 10GbE network adapters results in a network interface that could theoretically support up to 20Gbps. 
    If a Hyper-V virtual switch is created on top of this interface it appears that the bandwidth is knocked down to 10Gbps. 
    My question is that if multiple vNICs are created and attached to this vSwitch ... could the sum of these exceed 10Gbps?
    Or is the Hyper-V vSwitch capped out at 10Gbps right now?

    There is a common misperception that LACP grants magical bandwidth aggregation powers. This is false. LACP has no more bandwidth aggregation capabilities than a static team. What it adds is diagnostic and dynamic reconfiguration functionality.
    Yes, your team of 2 10Gbps NICs can provide a theoretical maximum of 20 Gbps, virtual switch or not. However, a single TCP stream cannot take two separate hardware paths, so any one stream will be limited to a theoretical maximum of 10 Gbps. The new Dynamic
    load-balancing method in 2012 R2 changes the rules on this, but that's a discussion for a different time.
    What you're going to find, though, is that 10 Gbps is really fast. Fast enough that your CPUs are going to struggle at times, depending on what's going on. The virtual switch adds the need for some additional processing overhead, and with 10 Gbps adapters,
    you're going to be easily able to see the difference. If you get enough different streams going across enough different sources and destinations, you will break 10 Gbps, but be careful what you expect.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."
