Bandwidth weight

Hello everyone,
I have a test lab with a 2012 R2 Hyper-V host, VMM 2012 R2 and two VMs running 2012 R2.
The host has two 1 Gbps pNICs.
I've deployed a logical switch to the host and teamed the adapters.
For the VMs I used two different port profiles:
1: Minimum Bandwidth Weight 1
2: Minimum Bandwidth Weight 100
The problem is that Minimum Bandwidth Weight has no impact on the network bandwidth test results (pic below).
I'm using jperf for the tests. The Hyper-V host is the jperf server and the VMs are the clients.
How does the Minimum Bandwidth Weight setting really work?
Thanks.

Minimum Bandwidth Weight controls how much bandwidth a virtual network adapter is guaranteed relative to the other
virtual network adapters connected to the same virtual switch; the weight is a relative share, not an absolute rate.
It only takes effect under contention: as long as there is sufficient bandwidth available, you won't see any difference between the two profiles here.
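To make the relative nature of the weights concrete, here is a small arithmetic sketch (illustrative only, not Hyper-V's actual scheduler code) of how weights translate into guaranteed shares under contention:

```python
# Sketch: how Hyper-V minimum bandwidth *weights* translate into
# guaranteed shares under contention. Weights are relative, not
# absolute rates; with no congestion each vNIC may use the full link.

def min_bandwidth_shares(weights, link_bps):
    """Return each adapter's guaranteed share of link_bps, proportional
    to its weight relative to all adapters on the same vSwitch."""
    total = sum(weights.values())
    return {name: link_bps * w / total for name, w in weights.items()}

# The two port profiles from the question, on a 1 Gbps pNIC:
shares = min_bandwidth_shares({"vm1": 1, "vm2": 100}, 1_000_000_000)
```

With only these two adapters contending, vm2 is guaranteed roughly 100 times vm1's share; with the link uncongested, as in the jperf test, neither guarantee ever kicks in, which matches the observed results.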
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

Similar Messages

  • Catalyst 3750 Bandwidth limiting

    Hi,
    on the Cat3750 you can limit the bandwidth on an egress interface using the SRR command:
    srr-queue bandwidth limit "weight1"
    weight1 is a variable in the range from 10 to 100 (default).
    This means that on a 3750G gigabit interface the egress queue can be limited only down to 100 Mbit/s (setting weight1=10)... or is the switch able to interpret the weight in relation to the port settings?
    For example, a gigabit port is connected to a device and the settings are 100FD:
    if srr weight1 is set to 10, is the egress BW 100 Mbit/s, seeing it's a gig interface, or 10 Mbit/s, seeing it's a gig interface BUT connected at 100FD?
    Thanks for a reply
    Omar

    Under which IOS image could you find this command?
    For example, on a stack of three 3750Gs running 12.2(25)SEC2, the command is as follows:
    Switch(config)#mls qos srr-queue input bandwidth ?
    <1-100> enter bandwidth weight for queue id 1
    and there are 2 queues
    or
    Switch(config)#mls qos srr-queue output ?
    cos-map Configure cos-map for a queue id
    dscp-map Configure dscp-map for a queue id
    The 1-100 values are weights that act on the port's settings. Traffic is serviced depending upon
    its class of service (CoS) or differentiated services code point (DSCP) designation, and this command belongs to the traffic-conditioning commands.
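The question above boils down to simple arithmetic: `srr-queue bandwidth limit <weight>` caps egress at roughly weight% of the port speed. Whether "port speed" means the nominal gig rate or the negotiated rate (e.g. 100FD) is exactly what is being asked; my understanding of the 3750 documentation is that the limit applies to the operating (negotiated) speed, but treat that as an assumption to verify on your platform:

```python
# Sketch of the arithmetic behind `srr-queue bandwidth limit <weight>`:
# egress is capped at about weight% of the port speed. Assumption: the
# percentage applies to the *operating* (negotiated) speed, per the
# 3750 documentation -- verify on your hardware.

def srr_limited_rate(port_speed_mbps, weight):
    if not 10 <= weight <= 100:
        raise ValueError("weight must be 10-100")
    return port_speed_mbps * weight / 100

gig_limit = srr_limited_rate(1000, 10)   # gig port at nominal speed
fd100_limit = srr_limited_rate(100, 10)  # same port negotiated at 100FD
```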

  • Network Questions on 2012 R2 Hyper-V Cluster

    I am going through the setup and configuration of a clustered Windows Server 2012 R2 Hyper-V host. 
    I’ve followed as much documentation as I can find, and the Cluster Validation is passing with flying colors, but I have three questions about the networking setup.
    Here’s an overview as well as a diagram of our configuration:
    We are running two Server 2012 R2 nodes on a Dell VRTX Blade Chassis. 
    We have four dual-port 10 GbE Intel NICs installed in the VRTX Chassis. 
    We have two Netgear 12-port 10 GbE switches, both uplinked to our network backbone switch.
    Here’s what I’ve done on each 2012 R2 node:
    -Created a NIC team using two 10GBe ports from separate physical cards in the blade chassis.
    -Created a Virtual Switch using this team called “Cluster Switch” with “ManagementOS” specified.
    -Created 3 virtual NICs that connect to this “Cluster Switch”: 
    Management (10.1.10.x), Cluster (172.16.1.x), Live Migration (172.16.2.x)
    -Set up VLAN ID 200 on the Cluster NIC using Powershell.
    -Set Bandwidth Weight on each of the 3 NICs.  Management has 5, Cluster has 40, Live Migration has 20.
    -Set a Default Minimum Bandwidth for the switch at 35 (for the VM traffic.)
    -Created two virtual switches for iSCSI both with 
    “-AllowManagementOS $false” specified.
    -Each of these switches is using a 10GBe port from separate physical cards in the blade chassis.
    -Created a virtual NIC for each of the virtual switches: 
    ISCSI1 (172.16.3.x) and ISCSI2 (172.16.4.x)
    Here’s what I’ve done on the Netgear 10GB switches:
    -Created a LAG using two ports on each switch to connect them together.
    -Currently, I have no traffic going across the LAG as I’m not sure how I should configure it.
    -Spread out the network connections over each Netgear switch so traffic from the virtual switch “Cluster Switch” on each node is connected to both Netgear 10 GB switches.
    -Connected each virtual iSCSI switch from each node to its own port on each Netgear switch.
    First Question:  As I mentioned, the cluster validation wizard thinks everything is great. 
    But what about the traffic the Host and Guest VMs use to communicate with the rest of the corporate network? 
    That traffic is on the same subnet as the Management NIC. 
    Should the Management traffic be on that same corporate subnet, or should it be on its own subnet? 
    If Management is on its own subnet, then how do I manage the cluster from the corporate network? 
    I feel like I’m missing something simple here.
    Second Question:  Do I even need to implement VLANS in this configuration? 
    Since everything is on its own subnet, I don’t see the need.
    Third Question:  I’m confused how the LAG will work between the two 10 Gbe switches when both have separate uplinks to the backbone switch. 
    I see diagrams that show this setup, but I’m not sure how to achieve it without causing a loop.
    Thanks!

    "First Question:  As I mentioned, the cluster validation wizard thinks everything is great. 
    But what about the traffic the Host and Guest VMs use to communicate with the rest of the corporate network? 
    That traffic is on the same subnet as the Management NIC. 
    Should the Management traffic be on that same corporate subnet, or should it be on its own subnet? 
    If Management is on its own subnet, then how do I manage the cluster from the corporate network? 
    I feel like I’m missing something simple here."
    This is an operational question, not a technical question.  You can have all VM and management traffic on the same network if you want.  If you want to isolate the two, you can do that, too.  Generally, recommended
    practice is to create separate networks for host management and VM access, but it is not a strict requirement.
    "Second Question:  Do I even need to implement VLANS in this configuration? 
    Since everything is on its own subnet, I don’t see the need."
    No, you don't need VLANs if separation by IP subnet is sufficient.  VLANs provide a level of security against snooping that simple subnet isolation does not provide.  Again, it's up to you how you want to configure things. 
    I've done it both ways, and it works both ways.
    "Third Question:  I’m confused how the LAG will work between the two 10 Gbe switches when both have separate uplinks to the backbone switch. 
    I see diagrams that show this setup, but I’m not sure how to achieve it without causing a loop."
    This is pretty much outside the bounds of a clustering question.  You might want to take network configuration questions to a networking forum, or talk with a Netgear specialist.  Different networking
    vendors can accomplish this in different ways.
    .:|:.:|:. tim
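One point worth noting about the node configuration above: the weight values chosen (Management 5, Cluster 40, Live Migration 20, default 35 for VMs) happen to sum to 100, so under contention each reads directly as a minimum percentage of the team. A quick check, assuming a 20 Gb team from the two 10 GbE ports described in the post:

```python
# Minimum-bandwidth weights from the node config above. Weights are
# relative; because these sum to 100 they map directly to percentages.
weights = {"Management": 5, "Cluster": 40, "LiveMigration": 20, "Default(VMs)": 35}
team_gbps = 20  # two teamed 10 GbE ports (assumption from the post)

total = sum(weights.values())
min_gbps = {name: team_gbps * w / total for name, w in weights.items()}
```

So under full contention the Cluster vNIC would be guaranteed 8 Gbps and Management 1 Gbps; with the link uncongested, any vNIC may use more.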

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent,
    dynamic) with two 1 GBit/s NICs which is used only for management and transfers. Both NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 MBit/s, even though the team is able to saturate both interfaces with data rates going close to the 2GBit/s boundary when transferring simple files over SMB. The
    storage migration seems to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit to 700 MBit/s, even over a (pretty much) dedicated
    interface which can handle more traffic when transferring simple files. Is this a known limitation wrt. teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured on the live migrations (rather, it's set to SMB), but as far as I understand, this is not relevant for the storage migration part of the shared-nothing live migration anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames
    (9k) have been enabled and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding Team is not bound to a Hyper-V Switch, but rather is just a "normal" Team with IP configuration), Hyper-V port does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC but just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
    As I'm able to transfer close to 2GBit/s SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the
    team; looking at the TCP-connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.

  • Tweaking QoS port parameters and policing

    Hi,
    Is there a mathematical method of configuring bandwidth weights and queue limits or is it more art than science? For example, using the 3550 series switches, when you perform auto-qos, it chooses the following parameters:
    wrr-queue bandwidth 10 20 70 1
    wrr-queue queue-limit 50 25 15 10
    I need to know how these values were chosen, in order to understand how changing them affects the overall queueing process. Is there some kind of best-practice (recommended) set of values? I notice a pattern: the bandwidth weights, with the exception of the priority queue (qid 4), favor the higher-numbered queues; whereas the queue-limit values are lower for higher-priority traffic, i.e. they get the smallest slice of the egress buffers.
    Also, the burst-byte parameter in policing under the policy map: how do you obtain an appropriate value for this, and how does it relate to the access rate?
    Auto-qos gives the same 8000-byte value for the burst in both cases, see below:
    policy-map AutoQoS-Police-SoftPhone
    class AutoQoS-VoIP-RTP-Trust
    set dscp ef
    police 320000 8000 exceed-action policed-dscp-transmit
    class AutoQoS-VoIP-Control-Trust
    set dscp cs3
    police 32000 8000 exceed-action policed-dscp-transmit
    Any help is greatly appreciated.
    Many thanks

    To allocate bandwidth between standard transmit queue 1 (low priority) and standard transmit queue 2 (high priority), use the wrr-queue bandwidth command. Use the no form of this command to return to the default settings.
    http://www.cisco.com/en/US/products/hw/switches/ps708/products_command_reference_chapter09186a00801026fa.html#wp1085797
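As for the "mathematical method": WRR weights are relative, so each non-priority queue's share is its weight divided by the sum of the active weights. A sketch of the `wrr-queue bandwidth 10 20 70 1` example (assumed behavior: when queue 4 is enabled as the priority/expedite queue, its weight is ignored and the other queues split the leftover proportionally):

```python
# Sketch: WRR weights -> relative egress shares for the 3550-style
# `wrr-queue bandwidth 10 20 70 1`. Assumption: queue 4 acts as the
# priority (expedite) queue, so its weight is excluded from the split.

def wrr_shares(weights, priority_queue=None):
    active = {q: w for q, w in enumerate(weights, start=1)
              if q != priority_queue}
    total = sum(active.values())
    return {q: w / total for q, w in active.items()}

shares = wrr_shares([10, 20, 70, 1], priority_queue=4)
```

With those weights, queues 1-3 get 10%, 20% and 70% of whatever the priority queue leaves unused; there is no deeper formula behind the auto-qos numbers than this proportionality plus Cisco's chosen defaults.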

  • Catalyst 3750 Ingress SPQ/SRR behavior

    Do Cisco engineers review this community at all?
    I am working on the latest version of QoS standard for our Enterprise and noticed the following conflicting information officially provided by Cisco.
    My question relates to ingress/pre-ring Strict Priority Queue (SPQ) logic.
    Cisco Catalyst 3750 QoS Configuration Examples document states that SPQ on ingress is configured and serviced as follows
    mls qos srr-queue input priority-queue 2 bandwidth 10
    mls qos srr-queue input bandwidth 90 10
    SPQ services Q2 up to the configured 10% of ingress bandwidth
    Any excessive traffic in Q2 is not dropped, but is serviced by SRR in accordance with the configured weights
    For example, a momentary 5 Gbps of aggregated ingress EF traffic will be serviced in the following way:
    SPQ services 10% of the total ring's bandwidth, or 3.2 Gbps, leaving 1.8 Gbps for SRR processing
    SRR services the excess 1.8 Gbps in accordance with the weights Q1 = 90 and Q2 = 10, such that Q1 gets 25.92 Gbps and Q2 gets 2.88 Gbps more.
    (The original post includes a picture with an in-depth look into the ingress queuing logic.)
    Alternatively, Cisco Medianet Campus Design v4.0 provides the following example w/ comments
    C3750-E(config)#mls qos srr-queue input priority-queue 2 bandwidth 30
    ! Q2 is enabled as a strict-priority ingress queue with 30% BW
    C3750-E(config)#mls qos srr-queue input bandwidth 70 30
    ! Q1 is assigned 70% BW via SRR shared weights
    ! Q2 SRR shared weight is ignored (as it has been configured as a PQ)
    Basically, they now say the Q2 bandwidth weight is ignored because it is configured as a Strict Priority Queue.  Doesn't that look contradictory?
    In my humble opinion, Medianet (or SRND v4.0!!!) provides incorrect information regarding ingress queuing on the Catalyst 3750 platform.
    I am not sure I can easily test it, given that the internal ring must experience congestion. I don't think I can send more than 32 Gbps of traffic into any of my lab 3750 switches.
    Also, I don't think this mistake can be critical in my environment as I don't expect to have momentary full capacity load on those... but it can be critical for others.
    Much appreciate
    Tim
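The first interpretation's arithmetic can be laid out explicitly (assuming a 32 Gbps internal ring, as the poster does; this is a sketch of the poster's numbers, not a statement of which Cisco document is right):

```python
# Arithmetic from the QoS Configuration Examples interpretation above,
# assuming a 32 Gbps internal ring: the ingress PQ is serviced up to
# its configured 10%, and the excess EF traffic plus everything else
# falls back to SRR with the 90/10 shared weights.

RING_GBPS = 32
pq_service = RING_GBPS * 10 / 100   # 3.2 Gbps served strictly by SPQ
ef_excess = 5 - pq_service          # 1.8 Gbps of EF left for SRR
srr_pool = RING_GBPS - pq_service   # 28.8 Gbps handled by SRR
q1_share = srr_pool * 90 / 100      # 25.92 Gbps for Q1
q2_share = srr_pool * 10 / 100      # 2.88 Gbps more for Q2
```

Under the Medianet interpretation, by contrast, the second line of the config would apply only to Q1, and Q2's shared weight would simply be ignored.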

  • Mls qos map

    Hello,
    I have two switches with ios, I want to configure the qos, but I have the error message:
    Will I need to update the IOS, or is it the switch platform that does not support this command?
    Switch Ports Model SW Version SW Image
    * 1 50 WS-C2960-48TT-S 12.2 (46) SE C2960-LANLITEK9-M
    * 1 50 WS-C2960-48TC-S 12.2 (50) SE5 C2960-LANLITEK9-M
    Error:
    SW(config)#mls qos map policed-dscp 0 10 18 to 8
                                      ^
    Best Regards

    I am answering this myself.
    SW7(config)#mls qos srr-queue input bandwidth ?
      <1-100>  enter bandwidth weight for queue id 1
    SW7(config)#mls qos srr-queue input bandwidth 1 ?
      <1-100>  enter bandwidth weight for queue id 2
    SW7(config)#mls qos srr-queue input bandwidth 1 1 ?
      <cr>
    It's the weight that we specify here, not the percentage directly.
    SW2#show mls qos input-queue 
    Queue     :       1       2
    buffers   :      90      10
    bandwidth :     100     100
    priority  :      40       0
    threshold1:     100     100
    threshold2:     100     100
    This will give 40% priority bandwidth to the priority queue, and the remaining 60% will be shared equally between both queues, as their weights are the same.
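The split described in that answer is easy to verify as arithmetic (illustrative sketch matching the `show mls qos input-queue` output above, not switch code):

```python
# Sketch matching the `show mls qos input-queue` output: the priority
# queue is guaranteed its configured 40% of the internal ring first,
# and the remainder is divided by SRR weight (here 100:100, i.e.
# equally).

def input_queue_split(priority_pct, weights):
    remaining = 100 - priority_pct
    total_w = sum(weights)
    return [remaining * w / total_w for w in weights]

shares = input_queue_split(40, [100, 100])  # SRR share per queue, in %
```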
    CF

  • Bandwidth percent - ios xr

    hi everyone,
    in a policy map - when you configure your classes - ex
    class 1
    bandwidth percent 25
    class 2
    bandwidth percent 25
    class 3
    bandwidth percent 25
    class 4
    bandwidth percent 25
    - will this negate p2mdrr or mddr? I.e., have I configured the policy to "not" have any remaining available bandwidth for any other classes?
    thanks,
    Andrew

    hi andrew,
    bw percent gives you a defined, assigned CIR based on the parent shaping bandwidth.
    bandwidth remaining (percent) gets a modified deficit rate of the left-over bandwidth after all the classes that have an assigned CIR have been served.
    You can see how the programming of the shaped classes has been done with the command show qos int <interface> <direction>, which gives you the CIR currently running on that class and its (excess) weight ratio as determined by the scheduler, the available BW, and all that.
    cheers
    xander
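Xander's distinction can be sketched as arithmetic (illustrative model, not IOS XR scheduler code): `bandwidth percent` derives a CIR from the parent shaping rate, and `bandwidth remaining` divides only what is left after all CIR classes are served.

```python
# Sketch: `bandwidth percent` assigns each class a CIR as a fraction of
# the parent shaping rate; whatever is unclaimed is the pool that
# `bandwidth remaining percent` would divide.

def cir_allocations(parent_rate, class_percents):
    cirs = {c: parent_rate * p / 100 for c, p in class_percents.items()}
    leftover = parent_rate - sum(cirs.values())
    return cirs, leftover

# Andrew's four classes at 25% each consume the whole parent rate,
# leaving nothing for any `bandwidth remaining` distribution:
cirs, leftover = cir_allocations(1000, {"c1": 25, "c2": 25, "c3": 25, "c4": 25})
```

So yes: with 4 x 25% configured, there is no remaining bandwidth left for other classes to receive via the deficit round-robin excess sharing.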

  • Available bandwidth and 'max-reserved bandwidth'

    Is max-reserved bandwidth only important when working with QoS classes and the bandwidth statement? Is the default 75% available bandwidth only used then?
    In other words, say I have a 100MB link with a service policy applied for Voice, Call-Control and Video, and I notice the available bandwidth on this 100MB link is 61280 kilobits/sec.
    If I put in 'max-reserved bandwidth 95', would I reclaim another 20MB of bandwidth for class-default? Would leaving 5% of the 100MB link for routing and other stuff be acceptable?
    Here is the config and show commands:
    class-map match-any Call-Control
    match ip dscp cs3
    match ip dscp af31
    class-map match-any Video
    match ip dscp af41
    class-map match-any Voice
    match ip dscp ef
    policy-map QOS_classes_to_ACN
    class Voice
    priority 10000
    class Call-Control
    bandwidth 500
    class Video
    bandwidth 3220
    class class-default
    fair-queue
    random-detect
    interface FastEthernet6/0
    description 100MB Link to ACN
    ip address xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx
    ip route-cache flow
    no ip mroute-cache
    load-interval 30
    duplex full
    speed 100
    service-policy output QOS_classes_to_ACN
    ROC-RT7206-QMOE#sh int f6/0
    FastEthernet6/0 is up, line protocol is up
    Hardware is i82543 (Livengood), address is 00b0.4a28.3ca8 (bia 00b0.4a28.3ca8)
    Description: 100MB Link to ACN
    Internet address is xxx.xxx.xxx.xxx/xx
    MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
    reliability 255/255, txload 183/255, rxload 21/255
    Encapsulation ARPA, loopback not set
    Keepalive set (10 sec)
    Full-duplex, 100Mb/s, 100BaseTX/FX
    ARP type: ARPA, ARP Timeout 04:00:00
    Last input 00:00:03, output 00:00:00, output hang never
    Last clearing of "show interface" counters 01:13:30
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 5211742
    Queueing strategy: Class-based queueing
    Output queue: 70/1000/64/5211742 (size/max total/threshold/drops)
    Conversations 2/35/256 (active/max active/max total)
    Reserved Conversations 2/2 (allocated/max allocated)
    Available Bandwidth 61280 kilobits/sec <--- Available bandwidth
    30 second input rate 8615000 bits/sec, 6860 packets/sec
    30 second output rate 71788000 bits/sec, 7484 packets/sec
    31692173 packets input, 4263195179 bytes
    Received 1204 broadcasts, 0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog
    0 input packets with dribble condition detected
    34536300 packets output, 2513155446 bytes, 0 underruns
    0 output errors, 0 collisions, 0 interface resets
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier
    0 output buffer failures, 0 output buffers swapped out

    Here is the output of show policy-map int:
    ROC-RT7206-QMOE#sh policy-map int f6/0
    FastEthernet6/0
    Service-policy output: QOS_classes_to_ACN
    Class-map: Voice (match-any)
    3417571 packets, 934178998 bytes
    30 second offered rate 1722000 bps, drop rate 0 bps
    Match: ip dscp ef (46)
    3417571 packets, 934178998 bytes
    30 second rate 1722000 bps
    Queueing
    Strict Priority
    Output Queue: Conversation 264
    Bandwidth 10000 (kbps) Burst 250000 (Bytes)
    (pkts matched/bytes matched) 1908656/521903140
    (total drops/bytes drops) 0/0
    Class-map: Call-Control (match-any)
    615085 packets, 48926098 bytes
    30 second offered rate 84000 bps, drop rate 0 bps
    Match: ip dscp cs3 (24)
    588857 packets, 47299978 bytes
    30 second rate 81000 bps
    Match: ip dscp af31 (26)
    26228 packets, 1626120 bytes
    30 second rate 2000 bps
    Queueing
    Output Queue: Conversation 265
    Bandwidth 500 (kbps) Max Threshold 64 (packets)
    (pkts matched/bytes matched) 337953/26882724
    (depth/total drops/no-buffer drops) 0/0/0
    Class-map: Video (match-any)
    146136 packets, 82165408 bytes
    30 second offered rate 90000 bps, drop rate 0 bps
    Match: ip dscp af41 (34)
    146136 packets, 82165408 bytes
    30 second rate 90000 bps
    Queueing
    Output Queue: Conversation 266
    Bandwidth 3220 (kbps) Max Threshold 64 (packets)
    (pkts matched/bytes matched) 81687/45950190
    (depth/total drops/no-buffer drops) 0/0/0
    Class-map: class-default (match-any)
    35227089 packets, 47492000208 bytes
    30 second offered rate 87718000 bps, drop rate 14714000 bps
    Match: any
    Queueing
    Flow Based Fair Queueing
    Maximum Number of Hashed Queues 256
    (total queued/total drops/no-buffer drops) 0/5171786/0
    exponential weight: 9
    class Transmitted Random drop Tail drop Minimum Maximum Mark
    pkts/bytes pkts/bytes pkts/bytes thresh thresh prob
    0 30181523/39910255774 1297726/1944176143 3893194/5836883998 20 40 1/10
    1 0/0 0/0 0/0 22 40 1/10
    2 0/0 0/0 0/0 24 40 1/10
    3 0/0 0/0 0/0 26 40 1/10
    4 0/0 0/0 0/0 28 40 1/10
    5 0/0 0/0 0/0 30 40 1/10
    6 1213/88749 0/0 0/0 32 40 1/10
    7 0/0 0/0 0/0 34 40 1/10
    rsvp 0/0 0/0 0/0 36 40 1/10
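The "Available Bandwidth 61280 kilobits/sec" in the show output is consistent with the default max-reserved-bandwidth of 75%: the reservable pool minus the priority and bandwidth guarantees already placed. A quick arithmetic check (sketch only):

```python
# Available bandwidth = (link * max-reserved-bandwidth%) minus the
# priority/bandwidth reservations already configured on the policy.

def available_bw(link_kbps, max_reserved_pct, reservations_kbps):
    reservable = link_kbps * max_reserved_pct // 100
    return reservable - sum(reservations_kbps)

reservations = [10000, 500, 3220]  # Voice priority, Call-Control, Video
default_avail = available_bw(100_000, 75, reservations)  # matches 61280
raised_avail = available_bw(100_000, 95, reservations)
```

So raising max-reserved-bandwidth to 95 would indeed reclaim about 20 Mbps for class-default; whether leaving only 5% of headroom for routing traffic is acceptable depends on the control-plane load on the link.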

  • Calculating Bandwidth

    How to calculate total bandwidth utilization of a circuit if i have In and Out data rate? Is there any formula?

    Um, monitoring the circuit using MRTG or something similar would be a good way of doing this. To work out the average utilization, you could do the following: Average_Input=Total_Input (in BITS)/ Time (in SEC), Avg_Out=Total_Out (BITS) / Time (sec)
    Eg:
    Last clearing of "show interface" counters 11w3d
    Input queue: 0/75/0 (size/max/drops); Total output drops: 2051
    Queueing strategy: weighted fair
    Output queue: 0/1000/64/2046 (size/max total/threshold/drops)
    Conversations 0/81/256 (active/max active/max total)
    Reserved Conversations 0/0 (allocated/max allocated)
    30 second input rate 33000 bits/sec, 3 packets/sec
    30 second output rate 7000 bits/sec, 2 packets/sec
    53836304 packets input, 2893045321 bytes, 0 no buffer
    Received 0 broadcasts, 1384 runts, 3 giants, 0 throttles
    1424 input errors, 2 CRC, 0 frame, 0 overrun, 0 ignored, 16 abort
    58226253 packets output, 1732843968 bytes, 0 underruns
    Avg_Input = 2893045321 bytes / 11w3d
    Avg_Input = 23144362568 bits / 80 days, so
    Avg_Input = 23144362568 bits / 6912000 seconds... Therefore
    Avg_Input = 3348 bits per second (bps) (Hehe, this is on a T3; the customer is paying us for a T3 for an average of 3.3 kbps!)
    This is purely an average. They could have been idle for 11w2d and then passed all of the traffic in the last day. Without historical data there is no way of knowing...
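The averaging method from the reply reduces to one line of arithmetic, shown here as a function applied to the T3 example's counters:

```python
# Average utilisation = total bits / elapsed seconds.

def avg_bps(total_bytes, elapsed_seconds):
    return total_bytes * 8 / elapsed_seconds

# The T3 example: 2893045321 input bytes over 11w3d (80 days).
elapsed = 80 * 24 * 3600          # 6,912,000 seconds
avg_input = avg_bps(2893045321, elapsed)   # about 3348 bps
```

As the reply notes, this is only a long-run average; instantaneous rates (the "30 second input rate" counters, or MRTG graphs) are what tell you about actual utilisation over time.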

  • Bandwidth and Police command

    I have seen this config in one of the examples in cisco site
    policy-map mqcp
    class hub
    bandwidth 200
    police cir 5000000
    Please help in understanding the bandwidth and police command setting in this example

    Bandwidth is a queuing mechanism (class-based weighted fair queuing) in which the bandwidth specified is reserved for the traffic when there is congestion. Policing is like committed access rate (CAR), which caps your bandwidth (it drops or remarks excess traffic rather than shaping it).
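The guarantee-versus-ceiling distinction in that reply can be sketched as a toy model (illustrative only, not IOS behavior code; the numeric values are hypothetical):

```python
# `bandwidth 200` is a CBWFQ *guarantee* that only matters during
# congestion -- the class may use more when the link has free capacity.
# `police cir 5000000` is a hard *ceiling* enforced at all times.

def cbwfq_rate(offered, guarantee, link_free_capacity):
    # Assured its guarantee under congestion; may borrow free capacity.
    return min(offered, max(guarantee, link_free_capacity))

def policed_rate(offered, cir):
    return min(offered, cir)   # excess is dropped/remarked, not queued

uncongested = cbwfq_rate(offered=4000, guarantee=200, link_free_capacity=6000)
congested = cbwfq_rate(offered=4000, guarantee=200, link_free_capacity=0)
capped = policed_rate(offered=8000, cir=5000)
```

In the example config the two combine: the class is guaranteed 200 kbps under congestion, yet can never exceed the 5 Mbps policed ceiling.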

  • Bandwidth over utillised

    Hi there, I am still new to networking.
    I am facing a bandwidth over-utilisation problem at a remote site. When I ping the E0 interfaces, at some times there are a lot of timeouts, and when I tried pinging a server it gave the same result.
    I have checked with my service provider, but they say my line is over-utilised; the serial interface is almost 90% utilised.
    The question is: how do I know, or how do I check from my router, whether the serial interface is fully utilised or the internal traffic is high? Should I check the Ethernet interface also?
    Any commands to check?
    Please help .

    Thanks for your prompt reply. I am afraid I can't make any changes on the router, because the router belongs to our ISP; I have only read access.
    I did not get you: what is the input/output rate?
    Below is the output from my serial interface
    Serial0 is up, line protocol is up
    Hardware is PowerQUICC Serial
    Description: Cisco
    Internet address is 1.1.1.1
    MTU 1500 bytes, BW 128 Kbit, DLY 20000 usec,
    reliability 255/255, txload 41/255, rxload 93/255
    Encapsulation HDLC, loopback not set
    Keepalive set (10 sec)
    Last input 00:00:00, output 00:00:00, output hang never
    Last clearing of "show interface" counters 8w6d
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 13106
    Queueing strategy: weighted fair
    Output queue: 0/1000/64/13106 (size/max total/threshold/drops)
    Conversations 0/17/32 (active/max active/max total)
    Reserved Conversations 1/1 (allocated/max allocated)
    Available Bandwidth 88 kilobits/sec
    5 minute input rate 47000 bits/sec, 28 packets/sec
    5 minute output rate 21000 bits/sec, 43 packets/sec
    41504534 packets input, 2178405105 bytes, 0 no buffer
    Received 545100 broadcasts, 0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
    54043477 packets output, 483672500 bytes, 0 underruns
    0 output errors, 0 collisions, 1 interface resets
    0 output buffer failures, 0 output buffers swapped out
    0 carrier transitions
    DCD=up DSR=up DTR=up RTS=up CTS=up
    Please clarify this for me; sorry, I am still new...

  • Video/Audio conference light weight Format in RTP

    Hi friends,
    is there any lightweight codec for audio/video for a video conferencing application, other than H.263, for smooth transmission and an efficient way to handle bandwidth over the internet? If so, please help me find it.
    Regards,
    -Venkat

    Hi,
    everyone is referring to the GSM format,
    but that one also has some delay during either transmission or reception.
    Arunkumar

  • ASR9K: bandwidth and bandwidth remaining cannot be used together. How to solve the problem to grant a quota and equally assign the remaining quota?

    Hi everyone
    The problem should be trivial. We want to grant a quota to specific classes and use equally the remaining quota of available bandwidth to all the requesting classes. Let's clarify with an example:
    Class 7 ==> priority queue, level 1 with police 20%
    Class 5 ==> priority queue, level 2 with police 40%
    Class 6 ==> CIR 12%
    Class 3 ==> CIR 11%
    Class 2 ==> CIR 8%
    Class 1 ==> CIR 5%
    Class 0 ==> CIR 4%
    To simplify, let's suppose that there is no traffic on classes 7 and 5 and that all remaining classes are generating traffic at a rate of 300Mbps each. The outgoing interface is 1G, so congestion occurs. We want each of classes 6, 3, 2, 1, 0 to receive its granted value (so, respectively, 120M, 110M, 80M, 50M and 40M, for a total of 400M), and the remaining available bandwidth (600M) to be equally assigned, so 120M to each class.
    Documentation for IOS-XR 5.2.2 suggests that this should be the default behavior, but if we run the policy shown below, what we get is a weighted assignment of the remaining quota.
    The policy used is the following:
    policy-map TEST-POLICY
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth percent 12
     class qos3
      bandwidth percent 11
     class qos2
      bandwidth percent 8
     class qos1
      bandwidth percent 5
     class qos0
      bandwidth percent 4
     class class-default
     end-policy-map
    The documentation of IOS-XR 5.2.2 states that both "bandwidth percent" and "bandwidth remaining percent" could be used in the same class (which could be a solution to force the requested behavior) but using both generates the following error:
    !!% Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy: InPlace Modify Error: Policy TEST-POLICY: 'qos-ea' detected the 'warning' condition 'Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy'
    How could the problem be solved? Maybe hierarchical QoS, with the granted quota in the parent policy and a "bandwidth remaining percent 20" in the child?

    Hi everyone
    just to provide my contribution: the hierarchical QoS policy works, balancing the remaining bandwidth after granting the requested bandwidth (see the policy implemented below). However, for the priority queues the policer quota is granted, but when sending more flows these appear to be unbalanced. So the problem of having both PQs served (in a balanced way between flows) AND having the remaining bandwidth distributed equally remains open...
    policy-map TEST-POLICY-parent
     class qos6
      service-policy TEST-POLICY-child
      bandwidth percent 12
     class qos3
      service-policy TEST-POLICY-child
      bandwidth percent 11
     class qos2
      service-policy TEST-POLICY-child
      bandwidth percent 8
     class qos1
      service-policy TEST-POLICY-child
      bandwidth percent 5
     class qos0
      service-policy TEST-POLICY-child
      bandwidth percent 4
     class class-default
      service-policy TEST-POLICY-child
     end-policy-map
    policy-map TEST-POLICY-child
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth remaining percent 20
     class qos3
      bandwidth remaining percent 20
     class qos2
      bandwidth remaining percent 20
     class qos1
      bandwidth remaining percent 20
     class qos0
      bandwidth remaining percent 20
     class class-default
     end-policy-map
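For reference, the allocation the poster is after reduces to simple arithmetic (a sketch of the *target* behaviour only, not of what either policy actually programs):

```python
# Target allocation from the example: each class first gets its granted
# CIR (percent of the 1 Gbps link), then the unclaimed bandwidth is
# split *equally* among the backlogged classes -- as opposed to the
# weighted split the flat policy actually produced.

def target_allocation(link_mbps, cir_pcts):
    cirs = {c: link_mbps * p / 100 for c, p in cir_pcts.items()}
    leftover = link_mbps - sum(cirs.values())
    equal_extra = leftover / len(cirs)
    return {c: cir + equal_extra for c, cir in cirs.items()}

alloc = target_allocation(1000, {"qos6": 12, "qos3": 11, "qos2": 8,
                                 "qos1": 5, "qos0": 4})
```

With every class offering 300 Mbps, each ends up with its CIR plus an equal 120 Mbps share of the 600 Mbps remainder (qos6 = 240M down to qos0 = 160M), which is what the hierarchical parent/child policy approximates for the non-priority classes.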

  • Bandwidth available when classifying

    I'm having trouble with my queuing config and was hoping that someone could take a look please? I have created the following:
    class-map match-any critical
    match protocol rtp
    class-map match-any priority
    match access-group 180
    policy-map queue
    class critical
    priority percent 35
    class priority
    bandwidth percent 40
    class class-default
    fair-queue
    random-detect dscp-based
    And then:
    int s0/0/0:0
    service-policy output queue
    Then when I do a show int, the available bandwidth goes to 1 kbps (formerly 1536 kbps):
    Output queue: 0/1000/64/916 (size/max total/threshold/drops)
    Conversations 0/2/256 (active/max active/max total)
    Reserved Conversations 2/2 (allocated/max allocated)
    Available Bandwidth 1 kilobits/sec
    This is a 2mbps serial interface on a 2800 running Version 12.3(8r)T7.
    So, why would the available bandwidth become 1 kbps? Is available bandwidth referring to the bandwidth that's left for class-default? Or the bandwidth available to the critical & priority classes?
    I should also mention that it wouldn't allow me to increase the priority bandwidth above 35%. Also, bandwidth is configured as 2048 on the interface.
    Any help gratefully received!
    Thanks,
    J

    Hi Spremkumar,
    Thanks for your response. Here is the output. I have changed the values though to:
    class critical 20%
    class priority 30%
    It doesn't look like it's working though:
    show policy-map interface s0/0/0:0
    Serial0/0/0:0
    Service-policy output: queue
    Class-map: critical (match-any)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: protocol rtp
    0 packets, 0 bytes
    5 minute rate 0 bps
    Queueing
    Strict Priority
    Output Queue: Conversation 264
    Bandwidth 20 (%)
    Bandwidth 409 (kbps) Burst 10225 (Bytes)
    (pkts matched/bytes matched) 0/0
    (total drops/bytes drops) 0/0
    Class-map: priority (match-any)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group 180
    0 packets, 0 bytes
    5 minute rate 0 bps
    Queueing
    Output Queue: Conversation 265
    Bandwidth 30 (%)
    Bandwidth 614 (kbps) Max Threshold 64 (packets)
    (pkts matched/bytes matched) 0/0
    (depth/total drops/no-buffer drops) 0/0/0
    Class-map: class-default (match-any)
    72727 packets, 38704060 bytes
    5 minute offered rate 769000 bps, drop rate 0 bps
    Match: any
    Queueing
    Flow Based Fair Queueing
    Maximum Number of Hashed Queues 256
    (total queued/total drops/no-buffer drops) 3/100/0
    exponential weight: 9
    dscp Transmitted Random drop Tail drop Minimum Maximum Mark
    pkts/bytes pkts/bytes pkts/bytes thresh thresh prob
    af11 0/0 0/0 0/0 32 40 1/10
    af12 0/0 0/0 0/0 28 40 1/10
    af13 0/0 0/0 0/0 24 40 1/10
    af21 0/0 0/0 0/0 32 40 1/10
    af22 0/0 0/0 0/0 28 40 1/10
    af23 0/0 0/0 0/0 24 40 1/10
    af31 0/0 0/0 0/0 32 40 1/10
    af32 0/0 0/0 0/0 28 40 1/10
    af33 0/0 0/0 0/0 24 40 1/10
    af41 0/0 0/0 0/0 32 40 1/10
    af42 0/0 0/0 0/0 28 40 1/10
    af43 0/0 0/0 0/0 24 40 1/10
    cs1 26/3166 0/0 0/0 22 40 1/10
    cs2 0/0 0/0 0/0 24 40 1/10
    cs3 0/0 0/0 0/0 26 40 1/10
    cs4 0/0 0/0 0/0 28 40 1/10
    cs5 0/0 0/0 0/0 30 40 1/10
    cs6 41/3928 0/0 0/0 32 40 1/10
    cs7 0/0 0/0 0/0 34 40 1/10
    ef 0/0 0/0 0/0 36 40 1/10
    rsvp 0/0 0/0 0/0 36 40 1/10
    default 72888/38781710 100/73423 0/0 20 40 1/10
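One plausible arithmetic for the puzzling "Available Bandwidth 1 kilobits/sec": with `bandwidth 2048` on the interface and the default 75% max-reserved-bandwidth, the reservable pool is 1536 kbps, and the original 35% priority + 40% bandwidth classes consume essentially all of it. The key assumption here is that IOS truncates each class reservation to a whole kbps, which the later show output (409 and 614 kbps for 20%/30%) also suggests:

```python
# Sketch: why `show int` reported Available Bandwidth of 1 kbps with
# the original 35% priority + 40% bandwidth policy on a 2048 kbps
# interface. Assumption: each class reservation is truncated to an
# integer number of kbps, consistent with the show output.

BW_KBPS = 2048
reservable = BW_KBPS * 75 // 100          # 1536 kbps (default 75% cap)
priority_res = int(BW_KBPS * 35 / 100)    # 716 kbps
bandwidth_res = int(BW_KBPS * 40 / 100)   # 819 kbps
available = reservable - priority_res - bandwidth_res
```

If this reading is right, it would also explain why the priority class couldn't be raised above 35%: together with the 40% class it already hits the 75% reservable ceiling, so "Available Bandwidth" is what remains for class-default and further guarantees.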
