Broadcast, multicast and bandwidth

Multicasting and broadcasting consume the same amount of
bandwidth, right?
Switches know the (IP ----> MAC) mappings.
However, at run time, multicast groups are joined and left,
so only Java knows which datagrams to drop.
So switches/routers must propagate all
datagrams to all targeted subnets and
then to ALL MAC addresses.
Logically there are plenty of reasons to use multicast groups.
Bandwidth-wise, there are none.
Right?

Dead wrong.
Routers only propagate multicasts if they know there is a member of that group on the other side.
That's what the IGMP protocol is for.
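
A receiver declares its membership by joining the group, and that join is what generates the IGMP membership report that routers (and switches doing IGMP snooping) use to decide where to forward. A minimal Java sketch of the receiver side, with a made-up group address and port:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class McastReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // assumed group address
        MulticastSocket socket = new MulticastSocket(5000);     // assumed port
        socket.joinGroup(group);   // this join triggers the IGMP membership report
        byte[] buf = new byte[1500];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);    // only datagrams sent to the joined group arrive here
        System.out.println("got " + packet.getLength() + " bytes");
        socket.leaveGroup(group);
        socket.close();
    }
}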

Similar Messages

  • What is the difference between multicasting and broadcasting?

    Hi friends,
    what is the difference between multicasting and broadcasting?
    I'm a bit confused about multicasting and broadcasting.

    Broadcasts go everywhere within a range determined by the sender.
    Broadcasting is deprecated and unlikely to go beyond the nearest router.
    Multicasts go everywhere where receivers have declared they are present.
    Multicast can be implemented beyond routers in a WAN that you control, but ISP routers generally don't support it.
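
    To make the distinction concrete, here is a hedged Java sketch contrasting the two send paths (the addresses and port are illustrative, not from this thread): a broadcast reaches every host on the local subnet, while a multicast is delivered only where sockets have joined the group.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class BroadcastVsMulticast {
        public static void main(String[] args) throws Exception {
            byte[] data = "hello".getBytes("UTF-8");
            DatagramSocket socket = new DatagramSocket();

            // Broadcast: every host on the local subnet gets it; routers won't forward it.
            socket.setBroadcast(true);
            InetAddress bcast = InetAddress.getByName("255.255.255.255");
            socket.send(new DatagramPacket(data, data.length, bcast, 5000));

            // Multicast: only sockets that joined 239.1.2.3 receive it; routers forward it
            // wherever IGMP has reported a member.
            InetAddress group = InetAddress.getByName("239.1.2.3");
            socket.send(new DatagramPacket(data, data.length, group, 5000));

            socket.close();
        }
    }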

  • ACL restriction of multicast and broadcast on SRW2016

    Hello all,
    I seem to be having difficulty setting up an ACL that restricts multicast and broadcast packets to a specified port on the SRW2016.
    In brief, I have one (physical) port that I need to prevent any broadcast or multicast packets from being sent to.  I need to allow clients which are on that port to send broadcast, however.  My take on this was to create an ACL with one rule of the type:
    Type: Deny
    Protocol: Any
    Source IP: 10.0.0.0/255.255.255.255
    Destination IP: 224.0.0.0/0.255.255.255
    Another type I tried was a 2-rule ACL to explicitly allow only a valid sender and deny all:
    Type: Allow
    Protocol: UDP
    Dest Port: 1234
    Source IP: 10.1.0.100/0.0.0.0
    Dest IP: 10.1.0.101/0.0.0.0
    Type: Deny
    Protocol: All
    I have tried various permutations of these types of ACL (changing ordering, etc.), but everything I have tried so far has allowed the multicast packets through unless I block them at the sending port (which obviously blocks them from all ports).
    Any suggestions or comments would be appreciated.  Is what I'm trying to do even possible in the SRW2016?
    Thanks,
    Mike

    Just to make sure I was creating/applying the ACLs correctly, I did a simple test with a very basic rule: I just set the type to deny (basically a deny-all rule). I applied this rule to one port of the switch and verified that it was working by attempting to access the switch's web configuration interface (which was correctly inaccessible). However, the multicast packets were still being delivered (verified via both an Ethernet dump and visual inspection of the switch's LEDs).
    Based on the above information, I feel it's fairly safe to say that multicast is not filtered correctly via ACLs on the SRW2016. Apparently multicast packets take a different logical path than "normal" packets. Since I don't expect an immediate firmware patch, I suspect that I need to see if I can get a router in addition to, or as a replacement for, the switch.
    Edit: I found a method that appears to restrict the multicast packets via the "Bridge Multicast" interface (basically created a rule for the MAC related to my multicast address, set to Forbidden on one port, but this is not a generic solution for all multicast and I don't seem to be able to have more than 1 MAC address in the list...), but broadcast still gets through, regardless of the ACL I set up for the port.
    I'm beginning to wonder if my understanding of ACLs is flawed - does anyone know if they're applied to incoming packets for a port, outgoing packets for a port or both?  My assumption was both, but if the rule were only applied to incoming packets, it would explain the behavior I'm observing.
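
    One detail that may help with the "Bridge Multicast" MAC workaround mentioned above: each IPv4 multicast group maps deterministically to an Ethernet MAC address (01:00:5e followed by the low 23 bits of the group address), so the MAC to forbid can be computed rather than guessed. A small Java sketch of that standard mapping (the group address is just an example):

    import java.net.InetAddress;

    public class McastMac {
        // Standard IPv4-multicast-to-Ethernet mapping: 01:00:5e + low 23 bits of the group IP.
        static String multicastMac(String groupIp) throws Exception {
            byte[] ip = InetAddress.getByName(groupIp).getAddress();
            return String.format("01:00:5e:%02x:%02x:%02x",
                    ip[1] & 0x7f, ip[2] & 0xff, ip[3] & 0xff);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(multicastMac("239.1.2.3")); // prints 01:00:5e:01:02:03
        }
    }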

  • Broadcasting/multicasting UDP

    Is it possible to broadcast UDP packets directly from the host machine's IP address on a selected port so that anyone listening on that IP address/port can receive them? If not, how should a server go about finding an appropriate IP address/port within the 224.0.0.0 - 239.255.255.255 range to send packets on?

    Dear Josh,
    You send the UDP packet to a multicast (class D) IP address/port and the rest is handled by the multicast-enabled network.
    The mechanism of multicasting, and how a Java programmer can make use of it, is something missing from the available documents. I've searched for that, but all I found were "Reliable Multicast Service" (RMS) libraries like:
    JRMS at www.experimentalstuff.com
    JGroups at www.JGroups.org
    But these are libraries, and they focus on reliable multicast. If we need to multicast an audio stream, we only need basic multicast using MulticastSocket from the java.net package.
    But again, the system is not only a set of APIs. The question is HOW to build a multicast audio service using Java, and what do we need besides Java?
    An answer could be of great help.
    Ahmad Khalafallah
    [email protected]
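
    For the basic (unreliable) case described above, java.net.MulticastSocket is indeed enough. A hedged sketch of a sender that paces opaque "audio frames" into a group (group address, port and frame size are assumptions, not from this thread):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class AudioMulticastSender {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.5.5.5"); // assumed group address
            int port = 6000;                                        // assumed port
            MulticastSocket socket = new MulticastSocket();
            socket.setTimeToLive(4); // let the stream cross a few routers, if they route multicast
            byte[] frame = new byte[320]; // e.g. 20 ms of 8 kHz 16-bit mono audio
            for (int i = 0; i < 100; i++) {
                // In a real service, fill 'frame' from the audio capture pipeline here.
                socket.send(new DatagramPacket(frame, frame.length, group, port));
                Thread.sleep(20); // pace packets at roughly the frame rate
            }
            socket.close();
        }
    }

    A receiver would open a MulticastSocket on the same port, join the group, and hand each received frame to the audio playback pipeline; reliability, ordering and jitter buffering are exactly what libraries like JRMS/JGroups add on top.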

  • Broadcast/multicast counters does not increase on vlan interface

    Hi,
    on a Cat6500 we are trying to monitor interface packet statistics via SNMP; specifically, we want information about the relation between the unicast, multicast and broadcast packet counters.
    What we found is that while on physical L2 interfaces all counters (ifHCInUcastPkts, ifHCInMulticastPkts, ifHCInBroadcastPkts, ifHCOutUcastPkts, ifHCOutMulticastPkts, ifHCOutBroadcastPkts) are filled, on VLAN interfaces the multicast in/out and broadcast out packet counters stay at zero the whole time. We use ARP, HSRP, OSPF and other well-known broadcast- and multicast-based protocols.
    Does anybody know why these counters do not increase?
    Attached you find an excel sheet which shows an example of interface counter vs. vlan counter.
    many thanks in advance,
    Thorsten Steffen

    Hi Jon,
    below is the result of sh sdm prefer. So do I need an IP Services license to apply the route-map on the VLAN interface, or can I just enter the config "sdm prefer routing" and reboot the switch?
    SWBB0#sh sdm prefer
    The current template is "desktop default" template.
    The selected template optimizes the resources in
    the switch to support this level of features for
    8 routed interfaces and 1024 VLANs.
      number of unicast mac addresses:                  6K
      number of IPv4 IGMP groups + multicast routes:    1K
      number of IPv4 unicast routes:                    8K
        number of directly-connected IPv4 hosts:        6K
        number of indirect IPv4 routes:                 2K
      number of IPv6 multicast groups:                  64
      number of directly-connected IPv6 addresses:      74
      number of indirect IPv6 unicast routes:           32
      number of IPv4 policy based routing aces:         0
      number of IPv4/MAC qos aces:                      0.5K
      number of IPv4/MAC security aces:                 0.875k
      number of IPv6 policy based routing aces:         0
      number of IPv6 qos aces:                          0
      number of IPv6 security aces:                     60
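
    For reference, the counters in question live in the IF-MIB ifXTable (e.g. ifHCInMulticastPkts is column 8). A hedged polling sketch using the SNMP4J library; the switch address, community string and ifIndex are placeholders, and SNMP4J on the classpath is an assumption:

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class McastCounterPoll {
        public static void main(String[] args) throws Exception {
            int ifIndex = 10;                                 // assumed ifIndex of the VLAN interface
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();
            CommunityTarget target = new CommunityTarget();
            target.setCommunity(new OctetString("public"));                // assumed community
            target.setAddress(GenericAddress.parse("udp:192.0.2.1/161"));  // assumed switch address
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(1);
            target.setTimeout(2000);
            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            // ifHCInMulticastPkts.<ifIndex> = 1.3.6.1.2.1.31.1.1.1.8.<ifIndex>
            pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.31.1.1.1.8." + ifIndex)));
            ResponseEvent resp = snmp.get(pdu, target);
            System.out.println(resp.getResponse() == null ? "timeout" : resp.getResponse().get(0));
            snmp.close();
        }
    }

    Comparing such a poll against the VLAN interface and against a physical member port should at least show whether the zeros come from the platform not counting on the SVI or from the counters simply not being exposed there.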

  • I have to send messages through UDP multicast and unicast from the same port. In LabVIEW, when I tried that it threw an error. I heard it is possible by means of datagram (UDP unicast and multicast) port sharing. How can this be achieved in LabVIEW?

    I have to send UDP multicast and unicast messages to a remote port from a single source/local port. I tried opening UDP unicast and multicast on the same port and got the expected error. I then tried opening a unicast connection and sending unicast messages; after that, when multicast messages had to be sent, I closed the unicast connection and opened multicast on the same port. This does not throw any error. But my requirement is to communicate with another application in C++ which receives this data; it throws a lost-connectivity error and the two applications are not able to communicate properly.
    In the other C++ application this is implemented using port sharing. So how can port sharing be implemented in LabVIEW so that I can send both multicast and unicast messages from the same port?
    Thanks in advance

    UDP is a sessionless protocol, meaning that anyone listening on the specified port CAN receive the data. CAN, because as you noted there is no guarantee in the protocol that it will be received. And if you send the data not to a specific address but to a multicast address, not only one computer can receive it but in fact every computer on the same subnet listening to that multicast address, and depending on the TTL of the packet also computers in neighbouring subnets, although that last part is not very reliable since routers can be configured to drop multicast packets regardless of what the TTL says.
    Accordingly there is no real way to make sure that a receiving UDP port is not already in use, since you don't build up a connection. UDP is more or less analogous to shouting your messages through a megaphone, and anyone listening on the right frequency (port) can hear it. You do bind the sender socket to a specific port number but that makes little difference.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
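
    For comparison outside LabVIEW: a single UDP socket bound to one fixed local port can normally send to both unicast and multicast destinations without being reopened, which is essentially what the C++ side's port sharing achieves. A hedged Java sketch of that pattern (all addresses and ports are made up):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class SharedPortSender {
        public static void main(String[] args) throws Exception {
            // One socket, one fixed local source port, used for both kinds of traffic.
            DatagramSocket socket = new DatagramSocket(6100);              // assumed local port
            byte[] data = "status".getBytes("UTF-8");

            InetAddress unicastPeer = InetAddress.getByName("192.0.2.10"); // assumed peer
            InetAddress group = InetAddress.getByName("239.10.10.10");     // assumed group

            socket.send(new DatagramPacket(data, data.length, unicastPeer, 6200));
            socket.send(new DatagramPacket(data, data.length, group, 6200));
            socket.close();
        }
    }

    Whether the LabVIEW UDP primitives expose the same single-socket behaviour is a separate question, but with this pattern the receiving C++ application sees all datagrams arriving from one consistent source port.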

  • Multicast and wireless

    I have a 4404 controller running 6.0.202 code, and more and more people have Macs running Bonjour and wanting to use AirPlay. I see how to turn on multicasting and even provide a multicast address for IGMP snooping, but does anyone have a good feel for the overall load multicast adds to the wireless network?
    Thanks,
    Gary

    It depends on the type of deployment you have.
    If the network infrastructure supports multicast, you should enable Multicast - Multicast as the controller multicast mode and choose a multicast address in the 239.X.X.X range.
    If your network is not capable of supporting multicast, you would want to select Multicast - Unicast mode. This mode puts a load on the controller and on the wireless network as the Multicast is then sent as a unicast to each access point instead.
    These support articles should help you.
    Bonjour Deployment Guide
    http://www.cisco.com/en/US/products/hw/wireless/ps4570/products_tech_note09186a0080bb1d7c.shtml
    Multicast Deployment Guide
    https://supportforums.cisco.com/docs/DOC-14713

  • Re: complaint to FCC about Comcast's broadcast stations and prices!

    It has been years since we have used Comcast services. Today we just had it reconnected. We were supposed to get the 105 Mbps service. It is supposed to be great for gaming and yadda yadda yadda. Well, we haven't been able to use our service all day. We try to do some online gaming. NOPE! The connection won't stay connected. The service tech was HORRIBLE! He did not even run any testing to make sure the connection was OK. Well, I guess now we know why! He knew that the connection was going to be HORRIBLE, just like him! I called Comcast 3 times and all they say is "the system says that is what you should get". Well, let me tell you, when you purchase 105 Mbps, then you should get that! We did the speed test and were only getting 46 Mbps. WTH? Our game systems wouldn't even get 12 Mbps, and that was with the LAN cable. This service is a complete rip-off! They false advertise and then just throw their hands up like "oh well, and hey, if you need someone at your house, then it will be days later"........ Guess what? Back to CenturyLink! They give us 50 Mbps and it is FASTER with NO connection errors! Oh hey, just plugged CenturyLink back in and we are up and running in a snap of a finger. SURPRISE!! Oh wait.... I'm not! Also, their customer service actually tries to help. PEACE OUT COMCAST!!! Thanks for nothing!

  • ASR9K: bandwidth and bandwidth remaining cannot be used together. How can I grant a quota to each class and equally assign the remaining bandwidth?

    Hi everyone
    The problem should be trivial. We want to grant a quota to specific classes and assign the remaining available bandwidth equally to all the requesting classes. Let's clarify with an example:
    Class 7 ==> priority queue, level 1 with police 20%
    Class 5 ==> priority queue, level 2 with police 40%
    Class 6 ==> CIR 12%
    Class 3 ==> CIR 11%
    Class 2 ==> CIR 8%
    Class 1 ==> CIR 5%
    Class 0 ==> CIR 4%
    To simplify, let's suppose that there is no traffic on classes 7 and 5, and that all remaining classes are generating traffic at a rate of 300 Mbps each. The outgoing interface is 1G, so congestion occurs. We want each of classes 6, 3, 2, 1, 0 to receive its granted value (so, respectively, 120M, 110M, 80M, 50M and 40M, for a total of 400M) and the remaining available bandwidth (600M) to be assigned equally, i.e. 120M to each class.
    The IOS-XR 5.2.2 documentation suggests that this should be the default behavior, but when we run the policy shown below, what we get is a weighted assignment of the remaining quota.
    The policy used is the following:
    policy-map TEST-POLICY
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth percent 12
     class qos3
      bandwidth percent 11
     class qos2
      bandwidth percent 8
     class qos1
      bandwidth percent 5
     class qos0
      bandwidth percent 4
     class class-default
     end-policy-map
    The documentation of IOS-XR 5.2.2 states that both "bandwidth percent" and "bandwidth remaining percent" could be used in the same class (which could be a solution to force the requested behavior) but using both generates the following error:
    !!% Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy: InPlace Modify Error: Policy TEST-POLICY: 'qos-ea' detected the 'warning' condition 'Both bandwidth and bandwidth-remaining actions cannot be configured together in leaf-level of the queuing hierarchy'
    How could the problem be solved? Maybe with hierarchical QoS, with the granted quota in the parent policy and a "bandwidth remaining percent 20" in the child?

    Hi everyone
    Just to provide my contribution: the hierarchical QoS policy does balance the remaining bandwidth after granting the requested bandwidth (see the policy implemented below). However, for the priority queues the policer quota is granted, but when more flows are sent they appear to be unbalanced. So the problem of having both PQs served (balanced between flows) AND the remaining bandwidth distributed equally remains open ...
    policy-map TEST-POLICY-parent
     class qos6
      service-policy TEST-POLICY-child
      bandwidth percent 12
     class qos3
      service-policy TEST-POLICY-child
      bandwidth percent 11
     class qos2
      service-policy TEST-POLICY-child
      bandwidth percent 8
     class qos1
      service-policy TEST-POLICY-child
      bandwidth percent 5
     class qos0
      service-policy TEST-POLICY-child
      bandwidth percent 4
     class class-default
      service-policy TEST-POLICY-child
     end-policy-map
    policy-map TEST-POLICY-child
     class qos7
      police rate percent 20
      priority level 1
     class qos5
      police rate percent 40
      priority level 2
     class qos6
      bandwidth remaining percent 20
     class qos3
      bandwidth remaining percent 20
     class qos2
      bandwidth remaining percent 20
     class qos1
      bandwidth remaining percent 20
     class qos0
      bandwidth remaining percent 20
     class class-default
     end-policy-map

  • GRE tunnel MTU and bandwidth

    Hi Everyone.
    GRE tunnel from Site A has
    MTU 17916 bytes, BW 100 Kbit, DLY 50000 usec,
    Site B has
    MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,
    I read that the default BW for a GRE tunnel is 9 Kbit, but here one side has the default and the other side has 100.
    One side has the default MTU, the other side has 17916,
    so is this normal behaviour?
    I need to know the purpose of using an MTU of 17916 and a bandwidth of 100 on Site A.
    Thanks
    Mahesh

    Hello Mahesh,
    The bandwidth and delay on the tunnel interface are sometimes used to change the EIGRP metrics and influence the routing table. For example, if you have two equal-cost paths to a certain network in your routing table, you can increase the delay on one of the tunnels and make the other tunnel the preferred one. Not sure if this is the case with your setup; it's really hard to say without looking at the config.
    As for the MTU, I'm not sure why the MTU size is 17916. 1514 makes sense, but that is still too big for a GRE tunnel, I think.
    Please rate this post if helpful. Thanks.

  • Dashboard - a memory and bandwidth hog?

    Hello everyone,
    This is a general question about widgets really, concerning how much of a memory and bandwidth hog they are.
    I only have a few widgets on my Dashboard, as I'm worried they'll slow my Mac down. Can this happen, and does it only happen when I 'invoke' the Dashboard?
    Also, as some connect to the internet, are they a bandwidth hog? (and again, are they only 'on' when the Dashboard is invoked, or always connected in the background?).
    Many thanks for any advice!
    Ben

    "So are these things always running in the background, or only when I choose to bring Dashboard to the fore?" They start the first time you activate Dashboard after login. Try it: log out, then back in, and check Activity Monitor; then activate Dashboard and check AM again.
    With AM you can always keep an eye on your memory and quit apps, even if they may not "force quit". However, with the way OS X handles memory you shouldn't have a problem unless you're doing heavy lifting like music/video editing.
    -mj
    [email protected]

  • Other traffic at 75% Crushing the optimization, Acceleration and Bandwidth

    Other traffic at 75% Crushing the optimization, Acceleration and Bandwidth.  Is there a way through the CM to see what the other traffic is?  Any thoughts or ideas?

    Hi Dan,
    You really do not need to create a policy; rather, you need an application definition. Here are the steps.
    Creating an Application Definition
    The first step in creating an application policy is to set up an application definition that identifies general information about the application, such as the application name and whether you want the WAAS Central Manager to collect statistics about the application. After creating the application definition, you assign it to a device or device group. You can create up to 255 application definitions on your WAAS system.
    The Link:
    http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v441/configuration/guide/policy.html#wp1042389
    Once the application definition is created for the traffic you want to identify, it will start displaying statistics under the application definition name.
    Hope this helps.
    Regards.
    PS: Please mark this as Answered, if this answers your question.

  • [svn:osmf:] 16975: Fix bug FM-964, add media factory item for RTMFP multicast and remove the item from OSMFPlayer

    Revision: 16975
    Author:   [email protected]
    Date:     2010-07-19 15:20:00 -0700 (Mon, 19 Jul 2010)
    Log Message:
    Fix bug FM-964, add media factory item for RTMFP multicast and remove the item from OSMFPlayer
    Ticket Links:
        http://bugs.adobe.com/jira/browse/FM-964
    Modified Paths:
        osmf/trunk/apps/samples/framework/OSMFPlayer/src/OSMFPlayer.as
        osmf/trunk/framework/OSMF/org/osmf/media/DefaultMediaFactory.as

  • Complaint to FCC about Comcast's broadcast stations and prices!

    Complaint: Comcast has several broadcast stations that have been showing repeat after repeat for the last 2-3 years, such as Murder, She Wrote, Hart to Hart, Golden Girls, I Love Lucy, Little House on the Prairie, Food Network, HGTV, and many more, and QVC, HTC and so many other stations are repeating as well. Why should I pay $22 more? Comcast should FREEZE the prices until Comcast gets all new shows. Comcast does not respect senior citizens who have a very limited income under $14,000. Comcast refuses to honor that with a discount; they claim it can't be done because of the promotion. I think it is unfair. I will write a letter to the FCC soon.

    Hello youarecrazy,
    As dwitham said, you need to understand the difference between a broadcast station (actually called a broadcast network) and a broadcast provider.
    A broadcast network is an organization that provides live or recorded content to a provider.
    A broadcast provider is an organization that receives the content from the broadcast network and distributes it.
    In other words, the shows that air on the channels you watch have nothing to do with who is delivering you the content. No provider has any control over what is given to them and is only responsible for airing the provided content.
    If you wish to write a letter, do as dwitham suggested again, and write to the network channels themselves, as they are the only ones who have the ability to change what you're seeing on them.

  • P2P multicast and upstream bandwidth

    Hi,
    I'm in the process of developing a realtime video chat application where multiple users can send video streams simultaneously. The number of users receiving the streams can be very large, e.g. 10 broadcasters and 500 receivers; each receiver should get all streams.
    I use RTMFP connections to an FMS and streams are published in P2P multicast groups by passing the groupspec to the NetStream constructor. Currently I'm having problems with audio/video synchronization and video stream 'jumps' (not continuous). From what I read on other threads, this is related to the fact that there is not enough upstream bandwidth for sending the streams. So my questions are:
    How to calculate the required upstream bandwidth on every peer for the given example of 10 broadcasters and 500 receivers (is it 10*bandwidth of one stream)?
    What settings (on NetStream, Camera, Microphone etc.) should be used for best results and how to adapt them based on the number of broadcasters?
    I hope my questions make sense!
    Thanks,
    Haykel

    I have done some tests to find out how much upstream bandwidth is used for different situations. The data is taken from the 'multicastInfo' property of all involved 'NetStream' objects as follows:
    Multicast Data: average of 'multicastInfo.sendDataBytesPerSecond' of all streams.
    Multicast Control : average of 'multicastInfo.sendControlBytesPerSecond' of all streams.
    Different measurements have been done for different situations:
    The user is broadcasting and not broadcasting
    Varying number of connected users
    Varying number of users broadcasting
    I have limited the maximum video bandwidth to 16 KBytes/s with 'Camera.setQuality(1024 * 16, 0)'.
    The application uses only application level multicast (passing a groupspec to the NetStream constructor). In this test all peers are on the same LAN and the FMS server is on a remote machine.
    The results in bytes/s (see the attached chart):
    What I have noticed is that the outgoing multicast data volume grows by ~15 KByte/s with every new connected user (receiver) and goes down when the number of broadcasters grows. Is that normal? Does it mean that only the broadcasters are sharing the streams with the other peers? I thought that every peer would share the data with a number of neighbours (e.g. 3), which would share with their neighbours, and so on.
    During the tests I have also checked for a/v delays and noticed the following:
    For every new stream, at the beginning audio and video are in sync but have a delay of ~3 seconds
    After ~20 seconds the video delay becomes marginal but the audio remains delayed and so goes out of sync with video
    After ~30 seconds the audio delay becomes marginal and goes in sync with video (a/v are now stable)
    The delays of 20 and 30 seconds grow up to more than a minute when the number of broadcasters grows
    My questions:
    Are my measurements correct?
    Why is the outgoing multicast data volume growing with every new receiver? Is the publishing stream sending the data to all peers?
    Should I expect the bandwidth to grow indefinitely with the number of receivers, or will it stabilize at some value?
    How can I decrease the time required for a/v to become stable (in sync with a small delay)?
    Is P2P multicast a good choice for this kind of application (up to 10 broadcasters and a very large number of receivers)?
    Any advice?
    Thanks.
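
    On the first question ("is it 10 × the bandwidth of one stream?"), a rough lower bound can be worked out independently of RTMFP's actual fanout: every receiver must download all 10 streams, and in a purely peer-to-peer distribution every byte downloaded has to be uploaded by some peer, so the average upload per peer is at least the per-peer download rate. A hedged back-of-envelope sketch in Java, reusing the 16 KByte/s video cap from the test above (everything else is an assumption):

    public class UpstreamEstimate {
        public static void main(String[] args) {
            int broadcasters = 10;
            int receivers = 500;
            double perStreamBytesPerSec = 16 * 1024;   // video cap from Camera.setQuality above

            // Each receiver downloads every broadcaster's stream.
            double perPeerDownload = broadcasters * perStreamBytesPerSec;

            // In pure P2P, peers collectively upload what they collectively download,
            // so the average upload per peer is roughly its own download rate.
            double avgPerPeerUpload = (receivers * perPeerDownload) / receivers;

            System.out.printf("per-peer download ~ %.0f KB/s%n", perPeerDownload / 1024);
            System.out.printf("average per-peer upload ~ %.0f KB/s (lower bound)%n",
                    avgPerPeerUpload / 1024);
        }
    }

    Individual peers can be asked to upload much more or much less than that average depending on how the group organizes its fanout, which is consistent with the uneven growth seen in the measurements above; audio and control overhead would come on top of it.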
