Data Rates & Cell Size

Hi all.
A few days back I was going through an article on high-density wireless deployments, and came across an extract saying that disabling lower data rates helps to reduce cell size.
How does this work? I mean, are we only allowing faster clients to associate by disabling the data rates the slower clients would use?
But isn't cell size controlled by the transmit power of the antenna?
Or are we just reducing the cell size logically here?
Thanks ,
Rahul.

No problem.... Look at it this way.... back in the day with autonomous APs, you could roam from one AP and not roam to the next two while walking down the hallway. So in high density, you want some sort of control over how many clients you have in a cell or on a particular AP. People always asked: why am I connected to an AP 20 feet away and not the one above me? Well, for one, the client chooses, and also, why would it roam if it is still connected at a rate that you are still supporting? There is a lot of tweaking involved with high density (well, at least I do a lot of tweaking), but it's so I can control the cell size for APs in a high-density area compared to areas where I need to extend the cell size a bit more. Data rates and TX power work hand in hand when determining this.
Thanks,
Scott
Help out others by using the rating system and marking answered questions as "Answered"
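To see why data rates and TX power interact when setting the cell edge, here is a minimal sketch. The SNR thresholds and the log-distance path-loss model are illustrative assumptions, not real radio specs; real numbers vary by chipset and environment:

```python
# Illustrative (assumed) minimum SNR needed to sustain each 802.11g rate, in dB.
SNR_REQUIRED_DB = {6: 5, 12: 10, 24: 15, 54: 25}

def cell_radius_m(snr_at_1m_db, min_rate_mbps, path_loss_exponent=3.0):
    """Distance at which SNR falls below what the lowest enabled rate needs,
    using a simple log-distance model: SNR(d) = SNR(1 m) - 10*n*log10(d)."""
    margin_db = snr_at_1m_db - SNR_REQUIRED_DB[min_rate_mbps]
    return 10 ** (margin_db / (10 * path_loss_exponent))

# Same AP power, but a higher lowest-mandatory rate -> smaller effective cell:
for rate in (6, 12, 24):
    print(rate, "Mbps floor ->", round(cell_radius_m(50, rate), 1), "m")
```

The TX power sets the SNR budget; the lowest enabled rate sets how much of that budget a client at the edge may spend. Raising the floor shrinks the cell without touching power.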

Similar Messages

  • How can I cancel "The Garage Band content is now downloading" once it starts? I thought it would show the size of the download, but it just assumes I have unlimited bandwidth, instead of mobile data rates which are super expensive.

    Once it started, it just shows its progress and there's no option I can find to stop the download.
    Any help greatly appreciated.
    Regards,
    Kye.

    Hi,
    I am not sure whether it works, but try "Restore Sound Library" in the menu.

  • WLC Data Rates behaviour

    Hello everyone!
    I have some questions about the data rates configuration in the WLC. I've been reading Cisco documentation and this forum, but as I've seen some contradictory information, I would like you to clarify some points for me.
    I want to disable 802.11b clients, so I'm going to disable the 1, 2, 5.5 and 11 Mbps data rates (another option could be to set these as supported only). Then I have to assign at least one of the other data rates as mandatory...
    I can think of three options.
    Option 1: all 802.11g data rates as mandatory. In this case, as far as I understand:
    Broadcasts are sent at the lowest data rate
    Multicasts are sent at the highest data rate (but if a client is connected at a lower data rate than the highest, then multicast frames are sent at the data rate supported by that client, is that right??). Basically, you cannot be sure at which data rate multicast frames are sent, as it can change depending on the clients.
    Clients can connect as soon as they support one of the mandatory data rates
    Option 2: one 802.11g data rate as mandatory. This would define the cell limits better.
    Broadcast and multicast are sent at the same data rate
    Clients can connect if they support the mandatory data rate
    Option 3: set two or more data rates as mandatory (for example the OFDM required rates: 6, 12, 24 Mbps), and the others as supported
    Broadcasts are sent at the lowest mandatory data rate
    Multicasts are sent at the highest mandatory data rate (shifting back to the lowest if a client cannot connect at that data rate)
    Clients can connect if they support at least one of the mandatory data rates.
    Am I right with these assumptions? I have read about some other behaviours, like broadcast and multicast being sent at the same common data rate...
    I hope I have explained myself...
    Thank you very much for your support.
    Carolina

    Hello,
    thank you Scott and George!
    I read the post, and it is OK, and I finally found some information on Cisco's website:
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/7-4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter_01011.html#ID2482
    "Access points running recent Cisco IOS versions transmit multicast frames at the highest configured basic rate and management frames at the lowest basic mandatory rates, can cause reliability problems. Access points running LWAPP or autonomous Cisco IOS should transmit multicast and management frames at the lowest configured basic rate. Such behavior is necessary to provide good coverage at the cell's edge, especially for unacknowledged multicast transmissions where multicast wireless transmissions might fail to be received. Because multicast frames are not retransmitted at the MAC layer, clients at the edge of the cell might fail to receive them successfully. If reliable reception is a goal, multicast frames should be transmitted at a low data rate. If support for high data rate multicast frames is required, it might be useful to shrink the cell size and disable all lower data rates.
    Depending on your requirements, you can take the following actions:
    If you need to transmit multicast data with the greatest reliability and if there is no need for great multicast bandwidth, then configure a single basic rate, that is low enough to reach the edges of the wireless cells.
    If you need to transmit multicast data at a certain data rate in order to achieve a certain throughput, you can configure that rate as the highest basic rate. You can also set a lower basic rate for coverage of nonmulticast clients."
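    The rule quoted above can be sketched in a few lines (rates in Mbps; this just restates the documented behaviour of recent IOS versions, it is not a WLC API):

```python
# Management/broadcast frames go out at the lowest mandatory (basic) rate,
# multicast at the highest mandatory rate, per the quoted documentation.
def frame_rates(mandatory_rates_mbps):
    basic = sorted(mandatory_rates_mbps)
    return {"broadcast": basic[0], "multicast": basic[-1]}

print(frame_rates({6, 12, 24}))  # Option 3: broadcast at 6, multicast at 24
print(frame_rates({12}))         # Option 2: a single basic rate makes both predictable
```

    This is why a single mandatory rate (Option 2) gives the most predictable cell edge: broadcast and multicast collapse onto the same rate.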
    Based on your experience, does it make any sense to have all data rates configured as mandatory? This is the current configuration in our controllers, but it wasn't done by me.
    Thanks.
    Carolina

  • Problem with Cell size in Excel output of XML report

    Dear all,
    I am facing a problem with cell size when I run my XML report in Excel output. I found that it imitates whatever cell size I gave in the RTF. I cannot increase the cell size in the RTF as my report contains 60 columns and the max width of an MS Word table is 22 inches.
    Can anyone suggest a way of doing this that shows the full data in the Excel sheet, sized to the column data, without any word wrap?
    Thanks
    RAJ

    Hi ,
    You can try with:
    <xsl:attribute xdofo:ctx="block" name="wrap-option">no-wrap</xsl:attribute>
    It may be helpful to you.
    Thanks,
    Ananth
    http://bintelligencegroup.wordpress.com/

  • H.264 All-Intra Data Rates Significantly Higher

    Does the built-in H.264 codec encode I-frame only files differently? I am trying to determine the optimal GOP length for high bitrate exports. Image quality seems to degrade, even in the I-frames, when using key frame distances greater than one.
    After performing a series of tests to characterize the Adobe H.264 encoder, I discovered that exported files are significantly larger when key frame distance equals one frame (N=1). The average video data rate for a test file rendered with the Adobe H.264 encoder is as follows:
    N=1 : 2.17 bpp : 24I
    N=2 : 0.66 bpp : 12I + 12P
    N=3 : 0.59 bpp : 8I + 8B + 8P
    Note how the data rate drops 70% (from 2.17 to 0.66 bpp) even though 50% of the I-frames still exist when N=2. By comparison, here is the video data rate when exporting with QuickTime H.264:
    N=1 : 0.89 bpp : 24I
    N=2 : 0.70 bpp : 12I + 12P
    N=3 : 0.64 bpp : 8I + 16P
    The following chart shows data rates at key frame distances from 1 to 48 frames for Adobe H.264, QuickTime H.264 (via Adobe), QuickTime Pro, and Expression Encoder 4 Pro. Data rates are consistent among all encoders at GOP lengths greater than one. There is an anomaly with the Adobe H.264 codec when compressing all-intra files.
    The observed behavior occurs in all profiles, which were tested at Levels 4.1, 4.2, 5.0, and 5.1.
    Image quality is better in the Adobe H.264 all-intraframe file, especially with respect to detail retention. The pictures below show sections of two consecutive frames magnified 400%. The file with N=2 is less accurate and contains noticeable blocking. Even the I-frames don't look as good in the files where N>1.
    The test file was a seventeen second Premiere Pro sequence consisting of H.264, MPEG-2, and AE files with effects applied. Exports were rendered from the Premiere Pro timeline and from a V210 uncompressed 4:2:2 intermediate file of the sequence. The following settings were used:
    Format: H.264
    Width: 1280
    Height: 720
    Frame Rate: 24 fps
    Field Order: Progressive
    Aspect: Square Pixels (1.0)
    TV Standard: NTSC
    Profile: Baseline, Main, and High
    Levels: 4.1, 4.2, 5.0, 5.1
    Render at Maximum Bit Depth: Enabled
    Bitrate Encoding: VBR, 2-Pass
    Target Bitrate: Maximized for each Profile/Level
    Maximum Bitrate: Maximized for each Profile/Level
    Key Frame Distance: 1, 2, 3, 4, 5, 6, 7, 8, 12, 24, 48
    Use Maximum Render Quality: Enabled
    Multiplexer: MP4
    Stream Capability: Standard
    Software:
    Adobe Media Encoder CS6 Creative Cloud v6.0.3.1 (64-bit)
    Premiere Pro CS6 Creative Cloud v6.0.3
    Windows 7 SP1
    QuickTime Pro for Windows v7.6.9 (1680.9)
    MediaInfo 0.7.62 (for GOP and data rate information)
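    For reference, the bits-per-pixel figures above can be converted to average bit rates using the frame size and rate from the export settings (a quick helper, not part of any encoder API):

```python
def bitrate_bps(bpp, width, height, fps):
    """Average bit rate implied by a bits-per-pixel figure."""
    return bpp * width * height * fps

def bits_per_pixel(rate_bps, width, height, fps):
    """Inverse: bits per pixel from a measured bit rate."""
    return rate_bps / (width * height * fps)

# 2.17 bpp at 1280x720, 24 fps works out to roughly 48 Mbps of video:
print(round(bitrate_bps(2.17, 1280, 720, 24) / 1e6, 1))
```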

    I think the short answer is yes:
    an H.264 encoder does encode I-frame-only files differently. I-frames are complete expressions of a picture, with no temporal compression information.
    P-frames use _P_redictive information, i.e. information from prior frames.
    B-frames use _B_i-directional predictive frame information.
    H.264 gets the majority of its bit savings from the use of B- and P-frames. When you encode I-frame only, you get only the block compression and none of the advantages of P- and B-frames. Thus the GOP N=1 doesn't get very good bits per pixel.
    Having said all that, I do find your comment...
    Profitic wrote:
    Note how the data rate drops 70% (from 2.17 to 0.66 bpp) even though 50% of the I-frames still exist when N=2. By comparison, here is the video data rate when exporting with QuickTime H.264:
    ... very interesting. Indeed, why is the data rate 70% less when it should at best be 50% for GOP N=2? 50% less would be the same I-frame information plus 0 bytes for the P-frame between them (GOP = I,P,I,P...). Any more than that and it is throwing away bits from the I-frames. So, this seems to me to be a rate-control bug.
    http://en.wikipedia.org/wiki/Group_of_pictures
    "The GOP structure is often referred by two numbers, for example, M=3, N=12. The first number tells the distance between two anchor frames (I or P): it is the GOP size. The second one tells the distance between two full images (I-frames): it is the GOP length. For the example M=3, N=12, the GOP structure is IBBPBBPBBPBBI. Instead of the M parameter the maximal count of B-frames between two consecutive anchor frames can be used."

  • In Aperture 3.4 Export Slideshow to a video, what are the actual Data Rates used for "Best", "High", ... "Least" quality for a given resolution?

    My Photo website host (SmugMug) converts uploaded video files at a specific Data Rate in Mbps before installing them. I would like to compress my slideshow video file to the same rate before I upload it to reduce file size and upload time. When I choose the "Custom" setting for an Export, I can choose 1 of the 5 Quality choices and see the estimated file size. But, I cannot know what the actual Data Rate is until after I wait a long time to export the slideshow (hours for a long slideshow) and then examine the resulting file in QuickTime Player's Inspector.

    I ran a few tests using a short slideshow (16 images, 1:23 mins/secs) at 1,728 x 1,080 resolution to find out the bit rates for various quality level choices.
    Export setting    Quality    Resulting bit rate    File size
    HD 1080p          default    20.68 Mbps            214.8 MB
    Custom            Best       20.49 Mbps            212.8 MB
    Custom            High        6.25 Mbps             65.0 MB
    Custom            Medium      3.97 Mbps             41.3 MB
    However, I don't know if those bit rates will be the same for different length slideshows or for different output resolutions. My SmugMug host site uses an 8.0 Mbps rate for a 1728 x 1080 video file. If I choose Custom/Best, my file will be almost 3 times bigger and much higher quality than necessary, but if I choose Custom/High, my file will be smaller and lower quality than SmugMug's converted version.
    I have installed MPEG StreamClip that will let me convert an exported Aperture slideshow video file, and StreamClip allows me to choose a specific bit rate in Mbps. But, I would prefer not having to do a 2 step process (Export from Aperture, then convert in StreamClip).
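    As a cross-check on the table above, file size follows directly from rate and duration (83 s is the 1:23 test slideshow):

```python
def file_size_mb(bitrate_mbps, duration_s):
    """Approximate file size in MB from average bit rate and duration."""
    return bitrate_mbps * duration_s / 8

# 20.68 Mbps over 83 seconds -> ~214.6 MB, matching the 214.8 MB reported
# (audio and container overhead make up the small difference).
print(round(file_size_mb(20.68, 83), 1))
# SmugMug's 8.0 Mbps target over the same clip would be a much smaller upload:
print(round(file_size_mb(8.0, 83), 1))
```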

  • Problem making slideshow. "Data rate too high"

    Using Encore CS3 on Win XP. Trying to make a DVD of a short film and some slideshows, about 2.5 GB in total. Building stops, telling me: PGC "Lenas" has an error at 00:01:31:08. The data rate is too high for DVD. You must replace the file with one of a lower rate.....ref, EPGC...." All pictures in slideshow "Lenas" are the same size, 2-2.5 MB, and there is nothing special about the picture at 00:01:31:08. What is this message telling me?
    Preview works with no problems.
    Regards Gunnar Löwenhielm

    What are the pixel x pixel dimensions of those still images?
    What is your Encore Project setting?
    How many images do you have per SlideShow?
    Good luck,
    Hunt

  • What chokes H.264 footage: data rate or video dimensions?

    I apologize for posting here, but this board is SO much better than the QuickTime or Compressor boards, and it's all the same workflow anyhow!
    What I'm wondering is what chokes H.264 playback. Is it low bit rates? High bit rates? Big dimensions? I have an 854x480 video at 500 kbps and it looks great, but even on my PowerMac there is slight stuttering, and some alpha-tester feedback I've gotten tells me that a lot of slower computers are choking on the video badly.
    The file size is ridiculously small, so if increasing the data rate makes it easier to decode, that would be nice.

    It's neither - or either, if you will. H.264 achieves unprecedented levels of compression, which means that it's a lot more CPU-intensive to play back in real time. It simply doesn't play well on older hardware. Even slower G5s for example have problems playing back full HD H.264, particularly if it's progressive. It also depends on your GPU - certain GPUs support H.264 decoding, thus significantly offloading the CPU.
    One general rule, though: the higher your data rate at any given frame size and frame rate, the less work the decoder needs to do. I'd try to go for approx. 800 kbps at your chosen frame size and see if that helps.
    HTH,
    Ron

  • PGC...data rate too high

    Hello,
    the message in thread
    nunew33, "Mpeg not valid error message" #4, 31 Jan 2006 3:29 pm describes a certain error message. That user had problems with an imported MPEG movie.
    Now I receive the same message, but the MPEG causing the problem was created by Encore DVD itself!?
    I am working with the German version, but here is a rough translation of the message:
    "PGC 'Weitere Bilder' has an error at 00:36:42:07.
    The data rate of this file is too high for DVD. You must replace the file with one of a lower data rate. - PGC Info: Name = Weitere Bilder, Ref = SApgc, Time = 00:36:42:07"
    My test project has two menus and a slide show with approx. 25 slides and blending as transition. The menus are ok, I verified that before.
    First I thought it was a problem with the audio I use in the slide show. Because I am still in the state of learning how to use the application, I use some test data. The audio tracks are MP3s. I learned already that it is better to convert the MP3s to WAV files with certain properties.
    I did that, but still the DVD generation was not successful.
    Then I deleted all slides from the slide show except the first. Now the generation worked!? Since a single slide (an image file) cannot have a bit rate per second, and there was no sound any more, and since the error message appears AFTER the slide shows are generated, while Encore DVD is importing video and audio just before the burning process, I think the MPEG containing the slide show is the problem.
    But this MPEG is created by Encore DVD itself. Can Encore DVD create data that is not compliant with the DVD specs?
    The last two days I had to find the cause of a "general error". Eventually I found out that image names must not be too long. Now there is something else, and I still have to waste time finding solutions for apparent bugs in Encore DVD. Why doesn't the project check find and tell me about such problems? The problem is that the errors appear at the end of the generation process, so I always have to wait, in my case, approx. 30 minutes.
    If the project check had told me beforehand that there are files with names that are too long, I wouldn't have had to search for this for two days.
    Now I get this PGC error (what is PGC, by the way?), and still have no clue, because again the project check didn't mention anything.
    Any help would be greatly appreciated.
    Regards,
    Christian Kirchhoff

    Hello,
    thanks, Ruud and Jeff, for your comments.
    The images are all scans of ancient paintings, and they are all rather dark. They are not "optimized", meaning they are JPGs right now (RGB), and they are bigger than the resolution for PAL 4:3 would require. I just found out that if I choose "None" as scaling, there is no error, and the generation of the DVD is much, much faster.
    A DVD with a slide show containing two slides and a 4-second transition takes about 3 minutes to generate when the scaling is set to something other than "None". Without scaling it takes approx. 14 seconds. The resulting movie's size is the same (5.35 MB).
    I wonder why the time differs so much. Obviously the images have to be scaled to the target size. But it seems that the images are not scaled only once, with the scaled versions cached and reused for the blend effect; rather, for every frame the source images seem to be scaled again.
    So I presume that the scaling unfortunately has an effect on the resulting movie too, and thus influences the success of the DVD generation process.
    basic situation:
    good image > 4 secs blend > bad image => error
    variations:
    other blend times don't cause an error:
    good image > 2 secs blend > bad image => success
    good image > 8 secs blend > bad image => success
    other transitions cause an error, too:
    good image > 4 secs fade to black > bad image => error
    good image > 4 secs page turn > bad image => error
    changing the image order prevents the error:
    bad image > 4 secs blend > good image => success
    changing the format of the bad image to TIFF doesn't prevent the error.
    changing colors/brightness of the bad image: a drastic change prevents the error. I adjusted the histogram and made everything much lighter.
    Just a gamma correction with values between 1.2 and 2.0 didn't help.
    changing the image size prevents the error. I decreased the size. The resulting image was still bigger than the monitor area, thus it still had to be scaled a bit by Encore DVD, but with this smaller version the error didn't occur. The original image is approx. 2000 px x 1400 px. Decreasing the size by 50% helped. Less scaling (I tried 90%, 80%, 70% and 60%, too) didn't help.
    using a slightly blurred version (gaussian blur, 2 px, in Photoshop CS) of the bad image prevents the error.
    My guess is that the error depends on rather subtle image properties. The blur doesn't change the image's average brightness, the color balance, or the size of the image, but still the error was gone afterwards.
    The problem is that I will work with slide shows that contain more than two images. It would be too time-consuming to generate the DVD over and over again, see at which slide an error occurs, change that slide, and then generate again. Even the testing I am doing right now has already "eaten" a couple of days of my working time.
    The only thing I can do is use a two-image slide show and test image pair after image pair. If n is the number of images, I will spend (n - 1) times 3 minutes (the average time to create a two-slide slide show with a blend). But of course I will try to prepare the images and make them as big as the monitor resolution, so Encore DVD doesn't have to scale the images any more. That will make the whole generation process much shorter.
    If I use JPGs or TIFFs, the pixel aspect ratio is not preserved when the image is imported. I scaled one of the images in Photoshop, using a modified menu file that was installed with Encore DVD, because it already has the correct size for PAL, the pixel aspect ratio, and the guides for the safe areas. I saved the image as TIFF and as PSD and imported both into Encore DVD. The TIFF is rendered with a 1:1 pixel aspect ratio and NOT with the D1/DV PAL aspect ratio that is stored in the TIFF. Thus the image gets narrowed and isn't displayed the way I wanted any more. Only the PSD looks correct. But I think I saw this already in another thread...
    I cannot really understand why the MPEG encoding engine would produce bit rates that are illegal and not accepted afterwards, when Encore DVD is putting everything together. Why does the MPEG encoding engine itself not throw an error during the encoding process? This would save the developer so much time. Instead you have to wait until the end, thinking everything went right, only to find out that there was a problem.
    Still, if somebody finds out more about this whole matter at some point, I would be glad to hear further explanations.
    Best regards,
    Christian

  • PGC has an error--data rate of this file is too high for DVD

    Getting one of those seemingly elusive PGC errors, though mine seems to be different from many of the ones listed here. Mine is telling me that the data rate of my file is too high for DVD. The only problem is, the file it says has too high a data rate is a slideshow which Encore built using imported JPG files. I got the message, tried going into the slideshow and deleting the photo at the particular spot in the timeline where it said it had the problem, and am now getting the same message again with a different timecode in the same slideshow. The pictures are fairly big, but I assumed that Encore would automatically resize them to fit an NTSC DVD timeline. Do I need to open all the pictures in Photoshop and scale them down to 720x480 before I begin with the slideshows?

    With those efforts regarding the RAM, it would *seem* that physical memory is not the problem.
    I'd look at how Windows is managing both the RAM addresses and its Virtual Memory. To the former, I've seen programs/Processes that lock certain memory addresses upon launch (maybe in startup) and do not report this to Windows accurately. Along those lines, you might want to use Task Manager to see what Processes are running from startup on your machine. I'll bet that you've got some that are not necessary, even if IT is doing a good job with the system setup. One can use MSCONFIG to do a trial of the system without some of these.
    I also use a little program, EndItAll2, for eliminating all non-necessary programs and Processes when editing. It's freeware, has a tiny footprint, and usually does a perfect job of surveying your running programs and Processes to shut them down. You can also modify its list, in case it wants to shut down something that IS necessary. I always Exit from my AV, spyware, popup-blocker, etc., as these programs will lie to EndItAll2 and say that they ARE necessary, as part of their job. Just close 'em out in the Tasktray, then run EndItAll2. Obviously, you'll need to do this with the approval of IT, but NLE machines need all available resources.
    Now, to the Virtual Memory. It is possible that Windows is not doing a good job of managing a dynamic Page File. Usually it does, but many find there is greater stability with a fixed size at about 1.5 to 2.5x the physical RAM. I use the upper end with great results. A static Page File also makes defragmenting the HDD a bit easier. I also have my Page File split over two physical HDDs. Some find locating it to, say, D:\ works best. For whatever reason, my XP-Pro SP3 demanded that I have it on C:\, or split between C:\ and D:\. The same OS on my 3-HDD laptop was cool having it on D:\ only. Go figure.
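    The 1.5x to 2.5x rule of thumb above, in numbers (just arithmetic, not a Windows API):

```python
def page_file_range_mb(ram_mb, low=1.5, high=2.5):
    """Fixed page-file size range suggested above: 1.5x to 2.5x physical RAM."""
    return ram_mb * low, ram_mb * high

# For a 2 GB XP-era editing box:
print(page_file_range_mb(2048))
```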
    These are just some thoughts.
    Glad that you got part of it solved and good luck with the next part. Since this seems to affect both PrPro and En, sounds system related.
    Hunt
    PS Some IT techs love to add all sorts of monitors to the computers, especially if networked. These are not usually bad, but are usually out of the mainstream, in that most users will never have most of these. You might want to ask about any monitors. Also, are you the only person with an NLE computer under the IT department? In major business offices this often happens. Most IT folk do not have much, if any, experience with graphics or NLE workstations. They spend their days servicing database, word processing and spreadsheet boxes.

  • Cell size in JList

    Hi all,
    Why does a JList make all cells the same size when I change its layout orientation? Before changing the orientation, the JList shows cells in different sizes. Is there any way I can have different cell sizes for different cells after I change the layout orientation of a JList?

    In the future, Swing related questions should be posted in the Swing forum (like you did the last time, so you obviously know the Swing forum exists).
    "Is there any way I can have different cell sizes for different cells after I change the layout orientation of a JList?"
    Are we supposed to be mind readers? How do we know what your old/new orientation is, or which orientation you are having the problem with? How do we know what type of data you are displaying in the JList?
    If you need further help then you need to create a [Short, Self Contained, Compilable and Executable, Example Program (SSCCE)|http://sscce.org], that demonstrates the incorrect behaviour.
    Don't forget to use the Code Formatting Tags so the posted code retains its original formatting. That is done by selecting the code and then clicking on the "Code" button above the question input area.

  • Best Data Rate for Export to DVD as Quicktime Movie

    Hello - hope someone out there can help me!!!
    I put together a slideshow for my high school reunion using iMovie HD 5.0.2 and need help with the settings for exporting it as a QuickTime movie.
    *I want to export it to a DVD* so that I can play it on a friend's laptop, and I'm not sure if iDVD would be the way to go - my friend has a MacBook Pro I can borrow, so it might be okay - but I am worried about the fact that I've been working in older software. (I have read about opening iDVD and bringing in the iMovie project vs. "share to iDVD".) So they suggested that instead of iDVD I just export as a QuickTime movie. Then I can burn it in Toast (v6.0.2).
    Wondering about *BEST DATA RATE?*
    In export settings I have:
    compressor set at H.264
    frame rate: current (should that be DV-NTSC?)
    key frames: automatic (?)
    encoding: best quality/multi-pass
    should data rate be automatic?
    Some details that might be relevant to helping me:
    *Stills brought in as clips with cross-dissolves, only two uses of "Ken Burns effect"
    *Some stills were in the project from my LAST reunion - I realize I should have reimported all the stills since I am now recompressing them, but it's too late now!
    *no audio
    *Slideshow is about 45 mins long, file size says 5.52GB although when I looked into exporting it last night it said 9+GB - not sure why
    *Using a G4 Tower (not my ancient laptop) with OS 10.4.11, Dual 533MHz PowerPC, 1.5GB SDRAM
    iDVD version is 6.0.4
    please help if you can!! thank you in advance...

    Are you trying to burn a standard DVD or are you trying to put a QuickTime .mov file on DVD media?
    A standard DVD doesn't worry about file size (only the duration).
    A "data" DVD is limited to the type of media used. A single layer DVD is about 3.7 GB's.
    In order to make a data DVD you need to keep the data rate low enough not to exceed the DVD media's playback abilities. They can't handle the higher rates found in many of the preset options.
    You can avoid all of these headaches by telling the viewer to "copy" the .mov file from the DVD to their Desktop. Then nearly any of the preset options will work.
    H.264 is a great video codec, and the "automatic" preset should work just fine. Use "multi-pass" for best quality (it takes a very long time). Remove the check mark for "Audio" since your file has none; leave it checked and you waste file size, because a silent audio track would be added.
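    For the file-size side of the constraint, a rough calculation (the 4.4 GB usable capacity here is an assumption; player read-speed limits are a separate issue):

```python
def max_rate_mbps(disc_gb, duration_min):
    """Highest average data rate (Mbps) at which the clip still fits on the disc."""
    return disc_gb * 8000 / (duration_min * 60)

# A 45-minute slideshow on a single-layer disc, taken here as 4.4 GB usable:
print(round(max_rate_mbps(4.4, 45), 1))
```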

  • Best Data rate and format for Big Screen Playback

    I am a video producer supplying content for a tradeshow display consisting of 18 flat screens oriented vertically, 8 over 8. It is a very heavy-duty Windows playback machine, with 3 graphics cards, 1 for each set of 6 monitors. The total pixel size is 3240x1280 px. We have produced a number of full-screen and smaller movies that the interactive programmer is trying to integrate into a Flash program where some videos will play full screen and others in smaller windows (those were produced in 1920x1080). All videos were mastered at the specified pixel count, square pixels, progressive, and output in ProRes 422, 44.1 kHz audio. Flash is being used to create the interface, and I am transcoding to .F4V at lower bit rates like 4, 6, and 9 Mbps. They are having trouble playing these movies back smoothly. We have done similar projects in the past with Director without issue. Of course they are blaming the video; I am offering to encode in any format at any data rate they suggest. Can anyone suggest a direction here?


  • Encoded video does not have expected data rate

    Hi
    I've been encoding with Flash Media Encoder (CS3) recently and have come across a contradiction which I can't seem to make sense of. Can it be that the longer the video, the harder it is for the encoder to maintain the data rate?
    This was revealed when I started using FMS2 (space on a shared server as opposed to my own server). I encoded a video at what I thought was a data rate within the limit for my stream, and it stuttered on playback over the stream. The FMS supplier informed me that the FLV I thought I'd encoded at 400 kbps video and 96 kbps audio actually had a data rate of around 700 kbps in total. But if I open the video in an FLV player and look at the info, it still shows the settings I originally intended. However, my supplier suggested checking the data rate using this equation:
    (file size in kbytes x 8) / video duration in seconds = data rate in kbit/s
    If you apply this calculation to my video, it does indeed show what they said.
    Using Flash Media Encoder:
    Encoded at 400 kbps video - 96 kbps audio
    Compressed file size = 26,517 kbytes
    Duration = 240 seconds
    CALCULATION GIVES A DATA RATE OF ~884 kbit/s
    (note: the file contains an alpha channel)
    Furthermore, applied to a shorter video, we get the following more logical results:
    Encoded at 400 kbps video - 96 kbps audio
    Compressed file size = 6,076 kbytes
    Duration = 90 seconds
    CALCULATION GIVES A DATA RATE OF ~540 kbit/s
    (no alpha channel)
    The only differences between the two videos are the alpha channel and the duration. Something doesn't add up here.
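    The supplier's formula, applied to both files (sizes in kbytes as given above):

```python
def datarate_kbps(filesize_kbytes, duration_s):
    """(file size x 8) / duration: average total data rate in kbit/s."""
    return filesize_kbytes * 8 / duration_s

# 240 s file with alpha channel: ~884 kbit/s, far above the intended 400+96:
print(round(datarate_kbps(26_517, 240)))
# 90 s file without alpha: ~540 kbit/s, much closer to target:
print(round(datarate_kbps(6_076, 90)))
```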
    Any clues?
    Thanks
    Simon

    I've just partly answered my own question. Funny how writing a post can help you think your problem through sometimes.
    The issue is the alpha channel, which adds to the file size, thereby making the data rate "incorrect". In other words, the same file encoded without the alpha channel gives the expected results. I guess this changes my question to: how do I create predictable data rates when encoding with an alpha channel? I could just estimate, of course, and see what comes out, but is there a better way?
    Simon.

  • Lower Data Rates for VoWLAN

    When I look at the deployment guides, I see that I should not use data rates below 12 Mbps in the 802.11a network. We do not expect more than 6 calls and 8 phones on one access point, but we would like to increase coverage by using the 6 Mbps data rate. We cannot place more access points. Does anyone have experience using lower data rates?
    Using: 1252 Access Points and a WiSM Controller

    Allowing speeds from 6 Mbps can technically work, but the recommendation is also to avoid retries... the larger your cell, the more chance you have of multipath issues making packets fail. With a voice deployment, if you have more than a 1% error rate, your MOS degrades below 4.1 (the quality of the call is affected and users start complaining that the voice conversation is unstable, choppy, etc.).
    So allowing 6 Mbps is fine, but the result will depend on the quality of your RF environment...
    Hope it helps,
    Jerome
