RTP jitter buffer growing too large?

Hi all, I am experiencing a rather annoying problem when receiving RTP audio data and rendering it. It takes some time for the player to get created and realized, and in the meantime RTP packets continue to arrive and get buffered. The buffer appears to grow until data is drained from it by the player, so the longer the player takes to get created and realized, the larger the buffer becomes, causing a massive delay that is annoying when a conversation is being carried out. I did set the buffer length (via the RTPManager's BufferControl, roughly as in the sketch below) to 200ms, but this does not seem to make any difference. I don't have direct proof that this is what actually happens under the hood, but all the evidence points to unchecked growth of the jitter buffer: the faster the computer, the faster the player gets realized and the smaller the delay.
Does anyone else experience this phenomenon? Is there a fix?
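For reference, here is a minimal sketch of how a 200ms jitter buffer is typically requested through JMF's BufferControl; the session setup, target addresses, and ReceiveStreamListener are omitted, and the threshold value is an assumption to tune:

    import javax.media.control.BufferControl;
    import javax.media.rtp.RTPManager;

    public class JitterBufferSetup {
        // Creates an RTPManager and requests a 200 ms jitter buffer via BufferControl.
        // Real code would also call initialize()/addTarget() and register listeners.
        public static RTPManager createManager() {
            RTPManager rtpManager = RTPManager.newInstance();
            BufferControl bc = (BufferControl) rtpManager.getControl("javax.media.control.BufferControl");
            if (bc != null) {
                bc.setBufferLength(200);      // requested jitter-buffer length in milliseconds
                bc.setMinimumThreshold(100);  // ms of buffered data before playback resumes (assumed value)
            }
            return rtpManager;
        }
    }

Note that setBufferLength() returns the length the implementation actually chose, so 200ms is a request rather than a guarantee, which would be consistent with the behaviour described above.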

I don't know if your diagnosis is correct; for sure I see a lot of jitter between two PCs using the same Java app and playing an RTP audio broadcast.
But I could not relate it to the speed of the computer: sometimes A plays before B, sometimes after. Probably it is the time taken to create the objects that varies.
Still looking for a solution....

Similar Messages

  • TDS buffer length too large & Protocol error in TDS stream

    Hi,
    While performing an HFM Copy Application (using the copy utility) from Production to Development, I repeatedly see the following errors in the log file:
    11-24-2009 01:06:37 : 157 : Error : Number=(-2147467259)(80004005) Source=(Microsoft OLE DB Provider for SQL Server) Description=(TDS buffer length too large) SQLState=(HY000)NativeError=(0)
    11-24-2009 01:06:37 : 157 : Error : Number=(-2147467259)(80004005) Source=(Microsoft OLE DB Provider for SQL Server) Description=(Protocol error in TDS stream) SQLState=(HY000)NativeError=(0)
    I would like to know why this error appears; any help would be greatly appreciated.
    Thanks.
    Regards,
    Ravi Shankar S

    I have seen this when the server doesn't respond in time while pulling data from SQL Server via ODBC from Excel. If that's the case, the fix is fairly simple:
    sp_configure 'remote query timeout (s)', 3600
    GO
    RECONFIGURE
    GO
    JCEH

  • Time Machine backup grows too large during backup process

    I have been using Time Machine without a problem for several months, backing up my iMac - a 500GB drive with 350GB used. Recently TM failed because the backups had finally filled the external drive - a 500GB USB drive. Since I did not need the older backups, I reformatted the external drive to start from scratch. Now TM tries to do an initial full backup, but the size keeps growing as it backs up, eventually becoming too large for the external drive, and TM fails. It will report, say, 200GB to back up, then it reaches that point and the "Backing up XXXGB of XXXGB" counter just keeps getting larger. I have tried excluding more than 100GB of files to make the backup set very small, but it still grows during the backup process. I have deleted plist and cache files as some discussions have suggested, but the same issue occurs each time. What is going on???

    Michael Birtel wrote:
    Here is the log for the last failure. As you can see, it indicates there is enough room (345GB needed, 464GB available), but then it fails. I can watch the backup progress: it reaches 345GB and then keeps growing until it gives an out-of-disk-space error. I don't know what "Event store UUIDs don't match for volume: Macintosh HD" implies; maybe this is a clue?
    No. It's sort of a warning, indicating that TM isn't sure what's changed on your internal HD since the previous backup, usually as a result of an abnormal shutdown. But since you just erased your TM disk, it's perfectly normal.
    Starting standard backup
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    Ownership is disabled on the backup destination volume. Enabling.
    2009-07-08 19:37:53.659 FindSystemFiles[254:713] Querying receipt database for system packages
    2009-07-08 19:37:55.582 FindSystemFiles[254:713] Using system path cache.
    Event store UUIDs don't match for volume: Macintosh HD
    Backup content size: 309.5 GB excluded items size: 22.3 GB for volume Macintosh HD
    No pre-backup thinning needed: 345.01 GB requested (including padding), 464.53 GB available
    This is a completely normal start to a backup. Just after that last message is when the actual copying begins. Apparently whatever's happening, no messages are being sent to the log, so this may not be an easy one to figure out.
    First, let's use Disk Utility to confirm that the disk really is set up properly.
    First, select the second line for your internal HD (usually named "Macintosh HD"). Towards the bottom, the Format should be Mac OS Extended (Journaled), although it might be Mac OS Extended (Case-sensitive, Journaled).
    Next, select the line for your TM partition (indented, with the name). Towards the bottom, the Format must be the same as your internal HD (above). If it isn't, you must erase the partition (not necessarily the whole drive) and reformat it with Disk Utility.
    Sometimes when TM formats a drive for you automatically, it sets it to Mac OS Extended (Case-sensitive, Journaled). Do not use this unless your internal HD is also case-sensitive. All drives being backed up, and your TM volume, should be the same. TM may do backups this way, but you could be in for major problems trying to restore to a mismatched drive.
    Last, select the top line of the TM drive (with the make and size). Towards the bottom, the Partition Map Scheme should be GUID (preferred) or Apple Partition Map for an Intel Mac. It must be Apple Partition Map for a PPC Mac.
    If any of this is incorrect, that's likely the source of the problem. See item #5 of the Frequently Asked Questions post at the top of this forum for instructions, then try again.
    If it's all correct, perhaps there's something else in your logs.
    Use the Console app (in your Applications/Utilities folder).
    When it starts, click Show Log List in the toolbar, then navigate in the sidebar that opens up to your system.log and select it. Navigate to the "Starting standard backup" message that you noted above, then see what follows that might indicate some sort of error, failure, termination, exit, etc. (many of the messages there are info for developers, etc.). If in doubt, post (a reasonable amount of) the log here.

  • SharePoint TempDB.mdf growing too large? I have to restart SQL Server all the time. Please help

    Hi there,
    On our DEV SharePoint farm > SQL server
    The tempdb.mdf size grows too quickly and too much. I am tired of increasing the space and cannot do that anymore.
    Every time, I have to reboot the SQL Server to get tempdb back to a normal size.
    The Live farm is okay (with similar data), so it must be something wrong with our DEV environment.
    Any idea how to fix this please?
    Thanks so much.

    How do you get tempdb back to its 'normal size'? How large is large, and how small is normal?
    Have you put the databases in simple recovery mode? It's normal for dev environments not to have the required transaction log backups to keep the .ldf files in check. That won't affect tempdb, but if you've got bigger issues then that might be a symptom.
    Have you turned off autogrowth for tempdb?

  • EM Application Log and Web Access Log growing too large on Redwood Server

    Hi,
    We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated, and I need to know what they are and whether they can be purged or reduced in size.
    They have existed since the system was installed, and I have tried to open them, but they are too large. I have also tried taking the cluster group offline to see if the files stop being updated, but they continue to be updated.
    Please could anyone shed any light on this and what can be done to resolve it?
    Thanks in advance for any help.
    Jason

    Hi David,
    The file names are:
    em-application.log and web access.log
    The File path is:
    D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
    Redwood/CPS version is 6.0.2.7
    Thanks for your help.
    Kind Regards,
    Jason

  • Music library growing too large...

    I've been using Quod Libet as my music player for a while now, and it is pretty much exactly what I want in a music player. However, as my music collection grows, it has been slowing down lately. I have over 8000 songs now, around 40 gigs, and Quod Libet will slow down, peg CPU usage, and crash quite often. What other options do I have? I know Amarok can use a real database backend that should scale way beyond what I currently have, but I prefer GTK apps and the Quod Libet interface. Can MPD handle a library this large? Any MPD clients that are Quod Libet-like? Any way to make Quod Libet scale better?
    Thanks

    luciferin wrote:
    dmz wrote:http://www.last.fm/user/betbot
    It takes a true audiophile to require The Spice Girls in lossless quality
    Here's me: http://www.last.fm/user/Arch
    That's right, I nabbed the nick Arch way back in 2004 on Audioscrobbler and Neowin.net   Arch Linux and I were meant to be together.
    And to derail this thread a little bit: does anybody know of a Linux music player that doesn't use a database? Just adds files from your directories, à la Foobar?
    The Spice Girls are very underrated. And Mel C is a hell of a girl. So beautiful... I wish... oh well. Maybe you want to take a look at mocp or cmus, if you don't want to use MPD.

  • Automatic Deployment Rule for SCEP Definitions growing too large.

    I see the deployment package for SCEP definitions is now 256MB and growing. How can we make sure it stays small? The ADR creating the package is leaving 26 definitions in there right now.

    The method that Kevin suggests above is what is implemented as part of a default deployment template included with SP1. This limits the number of definitions in the update group to the latest eight (I think).
    As a supplemental note here, whenever an ADR runs and is configured to use an existing update group, it first wipes that update group.
    Jason | http://blog.configmgrftw.com

  • Tablespace growing too large

    Good morning gurus,
    Sorry if I sound like a novice at some points.
    I have this tablespace, vending, of size 188,598.6MB. It keeps on growing; I have to give it extra space every week and all of it is consumed. It is a permanent tablespace with local extent management and automatic segment space management. This tablespace is the backbone of the database, which is 250GB. We are currently running Oracle 10.2.0.4 on Windows.
    Please help
    Regards
    Deepika

    Hi..
    Please do mention the database version and the OS.
    You need to know what objects and object types are on such a big tablespace, which schemas use it and what they do, whether they do any kind of direct-path loading into the database, and whether all the tables and indexes are on the same tablespace. My feeling is that you have all the tables and indexes on the same tablespace. I would recommend two things:
    1. Purge the data. Talk to the application team concerned, or whoever the responsible person is, decide on a data retention period for the database, and move the rest of the data to some other database as history.
    2. Keep different tablespaces for the tables and indexes.
    HTH
    Anand

  • Content Database Growing too large

    We seem to be experiencing some slowness on our SharePoint farm and noticed that one of our databases (we have two) is now at 170GB. Best practice seems to be to keep a database from going over 100GB.
    We have hundreds of sites within one database and need to split these up to save space on our databases.
    So I would like to create some new databases and move some of the sites from the old database over to the new ones.
    Can anyone tell me if I am on the right track here and if so how to safely move these sites to another Content Database?
    dfrancis

    I would not recommend using RBS. Microsoft's RBS is really just meant to be able to exceed the 4GB/10GB MDF file size limit in SQL Express. RBS space counts against database size, and backup/restore becomes a more complex task.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • CSCtz15346 - /mnt/pss directory growing too large and having no free space

    Hello,
    the Nexus 3K switch is not allowing me to save the configuration; it shows the message below.
              switch %$ VDC-1 %$ %SYSMGR-2-NON_VOLATILE_DB_FULL: System non-volatile storage usage is unexpectedly high at 99%
    switch# switch# copy  r s
    [########################################] 100%
    Configuration update aborted: request was aborted
    switch#
    How do I clean up /mnt/pss?
    Thanks

    Hi Naga,
    From the CLI, issue "show system internal flash" to see what directory is taking up the space.   Unfortunately, if it is /mnt/pss, then you really need to engage TAC to get on the switch and enable the internal access to the file system so it can be cleared up.
    Sincerely,
    David.

  • RSZWOBJ table growing too large

    Hello Experts:
    RSZWOBJ is the largest table at my client.  Does anyone have experience with archiving the RSZWOBJ table or handling its data growth?
    Thanks,
    Jane

    Hi,
    Can you carry out, say, a bookmark purge for content that is older than 6 months or so?
    Can you check whether we can delete that history from the system and, apparently, from the table?
    How to delete user-defined bookmarks?
    How to find the InfoProvider and query name with the help of the WAD technical name:
    Thanks and regards
    Kiran

  • Var/adm/utmpx: value too large for defined datatype

    Hi,
    On a Solaris 10 machine I cannot use the last command to view login history etc. It reports something like "/var/adm/utmpx: value too large for defined datatype".
    The size of /var/adm/utmpx is about 2GB.
    I tried renaming the file to utmpx.0 and creating a new file using head utmpx.0 > utmpx, but after that the last command does not show any output. The new utmpx file does seem to be updating with new info, though, as seen from the file's last-modified time.
    Is there a standard procedure to recreate the utmpx file once it grows too large? I couldn't find much in the man pages.
    Thanks in advance for any help

    The easiest way is to cat /dev/null to utmpx - this will clear out the file to 0 bytes but leave it intact.
    from the /var/adm/ directory:
    cat /dev/null > /var/adm/utmpx
    Some docs suggest going to single user mode to do this, or stopping the utmp service daemon first, but I'm not positive this is necessary. Perhaps someone has input on that aspect. I've always just sent /dev/null to utmpx and wtmpx without a problem.
    BTW - I believe "last" works with wtmpx, and "who" works with utmpx.

  • XMLTRANSFORM Too large stylesheet - code buffer overflow issue

    Hi All,
    My question is related to MSWordML generation from a PL/SQL stored procedure.
    1. I have a table containing XSLT stylesheets for different documents.
    2. A PL/SQL stored procedure generates dynamic content depending on some parameters, and at the end I'm using
    SELECT XMLTRANSFORM(XMLTYPE.createxml(db_data_clob), XMLTYPE.createxml(x.xslt_clob)).GetClobVal()
    INTO   res
    FROM   msword_ml_data x
    WHERE  x.report_id = rep_id_variable;
    where : x.xslt_clob -> column, containing XSLT CLOB
    db_data_clob -> dynamic content CLOB
    res -> CLOB result
    All this was working fine on Oracle 11gR1, but I had to reinstall the database and I thought, why not install Oracle 11gR2...
    Guess what: the stored procedure now raises an exception when using XMLTRANSFORM:
    Exception : : ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00004: internal error "Too large stylesheet - code buffer overflow"
    Google says nothing about it. I don't recall setting any special DB property in Oracle 11gR1.
    Has anyone encountered this?
    I haven't changed the procedure or the table.
    I'm using exactly the same XSLTs from Java code and they work just fine, so they are not the reason. My guess is that something related to XML processing has changed in Oracle 11gR2.
    If anyone could help, thanks in advance

    For those who are interested:
    I logged a service request and it turned out that this is a bug in Oracle 11gR2.
    "The limitation on the style sheet is not exactly a size limit but a limitation on the number of style sheet instructions and depends on the way the style sheet has been written. This is a C based parser limitation"
    Anyway, the workaround is to create a Java stored procedure and do the transformation from there.
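    For illustration, here is a minimal sketch of the kind of Java transformation such a stored procedure could wrap, using the standard javax.xml.transform API; the class and method names are hypothetical, and the loadjava upload plus the PL/SQL call specification that would expose it in the database are omitted:

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XsltHelper {
        // Applies the XSLT stylesheet to the XML document and returns the result.
        public static String transform(String xmlData, String xslt) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer =
                factory.newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(new StringReader(xmlData)),
                                  new StreamResult(out));
            return out.toString();
        }
    }

    Because this runs on the JVM's XSLT processor rather than the C-based parser inside the database, it sidesteps the instruction-count limitation described in the bug explanation above.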

  • Requested buffer too large - but data is already in memory

    Hello all,
    I am writing a program that generates sound and then uses the Java Sound API to play it back over the speakers. Until recently, using clips had not led to any problems; on two computers I can play the sound without a hitch. However, on the newest computer (also the one with the largest specs and, especially, the most RAM), I am getting an error while trying to play back the sound. The exception that is thrown is:
    javax.sound.sampled.LineUnavailableException: Failed to allocate clip data: Requested buffer too large.
    I find this odd because the buffer already exists in memory: I don't have to read in a .wav file or anything, because I am creating the audio during the course of my program's execution (this is also why I use Clips instead of streaming - the values are saved as doubles during the calculations and then converted into a byte array, which is the buffer used in the clip.open() method call). It has no problems allocating the double array or the byte array, or populating the byte array; the exception is only thrown during the clip.open() call. I also find it strange that it works on two other computers, both of which have less RAM (it runs fine on machines with 512MB and 2GB of RAM, both XP 32-bit). The only difference is that the computer with the issue is running Windows 7 (the RTM build), 64-bit, with 6GB of RAM. I am running it through NetBeans 6.7.1 with memory options set to use up to 512MB - but it's never gone up that far before. And I've checked the size of the buffer on all three computers and it is the same.
    Does anyone know what the issue could be or how to resolve it? I am using JDK6 if that matters. Thank you for your time.
    Edited by: Sengin on Sep 18, 2009 9:40 PM

    Thanks for your answer. I'll try that.
    I figured it had something to do with Windows 7, since it technically hasn't been released yet (however, I have the RTM version thanks to a group at my university in cahoots with Microsoft which allows some students to get various Microsoft products for $12).
    Edit: I just changed the Clip to a SourceDataLine (and made the few other necessary changes, like changing the way the DataLine.Info object was created), wrote the whole buffer into it, drained the line, and then closed it. It works fine (a rough sketch of this approach follows below). I'll mark the question as answered, although that may not be the "correct" answer (perhaps it does have something to do with Windows 7 not being completely tested yet). Thanks.
    Edited by: Sengin on Sep 21, 2009 8:44 PM
    Edited by: Sengin on Sep 21, 2009 8:46 PM
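    As a minimal sketch of the SourceDataLine approach described above, assuming 16-bit signed little-endian mono PCM at 44.1 kHz and a byte array that has already been generated (the format values are assumptions to adjust to the real data):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.SourceDataLine;

    public class PcmPlayback {
        // Plays an in-memory PCM buffer through a SourceDataLine instead of a Clip.
        public static void play(byte[] pcm) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 1, true, false); // rate, bits, channels, signed, little-endian
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();
            line.write(pcm, 0, pcm.length); // write the whole buffer
            line.drain();                   // wait for playback to finish
            line.close();
        }
    }

    Unlike Clip.open(), which asks the mixer to allocate space for the entire buffer up front, the data here is fed through the line incrementally, which is presumably why it avoids the "Requested buffer too large" allocation failure.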

  • Warning:The EXPORT data cluster is too large for the application buffer.

    Hi Friends,
    I am getting the following warning messages whenever I click on Costing in the "Accounting" tab.
    1. Costing data may not be up to date  Display Help
    2.  No costing variant exists in controlling scenario CPR0001
    3.  An error occurred in Accounting (system ID3DEV310)
    4. The EXPORT data cluster is too large for the application buffer.
    I can create a project automatically from cProjects. Plan costs, budget, and actuals maintained in WBS elements are visible in the cProjects Object Links.
    Your reply is highly appreciated.
    Regards,
    Aryan

    Hi;
    Please check Note 1166365.
    We are facing the same problem in R/3, but applying it did not fix the issue.
    Best regards.
    Mariano
