Buffer Size and Latency

I knew the two might be connected, but I recently read some information which confused me. I thought a larger buffer size was good, but I've just read that increasing the buffer size lengthens latency. Can anyone enlighten me on the relationship between these? Just simple answers, please.

Audio is sent through the system in chunks: chunks of the buffer size. Buffer size is measured in samples, so a buffer setting of 256 samples requests and provides audio in 256-sample packets. Because this is a real-time application, the audio cannot be predicted, so the buffer fills from the present/past as opposed to looking ahead of time. Each chunk is handed off to CoreAudio that much ( time ) later, in comparison. So...
A larger buffer increases your track/plugin count, since you allow the computer more time to manage its tasks.
Decreasing it shortens the audio's trip through the system ( = less latency ), at the cost of that headroom.
Many people have found a good balance for this ( if they need to dynamically alter system resources ) to be:
Low buffer size while recording, for minimal latency
Then increase buffer for mixdown, allowing more pluggies and tracks.
Your latency setting is also dependent on your audio hardware and system busses. So, a good interface and driver on a dedicated FireWire bus can allow you to decrease your buffer setting.
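To put rough numbers on this, here's a small sketch (plain Python; the 44.1 kHz rate and buffer values are just example settings) of the delay a given buffer setting adds:

    # Time one buffer's worth of audio represents:
    # latency (ms) = buffer size (samples) / sample rate (Hz) * 1000
    def buffer_latency_ms(buffer_samples, sample_rate_hz):
        return buffer_samples / sample_rate_hz * 1000.0

    for buf in (32, 64, 128, 256, 512, 1024):
        print(f"{buf:4d} samples @ 44100 Hz -> {buffer_latency_ms(buf, 44100):5.1f} ms")

    # 256 samples works out to about 5.8 ms each way; a monitoring round
    # trip is roughly double that, which is why small buffers feel tighter.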
J

Similar Messages

  • Data retrieval buffers - buffer size and sort buffer size

Is there any difference in tuning data retrieval buffers between BSO and ASO?
From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e. more physical memory will be used if there is a lot of concurrent data access from users.
However, even for 100 concurrent users, the default buffer size of 10KB (BSO) or 20KB (ASO) seems very small compared to other cache settings (total buffer cache is 100*20 = 2MB). Should we increase the value to 1000KB to improve data retrieval performance for users? Is the improvement the same for an online application (e.g. Hyperion Planning) and a reporting application (e.g. Financial Reporting)?
    Assume 3 Essbase plan types with 100 concurrent access:
PLAN1 - 1000KB * 100 = 100MB (total retrieval buffer size) + 1000KB * 100 = 100MB (total sort buffer size)
PLAN2 - 1000KB * 100 = 100MB (total retrieval buffer size) + 1000KB * 100 = 100MB (total sort buffer size)
PLAN3 - 1000KB * 100 = 100MB (total retrieval buffer size) + 1000KB * 100 = 100MB (total sort buffer size)
    Total physical memory required is 600MB.
    Thanks in advance!

  • Buffer size and recording delay

    Hi
    I use a Focusrite Saffire LE audio interface, and in my core audio window the buffer size is currently set at 256 and there is a recording delay of 28ms. I don't think I've ever altered these settings since I started to use Logic. I don't know if this is related, but I do notice a slight delay when I monitor what I'm recording -- a slight delay, for example, between my voice (as I'm recording it) through the headphones and my "direct" voice.
    Anyway, are these settings OK, or should they be altered for optimum performance?

A 256-sample buffer size will always give you a noticeable amount of latency. If you use Software Monitoring you should try setting your buffer to 64 samples. With the recording delay slider in Preferences > Audio you can compensate for the latency (of course not in realtime) so that the audio will be placed exactly where it should have been recorded. In your case, set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0
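    As an illustration of the arithmetic behind that slider (a sketch only; this just converts the units, and the 44.1 kHz rate is an example value), a measured loopback latency translates to samples like so:

        # Convert a measured loopback latency (ms) into the negative sample
        # offset to compensate with in the recording delay setting.
        def recording_delay_samples(latency_ms, sample_rate_hz=44100):
            return -round(latency_ms * sample_rate_hz / 1000.0)

        print(recording_delay_samples(5.8))  # ~5.8 ms at 44.1 kHz -> -256 samples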

  • T61p 200gb at 7200rpm buffer size and real world performance?

    Hi, I wanted to confirm what buffer size the 200gb 7200rpm option of the T61p has... there seems to be conflicting information regarding this model. I'd also appreciate any information regarding the real-world performance of this particular drive...
    Thanks!

Both the Hitachi and Seagate 200GB/7200RPM drives used in the T61p have 16MB of onboard cache. Performance tests of these two drives are scattered across the internet. Search Google for "hitachi 200GB 7K200" and "seagate 200GB 7200.3" respectively.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • Buffer size and samples per channel

I have a question regarding the allocation of the output buffer. I have a digital I/O card (PCI-6534) and I use the NI-DAQmx 7.4.0f0 driver. I would like to generate a digital output using different clock rates. For example, I need to write 500 samples at 1,000 samples per second and another 500 samples at a rate of 10,000 samples per second. The simplest solution is to prepare two different waveforms, write the first one to the buffer, and generate the output; then I can load the second waveform into the buffer and generate the second output. To minimize the delay between the two output sequences, I would instead like to write the buffer only once. I tried to set the buffer size to 1000 and the number of samples per second to 500, but it doesn't work. Is there a way to set the buffer size and the number of samples to be generated independently?

    I can post the whole thing but I'll talk a little about it. It's a producer consumer loop with a continuous analog input sample from external clock program in the producer loop. Right now the consumer loop has a simple write to spreadsheet VI in it but eventually I want to average each revolution (well, two since it's a four stroke but that's neither here nor there) of pressure traces and spit out a single curve.
    The wiring is simple. I have a voltage supply feeding the encoder and the quadrature A input on PFI 8 of the 6212 DAQ. I also have the Z index plugged in but nothing else to it. The analog input is a BNC right to AI 0. I can make a diagram if you want one. I've scoped the rotary encoder output and it looks great, very square with small relative rise times.
    Attachments:
    Yanmar Program V2.vi ‏46 KB

  • Out of Memory Error, Buffer Sizes, and Loop Rates on RT and FPGA

I'm attempting to use an FPGA to collect data and then transfer that data to an RT system via a DMA FIFO, and then stream it from my RT system over to my host PC. I have the basics working, but I'm having two problems...
    The first is more of a nuisance. I keep receiving an Out of Memory error. This is a more recent development, but unfortunately I haven't been able to track down what is causing it and it seems to be independent of my FIFO sizes. While not my main concern, if someone was able to figure out why I would really appreciate it.
    Second, I'm struggling with overflows. My FPGA is running on a 100 MHz clock and my RT system simply cannot seem to keep up. I'm really only looking at recording 4 seconds of data, but it seems that no matter what I do I can't escape the problem without making my FIFO size huge and running out of memory (this was before I always got the Out of Memory error). Is there some kind of tip or trick I'm missing? I know I can set my FPGA to a slower clock but the clock speed is an important aspect of my application.
    I've attached a simplified version of my code that contains both problems. Thanks in advance for taking a look and helping me out, I appreciate any feedback!

    David-A wrote:
    The 7965 can't stream to a host controller faster than 800MB/s. If you need 1.6GB/s of streaming you'll need to get a 797x FPGA module.  
    I believe the application calls for 1.6 GB over 4s, so 400 MB/s, which should be within the capabilities of the 7965.
    I was going to say something similar about streaming over ethernet. I agree that it's going to be necessary to find a way to buffer most if not all of the data between the RT Target and the FPGA Target. There may be some benefit to starting to send data over the network, but the buffer on the host is still going to need to be quite large. Making use of the BRAMS and RAM on the 7965 is an interesting possibility.
As a more out-there idea, what about replacing the disk in your 8133 with an SSD? I'm not entirely sure what kind of SATA connection is in the 8133, and you'd have to experiment to be sure, but I think 400 MB/s or close to that should be quite possible. You could write all the data to disk and then send it over the network from disk once the acquisition is complete. http://digital.ni.com/public.nsf/allkb/F3090563C2FC9DC686256CCD007451A8 has some information on using SSDs with PXI Controllers.
    Sebastian
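    The sizing logic behind Sebastian's numbers can be sketched in a few lines (plain Python; the 200 MB/s consumer rate below is a made-up example, not a measured figure for this hardware):

        # 1.6 GB captured over 4 s means the FPGA produces 400 MB/s.
        total_bytes  = 1.6e9
        capture_secs = 4.0
        produce_rate = total_bytes / capture_secs      # 400 MB/s

        # If the RT side only sustains, say, 200 MB/s, the FIFO/buffer must
        # absorb the shortfall for the whole capture or it overflows:
        consume_rate  = 200e6
        needed_buffer = (produce_rate - consume_rate) * capture_secs
        print(f"{produce_rate/1e6:.0f} MB/s produced; "
              f"{needed_buffer/1e9:.1f} GB of buffering needed")   # 0.8 GB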

Based on my computer, what I/O buffer size and process buffer should I use to stop getting the "system too slow" error message?

I need some suggestions on my settings. I have a 13 in MacBook Pro running 10.8.2 with a 2.7 GHz i7 and 4 GB of RAM.
I am using Logic Pro X 10.0.3, and when I get to a really load-heavy section of a project I get a "too slow" error message.
I know this will happen from time to time because of my computer, but is there any way to make it better?

I'm curious how you are running LPX on an OS X 10.8.2 system?
    As to your question.. there is no specific answer... Try playing with the settings (making a note of your current ones) and see what improvements, if any, are made under different combinations of the settings.
    Personally I'd add more RAM if you can... as that alone may help matters by some degree...
    Finally,
    This article may also help.. (Its written for LP9 but most of it is applicable for LPX too)
    http://support.apple.com/kb/TA24535

  • A quick primer on audio drivers, devices, and latency

    This information has come from Durin, Adobe staffer:
    Hi everyone,
    A  common question that comes up in these forums over and over has to do  with recording latency, audio drivers, and device formats.  I'm going to  provide a brief overview of the different types of devices, how they  interface with the computer and Audition, and steps to maximize  performance and minimize the latency inherent in computer audio.
    First, a few definitions:
    Monitoring: listening to existing audio while simultaneously recording new audio.
Sample: The value of each individual measurement of the audio signal digitized by the audio device. Typically, the audio device measures the incoming signal 44,100 or 48,000 times every second.
Buffer Size: The "bucket" where samples are placed before being passed to the destination. An audio application will collect a buffer's worth of samples before feeding it to the audio device for playback; an audio device will collect a buffer's worth of samples before feeding it to the audio application when recording. Buffers are typically measured in samples (common values being 64, 128, 512, 1024, 2048...) or in milliseconds, which is simply a calculation based on the device sample rate and buffer size.
Latency: The time span between providing an input signal to an audio device (through a microphone, keyboard, guitar input, etc.) and when each buffer's worth of that signal is provided to the audio application. It also refers to the other direction, where the output audio signal is sent from the audio application to the audio device for playback. When recording while monitoring, the overall perceived latency can often be double the device buffer size.
ASIO, MME, CoreAudio: These are audio driver models, which simply specify the manner in which an audio application and audio device communicate. Apple Mac systems use CoreAudio almost exclusively, which provides for low buffer sizes and the ability to mix and match different devices (called an Aggregate Device.) MME and ASIO are mostly Windows-exclusive driver models, and provide different methods of communicating between application and device. MME drivers allow the operating system itself to act as a go-between and are generally slower, as they rely upon higher buffer sizes and have to pass through multiple processes on the computer before being sent to the audio device. ASIO drivers provide an audio application direct communication with the hardware, bypassing the operating system. This allows for much lower latency, while being limited in an application's ability to access multiple devices simultaneously, or share a device channel with another application.
    Dropouts: Missing  audio data as a result of being unable to process an audio stream fast  enough to keep up with the buffer size.  Generally, dropouts occur when  an audio application cannot process effects and mix tracks together  quickly enough to fill the device buffer, or when the audio device is  trying to send audio data to the application more quickly than it can  handle it.  (Remember when Lucy and Ethel were working at the chocolate  factory and the machine sped up to the point where they were dropping  chocolates all over the place?  Pretend the chocolates were samples,  Lucy and Ethel were the audio application, and the chocolate machine is  the audio device/driver, and you'll have a pretty good visualization of  how this works.)
    Typically, latency is not a problem if  you're simply playing back existing audio (you might experience a very  slight delay between pressing PLAY and when audio is heard through your  speakers) or recording to disk without monitoring existing audio tracks  since precise timing is not crucial in these conditions.  However, when  trying to play along with a drum track, or sing a harmony to an existing  track, or overdub narration to a video, latency becomes a factor since  our ears are far more sensitive to timing issues than our other senses.   If a bass guitar track is not precisely aligned with the drums, it  quickly sounds sloppy.  Therefore, we need to attempt to reduce latency  as much as possible for these situations.  If we simply set our Buffer  Size parameter as low as it will go, we're likely to experience dropouts  - especially if we have some tracks configured with audio effects which  require additional processing and contribute their own latency to the  chain.  Dropouts are annoying but not destructive during playback, but  if dropouts occur on the recording stream, it means you're losing data  and your recording will never sound right - the data is simply lost.   Obviously, this is not good.
    Latency under 40ms is  generally considered within the range of reasonable for recording.  Some  folks can hear even this and it affects their ability to play, but most  people find this unnoticeable or tolerable.  We can calculate our  approximate desired buffer size with this formula:
buffer size (samples) = (samples per second / 1000) × desired latency (ms)
    So,  if we are recording at 44,100 Hz and we are aiming for 20ms latency:   44100 / 1000 * 20 = 882 samples.  Most audio devices do not allow  arbitrary buffer sizes but offer an array of choices, so we would select  the closest option.  The device I'm using right now offers 512 and 1024  samples as the closest available buffer sizes, so I would select 512  first and see how this performs.  If my session has a lot of tracks  and/or several effects, I might need to bump this up to 1024 if I  experience dropouts.
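    As a worked version of that formula (a sketch; the list of supported sizes is hypothetical, matching Durin's 512/1024 example):

        # buffer size (samples) = sample rate / 1000 * desired latency (ms)
        def target_buffer(sample_rate_hz, latency_ms):
            return sample_rate_hz / 1000.0 * latency_ms

        target = target_buffer(44100, 20)             # 882.0 samples
        supported = [64, 128, 256, 512, 1024, 2048]   # hypothetical device options
        # Try the largest supported size at or below the target first (lowest
        # latency), then step up one size if dropouts occur:
        first_try = max(s for s in supported if s <= target)
        print(target, first_try)                      # 882.0 512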
    Now that we hopefully have a pretty  firm understanding of what constitutes latency and under what  circumstances it is undesirable, let's take a look at how we can reduce  it for our needs.  You may find that you continue to experience dropouts  at a buffer size of 1024 but that raising it to larger options  introduces too much latency for your needs.  So we need to determine  what we can do to reduce our overhead in order to have quality playback  and recording at this buffer size.
    Effects: A  common cause of playback latency is the use of effects.  As your audio  stream passes through an effect, it takes time for the computer to  perform the calculations to modify that signal.  Each effect in a chain  introduces its own amount of latency before the chunk of audio even  reaches the point where the audio application passes it to the audio  device and starts to fill up the buffer.  Audition and other DAWs  attempt to address this through "latency compensation" routines which  introduce a bit more latency when you first press play as they process  several seconds of audio ahead of time before beginning to stream those  chunks to the audio driver.  In some cases, however, the effects may be  so intensive that the CPU simply isn't processing the math fast enough.   With Audition, you can "freeze" or pre-render these tracks by clicking  the small lightning bolt button visible in the Effects Rack with that  track selected.  This performs a background render of that track, which  automatically updates if you make any changes to the track or effect  parameters, so that instead of calculating all those changes on-the-fly,  it simply needs to stream back a plain old audio file which requires  much fewer system resources.  You may also choose to disable certain  effects, or temporarily replace them with alternatives which may not  sound exactly like what you want for your final mix, but which  adequately simulate the desired effect for the purpose of recording.   (You might replace the CPU-intensive Full Reverb effect with the  lightweight Studio Reverb effect, for example.  Full Reverb effect is  mathematically far more accurate and realistic, but Studio Reverb can  provide that quick "body" you might want when monitoring vocals, for  example.)  You can also just disable the effects for a track or clip  while recording, and turn them on later.
Device and Driver Options: Different devices may have wildly different performance at the same buffer size and with the same session. Audio devices designed primarily for gaming are less likely to perform well at low buffer sizes than those designed for music production, for example. Even if the hardware performs the same, the driver mode may be a source of latency. ASIO is almost always faster than MME, though many device manufacturers do not supply an ASIO driver. Third-party, device-agnostic drivers, such as ASIO4ALL (www.asio4all.com), allow you to wrap an MME-only device inside a faux-ASIO shell. The audio application believes it's speaking to an ASIO driver, and ASIO4ALL has been streamlined to work more quickly with the MME device, or even to allow you to use different inputs and outputs on separate devices, which ASIO would otherwise prevent.
    We  also now see more USB microphone devices which are input-only audio  devices that generally use a generic Windows driver and, with a few  exceptions, rarely offer native ASIO support.  USB microphones generally  require a higher buffer size as they are primarily designed for  recording in cases where monitoring is unimportant.  When attempting to  record via a USB microphone and monitor via a separate audio device,  you're more likely to run into issues where the two devices are not  synchronized or drift apart after some time.  (The ugly secret of many  device manufacturers is that they rarely operate at EXACTLY the sample  rate specified.  The difference between 44,100 and 44,118 Hz is  negligible when listening to audio, but when trying to precisely  synchronize to a track recorded AT 44,100, the difference adds up over  time and what sounded in sync for the first minute will be wildly  off-beat several minutes later.)  You are almost always going to have  better sync and performance with a standard microphone connected to the  same device you're using for playback, and for serious recording, this  is the best practice.  If USB microphones are your only option, then I  would recommend making certain you purchase a high-quality one and have  an equally high-quality playback device.  Attempt to match the buffer  sizes and sample rates as closely as possible, and consider using a  higher buffer size and correcting the latency post-recording.  (One  method of doing this is to have a click or clap at the beginning of your  session and make sure this is recorded by your USB microphone.  After  you finish your recording, you can visually line up the click in the  recorded track with the click in the original track by moving your clip  backwards in the timeline.  This is not the most efficient method, but  this alignment is the reason you see the clapboards in behind-the-scenes  filmmaking footage.)
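    To see how quickly Durin's example mismatch adds up, a quick calculation (using his 44,100 vs 44,118 Hz figures):

        # One device actually runs 18 Hz fast relative to the other.
        nominal, actual = 44100, 44118
        drift_sec_per_min = (actual - nominal) / nominal * 60
        print(f"{drift_sec_per_min * 1000:.1f} ms of drift per minute")  # ~24.5 ms
        # After five minutes the tracks are over 120 ms apart, which is
        # audibly off-beat.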
Other Hardware: Other hardware in your computer plays a role in the ability to feed or store audio data quickly. CPUs are so fast and, with multiple cores, so capable of spreading the load that the bottleneck for good performance - especially at high sample rates - tends to be your hard drive or storage media. It is highly recommended that you configure your temporary files location, and session/recording location, to a physical drive that is NOT the same one your operating system is installed on. Audition and other DAWs have absolutely no control over what Windows or OS X may decide to do at any given time, and if your antivirus software or system file indexer decides it's time to start churning away at your hard drive at the same time you're recording your magnum opus, you raise the likelihood of losing some of that performance. (In fact, it's a good idea to disable all non-essential applications and internet connections while recording to reduce the likelihood of external interference.) If you're going to be recording multiple tracks at once, it's a good idea to purchase the fastest hard drive your budget allows. Most cheap drives spin at 5400 rpm, which is fine for general use but does not allow for the fast read, write, and seek operations the drive needs to perform when recording and playing back multiple files simultaneously. 7200 RPM drives perform much better, and even faster options are available. While fragmentation is less of a problem on OS X systems, you'll want to defragment your drive frequently on Windows - this process realigns all the blocks of your files so they're grouped together. As you write and delete files, pieces of each tend to get placed in the first location that has room, which ends up creating lots of gaps or splitting files up all over the disk. Reading or writing these spread-out areas takes significantly longer than it needs to and can contribute to glitches in playback or loss of data when recording.

    There is one point in the above that needed a little clarification, relating to USB mics:
    _durin_ wrote:
     If  USB microphones are your only option, then I would recommend making  certain you purchase a high-quality one and have an equally high-quality  playback device.
If you are going to spend that much, then you'd be better off putting a little more money into an external device with a proper mic pre, and a little less money by not bothering with a USB mic at all, and just getting a 'normal' condenser mic. It's true to say that over the years, the USB mic class of recording device has caused more trouble than any other, regardless.
    You  should also be aware that if you find a USB mic offering ASIO support,  then unless it's got a headphone socket on it as well then you aren't  going to be able to monitor what you record if you use it in its native  ASIO mode. This is because your computer can only cope with one ASIO device in the system - that's all the spec allows. What you can do with most ASIO hardware though is share multiple streams (if the  device has multiple inputs and outputs) between different software.
    Seriously, USB mics are more trouble than they're worth.

  • DBMS_LOB.WRITEAPPEND Max buffer size exceeded

    Hello,
    I'm following this guide to create an index using Oracle Text:
    http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/cdatadic.htm#i1006810
    So I wrote something like this:
    CREATE OR REPLACE PROCEDURE CREATE_INDEX(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
    IS
    BEGIN
         DBMS_LOB.CREATETEMPORARY(tlob, TRUE);
         FOR c1 IN (SELECT ID_DOCUMENT FROM DOCUMENT WHERE rowid = rid)
         LOOP
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT_TITLE>'), '<DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c1.TITLE, ' ')), NVL(c1.TITLE, ' '));
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT_TITLE>'), '</DOCUMENT_TITLE>');
              DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
              FOR c2 IN (SELECT TITRE,TEXTE FROM PAGE WHERE ID_DOCUMENT = c1.ID_DOCUMENT)
              LOOP
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE>'), '<PAGE>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
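                    -- NOTE (added): WRITEAPPEND's buffer parameter is a VARCHAR2,
                    -- capped at 32,767 bytes, so the TEXTE write below raises
                    -- INVALID_ARGVAL for longer page texts; writing TEXTE in
                    -- <= 32k chunks (or DBMS_LOB.APPEND for a CLOB column)
                    -- avoids the cap.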
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c2.TEXTE, ' ')), NVL(c2.TEXTE, ' '));
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>');
                   DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE>'), '</PAGE>');
              END LOOP;
         END LOOP;
    END;
The issue is that some page texts are bigger than 32767 bytes! So I get an INVALID_ARGVAL...
I can't figure out how to increase this buffer size or how else to manage this issue.
Can you please help me :)
    Thank you,
    Ben

Hi Ben,
I'm afraid that doesn't help much, since you have obviously rewritten your procedure based on the advice given here.
Could you please post your new procedure, as formatted SQL*Plus, embedded in {noformat}{noformat} tags, like this:
SQL> CREATE OR REPLACE PROCEDURE create_index(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
    2 IS
    3 BEGIN
    4 dbms_lob.createtemporary(tlob, TRUE);
    5
    6 FOR c1 IN (SELECT id_document
    7 FROM document
    8 WHERE ROWID = rid)
    9 LOOP
    10 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
    11 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT_TITLE>')
    12 ,'<DOCUMENT_TITLE>');
    13 dbms_lob.writeappend(tlob, LENGTH(nvl(c1.title, ' '))
    14 ,nvl(c1.title, ' '));
    15 dbms_lob.writeappend(tlob
    16 ,LENGTH('</DOCUMENT_TITLE>')
    17 ,'</DOCUMENT_TITLE>');
    18 dbms_lob.writeappend(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
    19
    20 FOR c2 IN (SELECT titre, texte
    21 FROM page
    22 WHERE id_document = c1.id_document)
    23 LOOP
    24 dbms_lob.writeappend(tlob, LENGTH('<PAGE>'), '<PAGE>');
    25 dbms_lob.writeappend(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
    26 dbms_lob.writeappend(tlob
    27 ,LENGTH(nvl(c2.texte, ' '))
    28 ,nvl(c2.texte, ' '));
    29 dbms_lob.writeappend(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>')
    30 dbms_lob.writeappend(tlob, LENGTH('</PAGE>'), '</PAGE>');
    31 END LOOP;
    32 END LOOP;
    33 END;
    34 /
Warning: Procedure created with compilation errors.
    SQL>
    SQL> DECLARE
    2 rid ROWID;
    3 tlob CLOB;
    4 BEGIN
    5 rid := 'AAAy1wAAbAAANwsABZ';
    6 tlob := NULL;
    7 create_index(rid => rid, tlob => tlob);
    8 dbms_output.put_line('TLOB = ' || tlob); -- Not sure, you can do this?
    9 END;
    10 /
    create_index(rid => rid, tlob => tlob);
ERROR at line 7:
    ORA-06550: line 7, column 4:
    PLS-00905: object BRUGER.CREATE_INDEX is invalid
    ORA-06550: line 7, column 4:
    PL/SQL: Statement ignored
    SQL>

  • Getting recv buffer size error even after tuning

    I am on AIX 5.3, IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3...), Coherence 3.1.1/341
    I've set the following parameters as root:
    no -o sb_max=4194304
    no -o udp_recvspace=4194304
    no -o udp_sendspace=65536
    I still get the following error:
    UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 44 packets (65536 bytes)....
    The following commands/responses confirm that the settings are in place:
    $ no -o sb_max
    sb_max = 4194304
    $ no -o udp_recvspace
    udp_recvspace = 4194304
    $ no -o udp_sendspace
    udp_sendspace = 65536
    Why am I still getting the error? Do I need to bounce the machine or is there a different tunable I need to touch?
    Thanks
    Ghanshyam

Can you try running the attached utility and send us the output. It will simply try to allocate a variety of socket buffer sizes and report which succeed and which fail. Based on the Coherence log message, I expect this program will also fail to allocate a buffer larger than 65536, but it will allow you to verify the issue externally from Coherence.
There was an issue with IBM's 1.4 AIX JVM which would not allow allocation of buffers larger than 1MB. This program should allow you to identify whether 1.5 has a similar issue. If so, you may wish to contact IBM support about obtaining a patch.
    thanks,
Mark
Attachment: so.java (*To use this attachment you will need to rename 399.bin to so.java after the download is complete.)
Attachment: so.class (*To use this attachment you will need to rename 400.bin to so.class after the download is complete.)
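    The gist of that utility, sketched in Python rather than the attached so.java (same idea: request a series of buffer sizes and read back what the OS grants):

        import socket

        # Request successively larger UDP receive buffers and report what
        # the OS actually grants; a grant stuck at 65536 reproduces the
        # symptom in the Coherence log above.
        for size in (65536, 262144, 1048576, 2096304, 4194304):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
            actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
            print(f"requested {size:>8} bytes, got {actual:>8}")
            s.close()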

  • Doing Buffered Event count by using Count Buffered Edges.vi, what is the max buffer size allowed?

    I'm currently using Count Buffered Edges.vi to do Buffered Event count with the following settings,
    Source : Internal timebase, 100Khz, 10usec for each count
    gate : use the function generator to send in a 50Hz signal(for testing purpose only). Period of 0.02sec
The max internal buffer size that I can allocate is only about 100~300. Whenever I change both the internal buffer size and counts to read to a higher value, this VI doesn't seem to function well. I need a buffer size of at least 2000.
1. Is it possible to have a buffer size of 2000? What is the problem causing the wrong counter value?
2. Also note that the size of the max internal buffer varies with the frequency of the signal sent to the gate. Why is this so? e.g. the buffer size gets smaller as the frequency decreases.
3. I get funny responses and counter values when the internal buffer size and counts to read are not set to the same value. Why is this so? Is it a must to set both values the same?
Thanks and best regards
    lyn

    Hi,
    I have tried the same example, and used a 100Hz signal on the gate. I increased the buffer size to 2000 and I did not get any errors. The buffer size does not get smaller when increasing the frequency of the gate signal; simply, the number of counts gets smaller when the gate frequency becomes larger. The buffer size must be able to contain the number of counts you want to read, otherwise, the VI might not function correctly.
    Regards,
    RamziH.
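    The relationship RamziH describes falls straight out of the timebase arithmetic (a quick sketch using the thread's own figures):

        # Counts accumulated per gate period = timebase frequency / gate frequency
        timebase_hz = 100_000          # 100 kHz internal timebase, 10 us per count
        for gate_hz in (50, 100, 1000):
            print(f"{gate_hz:4d} Hz gate -> {timebase_hz // gate_hz} counts per period")
        # 50 Hz -> 2000, 100 Hz -> 1000, 1000 Hz -> 100: a faster gate means
        # fewer counts per buffer entry, not a smaller buffer.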

  • About Labview tool kit for Ti's DSP (RTDX buffer size)

Now I am using the LabVIEW toolkit for TI's DSP to do RTDX communication between LabVIEW and TI's 6713 DSK. I am encountering a problem: my human interface (LabVIEW) works fine for the first 10 minutes, but after that the data looks frozen. The RTDX becomes very slow, and it's hard to send data to or receive data from the DSP through LabVIEW. I am sending one 16-bit integer and one floating-point value to the DSP using the RTDX Write VI at a frequency of 33.3 Hz, and at the same time I am transferring five 16-bit integers from the DSP to LabVIEW at 25 Hz. What's wrong with this? Is it because the buffer for RTDX has been used up? And how do I set up the RTDX buffer in LabVIEW? I checked the RTDX VIs and can't find a definition of the buffer. This is very important in my project. Please tell me how to set up the buffer size and the RTDX work state (continuous or discontinuous mode). Thank you very much; I look forward to hearing from you.

Now I am using WinXP, 512MB RAM, LabVIEW version 7.1. My LabVIEW code and the DSP main function are attached below. Please help me out. I want to transfer 8 16-bit integers. Is it better to group these into an array and transfer the array over a single channel, compared with using 8 independent channels?
    Attachments:
    HMI_pannel.zip ‏85 KB

  • Onboard buffer size

    Hi, can anyone tell me the difference between the "buffer size" and "onboard buffer size" under the DAQmx buffer>>output property?
    Thanks,
    David
    www.controlsoftwaresolutions.com
    Solved!
    Go to Solution.

OnbrdBufSize - this is the fast memory every DAQ card has to store the acquired data; it's like the RAM in a PC. For example, here you find that the NI PCI-6120 has 128MB of this memory, which equals some number of samples the buffer can store (this number is what you will read). That also explains why you can read, but not set, this property.
Output.BufSize - Before you start data acquisition, a software buffer is defined based on the task you perform. I think it just allocates part of the fast memory of the DAQ card. The reason the buffer is set (either by the user or automatically) is to have a defined memory space for each task. Imagine you have several tasks using the card and each of them needs some of the card's memory (if I am wrong, somebody correct me).
    If your acquisition is finite (sample mode on the Timing function/VI set to Finite Samples), NI-DAQmx allocates a buffer equal in size to the value of the samples per channel attribute/property. For example, if you specify samples per channel of 1,000 samples and your application uses two channels, the buffer size would be 2,000 samples. Thus, the buffer is exactly big enough to hold all the samples you want to acquire.
    If the acquisition is continuous (sample mode on the Timing function/VI set to Continuous Samples), NI-DAQmx allocates a buffer equal in size to the value of the samples per channel attribute/property, unless that value is less than the value listed in the following table. If the value of the samples per channel attribute/property is less than the value in the table, NI-DAQmx uses the value in the table.
Sample Rate              Buffer Size
No rate specified        10 kS
0–100 S/s                1 kS
100–10,000 S/s           10 kS
10,000–1,000,000 S/s     100 kS
>1,000,000 S/s           1 MS
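    That rule is easy to restate in code (plain Python mirroring the documented defaults above; not an NI-DAQmx API call):

        # NI-DAQmx default buffer sizing: finite tasks get exactly
        # samples-per-channel; continuous tasks get at least a rate-based minimum.
        def default_buffer_size(samples_per_channel, rate_hz=None, continuous=False):
            if not continuous:
                return samples_per_channel
            if rate_hz is None:
                minimum = 10_000        # no rate specified -> 10 kS
            elif rate_hz <= 100:
                minimum = 1_000
            elif rate_hz <= 10_000:
                minimum = 10_000
            elif rate_hz <= 1_000_000:
                minimum = 100_000
            else:
                minimum = 1_000_000
            return max(samples_per_channel, minimum)

        print(default_buffer_size(1000))                                  # 1000
        print(default_buffer_size(1000, rate_hz=50_000, continuous=True)) # 100000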
    LV 2011, Win7

I'm having severe latency issues in Logic 9 through the built-in input (despite having no plugins active and a 32-sample buffer size selected)

I'm literally running a guitar straight into my computer (no plugins, no interface, just the "built-in input" on my MacBook) and there's a LOT of latency. I'm very experienced with Logic and pro audio (I have Logic on my other computer and it works fine), but for some reason there's a ton of delay on my signal. NO idea where to start with this one. I've examined every option within Logic and nothing seems to be helping. I've changed every parameter, including buffer size, disabling and re-enabling Core Audio, and low-latency mode, but nothing has worked.
    Any advice would be appreciated, thanks.
    Also, I'm using a late 2011 15" Macbook (Quad core, 4 GB ram)

    Never heard of this before... what are you using for output in Logic's Audio Prefs?
Are you on 10.7.5 as your signature says?  There is a supplemental update for the early builds of 10.7.5 that were released.
Take a look at the Audio portion of the "Audio-MIDI Setup" in Utilities.
    Bit-depth, sample frequency?  Try it at 16-bit 44.1kHz.  Is Logic set to the same?
What kind of cable are you using to get the guitar into Logic? The Mac audio input jack is multifunction; could you be partially enabling the digital I/O?
    just tossing some ideas..

Sudden latency! No change of buffer size or hardware. What gives?

    I'm trying to record a song and there's latency. I was recording two nights ago and everything was fine. The only thing that's happened between now and then is I unplugged my gear and replugged it today. The buffer size is set to 256 (and always has been as far as I know). Is it possible that the latency is due to a faulty hardware connection? I don't understand how it could be software related. Also, if the solution is to change my buffer size (which I tried and it does reduce the latency at 32) does that affect the sound quality? I find this all quite mysterious... and infuriating.

Reducing latency does not affect sound quality, but it does affect computer performance.
A buffer of 256 induces quite a lot of latency; on my system (RME Babyface), 14.8 ms round trip.
And I find that quite useless to record with. So I go for a buffer of 64/32, or simply mute the channel I'm recording to
and monitor through my interface... But you didn't explain whether your problem is MIDI- or audio-related.
With the first version of LPX (10.0) I experienced some strange behaviour, too. It seems to have gone now.
