I/O Buffer Size Causes Bounce Delay

Hi All,
Came across a really frustrating problem here. I've got a pretty sizable project I'm trying to finish up. I'm about to send it to someone else to mix, so I'm bouncing out a few of the tracks that contain instruments that the mixer doesn't have. Unfortunately, everything I bounce is coming out delayed, depending on the I/O buffer size. Just to verify this, I bounced out a clip with the buffer set to 128 and one with the buffer at 1024, both with the click left on. Sure enough, the 1024 is pretty significantly delayed from the click, whereas the 128 is pretty much right on it.
I'm using some inserts across the project, but really nothing significant. All instruments, vocals, etc. are being bussed to aux tracks for grouping and then all of the group tracks are being sent to a master bus track which is going out the stereo out. I don't have any processors on the master bus or stereo out, and PDC is set to "all".
Anyone else experiencing this? More importantly, does anyone have a solution? I'd really rather not have to figure out the sample delay and shift everything.

I would think it has more to do with the Fast Track Pro than your computer or Logic. Does the FTP have drivers, or is it class compliant (plug it in and it just works)?
All things considered, that's a very inexpensive interface.
Please don't take this next bit personally... Unless you have a pristine listening environment and top-of-the-line equipment, recording at 96kHz is not really a wise decision. Recording at 24-bit will have a much greater impact than the sample rate.
Here's what happens at 96kHz.
The files recorded are twice the size of files recorded at 48kHz.
A 96kHz / 24-bit stereo file is approx 35 MB a minute (96,000 samples/s × 3 bytes × 2 channels × 60 s ≈ 34.6 MB). Add to that effects running at 96kHz, and it is fairly easy to overload an iMac or laptop-class machine. A Mac Pro will fare a little better, depending on the audio hardware.
Effects like Space Designer and some of the top-line AU instruments s-u-c-k CPU cycles as well.
What I think you're seeing is an overload of the bus system. Why it happens at larger buffer sizes is anyone's guess; most would say it's the iffy M-Audio drivers.
pancenter-
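
For a rough sense of scale, the offset you'd expect from the I/O buffer alone is just buffer size divided by sample rate. A quick back-of-the-envelope sketch (plain Java; the 96kHz figure is assumed from the discussion above, not measured on your rig):

// Hypothetical helper: theoretical I/O buffer delay in milliseconds.
public class BufferDelay {
    static double delayMs(int bufferSamples, double sampleRateHz) {
        return 1000.0 * bufferSamples / sampleRateHz;
    }

    public static void main(String[] args) {
        // At 96 kHz: 128 samples ~ 1.3 ms, 1024 samples ~ 10.7 ms,
        // which matches "pretty much right on it" vs. noticeably late.
        System.out.printf("128 samples  @ 96 kHz: %.1f ms%n", delayMs(128, 96000));
        System.out.printf("1024 samples @ 96 kHz: %.1f ms%n", delayMs(1024, 96000));
    }
}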

Similar Messages

  • Buffer size and recording delay

    Hi
    I use a Focusrite Saffire LE audio interface, and in my Core Audio window the buffer size is currently set at 256 and there is a recording delay of 28 ms. I don't think I've ever altered these settings since I started to use Logic. I don't know if this is related, but I do notice a slight delay when I monitor what I'm recording -- between my voice through the headphones (as I'm recording it) and my "direct" voice, for example.
    Anyway, are these settings OK, or should they be altered for optimum performance?

    A 256-sample buffer size will always give you a noticeable amount of latency. If you use software monitoring, you should try setting your buffer to 64 samples. With the recording delay slider in Preferences > Audio you can compensate for the latency (though not in real time) so that the audio will be placed exactly where it should have been recorded; in your case, set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint, or whose URL points to a different endpoint than the one trying to connect.
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptom of the problem was that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank and running in real-time interactive or as a startup VI made no difference. However, the writer endpoint would return without error and eventually catch a remote-endpoint-destroyed error. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting, we found that the buffer size of the create-writer-endpoint call for the data stream was causing the problem: we had fat-fingered the constants for this buffer size. Pre-allocating or allocating the buffer on the fly made no difference. We imagine it may be because we are using a complex data type (a cluster with an array inside it), for which it can be difficult to allocate a buffer. Our guess is that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • Getting recv buffer size error even after tuning

    I am on AIX 5.3, IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3...), Coherence 3.1.1/341
    I've set the following parameters as root:
    no -o sb_max=4194304
    no -o udp_recvspace=4194304
    no -o udp_sendspace=65536
    I still get the following error:
    UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 44 packets (65536 bytes)....
    The following commands/responses confirm that the settings are in place:
    $ no -o sb_max
    sb_max = 4194304
    $ no -o udp_recvspace
    udp_recvspace = 4194304
    $ no -o udp_sendspace
    udp_sendspace = 65536
    Why am I still getting the error? Do I need to bounce the machine or is there a different tunable I need to touch?
    Thanks
    Ghanshyam

    Can you try running the attached utility and send us the output? It will simply try to allocate a variety of socket buffer sizes and report which succeed and which fail. Based on the Coherence log message I expect this program will also fail to allocate a buffer larger than 65536 bytes, but it will let you verify the issue externally from Coherence.
    There was an issue with IBM's 1.4 AIX JVM which would not allow allocation of buffers larger than 1MB. This program should allow you to identify whether 1.5 has a similar issue. If so, you may wish to contact IBM support about obtaining a patch.
    thanks,
    Mark
    Attachment: so.java (to use this attachment you will need to rename 399.bin to so.java after the download is complete)
    Attachment: so.class (to use this attachment you will need to rename 400.bin to so.class after the download is complete)
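
    For reference, a minimal sketch of the kind of probe Mark describes (an illustration only, not the attached so.java; it uses the plain java.net API):

    import java.net.DatagramSocket;
    import java.net.SocketException;

    // Request a range of UDP receive-buffer sizes and print what the OS
    // actually grants; a hard cap (e.g. at 65536) points at an OS or JVM
    // limit rather than a Coherence problem.
    public class SocketBufferProbe {
        public static void main(String[] args) throws SocketException {
            int[] requested = {65536, 262144, 1048576, 2097152, 4194304};
            DatagramSocket socket = new DatagramSocket();
            for (int size : requested) {
                socket.setReceiveBufferSize(size);
                System.out.println("requested " + size + " bytes -> granted "
                        + socket.getReceiveBufferSize() + " bytes");
            }
            socket.close();
        }
    }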

  • I/O Buffer Size

    Hello
    I'm using Melodyne from Celemony as a plugin. It requires a change of I/O buffer size. Does that affect other settings such as recording delay or MIDI delay?
    THX, Jay.

    Janoth wrote:
    It requires a change of I/O buffer size.
    Really? I don't think so. There are several ways to use the Melodyne plugin (editor). If you use it in Logic, just hit 'transfer' BEFORE hitting play and you will avoid trouble. Hitting play first and then 'transfer' could lead to trouble.

  • How does buffer size affect double buffered waveform generation?

    I had originally posted the following question:
    "Why does the double buffered waveform generation pause after the first buffer before continuing?"
    "I am using an AT-AO-10 board to generate a multiple channel waveform in double buffered mode. The board's DAC's are updated by an external clock signal. While the waveform generation performs well, I notice that after the first buffer has been generated there is a time delay before the next buffer is output. However the second buffer and thereafter perform well without any time delays. If anyone can provide me an explanation on why this happens I would appreciate it. I am using NI-DAQ API functions to generate the waveforms and my settings for the WFM_DB_Config function are 1 for oldDataStop to disallow regeneration of data and 0 for partialTransferStop to not stop when a half buffer is partially transferred."
    -posted by Vadi on 6/7/2001
    I received a response from Geneva as follows:
    Geneva L. on 6/11/2001 says:
    "Vadi,
    The first thing is to make sure that you have the latest version of NI-DAQ installed, NI-DAQ 6.9.1. If you need to install it, make sure you completely uninstall any prior versions. Then, you will have examples installed in either the NI-DAQ or the CVI directory. In the AO directory, you should find the WFMdoubleBuf example.
    Start with that to make sure the output appears as you expect. Then, you can modify it to apply your external update clock, following the idea presented in the WFMsingleBufExtUpdate example. You might even want to double-check that your external clock acts as you expect using an oscilloscope.
    Finally, modify the example such that you can update on multiple channels, remembering that you interleave each channels buffer into one buffer for WFM_DB_Transfer. Whatever data is in the buffer will be updated on the output channels.
    Regards,
    Geneva L.
    Applications Engineer
    National Instruments"
    I have checked my version of NI-DAQ and it is 6.9.1. I am generating the double buffered waveform according to the format shown in WFMdoubleBuf, with some modifications from WFMsingleBufExtUpdate to allow me to use my external update clock. However, I continue to notice the same phenomenon. For a buffer size of 7500 or 10000 points, there is a noticeable time delay after the first buffer has been output before the second and subsequent buffers are output; the lag affects only the first buffer. When I decrease the buffer size to 5000 points, the time lag disappears. (Note: this also occurs when I use an internal time base as opposed to my external clock.) Is there a reason for this? I am using an AT-AO-10 board and I know the on-board FIFO is 1024 points deep. However, the documentation provided doesn't indicate that double buffered mode uses the on-board FIFO at all; in fact, the functions require that the FIFO mode be turned off (in WFM_Load) for double buffered waveform generation. Is there a reason why, when the buffer size is increased, there is a time lag after the first buffer? Is this because of the functions themselves or because of the AT-AO-10 board?

    Vadi,
    Make sure that your buffer size is set to the same number of points that you're actually writing to the buffer initially. For instance, if you run the example as-is, the NIDAQMakeBuffer puts exactly the ulCount amount of data into the buffer. Then, it continuously writes out half buffers. Thus, if you are not writing enough data to fill up the buffer the first time, there will be that lag until the section where half buffers are output.
    Regards,
    Geneva L.
    Applications Engineer
    National Instruments
    http://www.ni.com/ask
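
    To make Geneva's point concrete, here is a toy model (plain Java, not the NI-DAQ API; it assumes the half-buffer drain behavior she describes):

    // Toy model: if the initial write does not fill the whole buffer,
    // the second half-buffer is incomplete when its turn comes, and the
    // output stalls until the host supplies the missing points.
    public class FirstBufferLag {
        public static void main(String[] args) {
            int bufferSize = 10000;          // total buffer, in points
            int halfBuffer = bufferSize / 2; // drained half at a time
            int initialWrite = 7500;         // points preloaded before start

            int queuedForSecondHalf = initialWrite - halfBuffer;
            if (queuedForSecondHalf < halfBuffer) {
                System.out.println("Second half-buffer short by "
                        + (halfBuffer - queuedForSecondHalf)
                        + " points; expect an initial lag.");
            } else {
                System.out.println("Buffer fully primed; no initial lag.");
            }
        }
    }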

  • Doing Buffered Event count by using Count Buffered Edges.vi, what is the max buffer size allowed?

    I'm currently using Count Buffered Edges.vi to do buffered event counting with the following settings:
    Source: internal timebase, 100 kHz, 10 µs per count
    Gate: use the function generator to send in a 50 Hz signal (for testing purposes only), period 0.02 s
    The max internal buffer size that I can allocate is only about 100~300. Whenever I change both the internal buffer size and counts to read to a higher value, this VI doesn't seem to function well. I need a buffer size of at least 2000.
    1. Is it possible to have a buffer size of 2000? What is the problem causing the wrong counter value?
    2. Also note that the size of the max internal buffer varies with the frequency of the signal sent to the gate. Why is this so? E.g., the buffer size gets smaller as the frequency decreases.
    3. I get odd responses and counter values when the internal buffer size and counts to read are not set to the same value. Why is this so? Must both be set to the same value?
    Thanks and best regards
    lyn

    Hi,
    I have tried the same example and used a 100 Hz signal on the gate. I increased the buffer size to 2000 and did not get any errors. The buffer size does not get smaller when increasing the frequency of the gate signal; rather, the number of counts gets smaller when the gate frequency becomes larger. The buffer size must be able to contain the number of counts you want to read; otherwise, the VI might not function correctly.
    Regards,
    RamziH.
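
    To put numbers on that relationship (simple arithmetic, assuming an ideal 100 kHz timebase):

    // Expected count value per gate period = timebase / gate frequency,
    // so a faster gate leaves fewer timebase edges in each period.
    public class CountsPerGate {
        public static void main(String[] args) {
            double timebaseHz = 100000.0;
            for (double gateHz : new double[]{50.0, 100.0, 200.0}) {
                System.out.printf("gate %.0f Hz -> %.0f counts per period%n",
                        gateHz, timebaseHz / gateHz);
            }
        }
    }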

  • Error -200609 occurred at DAQmx Write: Selected Buffer Size Too Small

    Hello, I'm writing some simple test VIs that I will eventually build upon to make an externally clocked analog output VI. I started with a very simple program to output finite samples using the onboard clock with the DAQmx Timing.VI. When I run the program, I almost immediately get an error. The error message is below.
    Error -200609 occurred at DAQmx Write (Analog DBL 1Chan 1Samp).vi:1
    Possible reason(s):
    Generation cannot be started, because the selected buffer size is too small.
    Increase the buffer size.
    Conflicting Property
    Property: Output.BufSize
    Corresponding Value: 1
    Minimum Supported Value: 2
    Task Name: _unnamedTask<1C>
    I have used DAQmx VIs before in similar applications and never encountered this error. Additionally, I read at the link below that DAQmx Timing.VI should be generating the buffer automatically. Any ideas as to what could be causing this?
    Specs:
    Windows 7
    LabVIEW 2012
    PCIe-6353 as DAQ board
    Below is a picture of my block diagram and the VI is attached.
    Attachments:
    FiniteSamplesTest.vi ‏18 KB

    Oops. Just realized my very silly mistake: I forgot to add the Start Task VI. I did so and it works as designed.

  • Data retrieval buffers - buffer size and sort buffer size

    Is there any difference in tuning data retrieval buffers for BSO vs. ASO?
    From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e., more physical memory will be used when many users access data concurrently.
    However, even for 100 concurrent users, the default buffer size of 10 KB (BSO) or 20 KB (ASO) seems very small compared to other cache settings (total buffer cache is 100 × 20 KB = 2 MB). Should we increase the value to 1000 KB to improve data retrieval performance? Would the improvement be the same for an online application (e.g., Hyperion Planning) and a reporting application (e.g., Financial Reporting)?
    Assume 3 Essbase plan types with 100 concurrent users:
    PLAN1 - retrieval buffers: 1000 KB × 100 = 100 MB; sort buffers: 1000 KB × 100 = 100 MB
    PLAN2 - retrieval buffers: 1000 KB × 100 = 100 MB; sort buffers: 1000 KB × 100 = 100 MB
    PLAN3 - retrieval buffers: 1000 KB × 100 = 100 MB; sort buffers: 1000 KB × 100 = 100 MB
    Total physical memory required is 600 MB.
    Thanks in advance!

  • Best buffer size for BufferedInput/Output Streams?

    I'm developing a client-server application where one server serves an undetermined number of clients. The server spins off threads to service each client individually. The clients send data (serialized objects) to each other via BufferedInput/OutputStreams wrapped in ObjectInput/OutputStreams. The streams are created like this:
    toServer = new ObjectOutputStream(new BufferedOutputStream(theSocket.getOutputStream()));
    fromServer = new ObjectInputStream(new BufferedInputStream(theSocket.getInputStream()));
    I've never had more than a handful (5-6) users on the system at one time, and it works fine. However, I haven't the resources to test the system with a large number of clients, and I'm wondering if the default buffer size is enough to handle a potentially large amount of traffic to a client. Should I increase the buffer size just in case, or should I not worry? Does the buffer size even matter when using an ObjectInput/OutputStream?
    I'd appreciate any good advice.

    There is an underlying buffer size of 1024 in ObjectIn/OutputStream anyway so the buffer should be at least this large. Generally 4k is adequate unless you are doing very high data volumes in which case 16k, 32k, 63k may help. If you are on Windows you should also up the socket send/receive buffers which by default are 8k: change this to at least 28k. The precise value depends on the network link & in theory should be >= the bandwidth-delay product.
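
    As a sketch of that advice (buffer sizes are illustrative, not prescriptive; the host and port are placeholders):

    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.Socket;

    public class TunedStreams {
        public static void main(String[] args) throws IOException {
            Socket theSocket = new Socket("example.com", 9000); // placeholder

            // Raise the OS-level socket buffers (28k as suggested above).
            theSocket.setSendBufferSize(28 * 1024);
            theSocket.setReceiveBufferSize(28 * 1024);

            // Explicit 4k stream buffers; try 16k-32k for high volumes.
            ObjectOutputStream toServer = new ObjectOutputStream(
                    new BufferedOutputStream(theSocket.getOutputStream(), 4 * 1024));
            toServer.flush(); // push the serialization header right away
            ObjectInputStream fromServer = new ObjectInputStream(
                    new BufferedInputStream(theSocket.getInputStream(), 4 * 1024));
        }
    }

    The flush() after constructing the ObjectOutputStream matters because the peer's ObjectInputStream constructor blocks until it reads the stream header.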

  • PLEASE can an AE from NI take a look at my problem: Sound Input Read behaves strangely when the buffer size is larger than 2x the number of samples to read

    On my computer I have discovered some strange behavior when reading data from the sound card. When the buffer size is 2x the samples to read, everything is as expected. But since I read the sound card 10 times per second, I feel a 0.2-second buffer is too small. I am using XP, and XP is not an RTOS, so with a buffer set to 0.2 seconds I may lose data. Therefore I set the buffer size (number of samples/ch on Sound Input Configure.vi) to be in the range of 2 seconds. The result is that a read from Sound Input.vi then often takes more than 0.1 second -- on my computer it is often 500 ms -- and the next 5 reads follow with almost zero interval. I do not lose data, but on my front panel the graphs look like a very early silent movie. This error was introduced in LabVIEW 8.x. To be honest, I think the LabVIEW 7.x sound system was much better in many ways.
    But before I point any finger at NI, other people should verify the behavior I am seeing. I have made an example showing this error. It is a modified version of the "Continuous Sound Input.vi" example. When the "buffer in seconds" control is set to 0.2, everything works OK. Changing it to a larger number will produce the hiccup mentioned above; the larger the number in this control, the larger the hiccup. Is there any way to fix this? My workaround up to now has been free third-party software (http://www.zeitnitz.de/Christian/index.php?sel=waveio), but I guess it will soon be outdated and may not work with newer Windows versions.
    Any help at all will be appreciated 
    And yes, I have the most up-to-date version of DirectX. I also see this in LabVIEW 2009, which I have a trial version of. The VI I have made is in 8.6.
    Attachments:
    Continuous Sound Input with timing.vi ‏23 KB

    macaba wrote:
    If you take a moving average of the 0.2 s buffer vs. the 3 s buffer at an update rate of 10, then they are the same (just under 100 ms), so the average refresh rate is the same. I agree that it is odd behaviour that the time between sound reads goes to zero quite a lot and then takes a long time once in a while (presumably to fill the buffer).
    I guess it goes to zero because, when it is reading data from the buffer, it does not have to wait for data from the sound card. The mysterious thing is the periodic delay. You are also correct in saying that the average timing is correct, and in my application I have no data loss.
    If you search for sound in this forum you will find that many people have reported trouble with the sound system.
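
    For comparison, the same decoupling of device buffer size from read size in Java's javax.sound.sampled API (an illustration only, not the LabVIEW sound VIs):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;

    // Open a ~2 s capture buffer but read it in ~0.1 s chunks; read()
    // returns as soon as the requested bytes are available, so a large
    // safety buffer need not make individual reads bursty.
    public class BigBufferSmallReads {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            TargetDataLine line = AudioSystem.getTargetDataLine(format);

            int bytesPerSecond = 44100 * 2 * 2;     // rate * channels * 2 bytes
            line.open(format, 2 * bytesPerSecond);  // ~2 s internal buffer
            line.start();

            byte[] chunk = new byte[bytesPerSecond / 10]; // ~0.1 s per read
            for (int i = 0; i < 50; i++) {
                int n = line.read(chunk, 0, chunk.length);
                // ... process n bytes here ...
            }
            line.close();
        }
    }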

  • Network Stream Max Buffer Size

    Hello,
    I recall an AE on here once mentioning that network streams can exhibit problems if you set a buffer size greater than 9 MB, but I haven't been able to find any concrete explanation of this. The reason I ask is that I'm currently running into some memory problems in my application, in which I initialize my buffer to 30 MB.
    Basically, my application starts out running fine, but after 30-45 minutes, all of a sudden, my network stream buffer starts building up until it reaches a max value, even though I am still reading data out of it at seemingly the same rate.  It's as if routine memory allocation/deallocation elsewhere in my application causes issues with the network stream.
    Anyone have any insight to this?
    thanks in advance,

    The size of the buffer you set for your network streams just determines how much memory is set aside within your application. If you make your stream buffer too large, LabVIEW will throw an error such as 'Memory is Full', telling you there is not enough space to create that large a buffer.
    The important benchmarking data to look at concerning network streams is throughput and latency. The following KB has data in section 6 regarding how quickly data can be passed through a network stream:
    http://www.ni.com/white-paper/12267/en/#toc4
    Since you are able to function properly for the first 30-45 minutes, it does not seem that you are violating network capabilities with your setup. However, there seems to be something that is building up/slowing down after that period of time.
    How did you determine that you need a 30 MB buffer size? If you are only ever storing between 0 and 8000 items in the buffer, it may make sense to make the buffer smaller than 30000000 to free up resources for other parts of your application.
    Here is some more information about how to profile performance and memory within LabVIEW: http://zone.ni.com/reference/en-XX/help/371361J-01/lvdialog/profile/
    Applications Engineer
    National Instruments

  • Buffer size and samples per channel

    I have a question regarding the allocation of the output buffer. I have
    a digital I/O card PCI-6534 and I use the driver NI-DAQmx 7.4.0f0. I
    would like to generate a digital output using different clock rates.
    For example, I need to write 500 samples at 1000 samples per second and
    other 500 samples at a rate of 10000 samples per second. The simplest
    solution is to prepare two different waveforms, write the first one on
    the buffer and generate the output. Then I can load the second waveform
    on the buffer and generate the second output. In order to minimize the
    delay between the two output sequences, I would like instead to write
    the buffer only once. I tried to set the buffer size to 1000 and the
    number of samples per channel to 500, but it doesn't work. Is there
    a way to set the buffer size and the number of samples to be
    generated independently?
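
    One common workaround (a sketch under assumptions, not from this thread): run the board at the faster clock and duplicate each slow-rate sample, so one buffer holds both segments back-to-back:

    // Hypothetical: 500 samples at an effective 1 kHz followed by
    // 500 samples at 10 kHz, all clocked at a constant 10 kHz.
    public class MultiRateBuffer {
        public static void main(String[] args) {
            int fastRate = 10000, slowRate = 1000;
            int repeat = fastRate / slowRate; // hold each slow sample 10 ticks

            byte[] slow = new byte[500]; // fill with the 1 kHz pattern
            byte[] fast = new byte[500]; // fill with the 10 kHz pattern

            byte[] buffer = new byte[slow.length * repeat + fast.length];
            int k = 0;
            for (byte s : slow) {
                for (int r = 0; r < repeat; r++) {
                    buffer[k++] = s;
                }
            }
            System.arraycopy(fast, 0, buffer, k, fast.length);
            // 5500 samples at 10 kHz: 0.5 s of slow data + 0.05 s of fast.
        }
    }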

    I can post the whole thing, but I'll talk a little about it first. It's a producer/consumer loop, with a continuous analog-input-from-external-clock program in the producer loop. Right now the consumer loop just has a simple Write to Spreadsheet VI in it, but eventually I want to average each revolution of pressure traces (well, every two revolutions, since it's a four-stroke, but that's neither here nor there) and spit out a single curve.
    The wiring is simple. I have a voltage supply feeding the encoder and the quadrature A input on PFI 8 of the 6212 DAQ. I also have the Z index plugged in but nothing else to it. The analog input is a BNC right to AI 0. I can make a diagram if you want one. I've scoped the rotary encoder output and it looks great, very square with small relative rise times.
    Attachments:
    Yanmar Program V2.vi ‏46 KB

  • I/O buffer size switch by itself

    Hi, whenever I'm playing my guitar in Logic Pro 9, the latency suddenly changes and I start to hear a delay in the sound.
    Then I have to change the I/O buffer size (e.g., from 128 to 256) to hear the sound in real time again.
    It doesn't let me record, as the latency changes while I'm playing.
    Why is Logic doing this, and how can I fix it?
    It started doing this after I updated to Mavericks; before that I had no problems at all with this.
    Thanks
    Using Logic Pro 9
    9.1.8
    Macbook Pro
    OS X 10.9

    You will find several threads on the M-Audio website dedicated to Mavericks issues with M-Audio Fast Tracks...
    Here is the major one, I believe:
    http://community.m-audio.com/m-audio/topics/os_x_mavericks_fast_track_pro_support
    It does not make for good reading, I am afraid to say... and although there are a variety of different issues discussed there, it basically amounts to this:
    The interface doesn't have 10.9-compatible drivers, and it may be several weeks until M-Audio releases updated ones....
    Sorry!

  • I/O Buffer Size Requires Constant Changes

    Every 20 seconds or so, I have to change the I/O Buffer Size to stop a crackling noise and latency. When I change it, everything sounds clear for 20 seconds, and then it goes back to sounding terrible and out of sync. Any help?

    It sounds like you have an issue that isn't actually directly related to the buffer size. When you change the buffer size, you are reinitializing Core Audio. Something else is happening that causes your sync issues after about 20 seconds, and reinitializing clears it.
    What sort of setup do you have? Do you have an aggregate device? A clock? What sorts of instruments? etc.
