iPhone low-latency full-duplex audio?

Hi, I'm trying to port a small multitracking program from Windows Mobile to iPhone and am having a hard time figuring out which audio API to use. AudioQueue seems like it could do the job, but I've read someone state that they couldn't get it to do input and output at the same time. I've also read that the RemoteIO audio unit is the thing to use for low-latency audio, but I can't find any documentation that shows how to use it. I've tried to familiarize myself with audio unit graphs, but the information for iPhone seems incomplete. Specifically, I can't figure out how to feed audio into RemoteIO. Mac OS X has the AudioFile unit, but the iPhone doesn't.
Can someone verify which method is best suited for this? And if it is indeed the RemoteIO audio unit, any tips on how to use it would be great.
Thanks.

Hi,
Did you make any progress on this post? I am interested in the exact same problem, and had the exact same issue.
- how to do full duplex audio?
- where is the documentation for audio units?
If you find out, please let me know.
BTW, which API did you use on Windows Mobile?
Kevin
at
checkeffect.net

Similar Messages

  • When I press 'Low Latency' button I only have audio coming through left channel ?!

    When you take it off 'Low Latency' mode the audio is fine and comes out Right and Left speaker.
    Can anyone please advise why this is happening and how I can fix it?

    I think Pancenter is trying to get you to read and understand the "Working in Low Latency" section of the manual as the results or problems with working in Low Latency mode are variable.
    This section, for instance:
    Note: The sound may change in Low Latency mode. Depending on the plug-ins in use, the changes can be anything from subtle to dramatic. If plug-ins being used do not exceed the total latency limit, there will be no audible difference.
    If it were me posting about this issue, I would now come back with exactly what is inserted/bussed on the tracks, because it could be a cumulative problem based on which plug-ins you have active and whether they are on audio tracks (stereo or mono), instrument tracks, etc. Lots of variables. The section of the manual quoted above won't give specific instances, as any user could be using any combination of any number of plug-ins.
    So more information is needed before anyone can help you any further.
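    As a rough illustration of the rule quoted from the manual: plug-ins whose accumulated delay pushes a signal path past the latency limit get bypassed, which is why the sound can change. This is a hypothetical sketch, not Logic's actual algorithm; the delay figures are made up, and real Logic decides per path, not simply in insert order.

```python
def bypassed_plugins(plugin_delays_ms, limit_ms):
    """Return the delays of plug-ins that would be bypassed once the
    running total of delay on a signal path exceeds the latency limit.
    (Simplified: walks the inserts in order.)"""
    total = 0.0
    bypassed = []
    for delay in plugin_delays_ms:
        total += delay
        if total > limit_ms:
            bypassed.append(delay)
    return bypassed

# A path with one light plug-in, a heavy linear-phase EQ, and another
# light plug-in, against a 5 ms limit (all numbers hypothetical):
print(bypassed_plugins([1.5, 42.0, 2.0], limit_ms=5.0))  # -> [42.0, 2.0]
```

    With those made-up numbers, everything after the heavy plug-in gets bypassed, so a path that stays under the limit sounds identical and a path that exceeds it can change dramatically - which matches the manual's "subtle to dramatic" wording.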

  • Yet Another Latency Issue Involving Audio through a 1620 into Logic 9

    Hi. Let me apologize in advance for my ignorance, and let me acknowledge that there are lots of tips on latency all over this and other sites.
    I haven't found the answer despite some extensive looking.
    I'm running Logic Studio / Logic 9 on a 32-bit-only 2008 iMac.  I am using a Mackie Onyx 1620 [not 1620i] with FireWire interface as my audio interface.
    I am tracking audio into a project with both MIDI tracks and Audio tracks.  I have established an audio loop as an augmented click track.
    I have set latency to a low setting / buffer size in general preferences, and I have engaged "low latency" mode during audio recording.
    I have no problem monitoring through my Mackie Onyx 1620 [not 1620i], with no audible latency or delay problem at all whilst tracking.
    When I then play back the audio tracking, it is about 240 ticks off -- a dramatic latency problem.
    The problem exists even where I have no other tracks beyond my audio loop, and where there are no effects on the audio loop/click track.
    In other DAW setups, I've experienced latency in hearing my backing tracks and having an audible delay from latency in monitoring my input track.  This is effectively the reverse.
    Any ideas?  And again, I am woefully under-educated in the area.  With your kind help, I intend to remedy that . . . .
    Thanks very much.
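    For scale, here is what 240 ticks works out to in time. This assumes Logic's usual 960 ticks per quarter note and a 120 BPM project - both assumptions, since the post doesn't state the tempo or ruler resolution:

```python
def ticks_to_ms(ticks, bpm, ppq=960):
    """Convert Logic-style ticks to milliseconds.
    ppq = ticks per quarter note (Logic displays 960 by default)."""
    ms_per_quarter = 60000.0 / bpm  # one quarter note in ms
    return ticks * ms_per_quarter / ppq

print(ticks_to_ms(240, bpm=120))  # -> 125.0
```

    Under those assumptions, 240 ticks is a sixteenth note, or 125 ms - far too large to be I/O buffer latency, which points at a timestamping/record-offset problem rather than monitoring delay.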

    Pancenter, thanks for responding and trying to help.
    if so, you have a misunderstanding regarding latency.
    I can promise you that you're right!
    My understanding of latency in a nutshell: it is the small but perceptible delay [from "withholding" of signal] caused as a signal travels from point to point -- both (1) physical point-to-point (for example, from Mac processor to FW jack out on Mac, through FW cable to FW jack in on interface, through cable out to monitor speakers or headphones); and
    (2) processing point to point (from recorded audio track through multiple plug-ins on channels strip to bus or master out, through plug-ins on each of those).
    I already know that I likely am wrong about this, or am at least incomplete.
    Based on that understanding, the problem I'm experiencing defies latency as I understand it, and defies logic (no pun intended; I just mean it makes no sense, as I'll explain below).
    What effects do you have on the master bus?
    None.  I had none in the original project I described in my first query, and have none on a simplified experiment project I've conducted since (after reading your response).
    If you are monitoring through the Mackie and NOT using Logic's software monitoring there is no need to set the Preferences/Audio/ I/O Buffer to a low value, set it to 256.  You also don't need to use Low Latency mode if you're monitoring through the Mackie.
    Here, I think I understand, but am not sure. 
    I am definitely monitoring through the Mackie in an absolute sense -- the returning signals from my DAW are coming in to the Mackie and (a) going out the 1620's headphone jack; (b) going out the 1620's master out to a Mackie Big Knob; and (c) going out the Big Knob's headphone jack, or to powered Mackie 824s.
    [No, I'm not a Mackie devotee; I just happened across the gear at great prices, and thought I understood it.]
    I don't think, though, that I'm "monitoring through the Mackie" as you intend it.  For my simple experiment, I established an audio drum track as a click track.  Each of the transients falls right on the metronome click/ beat location, so I know that it's "good."  I then plugged a Precision bass into the Mackie (channel one), with an audio track in Logic with "Input 1" as the input source. There are no plugins on the Precision track's channel strip.
    Because the Mackie FW interface  on the 1620 sends post-gain but pre-fader, I was able to turn down all faders on the Mackie except the master out.  When muting the Precision channel strip in Logic, I hear NO Precision.  *Nothing from the Precision gets to the monitors if the faders are down and Logic is not sending signal.*  If I then "de-mute" the Precision in Logic, I hear signal through the master line in the Mackie 1620. 
    In this setup, I assume I am not "monitoring through the Mackie," but I'm not sure.  I think that I'm "monitoring through Logic," and that's why I've engaged low-latency and set the preferences to minimize latency.
    we need to establish your setup, how you're monitoring, and what effects are on the master bus. (if any)
    In my simple experiment, with Logic set up to minimize latency, and with no plugins on the Logic master, I engaged "record" in Logic (only on the Precision track with no plugins), and I then played simple quarter notes on the Precision in time with the drumbeats in my Logic "clicktrack."  By playing simply like that, I confirmed that I am "in time" as far as my ears can perceive.  I then hit "Stop" in Logic.
    Upon playback, my Precision is perceptibly (about 240 ticks) ahead of the click track!
    In my understanding of latency, at the worst the Precision should have been late because the Clicktrack would have been traveling through Logic's processing and down the lines to my monitors.
    Again, thanks a ton for trying to help me out here.  I hope that this description makes it easier.

  • [iPhone] Full duplex AudioQueue - is it supported?

    In other words - is it possible to play back some sound and record something at the same time? Something like a karaoke effect.
    I'm using modified SpeakHere sample code, but the playback stops when starting a record session. Is this feature supported at the hardware level?

    thanks for reply!
    Yep - the actual iPhone device. I've found an interesting URL, where some guys have finally managed to get full duplex:
    http://touchmods.blog.com/
    (search for "there is also a simultaneous playback-record function available")
    but no sources or comments are present. What kind of magic did they use? Surfing the related pages there, I found out they apply various hacks along the way (like using a reverse-engineered Celestial.h, etc.). Is there a non-hack way to reach the goal, I wonder.

  • Low-latency audio interface, FireWire or USB? And which one would you recommend?


    Around 600, only need 2 XLR inputs.

  • Low-latency audio on Windows Phone 8 (hint: forget it for now...)

    My business specializes in audio and music apps for the Windows ecosystem. For this new project that I’m considering (a virtual instrument of sorts), I need to achieve the lowest possible audio latency from capture to render.
    A measured latency below 3 ms would be ideal, 20 ms would be ok, and anything above 30 ms would be a deal breaker. And just to be precise about my definition of latency, I’m strictly talking about “mic-to-speaker” latency (includes the processing in-between).
    I’m not talking about “glass-to-speaker” latency, although this topic is also of interest to me.
    I figured that I should be using WASAPI, as this is apparently the API at the bottom of the audio stack in WinRT / WinPRT. So I spent a couple of days re-familiarizing myself with C++ (happy to meet an old friend)
    and I coded a prototype capturing, processing and rendering audio using WASAPI having in mind the goal of achieving the lowest possible latency. After trying and comparing various methods, workflows, threading mechanisms, buffer values and mix formats, the
    best I could achieve using WASAPI on Windows Phone was ~140 ms of latency. This is using 3 ms buffers. And although the audio is glitch-free, I could have been using 30 ms buffers and it wouldn’t have made the slightest difference! The latency would still
    be around 140 ms. As I found out, the API/driver is extremely defensive in protecting against audio starvation, so it adds quite a bit of buffering at both ends. This is very unfortunate, because it basically disqualifies real-time audio/musical
    applications.
    I’d love to be able to provide quality audio/musical apps for the platform (both Windows Phone and Windows 8), but right now, this latency issue is kind of a deal breaker.
    I've been pointing out the importance of low latency audio to Microsoft for quite a while, and I know I'm not the only one, and I know a lot of people at Microsoft realize this is important. But in its execution,
    it seems Microsoft constantly fails to deliver a truly low-latency audio stack. In the pre-XP days, I've had talks with the sysaudio devs about this, and I was told, "yeah, we're working on a new architecture that will come out after XP and
    it will solve the latency problem for all audio and musical applications." Fast forward to mid-2010 (pre-Windows Phone 7), and I was still there pointing out the horrible latency figures one would get from the APIs that were about to ship. And
    now that WASAPI is available on WP8 (our best hope yet for low-latency audio), I discover the overly defensive and buffer-happy architecture of WASAPI (even though one of its promises was precisely low-latency audio).
    So, the question is…
    Is Microsoft aware of this issue? If so, is Microsoft giving up and simply conceding the pro-audio territory to iOS? If not, I’d be glad to discuss this issue with an engineer at Microsoft. I’m serious about bringing audio apps
    to the platform. I just need some assurance that Microsoft is taking action on its end, so that I can sync my development with the next product cycle.
    Thanks in advance for any help, advice, or insights!
    /Antoine
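    To put Antoine's numbers in perspective, here is a back-of-the-envelope sketch. An application double-buffering with 3 ms periods should itself account for only one period per direction; whatever remains of the measured figure is buffering added by the OS/driver stack. The two-stage assumption and the arithmetic are mine, not anything from the WASAPI documentation:

```python
def implied_extra_buffering_ms(measured_ms, period_ms, app_stages=2):
    """Subtract the latency the app's own buffers account for
    (assumed: one period on capture, one on render) and return
    what the OS/driver stack must be adding on top."""
    return measured_ms - app_stages * period_ms

extra = implied_extra_buffering_ms(140.0, 3.0)
print(extra)              # ms added by the stack: 134.0
print(extra / 3.0)        # roughly 45 periods of hidden buffering
```

    Under those assumptions, ~134 ms (about 45 periods) of the 140 ms is hidden buffering, which also explains why growing the app buffer from 3 ms to 30 ms made no measurable difference.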

    Any update on this issue? Is there a roadmap for a fix?
    I ported a realtime sound analyzing application (requiring both audio input and output) to WinRT and was quite surprised at the high latency from WASAPI. I then searched around and posted some questions regarding this on various MS forums and got to the
    following conclusions based on feedback from people including MVPs and MS employees:
    A lot of those people are under the impression that WASAPI qualifies as a low-latency API.
    Apparently WinRT apps are not supposed to have high CPU usage, and this is purposefully baked into the framework (minimal thread priority control, async/await everywhere which results in thread priority inversion, and other issues).
    Why would anyone ever need high-priority threads? They must use a lot of CPU, right?
    A lot of people think low-latency audio means high CPU usage. You can see where this is going when you look at the previous point.
    Async/await is being forced in at all levels, even though it should only be used at the UI level. What some people are now calling "old-school" multithreading is being pushed out (lock, etc.). Async/await has horrible overhead and results in thread priority inversion, among other issues. For example, witness that there is no StorageFile.Read, just StorageFile.ReadAsync. Do some IO benchmarks with some of these async methods and you will see horrible performance compared to desktop file IO.
    To get an understanding of what low-latency audio means and why it is important, see this video. It compares the latency of Android to iOS using the exact same music app. Ever wondered why there are no quality music apps for Android? Well, now you know. And then realize that WinRT has twice as much latency as Android.
    And if anyone thinks this is a niche use case, consider that Apple created ads showcasing "musicians" playing their iPad "instruments". A use case that is essentially unavailable for WinRT apps. Why would Apple create ads for a niche use case?
    This should have been one of the high-priority issues solved from the start in WinRT. MS solved this issue in Windows (desktop) long ago, with the ability to get insanely low latency there (0.3 ms in some stress tests, see here), even beating out OS X. It is as if there is a new generation of architects at MS who know nothing about this previous work and are doomed to make the same mistakes over again. I really don't understand why these pre-existing APIs can't be exposed in WinRT. No need to re-invent the wheel. But I guess it just isn't important enough.
    I find this situation really sad since otherwise it could be a great and powerful platform.

  • Full duplex Bluetooth with iPhone?

    Both my wife and I have a 3G iPhone and our Jabra BT530 headsets seem to not work as full duplex. Is this an issue with the iPhone? The Jabra BT530 claims to work in full-duplex mode. When I test the system and talk while the other person talks, they can't hear me; it totally cuts out, they say. Tried this on two iPhones.

    Well, glad to see i am not the only one.
    I posted about this issue back in October, and the forum admin removed my post!
    I have experimented with this issue many times, and it's always the same result.
    If you use your iPhone and a BT headset, and the other person is talking, you can't talk over them - because it's only half duplex.
    Worse is if you use your iPhone with a BT headset and call someone who is driving in their car. The noise from their car occupies the channel, and the only way they can hear you is if you yell into the headset!
    I use my phone for several hours of talk time every day, and am therefore dependent upon a BT headset (otherwise my arm would fall off).
    Because the iPhone is essentially unusable with a headset, I cannot use it as a phone!!!
    I was praying that Apple would fix the issue with V3.0, but SOL!
    I am very surprised that more people do not complain about this. Can we be the only ones to notice the problem? Surely not!

  • HT204368 I recently upgraded from an iPhone 4s to an iPhone 5. I have a Motorola HX550 Bluetooth Headset which worked flawlessly with the iPhone 4s. However, since upgrading to the iPhone 5, I frequently lose audio when in the middle of a call.

    I recently upgraded from an iPhone 4s to an iPhone 5.  I have a Motorola HX550 Bluetooth Headset which worked flawlessly with the iPhone 4s.  However, since upgrading to the iPhone 5, I frequently lose audio when in the middle of a call.  I am not certain whether it is a full disconnect or just an audio loss.  The person on the other end reports still being able to hear me, however, I suppose they could hear me speaking over another source even if bluetooth disconnected. The loss of audio is always preceded by a quick shrill noise, so I can easily tell when it happens.  All I have to do to fix the problem is to select Audio Source on the call screen, temporarily switch audio back to the iPhone, and then switch back to the headset, and then it starts working again.
    It is important to note that I am not the only one experiencing this issue:
    This is a link to someone using the Motorola H19txt headset experiencing the same issue
    https://forums.motorola.com/posts/94ac15ad7c
    If you go to the Plantronics website and look at the "Ratings and Reviews" on the Voyager Legend you will see the same issue reported:
    http://www.plantronics.com/us/product/voyager-legend (the next paragraph is taken from the review, since there is no direct link to it.)
    Really nice headset unless you own an iPhone 5. I own several Plantronics headsets and have always loved the quality/performance. This headset seems not to be compatible with the new iPhone 5. In the middle of a conversation it will lose all incoming audio. This leaves you trying to switch audio sources really fast in order not to lose your phone call. I've tried 3 Legend headsets and they all do the same thing. Hopefully this gets fixed soon, because the headset is really nice.
    Since this issue appears to affect multiple headsets from multiple vendors, I think it is fair to conclude that it is an issue with the iPhone 5's Bluetooth itself.
    I am posting this as a new discussion as I don't see any other posts specific to this iPhone 5 Bluetooth issue.  I would have reported this as a problem to Apple, but I don't seem to be able to find a way to do this on the web.  Hopefully they are already aware, but if not, maybe they will see it here.

    Here is an interesting thing: take the iPhone and set the lock screen to never. Now compose an email with Siri - be sure to activate her with the call button. Get to the body and get some text in there. Then just stop talking. LOCK SCREEN APPEARS!!!!!! And of course your draft is gone.
    There does seem to be a workaround for some - maybe all - of these issues. Don't use the call button - use the buttons on the phone.
    Siri seems to behave properly with the scenario above - sans call button. She does not go to lock.

  • Full Duplex Lost with Wave Files on SB Live! Value (WDM)

    Following a clean-install upgrade from WinME to WinXP SP2 on a Dell Dimension 400 with an OEM-installed SB Live! Value (WDM) PCI card onboard, full-duplex wave audio (recording one wave file while playing back one or more other wave files) ups and disappears - ME had it, XP doesn't. The record side of software mixers I use on the XP OS (Quartz AudioMaster, NCH Swift Sound MixPad/WavePad) fails to acquire or record the wave signal(s) being played back by the mixer side of the application. Stand-alone recorders (Roxio Easy Audio Capture, HotKey Sound Recorder) also fail to acquire or record the wave signal(s) being played back by the aforementioned mixers. But if I switch the record input to microphone during the record pass (on those apps that permit such midstream switching), the mic records without incident.
    Creative's Auto Update shows all drivers for the sound card are up to date. Applying Creative's Knowledge Base Solution ID 645 passes all DXDIAG tests with no problems (text file results available if anyone cares to see them), but the manual test using twin Sound Recorders fails, as the second (record-designated) copy of Sound Recorder fails to acquire or record a signal from the first (playback-designated) copy (no error codes display at any time). Clean-booting WinXP and running the same record passes doesn't rectify the problem.
    Anyone know what's going on here? And how to fix it? I'm stumped.
    TIA . . .

    For benefit of anyone who hits this thread in a search:
    FWIW - Following 5 weeks of troubleshooting this problem with Microsoft Tech Support (including a parallel install of the XP SP2 OS), we reached the following conclusion: since the OEM sound card (Creative Model CT4780) was known to operate in full-duplex mode without incident under my system's prior ME OS, but the same card fails to consistently operate in full-duplex mode while running under a virgin parallel install of the XP OS, the source of my full-duplex problem must lie in the files and drivers resident on the XP OS CD-ROM and dropped onto my system by the OS loader/installer. There really isn't any other plausible explanation. Absent its drivers, a sound card's native hardware state is either going to be full-duplex or it isn't - it would never be full-duplex sometimes and half-duplex (or simplex) the rest.
    The problem was resolved when a retail, nearly identical, sound card (Creative Model SB040) and the drivers and files from its CD-ROM were installed on the same system.
    Perhaps, somewhere, drivers exist that will permit the CT4780 to operate consistently in full-duplex mode on an XP SP2 OS - but neither Microsoft nor I had any success in locating them.

  • Latency, Recording incoming Audio into Logic,... I don't understand

    I'm trying to clear up my understanding (or misunderstanding) of latency of incoming audio into Logic.
    As a test, here's what I did.
    Set-Up
    1. In Preferences/Audio/General, I set Plug-In delay Compensation to off.
    2. In Preferences/Audio/Core I set the I/O Buffer Size to 32 (is it necessary to go that low?)
    3. In Preferences/Audio/General I turn on the Low Latency Mode On, set Limit to 0ms (Zero Milliseconds)
    4. Created a 1/4 note click track with Klopfgeist.
    5. Just for reference, I converted to audio. Opened in the sample editor and it is indeed, exactly on the beat.
    6. Set up my Roland Fantom as a Midi instrument. Channel 10 to trigger a Side Stick sound.
    7. Create another 1/4 note click track.
    8. Recorded the Rolands Output into my MOTU 2408 interface into Logic.
    9. Opened the waveform in the sample editor, zoomed in, and noticed it is obviously late.
    10. Moved the anchor point 6 ticks to line it up.
    Therefore, the incoming audio is off by 6 ticks.
    Is it not possible to get 0 latency? I read the chapter in the manual, but I believe this wasn't made clear; I could be wrong.
    I have a guitarist coming over to record on my record in a week. I don't know how to deal with this except to move his entire track by 6 ticks?

    That is not really the type of latency that's affected by your audio I/O buffer setting. A DAW system has different types of latency: MIDI latency, latency caused by some plugins, AD/DA converter latency, etc. The six ticks will probably remain the same, or close to it, whether your buffer is set to 64, 128, 256, etc.
    Why do you have "Low Latency" mode selected - do you know what its function is?
    The I/O buffer setting affects two main aspects.
    The response of the system when you play a virtual instrument, and...
    ...real-time monitoring of an audio input track (this is an audio signal that passes through Logic's audio engine and can use live effects).
    The example using the Fantom requires MIDI sent to the keyboard and the audio into the MOTU passes through the AD converter.
    What you want to set is the Record Delay so that your audio records in time, however... it needs to be set using a loopback test.
    Use a 1/4-note recorded audio click (side stick) that you've adjusted to be perfectly on the beat, and copy it a few times. Use a cable from the MOTU's output to an open channel's input, record the click, compare... now adjust the "Recording Delay" until the recorded click matches the first click.
    Recording delay is on the Preferences/Audio tab.
    pancenter-
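    The loopback measurement pancenter describes can also be automated instead of eyeballed. A minimal sketch, assuming you've already captured the reference click and the re-recorded click as plain sample arrays (how you get them out of your DAW is up to you): find the lag that best aligns the two via a brute-force cross-correlation peak search.

```python
def estimate_offset_samples(reference, recorded):
    """Return how many samples late `recorded` is relative to
    `reference`, via a brute-force cross-correlation peak search.
    The Recording Delay would then be set to cancel this offset."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(reference) + 1):
        window = recorded[lag:lag + len(reference)]
        score = sum(r * x for r, x in zip(reference, window))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy example: an impulse "click" recorded 3 samples late.
ref = [0.0, 1.0, 0.0, 0.0]
rec = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
print(estimate_offset_samples(ref, rec))  # -> 3
```

    At 44.1 kHz the result divides by 44.1 to give milliseconds; real recordings would need longer windows and normalization, but the idea is the same as the by-ear test above.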

  • Why do I have to check low latency every time I want to record 1 track

    iMac, 3.06 GHz Intel Core 2 Duo, 4GB RAM, Logic 9.1.1, Mac OS 10.6.4, Apogee Duet. There's no need for this in my main studio: Mac Pro, MOTU 896. Even with 8 tracks recording.

    OK, here ya go. 2 different versions of Logic and 2 different versions of SL. Every time you close and reopen a project, you have to check low latency again. I guess I just never noticed it in the main studio because I never needed it: using a Mac Pro, MOTU 896, buffers set @ 512 (medium), with multiple FireWire devices, multiple MIDI devices, recording 8 audio tracks at the same time - never a problem. On the iMac with the Apogee Duet, 1 track with no plugins, buffers @ 128, I need it. Go figure. Thanks for your input everyone. Bill

  • Low-latency multi-I/O - Is It Essential?

    Hi,
    On Apple's Logic Express 8 Technical Specifications page, it says "Low-latency multi-I/O audio hardware and MIDI interface recommended".
    My question is: is low-latency multi-I/O audio hardware and a MIDI interface essential to use Logic Express properly? Or is it only necessary if you're doing live recording of instruments? If you're just using Logic's internal virtual synths, do you still need one of these interfaces? I was under the impression Logic Express worked just fine on a MacBook Pro with no extra hardware required.

    I don't know if Apple specifically recommends any I/O device, but I use products by Echo Audio. You can look them up at http://www.echoaudio.com. I personally have the Audiofire12, but if you want a smaller I/O box they have a few different models: Audiofire2, Audiofire4, and Audiofire 8. They are great products; I have been using them for years.
    -Tyler

  • ASIO Low Latency Driver sound output is bad?

    My notebook comes with "built-in" ASIO drivers called "Generic Low Latency ASIO Driver". Further, there are the ASIO DirectX Full Duplex Driver and the ASIO Multimedia Driver.
    Sound card is: IDT High Definition Audio CODEC.
    Running Win7 x64 (HP ProBook 6550b).
    All drivers are the most recent ones.
    Problem: the 'Generic Low Latency ASIO Driver' sound output is much worse than using the other ones (the sound is really great in both of them, but they have far too much latency, so I cannot use them in Cubase or any other music DAW).
    So I'd like to use the (standard built-in) Generic Low Latency ASIO Driver. But the sound is much worse in contrast to the other drivers! It sounds a bit like a phaser or gramophone.
    Just to clarify: there are no glitches or stuttering in any case. Just the sound "quality" is that messed up...
    I'm getting really upset about this. I tried the recent version of the ASIO4All 2.10 driver, but the problem remains! The sound with ASIO4All is exactly as bad as with the built-in ASIO Low Latency Driver.
    How on earth can I achieve low latency with good sound in Cubase? Has anyone experienced things like this before, or might there be a fix for this?

    Well, it's just the standard driver that comes with my notebook, an HP ProBook 6550b.
    http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=32...
    I did not take control over the built-in audio driver installation, because Win7 installs these drivers automatically.
    The problem does not only occur in Cubase, but in any music tool/program/player where I set the driver to low latency (which obviously does not make sense outside of Cubase, but through testing I found out that it is not an application-related problem).
    It is really strange. The sound card seems to provide real ASIO, but the low-latency output sounds as bad as the "pretending" ASIO4All. Hm...

  • Can't send to bus when in low latency mode?

    My colleague and I have almost the same setup, except he is running OS X 10.8.5, and I'm running OS X 10.9.4, Both are running Logic X 10.0.7. He has Apogee hardware (Symphony 64 card to DA16X/AD16X), and I have a UA Apollo.
    For some reason, this works on my setup, but not on his:
    Set up a bus send on any audio or VI channel, and assign the output of that channel to outputs 3-4 (outputs 1-2 are both of our main outs). The purpose of this is to send to a cue/headphone box.
    Set latency compensation to Instruments and Tracks (not ALL, setting it to ALL defeats this)
    Turn on low-latency mode.
    Turn on Software Monitoring (buffer is set to 64)
    Record enable that track.
    On my setup, I get signal on channels 3-4, on his, no signal as soon as record is enabled
    Auto-input monitoring is off in both cases.
    On my setup, the bus popup turns orange, but I still hear the signal, on his, it turns orange, but no signal.
    All plugins removed on the track and the bus.
    Weird.

    Bus Send is set the same on both systems, Post Pan,  Post Fader,   Pre Fader ?
    Good point, I'll check that
    Bus is set to output on 3-4 ?
    Yes, on both systems
    Bus level is the same ?
    Yes, similar anyway
    Sure your friend doesn't have any plugins on the master output bus?
    I didn't consider that - somehow I didn't think that plugins on the master bus (which is outputting on 1-2) could affect a different bus. Will check that tomorrow.
    Do you by any chance have direct monitoring enabled on the Apollo, any input signal will be heard.
    I do use direct monitoring on the Apollo, but that is not a factor - it's prerecorded material we are trying to monitor, not input signal.
    Curious as to why you are running in Low Latency mode; it is for a specific purpose, not something to be used in general recording/work.
    He is running on an older Mac tower and uses a lot of plugins and prefers software monitoring, and finds that he has problems unless he enables low latency mode. I usually leave it off, I just wanted to see if the issue was present on my system as well, and it's not.
    Thanks, will report back.

  • Full Duplex Collisions???

    OK, I'm really confused. I thought you couldn't have collisions with a full-duplex port, but my switches are showing collisions even though they are set to full. Also, I was told that some applications will not run right if the network cards are set to full duplex. Has anyone ever run into this?

    William
    Part of the definition of full duplex is that the collision detection mechanism is disabled on a full duplex port/interface. So I believe it is impossible to have collisions on a port that is configured as full duplex - other than the possible result of some bug in the code.
    Is it possible that the interface counters recorded collisions at some point in the past when the port was half duplex?
    I have not run into any situation where an application would not run on a port set to full duplex. And I wonder how the application would know. One of the advantages of the layered model is that the upper layers are insulated from dependencies on what happens at lower layers. How would an application know whether the port was half duplex or full duplex?
    HTH
    Rick
