"fine/sample delay", mixing down live drums, esp for JustinC!

I'm curious about the "fine/sample delay" and "phase inversion" techniques briefly mentioned by JustinC in an older post regarding recording live drums. I understand using phase inversion on snare mics that are pointing up/down, but I'm not 100% sure about the fine delay adjustment. Is this actually shifting the audio to line up waveforms from mics that are different distances from the same source? I've done this with an upright bass before (F-hole mic and another a few feet away) and it seemed to get a much fatter, clearer sound. I really appreciate any tips you might have on how to pinpoint and solve the problems.

hey over-man,
I was prowling tonight and ran across your post. I don't know your reference specifically but I understand your question.
Sample delay regarding a multimic'ed source:
This is based on phase cancellation and reinforcement. It arises from everything in a multi-mic'd environment: the timing, the distances, microphone choice, effects choice; it's everywhere.
The best approach for me (typically) is to get the most appropriate sound as early on as possible, unless you can anticipate variables ahead of time and factor them in right away. We'll work with an example of simply getting a natural drum sound that will line up with existing tracks (bass, gtr, vox...).
1) You can set up the sound/mics as close as possible to fit the sound you're after (we'll breeze over these points not specific to your question).
2) Before destroying the sound with a handful of plugins adjust delays. Some assumptions can be made:
A: Overheads/room mics will typically be the reference, because they are what you have least control over; they represent more sources (the whole kit, in this case).
B: These mics will often have the largest delay from the start, because they are farthest from the source.
C: These mics should be adjusted less than others; I would EQ them quite minimally. Solve any problems as early in the signal chain as possible and use only what you know you'll need. Don't apply FX you don't need, and position mics to better suit your desired sound.
So your OHs/rooms are as good as they can sound naturally and sit with the track as well as possible; you can now start adjusting delays/introducing other elements.
The order can vary here but I tend to start with the tracks/mics which will be more prominent. I will use snare in this example.
Some useful tools before we resume: Logic's Sample Delay, and Expert Sleepers' Latency Fixer (because Logic's Sample Delay will not predelay). Found here:
http://www.collective.co.uk/expertsleepers/latencyfixer.html
I would normally just load both of these up. I use full PDC and apply these AUs to each track. Remember to turn off Software Monitoring!
Create an instance of Latency Fixer, set it to 2048 samples. You have now added 2048 samples of predelay to your track.
Create an instance of Sample Delay, set it to 2048 samples. You have now adjusted your track to play back at the original record position.
Copy both instances to each adjacent track.
So we now have Latency Fixer followed by Sample Delay on each track for the kit (not necessary for the OHs unless you want to shift everything a little, but I will often do this from the arrange by ms).
At this point you will not need to adjust Latency Fixer's settings; simply use Sample Delay to offset the track in either direction.
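As a sketch of the arithmetic behind this pairing (assuming, as described above, that the host's delay compensation pulls the track earlier by the latency Latency Fixer reports):

```python
# Sketch of the predelay + sample-delay pairing described above.
# Assumption: "Latency Fixer" reports 2048 samples of latency, so PDC
# shifts the track EARLIER by that amount; "Sample Delay" then delays it.
PREDELAY = 2048  # samples reported by Latency Fixer

def net_offset(sample_delay_setting):
    """Net playback offset in samples (negative = earlier than recorded)."""
    return sample_delay_setting - PREDELAY

print(net_offset(2048))   # 0: plays at the original record position
print(net_offset(1748))   # -300: track pulled 300 samples earlier
print(net_offset(2348))   # +300: track pushed 300 samples later
```

With the fixed predelay in place, one adjustable plug-in gives you movement in both directions, which a delay-only plug-in cannot do on its own.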
Bring the snare track up to a reasonable level (the closer you are to the final level, the better). Adjust Sample Delay in either direction and you will notice the sound change from phase cancellation/reinforcement. Certain frequencies will stand out more at different settings. It normally won't take much; we don't want it to sound like a delay. Someplace within +/- 300 samples is often best for a natural sound. It's a good idea to have a phase inverter (the Gain plug-in) handy, for two reasons: first, it may simply sound better inverted. Second, you can invert the phase to pronounce the things you DON'T want to hear while setting the delay, then flip it back around the right way afterward. Volume has a huge bearing on the interaction here; a good delay setting may sound bad after reducing the level 2 dB, so the Sample Delay setting, volume, and phase should all be dialed in together, as early as possible.
By now you have noticed how adjusting the delay affects the sound; inverting the phase is an exaggerated example of this principle. Inverting the phase of one of two identical tracks cancels them entirely; applying fine delay to one will reintroduce some of the frequencies and leave some cancelled. You can hear it scan through those frequencies by adjusting the delay. With multiple sources, different mics, and different distances, these delays already exist. Delaying them so they match (based on distance) is not always the best sounding, but try to start someplace close. After setting the delay and volume as close as possible, you should be able to get away with much less surgical plug-in work and ultimately have more natural sounding tracks. This will often comprise 75% of the sound (if you want natural tracks).
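The cancellation/reinforcement behavior above can be demonstrated numerically. This is a hypothetical sketch using pure sine tones rather than real drum tracks: mixing a signal with a delayed copy of itself reinforces at whole-period offsets and cancels at half-period offsets.

```python
import numpy as np

fs = 48_000                      # sample rate
t = np.arange(fs) / fs           # one second of time points

def summed_level(freq_hz, delay_samples):
    """Peak level of a sine mixed with a delayed copy of itself."""
    a = np.sin(2 * np.pi * freq_hz * t)
    b = np.sin(2 * np.pi * freq_hz * (t - delay_samples / fs))
    return np.max(np.abs(a + b))

# A 1 kHz tone has a period of 48 samples at 48 kHz:
print(round(summed_level(1000, 0), 2))    # 2.0  full reinforcement
print(round(summed_level(1000, 24), 2))   # 0.0  half-period delay cancels
print(round(summed_level(1000, 48), 2))   # 2.0  full-period delay reinforces
```

Real drum mics carry many frequencies at once, so a single delay setting reinforces some of them and cancels others simultaneously, which is exactly the "scanning" effect you hear while adjusting.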
If there are dynamic effects you know you'll want to apply there are special considerations.
Many of Logic's "lookahead" parameters delay the signal.
Compression, gating, etc. (on a per-track basis) will alter the volume significantly enough that if you know ahead of time you want a track compressed, you should probably set those up as you intend to use them, then adjust the delay.
I tend to use predelayed tracks as keys for the audible tracks; this allows you to get around the aforementioned lookahead problems and will provide you with more control over the "shape" or envelope.
Concerning phase inversion based on which head you mic: It is a generalized solution to mic setup, though not a hard and fast rule which applies in every case.
Delaying in this manner is aligning waveforms. You are causing some frequencies to stand out more or less, since we are working with complex waveforms. Very small deviations from the original position will affect the higher frequencies most, and as you get further from the original position it begins to affect mids and eventually low frequencies; of course, this is because of the relationship between frequency and wavelength. Align and misalign the waveforms for your desired sound. It does require that you listen very closely when setting it up, but I really think the results are worth the effort. Hopefully you'll find this helpful; it's easier to hear the effect than to describe it, and hopefully it's apparent once you hear it.
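As a rough sketch of that frequency/wavelength relationship: the first frequency fully cancelled by an offset is the one whose half-wavelength equals the offset (real material shows a comb of further notches at odd multiples of it). Small offsets notch the highs first; larger ones reach down into the mids and lows.

```python
def first_notch_hz(delay_samples, fs=48_000):
    """Frequency whose half-period equals the delay -- the first
    frequency fully cancelled when two equal signals are offset."""
    return fs / (2 * delay_samples)

for d in (2, 10, 50, 300):
    print(d, round(first_notch_hz(d)))
# 2 -> 12000 Hz, 10 -> 2400 Hz, 50 -> 480 Hz, 300 -> 80 Hz
```

This is why staying within a few hundred samples mostly reshapes the top end and midrange, while larger offsets start to hollow out the low end.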
Finally, I don't have any Platinums. If it doesn't work better for you, don't believe it needs to. Hopefully others will chime in here so you can get some second opinions on the approach and/or general drum mixing techniques.
Cheers, J

Similar Messages

  • Can I mix down to 32 bit at a higher sample rate than 44.1 kHz ?

    When I use Ableton Live, it lets me choose 16, 24, or 32 bit, and then I can choose a sample rate all the way up to 192000.  Is this possible in Audition ?  I have been going through all the preferences and all the tabs and I can't find this option.  All I find is a convert option, or the adjust option.  But that's not what I want.  I want to mix down this way.
    The closest thing I found is when I go to "Export Audio Mix Down", I can select 32 bit.  Then there is a box for sample rate, with all the different values.  But it won't allow me to change it from 44100.

    JimMcMahon85 wrote:
    Can someone explain this process in layman's terms:
    http://www.izotope.com/products/audio/ozone/OzoneDitheringGuide.pdf ---> specifically Section: VIII "Don't believe the hype"
    I don't read graphs well. Can someone put in layman's terms how to do this test, step by step, and where do I get a pure sine wave to import into Audition in the first place?
    UNbelievable: so I have to first run visual tests using a sine wave to make sure dither is working properly, then do listening tests with different types of dither to hear which I like best on my source material, and then for different source material it's best to use different types of dither techniques???... Am I getting this right???...
    Hmm... you only need to run tests and do all that crap if you are completely paranoid. Visual tests prove nothing in terms of what you want to put on a CD - unless it's test tones, of course. For the vast majority of use, any form of dither at all is so much better than no dither that it simply doesn't matter. At the extreme risk of upsetting the vast majority of users, I'd say that dither is more critical if you are reproducing wide dynamic range acoustic material than anything produced synthetically in a studio - simply because the extremely compressed nature of most commercial music means that even the reverb tails drop off into noise before you get to the dither level. And that's one of the main points really - if the noise floor of your recording is at, say, -80dB then you simply won't be hearing the effects of dither, whatever form it takes - because that noise is doing the dithering for you. So you'd only ever hear the effect of LSB dither (what MBIT+, etc. does) when you do a fade to the 16-bit absolute zero at the end of your track.
    Second point: you cannot dither a 32-bit Floating Point recording, under any circumstances at all. You can only dither a recording if it's stored in an integer-based format - like the 16-bit files that go on a CD. Technically then, you can dither a 24-bit recording - although there wouldn't be any point, because the dither would be at a level which was impossible to reproduce on real-world electronics - which would promptly swamp the entire effect you were listening for with its own noise anyway. Bottom line - the only signals you need to dither are the 16-bit ones on the file you use for creating CD copies. And you should only dither once - hence the seemingly strange instructions in the Ozone guide about turning off the Audition dither when you save the converted copy. The basic idea is that you apply the dither to the 32-bit file during the truncation process - and that's dithered the file (albeit 'virtually') just the once. Now if you do the final file conversion in Audition, you need to make sure the dithering is turned off during the process, otherwise all the good work that Ozone did is undone. What you need to do is to transfer the Ozone dithering at the 16-bit level directly to a 16-bit file proper, without anything else interfering with it at all. So what you do with Ozone is to do the dithering to your master file, and save that as something else - don't leave the master file like that at all. After saving it, undo the changes to the original, in fact - otherwise it's effectively not a file you could use to generate a master with a greater-than-16-bit depth from any more, because it will all have been truncated. Small point, but easy to overlook!
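The truncation point above can be sketched numerically. This is a minimal, hypothetical illustration of TPDF dither at the 16-bit level, not Ozone's MBIT+ algorithm: a tone sitting below 1 LSB truncates to pure silence without dither, but survives (buried in noise) with it.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, lsb = 44_100, 1 / 2**15                      # 16-bit LSB, full scale = +/-1
t = np.arange(fs) / fs
sig = 0.4 * lsb * np.sin(2 * np.pi * 440 * t)    # a tone below 1 LSB

def to_16bit(x, dither=False):
    """Quantize to the 16-bit grid, optionally with TPDF dither."""
    if dither:  # TPDF: difference of two uniforms, spanning +/-1 LSB
        x = x + (rng.random(len(x)) - rng.random(len(x))) * lsb
    return np.round(x / lsb) * lsb

plain = to_16bit(sig)
dithered = to_16bit(sig, dither=True)
print(np.all(plain == 0))                        # True: tone truncated away
print(np.corrcoef(dithered, sig)[0, 1] > 0.1)    # True: tone preserved in noise
```

This is the effect the fade-out listening test below is probing for: without dither the tail collapses into correlated quantization artifacts; with dither it fades smoothly into noise.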
    just what's the easiest way to test if a simple dithering setting is working for 32-bit down to 16-bit in Audition?... Why is there no info about dithering from 32 bit to 16 bit (which is better than dithering from 24-bit, isn't it)?
    I hope that the answers to at least some of this are clearer now, but just to reiterate: The easiest way to test if it's working is to burn a CD with your material on it, and at the end of a track, turn the volume right up. If it fades away smoothly to absolute zero on a system with lower noise than the CD produces, then the dither has worked. If you hear a strange sort-of 'crunchy' noise at the final point, then it hasn't. There is info about the 32 to 16-bit dithering process in the Ozone manual, but you probably didn't understand it, and the reason that there's nothing worth talking about in the Audition manual is that it's pretty useless. Earlier versions of it were better, but Adobe didn't seem to like that too much, so it's been systematically denuded of useful information over the releases. Don't ask me why; I don't know what the official answer to the manual situation is at all, except that manuals are expensive to print, and also have to be compatible with the file format for the help files - which are essentially identical to it.
    Part of the answer will undoubtedly be that Audition is a 'professional' product, and that 'professionals' should know all this stuff already, therefore the manual only really has to be a list of available functions, and not how to use them. I don't like that approach very much - there's no baseline definition of what a 'professional' should know (or even how they should behave...), and it's an unrealistic view of the people that use Audition anyway. Many of them would regard themselves as professional journalists, or whatever, but they still have to use the software, despite knowing very little about it technically. For these people, and probably a lot of others, the manual sucks big time.
    It's all about educating people in the end - and as you are in the process of discovering, all education causes brain damage - otherwise it hasn't worked.

  • What is the easiest way to mix and match drum samples from Logic software instruments

    I have been using Logic software instruments (EXS24 drum kits) for a while. I know how to edit individual zones in a drum kit, but I would like to know how to copy and paste multiple zones all at once from one kit to another so that I can create a hybrid EXS24 drum kit with my favorite snares, kicks, toms, etc. Is there a good tutorial video showing how to do this?

    so that I can create a hybrid EXS24 drum kit with my favorite snares, kicks, toms, etc.
    In my opinion, the Environment "Mapped Instrument" object is designed exactly for what you want. You can load different Software (drum) Instrument kits into, say, EXS, Addictive Drums, UB, etc. Patch multiple cables to each Software Drum Instrument instance, open the Mapped Instrument, and assign the pads' "cable" outputs in the Cable column; this is one of the most elegant ways to do it in Logic...
    A.G
    www.audiogrocery.com
    Author of: Logic GUI Deluxe(Free),
    Vox De Bulgaria - s.a.g.e vocal pack for RMX,
    Logic Snapshot Console,
    RMX Power CTRL - Logic Environment Midi editor for Stylus etc.

  • Avoiding phase cancellation with sample delay plug in

    Hi
    I had some horns in a mix that sounded flat, so I used the Sample Delay plug-in to slightly delay the right side. Then they sounded great, really big, and sat in the mix, but when switching to mono they nearly disappear.
    Are there any tricks to avoid this phase cancellation when switching to mono?

    Data Stream Studio wrote:
    1.-I think if you reverse polarity on 1 of the channels of your mix instead of just reversing polarity of one side of the problematic horns, you may well solve one phase issue and create a whole lot of new ones.
    2.-This is a good idea, maybe there's a setting that sound as fat as the one you're using but will not phase cancel in mono.
    3.-I'm not sure what audio file you'd move... could you explain?
    If it's a stereo file, split it into two mono tracks, and move one of the two up/down the timeline by a few samples, until you get the least phase combing effect.
    Maybe the OP needs to go back and re-mix that song, and find the culprit horn out of the whole section... if he can...
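A hypothetical sketch of why the widening trick collapses in mono, using a pure tone as a stand-in for the horns (real, broadband material combs rather than vanishing entirely): the channel delay that never collides in stereo sums destructively the moment left and right are added together.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
horn = np.sin(2 * np.pi * 1000 * t)   # stand-in for the horn part

def mono_peak(delay_samples):
    """Peak of the mono sum when only the right channel is delayed."""
    left = horn
    right = np.sin(2 * np.pi * 1000 * (t - delay_samples / fs))
    return np.max(np.abs((left + right) / 2))

print(round(mono_peak(0), 2))    # 1.0  no delay: full level in mono
print(round(mono_peak(24), 2))   # 0.0  half-period delay: vanishes in mono
```

This is why a delay setting should always be auditioned with the mix summed to mono before committing to it.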

  • Error message when mixing down audio

    Hi gang
    I occasionally get an error message when mixing down audio on an FCP 5.1.4 project. The message says: "File error: the specified file is open and in use by this or another application."
    I've never seen this before. No other applications are open at the time this message comes up.
    thanks for any clarification.
    Craig

    Thanks for your reply. No . . . and if I try a bit later in the project, it seems to render the audio fine. Maybe I'll just trash my prefs and see what happens.
    Craig

  • How do I record live drums only while bassist plays along?

    If the bassist is going direct into the mixer or the interface, how can I record a live drum track while the bassist plays but is not recorded, yet is heard by himself and the drummer?
    Message was edited by: Geoff J

    Here's my equipment:
    A mixer the drummer borrowed
    Tascam USB-144
    The 144 only allows two (2) instruments/mics, etc . . .
    So we mic'd the individual instruments of the drumkit up, ran it through a mixer into the interface, then into GB
    The bassist had an amp w/a line out
    we could not find a way to get it to where
    1 .The bassist could hear himself & the drummer
    2. The drummer could hear himself & the bassist
    3. They both could be playing simultaneously, but only the drums get recorded
    With any configuration we tried, they both would wind up getting recorded on the single track in GB
    Please advise

  • Mixing down

    I would like to know what you guys believe is better to mix down on: my MOTU HD192 or Logic Pro.
    Also I have a huge MIDI issue. I have my Fantom X7 connected to my G5 via MIDI, and a MOTU HD192 that I'm using as my sound card interface. Using my Roland I can play and hear all the plug-ins within Logic no problem (EXS24, Ultrabeat, etc.). However, when I try to play and hear the sounds coming from my Roland, either I hear nothing at all, or I hear the sounds just fine, but when I try to play another instrument on a different track I still hear the sounds coming out of my Roland.
    I know it's something I'm overlooking. I tried the ports and the click; I tried all the MIDI settings; it all failed. If you guys can help I would really appreciate it.
    Thanks
    E

    I would like to know what you guys believe is better to mix down on: my MOTU HD192 or Logic Pro.
    Try both. See what sounds better to you.
    Also I have a huge MIDI issue. I have my Fantom X7 connected to my G5 via MIDI, and a MOTU HD192 that I'm using as my sound card interface. Using my Roland I can play and hear all the plug-ins within Logic no problem (EXS24, Ultrabeat, etc.). However, when I try to play and hear the sounds coming from my Roland, either I hear nothing at all, or I hear the sounds just fine, but when I try to play another instrument on a different track I still hear the sounds coming out of my Roland.
    I know it's something I'm overlooking. I tried the ports and the click; I tried all the MIDI settings; it all failed. If you guys can help I would really appreciate it.
    See this thread, it may help you.
    http://discussions.apple.com/thread.jspa?threadID=339514&tstart=45

  • Sample Delay - samples into millisecond delay time?

    Hi guys
    Hopefully a simple question and I'd love some info.
    I'm mixing in 5.1 and want to simply put a general 25 millisecond delay in the left rear channel and 35 milliseconds in the right rear channel. Some room calculations for what I recorded require 25 (not 20) and 35 (not 30) milliseconds...... the standard stereo Delay won't allow increments of 5 milliseconds.
    I'm therefore thinking of using the SAMPLE DELAY ........ whilst I have a 24-bit, 48 kHz recording setting ....... is it possible that within the maximum 10000 samples I can get up to 35 milliseconds ....?
    How do we make the calculation work? The manual says 10 centimetres of mic distance is 13 samples at 44k.
    Is it possible to get up to 35 milliseconds using this?
    Many thanks
    Dick
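For reference, the conversion being asked about is straightforward arithmetic; here is a sketch (assuming roughly 343 m/s for the speed of sound, which reproduces the manual's 10 cm ≈ 13 samples figure):

```python
def ms_to_samples(ms, fs=48_000):
    """Convert a delay time in milliseconds to whole samples."""
    return round(ms * fs / 1000)

def distance_to_samples(cm, fs=44_100, speed_cm_s=34_300):
    """Samples of delay for a given mic distance (speed of sound ~343 m/s)."""
    return round(cm / speed_cm_s * fs)

print(ms_to_samples(25))         # 1200
print(ms_to_samples(35))         # 1680 -- well under the 10000-sample maximum
print(distance_to_samples(10))   # 13, matching the manual's 10 cm figure
```

So at 48 kHz, 35 ms is only 1680 samples, comfortably within Sample Delay's 10000-sample range.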

    My dear Christian ....... that is just the quickest, most wonderful solution to a question I've ever had ..... thank you for that PERFECT answer ....... I'm a happy man!
    All the very best
    Dick

  • Mixing down - lost MIDI tracks

    Hi
    I have just started using GarageBand over the past couple of months. I have figured out all about locking tracks etc - or so I thought. I can get all the tracks to run fine (real audio and MIDI). The only thing is that when I export to iTunes, a couple of the MIDI tracks can be lost. Why???
    It all seems quite random. My MIDI keyboard is connected. And all the tracks are locked for mixing down. (Did I need to have the keyboard connected at the moment when I locked the tracks, maybe?) The funny thing is, on some songs I am hearing all the tracks, MIDI included.
    So, baffled...

    You don't need to have the keyboard connected for exporting the mix: and you don't need the tracks to be locked - that is to make it easier for the CPU to handle the load with a lot of tracks.
    As to the missing tracks, forgive me for asking, but are you sure you didn't click the mute button instead of the lock button? (We've all done things like that! - and it's worth starting with the basics). Anyway, try a mix without locking the tracks - perhaps that's the problem, and, as I say, there's no need.

  • Bad audio on mix down

    I am using an older version of Audition (v1.5, to be exact) and am encountering problems with the audio on mixdown. I am adding tags to show audio and then adjusting the length to fit the requirements of our broadcast clock. The time adjustments are no more than 2% and are done at "high precision". What I am experiencing with the resultant audio is wackiness with the audio quality. The best way to describe what I am hearing is that it sounds like an audio recording on tape that has been "accordioned", or as fluttery audio. It does not happen throughout the entire file, only in certain places. It does not appear to have any rhyme or reason as to when it occurs in the file.
    I have checked all other possibilities in the chain the audio passes through and have narrowed it down to Audition. All processing is being done within Audition. All audio is the same bit depth and sample rate: 44.1K and 16-bit. Anyone have thoughts as to why this is happening and what I can do to remedy the situation?
    Thanks in advance for any help you can offer.

    You're not really alone with this. In subsequent versions, time stretching and compression have improved considerably, so I'd suggest an upgrade.

  • Any tips on good EQ for live drums?

    I've just recorded some live drums for a demo, and I've been tweaking the graphic EQ to improve the snare and make the kick drum punchier. Anyone got some good advice on the curve of the EQ for snare and kick drum? I've already added some compression and so far it sounds great, but any tips for further improvement gratefully received!

    http://www.gearslutz.com/board/tips-techniques/167984-drums-mixing-tips-james-meeker.html
    He recommends boosting snare around 120-240Hz...
    I have found shelving the snare, 10 dB or so around 100 Hz, works well too. Also helps with OHs and room.

  • Sample Delay

    Hi,
    with ProTools there is a nice function that allows you to adjust the sample delay caused by adding several plug-ins to a track.
    This way you can compensate for the physical micro-delay created by the poor/stressed DSP processing that amount of data at the same time.
    Now, where can I find the same function here in LE?
    What is it called?
    thanx

    You're talking about plug-in delay compensation. It's built right into Logic (Express and Pro). You can switch its behaviour to act on All tracks, Audio Tracks and Instruments, or None in the Audio preferences (I'm not in front of Logic at the moment...can't remember the exact path).
    You'll want to leave it on "Audio Tracks and Instruments" when you're tracking, and "All" when mixing. Other than that, it's automatic. No adjustments required.

  • Startling error messages when trying to bounce my final mixes down.

    Friends, I'm nervous about this one as I've logged 100+ hours and dropped a little bit of $ for off-site recording.
    When I'm ready to "send songs to disk", an error message pops up saying "GarageBand found 2 audio files in 8-bit format. This format is unsupported and can not be played back," with an "OK" option. After pressing OK, it proceeds to mix down but saves nowhere on my comp (or drive).
    After restarting and trying to reopen the session, I'm given the error message "This file type is not supported," with the truly horrible option of "abort". Then it gives me the error message again.
    It then gives me the original error message once more: "GarageBand found 2 audio files in 8-bit format. This format is unsupported and can not be played back," with an "OK" option. After I press it, it gives me that same message again for good measure, and now the session is up.
    But I can't save it and I can't send a mix anywhere. So I'm pretty screwed?
    We did a little off-site recording and imported those files Sunday, but I'd been working on it for a few days before these glitches came through.
    HELP!
    I'm on a brand new MacBook, OS X 10.9, with GarageBand '11 (6.0.5).

    Thank you, Leonie for replying.
    So I went to the contents of the file and took the problem files out.
    It immediately pops up an error saying that a track was not found: "Audio File "guiro.wav" not found."
    Obviously, since the track for guiro is still there.
    But as I tried to move on, I still got a "GarageBand found 1 audio file in 8-bit format" before I could get to the session and start deleting them.
    As I began doing just that, that same ol' error message popped up for every move, and then when I finished, it still wouldn't save anything, saying: "The document 'SongFour.band' could not be saved."

  • I'm trying to record audio into my iPad2 using GarageBand and an iRig, but I can't monitor my tracks at the same time as recording, meaning if I lay down a drum track and attempt to record guitar, I can't hear the drums while recording. Thoughts?

    I'm trying to record audio into my iPad2 using GarageBand and an iRig, but I can't monitor my tracks at the same time as recording, meaning if I lay down a drum track and attempt to record guitar, I can't hear the drums while recording. Thoughts?

    Hi, I have this exact same problem. I haven't been using an iRig, just a cheap USB-jack cable. I managed to do this with the first cable I bought, but since it broke I've been unable to find a replacement that allows me to hear the drums while recording. Did you manage to get over this issue? If so I would really like to know how. Thanks and best wishes.

  • Is there a way to mix down your tracks into one so you can have more tracks to use

    Is there a way to mix down your tracks into one so you can have more tracks to use

    Here is my tutorial to bounce tracks
    http://ipad.slapwagon.co.uk/
    some quality is lost due to export compression, but it's bearable considering you don't need to connect to a PC or use iTunes...
    cheers
    @zorin
