RMS Levels

Does anyone know how to set the RMS level to fall between -20 dB and -15 dB in Logic Pro X?

I thought I'd provide some more information.
I need the RMS level of one audio track to fall between -15 dB and -20 dB.
This is what I do:
I check the RMS through the Level Meter (under Metering) before applying compression.
I then adjust the compressor's threshold (with the RMS box checked) so the level falls in this range (-15 dB to -20 dB).
After changing the compressor setup I go back and check the RMS level in the Level Meter again. However, nothing changes.
Am I doing something wrong?
Any ideas on how to do it right? How do I set the RMS level?

Similar Messages

  • Continuous (RMS) levels.

    Can anyone tell me how to get precise information on the continuous (RMS) levels of a bounced audio file? I need to know this so I can create the tight-sounding mix I want, and still leave enough headroom for mastering....
    Mac Pro   Mac OS X (10.4.10)   2 x 2.66 GHz 4 GB RAM

    The Multimeter in Logic has RMS readings. It has options for both fast and slow metering times, and the stereo master level section on the right of the plug-in interface shows peak/RMS at the bottom of the meter, so it serves a dual purpose. I believe the levels are multi-colored to distinguish between peak and RMS, or one reads the peak level on the outside meters and the RMS on the inside meters, or something like that....hence the name "Multimeter" (in addition to showing the correlation of the stereo image).
    Noah

  • How to calculate the RMS level of a signal with its spectral representation

    Hi!
    How do I calculate the RMS level of a signal from its spectral representation?
    Thanks

    1. Find the length, N, of the spectral signal.
    2. Convolve the magnitude of the signal with itself.
    3. Take the Nth element of the resulting signal
    4. Take the square root.
    5. This should be the RMS value of your signal.
    Randall Pursley
    Attachments:
    RMS.bmp ‏616 KB
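A more standard route than the convolution recipe above is Parseval's theorem, which says the sum of squared time-domain samples equals the sum of squared spectral magnitudes divided by N, so RMS = sqrt(Σ|X[k]|²)/N. A minimal sketch in Python (using a naive DFT so it stays self-contained; the sine test signal is purely illustrative):

```python
import cmath
import math

def rms_time(x):
    """RMS computed directly from time-domain samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def rms_from_spectrum(x):
    """RMS from DFT magnitudes via Parseval's theorem:
    sum |x[n]|^2 = (1/N) * sum |X[k]|^2, hence
    RMS = sqrt(sum |X[k]|^2) / N."""
    n = len(x)
    spectrum = [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
                for k in range(n)]
    return math.sqrt(sum(abs(c) ** 2 for c in spectrum)) / n

# One full cycle of a unit sine, whose RMS is 1/sqrt(2)
signal = [math.sin(2 * math.pi * i / 16) for i in range(16)]
print(rms_time(signal), rms_from_spectrum(signal))
```

Both functions should agree to within floating-point error.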

  • Can I set the normalization level in the Bounce dialog?

    I want to normalize my tracks to, say, -1 dB instead of 0 dB.  Is there any way to set the normalize feature in the Bounce dialog to do this?

    Hi
    The Art Of Sound wrote:
    Which in turn reduces the dynamic range......
    Louder parts get quieter... and quieter parts get louder....
    No.
    Normalization merely turns the whole signal up or down by a constant gain so the peak level (or RMS level) hits a target: no compression or expansion.
    http://en.wikipedia.org/wiki/Audio_normalization
    CCT
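To illustrate the point in CCT's reply, here is a minimal sketch of peak normalization on raw sample values (the sample list is purely illustrative): a single gain factor is applied, so the ratios between samples, and hence the dynamic range, are untouched.

```python
import math

def normalize_to_peak(samples, target_dbfs=-1.0):
    """Scale every sample by one constant gain so the loudest
    sample lands at target_dbfs. Dynamics are unchanged."""
    peak = max(abs(s) for s in samples)
    target_linear = 10 ** (target_dbfs / 20.0)
    gain = target_linear / peak
    return [s * gain for s in samples]

loud = [0.9, -0.45, 0.225]
normalized = normalize_to_peak(loud, target_dbfs=-1.0)
# Ratios between samples are preserved -> no change in dynamic range
print(normalized)
```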

  • Amplitude and level

    Dear sir,
    What is the difference between amplitude & level measurement and vibration level measurement? I found that both offer measurements such as RMS and peak value, but they gave quite different values when I tried them simultaneously.
    Can I use the amplitude & level measurement to measure vibration level?

    Product differences:
    Amplitude and Levels Step is activated by SignalExpress
    Vibration Level Step is activated by Sound and Vibration
    Functional differences:
    Amplitude and Levels Step gives a new result for every block (Step will hold peaks if you check that option)
    Amplitude and Levels Step supports measurement of DC
    Vibration Level Step always returns the result from the last reset
    Vibration Level Step neither measures nor removes the DC component; instead this step assumes vibration around equilibrium and/or that the data was acquired with AC coupling
    Vibration Level Step supports time-domain integration (to convert acceleration --> velocity --> displacement) prior to the level computation.
    Vibration Level Step supports industry-standard (fast, slow, impulse) and custom time constants for exponential level measurements.
    What types of vibration are you trying to measure?
    Vibration during shock event --> Amplitude and Levels
    RMS level of an arbitrary signal --> Amplitude and Levels
    Machine or structural vibration --> Vibration Level
    In many cases, both steps can be used simultaneously. Amplitude and Levels Step outputs can be used for trigger conditions. Vibration Level outputs (such as Exponential Level and Max-Min) can be used for logging and tracking of industry-standard levels.
    Doug
    NI Sound and Vibration

  • The Levels bar into the red after normalization

    Sorry if this is a daft question; I am new to home recording and need to submit my audio according to specific guidelines.  I'm trying to get into audiobook production.  I have a strong theatrical background, but I don't know much about mastering audio, so this has been challenging...
    In the raw file, the levels were primarily in green, occasionally in yellow.
    I used "Match Volume" to get my RMS to a certain level.  Then I used a limiter set to -3 dB.
    Now the file goes into the red.  According to Amplitude Statistics, there are 0 "possibly clipped samples", so why do the levels keep going into the red?  I thought that was indicative of clipping?
    That's my first question. My second is: does anyone know of any basic audio mastering classes for voiceover using Adobe Audition?  Less about how to edit audio, more about the anatomy of audio files and the meaning of each of these parameters and how they relate to one another.

    Thanks for your response.  I probably did a terrible job of explaining.
    I did originally try the "Normalize" option and then the limiter, but my RMS stats were too high to submit the audiobook. They recommend normalizing peaks to -6 dB.  I found this video which states that you can normalize via RMS levels with Match Volume:  How to Normalize Audio Peak and RMS in Adobe Audition CS 5.5 Tutorial - Normalization - YouTube. I thought that would at least bring the RMS number down so I was within their required range.  It did, so all my amplitude specs meet their requirements now, but I'm just a little perplexed by the red in the level bar.
    This is all very new to me and I need to educate myself.  I don't even really understand the difference between normalization and compression and why one is favored over the other.  This is all really kind of over my head.  I think I can learn it, but I need someone to explain it to me in layman's terms.
    When I do a raw recording, my total RMS is always way off the level the audiobook production company suggests a pre-mastered file should have.  Lowering the gain makes the peak amplitude too low. Peaks should be at -12 dB max while the RMS should be no greater than -28 dB; mine is around -40 dB.  So, in a sort of related question: while recording, how do I lower my RMS without killing the peak amplitude?
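Checking a submission spec like the one above comes down to converting peak and RMS sample values to dBFS. A minimal sketch on raw sample values in Python (a real file would be decoded with an audio library first; the test tone here is purely illustrative):

```python
import math

def peak_dbfs(samples):
    """Peak level in dB relative to full scale (samples in -1.0..1.0)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS level in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A quiet sine: peak 0.25 (about -12 dBFS), RMS 0.25/sqrt(2) (about -15 dBFS)
tone = [0.25 * math.sin(2 * math.pi * i / 64) for i in range(64)]
print(round(peak_dbfs(tone), 1), round(rms_dbfs(tone), 1))
```

Running both checks against the target numbers (peak no higher than the spec, RMS inside the required window) tells you whether a gain change alone can satisfy the spec, or whether compression/limiting is needed to change the peak-to-RMS ratio.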

  • Can't find the horizontal Level Envelope

    I'm making a CD with a bunch of my songs; on some of them, the wave form is skinny, because their volume is lower, and I want to even them up so that the listener isn't turning the volume up and down from song to song.
    I found instructions to do this by adjusting the level of the skinny (i.e. quieter) regions. It says "Move the pointer over the horizontal Level Envelope that you want to adjust. The pointer changes to the level envelope tool and a help tag displays the original and adjusted values." However, I can't see the line, and my pointer does NOT change to a level envelope tool. What do I need to change so that both these things appear?
    Also, if there's an easier way to achieve my goal, that would be great (I tried normalizing the tracks, but that didn't help).
    Thanks!!

    You don't see the line which ends with a ◊ at each end, like this?
    !http://web.me.com/johnalcock/filechute/waveburnerlevel.png!
    Also, if there's an easier way to achieve my goal, that would be great (I tried normalizing the tracks, but that didn't help).
    Adjusting the overall level probably won't get you too far - you can have a song, with a couple of loud peaks, that will appear to be much quieter than an intentionally softer song with controlled peaks. Normalizing won't help.
    The key here is that you want to increase the perceived volume (levels), which has nothing to do with the peak levels. 0 dBFS cannot be exceeded - it's the absolute maximum. So to get your tracks louder, you need to increase your apparent volume, often referred to as the average RMS level.
    A good way to try this is to open your mix in a fresh Logic project (I prefer doing it there instead of WaveBurner, but whatever you prefer). You could try first applying a compressor, threshold set so that any peaks are controlled. You can use a compressor or a multiband compressor, but be careful; the latter can really mess up your mix unless you're careful and practice a lot.
    Next, insert the Adaptive Limiter. Find the peak in your song, and set the left (Input Margin) side of AdLimiter so that you get close, but never go over 0 dB. on the right, set AdLimiter's output to -0.3 dB and crank the gain control to taste. At the bottom of AdLimiter's window, click the disclosure triangle, and set Mode to "NoOver.' You will change the character of the sound doing this; it's up to you if you like it or not, but you can definitely get your track loud.
    There are lots of threads here and in the Logic forums about this. Try searching on AdLimiter, compression and mastering.

  • Displacement RMS value from acceleration

    I am trying to get a displacement RMS value from a signal obtained from an accelerometer. The accelerometer is attached to a shaker that vibrates at 159.2 Hz with an acceleration RMS value of 3.16 m/s². At that frequency I should obtain the same numeric RMS value for velocity (3.16 mm/s) and for displacement (3.16 µm).
    I have used SVT RMS Level.vi with the signal obtained from the accelerometer as input, and I obtain the correct acceleration RMS value. Then I double-integrate that signal using SVT Integration.vi and apply SVT RMS Level.vi, but the value I obtain doesn't match the expected one (3.16 µm).
    Should I eliminate the DC component of the signal before integrating it?  Or before using SVT RMS Level?

    Your expected values are valid only for a pure sine. Every little offset from the DAQ or amplifier will result in different values.
    So you should eliminate the DC in your first measured signal.  Your integration time should be a multiple of 1/159.2 s.
    PS: use 159.155 Hz ;-)
     PPS: If you want to do calibration take a look at SAM (Sinus approximation method)
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
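The reason Henrik suggests 159.155 Hz: for a pure sine, each integration divides the RMS amplitude by ω = 2πf, and 2π × 159.155 ≈ 1000, so the numeric values line up across units. A quick sanity check:

```python
import math

# For a pure sine at frequency f, integrating acceleration divides the
# RMS amplitude by 2*pi*f (giving velocity), and by it again (displacement).
f = 159.155            # Hz, chosen so that 2*pi*f ~= 1000
a_rms = 3.16           # acceleration RMS in m/s^2
w = 2 * math.pi * f

v_rms_mm_s = a_rms / w * 1e3       # velocity RMS in mm/s
d_rms_um = a_rms / w ** 2 * 1e6    # displacement RMS in micrometres

print(v_rms_mm_s, d_rms_um)  # both come out near 3.16, as expected
```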

  • Automation vs. separated channel mixing?

    I have a mixing question I've been dying to ask.
    When it comes to mixing, a friend of mine swears by automation.
    His technique is to put all lead vocals (or any similar sort, so he would put verse 1 & 2 and bridge lead on one channel) on one single channel and automate the level of that channel up and down according to the part of the song and obviously the volume he wants from the vocal part. He has left me with the impression that "That's how everyone does it".
    I on the other hand have never bothered with automation when I mix a song (speaking primarily of vocal channels) unless I really have to. I simply use more separate channels, even for the lead vocal. So, for example, if in verse 1 there are three parts of the verse that I want at different volumes, I just put all three on different channels in Logic and copy the COMPRESSION, EQ settings and bus sends to each. In the end I usually end up with between 5 and 20 audio channels of lead vocal alone. I can then also set EQ and COMP and whatever else I want to do on each channel individually as the need arises.
    This to me is way faster because while mixing the entire song I can quickly get to the channel that needs adjustment and change the volume according to what I hear.
    Mixing with Automation in the described scenario by my friend is just clumsy to me and harder.
    Please let me in on your preferred way of mixing: with automation, similar to the way my friend does it, or do most of you also separate audio parts onto various channels as the need arises?
    Of course I am always willing to learn.
    Thanks.
    Message was edited by: Chance Harper

    I mostly use similar approaches to the very well described methods in Bill's response - except that first, I spend a lot of time (where necessary) to tweak levels within the vocal tracks in the sample editor. In there, I'll first remove pops, thumps, noises etc. I'll then change gain on individual phrases and sometimes even words or parts of words. I don't use compressors to replace riding a fader or automation.
    Then, once my tracks are cleaned up, I'll often use two compressors - one dealing with undesirable remaining peaks, and one dealing with RMS levels, but mostly for the sound it gives me because of the choice of compressor, not gain control. All of this applies to other instruments too - my goal is always to get the raw tracks as close to 'right' as possible before applying anything - which goes back to the recording stage, obviously.
    As I mix OTB often, the automation I use is mostly to push-pull in fairly coarse increments. Using the same 'parallel' compression routing, but not actually using compression, I might apply effects to another bus, so I'm generally mixing the processed vocal track with the untreated vocal track, and sometimes swapping them for more extreme effects. As I'm often OTB at this stage I don't have a lot of options, as I use the creaky old (now obsolete) Uptown automation system - or none at all on certain tracks. Somehow there's a magic that happens when there are human hands on faders during a mix, and the mix becomes another performance.
    Of course, this applies mostly to the music type with which I work - roughly 80% audio, 20% virtual instruments, so everyone's techniques will be different.

  • Smart Software Engineer with LabVIEW experience (and acoustics a plus) needed in Boston, MA

    We are looking for a staff software engineer to join our company in Boston, MA (near downtown).  We have a 3000+ VI application that has been in continuous development by multiple software engineers (currently 4 engineers + 1 intern) for 15 years.  Every year we release a new version of the software with significant new features.  An engineer at our company needs to be more than just a LabVIEW hacker.  We need a software engineer who can go into our large application and modify it, sometimes in very fundamental ways, without breaking existing functionality, and who has an eye for how their changes impact the maintainability, scalability, reliability, and readability of our code.
    Candidates will likely be LabVIEW Architects or have equivalent experience if they don't have formal certification.  We lean towards candidates who have Masters Degrees in such fields as Electrical Engineering, Mechanical Engineering, and Computer Science. Interviews will be conducted over phone, web, and in person by a LabVIEW Architect and will need to be able to discuss topics such as the following:
    - coupling and cohesion in software design, and how this relates to design patterns such as action engines
    - software lifecycle models; state machines, parallel loop architectures, race conditions, data structures, type definitions, XControls
    - Object Oriented design
    - importance of documentation, importance and use of source code control
    - pseudo code and its usefulness as a design tool, some exercises will require users to read and write pseudo code to solve classic computer science problems
    - tradeoffs of various file formats in terms of flexibility for future software changes
    - FFT, Frequency Response, Amplitude/phase, RMS level, dB, noise, averaging, distortion, loudness, A-weighting
    Formal job ad is below:
    To be considered for this position, please send resumé and cover letter explaining why you are the ideal candidate for this job (in Word or PDF format only) to [email protected]. Please use the subject title Software Programmer.
    Programmer for Audio Test and Measurement software - Boston
    Listen, Inc. is the market leader in PC based electro-acoustic test and measurement systems for testing loudspeakers, microphones, telephones, audio electronics, hearing aids and other transducers. We have been in business for over 15 years and our continued growth has created an opportunity for a software engineer to join our programming team. This is an exciting opportunity to work on an industry leading electro-acoustic test and measurement system used by numerous Fortune 500 companies in the field of loudspeaker, microphone, headphone, telecommunications and audio electronic manufacturing.
     This position reports to the Software Manager. Duties include, but are not limited to:
    Programming in LabVIEW
    Designing and coding new Sound Measurement and Analysis software
    Improving, reviewing and de-bugging existing code
    Preparing internal and user technical documentation
    Testing code
    Interfacing with management, sales teams and customers to define tasks
     Required skills / education
    Bachelor’s degree (Masters preferred) in electrical engineering, mechanical engineering, computer science, physics, or similar subject
    Strong background (4+ years) in programming with 1+ years in LabVIEW.
    A methodical approach to coding, testing and documentation
    The ability to work well in a small team. A willingness to challenge and discuss your own and other people’s ideas.
    Experience in acoustic engineering is a plus.  Relevant topics include FFT, Frequency Response, Amplitude/phase, RMS level, dB, noise, averaging, distortion, loudness, A-weighting
    About Listen
    Listen has been in business for over 15 years and our suite of PC & sound card audio test & measurement products is the accepted standard in many blue-chip companies worldwide. We offer the spirit and flexibility of a small company, combined with stability and an excellent externally managed benefits package which includes competitive salary, healthcare, paid vacation, retirement plan and more.
    Applicants must have authorization to work in the US. We are unable to assist with visa / work permit applications.

    We're interviewing candidates, but this position is still available.

  • What is the MultiMeter telling me?

    I have a project with MIDI tracks and audio tracks recorded at very conservative levels. I always record and mix at low levels. My Out 1-2 fader is set to -14 dB with a peak of -9.6 dB. My channel strip peak in my Mixer view is -8 dB. As I said, none of my audio tracks were recorded "hot" in the slightest.
    And yet my MultiMeter peak is 4 dB above 0 dB. I noticed that regardless of what my Out 1-2 fader or my master fader are set at, the MultiMeter still reads the same overs. But adjusting my channel strip faders does affect the level on the MultiMeter.
    I realize that in some instances that while no channel strip could have overs there could still be an over in the master fader through cumulative audio signal. But why is the MultiMeter not affiliated with my out 1-2 or the master fader?
    So what does this mean? Before I bounce a mix to disk for further editing does it matter what the MultiMeter reads? Should I rely on my out 1-2 fader for the levels that are actually going to be bounced to disk?

    Hi Franz, your piano music kills my hands...
    Just a couple of things:
    The Multimeter reads prefader since it is an insert. If you have your master fader at anything lower than 0dB (unity) then you can subtract that from what the channel meter peak values will be. So if your multimeter reads a top peak level of -2dBfs and your fader is at -10dB, then your channel meter will read -12dBfs for its top peak value of the same source.
    You don't say if you have anything else (like an EQ, compressor, etc.) after the Multimeter; you shouldn't, for clarity's sake.
    You should probably set your output (Output 1-2) to unity gain along with the master fader. If your Multimeter is over 0 dBFS, then turn down your tracks. It goes beyond the scope of your question why this is, but while Logic can handle high channel levels, other things down the line (like plug-ins and summing issues) can't, and the sound will suffer. Your mixes will thank you.
    If you are at 24-bit, realize that you would have to reduce your level 48 dB to bring it down to full-scale 16-bit, so you have headroom to burn. If your monitors are even semi-calibrated to around 83 dB SPL at a -20 dBFS signal, you will have a healthy sound coming out of the speakers with plenty of digital headroom left over. If you want to spend that later (mastering, etc.) to make your mix louder, you have plenty of options.
    Back to your question. The multimeter is also showing you the RMS and Peak values at the same time. The dark blue will agree with your channel meter unless you have the fader set below (or above) the 0dB mark (unity). The light blue is RMS and is showing you an average level - this is important as this is more aligned to how we hear loudness issues.
    Again, you can take this or leave it but here is what I would do:
    1) Put the Output1+2 and master fader at 0dB and leave them there.
    2) Get your monitoring system at least close to -20dBfs equating to 83dBspl - if you have to turn down your master faders because you are clipping but it isn't making your ears hurt, then turn UP your monitoring system. Mix to an RMS level of -20dBfs and then 'master' or finalize your mix wherever you feel the need to. With this method you have options and you can control the trade off between pure transients and loudness.
    3) Make sure that the Multimeter is the last plug-in on your chain or it is worthless as a meter.
    4) After you have #2 above, the light blue segment will be the thing you look at the most.
    hope this answers your question and helps
    Paul

  • Pre-fader vs. post-fader metering

    I generally avoid any internal clipping on my virtual instruments by making use of Logic's pre-fader metering option. Is this really necessary?
    Which is best: mix post-fader and forget about the internal levels of plug-ins, instruments etc., or always make sure there is no internal clipping?
    What are the results, really, of making sure there is no post- or pre-fader clipping?

    Chance Harper wrote:
    I did a quick experiment to see what the result would be if I overloaded or pushed the internal volume of a plug-in but made sure the output channel wasn't clipping VS. a plug-in that got leveled pre and post.
    I used only a drum kit from Battery 3 with a simple riff, pushed the plug-in's overall volume to +6.0 dB and then made sure my post-fader level wasn't clipping; both versions then got mastered using Ozone 4.
    Both of the tracks reached the same RMS level, neither of the two sounded louder than the other.
    The one I pushed up in the plug-in sounded a bit squashed and not as clear or pure as the one that considered the pre and post fader metering. The drums sounded way more defined and clear when ensuring both pre-and post metering wasn't clipping.
    It does seem that by ensuring none of your virtual stuff clips internally you get purer sound.
    Of course I am sure you could still get a great sound if you internally only pushed it over by 2 dB. I still think it's best, though, to mix pre and post as a rule. Any comments?
    Message was edited by: Chance Harper
    Again, I would agree with BeeJay's original reply to the original post. It's not good practice to let internal levels fly out of control and there's really no good reason to. And I would also agree that keeping levels lower results in better quality mixes.

  • Encoding aif to ac3 for DVD raises volume

    MacPro (early 2008)
    2 GB RAM
    OS 10.5.8
    Compressor 3.0.5
    DVD Studio Pro 4.2.1
    Soundtrack Pro 2.0.2
    1) I encode my 50 sec long .aif to an ac3 using Compressor.
    2) Import to DVD Studio Pro to a track within my show.
    3) Burn DVD.
    Played in several different DVD players, it seems that the volume has been increased by about 10 dB vs. the level I had mixed in Soundtrack Pro 2.
    Here are the settings I use in Compressor:
    Audio Tab:
    Target System: DVD Video
    Audio Coding Mode: stereo 2/0 LR
    Sample Rate: 48 kHz
    Data Rate: 192 kbps.
    Bit Stream Mode: Complete Main
    Dialog Normalization: -31
    Bitstream Tab:
    Center Downmix: -3.0 dB
    Surround Downmix: -3.0 dB
    Dolby Surround Mode: Not Indicated
    Copyright Exists (checked)
    Content is Original (checked)
    Audio Production Information (unchecked)
    Preprocessing Tab:
    Compression: none
    General Digital Deemphasis (unchecked)
    LFE Channel: Low-Pass Filter (checked and greyed out)
    Full Bandwidth Chan. Low Pass Filter (checked); DC Filter (checked)
    Surround Channels 90 percent Phase-Shift (checked and greyed out)
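One metadata setting worth double-checking here is Dialog Normalization. As I understand the AC-3 spec (an assumption worth verifying against the Dolby documentation), the decoder attenuates the program by (31 + dialnorm) dB, so -31 means no attenuation at all, while the common default of -27 means a 4 dB cut. A mismatch between the dialnorm a playback chain expects and the one you encode can therefore shift perceived levels:

```python
def dialnorm_attenuation_db(dialnorm):
    """Decoder-side attenuation implied by an AC-3 dialnorm value
    (dialnorm is given in dB, in the range -1 to -31)."""
    return 31 + dialnorm

print(dialnorm_attenuation_db(-31))  # 0 dB cut: levels pass through unchanged
print(dialnorm_attenuation_db(-27))  # 4 dB cut
```

This alone would not explain a full 10 dB jump, but it is one of the few places in the settings above where the encoder can change playback gain.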

    Thanks for the detailed post.
    Are you playing the disk through the same audio mixer as how you monitor your audio from the computer? Unless you have the same monitoring path for both streams, it's hard to say whether the audio is really louder.
    This is how I monitor -
    • FCP playback- it goes to a Matrox MXO2mini, then the audio goes to a Mackie mixer, then to a set of near field speakers.
    • DVD playback from a stand alone DVD player - the audio goes directly to the same Mackie mixer then to the same speakers.
    With the gain on the channels set to the same level and the output slider on each channel set the same and the master output level set the same, it would be noticeable if there is a difference between FCP playback and DVD playback. I can say I haven't noticed it.
    How are you monitoring?
    x
    edit - what happens if you take the ac3, convert it back to aiff and play it within STP. Does it exhibit a change in gain or does it still play to the same RMS levels?

  • My Audio's Getting Dynamically Compressed When It Shouldn't

    I just took a look at a DVD I have been compiling, and the soundtracks are horribly dynamically compressed. In Compressor I set the dialog normalization to -31 and set "Film Standard Compression" to "none". I leave the rest of the defaults as is. Does anyone know where the problem could lie? The tracks were sent to Compressor through an FCP export. Also, I mastered the tracks a bit myself. Is there any way I made them too hot, and there is some preset threshold in Compressor that I don't know about?

    Thank you for your quick response!
    My soundtrack is basically one long stream of music. When compressed to an AC-3, I can hear the volume "wavering". The nature of the music is that there are sometimes transient "spikes" - i.e., a snare hit that may peak very briefly. In the AC-3 file, it sounds like a slow-response compression setting was applied to it. When it hits one of those peaks, the volume drops, and slowly rises again (in relative terms - the volume rise sounds to be over the course of a second).
    At some parts where there is a lot of quick dynamics in the soundtracks, it sounds like it is continuously raising and lowering.
    I know more about music engineering for CDs than for movies, but it sounds to me like there is some sort of threshold that is very close to the average level of my soundtracks, and when it goes over that threshold, some sort of compression is applied (at a high ratio).
    Could it be that my music is mastered too hot? What RMS level should I shoot for? Do you know anything about the inner workings of Compressor?

  • Headroom Questions

    Hey,
    I've been reading a lot of discussions about headroom, and I'm hoping someone can clarify a couple things that are confusing me.
    It seems the common wisdom is to leave about 6 dB of headroom for peaks and 12-18 dB for the average signal level.
    Here are my questions:
    1. Is the -6 dB peak level in reference to a recorded audio file? In other words, if I put an audio file on a track in a Logic project, insert only a Level Meter on the channel strip, and look at the Peak and RMS readings of the Level Meter, ideally the peak should be roughly -6 dB and the RMS should be -12 to -18 dB?
    2. Assuming the answer to #1 is yes: I don't quite understand this, in that, if the peaks of the audio file are 6 db below the max level, what is the headroom for? "inter-sample" peaks? I realize this may be a dumb question, if so, sorry!
    3. Assuming again that the answer to #1 is yes: Does this apply to ANY audio file, including a pre-mastered stereo mix of a song? Because, I thought that when you do your final (pre-mastering) mix, you should try to get Output 1-2 level meter to be as close to 0 as possible, which I found results in the peaks in the mix's audio file being close to zero. Am I missing something?
    Thanks.

    This is a very complex subject but basically resolves around how sample value(s) are handled inside a computer.
    One crucial thing to remember is that when you are looking at a meter in Logic (or any DAW) you are not seeing a level, you are seeing a sample value which is representing a level. It has nothing whatsoever to do with volume.
    Once a signal passes through your A/D and is inside your computer it is nothing but a string of 1's and 0's. There is no sound inside your computer, there is only maths. A single track by itself really presents no problem but it is rare that a song you are recording is going to be a single track. As you add tracks these sample values get added together, you may have come across the term 'summing'. Summing is exactly what it says, it adds these numbers together.
    The internal mix engine in Logic will give you phenomenal headroom, but we must remember that this needs to come back out into the real world, out of our D/A, so we can hear it and play it. Very, very simplistically, when a number is too big, a computer will truncate it. Keeping your tracking levels lower (the -6 dB for peaks is cool; at 24-bit it isn't harmful to take that to -12) will give the summing engine room so that nothing is truncated when coming back into the real world.
    I like this question as it is an engineering question. When I started with tape we had to learn how to align the machines, de-mag the heads, clean the tape path, what tape had what tolerances and how hard we could hit certain tape brands to get a good sound. Now things have moved on but there is a new set of rules to learn. This one is the most crucial, I find.
    You can experiment with levels, as I and many others have done and keeping tracking levels lower really does give a better sounding mix. As I said, this is a very simplistic explanation and you have to remember that plug-ins can clip too. Everything that you put onto a channel will 'add' to the numbers that the summing engine will need to process.
    So, in your last question, a single track may not be clipping but 24 tracks all hitting -1, once summed with all manner of processing, will be way too hot.
    Once you are finished and end up with a stereo song, you basically have 2 tracks in a single phase-locked stereo file. A mastering engineer will bring this 2 track recording up to a healthy level.
    I hope this helps a little.
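The summing arithmetic in the reply above can be sketched with worst-case numbers. This is a simplification (real program material rarely sums fully coherently), but it shows why 24 tracks that each look safe individually can still overload the output bus:

```python
import math

def dbfs_to_linear(db):
    """Convert a dBFS value to a linear amplitude."""
    return 10 ** (db / 20.0)

def linear_to_dbfs(x):
    """Convert a linear amplitude back to dBFS."""
    return 20 * math.log10(x)

# Worst case: 24 tracks, each peaking at -1 dBFS, all hitting at once.
tracks = 24
summed_peak_dbfs = linear_to_dbfs(tracks * dbfs_to_linear(-1.0))
print(round(summed_peak_dbfs, 1))  # far above 0 dBFS

# Tracking each channel at -12 dBFS instead leaves much more room.
summed_low_dbfs = linear_to_dbfs(tracks * dbfs_to_linear(-12.0))
print(round(summed_low_dbfs, 1))
```

Since dB is logarithmic, N coherent equal-level sources add 20·log10(N) dB to the per-track level: about 27.6 dB for 24 tracks, which is why conservative tracking levels matter before the master bus.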
