Loudspeaker Frequency Response

Hi all! I'm making a program to measure the frequency response of a loudspeaker. I'll be using a sweep to test the loudspeaker. 
In particular, I would like the program to behave like the one shown below, which works in real time. How can I implement this in LabVIEW? This is the screenshot.
Details about the plot.
First graph:
Shows a real-time plot of a frequency sweep with a constant sine sweep amplitude of 1 V. When the sweep is started, the graph shows the FFT moving from left to right, with the FFT peak at the maximum amplitude of 1 at the current sweep frequency.
Second graph:
Shows the plot of the Sound Pressure Level in dB versus frequency.
Please refer to the picture and video link below.
https://www.youtube.com/watch?v=sKC3ioWXG38, skip to 4:10
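For readers who want the underlying math in text form, here is a minimal offline Python/NumPy sketch (not the LabVIEW implementation) of what the two graphs display: a 1 V logarithmic sine sweep, a block-wise FFT whose peak tracks the sweep frequency, and the conversion of a microphone block to SPL in dB re 20 µPa. The microphone sensitivity is a placeholder value, not anything from the original post.

```python
# Minimal offline sketch (Python/NumPy) of the math behind the two graphs.
# Assumptions: a 1 V logarithmic sine sweep, an ideal microphone with a
# made-up sensitivity, and block-wise FFTs instead of true real-time display.
import numpy as np

fs = 44100                       # sample rate (Hz)
T = 5.0                          # sweep duration (s)
f0, f1 = 20.0, 20000.0           # sweep range (Hz)
t = np.arange(int(T * fs)) / fs

# Logarithmic sine sweep, constant 1 V amplitude (the first graph's stimulus)
k = T / np.log(f1 / f0)
phase = 2 * np.pi * f0 * k * (np.exp(t / k) - 1.0)
sweep = 1.0 * np.sin(phase)

# Block-wise FFT: the magnitude peak moves from left to right with the sweep
block = 4096
win = np.hanning(block)
seg = sweep[2 * block:3 * block] * win
spec = np.abs(np.fft.rfft(seg)) * 2 / np.sum(win)
freqs = np.fft.rfftfreq(block, 1 / fs)
print("FFT peak at %.0f Hz, amplitude %.2f V" % (freqs[np.argmax(spec)], spec.max()))

# Second graph: SPL in dB re 20 uPa from the microphone signal.
# mic_sensitivity is a placeholder (V per Pa) -- use your microphone's value.
mic_sensitivity = 0.05
mic_voltage = seg                        # pretend the mic recorded this block
pressure = mic_voltage / mic_sensitivity # Pa
p_rms = np.sqrt(np.mean(pressure ** 2))
spl_db = 20 * np.log10(p_rms / 20e-6)
print("Block SPL: %.1f dB" % spl_db)
```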

Assuming your idea is to feed a frequency sweep into an amplifier connected to the loudspeaker and measure the frequency response with a microphone:
You need to monitor the signal at the loudspeaker terminals to account for any non-linearities in the signal generator and amplifier.
You need to know the frequency response of the microphone; this is difficult, which is why calibrated microphones are expensive.
You need an anechoic chamber so that the results are not affected by any room resonances.
Your sound level plot is in dB(A). My understanding of A-weighting is that it is meant to reflect perceived human loudness across the audio frequency range. If what you care about is loudspeaker performance, it may be worth dropping this additional weighting curve and its complexity.
This will be a difficult project. Please let us know how you get on.
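For reference, the A-weighting curve mentioned above is the standard IEC 61672 formula and can be computed directly if you want to add or remove the weighting from your SPL plot. A small Python sketch (not part of the original question):

```python
# Standard A-weighting magnitude (IEC 61672), in dB relative to 1 kHz.
import numpy as np

def a_weighting_db(f):
    f = np.asarray(f, dtype=float)
    f2 = f ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * np.log10(ra) + 2.00   # +2.00 dB normalizes to 0 dB at 1 kHz

print(a_weighting_db([31.5, 100, 1000, 10000]))  # approx. -39.4, -19.1, 0.0, -2.5
```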

Similar Messages

  • Where can I find the amplitude and phase frequency response when the AC coupling is set on the 4472

    My friends,
    I need to know the amplitude attenuation but, most importantly, the phase distortion introduced on the low-frequency components of the signals when the AC coupling is set (i.e. when the high-pass filter is connected) on the 4472 and 4472B DAQ boards.
    I can construct the amplitude frequency response by generating and acquiring a sine waveform of a known amplitude. But I cannot construct the phase distortion introduced by the circuitry.
    However, I assume that this crucial information should be available in the DAQ manual or on the NI website, but until now I haven't found it.
    Thanking you in advance,
    crimolvic from Chile

    crimolvic,
    Here are the Specifications and Datasheet for the 4472.  They indicate a phase non-linearity of less than 0.5 degrees across all frequencies.
    For information on how this varies with frequency, see the attached spreadsheet. This response was the result of testing on a single 4472. Although this is classified as a "typical" response, it is not guaranteed.
    Have a great day!
    Travis
    Attachments:
    4472 Phase Linearity.xls 21 KB
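    If you only need a rough feel for how AC coupling affects low-frequency phase (beyond the attached measured data), a single-pole high-pass model is easy to evaluate. The cutoff below is a placeholder, not the 4472's actual value; take that from the specifications or the attached measurement. A minimal Python sketch:

    ```python
    # Phase shift and attenuation of a single-pole RC high-pass filter, a simple
    # model of an AC-coupling input stage. fc is an assumed placeholder cutoff.
    import numpy as np

    fc = 3.4  # assumed -3 dB cutoff in Hz (placeholder)
    f = np.array([1.0, 5.0, 10.0, 50.0, 100.0])

    phase_deg = np.degrees(np.arctan(fc / f))             # phase lead, degrees
    atten_db = 20 * np.log10(f / np.sqrt(f**2 + fc**2))   # amplitude attenuation

    for fi, ph, att in zip(f, phase_deg, atten_db):
        print(f"{fi:6.1f} Hz: {ph:6.2f} deg lead, {att:6.2f} dB")
    ```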

  • MacBook core duo Sept. 2006  - the audio is a mystery.  Any analogue out has bass boost with bass distortion.  With digital out, by USB or Airport Express Airtunes, the frequency response is normal.  Somewhere, Apple put in an analogue bass boost.. why?!

    Since new, my Macbook core duo has sent all audio to all analogue outputs with frequency distortion.  Some physical hardware or firmware in the analogue section adds a bass boost to the frequency response of the audio....  And, this boost adds bass frequency distortion to most, if not all, of the audio analogue output.
    This happens for all audio sources, players, iTunes, videos, streaming audio/video, movies... any sound source available to the MacBook. No one with whom I have talked about this problem has ever heard of it. Inelegant workarounds are to use equalisers on the players and in iTunes; iTunes even has a preprogrammed "bass reduce" setting on its equaliser, as if it knows already that this inherent bass-boost "feature" has been built into the MacBook and cannot be defeated by the user.
    The digital audio is of sufficiently high quality to play well on a very good to excellent sound-system;  so, it's a shame someone has mucked around with the frequency response of the analogue conversion, by designing the frequency distortion right into the computer.  This motherboard, (which includes the sound-section), has been replaced, along with the speakers, and everything sounds exactly the same as it was before.
    To get a flat frequency response and no frequency distortion, I listen on one receiver using AirTunes (it doesn't matter whether the output jack is wired analogue or connected via optical cable; the frequency response is flat), and I listen on a high-end stereo system using a USB output port to a D/A converter, passing the analogue result through long patch cords connected to the "line-in" jacks of the stereo. The bummer is that most of my audio sources are not derived from iTunes, hence I cannot use AirTunes with AirPort Express for them. I've tried using "Airfoil" without luck; the sound becomes distorted with sibilance and other frequency anomalies.
    Question:  has anyone else discovered this sound imperfection in any Mac product?  And, does anyone know if Apple is aware of it?  Finally, has anyone found an Apple fix for the problem;  or, at least, come up with better solutions than I have?
    Thanks heaps for any input and answers you may supply!!  junadowns

  • Frequency response requirements for headphones with CMSS on XFi ???

    Hi,
    I would like to know if someone could tell me what kind of headphones are suitable for the CMSS mode with the XFi.
    I mean between: flat response, free-field correction, or diffuse-field correction.
    Applying HRTF filtering should mean that headphones with a flat response are the best option (the same configuration as binaural recordings).
    But I strongly doubt that the Creative team expects customers to possess such a pair of headphones, as it is rather for scientific uses (psychoacoustics, audiology, etc.).
    So, if we look at the technical solutions for a wide audience, we have two options (FF correction and DF correction). Here is the trick: these corrections are intended to reproduce some of the effects of an HRTF (for two different environment configurations of HRTF measurements). This is why the frequency response of most headphones has a notch in the region of a few kilohertz.
    To simplify, if we listen to binaural sounds with classical headphones, the effect of the outer pinna is reproduced twice.
    So I guess Creative has implemented a kind of normalization/equalization/correction process to deal with the non-flat frequency response of headphones, but does anyone know whether they have chosen diffuse-field or free-field correction?
    This post might seem like a detail, but the issue can be very important for the accurate localisation and coloration of 3D sounds with headphones.
    Thank you, and please forgive my english!

    The only possibility that I can think of is that 2/2.1 mode is NOT as simple as headphone mode with crosstalk cancellation. Perhaps the HRTF only kicks in for sound sources outside of the arc directly in front of the listener. If that were the case, you wouldn't perceive any distortion for sound sources in front of you.
    Also, you are wrong regarding DirectSound3D. Keep in mind that Direct3D and DirectSound3D are not the same. The whole point of OpenAL and DirectSound3D is that they present an API to the programmer through which there is NO specification of the number of speakers. When using OpenAL or DirectSound3D, the only thing a programmer can do is specify the location of a mono sound source in 3D space relative to the listener. The speaker settings for your DirectSound3D or OpenAL device will then determine how this sound is "rendered" by the soundcard. It is not under control of the game. For example, if you have 5.1 speakers and the 3D position is behind you, the SOUND CARD will make the decision to use the rear speakers. If you use headphones, the SOUND CARD will decide to apply an HRTF to create the illusion of a rear sound source. The point is that the game does not have control over how many speakers you will get sound from.
    However, to further complicate the situation, there are SOME games (HL2 is an example) where DirectSound3D is used, BUT the sound output of the game itself IS a function of the Windows speaker settings. This is not how programmers are SUPPOSED to use DirectSound3D. I've written about this countless times. There is a good post on [H]ard|Forum about this. Do an "advanced" search with my username (thomase) looking for the terms "hl2" and "cmss".

  • What is the frequency response for the new iPhone 5 earbuds?

    I have recently purchased a new iPhone 5 with iOS 6.0.1. I was wondering about the specifications of the new earbuds. What is the frequency response? Thank you for the information.

    Apple has released 2 sets of earbuds with the iPhone 5 release.  There are the EarPods and also the In-Ear Headphones.  Here are the In-Ear Headphones: http://store.apple.com/us/product/MA850G/B/apple-in-ear-headphones-with-remote-and-mic?fnode=49.  The frequency response on these is 5 Hz-21 kHz.  I was unable to find a definitive figure for the EarPods, but the best I found was 30 Hz-16 kHz.  Hope this answers your question.

  • Why do I get a Track out of memory error while running open loop frequency response?

    MatrixX Build 61mx1411: I get a "Track out of memory" error when I run the Open Loop Frequency Response from the MatrixX pull down tools. What can I do to prevent this? We are running on an HP B1000 with 768 MB of RAM under HP-UX 10.2.

    In the old days of MatrixX, say Version 5 and prior, the user actually selected the amount of memory that would be allocated. Depending on the size of the model, etc., you would have to allocate memory. In Version 6.0 and going forward there is no need for the user to manually allocate the memory.
    Build {rstack=50000, istack=200000, sstack=50000, cstack=500000}
    If this is a command in a script file that you are running and the error results from it, then I would try deleting everything after the word Build and starting it back up.
    i.e. only use Build
    I don't believe that there is a way to manually allocate the initial SystemBuild Stack size.
    I believe initially the stack size is set to 10010.
    However, one way you can manually set the initial SystemBuild stack size is to create a large StateSpace block as soon as you start up SystemBuild. This will prevent piecemeal reallocations while using SystemBuild.
    You can create a new SuperBlock in SystemBuild, then drop down a StateSpace block with 199 inputs, 199 outputs, and 1 state, and enter ones(200,200) as the StateSpace matrix without any problems. This resizes the internal stack to at least 40000.
    You really should not have to do this, but if it helps, you might think about doing it in your startup.ms file; you could use SBA or load the file, then delete the SuperBlock and begin working.
    "Bob" gave me this little tidbit.
    Please let me know if any of this is of use.
    Garrett
    Garrett Thurston
    [email protected]
    Phone: 781.993.5540

  • Room frequency response measurements

    Does anyone know of a shareware program that uses your PC's sound card to play pink noise while recording from the microphone, and then displays the room's frequency response?
    I have a high-end stereo system and am looking for something that will show me what my room modes are doing.
    Thanks in advance.

    Thanks, but I'm not looking for pink noise to drown out others; it's for use as a measurement. See my post again.
    1.33ghz PowerBook g4, 17-inch   Mac OS X (10.4.7)   1.5gb ram
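    For anyone reading along who is happy to script it rather than hunt for shareware, here is a minimal Python sketch of the same idea, assuming the third-party sounddevice package is available (an assumption, not something mentioned in the thread): generate pink noise, play it through the sound card while recording the microphone, and look at the averaged spectrum of the recording.

    ```python
    # Sound-card room measurement sketch: play pink noise while recording the
    # microphone, then inspect the recorded spectrum. Requires the third-party
    # "sounddevice" package (pip install sounddevice).
    import numpy as np
    import sounddevice as sd

    fs = 48000
    duration = 10.0
    n = int(fs * duration)

    # Pink (1/f) noise via spectral shaping of white noise
    white = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])           # -3 dB/octave power slope
    pink = np.fft.irfft(white * scale, n)
    pink *= 0.3 / np.max(np.abs(pink))             # keep headroom

    # Play through the speakers and record the room mic at the same time
    rec = sd.playrec(pink.astype(np.float32), samplerate=fs, channels=1)
    sd.wait()

    # Averaged spectrum of the recording (crude Welch-style estimate)
    block = 8192
    segs = rec[: (len(rec) // block) * block, 0].reshape(-1, block)
    psd = np.mean(np.abs(np.fft.rfft(segs * np.hanning(block), axis=1)) ** 2, axis=0)
    f_axis = np.fft.rfftfreq(block, 1 / fs)
    response_db = 10 * np.log10(psd + 1e-20)
    print(f_axis[np.argmax(response_db[1:]) + 1], "Hz is the strongest band")
    ```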

  • Frequency Response Function & FFT & Inverse FFT (problem of unit Volts-RMS)

    Hello everyone,
    I am currently working on a VI to compare two analog signals: the first corresponds to the output signal (my reference), which is sent by my data acquisition card to a shaker, and the second corresponds to the input signal recorded by an accelerometer fixed on the same shaker. The final goal of the VI is to correct the analog output signal using the recorded analog input signal, so that the vibrations on the shaker correspond to what we really want.
    To summarize, I have a problem of units with the Volts-RMS...
    So this is my method for the VI:
    First, I have to calculate the Frequency Response Function between the two analog signals (output and input). For this, I use the "Frequency Response Function (Real-Im).vi", which returns the complex values of the FRF in Volts-RMS (but I don't want to use this unit).
    Then, I want to calculate the FFT of the analog output signal (my reference). There are two different blocks which can be used: "FFT Spectrum (Real-Im).vi" and "FFT.vi".
    The "FFT Spectrum (Real-Im).vi" returns the FFT complex values of the signal in Volts-RMS, and "FFT.vi" returns the FFT complex values in Volts (or tell me if I am wrong, thank you). I would really like to use the second one because of the unit.
    Then, I divide the FFT just calculated by the Frequency Response Function calculated just before.
    At the end, I calculate the inverse FFT of that with "Inverse FFT.vi", which uses complex values with the same unit as "FFT.vi".
    I don't want to use the Volts-RMS unit because I absolutely want to use the blocks "FFT.vi" and "Inverse FFT.vi".
    The problem is that I cannot find a block which uses the same unit for the Frequency Response Function. The "Frequency Response Function (Real-Im).vi" returns only the complex values in Volts-RMS. Maybe it is possible to convert it correctly? Or maybe there is another block which can be used to calculate the Frequency Response Function with the same unit as the FFT and Inverse FFT? Because I can't mix everything for the moment...
    Thank you for your help,
    Best regards,
    Sebastien

    Hello Preston,
    No, I have not used the Sound and Vibration toolkit. I have only used the signal processing toolkit with the two toolboxes "Waveform Measurement" and "Transforms".
    But I think that what I have done so far in my VI is correct (I have finished the complete VI). However, I am not sure about the units (Volts, Volts-RMS...) and I would like to understand them.
    I have tried the Sound and Vibration toolkit for the frequency response function (because you told me that it handles all the unit conversions) and I obtain the same results as with the "Frequency Response Function.vi" of the "Waveform Measurement" toolbox.
    But I would like to understand the units (see my previous post, please). For example, for the FFT (the result is complex), why is it sometimes in Volts and sometimes in Volts-RMS? Is it possible to convert between them? How?
    If you want, I can attach my VI to the forum; that will maybe help you explain it to me. Maybe it will also help other people who are interested.
    And if someone else can give me other precisions or advices about it, do not hesitate.
    Thank you for your help,
    Sebastien
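    On the Volts versus Volts-RMS question, the relationship is plain DFT arithmetic and can be checked numerically. The NumPy sketch below (an illustration, not a description of the LabVIEW VIs' internals) shows that for a sine of peak amplitude A, the raw FFT bin magnitude is about A·N/2, the single-sided amplitude spectrum 2|X|/N gives A in volts peak, and dividing by √2 gives volts RMS, so converting between the scalings is just a constant factor per bin.

    ```python
    # Generic DFT scaling check (NumPy), illustrating Volts vs Volts-RMS.
    import numpy as np

    fs, n = 1000, 1000
    a_peak = 1.0                      # 1 V peak sine
    f0 = 50.0
    t = np.arange(n) / fs
    x = a_peak * np.sin(2 * np.pi * f0 * t)

    X = np.fft.rfft(x)                # "raw" complex FFT (unscaled)
    k = int(f0 * n / fs)              # bin index of the tone

    raw_mag = np.abs(X[k])            # about a_peak * n / 2
    v_peak = 2 * raw_mag / n          # single-sided amplitude spectrum, volts peak
    v_rms = v_peak / np.sqrt(2)       # single-sided spectrum in volts RMS

    print(raw_mag, v_peak, v_rms)     # ~500, ~1.0, ~0.707

    # Going the other way: a bin value given in V-RMS maps back to a raw FFT
    # magnitude as  raw = v_rms * sqrt(2) * n / 2  (for bins other than DC/Nyquist).
    assert np.isclose(v_rms * np.sqrt(2) * n / 2, raw_mag)
    ```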

  • PXI-5610 RF Frequency Response adjustment

    Hello,
    I have noticed that the RF frequency response adjustment for the 5610 takes a long time (about 2-3 hours) with Cal Exec 3.5. I have tried to keep the PXI chassis in a good environment with some extra fans around, since the adjustment has the legend "RF Freq. Response accumulated Meas. (Normalized to 45°C)".
    I am wondering whether the environment plays an important role here, or whether it needs an update to a particular application inside Cal Executive 3.5.
    I also have this question: does Win7 have any advantage over WinXP in this particular adjustment, or in adjustments in general?
    Br,
    Omar

    Hello Omar_Rdz,
    Thanks for using NI forums! Calibration Executive usually takes a considerable amount of time for some of the PXI modules. The Cal Exec manual says that for the 5610 module it can take up to 240 minutes (around 4 hours). But the temperature is definitely a factor that can affect the performance not only of the calibration procedure but of the whole system. Have you tried executing the calibration when the controller and the chassis are cold? Also try to avoid dust accumulation in both the modules and the chassis, because it can impact the general performance of the system.
    Answering your question regarding the OS: Win7 has better performance compared to WinXP, but it will also depend on what type of controller you are using. In order to give you a better answer, could you please tell me what controller and chassis you are using?
    Regards,

  • Frequency Response vs FFT for measuring the frequency response of an audio output signal

    We have purchased the Sound and Vibration Toolkit and I have some questions.
    From the frequency response example I looked at, you measure the input then the output, and the VI gives you the difference.
    I want to measure the audio frequency response of a radio.
    So all I have is the output; I don't have the audio input to use as a reference.
    How would I measure frequency response with the frequency response VI with only the output audio signal?
    I was also looking at the FFT example to measure the audio signal frequency response.
    But from what I can tell so far, this only does 1 channel; I want to do both channels at the same time.
    Is there a way to do an FFT on 2 channels at once and have them output on the same graph?
    Thanks for any help you can provide.

    Hi,
    I was looking through the examples, and if you look at the SVXMPL_Multichannel FFT (simulated) example, it shows how to take an FFT of multiple signals. You just pass in an array of the waveforms.  If you need to put your signals together you can use the Merge Signals VI, which is located in the Express»Signal Manipulation palette.  Let me know if you need any further help with this issue.
    Have a great day,
    Michael D
    Applications Engineering
    National Instruments
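    Outside LabVIEW, the same "array of waveforms" idea looks like the following NumPy/matplotlib sketch (an illustration only, with made-up test tones): stack the two channels, take the FFT along one axis, and plot both magnitude spectra on the same graph.

    ```python
    # Two-channel FFT at once -- same idea as passing an array of waveforms
    # to the multichannel FFT example, but sketched outside LabVIEW.
    import numpy as np
    import matplotlib.pyplot as plt

    fs, n = 48000, 16384
    t = np.arange(n) / fs
    left = np.sin(2 * np.pi * 440 * t)            # stand-in for channel 1
    right = 0.5 * np.sin(2 * np.pi * 1000 * t)    # stand-in for channel 2

    both = np.vstack([left, right])               # shape (2, n): the "array of waveforms"
    spec = np.abs(np.fft.rfft(both, axis=1)) * 2 / n
    freqs = np.fft.rfftfreq(n, 1 / fs)

    plt.semilogx(freqs, 20 * np.log10(spec[0] + 1e-12), label="channel 1")
    plt.semilogx(freqs, 20 * np.log10(spec[1] + 1e-12), label="channel 2")
    plt.xlabel("Frequency (Hz)"); plt.ylabel("Magnitude (dBV)")
    plt.legend(); plt.show()
    ```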

  • Built-in microphone frequency response graph

    I'm curious as to where I can find a frequency response graph for the built-in mic in the 15" MBP 2.8. I know it's a silly thing to want, but... sometimes I don't have the time or resources to set up an external mic with a flat frequency response, and would just like to use the built-in mic very quickly to "EQ" or "ring out" rooms. I can't find the info on the built-in mic anywhere. Any suggestions?

    Have you ever used the microphone? Have you ever actually used a laptop to do this?
    I don't have a graph, but from years of working with audio I can tell you that it doesn't have the frequency response, sensitivity and directionality to do what you want.
    A nice external mic and portable interface like the iMic would not only do a better job but would look a lot more professional than running around trying to point the cheap microphone of a laptop in the right direction.

  • FS7 audio Frequency Response 50Hz - 20KHz?

    Hello guys, I'd like to point out an issue that I couldn't find on the forum regarding the audio frequency response. I've always worked with pro camcorders that recorded 20 Hz to 20 kHz, which is the standard range we humans can hear. So I was surprised when I saw in the FS7 manual that the audio frequency response is 50 Hz to 20 kHz. Is this a typo? Or is the camera actually limited to 50 Hz for the lower frequencies? I would assume that the internal mic would be limited to that range, yes, but not the internal recording of the sound? Why limit the camera to 50 Hz? Any light on the subject would be appreciated. Thanks!

    The tests I posted are for the mic inputs, not the line inputs. I did test both, but since I often run and gun and cannot support all the extra gear needed to bring the FS7 mic inputs up to par with Sony's EX3 mic inputs, I'm requesting that the high-pass filter added with firmware 2.0 be removed in future updates. We have the wind filter already; no additional filter is needed. While Genelec monitors are pretty good, they're not noted for deep bass response. Most studios use the ubiquitous NS10, which is only good to 80 Hz, which is why few people notice the deficiency. The custom-built system in my screening room has a -3 dB point of 7 Hz. I noticed that even piano sounds thin on this camera. That doesn't surprise me, since the rolloff starts at 200 Hz. All I'm asking for is the same quality I'm used to with my EX3's mic inputs: only 0.5 dB down at 20 Hz and no thinning of the midrange. When I do large-budget orchestra shoots (rare) I use 24/96 sampling, 8 channels and large-diaphragm studio condensers. My recordings earned critical praise from Peter Aczel at The Audio Critic. I was a sound engineer for 4 decades before I got my first digital camera. In addition to that, I was consulted as an independent expert on infrasonics for chapter 17 of Ethan Winer's THE AUDIO EXPERT, published by Focal Press; I was responsible for the information on subwoofers and how they operate. Acoustic and speaker design share a long history with me, and my 'day job' is amplifier design, modification and repair. I'm intimately aware of what's going on in the signal path of an audio system, and when modern digital systems deviate intentionally from DAT quality, it really bothers me. I made a big deal about this in 2007 with the Sony HVR-V1U, and Sony must have listened, because they got it right with the EX1 and EX3 cameras. But sadly, they've reverted to these games with the FS7, though not as badly as in 2007.

  • Microphone frequency response function

    Hi all,
    I am trying to do an impact test with 3 types of sensors to investigate surface properties of a material.
    The first sensor is the sensor in the modal hammer.
    The second sensor is a geophone , whilst the last sensor is an air-coupled sensor in the form of a microphone.
    Aside from the microphone, which is directly connected to the laptop by USB, the instrumented hammer and the geophone are connected to the NI 9233, which is connected to the NI USB-9162, which finally connects to the laptop.
    This hardware doesn't support analog triggering therefore software triggering is used.
    I am using a vi example from http://www.ni.com/example/28438/en/ 
    I have successfully set up the LabVIEW VI to read the raw data coming in from all the sensors and to obtain the frequency response of the geophone due to the stimulus signal created by the impact hammer.
    I have tried to obtain the frequency response of the microphone due to the modal hammer; however, no results show up in the graph for the frequency response of the microphone.
    My question is this: what is going on, and how do I fix it so that I can obtain the frequency response of the microphone?
    I have attached my vi for your consideration.
    Attachments:
    Y2014M01D28 IRnMicFRFRecAll .vi 190 KB

    I cannot fix your VI because I do not have DAQmx or the SVFA toolkit. What I have attached is a simulation which shows some of the concepts.
    The left side simulates acquisition of two channels (hammer and geophone) via the DAQ device and one channel of sound (microphone).  Don't worry too much about the details; I just threw this together quickly. The hammer signal is a square pulse, the geophone signal is one cycle of a sine wave with noise, and the microphone signal is one cycle of a triangle wave plus noise. Each of the pulses is delayed from the start of the acquisition by a different amount to represent the transmission delays of the sound and vibration.  The values chosen are arbitrary and do not simulate any physics.
    I used the sampling rate, block size, and duration controls from your VI, so the numbers of samples and the sampling rates should be the defaults from your VI. The three graphs on the left show the three signals as generated. Note that the Microphone graph has X-axis autoscaling turned off. Also, the data is all in waveforms or arrays; I avoid the use of the Dynamic Data Type produced by Express VIs because it effectively obscures the data structure.
    The right side shows one way to do the triggering. I used Basic Trigger Level Detection.vi from the Waveform Monitoring palette and the Get Waveform Subset.vi from the Waveform palette. Note that there are separate trigger VIs for the DAQ channels and the sound channel. Because they are not started at the exact same time and they do not have the same dt (= 1/sample rate), the waveforms cannot be combined to use a single trigger. The geophone signal is synchronized with the hammer signal so one trigger from the hammer can be used to get both subsets.
    Note that with the default delays the microphone pulse occurs 30 ms after the geophone pulse but in the subsets it occurs about 4 ms before the geophone. This is due to the independent triggering.
    Since you do not have a common signal or trigger for both devices (DAQ and sound), you cannot know exactly what the timing relationship is between them. On every run the differences in the start times will vary. So you will need to do some kind of calibration to determine the time delays.  I am thinking of some kind of periodic stimulus which will produce several pulses to both the geophone and microphone. The first pulses will have indeterminate delays but subsequent pulses should have reproducible delays. The relative spacing and directions during the calibration runs should be the same as the hammer position for the real experiment.
    Lynn
    Attachments:
    Sound and hammer sim.vi 28 KB
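    The software-trigger idea described above (detect the level crossing, then take a waveform subset around it) can also be sketched in a few lines of Python; the threshold, pre-trigger time, and window length below are arbitrary placeholders.

    ```python
    # Software trigger sketch: find the first level crossing in a record, then
    # extract a fixed-length subset starting just before it -- the same idea as
    # the trigger-detection / waveform-subset pair, outside LabVIEW.
    import numpy as np

    def trigger_subset(signal, fs, level, pre=0.005, length=0.050):
        """Return the subset starting `pre` seconds before the first crossing."""
        above = signal >= level
        crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
        if crossings.size == 0:
            return None                      # no trigger found in this record
        start = max(crossings[0] - int(pre * fs), 0)
        return signal[start:start + int(length * fs)]

    fs = 50000
    t = np.arange(int(0.2 * fs)) / fs
    hammer = np.where((t > 0.08) & (t < 0.081), 1.0, 0.0) + 0.01 * np.random.randn(t.size)

    pulse = trigger_subset(hammer, fs, level=0.5)
    print(None if pulse is None else pulse.shape)
    ```

    Because the DAQ channels and the USB microphone run on different clocks, each stream would need its own trigger, exactly as in the attached simulation.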

  • Flat Frequency Response

    I probably shouldn't have to be asking this question since I charge people for my obviously amateurish recording abilities but it's one that I've never had explained to me and one I need to know the answer to......
    Let me set the question up this way:
    When you get in the car and pop in a professionally produced CD, most people crank the treble control up to 6-10 (on a scale of 10) and the bass up to 4-10, depending on the factory speakers, the type of material, and whether or not they care what their music sounds like. When you get home and you're listening to the much higher-end home stereo, still listening to that professionally produced album, you still reach for those treble and bass knobs and crank them up several notches, or if you have a graphic EQ you tweak out a smiley face.
    There's so much emphasis in the recording world about getting a flat frequency response out of your room with absorbers and bass traps and spreading around the reflections with deflectors, etc...etc..etc..., that we spend thousands of dollars on this stuff and some measuring software to make sure that it's flat. Then we use that flat response to produce music that sounds great and expect that to translate to those cd players and home stereo systems.
    (I'll additionally preface my question by saying that I've had no problems getting my music to translate from my home studio to any other playback system, but I'm a little confused about what's going on.)
    Now finally my question(s), when we reach for those treble and bass knobs on our car and home playback systems, are we really just trying to make up for the lack of bass and top end in those systems so that we too can achieve a flat frequency response and make the music sound good on whatever system?     or
    Do we as listeners actually prefer the smiley-face frequency response in music, and are we taking a CD that itself has a flat frequency response and making a smiley face out of it so that it sounds good to our ears? (Please don't give me a material/genre answer.)
    The reason I ask is because I have to put a graphic eq on my Truth 2031A monitors to make the professional stuff sound good through them, and then I in turn mix my music to sound the same for whatever material/genre of course. (I'm not really interested in any monitor bashers or I would've asked this over at Gearslutz.)
    So again, rephrased: do the masses think music sounds good when it has a flat frequency response or the smiley face, and if it's the latter of those, how are we supposed to achieve that when our home studio setup is producing a flat frequency response? Do we tune our monitors with EQs like I do?
    Additionally, I understand that when we're talking about flat frequency responses in rooms, we're talking about sending sine waves through a system across a range of frequencies and measuring them so that we can detect any over-emphasis or deficiencies in the room, so maybe this question is aimed more at monitor tuning.

    If you look up Fletcher and Munson in Google, you might begin to get a bit of the start of an idea of why this isn't quite as straightforward as it seems.... and I'm not sure that I can give you a complete answer either, although I can give you a few connected but slightly random things to ponder, wearing my acoustician's hat:
    The fundamental problem is that when things are quieter (and less distorted, incidentally) our ears get more sensitive to the midrange frequencies, and if we listen to music that way, it invariably sounds as though the bass and treble are out of balance. In cars it's slightly different though; the frequency response of whatever's in there generally tends to be anything but flat - and often over-emphasises the midrange anyway. Treble tends to get absorbed very easily in upholstered cars, and since most car speakers don't have anything like acceptable tweeters in as a rule, it's not surprising that people want to increase the treble. As for the bass - well I'm always turning that down personally, but I know what you mean in principle!
    If by a 'commercial' CD you mean one where the vocal is prominent, then yes I can easily imagine why you might as a matter of course want to increase the response at the extremes - it makes sense if you think about it. The mid-range vocal is prominent and probably compressed, so its average level is louder than the backing - this helps it to stand out. But also it distorts the overall time-based response - the backing may well be balanced so it's okay on its own, but that doesn't always translate if you have the wrong vocal settings applied, or at a minimum, applied unsympathetically. And some voices make this significantly worse; for instance Sealion Dying (AKA Celine Dion) makes the most appalling racket in the midrange, and you'd definitely need less of that!
    So really I'd say that it's not a Bass/Treble issue, but a midrange one. If you look at commercial CDs in general, you tend to find that the energy distribution is pretty even over the whole audio band, which implies that it falls off at 6dB/octave if you look at it in Audition (this is an energy/Hz thing), but in reality most CDs these days are mixed a little brighter than that - more like a -3dB/octave slope down from about 1kHz, and that's partly to compensate for a lot of things - some of which are cars... You do have to watch out for the distortion issue though - most people don't realise, but you are able to tolerate rather higher levels of non-distorted sound than anything with significant distortion levels in it, and if that distortion is in the midrange, then you'll want it quieter anyway. So decent, over-rated under-run PA systems always sound cleaner and louder but you should beware - they can damage your ears just as much, if not more.
    Do car interiors themselves increase the chances of midrange boost occurring?  I think it's a pretty safe bet that they do, as a rule, simply because of the size of them, and the treble problem I already mentioned. And if you are trying to compensate for too much midrange, then the rest follows. Most domestic replay systems these days seem to be midrange heavy to me as well - I haven't heard anything cheap recently that had anything like a flattish response - and they really don't suit the rooms they are in either.
    If you want to listen to material as it really should be, then you need to experience it live first, I'd say, and then do a direct comparison with what you can hear in your monitors. I'm fortunate - I can do this quite regularly with a variety of material. Do I tend to leave things as flat as they are recorded? Well, it depends on what it is. If it's in any way classical, then sometimes I look carefully at the bass balance, but generally I leave the rest alone. Everything else these days I just get to sound good - and that can mean all sorts of tweaks, depending on all sorts of things. More and more though, I've come to the conclusion that too much midrange isn't necessarily a good thing - but that's mainly because of the general lack of good reproduction equipment around these days.
    As for monitors and flatness - well that's not really an issue for most people, compared to getting their listening environment correct. If you have a pair of cheap monitors in a good room, the chances are that the results will sound better than an expensive pair in an uncorrected room - despite what all the monitor freaks on gearslutz might say. These days, even the cheaper ones can sound quite respectable. But flatness isn't an issue with monitors really - a decent impulse response, and low distortion are far more important. Chances are that if a manufacturer has got this right, the monitor is going to be suitably 'revealing' anyway - which is what a monitor is supposed to do.
    The one thing you do not do though, is EQ the feed to your monitors - that went out of fashion almost as soon as it came in - fortunately. You fix the room so that it's more truthful. If you EQ the monitor feed it will inevitably only sound good in one place in the room, and that's no use to man nor beast. The only decent things that proper room correction systems can do is equalise the immediate time response to take account of what's actually between the monitors and you - which if done properly can improve stereo imaging no end.
    So answers? Well not really. But at least I've explained a few (but not all) of the issues.

  • Frequency response vi and writing to lvm files

    I'm having trouble writing the magnitude Y/X of the Frequency Response VI to an .lvm file.
    I don't get any samples in my file when I run the program.
    Any idea what I'm doing wrong here?
    Attachments:
    testernw.vi 381 KB

    Hi,
    If you place indicators immediately before and after the frequency response express VI, what do they show?  Are there any differences between the input data before and after the conversions are applied?  I tried putting together a simple VI that takes two simulated signals, inputs them into the frequency response graph, and then outputs it to a file.  It appears to work fine, as long as the two inputs are at a different frequency.
    What version of LabVIEW are you using?  Where specifically are you reading the 0 values?
    Regards,
    Lauren
    Applications Engineering
    National Instruments
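    As a sanity check, it can also help to reproduce the magnitude calculation and the file write outside the Express VIs. The Python sketch below computes |Y/X| from two records and writes tab-delimited frequency/magnitude pairs; the data and file name are placeholders, and the output is a plain-text stand-in for the .lvm layout, not the LVM header format itself.

    ```python
    # Sanity-check sketch: compute the magnitude of a frequency response (Y/X)
    # from two records and write frequency/magnitude pairs to a tab-delimited file.
    import numpy as np

    fs, n = 51200, 8192
    x = np.random.randn(n)                            # stimulus record (placeholder)
    y = np.convolve(x, [0.5, 0.3, 0.2], mode="same")  # pretend DUT response

    win = np.hanning(n)
    X = np.fft.rfft(x * win)
    Y = np.fft.rfft(y * win)
    mag = np.abs(Y / X)                               # magnitude of the FRF, Y/X
    freqs = np.fft.rfftfreq(n, 1 / fs)

    with open("frf_magnitude.txt", "w") as fh:        # placeholder file name
        fh.write("Frequency (Hz)\tMagnitude\n")
        for fr, m in zip(freqs, mag):
            fh.write(f"{fr:.3f}\t{m:.6f}\n")
    print("wrote", len(freqs), "rows")                # if this is 0, nothing went in
    ```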
