FFT Amplitude to Sound Pressure Level (SPL)

Hi everyone,
I have a problem with my program. Basically, I have a chirp signal that is output through my DAQ and fed to my speaker.
The microphone senses whatever comes out of the speaker, and the DAQ acquires it.
I want to display SPL vs. frequency.
My method is to take the FFT of the chirp signal received from the microphone and display its frequency response.
My questions are:
1. Will a plain FFT give me an output of SPL vs. frequency?
2. If not, how do I convert the FFT amplitude values to SPL in dB?
3. Why are some of my FFT output amplitudes negative?

The FFT is a mathematical tool that outputs complex values containing amplitude and phase information at each frequency. If you want to obtain the amplitude of your signal as a function of frequency, it is easier to use the "Power and Phase Spectrum" VI in the Signal Processing -> Spectral palette.
Marc Dubois
HaroTek LLC
www.harotek.com
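In code, the conversion the original poster is asking about might look like the sketch below (Python stands in for the LabVIEW diagram; the microphone sensitivity is an assumed, separately measured value, without which the dB figures are only relative, not absolute SPL):

```python
import numpy as np

def fft_to_spl(v, fs, sensitivity_v_per_pa):
    """Microphone voltage record -> SPL (dB re 20 uPa) per frequency bin.

    sensitivity_v_per_pa (V/Pa) is an assumption: it must be measured or
    taken from the mic's calibration sheet for absolute SPL.
    """
    p = np.asarray(v, dtype=float) / sensitivity_v_per_pa  # volts -> pascals
    n = len(p)
    window = np.hanning(n)
    # Single-sided amplitude spectrum, corrected for the window's coherent gain.
    spec = np.fft.rfft(p * window)
    amp = np.abs(spec) * 2.0 / window.sum()   # peak pressure amplitude per bin (Pa)
    rms = amp / np.sqrt(2.0)                  # RMS pressure per bin
    p_ref = 20e-6                             # SPL reference pressure, 20 uPa
    spl = 20.0 * np.log10(np.maximum(rms, 1e-12) / p_ref)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spl
```

This also bears on question 3: the FFT magnitudes themselves are never negative; dB values go negative whenever a bin's RMS pressure falls below the 20 µPa reference.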

Similar Messages

  • I'm looking for a .vi for sound pressure level meter (SPL)

    Hi,
    I'm looking for help implementing an SPL (sound pressure level) meter in LabVIEW. If anyone has a VI or diagram, or knows how to proceed with this project, please tell me about it.
    According to the project, I need a microphone and my laptop with LabVIEW to build the VI.
    Please help.
    Thanks

    Here's an example program:
    This VI simulates the functionality of a Radio Shack Analog Sound Pressure Level (SPL) Meter. This VI requires a data acquisition board, Radio Shack SPL Meter and an RCA cable. Connect one end of the RCA cable to the SPL Meter's output, and then splice off the other end and connect it to channel 0 of the DAQ board. The connection to the DAQ board must be made differentially.
    The output of the SPL meter is a raw waveform that is converted by the program into a decibel (dB) reading. The front panel shows a dB offset based on a range, just like the actual SPL meter. The front panel also displays the overall dB reading.
    Attachments:
    SPL.llb ‏88 KB
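For reference, the raw-waveform-to-dB step that the example program performs can be sketched like this (Python; `volts_per_pascal` and `db_offset` are hypothetical stand-ins for the meter's calibration and range switch, not values from the attached VI):

```python
import math

def overall_db(samples, volts_per_pascal, db_offset=0.0):
    """Overall SPL from a raw waveform: RMS pressure -> dB re 20 uPa.

    volts_per_pascal and db_offset are assumptions standing in for the
    SPL meter's calibration and range-switch offset.
    """
    rms_v = math.sqrt(sum(s * s for s in samples) / len(samples))
    rms_pa = rms_v / volts_per_pascal
    return 20.0 * math.log10(max(rms_pa, 1e-12) / 20e-6) + db_offset
```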

  • Diagram or vi for Sound Pressure Level Meter (SPL)

    I need help implementing a Sound Pressure Level (SPL) meter with LabVIEW 7.1 or 6.3. I must use a microphone and a laptop for my project, and I need a diagram or a ready-made VI!
    How can I make an A, B, C weighting filter VI for my Sound Pressure Level meter project?
    If one already exists, or you can give me any help, please send it to me: [email protected]
    Thanks

    Search the archives. A similar question about the weighting filters was posted within the past month or two, if I recall correctly. I am not aware of any ready-built VIs, but the filter specs are published (Google A-weighting).
    Also be careful with the frequency response of inexpensive microphones. They can skew the results substantially if you do not have some way to measure and compensate for the response.
    Lynn
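The published A-weighting curve Lynn mentions can be computed directly. A sketch of the standard analog-form magnitude response (IEC 61672), normalized to 0 dB at 1 kHz:

```python
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), IEC 61672 analog form."""
    f2 = f * f
    num = (12194.0 ** 2) * f2 * f2
    den = ((f2 + 20.6 ** 2)
           * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    ra = num / den
    # +2.00 dB shifts the curve so that the gain at 1 kHz is 0 dB.
    return 20.0 * math.log10(ra) + 2.00
```

Apply this gain per frequency bin after the FFT (or design a time-domain filter matching it) to get dBA from dB.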

  • Sound pressure reporting

    Hi,
    We are new (actually, still evaluating) LabVIEW customers.  I know nothing about LabVIEW, and less about the technical intricacies of the project that I have to complete.  Basically, what I need to do is acquire and capture up to three streams of sound pressure data using B&K 4944-A sound pressure mics attached to an NI 9233.  I think we can do this with SignalExpress and the S&V Assistant.  In essence we are replacing three B&K 2209 sound meters with B&K 4136 pressure mics capturing one or more percussive sound events.  The new hardware is already purchased.  We are trying to get the capture and reporting going.
    My need is:
    1.  Capture a continuous stream of log data beginning when recording starts and ending when it stops.  We need to trigger the start of the actual logging on a rising slope above 110 dB, and capture data for 0.25 seconds after the trigger is met (I have been unable to capture any data yet, probably because I do not know what values to set in the trigger).  We need the trigger to remain armed continuously until the recording is stopped.
    2.  Reporting.  For each triggered log event above, we need to extract and report the peak sound pressure level (dB) for the three data sources logged.  Tabled data is OK; graphs and charts are not necessary.
    3.  Would like to (automatically, if possible) store the results from #2 above in a database to evaluate potential product changes.  For now we can use Visual Pen and Paper.
    Can anyone help with a starting point?

    Hello,
    Triggering based on the level of the signal itself is known as an analog start trigger. Since the NI 9233 does not have analog or digital triggering, we could use another module (like a NI 9205) to start the task that the NI 9233 is in.
    How Can I Trigger a NI 9233 or 9234?
    http://digital.ni.com/public.nsf/allkb/4859504F14AF68DB8625721100640F26
    If you do not want to buy another module, you could try post-processing the data to ignore any data that comes in before this level.
    I hope this helps.
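The post-processing route suggested above might be sketched like this (Python; the calibration value and the re-arm behavior are assumptions for illustration, not NI's implementation):

```python
def extract_trigger_events(samples, fs, threshold_db, capture_s=0.25,
                           v_per_pa=1.0):
    """Post-process a continuous record: find crossings of an SPL threshold
    and return the capture_s-second slice starting at each one.

    v_per_pa is an assumed calibration; the threshold comparison is done on
    instantaneous |pressure| converted from dB re 20 uPa.
    """
    thresh_pa = 20e-6 * 10.0 ** (threshold_db / 20.0)
    n_capture = int(capture_s * fs)
    events = []
    i = 0
    while i < len(samples):
        if abs(samples[i] / v_per_pa) >= thresh_pa:
            events.append(samples[i:i + n_capture])
            i += n_capture          # re-arm only after this capture window
        else:
            i += 1
    return events
```

The peak SPL for item 2 of the request is then just the maximum of each returned slice, converted to dB.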

  • Sound pressure spectra with SVT

    Hey all,
    I have a project which I have been having a bit of trouble implementing. I'm performing several measurements at 48 kHz/24-bit of some sound events. My requirements for the sound pressure spectra analysis are as follows:
    500 - 20 kHz frequency bandwidth with a 25 Hz frequency resolution.
    Peak Hold averaging with 50% overlap processing and Hanning windowing applied to the time records.
    Anti-aliasing filtering. 
    A - weighting. 
    I have the sound and vibration toolkit and have implemented as follows:
    1. Read wav
    2. Convert from Volts to dB via SVL Scale Volts to EU.vi. (I used a calibrated signal of 94dB at 1kHz to find my mic sensitivity). 
    3. A-weighting using SVL A,B,C Weighting Filter.vi
    4. Perform analysis using SVL Zoom Power Spectrum.vi.
    Few Issues:
    I don't know how to implement the anti-aliasing filter.
    Is the method I have used OK for measuring sound spectra? Or should I perform the FFT first and then convert to dB?
    I have tried to implement the same analysis in MATLAB, and although I get the same FFT profile, my dB levels are different: the 16 kHz noise comes out at almost the same level as the 8 kHz noise. I'm not sure what is happening there, but I want to concentrate on using LV for now.
    I don't have any guidance available or experience with this sort of analysis. I have trialed my program on a sample wav file which has two separate distinct noise events, one at 8 kHz and another at 16 kHz. My FFT shows the 16 kHz event at a much higher sound level (95 dB) than the 8 kHz one (80 dB), but when I listen to the wav through a pair of studio monitor speakers, the 8 kHz is much more audible. Wouldn't the A-weighting filter tend to make the 8 kHz noise more prominent in the FFT?
    Any suggestions on this topic are appreciated.
    Thanks!  

    Hi David,
    If you want to enable the enhanced alias filter, you can use a DAQmx channel property node. I would recommend placing a DAQmx Channel property node on the block diagram and then going to Analog Input » General Properties » Advanced » Enhanced Alias Rejection Enable. See if that works and let me know.  
    Jake H | Applications Engineer | National Instruments
    Attachments:
    Enhanced Alias Rejection.png ‏3 KB
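The sensitivity step in item 2 of the question (finding mic sensitivity from a 94 dB, 1 kHz calibration tone) can be sketched as follows. 94 dB SPL re 20 µPa is almost exactly 1 Pa RMS, which is why calibrators use that level:

```python
def sensitivity_from_cal(cal_rms_volts, cal_db_spl=94.0):
    """Mic sensitivity (V/Pa) from a calibrator tone of known SPL.

    cal_rms_volts is the RMS voltage measured while the calibrator is on.
    """
    cal_pa = 20e-6 * 10.0 ** (cal_db_spl / 20.0)  # ~1.00237 Pa at 94 dB
    return cal_rms_volts / cal_pa

def volts_to_pascals(v, sensitivity):
    """Scale a voltage record to engineering units (Pa)."""
    return [s / sensitivity for s in v]
```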

  • How to generate .wav files from sound pressure data

    Hi,
    I have sound pressure data (as dB values) vs frequency data.
    How can I create a playable .wav file based on above?
    Note: I do NOT have sound pressure vs time data
    I am working in LV 7.1 FDS on Win2000.
    If really required, I can incorporate VIs from sound and vibration toolkit (version 1.0).
    However, I would prefer a solution that does not need me to do that.
    Thanks in anticipation,
    Gurdas
    Gurdas Singh
    PhD. Candidate | Civil Engineering | NCSU.edu

    All,
    With some help from NI-India, we managed to build a nice sound player and .wav saving utility.
    I have attached the VI sent by NI-India. The key component which was missing in my VI was "resample waveform".
    Note: For some reason, I could not use the attached VI to generate sound, but I did use the concept to successfully read my sound pressure data and output it to the speaker (and also save it as a .wav file that can be played in Windows Media Player). Also note that I am now generating sound from sound pressure vs. time data. I have not tried generating sound from sound pressure vs. frequency data.
    I would love to have the following answered:
    1) What data does the Sound Write VI expect? Is it sound pressure values? In what units (N/m2 or dB or dBA)? I think it expects sound pressure data and that the units are not relevant; the reason is my next question.
    2) I found that until I multiply the sound pressure data by a very large number, say 10e7, the sound is hardly audible (even when the volume was programmatically set to the maximum, i.e. 65535). The sound QUALITY does not change when I change this multiplier.
          2.a) Does that mean it's only the relative difference between wave data points that affects sound quality? If yes, then I believe the sound pressure data can be in any units.
          2.b) Is this huge multiplier expected, OR did I do something without really knowing what it meant?
    Thanks,
    Gurdas
    Gurdas Singh
    PhD. Candidate | Civil Engineering | NCSU.edu
    Attachments:
    Sound from FFT (LV 8).vi ‏165 KB
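Gurdas's questions 1 and 2 both come down to normalization: a WAV writer only cares about sample values relative to full scale, so the absolute units are irrelevant, and the "huge multiplier" is just manual normalization. A minimal sketch using Python's standard wave module (not the LabVIEW Sound Write VI):

```python
import struct
import wave

def write_wav(path, samples, fs=44100):
    """Write pressure-vs-time samples (any units) as a playable 16-bit WAV.

    The absolute units do not matter: the data is normalized to full scale
    first, which is exactly what the large multiplier was doing by hand.
    """
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(fs)
        frames = b"".join(
            struct.pack("<h", int(32767 * s / peak)) for s in samples)
        w.writeframes(frames)
```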

  • System Sound Output Level Reset After Reboot ?

    Whenever my iMac Core Duo is rebooted, the system sound output level always gets reset to a value of 0.56 (-24 dB), as shown in the utility "/Applications/Utilities/Audio Midi Setup". I have to increase the system sound output level to the desired value of 0.75 (-12.75 dB) to match the built-in amplifier of my Altec Lansing FX6021 speaker system. The desired setting lasts until the next reboot. Is there any way I can fix the system sound output level at the desired level between reboots? Why do I have to be in such a constant struggle with the iMac for such a simple audio configuration?
    Audio Hijack Pro 2.6.7 without the Soundflower 1.1 virtual audio device has been installed. The audio hijack function of this utility has not been used at all. Thank you for any pointer.

    Thanks for the confirmation that my iMac is not an isolated case. In case you are not willing to use Audio Hijack Pro, just use Automator to create a script that sets the system volume (in Automator's Library pane: Applications > System > Set System Volume) and save it as an application. Set this application as a Login Item for your account through the following menu path:
    System Preferences > Accounts > Login Items. Whenever you log in with this account, the script will be invoked automatically by Mac OS X, and the sound levels for output, input and alert will be set to your preferred levels. For older versions of Mac OS X (10.3.9 and earlier) without Automator, the same effect can be achieved with AppleScript; use Script Editor instead of Automator to create the script and set it as a Login Item. This workaround has been working consistently on my iMac Intel Core Duo.
    iMac Intel Core Duo 20" 2 GHz 2 GB RAM   Mac OS X (10.4.7)   Firewire External Drive, Belkin Wireless Router, iPod nano

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here its right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here must be 30,00 not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue is different depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. Example would include loading data at "quarter" in historical years, but at "month" in the current year, it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it for a result of 10, the aggregate command would recalculate based on january and february for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation.

  • Sound pressure signal as input

    I want to use the SV Time-Varying Loudness function in the Sound and Vibration suite. The output of this function is exactly what I require for my speech analysis.
    I have a speech sample recorded from the microphone, which is obviously a voltage signal. This function requires a sound pressure signal.
    The voltage signal would be proportional to the sound pressure signal, right? How do I convert this voltage signal to a sound pressure signal (Pa)?
    I'm not even sure if I can use this function. I'm trying to extract features from the speech signal. The algorithms used for this are linear predictive coding, perceptual linear predictive coding and mel-frequency cepstral coefficients.
    I already implemented linear predictive coding in LabVIEW. I wanted to try out perceptual linear predictive coding because it is considered to be robust in noisy conditions.
    This SV Time-Varying Loudness function does the exact thing I want to do, but I don't know if it is applicable to speech signals.
    The help says this VI processes 2 ms blocks of sound pressure data, but speech signals should be processed in 10 to 15 ms blocks. Please help.

    Your microphone should have calibration tables to relate the output voltage to an audio quantity like dB or perhaps even Pa.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • TA24264 so if I enable Sound Check and then increase the volume on an individual song will that song revert to playing at the sound check level??

    so if I enable Sound Check and then increase the volume on an individual song will that song revert to playing at the sound check level??

    If that's the best you can do why don't you get on another forum?

  • Internal sound speaker level of the macbookPro locked after installation.

    Following the installation, I can no longer adjust the internal speaker sound level of the MacBook Pro. It is locked.

    OK... see the discussion "Sound locked; no Internal Speaker option". That solves this problem (success with inserting a headphone jack and taking it out).

  • Can I length-stamp an image triggered by an encoder into a low-level buffer

    Hello.
    I am triggering images of a fast moving web into a low level circular buffer. Can I stamp the image with the absolute encoder value at which it was captured?
    Thanks for any help.
    Rob.

    Hello Rob,
    I have almost the same application. It is a web inspection system where I need to detect defects with a linescan camera. The downweb (longitudinal) coordinate of the defect must be exchanged with a robot that cuts out the specific part. Since the robot is placed 25 m further downweb, we synchronize both systems with a counter board that counts encoder pulses of 0.25 mm each. Every metre, the robot is synchronized with the vision system by exchanging the (line-trigger) encoder position at the time of the metre pulse's rising edge, to re-reference both systems.
    The defect position within a page image is the encoder counter value at the beginning of that specific page, plus the row position of the defect in the image translated to encoder pulses. Therefore I need to know the encoder value at the time of capturing that particular page very accurately. The frame grabber board does provide a (software) signal that generates a callback. Since we do not use a real-time OS, there is an unpredictable delay between the actual start of capturing the first line of a page and the execution of the callback in which I can read the actual encoder position. The accuracy of the positioning mainly depends on this delay.
    Therefore I am very interested if you already found a solution for your problem, if needed with additional hardware?!?
    Looking forward to your reply!
    Robbert
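Robbert's position arithmetic can be sketched in one line (Python; `mm_per_line`, the web travel per camera line, is an assumed parameter for illustration, not a value from the thread):

```python
def defect_encoder_count(page_start_count, defect_row, mm_per_line,
                         mm_per_pulse=0.25):
    """Downweb encoder count of a defect: the count latched at the start of
    the page, plus the defect's row offset converted to encoder pulses.
    """
    return page_start_count + round(defect_row * mm_per_line / mm_per_pulse)
```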

  • Why is it that an amplitude-scaled sound does not play louder than the original sound using Play Waveform?

    I don't know if I have posted this on a correct location (kinda new here).
    I am trying to demonstrate the effect of multiplying a waveform by a constant. I have a sound file read by Sound File Read Simple.vi. Then I try to scale the amplitude of the signal by a constant number. I've tried three methods for this: Numeric Multiplication, the Scaling and Mapping express VI, and the scaling and conversion VI of Signal Express. The waveform graph shows that the amplitude of the scaled signal is indeed scaled, but when it is played through the Play Waveform VI, the sound level is still the same, no matter how much I scale the input. Does anyone know why this is so? Also, I really wanted to show that scaling an audio waveform results in a louder waveform. How do I do this? As much as possible, I want to use simple VIs, as the class is an introduction to signal processing and the students (and I) are still finding our way into LabVIEW (we used to use MATLAB in the course).
    Attachments:
    scaling.vi ‏69 KB

    JaymeW wrote:
    Hi kirkbardini,
    According to this article about Low Volume From Sound VI, you can use a scalar multiple to increase the volume of a sound saved to a .wav file, so you should be able to do this before playing it.  Does that not work for you?
    That information is for LabVIEW 7.x and hence outdated. As mentioned before, LabVIEW does a normalization before sending data to the sound card. As you know, sound cards are most often 16 or 24 bits, and in most cases the data input to a sound card is of DBL type. After normalization, the data is converted so that the floating-point value 1 represents full scale on the data sent to the sound card's DAC. To get any further, I suggest kirkbardini take the time to send us his/her current code. Then we may solve things from there.
    Besides which, my opinion is that Express VIs (Carthage must be destroyed) should be deleted.
    (Sorry no Labview "brag list" so far)
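The normalization described above can be illustrated with a sketch (Python; this mimics the behavior the answer describes, it is not LabVIEW's actual code). With normalization, any overall scale factor cancels out; with plain clipping at full scale, scaling past 1.0 only distorts rather than getting louder:

```python
def to_full_scale(samples, normalize=True):
    """Map floating-point audio to 16-bit integer samples.

    If the driver normalizes, an overall scale factor cancels out, so
    multiplying the waveform by a constant does not change the loudness.
    """
    if normalize:
        peak = max(abs(s) for s in samples) or 1.0
        samples = [s / peak for s in samples]
    # Clip to [-1, 1] and convert to 16-bit integer full scale.
    return [int(32767 * max(-1.0, min(1.0, s))) for s in samples]
```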

  • Add various amplitudes to sound graph

    Hi
    I have the following code that computes a sound graph with a fixed amplitude for a number of waves. I need to be able to control the amplitude of each wave, but I am not sure how to amend the code to produce that effect. There is also an FFT function that converts the amplitude-vs-time graph to a frequency-domain graph, but I am not concerned with that for now; I only want to change the amplitude for n waves.
    import java.awt.*;
    import java.awt.event.*;
    public class FFTv1 extends Frame
       implements ActionListener, WindowListener {
      final int maxDim = 1024;
      int width = 400, height = 300;
      Button b;
      Complex x[];
      drawPanel pdraw;
      //float sam[] = {5, 10, 1};
      public FFTv1() {
    // The title of the frame is "FFT".
         super("FFT");
    // Use the BorderLayout layout manager.
         setLayout(new BorderLayout());
         addWindowListener(this);
    // Instantiate a button with label "FFT" which when pressed takes the FT.
         b = new Button("FFT");
    // Instantiate a panel object p where the button will be placed.
         Panel p = new Panel();
    // Place the panel p in the South position of the frame.
         add("South", p);
    // Add the button to the panel p.
         p.add(b);
    // The button listens for click events.
         b.addActionListener(this);
    // Instantiate an array x of type Complex with a length of maxDim.
         x = new Complex[maxDim];
    // Initialise the test function to zero at all sampling points: x[i] = (0,0) for i = 0 to N-1.
         for (int i=0; i < maxDim; i++) {
            x[i] = new Complex(0.0f, 0.0f);
         }
    // The outer loop is over the N sample points.
    // The inner loop is over the n number of graphs - in this case 2.
    // CAN AMPLITUDE BE ADDED BELOW??
         for (int i=0; i < maxDim; i++) {
            for (int j=10; j < 30; j+=10) {
               float arg = (float)(j*i * Math.PI/maxDim);
               x[i].setReal(x[i].getReal()+(float)Math.cos(arg));
            }
         }
    // Instantiate a panel object pdraw where the graph will be drawn.
         pdraw = new drawPanel(x);
    // Place the panel pdraw in the Center position of the frame.
         add("Center", pdraw);
         setBounds(100, 100, width, height);
         setVisible(true);
      }
      public void actionPerformed(ActionEvent e) {
    // On clicking the button, perform the FT on the function contained in complex array x.
    // Instantiate object fft of type FFT, passing the array x to the constructor.
         FFT fft = new FFT(x);
    // The FT input data x is overwritten.
         fft.Execute();
    // The drawing panel pdraw needs to be repainted.
         pdraw.repaint();
      }
      static public void main(String argv[]) {
    // Instantiate an object of type FFTv1 which contains the GUI elements and thus drives the overall program.
         new FFTv1();
      }
      public void windowClosing(WindowEvent wEvt) {
         System.exit(0); // exit when the system close box is clicked
      }
      public void windowClosed(WindowEvent wEvt){}
      public void windowOpened(WindowEvent wEvt){}
      public void windowIconified(WindowEvent wEvt){}
      public void windowDeiconified(WindowEvent wEvt){}
      public void windowActivated(WindowEvent wEvt){}
      public void windowDeactivated(WindowEvent wEvt){}
    }
    To draw the graphs, we need drawPanel class
    import java.awt.*;
    class drawPanel extends Panel {
       Complex x[];
       float ymin, ymax;
       float hscale, vscale;
       int ytop, ybot;
       float ampmax[] = {5, 10, 1};
       float ampmin[] = {-5, -10, -1};
       protected drawPanel(Complex xa[]) {
          x = xa;
          ymin = -1e9F;
          ymax = 1e9F;
       }
       public void paint(Graphics g) {
          ymin = 1e9F;
          ymax = -1e9F;
    // Track the data range so the plot can be scaled to fit the panel.
          for (int i=0; i < x.length; i++) {
             if (x[i].getReal() > ymax) { ymax = x[i].getReal(); }
             if (x[i].getReal() < ymin) { ymin = x[i].getReal(); }
          }
          Dimension d = getSize();
          hscale = (float)d.width/x.length;
          vscale = 0.9f*((float)(d.height))/(ymax - ymin);
          ytop = (int)(d.height*0.95);
          ybot = 0; // (int)(d.height*0.05);
          int x0 = calcx(0);
          int y0 = calcy(x[0].getReal());
          System.out.println(y0);
          for (int i = 0; i < x.length; i++) {
             int x1 = calcx(i);
             int y1 = calcy(x[i].getReal());
             g.drawLine(x0, y0, x1, y1);
             x0 = x1;
             y0 = y1;
          }
       }
       private int calcx(int i) {
          return (int)(i*hscale);
       }
       private int calcy(float y) {
          return ytop - (int)((y - ymin)*vscale) + ybot;
       }
    }
    //=============================================
    I am hoping someone can help, as I have spent hours trying to change the amplitude. I thought it might be ymax and ymin of drawPanel, but then I can't get n waves at the required frequency.
    Please help!
    Thanks
    sam

    Like this
    By the way, that Abort VI button is not for stopping your loops.  It just kills execution wherever it may be.  It should only be used as a last resort if your stopping code is not working properly.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Record_BD.png ‏88 KB
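To answer the "CAN AMPLITUDE BE ADDED BELOW??" comment in the Java code above: yes, multiply each cosine by its own amplitude before accumulating (in the inner loop, something like amp[j] times Math.cos(arg)). The same idea in a Python sketch:

```python
import math

def sum_of_waves(n_samples, freqs, amps):
    """Superpose cosine waves, one amplitude per wave.

    freqs and amps are parallel lists; each cosine is scaled by its own
    amplitude before being added into the sample array.
    """
    x = [0.0] * n_samples
    for f, a in zip(freqs, amps):
        for i in range(n_samples):
            x[i] += a * math.cos(f * i * math.pi / n_samples)
    return x
```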

  • Sound clip level lowers when effect applied??

    Hi Everyone.
    Does anyone know why the sound level contained within a clip lowers substantially after the clip has an effect applied to it (such as Brightness/Contrast)?
    Is there a fix?
    I think I asked this question about 2 years ago, and I believe the answer was sitting on the Dansangle site. I just checked, but I cannot find it!
    Help, please!
    Nick.

    A clip's volume is lower after you use an effect or title in iMovie 4
    http://docs.info.apple.com/article.html?artnum=107944
    (Two workarounds are in article)
