Implementation of the Tone Measurement VI in a SPEEDY-33 project

I am using LabVIEW 8.2 and I have a SPEEDY-33 processor. The Tone Measurement VI is available in LabVIEW 8.2, but when I create a project targeting the SPEEDY-33, the Tone Measurement VI is not available. I planned to import the Tone Measurement VI from standard LabVIEW 8.2 into the SPEEDY-33 project, but that didn't work out. Please tell me how to implement the Tone Measurement VI on the SPEEDY-33.

I am sending you a VI which identifies which tone is present in the signal acquired through the SPEEDY-33 microphone. The identification is done by frequency comparison, based on the concept of DTMF.
Include this VI in a Speedy 33 project.
First, test the sample stored on your system with this VI. The results you receive will be the characteristics of your sample.
Based on these results, you can reconfigure this VI to process the input signal acquired from the SPEEDY-33 and search for the presence of the desired characteristics.
Hence you can identify whether the required sample is present in the acquired signal.
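To make the idea concrete, below is a minimal, hypothetical Java sketch of tone identification by frequency comparison. The Goertzel algorithm, the candidate frequencies, and the threshold are illustrative assumptions only; they are not taken from the attached DTMF Decoder.vi, and on the SPEEDY-33 itself you would build the equivalent graphically in LabVIEW DSP.

public class ToneIdentifierSketch {

    /** Goertzel algorithm: energy of one target frequency in a block of samples. */
    static double goertzelPower(double[] samples, double targetHz, double sampleRateHz) {
        double omega = 2.0 * Math.PI * targetHz / sampleRateHz;
        double coeff = 2.0 * Math.cos(omega);
        double sPrev = 0.0, sPrev2 = 0.0;
        for (double x : samples) {
            double s = x + coeff * sPrev - sPrev2;
            sPrev2 = sPrev;
            sPrev = s;
        }
        // Squared magnitude of the filter state at the target frequency.
        return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
    }

    /** Returns the candidate frequency with the most energy, or -1 if none exceeds the threshold. */
    static double identifyTone(double[] samples, double sampleRateHz,
                               double[] candidateHz, double threshold) {
        double best = -1.0, bestPower = 0.0;
        for (double f : candidateHz) {
            double p = goertzelPower(samples, f, sampleRateHz);
            if (p > bestPower) { bestPower = p; best = f; }
        }
        return (bestPower >= threshold) ? best : -1.0;
    }

    public static void main(String[] args) {
        // Synthesize a 1 kHz test tone at 8 kHz sampling as a stand-in for the microphone input.
        double fs = 8000.0;
        double[] block = new double[256];
        for (int n = 0; n < block.length; n++) {
            block[n] = Math.sin(2.0 * Math.PI * 1000.0 * n / fs);
        }
        double[] candidates = { 697, 770, 852, 941, 1000, 1209, 1336, 1477 };
        System.out.println("Detected tone: " + identifyTone(block, fs, candidates, 1.0) + " Hz");
    }
}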
Regards
Akshat Jain
Attachments:
DTMF Decoder.vi ‏32 KB

Similar Messages

  • Does the Tone Measurement Express VI affect the output signal?

    Hi everyone, I am building an oscilloscope to measure the input signal coming from an analog input of my myRIO. I am still new to LabVIEW and I am not sure how to use the Tone Measurement Express VI to measure my signal's frequency, amplitude, and phase. The first figure below shows the program before I added the Tone Measurement Express VI; the signal on the waveform graph looks fine. However, when I add the Tone Measurement Express VI, the signal looks distorted and unstable. Does anyone know how to solve this problem? Is the way I implemented it wrong?
    Figure 1: without tone measurement express vi
    Figure 2: with tone measurement express vi

    I would like you to prove the "bug" that you're describing by creating an example that shows the calculated and the expected results. By opening the code of the "Spectral Measurements" Express VI you can dig into the heart of the calculation, and there (in "ma_FFT Power Spectrum and PS Density no State.vi") you will see that the power spectral density indeed IS calculated using the formula PSD = Power Spectrum / (df * Noise Power BW of Window). See the attached PSD.jpg, which shows where the calculation happens.
    The power spectral density is supposed to depend on the chosen window. Please read chapter 5, "Smoothing Windows", in the LabVIEW Analysis Concepts manual.
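    In plain code, that scaling is simply the following (a sketch only; the names and the Hann-window ENBW value of 1.5 bins are illustrative, not taken from the Express VI):
    class PsdScalingSketch {
        // PSD = Power Spectrum / (df * equivalent noise bandwidth of the window).
        // powerSpectrum[] is assumed to be in Vrms^2, df is the bin width in Hz,
        // and enbwBins is the window ENBW in bins (1.5 for a Hann window, 1.0 for no window).
        static double[] toPowerSpectralDensity(double[] powerSpectrum, double df, double enbwBins) {
            double[] psd = new double[powerSpectrum.length];
            for (int k = 0; k < powerSpectrum.length; k++) {
                psd[k] = powerSpectrum[k] / (df * enbwBins);   // Vrms^2 per Hz
            }
            return psd;
        }
    }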
    Best regards,
    Philip C.
    National Instruments
    - Philip Courtois, Thinkbot Solutions
    Attachments:
    PSD.JPG ‏139 KB

  • How do I set two different beat measures in the same project, e.g. 2/4 and 3/4?

    How do I set two different beat measures in the same project, e.g. 2/4 and 3/4?

    mpcsouza wrote:
    How do I set two different beat measures in the same project, e.g. 2/4 and 3/4?
    You can't; GarageBand is restricted to a single time signature per project.
    However, the time signature is really more cosmetic; it doesn't affect the way the project plays back, so you could set the project to 2/4 when working on the 2/4 parts and to 3/4 when working on the 3/4 parts.

  • Video Playback Problem with Developer Walkthrough of the Continuous Measurement and Logging Sample Project

    Hi!
    I have serious problems playing the video Developer Walkthrough of the Continuous Measurement and Logging Sample Project.
    Because the video pauses every few seconds for buffering, I am not able to watch it. I just wanted to check whether you can play back the video:
    http://zone.ni.com/wv/app/doc/p/id/wv-3401
    This happens in every browser (Chrome, IE, Firefox). I am using Windows XP and the latest Adobe Flash Player. About two weeks ago NI Germany confirmed the problem and promised to contact NI USA, as they are responsible for these videos. So far nothing has happened. In the meantime I received an e-mail saying that the problem could not be confirmed, although they had confirmed it before...
    Thank you for any feedback.
    Regards,
    Anguel

    Hello ctVolFan,
    As indicated in the "support" section of the SPOT, you should get in touch with self-paced-training(at)ni(dot)com.  I don't believe that team monitors these forums directly.
    Regards,
    Tom L.

  • Tone measurements Express VI

    I have installed the Sound and Vibration Toolkit. When I open Tone Measurements from Tone and Distortion in Sound and Vibration, it does not finish initializing and closes automatically, so I can't configure it. Please help me solve this problem, thanks!

    Hi,
    I have done what you proposed. Both of them are now 2012, and it's OK now. Thank you! A new problem has appeared: I acquired a signal to analyze its THD, and at the same time I used "Harmonic Distortion Analyzer.vi". The two values are different. Please see the attached picture.
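    For reference, THD is commonly defined as the RMS sum of the harmonic amplitudes divided by the amplitude of the fundamental, and two tools can legitimately report different values if they include a different number of harmonics or also include noise (THD+N). A small illustrative sketch of that definition (the helper name is mine, not from either VI):
    class ThdSketch {
        // amplitudes[0] is the fundamental, amplitudes[1..] are the 2nd, 3rd, ... harmonics.
        static double thd(double[] amplitudes) {
            double harmonicPowerSum = 0.0;
            for (int k = 1; k < amplitudes.length; k++) {
                harmonicPowerSum += amplitudes[k] * amplitudes[k];
            }
            return Math.sqrt(harmonicPowerSum) / amplitudes[0];   // e.g. 0.01 means 1 % THD
        }
    }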
    Attachments:
    THD test capture.jpg ‏60 KB

  • Phases measured by two Tone Measurement VIs have different dt

    Dear all;
    I wrote a LabVIEW program to isolate a signal from a mixture of two signals using a reference input. I chose a Butterworth filter applied in bandpass mode. To determine the bandwidth of the filter, I also use two Tone Measurement VIs to detect the phase of the reference signal and of the signal I want to extract. Unfortunately, when I run my program, a warning with the message "waveforms have different dt value" always shows up. How can I fix this problem?
    Regards
    Attachments:
    lock-in test.vi ‏88 KB

    Well, you've got a bad case of the DDTs and dt's.
    If you get rid of those Express VIs and use a VI from the Waveform Measurements palette, you won't get fooled by the dt that does not exist coming out of your filter.
    Of course, you COULD use the Digital IIR Filter VI from the Waveform Conditioning palette and keep your dt info, but that doesn't get rid of your DDTs (dynamic data types).
    Here is one quick mod
    Jeff
    Attachments:
    lock-in testmod.vi ‏36 KB

  • Amplitude measurement using "Tone Measurement" when monitoring amplitude of a waveform

    Exactly how does LabVIEW make the amplitude measurement when using "Tone Measurement" to monitor the amplitude of a waveform?
    What is the mathematical operation that LabVIEW uses to make an amplitude measurement with the "Tone Measurement" function?
    Thank you!
    OpResp

    Hello,
    Yes, we have a very detailed tutorial on this method of signal analysis in LabVIEW; please take a look at the following document.
    The Fundamentals of FFT-Based Signal Analysis and Measurement in LabVIEW and LabWindows/CVI
    http://zone.ni.com/devzone/cda/tut/p/id/4278
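    As a rough, hypothetical illustration of the FFT-based approach that document describes (this is a sketch of the general idea only, not the actual implementation of the Tone Measurements Express VI, which also interpolates between bins and corrects for the window):
    class ToneAmplitudeSketch {
        // For a real sine of peak amplitude A in an N-point, unwindowed record, the FFT
        // magnitude at the tone bin is approximately A*N/2, so A ~= 2*|X[k]|/N.
        static double estimateToneAmplitude(double[] re, double[] im, int n) {
            int peakBin = 1;
            double peakMagSq = 0.0;
            for (int k = 1; k < n / 2; k++) {                    // skip DC, search positive-frequency bins
                double magSq = re[k] * re[k] + im[k] * im[k];
                if (magSq > peakMagSq) { peakMagSq = magSq; peakBin = k; }
            }
            // The tone frequency estimate would be peakBin * sampleRate / n.
            return 2.0 * Math.sqrt(peakMagSq) / n;               // peak amplitude estimate
        }
    }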
    I hope you find it informative!
    Regards,
    Anna K.
    National Instruments

  • How to implement the FrameAccess.java code in my project

    I am doing a project in Java for inserting videos into Oracle 9i and searching the inserted videos using the frames of those videos. I have finished the parts that insert and search the videos, but I have been asked to add an EXTRACT button that extracts all the frames of the video being inserted. Please help me integrate the FrameAccess.java code into my existing code. I have pasted my code (VideoInsert.java) and the FrameAccess.java code (used to extract frames) below.
    When VideoInsert.java is executed, it shows a Browse button to choose the video file (only .mpg files) to be inserted into the database. After selecting the file and pressing Open in the file dialog, the video is played in a JMF player in a separate window and the file path of the video is placed in the text box. What I need now is an Extract button: when it is clicked, the frames of the selected video should be extracted to the location where the project files are stored, i.e., the FrameAccess.java code has to be executed.
    Please help me if anyone knows how to implement the above.
    VideoInsert.java
    import javax.swing.*;
    import java.util.*;
    import java.io.*;
    import oracle.sql.*;
    import java.sql.*;
    import oracle.jdbc.driver.*;
    import oracle.sql.BLOB ;
    import symantec.itools.multimedia.ImageViewer;
    public class VideoInsert extends javax.swing.JFrame {
        private Connection con;
        private Statement st=null;
        private OracleResultSet rs=null;
        int count=0;
        int count1=0;
        ImageViewer displaywindow = new ImageViewer();
        /** Creates new form VideoInsert */
        public VideoInsert() {
            initComponents();
            imgpane.add(displaywindow);
            try {
                DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                con = DriverManager.getConnection("jdbc:oracle:oci:@","scott","tiger");
                  //con = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:FIRST","scott","tiger");
                con.setAutoCommit(false);
                st =con.createStatement();
            rs=(OracleResultSet)st.executeQuery("select max(vid) from browsevideo");
            while (rs.next()) {
                count = rs.getInt(1) + 1;
            }
            rs.close();
            st = con.createStatement();
            rs=(OracleResultSet)st.executeQuery("select max(imageno) from browseimage");
            while (rs.next()) {
                count1 = rs.getInt(1) + 1;
            }
            rs.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        txtvid.setText(String.valueOf(count));
             VideoTypeComboBox.addItem("All");
            VideoTypeComboBox.addItem("Entertainment");
            VideoTypeComboBox.addItem("Sports");
            VideoTypeComboBox.addItem("Animation");
            VideoTypeComboBox.addItem("News");
             VideoTypeComboBox.addItem("Others");
        /** This method is called from within the constructor to
         * initialize the form.
         * WARNING: Do NOT modify this code. The content of this method is
         * always regenerated by the Form Editor.
        private void initComponents() {//GEN-BEGIN:initComponents
            jLabel1 = new javax.swing.JLabel();
            txtvid = new javax.swing.JTextField();
            jLabel2 = new javax.swing.JLabel();
            txtvidfile = new javax.swing.JTextField();
            jLabel3 = new javax.swing.JLabel();
            txtvidinfo = new javax.swing.JTextField();
            jLabel4 = new javax.swing.JLabel();
            txtimgfile = new javax.swing.JTextField();
            vidbrowse = new javax.swing.JButton();
            imgbrowse = new javax.swing.JButton();
            jLabel5 = new javax.swing.JLabel();
            txtimgcont = new javax.swing.JTextField();
            insert = new javax.swing.JButton();
            imgpane = new javax.swing.JPanel();
            jLabel6 = new javax.swing.JLabel();
            VideoTypeComboBox = new javax.swing.JComboBox();
            getContentPane().setLayout(null);
            setTitle("VideoInsert");
            addWindowListener(new java.awt.event.WindowAdapter() {
                public void windowClosing(java.awt.event.WindowEvent evt) {
                    exitForm(evt);
            jLabel1.setText("Video ID");
            getContentPane().add(jLabel1);
            jLabel1.setBounds(30, 80, 90, 16);
            txtvid.setEditable(false);
            getContentPane().add(txtvid);
            txtvid.setBounds(130, 80, 130, 20);
            jLabel2.setText("VideoFile");
            getContentPane().add(jLabel2);
            jLabel2.setBounds(30, 130, 70, 16);
            getContentPane().add(txtvidfile);
            txtvidfile.setBounds(130, 130, 130, 20);
            jLabel3.setText("VideoInfo");
            getContentPane().add(jLabel3);
            jLabel3.setBounds(30, 180, 80, 16);
            getContentPane().add(txtvidinfo);
            txtvidinfo.setBounds(130, 180, 130, 20);
            jLabel4.setText("TopImage");
            getContentPane().add(jLabel4);
            jLabel4.setBounds(30, 230, 70, 16);
            getContentPane().add(txtimgfile);
            txtimgfile.setBounds(130, 230, 130, 20);
            vidbrowse.setText("Browse");
            vidbrowse.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent evt) {
                    vidbrowseActionPerformed(evt);
            getContentPane().add(vidbrowse);
            vidbrowse.setBounds(280, 130, 78, 26);
            imgbrowse.setText("Browse");
            imgbrowse.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent evt) {
                    imgbrowseActionPerformed(evt);
            getContentPane().add(imgbrowse);
            imgbrowse.setBounds(280, 230, 78, 26);
            jLabel5.setText("ImageContent");
            getContentPane().add(jLabel5);
            jLabel5.setBounds(30, 280, 80, 16);
            getContentPane().add(txtimgcont);
            txtimgcont.setBounds(130, 280, 130, 20);
            insert.setText("Insert");
            insert.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent evt) {
                    insertActionPerformed(evt);
            getContentPane().add(insert);
            insert.setBounds(150, 400, 81, 26);
            imgpane.setLayout(new java.awt.BorderLayout());
            getContentPane().add(imgpane);
            imgpane.setBounds(410, 120, 350, 260);
            jLabel6.setText("Video Type");
            getContentPane().add(jLabel6);
            jLabel6.setBounds(30, 340, 80, 16);
            getContentPane().add(VideoTypeComboBox);
            VideoTypeComboBox.setBounds(130, 340, 130, 25);
            pack();
            java.awt.Dimension screenSize = java.awt.Toolkit.getDefaultToolkit().getScreenSize();
            setSize(new java.awt.Dimension(800, 600));
            setLocation((screenSize.width-800)/2,(screenSize.height-600)/2);
        }//GEN-END:initComponents
        private void insertActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_insertActionPerformed
            BLOB blb= null;
            PreparedStatement stmt = null;
            OutputStream fout=null;
            File f=null;
            FileInputStream fin=null;
            int bufferSize;
            byte[] buffer=null;
            int bytesRead = -1;
            String sfile1 = txtvidfile.getText();
            String sfile2 = txtimgfile.getText();
            String format=null;
            String format1=null;
            String videoinfo=txtvidinfo.getText();
            String imgcontent=txtimgcont.getText();
            String videoType=(String)VideoTypeComboBox.getSelectedItem();
            if(sfile1.endsWith("avi")) {
                format="avi";
            } else if(sfile1.endsWith("mpg")) {
                format="mpg";
            } else {
                format="mpg";
            }
            if(sfile2.endsWith("jpg")) {
                format1="jpg";
            } else if(sfile2.endsWith("gif")) {
                format1="gif";
            } else {
                format1="jpg";
            }
            if((sfile1.length()>0) && (sfile2.length()>0)) {
                try {
                    stmt=con.prepareStatement(" insert into browsevideo values (?,EMPTY_BLOB(),?,?,?)");
                    stmt.setInt(1,count);
                    stmt.setString(2,format);
                    stmt.setString(3,videoinfo);
                     stmt.setString(4,videoType);
                    stmt.executeUpdate();
                    stmt.close();
                    con.commit();
                }catch(Exception e) {
                    e.printStackTrace();
                try {
                    stmt = con.prepareStatement("Select video FROM browsevideo WHERE vid = ? for update of video");
                    stmt.setInt(1,count);
                    rs = (OracleResultSet)stmt.executeQuery();
                    rs.next();
                    blb = rs.getBLOB("video");
                    fout = blb.getBinaryOutputStream();
                    f = new File(sfile1);
                    fin = new FileInputStream(f);
                    bufferSize = blb.getBufferSize();
                    buffer = new byte[bufferSize];
                    while((bytesRead = fin.read(buffer)) != -1) {
                        fout.write(buffer, 0, bytesRead);
                    }
                    fout.flush();
                    fout.close();
                    con.commit();
                    rs.close();
                    stmt.close();
                }catch(Exception e) {
                    e.printStackTrace();
                try {
                    stmt=con.prepareStatement(" insert into browseimage values (?,?,?,EMPTY_BLOB(),?,?,?)");
                    stmt.setInt(1,count);
                    stmt.setInt(2,count1);
                    stmt.setInt(3,count1);
                    stmt.setString(4,imgcontent);
                    stmt.setString(5,format1);
                    stmt.setString(6,videoType);
                    stmt.executeUpdate();
                    stmt.close();
                    con.commit();
                }catch(Exception e) {
                    e.printStackTrace();
                try {
                    stmt = con.prepareStatement("Select image FROM browseimage WHERE imageno = ? for update of image");
                    stmt.setInt(1,count1);
                    rs = (OracleResultSet)stmt.executeQuery();
                    if(rs.next()) {
                        blb = rs.getBLOB("image");
                        fout = blb.getBinaryOutputStream();
                        f = new File(sfile2);
                        fin = new FileInputStream(f);
                        bufferSize = blb.getBufferSize();
                        buffer = new byte[bufferSize];
                        while((bytesRead = fin.read(buffer)) != -1) {
                            fout.write(buffer, 0, bytesRead);
                        }
                        fout.flush();
                        fout.close();
                        con.commit();
                        rs.close();
                    stmt.close();
                    count++;
                    count1++;
                    txtimgfile.setText("");
                    txtvidfile.setText("");
                    txtimgcont.setText("");
                    txtvidinfo.setText("");
                    txtvid.setText(String.valueOf(count));
                    JOptionPane.showMessageDialog(this,"Successfuly Completed");
                }catch(Exception e) {
                    e.printStackTrace();
        }//GEN-LAST:event_insertActionPerformed
        private void imgbrowseActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_imgbrowseActionPerformed
            ExampleFileFilter filter1 = new ExampleFileFilter();
            JFileChooser chooser = new JFileChooser();
            filter1.addExtension("jpg");
            filter1.addExtension("gif");
            filter1.setDescription("JPG,GIF Images");
            chooser.setFileFilter(filter1);
            int returnVal = chooser.showOpenDialog(this);
            if(returnVal == JFileChooser.APPROVE_OPTION) {
                txtimgfile.setText(chooser.getSelectedFile().getAbsolutePath());
                try{
                        displaywindow.setImageURL(new java.net.URL("file:"+txtimgfile.getText()));
                        displaywindow.setStyle(ImageViewer.IMAGE_SCALED_TO_FIT);
                    }catch(Exception e){
                        e.printStackTrace();
        }//GEN-LAST:event_imgbrowseActionPerformed
        private void vidbrowseActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_vidbrowseActionPerformed
           ExampleFileFilter filter2 = new ExampleFileFilter();
           JFileChooser chooser = new JFileChooser();
           filter2.addExtension("avi");
            filter2.addExtension("mpg");
            filter2.setDescription("AVI & MPG Video");
            chooser.setFileFilter(filter2);
            int returnVal = chooser.showOpenDialog(this);
            if(returnVal == JFileChooser.APPROVE_OPTION) {
                txtvidfile.setText(chooser.getSelectedFile().getAbsolutePath());
                VideoAudioPlayer vap=new VideoAudioPlayer(txtvidfile.getText());
        }//GEN-LAST:event_vidbrowseActionPerformed
        /** Exit the Application */
        private void exitForm(java.awt.event.WindowEvent evt) {//GEN-FIRST:event_exitForm
            setVisible(false);
            dispose();
        }//GEN-LAST:event_exitForm
         * @param args the command line arguments
        /*public static void main(String args[]) {
            new VideoInsert().show();
        // Variables declaration - do not modify//GEN-BEGIN:variables
        private javax.swing.JButton imgbrowse;
        private javax.swing.JLabel jLabel4;
        private javax.swing.JTextField txtvid;
        private javax.swing.JLabel jLabel1;
        private javax.swing.JTextField txtimgcont;
        private javax.swing.JLabel jLabel3;
        private javax.swing.JLabel jLabel2;
        private javax.swing.JPanel imgpane;
        private javax.swing.JButton insert;
        private javax.swing.JComboBox VideoTypeComboBox;
        private javax.swing.JButton vidbrowse;
        private javax.swing.JTextField txtvidfile;
        private javax.swing.JLabel jLabel6;
        private javax.swing.JLabel jLabel5;
        private javax.swing.JTextField txtimgfile;
        private javax.swing.JTextField txtvidinfo;
        // End of variables declaration//GEN-END:variables
    FrameAccess.java
    import java.awt.*;
    import javax.media.*;
    import javax.media.control.TrackControl;
    import javax.media.Format;
    import javax.media.format.*;
    import java.io.*;
    import javax.imageio.*;
    import javax.imageio.stream.*;
    import java.awt.image.*;
    import java.util.*;
    import javax.media.util.*;
    * Sample program to access individual video frames by using a
    * "pass-thru" codec. The codec is inserted into the data flow
    * path. As data pass through this codec, a callback is invoked
    * for each frame of video data.
    public class FrameAccess implements ControllerListener {
         Processor p;
         Object waitSync = new Object();
         boolean stateTransitionOK = true;
         public boolean alreadyPrnt = false;
         * Given a media locator, create a processor and use that processor
         * as a player to playback the media.
         * During the processor's Configured state, two "pass-thru" codecs,
         * PreAccessCodec and PostAccessCodec, are set on the video track.
         * These codecs are used to get access to individual video frames
         * of the media.
         * Much of the code is just standard code to present media in JMF.
         public boolean open(MediaLocator ml) {
              try {
                   p = Manager.createProcessor(ml);
                                } catch (Exception e) {
                   System.err.println(
                        "Failed to create a processor from the given url: " + e);
                   return false;
              p.addControllerListener(this);
              // Put the Processor into configured state.
              p.configure();
              if (!waitForState(Processor.Configured)) {
                   System.err.println("Failed to configure the processor.");
                   return false;
              // So I can use it as a player.
              p.setContentDescriptor(null);
              // Obtain the track controls.
              TrackControl tc[] = p.getTrackControls();
              if (tc == null) {
                   System.err.println(
                        "Failed to obtain track controls from the processor.");
                   return false;
              // Search for the track control for the video track.
              TrackControl videoTrack = null;
              for (int i = 0; i < tc.length; i++) {
                   if (tc[i].getFormat() instanceof VideoFormat) videoTrack = tc[i];
                   else tc[i].setEnabled(false);
              }
              if (videoTrack == null) {
                   System.err.println("The input media does not contain a video track.");
                   return false;
              String videoFormat = videoTrack.getFormat().toString();
              Dimension videoSize = parseVideoSize(videoFormat);
              System.err.println("Video format: " + videoFormat);
              // Instantiate and set the frame access codec to the data flow path.
              try {
                   Codec codec[] = { new PostAccessCodec(videoSize)};
                   videoTrack.setCodecChain(codec);
              } catch (UnsupportedPlugInException e) {
                   System.err.println("The process does not support effects.");
              // Realize the processor.
              p.prefetch();
              if (!waitForState(Processor.Prefetched)) {
                   System.err.println("Failed to realise the processor.");
                   return false;
              p.start();
              return true;
         /**parse the size of the video from the string videoformat*/
         public Dimension parseVideoSize(String videoSize){
              int x=300, y=200;
              StringTokenizer strtok = new StringTokenizer(videoSize, ", ");
              strtok.nextToken();
              String size = strtok.nextToken();
              StringTokenizer sizeStrtok = new StringTokenizer(size, "x");
              try{
                   x = Integer.parseInt(sizeStrtok.nextToken());
                   y = Integer.parseInt(sizeStrtok.nextToken());
              } catch (NumberFormatException e){
                   System.out.println("unable to find video size, assuming default of 300x200");
              System.out.println("Image width = " + String.valueOf(x) +"\nImage height = "+ String.valueOf(y));
              return new Dimension(x, y);
         * Block until the processor has transitioned to the given state.
         * Return false if the transition failed.
         boolean waitForState(int state) {
              synchronized (waitSync) {
                   try {
                        while (p.getState() != state && stateTransitionOK)
                             waitSync.wait();
                   } catch (Exception e) {
              return stateTransitionOK;
         * Controller Listener.
         public void controllerUpdate(ControllerEvent evt) {
              if (evt instanceof ConfigureCompleteEvent
                   || evt instanceof RealizeCompleteEvent
                   || evt instanceof PrefetchCompleteEvent) {
                   synchronized (waitSync) {
                        stateTransitionOK = true;
                        waitSync.notifyAll();
              } else if (evt instanceof ResourceUnavailableEvent) {
                   synchronized (waitSync) {
                        stateTransitionOK = false;
                        waitSync.notifyAll();
              } else if (evt instanceof EndOfMediaEvent) {
                   p.close();
                   System.exit(0);
         * Main program
         public static void main(String[] args) {
    //          if (args.length == 0) {
    //               prUsage();
    //               System.exit(0);
    // System.out.print("masoud");
    String url = new String("file:F:\\AVSEQ01.mpg");
              if (url.indexOf(":") < 0) {
                   prUsage();
                   System.exit(0);
              MediaLocator ml;
              if ((ml = new MediaLocator(url)) == null) {
                   System.err.println("Cannot build media locator from: " + url);
                   System.exit(0);
              FrameAccess fa = new FrameAccess();
              if (!fa.open(ml))
                   System.exit(0);
         static void prUsage() {
              System.err.println("Usage: java FrameAccess <url>");
         * Inner class.
         * A pass-through codec to access to individual frames.
         public class PreAccessCodec implements Codec {
              * Callback to access individual video frames.
              void accessFrame(Buffer frame) {
                   // For demo, we'll just print out the frame #, time &
                   // data length.
                   long t = (long) (frame.getTimeStamp() / 10000000f);
                   System.err.println(
                        "Pre: frame #: "
                             + frame.getSequenceNumber()
                             + ", time: "
                             + ((float) t) / 100f
                             + ", len: "
                             + frame.getLength());
              * The code for a pass through codec.
              // We'll advertize as supporting all video formats.
              protected Format supportedIns[] = new Format[] { new VideoFormat(null)};
              // We'll advertize as supporting all video formats.
              protected Format supportedOuts[] = new Format[] { new VideoFormat(null)};
              Format input = null, output = null;
              public String getName() {
                   return "Pre-Access Codec";
              //these dont do anything
              public void open() {}
              public void close() {}
              public void reset() {}
              public Format[] getSupportedInputFormats() {
                   return supportedIns;
              public Format[] getSupportedOutputFormats(Format in) {
                   if (in == null)
                        return supportedOuts;
                   else {
                        // If an input format is given, we use that input format
                        // as the output since we are not modifying the bit stream
                        // at all.
                        Format outs[] = new Format[1];
                        outs[0] = in;
                        return outs;
              public Format setInputFormat(Format format) {
                   input = format;
                   return input;
              public Format setOutputFormat(Format format) {
                   output = format;
                   return output;
              public int process(Buffer in, Buffer out) {
                   // This is the "Callback" to access individual frames.
                   accessFrame(in);
                   // Swap the data between the input & output.
                   Object data = in.getData();
                   in.setData(out.getData());
                   out.setData(data);
                   // Copy the input attributes to the output
                   out.setFlags(Buffer.FLAG_NO_SYNC);
                   out.setFormat(in.getFormat());
                   out.setLength(in.getLength());
                   out.setOffset(in.getOffset());
                   return BUFFER_PROCESSED_OK;
              public Object[] getControls() {
                   return new Object[0];
              public Object getControl(String type) {
                   return null;
         public class PostAccessCodec extends PreAccessCodec {
              // We'll advertize as supporting all video formats.
              public PostAccessCodec(Dimension size) {
                   supportedIns = new Format[] { new RGBFormat()};
                   this.size = size;
              * Callback to access individual video frames.
              void accessFrame(Buffer frame) {
                   // For demo, we'll just print out the frame #, time &
                   // data length.
                   if (!alreadyPrnt) {
                        BufferToImage stopBuffer = new BufferToImage((VideoFormat) frame.getFormat());
                        Image stopImage = stopBuffer.createImage(frame);
                        try {
                             BufferedImage outImage = new BufferedImage(size.width, size.height, BufferedImage.TYPE_INT_RGB);
                             Graphics og = outImage.getGraphics();
                             og.drawImage(stopImage, 0, 0, size.width, size.height, null);
                             //prepareImage(outImage,rheight,rheight, null);
                             Iterator writers = ImageIO.getImageWritersByFormatName("jpg");
                             ImageWriter writer = (ImageWriter) writers.next();
                             //Once an ImageWriter has been obtained, its destination must be set to an ImageOutputStream:
                             File f = new File(frame.getSequenceNumber() + ".jpg");
                             ImageOutputStream ios = ImageIO.createImageOutputStream(f);
                             writer.setOutput(ios);
                             //Finally, the image may be written to the output stream:
                             //BufferedImage bi;
                             //writer.write(imagebi);
                             writer.write(outImage);
                             ios.close();
                        } catch (IOException e) {
                             System.out.println("Error :" + e);
                   //alreadyPrnt = true;
                   long t = (long) (frame.getTimeStamp() / 10000000f);
                   System.err.println(
                        "Post: frame #: "
                             + frame.getSequenceNumber()
                             + ", time: "
                             + ((float) t) / 100f
                             + ", len: "
                             + frame.getLength());
              public String getName() {
                   return "Post-Access Codec";
              private Dimension size;

    Check out the java.lang.Runtime and java.lang.Process classes.
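    For example (a hypothetical sketch, not tested against your project: it assumes FrameAccess.main() is changed to take the video URL as args[0], which its prUsage() message already suggests, and that FrameAccess, JMF and your classes share a classpath), an Extract button could launch frame extraction in a separate JVM like this:
    import javax.swing.JOptionPane;

    public class FrameExtractLauncher {
        /** Runs FrameAccess on the given video file via java.lang.Runtime / java.lang.Process. */
        public static void extractFrames(String videoFilePath) {
            try {
                String[] cmd = { "java", "-cp", System.getProperty("java.class.path"),
                                 "FrameAccess", "file:" + videoFilePath };
                Process proc = Runtime.getRuntime().exec(cmd);
                // waitFor() blocks, so call this off the Swing event thread in a real GUI.
                int exit = proc.waitFor();
                JOptionPane.showMessageDialog(null, "Frame extraction finished (exit code " + exit + ")");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Inside VideoInsert, the Extract button's ActionListener would then simply call FrameExtractLauncher.extractFrames(txtvidfile.getText()).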

  • Can you add measures to the current project?

    I'm working on a song with GarageBand and I'm rather new to the whole thing. I'm using a midi keyboard for most of it and layering vocals over that. I ran out of measures but still have more to do. I'm sure there is a way to add length to the project, but for the life of me I cannot find out where that option is. Please help!

    In the timeline, scroll to the end of the song. At the top you will see a little purple arrow; click and drag it to whatever length you need.

  • FFT and Tone Measurements

    Do the Spectral Measurements Express VI and the Extract Multiple Tone Information VI return the same data? The Express VI displays the information in dB and the Extract Multiple Tone Information VI returns the amplitude in volts. The issue is that the amplitudes don't seem to match. I am using 20*log10(volts) to convert to dB. The chart shows about -14 dB and the Extract Multiple Tone Information VI returns -27 dB (volts converted to dB).
    Any help is appreciated,
    Matthew Fitzsimons
    Certified LabVIEW Architect
    LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision

    You are calculating a power spectral density in the Spectral Measurements Express VI. If you configure it to calculate the spectral measurement Magnitude (peak), the values will match. The difference, from the help file, is:
    Magnitude (peak)—Measures the spectrum and displays the results in terms of peak amplitude. You typically use this measurement with more advanced measurements that require magnitude and phase information. The magnitude of the spectrum is measured in peak values. For example, a sine tone of amplitude A yields a magnitude spectral value of A at the sine tone frequency. You can unwrap the phase spectrum or convert it from radians to degrees by setting Phase to Unwrap phase or Convert to degree, respectively. If you place a checkmark in the Averaging checkbox, the phase of the spectrum is zero for averaging.
    Power spectral density—Measures the spectrum and displays the results in terms of power spectral density (PSD). Power spectral density is a scaled version of Power spectrum, where the power present within each spectral bin is normalized by the frequency bin width. You typically use this measurement to examine the noise floor of a signal or the power in a specific frequency range. Normalizing the power spectrum by the bin width makes this measurement independent of the signal duration, or number of samples.
    So the two results only match when both are configured for the same measurement.
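    To make the difference concrete, here is a small, purely illustrative Java sketch (the amplitude, bin width, and window ENBW are made-up numbers, not taken from your VIs) showing why 20*log10 of a peak amplitude and a PSD value expressed in dB are not directly comparable:
    class SpectrumUnitsDemo {
        public static void main(String[] args) {
            double aPeak = 0.2;            // peak amplitude of a sine tone, volts
            double df = 10.0;              // FFT bin width, Hz
            double enbwBins = 1.5;         // Hann window equivalent noise bandwidth, bins

            double magnitudePeakDb = 20.0 * Math.log10(aPeak);   // dB of the peak amplitude
            double binPowerRms = aPeak * aPeak / 2.0;            // tone power in Vrms^2
            double psdDb = 10.0 * Math.log10(binPowerRms / (df * enbwBins));

            System.out.printf("Magnitude (peak): %.1f dB%n", magnitudePeakDb);
            System.out.printf("PSD:              %.1f dB%n", psdDb);
            // The two differ by the peak-to-RMS factor and the 10*log10(df*ENBW) normalization,
            // so values from differently configured measurements can disagree by 10 dB or more
            // without either one being wrong.
        }
    }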
    Hope that this helps,
    Bob Young
    Bob Young - Test Engineer - Lapsed Certified LabVIEW Developer
    DISTek Integration, Inc. - NI Alliance Member
    mailto:[email protected]

  • Difficulty implementing the fetch measurement vi for an application on the 5102

    I have been spending what I believe is an excessive amount of time integrating what seems to be a useful VI, "multi fetch measurement statistics.vi", into an existing application. Attached are a Word document that describes the issue in more detail and a zip file which includes (1) a VI of the original program flow and (2) the .llb needed to load the VI, as well as an additional VI that shows the flow after adding what I initially thought would be a quick mod.
    Attachments:
    Measure_vi_issues.doc ‏46 KB
    testmainX4.zip ‏786 KB

    It appears that you can in fact use remote panels within an executable.
    The problem I had was that I had left the web server port for the remote panels the same as the LabVIEW development environment's. That meant that the main LabVIEW web server was getting in the way of the web server associated with my built application.
    With the application's web server on a different port and the VI in memory (and the file in a library/directory external to the exe if it is not the top-level VI), you just need to specify the VI name, such as main.vi.

  • How to implement a singleton model in a Flex project

    After too much reading, I would like to use a singleton model in my Flex application to hold global variables.  What I cannot determine, however, is how to do this.  Is there a good tutorial/example available that could walk me through this for a Flex project?
    Thanks!

    Here's a simple example:
    package examples {
        public class Singleton {
            // Eagerly created, globally accessible instance.
            static public const instance:Singleton = new Singleton();
            // Private constructors are not supported in ActionScript 3,
            // so throw if a second instance is ever created.
            public function Singleton() {
                if (instance)
                    throw new Error( "Singleton instance already exists." );
            }
        }
    }
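    With this in place, any class in your application can read or store its shared/global state through Singleton.instance.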
    Let me know if that helps...
    Ben Edwards

  • Implementing a "back button" in a JSC project.

    I would like to give my users a "back button" that acts like the back button on most browsers.
    In other words, add a button to a page and when the user clicks that button, the user will be sent back to the previous page just like if they clicked the back button on the browser.
    I assume that this can be done using the onClick Javascript event for the button, but what exactly needs to be done in that method?
    I searched the Internet and it looks like something like this might work:
    Go back
    But how to convert that to JavaScript inside the JSC app I do not know.
    Does anyone know how this is done?

    Note that I tried adding:
    history.back() to the onClick event, but that didn't do anything.
    My full page code looks like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <jsp:root version="1.2" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:jsp="http://java.sun.com/JSP/Page">
        <jsp:directive.page contentType="text/html;charset=UTF-8" pageEncoding="UTF-8"/>
        <jsp:text><![CDATA[
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    ]]></jsp:text>
        <f:view>
            <html lang="en-US" xml:lang="en-US">
                <head>
                    <meta content="no-cache" http-equiv="Cache-Control"/>
                    <meta content="no-cache" http-equiv="Pragma"/>
                    <title>DisplayDiagnosis Title</title>
                    <link href="resources/stylesheet.css" rel="stylesheet" type="text/css"/>
                </head>
                <body style="-rave-layout: grid">
                    <h:form binding="#{DisplayDiagnosis.form1}" id="form1">
                        <h:panelGrid binding="#{DisplayDiagnosis.centeringGridPanel}" columns="1" id="centeringGridPanel" style="left: 0px; top: 0px; position: absolute">
                            <h:graphicImage binding="#{DisplayDiagnosis.image1}" id="image1" value="resources/myLogo.JPG"/>
                            <h:panelGrid binding="#{DisplayDiagnosis.diagnosisPanel}" columns="3" footerClass="" id="diagnosisPanel"/>
                            <h:panelGroup binding="#{DisplayDiagnosis.groupPanel1}" id="groupPanel1" style="height: 50px; width: 100px">
                                <h:commandButton binding="#{DisplayDiagnosis.button1}" id="button1" onclick="history.back()" value="Back"/>
                            </h:panelGroup>
                        </h:panelGrid>
                    </h:form>
                </body>
            </html>
        </f:view>
    </jsp:root>

  • LabView 2012 Continuous Measurement and Logging (DAQmx) Project

    I've been working with this example to try to build DAQmx and Serial acquisition message-handling loops, as well as another message-handling loop with two analog outputs for control. This control would also be based on information from the DAQmx acquisition loop. Is this example appropriate for that? I've added two additional message-handling loops (Serial and the AO control loop). I'm worried that this example might not be the best to follow.
    Attachments:
    Main.vi ‏108 KB

    I'm not sure which example you are talking about, but it seems like you want something like the document below.
    http://www.ni.com/white-paper/14119/en
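    As a language-agnostic illustration of the message-handling-loop pattern that the sample project and the white paper above are built around (this is only the general idea, with made-up message names, not the LabVIEW implementation): each loop is a consumer pulling messages from its own queue, and other loops post to that queue.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MessageHandlingLoopSketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> aoControlQueue = new LinkedBlockingQueue<>();

            Thread aoControlLoop = new Thread(() -> {
                try {
                    while (true) {
                        String msg = aoControlQueue.take();   // block until a message arrives
                        if (msg.equals("Exit")) break;
                        System.out.println("AO control loop handling: " + msg);
                    }
                } catch (InterruptedException ignored) { }
            });
            aoControlLoop.start();

            // e.g. the DAQmx acquisition loop could post new setpoints here:
            aoControlQueue.put("Update Setpoint");
            aoControlQueue.put("Exit");
            aoControlLoop.join();
        }
    }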
    Carl W.
    Applications Engineering
    National Instruments

  • Request better support skin tone evaluation/measurement

    Hello. I am writing this with the intention that it will be read by the folks at Adobe that are involved in the development of Lightroom. I'm pretty new to this forum, however, and I'm a little confused about whether this feature requests thread actually goes to the Adobe Lightroom team, considering that it is in a user-to-user forum. If there is a better avenue to get my feature request to Adobe, I would appreciate if someone could point me in the right direction.
    Before I get into my request, I also want to mention that I did my due diligence and searched this thread to make sure this has not been requested before.
    I am submitting this request because I do not see any useful way to measure or evaluate skin tone in Lightroom while adjusting white balance (or after adjusting white balance, for that matter). I can do a mouse-over to read RGB values, but I am not aware of any useful way to use RGB values for evaluating skin tones.
    I just watched the latest George Jardine video, in which he recommends to use a calibrated monitor and move the Develop controls back and forth until your eye tells you it's correctly adjusted. I enjoyed this video, and I have found that this generally works well for me for tone balance, but I believe an additional tool for measuring or evaluating skin tones would benefit the Lightroom workflow as I will explain below.
    I believe that many serious photographers, pro and amateur alike, routinely use the eyedropper in Photoshop for reading CMYK values to confirm the skin tones in their work. Even if they feel like they can usually eyeball pretty well, they find they get greater consistency when they use the eyedropper.
    Now I'm not saying that Lightroom necessarily needs CMYK support. Photoshop Elements, for example has a skin tone adjustment even though it doesn't have CMYK support. And I'm also not suggesting that Lightroom necessarily needs skin tone sliders like Photoshop Elements. I'm just suggesting that the Lightroom workflow would benefit from some kind of tool for evaluating skin tone while or after adjustments are being made in the Develop Module. I would like to leave it up to Lightroom to decide exactly how to implement this.
    The only way I currently see to adjust while measuring skin tones is to open the file in Photoshop, make adjustments, and save. Even if there is a way to do this with ACR and have the adjustments saved in the sidecar or in the DNG, it still seems like a time-consuming and unnecessary step for my workflow.
    Now, this request is predicated on the assumption that evaluating skin tone is fundamental enough to a basic workflow that it should be included in Lightroom. In my opinion it is, and that is why I am making this feature request. I'm sure that some might not need it for their workflow, but it seems to me that this would be a valuable feature for a great many Lightroom users.
    Thanks for lending your ear, Adobe. I look forward to ALL your future versions of Lightroom, and I hope that skin tone evaluation/measurement is included in one of them.
    Regards,
    Mike

    Your post seems to assume that Lightroom is a tool for travel/landscape photography, and that other types of photography (e.g., portrait/fashion) should be supported by a "specialized add-on module". I have to disagree with you on that point. Considering that many of the examples in the Lightroom marketing materials are fashion shoots, I would think that Adobe considers portrait/fashion photographers an important part of their target audience. They are not a fringe group of specialists.
    I'm sure that portrait/fashion photographers would feel the same way about a Lightroom capability that primarily benefits the workflow of a travel/landscape photographer, i.e., "when I do some landscape work, I just edit in Photoshop." But you wouldn't agree to that, would you?
    Skin tone measurement can be an incredibly easy tool to implement. It can be something as simple as showing the CMY values alongside the RGB values during a mouseover. Keep in mind, I'm talking about CMY, not CMYK, so there should be no need to worry about which ICC profile to use. RGB to CMY is a straightforward transformation. It's embarrassingly simple.
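    For what it's worth, the conversion I mean is literally this (a sketch assuming 8-bit RGB channels; the method name is mine):
    class SkinToneSketch {
        /** Converts 8-bit RGB to CMY fractions in the range 0..1. */
        static double[] rgbToCmy(int r, int g, int b) {
            return new double[] {
                1.0 - r / 255.0,   // C
                1.0 - g / 255.0,   // M
                1.0 - b / 255.0    // Y
            };
        }
    }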
    There are other ways Adobe can implement skin tone management that would be more powerful but a little more complicated. Those would be great too.
    Anyway, thanks for the link to the Adobe feature request page! I will use it.
    Regards,
    Mike
