Capture a vector space

I want to capture a vector space (axes 1 and 2), but when I configure the buffer an error occurs (see the uploaded figure, error -70006). How should I modify my procedure? I am sure the board ID is configured correctly and the triggers are input correctly.

Similar Messages

  • How to setup breakpoints in a vector space?

    Hi,
    How to setup breakpoints in a 2D vector space in NI 7344 and check for them using flex_read_breakpoint_status_rtn?
    Thanks.


  • Motion and Vector Space Question

    I have an x-axis stage and a y-axis stage with two different leadscrew pitches, i.e. one table has a 5 turns per inch leadscrew and the other has a 50 turns per inch leadscrew. Consequently, the stepper motors attached have to operate at different speeds (a 10x difference) in order for the tables to have the same velocity. I have this accounted for in the axis configuration settings in the NI MAX explorer. When I am configuring automated moves in LabVIEW and set up a "vector space" composed of the two axes, it seems that the motors operate at the same speed and ignore the MAX settings. I thought that after initializing the controller, the current axis settings would be carried over and applied through the vector space to the actual move
    (a 2D move made in Motion Assistant). Am I missing something? Do I need to load the velocity for each axis with a FlexMotion VI somewhere in the code? I'd like to command the motors to move at two different speeds so the tables move at the same velocity. Any comments would be appreciated. An abbreviated sample of some of the code is attached. Thanks. ROB
    Attachments:
    sample.vi ‏39 KB

    Rob,
    You should be able to adjust for the pitch of the leadscrews by setting the stepper steps per revolution and/or the encoder steps per revolution settings in MAX accordingly (in your configuration these settings should differ by a factor of 10 for your two axes).
    In your application you then have to use Load Velocity in RPM.flx and Load Accel/Decel in RPS/sec.flx in order to get the same resulting velocities on both axes. If you do it like this, you still have to account for the factor of 10 in your target position calculations.
    I couldn't find out whether you have proceeded like that in your application, as the VI you have attached is missing a crucial subVI.
    If you are using a four-axis board like the PCI-7344 or the PCI-7334, there is another option:
    You could use a third axis that acts as a master for one of your axes (I would prefer the axis with the 5 turns per inch leadscrew). Your real axis has to be configured to follow the master axis by a factor of 1/10. In your application you configure a vector space for the master axis and the axis that is not configured as a slave. Your physical signal connections need to be made to this axis and the slave axis.
    Please examine the LabVIEW gearing examples for more information.
    In this case you should enter the same values for stepper steps per revolution and/or encoder steps per revolution in MAX for both axes, as the pitch adjustment is done by the gearing.
    The advantage of this method is that you can now set the same velocities, accelerations and even target positions for both axes, and you never again have to account for the pitch factor.
    Best regards,
    Jochen Klier
    Applications Engineering Group Leader
    National Instruments Germany GmbH
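    Jochen's first option comes down to simple unit arithmetic: because the leadscrew pitches differ by a factor of 10, the two axes' steps-per-inch values differ by the same factor, and every target position in inches must be converted per axis. A minimal sketch (the steps-per-revolution value and the helper names are illustrative, not NI API calls):

```java
// Hypothetical helper: per-axis conversion from inches to steps when the
// leadscrew pitches differ. The 200 steps/rev figure is an assumed stepper
// resolution, not taken from the thread.
public class LeadscrewScaling {
    static final int STEPS_PER_REV = 200;      // assumed stepper resolution
    static final double X_TURNS_PER_INCH = 5;  // 5 TPI leadscrew
    static final double Y_TURNS_PER_INCH = 50; // 50 TPI leadscrew

    static long inchesToSteps(double inches, double turnsPerInch) {
        return Math.round(inches * turnsPerInch * STEPS_PER_REV);
    }

    public static void main(String[] args) {
        double targetInches = 2.0;
        // Same physical distance, step counts a factor of 10 apart:
        System.out.println(inchesToSteps(targetInches, X_TURNS_PER_INCH));  // 2000
        System.out.println(inchesToSteps(targetInches, Y_TURNS_PER_INCH)); // 20000
    }
}
```

    This is exactly the "account for the factor of 10 in your target position calculations" step; the gearing option removes it by making the board do the scaling.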

  • Vector Space Model implementation

    Hi,
    Does anybody know where I can find a java implementation of "vector space model"? Or simply a java program to extract keywords from an HTML page?
    Actually I don't know if this is the right forum to pose this question!
    Edited by: Marziye on May 28, 2008 10:55 PM

    You can take a look at the Apache Lucene project. It should be possible to extract keywords from documents with it. Anyway, it is not that hard to implement the vector space model yourself!
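    As the reply says, a bare-bones vector space model is not hard to implement yourself: represent each document as a term-frequency vector and compare documents by cosine similarity. A minimal sketch (plain term counts over whitespace tokens; no idf weighting or HTML parsing):

```java
import java.util.HashMap;
import java.util.Map;

public class VectorSpaceModel {
    // Term-frequency vector for a whitespace-tokenised document.
    static Map<String, Integer> tf(String doc) {
        Map<String, Integer> v = new HashMap<>();
        for (String t : doc.toLowerCase().split("\\s+"))
            v.merge(t, 1, Integer::sum);
        return v;
    }

    // Cosine similarity between two term-frequency vectors:
    // dot product divided by the product of the vector norms.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet())
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
        for (int x : a.values()) na += (double) x * x;
        for (int x : b.values()) nb += (double) x * x;
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Two overlapping terms out of 4 and 3 -> similarity ~0.577
        System.out.println(cosine(tf("java vector space model"),
                                  tf("vector space search")));
    }
}
```

    Lucene does the tokenisation, weighting and indexing for you, but this is the core of the model.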

  • Configuring separate axis velocity/acceleration within a vector space

    Is there a way to separately configure axis velocity/acceleration when using vector space motion control? My axes' motions are very diverse, and I cannot simply use a single vector velocity/acceleration.

    I guess that your application should not use a vector space configuration then. The purpose of vector spaces is to group axes and define vector moves; the board will calculate each axis's parameters in order to achieve the vector move.
    If you require a different velocity and acceleration per axis, you might want to try to make up your own vector algorithm in which, from the board's point of view, you are treating each axis independently, but in your software you are actually grouping them. That is, you will configure each axis on its own, and you can do multi-starts and multi-stops. The position/velocity/acceleration will be calculated by your own algorithm.
    Good luck!
    Nestor.
    Nestor
    National Instruments
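    Nestor's do-it-yourself approach can be sketched as follows: pick the move time from whichever axis is slowest at its own velocity limit, then scale every axis's velocity so all axes start and finish together. This is a simplified illustration (constant-velocity only; a real profile would also respect per-axis acceleration limits):

```java
public class VectorAlgorithm {
    // Given per-axis distances and per-axis maximum velocities, compute
    // velocities so that all axes finish simultaneously: the move time is
    // set by whichever axis would take longest at its own limit.
    static double[] coordinatedVelocities(double[] dist, double[] vmax) {
        double moveTime = 0;
        for (int i = 0; i < dist.length; i++)
            moveTime = Math.max(moveTime, Math.abs(dist[i]) / vmax[i]);
        double[] v = new double[dist.length];
        for (int i = 0; i < dist.length; i++)
            v[i] = Math.abs(dist[i]) / moveTime; // acceleration phases ignored
        return v;
    }

    public static void main(String[] args) {
        // Axis 0 moves 10 units, axis 1 moves 2 units, both limited to 5:
        double[] v = coordinatedVelocities(new double[]{10, 2},
                                           new double[]{5, 5});
        System.out.println(v[0] + " " + v[1]); // 5.0 and 1.0
    }
}
```

    Each axis then gets its own load-velocity call and a multi-start, while the grouping lives entirely in the host software.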

  • Configuring a third axis on vector space to perform a tangential following

    Is there some kind of function call/mode in the FlexMotion library to configure a third axis in a vector space to perform tangential following (i.e. to remain tangent to the trajectory of the other two axes)?

    Tulio,
    Thanks for the explanation. This makes perfect sense.
    The FlexMotion driver does not have any built-in function to perform this type of motion; however, it would be relatively straightforward to implement this for contour moves. You could set this up as a 3-axis contour move. Based on the list of X,Y coordinates, use a simple difference equation to get the slope of the trajectory at each point. Convert this slope to rotation in counts and use these values for the third axis (cutter angle). This could be implemented with just a couple of VIs or lines of code with negligible processing overhead, or you could even use just a spreadsheet. I have attached below a spreadsheet that shows an example of using a difference equation (central difference) to get cutter positions based on X,Y trajectory data.
    Cheers,
    Brent R.
    Applications Engineer
    National Instruments
    Attachments:
    Tangent_Following.xls ‏18 KB
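    Brent's central-difference idea can be sketched in a few lines: estimate the trajectory slope at each point from its neighbours and take the angle. The conversion from radians to counts for the third axis is omitted here, since it depends on the axis configuration:

```java
public class TangentFollowing {
    // Central-difference estimate of the tangent angle (in radians) at
    // each point of an X,Y trajectory; the endpoints fall back to
    // one-sided differences.
    static double[] tangentAngles(double[] x, double[] y) {
        int n = x.length;
        double[] a = new double[n];
        for (int i = 0; i < n; i++) {
            int lo = Math.max(i - 1, 0), hi = Math.min(i + 1, n - 1);
            a[i] = Math.atan2(y[hi] - y[lo], x[hi] - x[lo]);
        }
        return a;
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 2, 3}, y = {0, 1, 2, 3}; // 45-degree line
        System.out.println(tangentAngles(x, y)[1]);  // ~0.7854 rad
    }
}
```

    The resulting angle array becomes the position list for the third axis of the contour move.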

  • Re capturing? disk space issues? HELP

    I am cutting a 5-minute piece on my MacBook Pro laptop. I am trying to recapture the footage that I had dumped previously for a specific cut, to make some changes to it. For some reason I'm having trouble capturing JUST the media for that specific sequence. I have 20 GB of space to work with, which should be plenty; however, because it will only let me capture ALL the media, I won't have enough space. Any idea why, or how to get around this? I have tried breaking the link between the clips and their master clips, created another project and copied the sequence into it... etc.

    This should work:
    1. In the browser, only highlight the clips you need for your edit
    2. Choose "Batch Capture".
    3. Choose "All selected items"
    Rabbe

  • Video capture and disc space

    I usually save my projects to an external hard drive (200 GB free left to use). However, recently I noticed that when capturing videos I only had 61 minutes, or 14 GB, left to use. I double-checked my scratch disks and they indicate that they are set to the external hard drive. I even deleted old projects, and yet the number of capturing minutes is still 61 minutes. What should I do to free up more space and capture more tapes? Am I missing something?! Thank you.

    I'd say the first step is to check the available space in the finder. If it says you've got 150 GB available, then there are issues. If it says something small, do what these other gents suggest. If neither of those work, it's just a matter of using "get info" on folders to track down what's eating up all your space.

  • Arc.vi's input "Radius": distance from the arc centre or from the vector space centre (0,0)?

    Hi everybody,
    I am using PXI 7352 +UMI+2 servo Drives+motors which move a table in linear direction X and rotate it along Z axis C.
    How can I use Load Circular Arc.vi if one of my axes is linear and the other is rotary, e.g. an XC table?
    Thanks
    Surender Kumar
    Reckers Automation
    Delhi

    Surender,
    You can run contours with an (almost) infinite number of position points. The trick is to load new data into the buffer when it has depleted to a certain number of points. There is an example in LabVIEW (Buffered Contouring (Rose).vi) that shows this method. Maybe there was a bit of confusion in terms of wording when you talked with NI India. There are also onboard variables (about 100), but these are not used for contouring. Contouring data is stored in an onboard buffer.
    In contouring mode the board interpolates through the position points stored in the buffer at a rate of 10 ms (minimum). In between these points the board interpolates using a cubic spline algorithm. In terms of accuracy this method is equal to the arc move, as arc moves use a similar internal mechanism to contouring.
    With contouring you could of course take care of the tool radius compensation in your calculations, but there is no ready-to-use solution for that in LabVIEW. Maybe a 3rd party solution is available for that (please refer to this thread, which you should know already).
    Jochen
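    The buffered-contouring trick Jochen describes (top the onboard buffer up whenever it drains below a threshold) can be modelled on the host side with plain counters. This is only a sketch of the refill bookkeeping; the actual driver calls for loading points and polling the buffer backlog are not shown, and the capacities are illustrative:

```java
public class BufferedContouring {
    // Count how many refill operations a long contour needs when the
    // onboard buffer holds 'capacity' points and is topped up each time
    // the board has consumed it down to 'threshold' points.
    static int refills(int totalPoints, int capacity, int threshold) {
        int onboard = Math.min(capacity, totalPoints); // initial load
        int sent = onboard, refills = 0;
        while (sent < totalPoints) {
            onboard = threshold;  // board consumed points down to threshold
            int chunk = Math.min(capacity - onboard, totalPoints - sent);
            sent += chunk;        // load the next chunk from host memory
            onboard += chunk;
            refills++;
        }
        return refills;
    }

    public static void main(String[] args) {
        // 1000-point contour, 100-point buffer, refill at 20 remaining:
        System.out.println(refills(1000, 100, 20)); // 12 top-ups of <=80 points
    }
}
```

    In a real application the refill loop would run in parallel with the move, keyed off the board's reported backlog rather than a simulated counter.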

  • Capture audio from soundcard - help!

    Hi everyone,
    I'm trying to write a little utility that captures everything coming out of my sound card and stores it as a .wav file on the system. (When I say everything, I mean wave output... for now.)
    The purpose of this little exercise is twofold: i) familiarise myself with JMF and ii) enable me to store music from online radio stations to burn and listen to on CDs later.
    I have thus far got to the point where I can successfully record a .wav file, although when I play the file back it's completely silent. I have a sneaking suspicion that this is because I have not configured something to capture from an output port on the soundcard instead of the usual mic/line in etc.
    Another problem I've just stumbled across whilst testing the code before I post it is that if I run in debug mode and step through, everything works fine. If I execute normally I get a "javax.media.NotConfiguredError: setContentDescriptor cannot be called before configured" exception. Why would this be happening when running normally, but not in debug?
    Any hints/tips with this little lot would be great.
    Thanks in advance,
    Chris
    PS My prototype is below;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Iterator;
    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;
    import javax.media.DataSink;
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Processor;
    import javax.media.datasink.DataSinkEvent;
    import javax.media.datasink.DataSinkListener;
    import javax.media.datasink.EndOfStreamEvent;
    import javax.media.protocol.DataSource;
    import javax.media.protocol.FileTypeDescriptor;
    public class JCapture {
        CaptureDeviceInfo captureDeviceInfo = null;
        DataSink dataSink = null;

        public static void main(String[] args) {
            JCapture jCapture = new JCapture();
            jCapture.doIt();
            System.exit(0);
        }

        private void doIt() {
            // retrieve list of available capture devices
            Vector vector = CaptureDeviceManager.getDeviceList(null);
            Iterator it = vector.iterator();
            while (it.hasNext()) {
                CaptureDeviceInfo deviceInfo = (CaptureDeviceInfo) it.next();
                if ("DirectSoundCapture".equals(deviceInfo.getName())) {
                    captureDeviceInfo = deviceInfo;
                    break;
                }
            }
            // exit if no capture devices found
            if (captureDeviceInfo == null) {
                System.err.println("No capture devices found!");
                System.exit(-1);
            }
            try {
                // get media locator from capture device
                MediaLocator mediaLocator = captureDeviceInfo.getLocator();
                // create processor
                Processor p = Manager.createProcessor(mediaLocator);
                // configure the processor
                p.configure();
                // set the content type
                p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.WAVE));
                // realise the processor
                p.realize();
                // get the output of the processor
                DataSource dataSource = p.getDataOutput();
                // create a medialocator with the output file
                MediaLocator dest = new MediaLocator("file://c:\\myFile.wav");
                // create a datasink, passing in the output source and the medialocator specifying our output file
                dataSink = Manager.createDataSink(dataSource, dest);
                // add a dataSink listener (anonymous inner class to handle DataSinkEvents)
                dataSink.addDataSinkListener(new DataSinkListener() {
                    // if capturing stopped, close the DataSink
                    public void dataSinkUpdate(DataSinkEvent dataEvent) {
                        if (dataEvent instanceof EndOfStreamEvent) {
                            dataSink.close();
                        }
                    }
                });
                // open and start the datasink
                dataSink.open();
                dataSink.start();
                // start the processor
                p.start();
                getConsolePress();
                // stop and close the processor
                p.stop();
                p.close();
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(-1);
            }
            System.out.println("Finished!!");
        }

        public void getConsolePress() throws Exception {
            InputStreamReader isr = new InputStreamReader(System.in);
            BufferedReader input = new BufferedReader(isr);
            String line;
            System.out.println("Press x to stop capturing...");
            while (!(line = input.readLine()).equals("x")) {
                System.out.println("Press x to stop capturing...");
            }
        }
    }

    "...trying to write a little utility that captures everything coming out of my sound card and stores it as a .wav file on the system..."
    Capture (input) from an output??? You can capture from the mic/line in, so plug in a microphone and talk into it, to see if you are recording.
    "...enable me to store music from online radio stations..."
    This has nothing to do with the sound card. The stream comes from the internet: capture that stream, pass it to JMF, and record it to a file. You only need the sound card for listening (output) or recording (live input from a mic).

  • Data Reading Capture Error - what's up with this?

    I know a few others have posted about this, but I have not seen an explanation.
    My stats: 10.4.2 (yes, I'm going to get 10.4.3 - I will eat my pants if that has anything to do with this), FCP 5.0.3, G4 1.5GHz Pbook, LaCie 250GB external firewire drive for capture (TONS of space), Sony GV D300 capture deck, footage shot on the Panasonic DVX100 24p camera. The tapes are non-dropframe.
    The error reads: Capture encountered a problem reading the data on your source tape. This could be due to a problem with the tape. To capture as much of your footage as possible, please log and capture a clip that ends before this segment, and another one that starts right after it.
    This sounds to me suspiciously like Apple programmers punting on a known bug just prior to release, rather than something ACTUALLY wrong with my tapes, my computer, my connections, my drives or my deck.
    The reason I say this is that I have just started a new project and am capturing five tapes of footage. I am on tape 4. I had no problem capturing from two of the 4 tapes so far - so my cables, deck, drive and software are fine.
    On the 2 tapes that have given me problems, there is also discontinuous timecode. This is the DP's fault. However: the capture error is NOT occurring in places on the tape where the TC breaks (or skips, rather). THAT problem is simply causing the usual errors and requiring careful logging.
    Instead, I get the error in question for clips that appear to be fine in terms of TC. When I get the capture error, I have actually been able to do 2 Capture Nows, but instead of capturing on both sides of the problem area ("this segment" in the error message parlance), I have been able to capture ALL OF THE TIMECODE in 2 clips. That is, when the capture fails and kicks out a mini clip (up to the point where the capture snagged), I can start a Capture Now from one frame after the last TC in the mini clip and Capture from there to the end.
    So that suggests to me that there is nothing wrong with my tapes, either. To me, this is conclusive evidence of a pure software error.
    I am wondering, therefore, if there is anyone from Apple monitoring these boards who can tell me what is ACTUALLY happening here and what to do, because it's very frustrating.
    Thanks,
    Nathan Jongewaard

    My suggestion is that you reconsider whether you're really in non-drop frame. The NTSC standard is drop frame at 29.97 fps. I'm afraid you may have this backwards. DVX100s default to drop frame (29.97 fps) in the USA. FCP could be encountering a pulldown cadence problem every few minutes because it expects a frame drop that is not there.
    Remember the frame drop only occurs every few minutes, so this makes sense. With standard NTSC capture (not 24 fps pulldown removal) such dropped frames don't mess anything up; they just subtly shift the timing and sync of the audio track. You could have captured this way for years and rarely seen a problem with short clips. When doing a pulldown removal, a missing or (in your case) extra frame can confuse the pulldown-removal system, as it expects to find the video encoded at 29.97 rather than 30 fps.
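    The "frame drop every few minutes" follows from the drop-frame counting rules: timecode labels 00 and 01 are skipped at the start of every minute except each tenth minute, and no video frames are actually discarded, only labels. A sketch of the label arithmetic, assuming standard 29.97 fps NTSC drop-frame counting:

```java
public class DropFrameTimecode {
    // Convert a raw frame count to an NTSC drop-frame timecode label
    // (hh:mm:ss;ff). Two labels are dropped per minute, except in
    // minutes divisible by ten, giving 17982 labels per ten minutes.
    static String fromFrames(long frame) {
        long dropped = 2;                           // labels dropped per drop-minute
        long perMin = 60 * 30 - dropped;            // 1798 labels in a drop minute
        long perTenMin = 10 * 60 * 30 - 9 * dropped; // 17982 labels per ten minutes
        long mins = (frame / perTenMin) * 10;
        long rem = frame % perTenMin;
        if (rem >= 30 * 60) {                       // past the first (non-drop) minute
            rem -= 30 * 60;
            mins += 1 + rem / perMin;
            rem = rem % perMin + dropped;           // re-insert the skipped labels
        }
        long ff = rem % 30, ss = (rem / 30) % 60;
        return String.format("%02d:%02d:%02d;%02d", mins / 60, mins % 60, ss, ff);
    }

    public static void main(String[] args) {
        // After exactly one minute of frames, labels ;00 and ;01 are skipped:
        System.out.println(fromFrames(1800));  // 00:01:00;02
        System.out.println(fromFrames(17982)); // 00:10:00;00 (tenth minute, no drop)
    }
}
```

    A capture system expecting this pattern and not finding it (or vice versa) drifts by two frames per minute, which matches the "error every few minutes" symptom.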

  • Programming mathematical function spaces in f#.....

    This is a very general question, in the hope of being redirected to the appropriate resources. I am interested in the idea of representing mathematical function spaces using F# to program the function relationships. The idea is that, in math, function spaces use functions instead of the vectors in vector spaces. I would like to get an idea of how to implement function spaces in F#.
    Thank you.


  • FCE 4 Froze At The End Of VHS Capture Through DAC-100.

    I spent a large part of yesterday copying a 2 hour VHS tape to FCE 4 via my DAC-100.
    At the end of the 2 hours I pressed the esc key only to hear a warning "thud" bleep and find that FCE had suddenly frozen.
    After I force-quit I found the 25GB file was useless.
    I tried again with a 30 minute capture with the same result.
    During each attempted capture I had briefly answered an email.
    Assuming that my short internet use had caused the end freeze I tried again, not using any other computer function and succeeded.
    Whilst using the web is the chief suspect, could there be another obvious cause? (I am sure that in the past I have used the iMac whilst capturing).
    The irritating thing was that it waited until the capture was finished before freezing!

    That is interesting Armando.
    Thanks to your info I can now use Spaces!
    Here are the results:-
    1. With FCE capturing in Window 1, I can switch to Window 2 and open Mail (or any other app), use it and go back to Window 1. (This works with both camera and converter captures).
    The Mail name is still in the menu in Window 1 and I can't bring up FCE's name but it does not matter because pressing esc stops the capture and puts FCE's name back in the menu.
    2. For it all to work, I must OPEN the new app in the new window. If the app is already open, even though I "wake" it up in Window 2, it will lock FCE when I go back to Window 1.
    3. As mentioned previously, if FCE locks whilst capturing from a camera, I can simply manually stop the camera, FCE will unlock and I get the "End of Tape" message. The capture is successful. But this technique does not work when capturing with a converter.
    So, you have shown me that I can use other apps whilst capturing by using Spaces, but it does not explain why I have the original problem.
    I have just done some capture tests on my G4 eMac (OSX 10.4.10) and opening other apps during capture does not cause a problem.

  • Capturing audio from soundcard!

    Can anybody help me? When I compile this code no problems appear; however, when I execute the program the following is shown:
    Microsoft Windows XP [Version 5.1.2600]
    (C) Copyright 1985-2001 Microsoft Corp.
    c:\Sun\SDK\jdk\bin>java JCapture
    javax.media.NotConfiguredError: setContentDescriptor cannot be called before configured
            at com.sun.media.ProcessEngine.setContentDescriptor(ProcessEngine.java:342)
            at com.sun.media.MediaProcessor.setContentDescriptor(MediaProcessor.java:123)
            at JCapture.doIt(JCapture.java:69)
            at JCapture.main(JCapture.java:27)
    Exception in thread "main" javax.media.NotConfiguredError: setContentDescriptor cannot be called before configured
            at com.sun.media.ProcessEngine.setContentDescriptor(ProcessEngine.java:342)
            at com.sun.media.MediaProcessor.setContentDescriptor(MediaProcessor.java:123)
            at JCapture.doIt(JCapture.java:69)
            at JCapture.main(JCapture.java:27)
    The code is this:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Iterator;
    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;
    import javax.media.DataSink;
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Processor;
    import javax.media.datasink.DataSinkEvent;
    import javax.media.datasink.DataSinkListener;
    import javax.media.datasink.EndOfStreamEvent;
    import javax.media.protocol.DataSource;
    import javax.media.protocol.FileTypeDescriptor;
    public class JCapture {
        CaptureDeviceInfo captureDeviceInfo = null;
        DataSink dataSink = null;

        public static void main(String[] args) {
            JCapture jCapture = new JCapture();
            jCapture.doIt();
            System.exit(0);
        }

        private void doIt() {
            // retrieve list of available capture devices
            Vector vector = CaptureDeviceManager.getDeviceList(null);
            Iterator it = vector.iterator();
            while (it.hasNext()) {
                CaptureDeviceInfo deviceInfo = (CaptureDeviceInfo) it.next();
                if ("DirectSoundCapture".equals(deviceInfo.getName())) {
                    captureDeviceInfo = deviceInfo;
                    break;
                }
            }
            // exit if no capture devices found
            if (captureDeviceInfo == null) {
                System.err.println("No capture devices found!");
                System.exit(-1);
            }
            try {
                // get media locator from capture device
                MediaLocator mediaLocator = captureDeviceInfo.getLocator();
                // create processor
                Processor p = Manager.createProcessor(mediaLocator);
                // configure the processor
                p.configure();
                // set the content type
                p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.WAVE));
                // realise the processor
                p.realize();
                // get the output of the processor
                DataSource dataSource = p.getDataOutput();
                // create a medialocator with the output file
                MediaLocator dest = new MediaLocator("file://c:\\myFile.wav");
                // create a datasink, passing in the output source and the medialocator specifying our output file
                dataSink = Manager.createDataSink(dataSource, dest);
                // add a dataSink listener (anonymous inner class to handle DataSinkEvents)
                dataSink.addDataSinkListener(new DataSinkListener() {
                    // if capturing stopped, close the DataSink
                    public void dataSinkUpdate(DataSinkEvent dataEvent) {
                        if (dataEvent instanceof EndOfStreamEvent) {
                            dataSink.close();
                        }
                    }
                });
                // open and start the datasink
                dataSink.open();
                dataSink.start();
                // start the processor
                p.start();
                getConsolePress();
                // stop and close the processor
                p.stop();
                p.close();
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(-1);
            }
            System.out.println("Finished!!");
        }

        public void getConsolePress() throws Exception {
            InputStreamReader isr = new InputStreamReader(System.in);
            BufferedReader input = new BufferedReader(isr);
            String line;
            System.out.println("Press x to stop capturing...");
            while (!(line = input.readLine()).equals("x")) {
                System.out.println("Press x to stop capturing...");
            }
        }
    }

    I compiled and ran your code and saw the error you described, but could not fix it.
    OTOH, here is some code I was mucking about with recently (it was adapted from an example I found on a forum, but I cannot recall where); it seemed to work quite well, without ever creating a Processor.
    Maybe that might help?
    import java.awt.FlowLayout;
    import java.awt.event.ActionListener;
    import java.awt.event.ActionEvent;
    import javax.swing.JFileChooser;
    import javax.swing.JFrame;
    import javax.swing.JButton;
    import javax.swing.JOptionPane;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.InputStream;
    import java.io.FileOutputStream;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.SourceDataLine;
    import javax.sound.sampled.TargetDataLine;
    public class AudioRecorder extends JFrame{
      boolean stopCapture = false;
      ByteArrayOutputStream byteArrayOutputStream;
      AudioFormat audioFormat;
      TargetDataLine targetDataLine;
      AudioInputStream audioInputStream;
      SourceDataLine sourceDataLine;
      File file;
    //  AudioFileFormat.Type fileType;
      /** The recorder will request enough memory for this lengh
      of time of audio recording.More might be possible.*/
      static int minutes = 4;
      /** channels - 1 for mono, 2 for stereo. */
      static int channels = 2;
      /** bytes - 1 means 8 bit, 2 means 16 bit, 3 means 24 bit. */
      static int bytes = 2;
      /** Samples per second, sample rate in Hertz. */
      static int samplerate = 44100;
      /** Default size of the audio array before expansion.
      Guarantees there is 'minutes' of memory for recording
      'bytes' of sound depth in 'channels' at 'samplerate'. */
      static int size = samplerate*channels*bytes*60*minutes;
      /** Make the temp buffer, for this fraction of a second. */
      static int timeperiod = 20;
      /** An arbitrary-size temporary holding
      buffer
      enough for 3 minutes of sampling in
      stereo at 16 bit, 44100Hz */
      //static byte tempBuffer[];
      static byte tempBuffer[] = new byte[samplerate*channels*bytes/timeperiod];
      public static void main(
        String args[]){
        if (args.length==0) {
          new AudioRecorder(null);
        } else {
          new AudioRecorder(args[0]);
      }//end main
      public AudioRecorder(String outputFileName){//constructor
        setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
        setLocation(100,50);
        final JButton captureBtn =
          new JButton("Capture");
        final JButton stopBtn =
          new JButton("Stop");
        final JButton playBtn =
          new JButton("Save");
        captureBtn.setEnabled(true);
        stopBtn.setEnabled(false);
        playBtn.setEnabled(false);
        captureBtn.addActionListener(
          new ActionListener(){
            public void actionPerformed(
            ActionEvent e){
              captureBtn.setEnabled(false);
              stopBtn.setEnabled(true);
              playBtn.setEnabled(false);
              captureAudio();
        getContentPane().add(captureBtn);
        stopBtn.addActionListener(
          new ActionListener(){
            public void actionPerformed(
              ActionEvent e){
              captureBtn.setEnabled(true);
              stopBtn.setEnabled(false);
              playBtn.setEnabled(true);
              //Terminate the capturing of
              // input data from the
              // microphone.
              stopCapture = true;
            }//end actionPerformed
          }//end ActionListener
        );//end addActionListener()
        getContentPane().add(stopBtn);
        playBtn.addActionListener(
          new ActionListener(){
            public void actionPerformed(
              ActionEvent e){
              //Play back all of the data
              // that was saved during
              // capture.
              saveAudio();
            }//end actionPerformed
          }//end ActionListener
        );//end addActionListener()
        getContentPane().add(playBtn);
        getContentPane().setLayout(new FlowLayout());
        setTitle("IMAR");
        setDefaultCloseOperation(
          EXIT_ON_CLOSE);
        setSize(250,70);
        setVisible(true);
        if ( outputFileName==null ) {
          JFileChooser fc = new JFileChooser();
          int returnVal = fc.showSaveDialog(null);
          if(returnVal == JFileChooser.APPROVE_OPTION) {
            outputFileName = fc.getSelectedFile().toString();
          } else {
            outputFileName = "default.wav";
        file = new File(outputFileName);
        FileOutputStream fout;
        try {
          fout=new FileOutputStream(file);
        } catch (FileNotFoundException e1) {
          e1.printStackTrace();
      }//end constructor
      //This method captures audio input
      // from a microphone and saves it in
      // a ByteArrayOutputStream object.
      private void captureAudio(){
        try{
          //Get everything set up for
          // capture
          audioFormat = getAudioFormat();
    /*      AudioFormat.Encoding[] encodings =
            AudioSystem.getTargetEncodings(audioFormat);
          for(int ii=0; ii<encodings.length; ii++) {
            System.out.println( encodings[ii] );
          DataLine.Info dataLineInfo =
            new DataLine.Info(
              TargetDataLine.class, audioFormat);
          targetDataLine = (TargetDataLine)
            AudioSystem.getLine(dataLineInfo);
          targetDataLine.open(audioFormat);
          targetDataLine.start();
          System.out.println( "AR.cA: " + targetDataLine );
          audioInputStream = new
            AudioInputStream(targetDataLine);
          //Create a thread to capture the
          // microphone data and start it
          // running.It will run until
          // the Stop button is clicked.
          CaptureThread ct = new CaptureThread();
          Thread captureThread =
            new Thread(ct);
          captureThread.start();
        } catch (Exception e) {
          e.printStackTrace();
          System.exit(0);
        }//end catch
      }//end captureAudio method
      //This method plays back the audio
      // data that has been saved in the
      // ByteArrayOutputStream
      private void saveAudio() {
        try{
          //Get everything set up for
          // playback.
          //Get the previously-saved data
          // into a byte array object.
          byte audioData[] =
            byteArrayOutputStream.
            toByteArray();
          //Get an input stream on the
          // byte array containing the data
          InputStream byteArrayInputStream
            = new ByteArrayInputStream(audioData);
          AudioFormat audioFormat =
            getAudioFormat();
          AudioInputStream audioInputStreamTemp =
            new AudioInputStream(
              byteArrayInputStream,
              audioFormat,
              audioData.length/audioFormat.
                getFrameSize());
          audioInputStream =
            AudioSystem.getAudioInputStream(
              AudioFormat.Encoding.PCM_SIGNED,
              audioInputStreamTemp );
          DataLine.Info dataLineInfo =
            new DataLine.Info(
            SourceDataLine.class,
            audioFormat);
          sourceDataLine = (SourceDataLine)
            AudioSystem.getLine(dataLineInfo);
          sourceDataLine.open(audioFormat);
          sourceDataLine.start();
          //Create a thread to save
          // the captured data and start
          // it running. It will run until
          // all the data has been
          // written.
          Thread saveThread =
            new Thread(new SaveThread());
          saveThread.start();
        } catch (Exception e) {
          e.printStackTrace();
          System.exit(0);
        }//end catch
      }//end saveAudio
      //This method creates and returns an
      // AudioFormat object for a given set
      // of format parameters. If these
      // parameters don't work well for
      // you, try some of the other
      // allowable parameter values, which
      // are shown in comments following
      // the declarations.
      private AudioFormat getAudioFormat(){
        float sampleRate = 44100.0F;
        //8000,11025,16000,22050,44100
        int sampleSizeInBits = channels*8;
        //8,16
    //    int channels = 2;
        //1,2
        boolean signed = true;
        //true,false
        boolean bigEndian = false;
        //true,false
        return new AudioFormat(
          sampleRate,
          sampleSizeInBits,
          channels,
          signed,
          bigEndian);
      }//end getAudioFormat
      //===================================//
      /** Inner class to capture data from
      microphone */
      class CaptureThread extends Thread{
        public void run(){
          byteArrayOutputStream =
             new ByteArrayOutputStream(size);
          stopCapture = false;
          try{//Loop until stopCapture is set
            // by another thread that
            // services the Stop button.
            while(!stopCapture){
              //Read data from the internal
              // buffer of the data line.
              int cnt = targetDataLine.read(
                tempBuffer,
                0,
                tempBuffer.length);
              if(cnt > 0){
                //Save data in output stream
                // object.
                byteArrayOutputStream.write(
                  tempBuffer, 0, cnt);
              }//end if
            }//end while
            byteArrayOutputStream.close();
          }catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
          }//end catch
        }//end run
      }//end inner class CaptureThread
      /** Inner class to play back the data
      that was saved. */
      class SaveThread
        extends Thread {
        public void run(){
          try{
            if (AudioSystem.isFileTypeSupported(
              AudioFileFormat.Type.WAVE,
                audioInputStream)) {
              AudioSystem.write(audioInputStream,
                AudioFileFormat.Type.WAVE, file);
              JOptionPane.showMessageDialog(null, "Clip Saved!");
            }//end if
          } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
          }//end catch
        }//end run
      }//end inner class SaveThread
      //===================================//
    }//end outer class AudioCapture.java
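
    For reference, the AudioFormat constructor used in getAudioFormat() derives the frame size and frame rate from the channel count and sample size, so buffer math can be sanity-checked without any microphone attached. The sketch below mirrors the 2-channel, 16-bit case assumed above (the class name FormatCheck is just illustrative):

    ```java
    import javax.sound.sampled.AudioFormat;

    public class FormatCheck {
      public static void main(String[] args) {
        int channels = 2;                 // mirrors the 'channels' field above
        AudioFormat fmt = new AudioFormat(
          44100.0F,                       // sample rate in Hz
          channels * 8,                   // sample size in bits (16 here)
          channels,
          true,                           // signed PCM
          false);                         // little-endian
        // Frame size is bytes per frame: channels * (sampleSizeInBits / 8)
        System.out.println("frame size = " + fmt.getFrameSize() + " bytes");
        System.out.println("frame rate = " + fmt.getFrameRate() + " frames/s");
        // prints "frame size = 4 bytes" and "frame rate = 44100.0 frames/s"
      }
    }
    ```

    Knowing the frame size also tells you how many bytes per second the CaptureThread's tempBuffer has to keep up with (here 4 * 44100 = 176400 bytes/s).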

  • Capture from within Servlet

    Following along the various code tutorials on the Java developer site, I decided to try my hand at JMF / Servlet integration following http://java.sun.com/dev/evangcentral/totallytech/jmf2.html
    However, the servlet seems unable to detect any capture devices whatsoever, and the problem does seem localized to the servlet:
    When running as the main function of a java class I get:
    Attempting to detect capture devices
    DirectSoundCapture
    JavaSound audio capture
    vfw:Microsoft WDM Image Capture (Win32):0
    Found 3 capture devices.
    However, when the exact same code is inserted into my servlet's init method the result is:
    Attempting to detect capture devices
    Found 0 capture devices.
    The code in question is:
    System.out.println("Attempting to detect capture devices");
    Vector videoDevs = CaptureDeviceManager.getDeviceList(null);
    for (int i = 0; i < videoDevs.size(); i++) {
         System.out.println(((CaptureDeviceInfo)videoDevs.get(i)).getName());
    }
    System.out.println("Found "+videoDevs.size()+" capture devices.");
    I'm figuring it has something to do with Tomcat's JVM not being able to locate the jmf.properties file, but I'm somewhat at a loss as to how to get it to. I've tried putting the files directly into WEB-INF/lib, WEB-INF/classes, and so forth, but it hasn't helped.
    My setup is using Java 1.5.0 b32c, JMF2.1.1e, and Tomcat 5.0.18 on winXP.
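
    One way to test the jmf.properties theory directly is to ask the webapp's context class loader whether it can see the file at all; a minimal sketch (the class and method names here are illustrative, not part of the JMF API):

    ```java
    import java.net.URL;

    public class JmfPropsLocator {
      // Reports where jmf.properties is visible on the current thread's
      // class-loader chain, or a "not visible" message if it isn't.
      public static String locate() {
        URL url = Thread.currentThread()
                        .getContextClassLoader()
                        .getResource("jmf.properties");
        return (url == null)
          ? "jmf.properties not visible to this class loader"
          : "jmf.properties found at " + url;
      }

      public static void main(String[] args) {
        System.out.println(locate());
      }
    }
    ```

    Calling locate() from the servlet's init method and comparing the result with a standalone run should show whether Tomcat's class loader is the one that cannot resolve the file, which would explain the empty device list.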

    Hi,
    I am having the same problem. Running JMStudio finds the devices, but running the sample servlet in jmfdemo - WebCamServlet - yields a null return from this line:
    CaptureDeviceInfo device = CaptureDeviceManager.getDevice(cameraDevice);
    I am also trying to run from Tomcat (5.0.24). I have my JAVA_HOME set to:
    c:\tools\Sun\AppServer\jdk
    which I assume is where the JMF did its install, as it is the first java.exe in my PATH, and is where Tomcat should be getting its JVM from.
    Did you get this to work, or can anyone else help?
    Thanks!!
