JMF realtime video acquisition - is it possible?

Hello, I need a method for acquiring video in realtime.
I need to see the video while it is being recorded.
The official example does not show the picture while recording.
Thanks

There is video capture, but a web cam is typically just for grabbing still frames every so often, not for full frame rate video. I imagine the difference isn't much from what you posted, so maybe you need to analyze the code there a little more closely and figure out how to change it to do what you want. Of course, full frame rate video is just grabbing more still frames faster.
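For illustration, here is a rough, untested sketch of the usual JMF pattern for monitoring while recording: clone the capture DataSource with Manager.createCloneableDataSource(), feed one copy to a preview Player and the other to a Processor/DataSink that writes the file. The device choice, formats, and output file name are assumptions, and it presumes JMF actually detects your camera.

    import java.util.Vector;
    import java.awt.*;
    import javax.media.*;
    import javax.media.format.*;
    import javax.media.protocol.*;

    public class MonitorWhileRecording {
       public static void main(String[] args) throws Exception {
          // First video capture device JMF has registered (run jmfinit/JMFRegistry first).
          Vector devices = CaptureDeviceManager.getDeviceList(new VideoFormat(null));
          if (devices.isEmpty()) {
             System.err.println("No video capture device found.");
             return;
          }
          CaptureDeviceInfo info = (CaptureDeviceInfo) devices.elementAt(0);

          // Make the capture stream cloneable so preview and recording can share it.
          DataSource ds = Manager.createDataSource(info.getLocator());
          ds = Manager.createCloneableDataSource(ds);
          DataSource previewDs = ((SourceCloneable) ds).createClone();

          // Preview: show the clone in an AWT frame. Note the clone only produces
          // data once the master pipeline below is running.
          Player preview = Manager.createRealizedPlayer(previewDs);
          Frame f = new Frame("Preview");
          Component vc = preview.getVisualComponent();
          if (vc != null) f.add(vc);
          f.pack();
          f.setVisible(true);
          preview.start();

          // Recording: transcode the master stream to a QuickTime file.
          ProcessorModel model = new ProcessorModel(ds,
                new Format[] { new VideoFormat(VideoFormat.JPEG) },
                new ContentDescriptor(FileTypeDescriptor.QUICKTIME));
          Processor p = Manager.createRealizedProcessor(model);
          DataSink sink = Manager.createDataSink(p.getDataOutput(),
                new MediaLocator("file:capture.mov"));
          sink.open();
          p.start();
          sink.start();
       }
    }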

Similar Messages

  • Mark an object in real time on a displayed video acquired from an USB camera

    Hello,
    I am a beginner LabView user undertaking a project in which I need to show video acquired from a USB camera on the computer screen in LabView. In the video there will be a metal part, which will always be at a fixed position at the bottom left of the view. I need to mark an arrow-like sign on the metal part; please take a look at the attached picture. To make the metal part easier to recognize, in the picture I drew a blue line at the boundary of the part, while the arrow is green. The arrow must point radially to the center of the blue line (the blue line itself may not need to be created) and must be there every time I start the VI, throughout the video.
    How should I display the video? Is there any way to create the arrow in LabView (either by programming or by drawing it manually)? If making the arrow is impossible, is there any way to just mark a cross at the center of the blue line? The blue line does not need to be marked on the video.
    I have LabView 2010 with Vision Development Module. My camera can be found under NI IMAQdx devices in Measurement and Automation Explorer.
    Thanks,
    LePhuong
    Attachments:
    A fixed arrow mark on the metal part.jpg (26 KB)

    Hi,
    Look at this post; it may help you: http://forums.ni.com/t5/Machine-Vision/pattern-matching-program/m-p/1914589#M34855. It uses a recorded video signal, which you can replace with your live video.
    Sasi.
    Certified LabVIEW Associate Developer
    If you can DREAM it, You can DO it - Walt Disney

  • Advantages of JMF in video streaming

    Hi,
    Video conferencing can be done by many methods, but why is JMF used for video/audio streaming?
    What are its advantages? And also, what are its disadvantages?
    Is there any better way to do video conferencing?
    Please give details regarding the current scenario of JMF in video streaming.
    Thanks

    Video conferencing can be done by many methods but why is JMF used for video/audio streaming? what are its advantages?
    It has RTP built-in, handles a handful of the most basic codecs, there is decent documentation for it... is written in Java... easily extensible for new codec support...
    what are its disadvantages?
    It hasn't been updated or supported in 7 years... Windows 2000 was the last supported operating system... it has bugs which will never be fixed... it only supports a handful of low-bandwidth formats for streaming...
    Is there any other better way to do video conferencing?
    Yeah, tons... the best alternative right now would probably be Adobe Flex technology, but JavaFX and Silverlight will both have audio & video capturing / streaming capabilities soon...
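    As a concrete example of the built-in RTP support mentioned above, here is a minimal, untested transmit sketch. The source file name and destination address are placeholders; JMF's own examples follow this same Processor-plus-rtp://-DataSink pattern.

    import javax.media.*;
    import javax.media.format.*;
    import javax.media.protocol.*;

    public class RtpTransmitSketch {
       public static void main(String[] args) throws Exception {
          // Any source JMF can read: a file, or a capture device locator.
          DataSource src = Manager.createDataSource(new MediaLocator("file:clip.mov"));

          // Ask for JPEG/RTP output wrapped in the RAW_RTP content type.
          ProcessorModel model = new ProcessorModel(src,
                new Format[] { new VideoFormat(VideoFormat.JPEG_RTP) },
                new ContentDescriptor(ContentDescriptor.RAW_RTP));
          Processor p = Manager.createRealizedProcessor(model);

          // A DataSink pointed at an rtp:// locator does the actual streaming.
          DataSink sink = Manager.createDataSink(p.getDataOutput(),
                new MediaLocator("rtp://192.168.1.10:22222/video"));
          sink.open();
          p.start();
          sink.start();
       }
    }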

  • Realtime video frame processing - JMF? FFMpeg? Java2D? ImageMagick?

    Folks,
    I'm researching methods to create a performant realtime (or preferably better-than-realtime) video generation engine.
    I need to:
    * Extract frames from video sources.
    * Extract audio tracks
    * Compose images and text on top of those frames
    * Downmix audio tracks.
    * Write those frames back out as a video w/ audio
    Application needs to be Enterprise grade. We are using a software package which does all this, but it is all but prohibitively expensive, so we would like to attempt a reimplementation.
    I am open to most suggestions, here's what I have tried so far:
    Using Runtime.exec() to have FFmpeg decode to frames, JMagick to composite the output frames, and then Runtime.exec() again to have FFmpeg encode the video. This process suffers from a number of problems. JMagick is poorly implemented and barely supported, so I ended up using ImageMagick via Runtime.exec() instead. It also incurs a massive number of temporary files and a lot of I/O overhead.
    I'd like to explore in memory solutions, maybe using Java2D for example:
    An FFMpeg API that will give me a pipeline of frames into Java which can be stored in Java Image compatible objects, Java 2D processing and an FFMpeg API that will allow output to create the result, so no temporary files.
    JMF... using it directly, I am concerned that its codec support is somewhat limited, and I don't want to start whittling away codecs with "Unsupported" in the application. I looked at some example FFMPEG-Java code (which I believe is a drop-in JMF replacement?); it looked very scary and low-level to me. All I want to do is launch FFmpeg and receive an array of frames.
    Anyone got any thoughts, ideas, opinions, helpful links?
    Cheers,
    Paul
    Edited by: PCampbell on Mar 9, 2009 10:52 AM
    Edited by: PCampbell on Mar 9, 2009 10:55 AM

    Application needs to be Enterprise grade. We are using a software package which does all this, but it is all but prohibitively expensive, so we would like to attempt a reimplementation.
    JMF is no longer supported by Sun, and hasn't been in 6 years. It's only supported on Win9x and NT 4.0... therefore, you cannot build an "enterprise grade" application with it.
    But yeah, you can use the built-in Java stuff to markup images if you so desire...
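    One way to get the in-memory FFmpeg pipeline Paul asked about, with no temporary files: have FFmpeg decode raw frames to stdout and read them straight into BufferedImages. A rough sketch follows; it assumes ffmpeg is on the PATH and a file named input.mov exists, and the 640x360 size must match the scale filter.

    import java.awt.image.BufferedImage;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    public class FFmpegFramePipe {
       public static void main(String[] args) throws Exception {
          final int w = 640, h = 360; // must match the scale filter below

          // FFmpeg decodes input.mov to raw BGR frames on stdout: no temp files.
          Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-i", "input.mov",
                "-vf", "scale=640:360",
                "-f", "rawvideo", "-pix_fmt", "bgr24", "-").start();

          // Drain FFmpeg's stderr chatter so the pipe never blocks.
          final InputStream err = ffmpeg.getErrorStream();
          new Thread() {
             public void run() {
                byte[] b = new byte[4096];
                try { while (err.read(b) != -1) {} } catch (IOException ignored) {}
             }
          }.start();

          DataInputStream in = new DataInputStream(ffmpeg.getInputStream());
          byte[] frame = new byte[w * h * 3]; // bgr24 = 3 bytes per pixel
          int count = 0;
          try {
             while (true) {
                in.readFully(frame); // exactly one frame per iteration
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_3BYTE_BGR);
                img.getRaster().setDataElements(0, 0, w, h, frame);
                // ...composite text/images onto img with img.createGraphics() here,
                // then write the frame bytes to a second FFmpeg's stdin to encode...
                count++;
             }
          } catch (EOFException end) {
             System.out.println("Decoded " + count + " frames.");
          }
       }
    }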

  • JMF Compiling video from a folder of images

    Hello,
    I am trying to create a video from a folder of images. I am using a modified version of the JPEGImagesToMovie.java code.
    The video is not being written correctly: when I play it back, it only shows a blurred image of the final frame.
    Any ideas?
    Cheers, Paul
    import java.util.*;
    import java.io.*;
    import java.awt.Dimension;
    import javax.media.*;
    import javax.media.control.*;
    import javax.media.protocol.*;
    import javax.media.protocol.DataSource;
    import javax.media.datasink.*;
    import javax.media.format.VideoFormat;

    public class VideoThread extends Thread implements ControllerListener, DataSinkListener {

       // The output media locator.
       private MediaLocator oml;
       private int width;
       private int height;
       private int frameRate;
       private String folder;

       public VideoThread(int width, int height, int frameRate, String folder) {
          this.width = width;
          this.height = height;
          this.frameRate = frameRate;
          this.folder = folder;
       }

       public void run() {
          if ((oml = createMediaLocator("images/" + folder + "/" + folder + ".mov")) == null) {
             System.err.println("Cannot build media locator from: " + "images/" + folder + "/" + folder + ".mov");
             return;
          }

          ImageDataSource ids = new ImageDataSource(width, height, frameRate, folder);

          Processor p;
          try {
             p = Manager.createProcessor(ids);
          } catch (Exception e) {
             System.err.println("Cannot create a processor from the data source.");
             return;
          }

          p.addControllerListener(this);

          // Put the Processor into configured state so we can set
          // some processing options on the processor.
          p.configure();
          if (!waitForState(p, p.Configured)) {
             System.err.println("Failed to configure the processor.");
             return;
          }

          // Set the output content descriptor to QuickTime.
          p.setContentDescriptor(new ContentDescriptor(FileTypeDescriptor.QUICKTIME));

          // Query the processor for supported formats.
          // Then set the first one on the processor.
          TrackControl tcs[] = p.getTrackControls();
          Format f[] = tcs[0].getSupportedFormats();
          if (f == null || f.length <= 0) {
             System.err.println("The mux does not support the input format: " + tcs[0].getFormat());
             return;
          }

          tcs[0].setFormat(f[0]);
          System.err.println("Setting the track format to: " + f[0]);

          // We are done with programming the processor.  Let's just
          // realize it.
          p.realize();
          if (!waitForState(p, p.Realized)) {
             System.err.println("Failed to realize the processor.");
             return;
          }

          // Now, we'll need to create a DataSink.
          DataSink dsink;
          if ((dsink = createDataSink(p, oml)) == null) {
             System.err.println("Failed to create a DataSink for the given output MediaLocator: " + oml);
             return;
          }

          dsink.addDataSinkListener(this);
          fileDone = false;

          System.err.println("start processing...");

          // OK, we can now start the actual transcoding.
          try {
             p.start();
             dsink.start();
          } catch (IOException e) {
             System.err.println("IO error during processing");
             return;
          }

          // Wait for EndOfStream event.
          waitForFileDone();

          // Cleanup.
          try {
             dsink.close();
          } catch (Exception e) {}
          p.removeControllerListener(this);

          System.err.println("...done processing.");
       }

       /**
        * Create the DataSink.
        */
       DataSink createDataSink(Processor p, MediaLocator oml) {
          DataSource ds;
          if ((ds = p.getDataOutput()) == null) {
             System.err.println("Something is really wrong: the processor does not have an output DataSource");
             return null;
          }

          DataSink dsink;
          try {
             System.err.println("- create DataSink for: " + oml);
             dsink = Manager.createDataSink(ds, oml);
             dsink.open();
          } catch (Exception e) {
             System.err.println("Cannot create the DataSink: " + e);
             return null;
          }
          return dsink;
       }

       Object waitSync = new Object();
       boolean stateTransitionOK = true;

       /**
        * Block until the processor has transitioned to the given state.
        * Return false if the transition failed.
        */
       boolean waitForState(Processor p, int state) {
          synchronized (waitSync) {
             try {
                while (p.getState() < state && stateTransitionOK)
                   waitSync.wait();
             } catch (Exception e) {}
          }
          return stateTransitionOK;
       }

       /**
        * Controller Listener.
        */
       public void controllerUpdate(ControllerEvent evt) {
          if (evt instanceof ConfigureCompleteEvent ||
              evt instanceof RealizeCompleteEvent ||
              evt instanceof PrefetchCompleteEvent) {
             synchronized (waitSync) {
                stateTransitionOK = true;
                waitSync.notifyAll();
             }
          } else if (evt instanceof ResourceUnavailableEvent) {
             synchronized (waitSync) {
                stateTransitionOK = false;
                waitSync.notifyAll();
             }
          } else if (evt instanceof EndOfMediaEvent) {
             evt.getSourceController().stop();
             evt.getSourceController().close();
          }
       }

       Object waitFileSync = new Object();
       boolean fileDone = false;
       boolean fileSuccess = true;

       /**
        * Block until file writing is done.
        */
       boolean waitForFileDone() {
          synchronized (waitFileSync) {
             try {
                while (!fileDone) {
                   waitFileSync.wait();
                }
             } catch (Exception e) {}
          }
          return fileSuccess;
       }

       /**
        * Event handler for the file writer.
        */
       public void dataSinkUpdate(DataSinkEvent evt) {
          if (evt instanceof EndOfStreamEvent) {
             synchronized (waitFileSync) {
                fileDone = true;
                waitFileSync.notifyAll();
             }
          } else if (evt instanceof DataSinkErrorEvent) {
             synchronized (waitFileSync) {
                fileDone = true;
                fileSuccess = false;
                waitFileSync.notifyAll();
             }
          }
       }

       /**
        * Create a media locator from the given string.
        */
       static MediaLocator createMediaLocator(String url) {
          MediaLocator ml;
          if (url.indexOf(":") > 0 && (ml = new MediaLocator(url)) != null)
             return ml;
          if (url.startsWith(File.separator)) {
             if ((ml = new MediaLocator("file:" + url)) != null)
                return ml;
          } else {
             String file = "file:" + System.getProperty("user.dir") + File.separator + url;
             if ((ml = new MediaLocator(file)) != null)
                return ml;
          }
          return null;
       }

       // Inner classes.

       /**
        * A DataSource to read from a list of image files and
        * turn that into a stream of JMF buffers.
        * The DataSource is not seekable or positionable.
        */
       class ImageDataSource extends PullBufferDataSource {

          ImageSourceStream streams[];

          ImageDataSource(int width, int height, int frameRate, String folder) {
             streams = new ImageSourceStream[1];
             streams[0] = new ImageSourceStream(width, height, frameRate, folder);
          }

          public void setLocator(MediaLocator source) {
          }

          public MediaLocator getLocator() {
             return null;
          }

          /**
           * Content type is RAW since we are sending buffers of video
           * frames without a container format.
           */
          public String getContentType() {
             return ContentDescriptor.RAW;
          }

          public void connect() {
          }

          public void disconnect() {
          }

          public void start() {
          }

          public void stop() {
          }

          /**
           * Return the ImageSourceStreams.
           */
          public PullBufferStream[] getStreams() {
             return streams;
          }

          /**
           * We could have derived the duration from the number of
           * frames and frame rate.  But for the purpose of this program,
           * it's not necessary.
           */
          public Time getDuration() {
             return DURATION_UNKNOWN;
          }

          public Object[] getControls() {
             return new Object[0];
          }

          public Object getControl(String type) {
             return null;
          }
       }

       /**
        * The source stream to go along with ImageDataSource.
        */
       class ImageSourceStream implements PullBufferStream {

          String folder;
          String imageFile;
          int width, height;
          VideoFormat format;
          File file;
          int nextImage = 1;          // index of the next image to be read.
          boolean ended = false;

          public ImageSourceStream(int width, int height, int frameRate, String folder) {
             this.width = width;
             this.height = height;
             this.folder = folder;
             format = new VideoFormat(VideoFormat.JPEG,
                         new Dimension(width, height),
                         Format.NOT_SPECIFIED,
                         Format.byteArray,
                         (float) frameRate);
          }

          /**
           * This is called from the Processor to read a frame worth
           * of video data.
           */
          public void read(Buffer buf) throws IOException {
             imageFile = "images/" + folder + "/webcam" + StringPrefix(nextImage, 6) + ".jpeg";
             file = new File(imageFile);

             // Check if we've finished all the frames.
             if (!file.exists()) {
                // We are done.  Set EndOfMedia.
                buf.setEOM(true);
                buf.setOffset(0);
                buf.setLength(0);
                ended = true;
                System.out.println("all images");
                return;
             }

             System.err.println("  - reading image file: " + imageFile);
             nextImage++;

             // Open a random access file for the next image.
             RandomAccessFile raFile;
             raFile = new RandomAccessFile(imageFile, "r");

             byte data[] = null;

             // Check the input buffer type & size.
             if (buf.getData() instanceof byte[])
                data = (byte[]) buf.getData();

             // Check to see the given buffer is big enough for the frame.
             if (data == null || data.length < raFile.length()) {
                data = new byte[(int) raFile.length()];
                buf.setData(data);
             }

             // Read the entire JPEG image from the file.
             raFile.readFully(data, 0, (int) raFile.length());

             System.err.println("    read " + raFile.length() + " bytes.");

             buf.setOffset(0);
             buf.setLength((int) raFile.length());
             buf.setFormat(format);
             buf.setFlags(buf.getFlags() | buf.FLAG_KEY_FRAME);

             // Close the random access file.
             raFile.close();
          }

          /**
           * We should never need to block assuming data are read from files.
           */
          public boolean willReadBlock() {
             return false;
          }

          /**
           * Return the format of each video frame.  That will be JPEG.
           */
          public Format getFormat() {
             return format;
          }

          public ContentDescriptor getContentDescriptor() {
             return new ContentDescriptor(ContentDescriptor.RAW);
          }

          public long getContentLength() {
             return 0;
          }

          public boolean endOfStream() {
             return ended;
          }

          public Object[] getControls() {
             return new Object[0];
          }

          public Object getControl(String type) {
             return null;
          }

          private String StringPrefix(int number, int length) {
             String stringNum = Integer.toString(number);
             while (stringNum.length() < length)
                stringNum = "0" + stringNum;
             return stringNum;
          }
       }
    }

    If your code was an SSCCE*, I could run it against some images that I have successfully used to create MOVs (using my own variant of JPEGImagesToMovie). It isn't, so I can't.
    The only thing I can suggest (off the top of my head) is that the images need to all be the same width/height, and the same as specified for the MOV, otherwise a corrupted MOV will be written.
    * http://www.physci.org/codes/sscce/

  • JMF and video capture card?

    I want to ask how I can use a video capture card with JMF and record video.
    I tried to detect my capture card, but JMF didn't detect it. Please help me.

    hi,
    I use a TV card with my camera, linking them with a composite cable. My TV card has capture capability, and I used JMF to build a capture program with these tools; the result is good for me. The monitor.zip file, which contains a sample of capturing and monitoring simultaneously, is available on sun.com. I used it and it was very helpful for me.
    JMF detects cards through its CaptureDeviceManager, which is a controller-type object. If you call Vector devices = CaptureDeviceManager.getDeviceList(null), you will get the list of appropriate devices on your PC.
    Regards,
    Erohan
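    For reference, a minimal sketch of the device enumeration Erohan describes. Note that capture cards usually have to be registered with JMF first (by running jmfinit or the JMFRegistry tool) before getDeviceList() will return them, which may be why your card isn't detected.

    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;

    public class ListCaptureDevices {
       public static void main(String[] args) {
          // null means "any format": every capture device JMF has registered.
          Vector devices = CaptureDeviceManager.getDeviceList(null);
          for (int i = 0; i < devices.size(); i++) {
             CaptureDeviceInfo info = (CaptureDeviceInfo) devices.elementAt(i);
             System.out.println(info.getName() + " -> " + info.getLocator());
          }
       }
    }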

  • JMF sending video in HTC Dream

    Hi,
    I am new to JMF. I am able to send a video file successfully from PC to PC using the sample source code on the JMF website with ease.
    However, I am having difficulty transferring a video file from an HTC Dream (Android) to my PC. It jumps out of the try-catch without an exception at:
    MediaLocator locator = new MediaLocator("file://sdcard/aa.3gp");
    ds = Manager.createDataSource(locator); <--- error happened here!
    My media file is aa.3gp and is stored in my Android phone's sdcard.
    Anyone can help ?
    Warmest Regards
    Whitetaru

    Hi,
    Thanks for the great information. However, even if I change aa.3gp to another file (.mov), it still gives the same error.
    The path /sdcard/<myfile> is correct, as I have written another program that saves movie files to the sdcard. I've been scratching my head for a few days with no idea how to resolve this error. Think I'm getting bald soon :)
    Well, I wonder if the debug messages below would be helpful. The error should be after "WARN/ActivityManager(65)"....
    09-16 13:36:07.314: INFO/StubService(1111): into handleInternalConn!!! isNetworkConnected:true
    09-16 13:36:07.314: INFO/StubService(1111): username not null:falsepassword not null:false
    09-16 13:36:37.964: WARN/AudioFlinger(37): write blocked for 127 msecs
    09-16 13:36:42.394: WARN/InputManagerService(65): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@437b3da8
    09-16 13:36:43.164: INFO/jdwp(65): received file descriptor 103 from ADB
    09-16 13:36:43.184: INFO/jdwp(100): received file descriptor 11 from ADB
    09-16 13:36:43.194: INFO/jdwp(102): received file descriptor 51 from ADB
    09-16 13:36:43.204: INFO/jdwp(145): received file descriptor 59 from ADB
    09-16 13:36:43.224: INFO/jdwp(277): received file descriptor 31 from ADB
    09-16 13:36:43.234: INFO/jdwp(999): received file descriptor 24 from ADB
    09-16 13:36:43.244: INFO/jdwp(1008): received file descriptor 25 from ADB
    09-16 13:36:43.254: INFO/jdwp(1111): received file descriptor 24 from ADB
    09-16 13:36:43.264: INFO/jdwp(1182): received file descriptor 33 from ADB
    09-16 13:36:43.284: INFO/jdwp(1237): received file descriptor 32 from ADB
    09-16 13:36:43.294: INFO/jdwp(1285): received file descriptor 24 from ADB
    09-16 13:37:11.064: INFO/jdwp(1330): received file descriptor 23 from ADB
    09-16 13:37:16.474: INFO/PackageManager(65): Removing non-system package:com.pandvt
    09-16 13:37:16.974: INFO/PackageManager(65): /data/app/vmdl37043.tmp changed; unpacking
    09-16 13:37:24.754: INFO/installd(39): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex
    09-16 13:37:25.204: INFO/dalvikvm(1330): Debugger has detached; object registry had 1 entries
    09-16 13:37:26.084: INFO/ActivityManager(65): Start proc com.android.voicedialer for broadcast com.android.voicedialer/.VoiceDialerReceiver: pid=1340 uid=10004 gids={3002}
    09-16 13:37:26.104: WARN/ResourceType(65): No package identifier when getting value for resource number 0x7f060001
    09-16 13:37:26.194: INFO/jdwp(1340): received file descriptor 11 from ADB
    09-16 13:37:26.264: WARN/ResourceType(65): No package identifier when getting value for resource number 0x7f060001
    09-16 13:37:27.944: INFO/ActivityManager(65): Stopping service: com.android.vending/.PackageMonitorReceiver$UpdateCheckinDatabaseService
    09-16 13:37:30.144: INFO/CheckinService(65): Checkin triggered: Intent { action=android.server.checkin.CHECKIN (has extras) }, market only = true
    09-16 13:37:30.164: INFO/CheckinService(65): Checkin triggered: Intent { action=android.server.checkin.CHECKIN (has extras) }, market only = true
    09-16 13:37:30.524: INFO/CheckinService(65): Sending checkin request (1916 bytes)...
    09-16 13:37:30.574: INFO/CheckinService(65): Sending checkin request (1916 bytes)...
    09-16 13:37:30.644: INFO/jdwp(1348): received file descriptor 23 from ADB
    09-16 13:37:31.394: INFO/ActivityManager(65): Starting activity: Intent { flags=0x10000000 comp={com.pandvt/com.pandvt.pandvt} }
    09-16 13:37:31.454: INFO/dalvikvm(1348): Debugger has detached; object registry had 1 entries
    09-16 13:37:31.564: INFO/ActivityManager(65): Start proc com.pandvt for activity com.pandvt/.pandvt: pid=1357 uid=10057 gids={1006}
    09-16 13:37:31.624: INFO/jdwp(1357): received file descriptor 11 from ADB
    09-16 13:37:31.754: WARN/ActivityThread(1357): Application com.pandvt is waiting for the debugger on port 8100...
    09-16 13:37:31.774: INFO/System.out(1357): Sending WAIT chunk
    09-16 13:37:31.944: INFO/dalvikvm(1357): Debugger is active
    09-16 13:37:32.034: INFO/System.out(1357): Debugger has connected
    09-16 13:37:32.064: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:32.264: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:32.464: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:32.664: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:32.864: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:33.064: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:33.274: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:33.474: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:33.674: INFO/System.out(1357): waiting for debugger to settle...
    09-16 13:37:33.874: INFO/System.out(1357): debugger has settled (1409)
    09-16 13:37:41.444: WARN/ActivityManager(65): Launch timeout has expired, giving up wake lock!
    09-16 13:37:41.894: WARN/ActivityManager(65): Activity idle timeout for HistoryRecord{43537058 {com.pandvt/com.pandvt.pandvt}}
    09-16 13:37:42.954: WARN/ActivityManager(65): Activity pause timeout for HistoryRecord{43537058 {com.pandvt/com.pandvt.pandvt}}
    09-16 13:37:43.544: INFO/CheckinService(65): From server: Intent { action=android.server.checkin.FOTA_CANCEL }
    09-16 13:37:43.654: INFO/CheckinService(65): From server: Intent { action=android.server.checkin.FOTA_CANCEL }
    09-16 13:38:17.264: WARN/dalvikvm(1357): VFY: unable to resolve static field 1660 (PROPERTY) in Lcom/ms/security/PermissionID;
    09-16 13:38:17.264: WARN/dalvikvm(1357): VFY: rejecting opcode 0x62 at 0x012e
    09-16 13:38:17.264: WARN/dalvikvm(1357): VFY: rejected Lcom/sun/media/util/Registry;.<clinit> ()V
    09-16 13:38:17.264: WARN/dalvikvm(1357): Verifier rejected class Lcom/sun/media/util/Registry;
    09-16 13:38:17.304: WARN/dalvikvm(1357): Exception Ljava/lang/VerifyError; thrown during Ljavax/media/pm/PackageManager;.<clinit>
    09-16 13:38:17.324: WARN/dalvikvm(1357): Exception Ljava/lang/ExceptionInInitializerError; thrown during Ljavax/media/PackageManager;.<clinit>

  • Displaying realtime video

    Hi,
    I am making a VI where I am acquiring frames from a camera using an NI frame grabber. Using the IMAQ Grab function, I can loop to acquire images at high speed and display them at almost realtime. Another part of the VI is intended to analyse the images as quickly as possible. However, the image processing takes more time than it takes to acquire the images.
    My problem is that I don't know how to display the acquired images as quickly as they are acquired while still processing frames (not every frame; when the program finishes processing a frame, it should begin processing whatever frame was last acquired). How can I set up my VI so that it isn't waiting for the processing to finish before updating the display?
    I was thinking I need two separate while loops: one to acquire frames and display them in "realtime", and another while loop that processes frames. Could I use a local variable in the processing loop to access the image display in order to obtain the current frame? I have heard a lot of negatives about local variables, so I was wondering if there was a better way to do this.
    Any suggestions would be very helpful,
    Thanks a lot 
    Jeff
    Using Labview 7 Express

    Hello everyone,
    Thanks for posting up to this point, and I like all of the recommendations for this kind of situation.  Processing images while acquiring them at the same time is actually a very common process that IMAQ customers face.  There is actually more than what meets the eye for image acquisition and processing in LabVIEW that I would like to mention.
    Probably the biggest difference between IMAQ and other LabVIEW related activities is the fact that the image buffer datatype (purple wire) that is being passed around from one vision process to the next only contains a reference to the image in memory and not the actual pixel data.  This was purposefully done to increase performance in LabVIEW when passing the image from one point to the next since images are typically very large in size compared to other datatypes used in LabVIEW.  For this reason, the image datatype reference does not need to be passed through a queue to access the image data between multiple loops in LabVIEW.  There are more efficient means in doing so.
    Instead, consider using an image acquisition process called a Ring.  A ring configures the IMAQ driver to acquire a sequence of images into memory automatically, and then you can call IMAQ Extract Buffer in your processing loop to pull an image from this buffer list into a processing buffer (one you created with the IMAQ Create VI).  When the IMAQ card reaches the end of the sequence (called a buffer list) in the driver-managed environment, it automatically starts over at the beginning of the list.  Therefore, the images are held in memory until you need them for processing.  For more information on the use of Rings, check out the following suggested links:
    NI Developer Zone: Rings
    NI Developer Zone: Ring Acquisitions
    Now, hopefully with a more efficient implementation of image sharing between loops in place, your processing loop will be able to keep up with the acquisition loop.  Otherwise, the ring's buffer-list index will eventually overwrite an image that you have not yet processed, effectively losing data in your program.  If your processing loop is simply taking too long, then you will have to consider other options to improve processing performance.  Some ways to do this include the use of ROIs to specify what you want to process, fine-tuning pattern matching algorithms, substituting slower algorithms with other methods, reducing the resolution of the images that you are acquiring, or perhaps acquiring all of the images first (using a sequence acquisition) and then running the processing loop afterwards.
    I hope this gives you some direction.  Working with images can be quite a different experience, and it is important to understand the acquisition tools that you have available in order to get the most out of the driver.  Please post back if you have any followup questions.
    Regards,
    Mike Torba
    Vision Applications Engineer
    National Instruments 
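    The acquire/process split described above isn't LabVIEW-specific. For readers coming to this from the Java threads in this collection, here is a hypothetical sketch of the same "always process the newest frame" handoff, with a placeholder array standing in for a real image:

    import java.util.concurrent.atomic.AtomicReference;

    public class LatestFrameDemo {
       // Holds only the most recent frame; unprocessed older frames are dropped.
       static final AtomicReference<int[]> latest = new AtomicReference<int[]>();

       public static void main(String[] args) {
          Thread acquire = new Thread() {
             public void run() {
                int n = 0;
                while (true) {
                   int[] frame = new int[] { n++ }; // stand-in for a grabbed image
                   latest.set(frame);               // display update would happen here
                   pause(10);                       // ~100 fps acquisition
                }
             }
          };
          Thread process = new Thread() {
             public void run() {
                while (true) {
                   int[] frame = latest.getAndSet(null); // take the newest, skip the rest
                   if (frame != null)
                      pause(100);                   // stand-in for slow processing
                   else
                      pause(1);
                }
             }
          };
          acquire.start();
          process.start();
       }

       static void pause(long ms) {
          try { Thread.sleep(ms); } catch (InterruptedException e) { }
       }
    }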

  • Fantastic animation video: is it possibile replicate it in Edge?

    Hi all!
    I've found this spectacular animation video http://nystudyvacations.com and I wonder if it's possible to replicate it in Edge. I think Edge only lacks the Z-axis rotation to make this; am I correct?
    Thanks!
    Davide

    sorry - I see it's not clear what I want (surely because of my English...)
    The question is not how to make this 3D animation (it's already done); it's just a video with a transparent background.
    So what I want to do in Edge:
    If someone clicks one of the orange buttons, the video with the 3D animation should appear.
    And the output of the whole project shouldn't be a video (like the one on YouTube) but an interactive version, where users can click.
    I just filmed the Flash version with screencast software and uploaded it to YouTube to show it to you.
    The big question: is that possible with Adobe Edge?

  • RealTime video in JPanel ..

    Hey guys,
    I'm doing a project for my last semester and I'm stumped. What I am doing is capturing real-time video from my mobile camera via Bluetooth onto my PC. I'm modelling it as a server-client system, with the server being my PC and the app on my phone as the client. I'm capturing the images through the cell phone camera and sending them to my PC. Now I need to show real-time images on my PC, i.e. what is being seen by my mobile camera...
    I have the following questions:
    1. How can I approach the problem?
    2. Is this a difficult thing to do?
    3. Should I be using a JPanel for it?
    4. And will this come under real time streaming protocol?
    Thanks in advance.

    797661 wrote:
    I'm capturing the images through the cell phone camera and sending them to my PC. Now I need to show real-time images on my PC, i.e. what is being seen by my mobile camera...
    Sending the uncompressed images from your cell phone camera, which are probably going to be fairly high resolution for most cell phones, doesn't sound like a good idea in terms of bandwidth...
    1. How can I approach the problem?
    Using the JMF framework, you could write some custom stuff to turn the series of images into an actual movie file and then use the built-in JMF stuff to render the movie to the screen...
    2. Is this a difficult thing to do?
    Nope.
    3. Should I be using a JPanel for it?
    If you're just wanting to render the images to the JPanel as you receive them, that's certainly doable... but I wouldn't expect the performance of that to approach anything decent.
    4. And will this come under real time streaming protocol?
    RTSP is a specific protocol, not a class of "user defined in an armchair" protocols. In order for something to classify as RTSP, you'd need to be sending the data as RTP packets.
    JMF is capable of creating and sending / receiving and displaying several different formats of RTP streams, but JMF doesn't run on phones. So, using whatever technology is available for your phone to produce an RTP stream, you could use JMF to receive and display that.
    And because JMF doesn't run on phones, you'd need to seek out a phone-specific forum to determine how to produce an RTP stream with your phone's camera...
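    To make point 3 concrete, here is an untested sketch of rendering each received JPEG into a JPanel. nextFrame() is a hypothetical method your Bluetooth receive loop would call with each image's bytes; the panel itself drops into a JFrame like any other component.

    import java.awt.Graphics;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import javax.imageio.ImageIO;
    import javax.swing.JPanel;

    class VideoPanel extends JPanel {
       private volatile BufferedImage current;

       // Call this from the Bluetooth receive thread with each JPEG's bytes.
       public void nextFrame(byte[] jpegBytes) throws IOException {
          current = ImageIO.read(new ByteArrayInputStream(jpegBytes));
          repaint(); // schedules a repaint on the Swing event thread
       }

       protected void paintComponent(Graphics g) {
          super.paintComponent(g);
          BufferedImage img = current;
          if (img != null)
             g.drawImage(img, 0, 0, getWidth(), getHeight(), null);
       }
    }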

  • JMF set video source to S-video/Tuner/Composite during runtime

    Does anyone know how to change the Video Source from Video Tuner to Video Composite or S-Video?
    I'm using a LifeView card. I can get them to work manually; however, I have no clue how to set them at runtime. I want buttons to switch from Tuner to Composite to S-Video at will.
    10 Duke dollars to anyone who answers it!
    Thanks,
    Hiren


  • Realtime video stream on the cell Phone

    Hi,
    I have to develop a prototype to show a cell phone handling a CCTV feed
    1. I am trying to figure out how easy or difficult this is. The cell phone should be able to display a real-time feed coming from a CCTV and give the cell phone user the ability to record, stop, and rewind it.
    2. Any idea how long the prototype should take? Any crude demo will do.
    3. What components of J2ME should I use (CDC,CLDC ...?..sorry I am new to this).
    4. Is there any sample code that can help me in doing it quickly ?
    Thanks in advance
    QM
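    Regarding questions 3 and 4: for MIDP handsets the usual combination is CLDC/MIDP plus the Mobile Media API (JSR-135). A hypothetical, untested sketch of displaying a stream is below; the rtsp:// URL is a placeholder, actual streaming support varies a lot by handset, and record/rewind would additionally need RecordControl and local buffering.

    import javax.microedition.lcdui.*;
    import javax.microedition.media.Manager;
    import javax.microedition.media.Player;
    import javax.microedition.media.control.VideoControl;
    import javax.microedition.midlet.MIDlet;

    public class CctvMidlet extends MIDlet {
       public void startApp() {
          try {
             // MMAPI player on an RTSP locator (placeholder URL).
             Player p = Manager.createPlayer("rtsp://example.com/cctv");
             p.realize();
             VideoControl vc = (VideoControl) p.getControl("VideoControl");
             Form form = new Form("CCTV");
             form.append((Item) vc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null));
             Display.getDisplay(this).setCurrent(form);
             p.start();
          } catch (Exception e) {
             e.printStackTrace();
          }
       }
       public void pauseApp() {}
       public void destroyApp(boolean unconditional) {}
    }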

    What is the video converted from? And what format is it converted to?
    Premiere Elements is technically not a conversion program. It's a video editor. But if you can tell us what type of camcorder your original video came from and what specs you output, that would be helpful.
    It also might help to know which version of Premiere Elements you're using.

  • Realtime video mapping (plugin for Millumin)

    Hello,
    Millumin, the software for videomapping and theaters, is proud to introduce its new After Effects plugin.
    It allows you to send the output from After Effects directly to Millumin: it's a bridge between the creation (with After Effects) and the video projection onto the surface (the videomapping). No more intermediate rendering: everything is now done in real time!
    Here is a tutorial, showing the process and some examples (light effect, contours, 3D, ...)
    ---> http://vimeo.com/71524396
    Let me know if you have any questions. Philippe

    You're probably going to have to make a new video composite with ffmpeg.
    http://stackoverflow.com/questions/1043 … ing-ffmpeg
    http://superuser.com/questions/868204/o … ith-ffmpeg
    I'll give you some examples:
    This will put images on top of your video
    Top left corner
    ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:10 [out]" outputvideo.flv
    Top right corner
    ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv
    Bottom left corner
    ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" outputvideo.flv
    Bottom right corner
    ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" outputvideo.flv
    Here are 3 examples for when I am overlaying subs onto a video
    Overlay dvd_subs onto video (1st vid, 2nd audio, 2nd sub streams)
    ffmpeg -i movie.vob -filter_complex "[0:0][0:4]overlay[0]" -map [0] -map 0:2 -c:a copy -c:v libx264 -b:v 2500k movie.avi
    ffmpeg -i movie.vob -filter_complex "[0:v][0:s:1]overlay[v]" -map [v] -map 0:a:1 -c:a copy -c:v libx264 -b:v 2500k movie.avi
    Overlay dvd_subs with cropping, video cropped=a, subs scaled=b, overlay (b over a)=c
    ffmpeg -i video.vob -filter_complex "[0:0]crop=706:362:8:56[a];[0:4]scale=706:362[b];[a][b]overlay[c]" -map [c] -map 0:2 -c:a copy -c:v libx264 -b:v 2500k out.avi

  • Realtime video

    I wonder if Apple will come out with that, because it's amazing; how could they leave it out?
    But I will still always love Apple; they're my favorite.

    Tell Apple what you think.
    http://www.apple.com/feedback/iphone.html

  • Playing a video file retrieved from database using JMF

    Hello!
    I am developing a multimedia application that has to store videos, music, and pictures in a database. I did a little searching and found that JMF is a very good solution for playing video/music in Java. I found some examples that play video/music stored on the hard drive (as separate files), but I have to be able to take the video/music from the database and feed it to the JMF player. Does anyone have suggestions about how this could be done?
    Thanks in advance!
    Edited by: radu.miron on May 8, 2008 9:03 AM

    Well, I think I didn't make myself clear enough :). I know how to retrieve the data from the database. The thing is this: let's suppose I have a 700 MB movie stored in the database. One option to play that movie would be to retrieve it from the DB, create a file somewhere on the disk, and put the data retrieved from the database into that file. But this means the disk will be flooded when, let's say, 100 people watch 100 different movies.
    Another option (as I see it) would be to gradually take parts of the movie from the database (the first 50 MB, then another 50 MB, then another, and so on) and feed them to the JMF player. The user will watch the movie but will not have the whole movie available, just a part of it. As he watches, the application takes the next chunk of movie data and feeds it to the JMF player.
    That was the question I intended to ask: if anyone has any idea regarding the second option, and not the part about retrieving from the database, but the part about giving the JMF player video data to play.
    The example i found on the web with JMF player is the following:
    import javax.swing.*;
    import javax.media.*;
    import java.awt.*;
    import java.awt.event.*;
    import java.net.*;
    import java.io.*;

    public class PlayVideo extends JFrame {

         Player player;
         Component center;
         Component south;

         public PlayVideo() {
              setDefaultCloseOperation(EXIT_ON_CLOSE);
              JButton button = new JButton("Select File");
              ActionListener listener = new ActionListener() {
                   public void actionPerformed(ActionEvent event) {
                        JFileChooser chooser = new JFileChooser(".");
                        int status = chooser.showOpenDialog(PlayVideo.this);
                        if (status == JFileChooser.APPROVE_OPTION) {
                             File file = chooser.getSelectedFile();
                             try {
                                  load(file);
                             } catch (Exception e) {
                                  System.err.println("Try again: " + e);
                             }
                        }
                   }
              };
              button.addActionListener(listener);
              getContentPane().add(button, BorderLayout.NORTH);
              pack();
              show();
         }

         public void load(final File file) throws Exception {
              URL url = file.toURL();
              final Container contentPane = getContentPane();
              if (player != null) {
                   player.stop();
              }
              player = Manager.createPlayer(url);
              ControllerListener listener = new ControllerAdapter() {
                   public void realizeComplete(RealizeCompleteEvent event) {
                        Component vc = player.getVisualComponent();
                        if (vc != null) {
                             contentPane.add(vc, BorderLayout.CENTER);
                             center = vc;
                        } else if (center != null) {
                             contentPane.remove(center);
                             contentPane.validate();
                        }
                        Component cpc = player.getControlPanelComponent();
                        if (cpc != null) {
                             contentPane.add(cpc, BorderLayout.SOUTH);
                             south = cpc;
                        } else if (south != null) {
                             contentPane.remove(south);
                             contentPane.validate();
                        }
                        pack();
                        setTitle(file.getName());
                   }
              };
              player.addControllerListener(listener);
              player.start();
         }

         public static void main(String args[]) {
              PlayVideo pv = new PlayVideo();
         }
    }
    But this example plays a video stored on the disk ( player = Manager.createPlayer(url); ), rather than a chunk of data (the whole movie or parts of it) retrieved from the database.
    Sorry for the misunderstanding!
    Cheers!
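    For the second option (feeding the player gradually), one untested approach is to wrap the database stream in a custom PullDataSource, so the Player reads straight from an InputStream instead of a file. This is a hypothetical sketch, and a caveat applies: JMF can only play sequentially-readable formats this way (MPEG-1, for example); containers that keep their index at the end of the file (some .mov/.avi files) need random access and won't play from a forward-only stream.

    import java.io.IOException;
    import java.io.InputStream;
    import javax.media.*;
    import javax.media.protocol.*;

    class StreamDataSource extends PullDataSource {
       private final InputStream in;
       private final String contentType; // e.g. "video.mpeg"

       StreamDataSource(InputStream in, String contentType) {
          this.in = in;
          this.contentType = contentType;
       }

       public PullSourceStream[] getStreams() {
          return new PullSourceStream[] { new StreamStream() };
       }

       public String getContentType() { return contentType; }
       public void connect() {}
       public void disconnect() {}
       public void start() {}
       public void stop() {}
       public Time getDuration() { return DURATION_UNKNOWN; }
       public Object[] getControls() { return new Object[0]; }
       public Object getControl(String type) { return null; }

       // A forward-only stream wrapped in JMF's PullSourceStream interface.
       private class StreamStream implements PullSourceStream {
          private boolean eos = false;

          public int read(byte[] buffer, int offset, int length) throws IOException {
             int n = in.read(buffer, offset, length);
             if (n == -1) eos = true;
             return n;
          }

          public boolean willReadBlock() { return true; }
          public boolean endOfStream() { return eos; }
          public long getContentLength() { return LENGTH_UNKNOWN; }
          public ContentDescriptor getContentDescriptor() {
             return new ContentDescriptor(contentType);
          }
          public Object[] getControls() { return new Object[0]; }
          public Object getControl(String type) { return null; }
       }
    }

    Usage would then be something like (hypothetical JDBC column):
    Player p = Manager.createPlayer(new StreamDataSource(resultSet.getBlob(1).getBinaryStream(), "video.mpeg"));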
