10.4.9 breaks FC video capture

Installing the 10.4.9 update on my 1.42 GHz G4 eMac essentially broke Final Cut's video capture capabilities, showing the same failures in both my Final Cut Pro HD 4.5 and Final Cut Express 2.0 versions of the program.
I have taken a number of steps to confirm that the source of the problem is the 10.4.9 update, including re-installing 10.4.0 from DVD and then running capture tests before and after re-installing the 10.4.9 software update.
Under 10.4.9 it was impossible to capture more than a few seconds of uninterrupted DV video, even though I was using equipment, software and procedures that had performed reliably many dozens of times previously. Watching the access LEDs on my external hard drive illuminated the core of the problem (possibly an issue with storage buffering or FireWire).
When video capture is functioning correctly, the system writes to the drive in frequent short bursts, several times a second. Running under 10.4.9, the capture attempts show very long periods with no drive activity, followed by long continuous writes. While those long writes are taking place, the input video freezes and falls silent, both on the preview screen and in the resulting movie file on the hard disk.
One odd note is that, in testing, I was able to get a few usable video captures by setting the device control to "non-controllable device" and using "Capture Now" mode. However, EVERY capture attempt with device control set to "FireWire" or "FireWire Basic" would fail, and once the capture function breaks, it stays broken until the system is rebooted, regardless of subsequent device control adjustments or capture mode.
I currently have the system re-installed and re-updated to 10.4.7, and my Final Cut video capture is functioning normally. Based on considerable knowledge of computing and media systems, my own experiences with the 10.4.9 update, and other comments about 10.4.9 that I have read elsewhere, this latest OS X update seems to make changes to some critical, low-level functions that have the potential to break a great deal of software and peripherals. My standing advice to clients and colleagues will be to stay away from 10.4.9 until Apple addresses the apparent compatibility problems.
peace
......

I'm afraid I can't help you other than to suggest you post your Final Cut issues in the Final Cut forum area here:
http://discussions.apple.com/category.jspa?categoryID=123
I doubt 10.4.9 broke anything.
I am a very knowledgeable and experienced systems tech and have done extensive troubleshooting and testing trying to resolve this issue. Acknowledging that there are very large numbers of variables involved in any modern computing process, I have scientifically confirmed that, at least for this system / model / software combination, 10.4.9 is definitely the variable that is breaking my video capture capabilities.
Having stepped through most of the OS updates under identical hardware, software and preference settings:
OS X 10.3.x to 10.4.8 == capture functions normally
OS X updated to 10.4.9 == capture broken
Exactly which of the many variables in the 10.4.9 update is causing the breakage is up to Apple to determine; I don't have access to the source code like I do with my GNU OSS & Linux systems.
Still, per your suggestion, I will cross-post this issue to the Final Cut list. There is always the possibility that some hidden variable in the equation could be causing the conflicts between the software versions I need to rely on and 10.4.9, though I have already tested a number of hardware and preference configurations to eliminate the most probable suspects.
peace

Similar Messages

  • Problem in video capture at server and fetching by client

    hi
    I am doing my final year project: it has a server at one end, which is connected to a video capture device (a webcam in my case), and the client is a mobile phone.
    I want the live captured video to be transmitted to the mobile client on a fetch-video request.
    I have tried implementing it but am facing some problems, so I would like to know the reason behind them, and also to ask whether the approach I am following is correct.
    At the server end I tried to extract the frame as follows:
    //the datasource handler class is as follows
    /*
     * Inner class MyDSHandler takes the output DataSource from the
     * Processor and extracts the frame from it.
     * It implements BufferTransferHandler so that it can receive
     * Buffers from the PushBufferStream.
     */
    public class MyDSHandler implements BufferTransferHandler
    {
         DataSource source;
         PullBufferStream pullStrms[] = null;
         PushBufferStream pushStrms[] = null;
         Buffer readBuffer = null;
         int i = 1, j = 1;

         /*
          * Sets the media source this MediaHandler should use to obtain content.
          * @param source the DataSource from the Processor
          */
         private void setSource(DataSource source) throws IncompatibleSourceException
         {
              // Different types of DataSources need to be handled differently.
              if(source instanceof PushBufferDataSource)
              {
                   pushStrms = ((PushBufferDataSource) source).getStreams();
                   // Set the transfer handler to receive pushed data from the push DataSource.
                   // pushStrms[0] since we need to handle only the video stream
                   pushStrms[0].setTransferHandler(this);
              }
              else if(source instanceof PullBufferDataSource)
              {
                   System.out.println("PullBufferDataSource!");
                   // This handler only handles push buffer datasources.
                   throw new IncompatibleSourceException();
              }
              this.source = source;
              readBuffer = new Buffer();
         }

         /*
          * Called when there is data pushed from the PushBufferDataSource.
          * @param stream the PushBufferStream obtained from the DataSource
          */
         public void transferData(PushBufferStream stream)
         {
              try
              {
                   stream.read(readBuffer);
              }
              catch(Exception e)
              {
                   System.out.println(e);
                   return;
              }
              if((readBuffer == null) || (readBuffer.getLength() == 0))
              {
                   System.out.println("Null or empty buffer encountered..");
                   return;
              }
              // Copy, in case the contents of the data object are changed by some other thread
              Buffer inBuffer = (Buffer)(readBuffer.clone());
              // Check for end of stream
              if(readBuffer.isEOM())
              {
                   System.out.println("End of stream");
                   return;
              }
              // Frame control could be applied here by deciding whether to
              // process the frame or not
              processBuffer2(inBuffer);
         }

         public void start()
         {
              try{source.start();}catch(Exception e){System.out.println(e);}
         }
         public void stop()
         {
              try{source.stop();}catch(Exception e){System.out.println(e);}
         }
         public void close(){stop();}
         public Object[] getControls()
         {
              return new Object[0];
         }
         public Object getControl(String name)
         {
              return null;
         }

         /*
          * Processes the Buffer, i.e. converts it into an image and
          * transfers its content via the socket.
          * @param inBuffer the buffer received from the PushBufferStream
          */
         public void processBuffer2(Buffer inBuffer)
         {
              // extract the frame from the video and write the image to the stream
              System.out.println(inBuffer.getLength());
              YUVFormat format = (YUVFormat)inBuffer.getFormat();
              Image img = (new BufferToImage(format)).createImage(inBuffer);
              BufferedImage bimg = (BufferedImage)img;
              if(bimg != null)
              {
                   try
                   {
                        // encode the image as JPEG and write it to the socket
                        ImageIO.write(bimg, "jpg", clientOut);
                        System.out.println("Data written to stream");
                        logArea.append("\n Image data written to stream");
                   }
                   catch(Exception e)
                   {
                        System.out.println(e);
                   }
              }
         }
    }
    How can one control the frame rate of the video (say, if I require a speed of 7 frames per second) and also the resolution?
    I am trying to convert each fetched frame to JPEG, and at the client I tried making use of the JPEG format's markers by detecting the image start and end while reading the image data in bytes from the input stream (0xFFD8 as start of image and 0xFFD9 as end):
    byte[] img_data = new byte[4*1024]; // buffer to read and hold one image's data
    try
    {
        while((data = dis.read()) != -1)
        {
            //System.out.print(Integer.toHexString(data));
            if(data == 0xFF)
            {
                fm = true;
            }
            else if(fm)
            {
                if(data == 0xD8)
                {
                    // start of the image
                    System.out.print("Start of image found  : ");
                    // ctr should be zero at this stage
                    // write the marker to the byte[]
                    img_data[ctr] = (byte)0xFF;
                    ctr++;
                    img_data[ctr] = (byte)0xD8;
                    ctr++;
                }
                else if(data == 0xD9)
                {
                    // end of image: write the marker to the byte[]
                    img_data[ctr] = (byte)0xFF;
                    ctr++;
                    img_data[ctr] = (byte)0xD9;
                    ctr++;
                    // constructing the image from the byte[]
                    img = Image.createImage(img_data, 0, ctr-1);
                    if(img != null)
                    {
                        repaint();
                        System.out.println("Image drawn");
                    }
                    else
                    {
                        System.out.println("Image is null");
                    }
                    ctr = 0;  // ctr back to zero for the new image
                }
                else
                {
                    // an 0xFF followed by an ordinary byte: write both to the byte[]
                    img_data[ctr] = (byte)0xFF;
                    ctr++;
                    img_data[ctr] = (byte)data;
                    ctr++;
                }
                fm = false;
            }
        }
    }
    catch(Exception e){}
    The problem I am facing is that the client gets a black image for just a fraction of a second, and then the application hangs and no video is visible.
    I have very little time left to complete this project as the deadline is very close.
    I would be very grateful for timely guidance.

    equator07 wrote:
    Thanks for the reply.
    i saw in recent posts some protocols had been made use of like rtp i am not making use of any of these will it effect?
    I can see that you're not using RTP... and "will it effect?" doesn't make any sense AT ALL...
    is the way i have made use of jmf correct?
    There's no correct way to use an API... but no, you're not doing things in even remotely a "traditional" way...
    like the way i am extracting the frame, and how can i set the resolution of the video taken by cam and adjust the frame rate?
    I have no idea if your code works. I assume that you would have tested it, and proved it to either be producing a valid JPEG image or not. If you've not done that testing, then you obviously need to.
    because i want live video to be seen at client end i am seriously confused and the progress as just come to an hault at this point and am in real need of guidance.
    You should probably be using RTP and the traditional way of using JMF then...
    [http://forums.sun.com/thread.jspa?messageID=10930286#10930286]
    Read through all of the links I gave the person on the above thread and you'll be on a lot better footing as far as using JMF goes.
    shall i send the server side code.Till now server and client are both on local host.
    No, I don't need to see any more of your code. I'm not a proofreader, and I'm not going to debug it for you. Do your own work.
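    A side note on the marker-scanning approach discussed in this thread: byte pairs beginning with 0xFF (restart markers, or the SOI/EOI of an embedded thumbnail) can occur inside a JPEG stream, and the fixed 4 KB buffer will overflow on any frame larger than 4 KB. A length-prefixed framing protocol avoids both problems. This is a minimal sketch, not the poster's code; the class and method names are invented for illustration:

    ```java
    import java.io.*;

    // Each frame is sent as a 4-byte big-endian length followed by the JPEG bytes,
    // so the receiver never has to guess where one image ends and the next begins.
    public class FrameProtocol {
        public static void writeFrame(DataOutputStream out, byte[] jpeg) throws IOException {
            out.writeInt(jpeg.length); // length prefix
            out.write(jpeg);           // payload
            out.flush();
        }

        public static byte[] readFrame(DataInputStream in) throws IOException {
            int len = in.readInt();    // read the length prefix
            byte[] jpeg = new byte[len];
            in.readFully(jpeg);        // block until the whole frame has arrived
            return jpeg;
        }

        public static void main(String[] args) throws IOException {
            byte[] fake = {(byte) 0xFF, (byte) 0xD8, 1, 2, (byte) 0xFF, (byte) 0xD9};
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            writeFrame(new DataOutputStream(sink), fake);
            byte[] back = readFrame(new DataInputStream(
                    new ByteArrayInputStream(sink.toByteArray())));
            System.out.println(back.length == fake.length); // prints: true
        }
    }
    ```

    On the client, each readFrame result can be handed straight to Image.createImage, with no marker scanning at all.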

  • Lost segments during video capture

    I know I've got enough horsepower to do the job, but every time I attempt video capture, either from the S-video input (Hi8 camcorder) or via FireWire (DV camcorder), I get periodic breaks in the captured data. I'm not losing frames per se; I lose 2-3 second segments of the video stream every 9 to 30 seconds, with no repeatable pattern that I can see.
    I've tried most of the tips found on various sites, i.e. eliminating background tasks and registry fiddling, with only some improvement.
    I've tried the following software:
    Intervideo Winproducer
    Intervideo Wincinema
    Sonic MyDVD
    Arcsoft Showbiz
    Any help, ideas welcome.
    BTW, thanks to the posters here. I've found solutions to a number of problems and some good ideas in the few days I've been lurking here.

    On one of my older systems, I experienced the same problem. It was solved by disabling USB in the BIOS (I don't have much of an explanation for this; maybe a strange driver).
    On the other hand, try iuVCR (http://www.iulabs.com, they could pay me for advertising :D ); it's the only application I've used that is able to capture to DivX on the fly without losing frames (of course it depends on CPU frequency; I have a 1 GHz Athlon, capturing 352x288, but on a 2.4 GHz machine 640x480 is OK).
    Costin

  • Using MacBook Air to video capture / analyze running strides

    I'd like to use my MacBook Air's video capture abilities to capture video of a runner and then break the video down into photo clips to analyze a person's stride and form. Is this possible?
    I also have a Sony HD camera that will take video of form, but I want to see and analyze it within 5 minutes on the MacBook. Is there special software I should get?
    I'm doing my own research but wanted to call on the experts for their thoughts.

    The simplest solution would be to record everything on your HD camera and then simply copy the movie over to your Macbook.

  • Stuttering or lagging video capture

    [N79, latest firmware, 16 or 32 GB memory card]
    When I capture video to the memory card, I periodically get stutters or lags, typically during which the video capture appears to freeze. The sound recording sometimes continues in time, and sometimes also breaks for a fraction of a second. I have already backed up the card to a PC and reformatted the card in the phone (assuming that the phone would set up the card in a manner best suited to the device). The problem persists, although slightly less often.
    Would it have been better to reformat the cards on a PC? If so, is it better to use a 16 KB or 32 KB "allocation unit size"? Neither of my cards is more than 1/3 full, so there is plenty of space. I am recording video at the phone's best quality, which is 640x480, 30 fps, and a typical bitrate seems to be about 0.5 MB per second. I checked out both cards using the utility "Flash Memory Toolkit", and the minimum data file write rates seem to be about 6-10 MB per second (often 10 MB/s), so I don't think that the bottleneck is the card itself. Perhaps the drivers used to write to the card memory on the phone are not as efficient as they should be. Or maybe for some reason I have to stick with an 8 GB card or smaller?
    Thanks for any help.
    Solved!
    Go to Solution.

    I have reformatted the 32 GB card with Flash Memory Toolkit using a 64 KB allocation unit size (the largest option). So far, it seems as if the problem has been solved. I understand that with the larger allocation unit size small files will take up more room, but the phone is supposed to perform more efficiently, especially when writing larger files. We'll see what happens when I try to record a longer video.
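    The trade-off described above is easy to quantify: a file occupies a whole number of allocation units, so its on-disk size is the file size rounded up to the next cluster boundary. A small sketch of the arithmetic (the file sizes are illustrative examples, not measurements from the N79):

    ```java
    public class ClusterMath {
        // Bytes a file actually occupies on disk for a given allocation unit size.
        static long onDiskSize(long fileBytes, long clusterBytes) {
            long clusters = (fileBytes + clusterBytes - 1) / clusterBytes; // round up
            return clusters * clusterBytes;
        }

        public static void main(String[] args) {
            long video = 300L * 1024 * 1024; // a ~300 MB clip (about 10 min at 0.5 MB/s)
            long note  = 700;                // a tiny 700-byte file
            // Slack on the big file: 0, since 300 MB is an exact multiple of 64 KB.
            System.out.println(onDiskSize(video, 64 * 1024) - video); // prints: 0
            // The tiny file still occupies one full 64 KB cluster.
            System.out.println(onDiskSize(note, 64 * 1024));          // prints: 65536
        }
    }
    ```

    So larger clusters waste space only on small files, while a long video recording pays almost no space penalty and needs far fewer allocation operations per second of recording.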

  • Sporadic video capture

    When I create a processor using a ProcessorModel and Manager.createRealizedProcessor, my video capture works. The problem with this is that it is a blocking call, so if something goes wrong, my session hangs.
    To avoid this I have used the StateHelper and modified it so it accepts a Processor instead of a Player. I have now modified my code to configure and realize the processor step by step, using this code:
    // Use CaptureUtil to create a monitored capture datasource
    datasource = getCaptureDS(vf, af, videoCanvas);
    if (datasource != null) {
        try {
            VideoFormat outvf = new VideoFormat(VideoFormat.JPEG_RTP);
            processor = Manager.createProcessor(datasource);
            processorSH = new StateHelper(processor);
            if (!processorSH.configure()) {
                System.out.println("StartMonitoring: Failed to configure processor.");
                processor = null;
                processorSH = null;
                return;
            }
            // Set the preferred content type for the Processor's output
            processor.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW));
            TrackControl track[] = processor.getTrackControls();
            boolean retval = false;
            for (int i = 0; i < track.length; i++) {
                if (!retval && track[i] instanceof FormatControl) {
                    try {
                        track[i].setFormat(outvf);
                        retval = true;
                    } catch (Exception e) {
                        track[i].setEnabled(false);
                        System.out.println("Info: exception setting track: " + e);
                    }
                } else {
                    track[i].setEnabled(false);
                }
            }
            if (retval) {
                if (processorSH.realize()) {
                    // datasource.connect();
                    datasource.start();
                    processor.start();
                }
            } else {
                System.out.println("Info: retval false");
            }
        } catch (Exception e) {
            System.out.println("Error: " + e);
        }
    }
    getCaptureDS creates a new datasource after finding a capture device for the given video format, and connects the media locator of the capture device.
    My problem is that the video capture is sporadic: it captures an image, displays it for a few seconds, and then stops. When I create and start the outbound datasink (broadcasting to an RTP URL), it will broadcast a few frames, stop for about 5 minutes, and then broadcast a few more. Using the ProcessorModel before, it would broadcast continuously.
    Help, what am I doing wrong? I've tried everything I can think of but nothing is working; please help.
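    The blocking problem the poster works around with StateHelper reduces to waiting for an asynchronous state change with a timeout, so a wedged capture device cannot hang the session forever. A stripped-down, JMF-independent sketch of that pattern (the class name here is invented for illustration):

    ```java
    public class StateWaiter {
        private final Object lock = new Object();
        private boolean reached = false;

        // Called from the event/listener thread when the target state arrives.
        public void stateReached() {
            synchronized (lock) {
                reached = true;
                lock.notifyAll();
            }
        }

        // Wait up to timeoutMs for the state; returns false instead of blocking forever.
        public boolean awaitState(long timeoutMs) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            synchronized (lock) {
                while (!reached) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0) return false; // timed out: caller can clean up
                    lock.wait(remaining);             // re-check after every wakeup
                }
                return true;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            StateWaiter w = new StateWaiter();
            System.out.println(w.awaitState(50)); // prints: false (nothing signalled)
            w.stateReached();
            System.out.println(w.awaitState(50)); // prints: true (state already reached)
        }
    }
    ```

    The configure() and realize() calls on the StateHelper in the code above follow this shape, with stateReached() typically driven by ControllerListener events.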

    Thanks for the advice. I revisited my code and removed all datasource.start calls, but still no luck. It's really beginning to bug me, and I just don't know where else to look. Here's the complete code I'm using; maybe somebody will be able to advise me, please!
    package RmiChat;
    import javax.media.*;
    import javax.media.protocol.*;
    import javax.media.control.*;
    import javax.media.format.*;
    import java.awt.*;
    import java.net.*;
    import java.util.*;
    public class VideoBroadcast {
    // URL for RTP transmission
    String HostIPAddress;
    String PortNumber = "55555";
    String StreamName = "/video/1";
    String SendUrl;
    VideoInfo videoinfo;
    boolean imageFlipped = true;
    // Flag to determine if the capture device needs to be rebuilt
    boolean videoRestart = false;
    // Flags to indicate if broadcasting video or audio
    boolean sendingVideo = false;
    // Video Sizes
    protected final Dimension SmallVideoSize = new Dimension(160, 120);
    protected final Dimension MediumVideoSize = new Dimension(320, 240);
    protected final Dimension LargeVideoSize = new Dimension(640, 480);
    protected final Dimension DefaultVideoSize = MediumVideoSize;
    protected final float LowFrameRate = 5f;
    protected final float MidFrameRate = 15f;
    protected final float BestFrameRate = 30f;
    protected final float DefaultFrameRate = 30f;
    protected final String DefaultVideoEncoding = VideoFormat.RGB;
    protected final String DefaultRTPVideoEncoding = VideoFormat.JPEG_RTP;
    // JMF objects
    MonitorCDS myVideoDS = null;
    StateHelper processorSH = null;
    Processor processor = null;
    DataSink datasink = null;
    Canvas videoPreview = null;
    DataSource captureDS = null;
    DataSource previewDS = null;
    DataSource cloneableDS = null;
    DataSource initialDS = null;
    ClientMonitorStream previewStream = null;
    // String outputType = "video.quicktime";
    String outputType = "video.x_msvideo";
    VideoFormat vf = null;
    float frameRate = DefaultFrameRate;
    Dimension videoSize = DefaultVideoSize;
    String VideoEncoding = DefaultVideoEncoding;
    String outputFileExt = ".avi";
    myLog myLog;
    Button videoButton = null;
    public VideoBroadcast(Canvas vp, Button vb) {
    this.videoPreview = vp;
    this.videoButton = vb;
    this.myLog = new myLog();
    float frameRate = DefaultFrameRate;
    Dimension videoSize = DefaultVideoSize;
    // get local IP address and create SendURL
    try {
    InetAddress ipadr = InetAddress.getLocalHost();
    HostIPAddress = ipadr.getHostAddress();
    } catch(Exception e) {
    myLog.log("Error: failed to get InetAddress.");
    myLog.log("Exception: " + e);
    SendUrl = "rtp://" + HostIPAddress + ":" + PortNumber + StreamName;
    protected boolean prepareVideoCapture() {
         // Close the previous processor, which in turn closes the capture device
         if (processor != null) {
    try {
    myLog.log("PrepareVideoCapture: processor is already created, will stop and close processor.");
         processor.stop();
         processor.close();
         } catch (Exception e) {
    myLog.log("Warn: failed to close processor.");
    processor = null;
    processorSH = null;
         if (captureDS != null) {
    try {
    myLog.log("PrepareVideoCapture: captureDS is already created, will disconnect.");
    captureDS.disconnect();
         } catch (Exception e) {
    myLog.log("Warn: failed to close captureDS.");
    captureDS = null;
    if (datasink != null) {
    try {
    myLog.log("Info: Datasink still open, so closing.");
    datasink.stop();
    datasink.close();
         } catch (Exception e) {
    myLog.log("Warn: failed to close datasink.");
    datasink = null;
         vf = new VideoFormat(VideoEncoding, videoSize, Format.NOT_SPECIFIED, null, frameRate);
    videoinfo = new VideoInfo(SendUrl, imageFlipped, videoSize);
         // create a monitored capture captureDS
         initialDS = createDataSource(vf);
    captureDS = Manager.createCloneableDataSource(initialDS);
    previewDS = ((SourceCloneable)captureDS).createClone();
         if (captureDS != null) {
    // create the processor
    try {
    PushBufferDataSource dds = (PushBufferDataSource) previewDS;
    PushBufferStream [] videoStreams = dds.getStreams();
    previewStream = new ClientMonitorStream(videoStreams[0], videoPreview, videoinfo);
    processor = Manager.createProcessor(captureDS);
    processorSH = new StateHelper(processor);
    if (processorSH.configure()) {
    // Set the preferred content type for the Processor's output
    processor.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW));
    boolean retval = false;
    TrackControl track[] = processor.getTrackControls();
         VideoFormat outvf = new VideoFormat(VideoFormat.JPEG_RTP);
    for (int i=0; i < track.length; i++) {
    if (!retval && track[i] instanceof FormatControl) {
    try {
    track[i].setFormat(outvf);
    retval = true;
    } catch (Exception e) {
    track[i].setEnabled(false);
    System.out.println("Info: exception setting track: " + e);
    } else {
    track[i].setEnabled(false);
    if (retval) {
    if (processorSH.realize()) {
    processor.start();
    setJPEGQuality(processor, 0.5f);
    previewStream.setEnabled(true);
    } else {
    System.out.println("Info: retval false");
         } catch (Exception e) {
    myLog.log("Error: creating realized processor: " +e);
              // Make sure the capture devices are released
    if (processor != null) {
    processor.close();
    processor = null;
    processorSH = null;
              return false;
    myLog.log("Info: Sending rtp url: " + SendUrl);
    return true;
    protected void startVideoCapture() {
    // If processor hasn't been created then try to create it
    if (processor == null) {
    prepareVideoCapture();
    // ensure we have a processor, otherwise nothing will work
    if (processor != null) {
    if (datasink != null) {
    try {
    myLog.log("Info: Datasink still open, so closing.");
    datasink.stop();
    datasink.close();
         } catch (Exception e) {
    try {
         DataSource outputDS = processor.getDataOutput();
    MediaLocator mlsend = new MediaLocator(SendUrl);
    datasink = Manager.createDataSink(outputDS, mlsend); // assign the field, not a shadowing local
    datasink.open();
    datasink.start();
         processor.start();
    sendingVideo = true;
    if (videoButton != null) {
    videoButton.setLabel("Stop Broadcast");
    videoButton.setActionCommand("Stop Broadcast");
         } catch (Exception e) {
         myLog.log("Error: Failed to create DataSink for broadcast: " +e);
    datasink = null;
         myLog.log("Started broadcasting...");
    } else {
    myLog.log("Error: Attempting to start capture, but have no processor.");
    protected void stopVideoCapture() {
         // Stop the capture and the file writer (DataSink)
    if (processor != null) {
         if (datasink != null) {
    try {
    datasink.stop();
         } catch (Exception e) {
    myLog.log("Warn: Exception whilst stopping datasink: " + e);
    datasink.close();
         processor.stop();
         processor.close();
         processor = null;
         processorSH = null;
         // Restart monitoring
         prepareVideoCapture();
    sendingVideo = false;
    if (videoButton != null) {
    videoButton.setLabel("Start Broadcast");
    videoButton.setActionCommand("Start Broadcast");
         myLog.log("Stopping capture.");
    protected void changeVideoSize(Dimension size) {
    if (!videoSize.equals(size)) {
    videoSize = size;
    videoRestart = true;
    videoinfo = new VideoInfo(SendUrl, imageFlipped, videoSize);
    public DataSource getCaptureDS(VideoFormat vf) {
         DataSource VideoDS = null;
         // Create a capture DataSource for the video
         // If there is no video capture device, then exit with null
         if (vf != null) {
         VideoDS = createDataSource(vf);
         // Create the monitoring datasource wrapper
    //     if (VideoDS != null) {
    // VideoDS = new MonitorCDS(VideoDS, videoPreview);
         return VideoDS;
    public DataSource createDataSource(Format format) {
         DataSource ds;
         Vector devices;
         CaptureDeviceInfo cdi;
         MediaLocator ml;
         // Find devices for format
         devices = CaptureDeviceManager.getDeviceList(format);
         if (devices.size() < 1) {
         myLog.log("Error: No Devices for " + format);
         return null;
         // Pick the first device
         cdi = (CaptureDeviceInfo) devices.elementAt(0);
         ml = cdi.getLocator();
         try {
         ds = Manager.createDataSource(ml);
         ds.connect();
         if (ds instanceof CaptureDevice) {
              setCaptureFormat((CaptureDevice) ds, format);
         } catch (Exception e) {
         myLog.log("Error: createDataSource: Exception: " +e);
         return null;
         return ds;
    public void setCaptureFormat(CaptureDevice cdev, Format format) {
         FormatControl [] fcs = cdev.getFormatControls();
         if (fcs.length < 1) {
              myLog.log("Warning: No formatControls");
         return;
         for (int i = 0; i < fcs.length; i++) {
              myLog.log("Info: Available format: " + fcs[i].getFormat());
         FormatControl fc = fcs[0];
         Format [] formats = fc.getSupportedFormats();
         for (int i = 0; i < formats.length; i++) {
         if (formats[i].matches(format)) {
              format = formats[i].intersects(format);
              myLog.log("Info: Setting format " + format);
              fc.setFormat(format);
              break;
    /*
     * Setting the encoding quality to the specified value on the JPEG encoder.
     * 0.5 is a good default.
     */
    void setJPEGQuality(Player p, float val) {
         Control cs[] = p.getControls();
         QualityControl qc = null;
         VideoFormat jpegFmt = new VideoFormat(VideoFormat.JPEG);
         // Loop through the controls to find the Quality control for
         // the JPEG encoder.
         for (int i = 0; i < cs.length; i++) {
         if (cs[i] instanceof QualityControl &&
              cs[i] instanceof Owned) {
              Object owner = ((Owned)cs[i]).getOwner();
              // Check to see if the owner is a Codec.
              // Then check for the output format.
              if (owner instanceof Codec) {
              Format fmts[] = ((Codec)owner).getSupportedOutputFormats(null);
              for (int j = 0; j < fmts.length; j++) {
                   if (fmts[j].matches(jpegFmt)) {
                   qc = (QualityControl)cs[i];
                   qc.setQuality(val);
                   System.err.println("- Setting quality to " +
                             val + " on " + qc);
                   break;
              if (qc != null)
              break;
    Here's the code for the ClientMonitorStream, which has loads of println calls in it:
    package RmiChat;
    import javax.media.*;
    import javax.media.protocol.*;
    import javax.media.*;
    import javax.media.control.*;
    import javax.media.format.*;
    import javax.media.protocol.*;
    import javax.media.util.BufferToImage;
    import java.io.IOException;
    import java.awt.*;

    public class ClientMonitorStream
            implements PushBufferStream, MonitorControl, BufferTransferHandler {

        PushBufferStream actual = null;
        boolean dataAvailable = false;
        boolean terminate = false;
        boolean videoPlaying = false;
        Object bufferLock = new Object();
        Buffer cbuffer = new Buffer();
        BufferTransferHandler transferHandler = null;
        Canvas videoCanvas = null;
        BufferToImage bti = null;
        VideoInfo videoinfo = null;

        ClientMonitorStream(PushBufferStream actual, Canvas canvas, VideoInfo videoinfo) {
            this.actual = actual;
            actual.setTransferHandler(this);
            this.videoCanvas = canvas;
            this.videoinfo = videoinfo;
            if (this.videoinfo != null) {
                this.videoCanvas.setSize(videoinfo.BroadcastSize);
            }
        }

        public javax.media.Format getFormat() {
            return actual.getFormat();
        }

        public void read(Buffer buffer) throws IOException {
            synchronized (bufferLock) {
                // Block until transferData() deposits a frame (or we are told to stop).
                while (!dataAvailable && !terminate) {
                    try {
                        bufferLock.wait(1000);
                    } catch (InterruptedException ie) {
                        // Interrupted while waiting; loop and re-check the condition.
                    }
                }
                if (dataAvailable) {
                    buffer.copy(cbuffer, true);
                    dataAvailable = false;
                }
            }
        }

        public void transferData(PushBufferStream pbs) {
            // Get the data from the original source stream.
            synchronized (bufferLock) {
                try {
                    pbs.read(cbuffer);
                } catch (IOException ioe) {
                    return;
                }
                dataAvailable = true;
                bufferLock.notifyAll();
            }

            // Display the frame if the monitor is active.
            if (isEnabled()) {
                if (bti == null) {
                    VideoFormat vf = (VideoFormat) cbuffer.getFormat();
                    bti = new BufferToImage(vf);
                }
                if (bti != null && videoCanvas != null) {
                    Image im = bti.createImage(cbuffer);
                    Graphics g = videoCanvas.getGraphics();
                    if (g != null) {
                        if (videoinfo.ImageFlipped) {
                            // Destination y-coordinates are swapped to flip the image vertically.
                            g.drawImage(im,
                                    0, im.getHeight(videoCanvas), im.getWidth(videoCanvas), 0,
                                    0, 0, im.getWidth(videoCanvas), im.getHeight(videoCanvas),
                                    videoCanvas);
                        } else {
                            g.drawImage(im, 0, 0, videoCanvas);
                        }
                    }
                }
            }

            // Maybe synchronize this with setTransferHandler()?
            // if (transferHandler != null && videoPlaying)
            if (transferHandler != null) {
                transferHandler.transferData(this);
            }
        }

        public void setTransferHandler(BufferTransferHandler transferHandler) {
            this.transferHandler = transferHandler;
        }

        public boolean setEnabled(boolean value) {
            videoPlaying = value;
            return videoPlaying;
        }

        public boolean isEnabled() {
            return videoPlaying;
        }

        public Component getControlComponent() {
            return (Component) videoCanvas;
        }

        public float setPreviewFrameRate(float rate) {
            System.err.println("TODO");
            return rate;
        }

        public ContentDescriptor getContentDescriptor() {
            return actual.getContentDescriptor();
        }

        public long getContentLength() {
            return actual.getContentLength();
        }

        public boolean endOfStream() {
            return actual.endOfStream();
        }

        public Object[] getControls() {
            return new Object[0];
        }

        public Object getControl(String str) {
            return null;
        }
    }
    Finally, in the main program I create a new instance of the VideoBroadcast object and call PrepareVideoCapture initially. When the user clicks the broadcast button, I call startVideoCapture. After prepare is called, one frame is displayed and then nothing; when I start the broadcast, a few frames are displayed, it waits several seconds, displays a few more frames, waits, and so on. I've checked my PC and performance is okay (memory, CPU and I/O are all fine), so there is nothing to explain the sporadic activity.
    For those wanting to hack the code a bit, feel free, but if you get it to work, please send me a copy: [email protected]
    The next thing I want to add is the RTPManager, but first I need to get the streaming working smoothly; the RTPManager shouldn't make a difference there, right?
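    One thing worth checking is the single-slot handoff between transferData() and read(): if the reader misses a notify, it stalls for the full wait timeout, which would produce exactly the burst-pause-burst pattern described. A minimal, stdlib-only sketch of that handoff pattern (class and method names here are illustrative, not taken from the code above):

    ```java
    // Single-slot producer/consumer handoff, as used between
    // transferData() (producer) and read() (consumer) above.
    public class SingleSlotHandoff {
        private final Object lock = new Object();
        private Object slot = null;

        // Producer side: deposit a frame and wake any waiting reader.
        // Note that an unread frame is overwritten, i.e. frames can drop.
        public void put(Object frame) {
            synchronized (lock) {
                slot = frame;
                lock.notifyAll();
            }
        }

        // Consumer side: wait up to timeoutMs for a frame, else return null.
        public Object take(long timeoutMs) throws InterruptedException {
            synchronized (lock) {
                long deadline = System.currentTimeMillis() + timeoutMs;
                while (slot == null) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0) {
                        return null;        // timed out with no frame delivered
                    }
                    lock.wait(remaining);   // loop guards against spurious wakeups
                }
                Object frame = slot;
                slot = null;
                return frame;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            SingleSlotHandoff h = new SingleSlotHandoff();
            h.put("frame-1");
            System.out.println(h.take(100));
        }
    }
    ```

    If the producer outruns the consumer here, frames are silently replaced rather than queued, which shows up as jerky playback; a bounded queue would smooth that out at the cost of latency.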

  • Good video capture app

    What is a good standalone video capture app?

    "I used to use something called WinDV, which would upload everything I wanted as separate clips (every time I pressed the record button it created a separate clip). Is there anything like this?"
    FCP makes this pretty simple. Just capture your whole tape with FCP (which is super easy, btw, if you have your capture configurations set correctly) or whatever utility you end up using, and then in the Browser do the following:
    1 - select your captured sequence by clicking on it once
    2 - Mark > DV Stop/Start Detect
    3 - click the arrow to the left of your captured sequence to expand
    4 - select all of the segments in your captured sequence
    5 - Modify > Make Subclip
    You will then have a clip for every break in your tape.

  • Final Cut breaks up video into many separate segments when digitizing

    I'm sure I'm not the only person who's encountered this problem. I'm in the process of digitizing a whole slew of HD video tapes and almost all of them have done so exactly as they should. The only exception is a couple of tapes that are fragmenting into many separate files as they digitize, so instead of one nice big file that can easily be scrolled through in Final Cut, I'm getting three or four dozen different smaller video files, all of which are readable and work but it's obviously not as easy to edit something like that.
    Does anyone know why this is happening and how, if possible, to correct it? Is it a video camera setting that was used in the shooting? Or is it some setting in Final Cut that I can change? Or is it merely that the time code on it is corrupt or the tape is somewhat damaged?
    Thanks in advance.

    When you capture HDV, Final Cut Pro automatically generates a new media file at the point of each scene break. The chosen option in the On timecode break pop-up menu determines how timecode breaks affect the capture; Final Cut Pro disregards the Warn After Capture option to avoid capturing media files that contain breaks in the middle of an MPEG-2 GOP (Group of Pictures).
    To determine how scene and timecode breaks are handled when you capture HDV:
    Choose Final Cut Pro > User Preferences, then click the General tab.
    Choose one of the following options (described below) from the On timecode break pop-up menu:
    Make New Clip
    This is the default option. Whenever Final Cut Pro detects a scene or timecode break during capture, it finishes writing the current media file to disk and then begins capturing a new media file. It also creates a corresponding clip for each new media file in the Browser. To ensure that all new media files and clips have unique names, it appends a number to the filename of each new media file and clip generated by scene and timecode break detection.
    In your case you don't want this option; choose one of the other settings from the pop-up menu instead.
    The reference is from the FCP 5 documentation, but it should apply to FCP 6 as well.

  • Pinnacle Video Capture for Mac

    After talking with a bunch of salesmen (considering getting a DVD writer), I took the advice of someone in the Apple Store and bought a Pinnacle Video Capture for Mac to copy my laserdiscs. It was $100 before tax, and the features it lacks are things I don't plan on doing anyway. I did spend another $25 to get an S-Video cable.
    I installed the software and connected it up to my LD player. I started capturing the movie and selected S-Video (although in the small preview window they showed, I didn't see any difference between that and composite video). I selected a max time from a limited selection and let it record.
    After a while, I went downstairs and saw that the audio and video were not synchronized at all. I let it continue.
    After a while I came down again and the movie was finished, so I stopped it. I found the MP4 (it was in the iTunes movie directory) and played it in iTunes. I fast-forwarded to near the end; the voice and video were way out of sync.
    I haven't tried burning this yet.
    QuickTime Player doesn't think this is a valid movie file. I selected "Open With" and "Other", and among the recommended applications iMovie was greyed out.
    Why in the world would the audio and video record at different speeds?
    Do I have to buy software to edit the movie down to the correct size?
    My Mac has:
    Model Name: iMac
    Model Identifier: iMac7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2.8 GHz
    Number Of Processors: 1
    Total Number Of Cores: 2
    L2 Cache: 4 MB
    Memory: 2 GB
    Bus Speed: 800 MHz
    Boot ROM Version: IM71.007A.B03
    SMC Version: 1.21f4
    I noticed the Pinnacle Video Capture program was still running, so I tried to quit it, and got a window asking for my administrator ID and password to allow Pinnacle Video Capture to make changes. Why?

    The audio drifted out of sync because the Dazzle doesn't support locked audio. For short videos (say, under 10 minutes) this won't be very noticeable but when you capture longer videos it becomes progressively worse over time.
    What do you want to do with your Laserdisk copies? Watch them on your iPod? Edit and/or burn to DVD?
    A device like the Canopus ADVC-110 will do the video/audio conversion properly, keeping the audio and video in sync regardless of the length of your video. It converts to DV, not to MP4, and you would use it with iMovie or Final Cut (not iTunes). However you can export your video from iMovie or Final Cut to iPod/AppleTV formats.
    The Dazzle device will not work directly with iMovie or Final Cut.
    ps. If all you really want to do is transfer your Laserdisc videos to DVDs, it will be a whole lot faster & simpler to get a DVD player/recorder that has analog inputs and record directly to DVDs. There are many brands & models to choose from and many good ones are as inexpensive as the $100 you spent on the Pinnacle converter.
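    The unlocked-audio drift described above grows linearly with recording time, so it is easy to estimate. A quick sketch, using a hypothetical 0.1% clock mismatch (an illustrative figure, not a measured spec for the Dazzle hardware):

    ```java
    // Estimate cumulative audio/video drift for an unlocked audio clock.
    public class AudioDriftEstimate {
        public static void main(String[] args) {
            double clockErrorPct = 0.1;        // hypothetical 0.1% audio clock mismatch
            double[] minutes = {10, 60, 110};  // short clip, one hour, long laserdisc capture
            for (double m : minutes) {
                // drift in seconds = elapsed seconds * error fraction
                double driftSeconds = m * 60 * clockErrorPct / 100.0;
                System.out.printf("after %.0f min: ~%.1f s out of sync%n", m, driftSeconds);
            }
        }
    }
    ```

    Even this small mismatch yields several seconds of offset over a feature-length capture, which matches the "fine at first, way off at the end" symptom; locked-audio devices resample against the video clock so the error never accumulates.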

  • Elgato video capture versus digital 8 camcorder to import hi 8 movies

    Hi!
    I have Hi8 video tapes which I want to import into my MacBook.
    I would like to work on them with iMovie 09: cut them, set titles, put music on, and so on.
    Earlier I plugged my Hi8 camcorder into my Panasonic DMR-E85H DVD recorder with built-in HDD.
    Then I imported the recordings via HandBrake into my MacBook.
    Then I imported them into iMovie 08 and did some cutting and so on.
    Then I noticed that there were horizontal interferences in the movie.
    On the DVDs the video is OK!
    It doesn't matter whether I play the videos in QuickTime, iTunes or iMovie.
    If I used an Elgato Video Capture or a Digital8 camcorder to import the videos to my MacBook, would I be able to get a better result?
    Which one should I prefer? I would have to buy either one.
    Would the camcorder-imported videos be bigger than with the Elgato device?
    Now I'm using iMovie 09.
    Thanks a lot for helping me out!

    Based on your description of horizontal interference, it may be that you are seeing interlace artifacts. The solution may be to deinterlace the clips.
    You might try checking the settings in HandBrake to deinterlace.
    You might also take the clip you have produced in HandBrake and deinterlace it with a free tool like MPEG Streamclip or JES Deinterlacer.
    Your MPEG2 on DVD is already compressed from the original on tape. Then HandBrake decompresses and recompresses it to h.264. You generally want to cut out compression steps in your workflow whenever possible, because each generation of compression will introduce noise and loss.
    You could also reimport from DVD using MPEG Streamclip (and the Apple QuickTime MPEG2 Playback Component). You could deinterlace at this step if needed.
    You could certainly import through a camcorder with pass-through capability. The result will be DV, which is a very high data rate and potentially high quality, but it will never be higher quality than the underlying analog material. You will, however, have eliminated a compression step, so you will be closer to the original.
    Elgato products will certainly work as well. I use the Elgato EyeTV Hybrid to capture from a VHS deck, as well as to record high-definition TV shows. Mine captures to MPEG-2, although the newer models may capture directly to MPEG-4 (not sure). You would then use the Elgato software to export to iMovie in an editable format such as h.264 or Apple Intermediate Codec.
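    To put the DV route in disk-space terms: DV runs at a fixed rate of roughly 3.6 MB/s including audio (the standard nominal figure, rounded), so the storage needed per hour of tape is easy to estimate:

    ```java
    // Back-of-envelope disk space estimate for DV capture.
    public class DvStorageEstimate {
        public static void main(String[] args) {
            double mbPerSecond = 3.6;                      // DV stream incl. audio, approx.
            double gbPerHour = mbPerSecond * 3600 / 1024;  // seconds per hour, MB -> GB
            System.out.printf("DV needs roughly %.1f GB per hour of footage%n", gbPerHour);
        }
    }
    ```

    That works out to around 13 GB per hour, which is why DV capture is usually pointed at a dedicated external drive rather than the boot disk.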

  • Detecting Video Capture Devices without Installing JMF

    Hi All,
    I want to detect the video capture devices without installing JMF.
    I have included the JMF (Windows version) JARs in the library path in the NetBeans IDE. I am able to detect the audio capture device, but I am unable to detect the video capture devices. If I install JMF, I am able to detect the video capture devices on my system. Can anyone help me?
    Thanks,
    Vinoth Kumar.

    Yes! Please take a look at the SIP Communicator project. They have everything you want.
    The main idea is to copy all of JMF's .dll files to System or System32 (check which files JMF copies to your computer, and where).
    You also need jmf.jar and sound.jar (maybe more) on your computer.
    Your program should include DirectSoundAuto.java, JavaSoundAuto.java, JavaSoundDetector.java and JMFInit.java. Those files are in the JMStudio source code. They are used to detect all capture devices and register them with JMF. But wait: where will that information be saved? To store the information about the detected capture devices, you should create a "jmf.properties" file in the same location as jmf.jar. Therefore, you should modify JMFInit.java so that it checks whether "jmf.properties" exists and creates it if it does not.
    Check SIP Communicator in the folder "media", then "device". They have modified the files I mentioned above and created some new ones so that SC can detect more devices.
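    The "create jmf.properties next to jmf.jar if it is missing" step can be sketched in plain Java. This is an illustrative helper (the class and method names are made up here), covering only the registry-file creation; the actual device probing still needs the JMFInit/Auto classes mentioned above:

    ```java
    import java.io.File;
    import java.io.IOException;

    public class JmfPropertiesCheck {
        // Ensure a jmf.properties registry file exists in the given directory
        // (the directory containing jmf.jar), creating an empty one if missing,
        // so the device-detection code has somewhere to store its results.
        static File ensureRegistry(File jmfJarDir) throws IOException {
            File props = new File(jmfJarDir, "jmf.properties");
            if (!props.exists()) {
                props.createNewFile();
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            // Demo against a temp directory; in practice pass jmf.jar's directory.
            File dir = new File(System.getProperty("java.io.tmpdir"));
            File props = ensureRegistry(dir);
            System.out.println("registry at: " + props.getPath());
        }
    }
    ```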
    Edited by: tamngminh on Sep 16, 2008 9:50 AM

  • Video capture driver for MSI GeForce 2 Pro video card

    Hi,
     I am in need of the nVidia WDM video capture driver for my GeForce 2 Pro video card, model #M-8831. I was able to download it from the MSI/TW website over a year ago just fine. My computer crashed from a game bug a couple of months ago, and MSI/TW no longer lists the capture driver I need. I emailed the company asking for help, with no response back yet (after 2 weeks). You people may be my last hope.
     I do have the CD that came with the video card, but I've never been able to install the capture driver from it. There is a button to install it, but it never gets loaded. I've tried browsing the CD and also tried installing it using the Windows wizards, with zero results. My only successful install was from the MSI home website.
     If you need more information, please let me know.
     Thank you,
     Bruce

    If you followed the second link it says this:
    Windows XP/2000/Me/98SE - WDM Driver v1.22
    Version: 1.22
    File Size: 1.1 MB
    Release Date: December 9, 2002
    Downloads
    » Primary Download Site «
    » Mirror Site 1
    » Mirror Site 2
    » Mirror Site 3
    Release Highlights:
    First stand-alone WDM driver release for Personal Cinema and VIVO-enabled products.
    Right there at the end it says Release Highlights and says specifically that it's what you want...
    Read, my man, read...
    Cheers!! 8)

  • Video capture Cables with MSI StarForce 822 GeForce 3!

    I have the VT64D and I would very much like to know where to find the cables for this card. I need the A/V cables, as I would like to have the full benefit of sound as well.
    If anybody has this cable/breakout box I would be extremely interested in purchasing it.
    Thanks,
    Marc
    PS: It's the second revision of the StarForce 8822.

    If you have the video capture driver installed, then the video capture program should see it. On the MSI site, version 1.16 of the driver is available for download.
    If you still have problems (e.g. too many frames lost) you can try using Virtual VCR (worked best in my case).

  • Video Capture Problems HELP!

    OK, I just bought an MSI GeForce4 Ti4200-VTD8X, which has TV-out + video-in.
    With the card I got a TV-out/video-in connector to plug into the card. Now here is the problem: I plug my S-Video cable into the TV-out/video-in connector (which of course is connected to my graphics adapter) and the other end of the S-Video cable into a SCART adapter, which I plug into the SCART output on my VCR. But when I start any video capture software, it just says no device connected. Please help me!  :(

    Never mind, got it working all by myself now

  • How to connect a video capture device to Satellite M30X 127

    I'm going around in circles trying to find a video capture device that will function with a laptop and would be grateful for advice. By way of example, I'm now looking at the 'Canopus ADVC-110', although still looking around (but not Pinnacle, who seem to have a terrible reputation for capture items).
    Problems include:
    1. The main problem is that many external video capture devices require a line-in on the PC for good-quality audio transfer. This laptop doesn't have a line-in, only the microphone and headphone jacks. I've also tried a few RCA pins in the 3.5mm microphone/headphone jacks and they were too small. How do I solve the problem of getting audio from the VCR to my PC?
    2. The ADVC-110 connects to the PC via Firewire (and S-video), but is a SIX-pin type. M30X is a FOUR-pin type. Are there adapters available?
    3. Some capture programmes seem to require huge amounts of RAM. Is a laptop going to be suitable for the job? (I simply want to capture old VHS tapes from the VCR to the PC, then burn to DVD with Nero 7 or similar.)
    Thanks in advance!

    Hello
    You are right, you must use an external device to transfer the signal to your notebook. I don't know where you got that info, but I have had good experiences with Pinnacle products. For data transfer I have used the USB port.
    The Canopus ADVC-110 is not known to me, but I don't see any reason why it should not work well. Use the delivered software for video capturing and there should not be any problem.
    The only thing is that after recording, the whole video material must be prepared for burning, and the unit needs a lot of power to do this well and fast. When you buy this and want to use it the right way, I recommend you expand the RAM to 1 GB.
    Compatible memory modules for your Satellite M30X are:
    PC2700 512MB (PA3312U-1M51)
    PC2700 1024MB (PA3313U-1M1G)
    Bye
