Capturing two streams of HDMI simultaneously

I'm looking at capturing video from two cameras straight into FCP via the BlackMagic Intensity HDMI card. Is there a way to do that on the same Mac, or will I need two?
The computer would be one of the new dual-core Mac Pros. [The cameras would be the forthcoming Sony V1U, bypassing the HDV compression.]
Gene

You won't be able to capture two separate streams at the same time unless you combine the signals into one video signal upstream, in a video mixer for example. And that would permanently give you two pictures in the same frame, which is probably not what you want.
You'll need a second system.

Similar Messages

  • Is there a way to capture two streams simultaneously?

    I am using a Sony HVR-1500A video deck to transfer DV and HDV tapes to hard drives as part of an archiving project. The Sony deck can output both an SDI signal and a FireWire signal at the same time. For archival purposes the project calls for capturing both signals, thereby creating two files. I am using a BlackMagic DeckLink Extreme card to capture the SDI signal into Premiere Pro CS5, and it works fine. I would like to use Premiere Pro CS5 to simultaneously capture the FireWire signal. At the moment I am using Nero Vision from Nero 9.x, running on the same machine as Premiere Pro, to capture the FireWire signal. This all seems to work, but I would prefer to use Premiere Pro for both signals. There are over 5,000 hours of tape to transfer, so I only want to play each tape once. If anyone can tell me how to do what I am trying to do, I would really appreciate it.

    to shooternz:
    Here is a message posted on the AMIA (Association of Moving Image Archivists) list in reply to a similar question. The writer is recognized as one of the leading authorities on the subject. A link to his website is included in this quote:
    "Date: Wed, 24 Feb 2010 12:42:46 -0500
    Reply-To: Association of Moving Image Archivists <[email protected]>
    Sender: Association of Moving Image Archivists <[email protected]>
    From: Jim Lindner <[email protected]>
    Subject: Re: Best practices for Mini DV preservation and digital transfer.
    I think that the points that David makes need to be emphasized a bit, in particular his reply to Lee's comment that the file output will be an exact replica because no re-encoding is done. For years some people thought that just because video was "digital", all "digital" copies would be identical. That is of course not the case. It is also not necessarily the case that a FireWire output from a digital video recorder will be the same as the digital video output from the same recorder, or even a "replica". Digital video recorders are optimized to provide a good-looking VIDEO output for the human eye, and that means that a great deal of the technology in these machines is devoted to doing things to the picture that "massage" the data in such a way that it is pleasing to the eye. Many things are done to the data after playback that make it look better. It is a very long list: dropouts are corrected for, motion artifacts are smoothed in some situations, edges may be enhanced or blurred in others, color spaces may be changed, entire sections of images are interpolated. A whole list of things happens, all in real time, and it actually is very fancy stuff. Depending on the specific machine and the output type, a great deal of image processing occurs, and that directly impacts what the image looks like when you view the video. When you take the direct data output you are not getting the benefit of all of the image processing that has been done. Truly Video ≠ File for many reasons; in fact they frequently are very different and will look very different. There is another area of difference which David mentions, which is the set of issues that relate to error correction and concealment. Correcting for these errors is fundamental to the design of digital video recorders in general and MiniDV in particular, because MiniDV is a very error-prone format due to the density of the recording and the media size.
    Correction and concealment are fundamental to the performance of that format, and you can easily get different correction and concealment on every section and on every playback run; that is how error prone the format really is. I am saying that even though it is "digital", when you play back the same exact section of the tape, say only 10 or 20 seconds' worth, it is extremely unlikely that you could ever get two passes that were bit-for-bit identical data-wise. These are not clones or anything even close to clones. When you are moving just the data stream you are getting most of the errors (which can and do change) and not getting the benefit of most of the correction circuitry. You may get error flags telling you that the machine knows there is an error, but correcting for that error outside of the machine, on the file itself, is a non-starter for many reasons, including the fact that the exact algorithms the manufacturer uses in hardware in the deck for correction are not available in software or on any editing system that I have ever heard of (even those made by the same manufacturer). So while the flag telling you that there is an error is nice, you really can't do anything with it, and there is little if any benefit to it because it cannot be used to know what the circuitry did to make the picture look correct. It is sort of like lots of metadata: nice, but what can you do with it? What does it really tell you? I suppose in aggregate it is useful to know how lossy or bad the tape is, but this really has little to do with the original point, which was trying to establish a best practice for MiniDV preservation. What you have "preserved" is in fact not what the machine would have reproduced as video at all. It is something different. Frankly, I feel the approach offered is too simplistic and is not best practice at all.
    IF one of the goals of preservation is to preserve the material in such a way that it is at least a fair representation of what the viewer would have seen at the time of playback on the deck, then I am afraid I would have to argue that in this case a straight data capture that does not include all of the image processing technology used to create the picture is in fact a very poor preservation master, precisely because with that information you cannot recreate the viewing experience. In fact the best preservation master would be the SDI or HD-SDI output from the deck, because what is coming out is digital video information that has the benefit of all of the image processing as well as the data correction that the direct data stream does not have. The SDI stream has "video" and the FireWire output has data; they are far from identical, and this can very easily be shown to be the case.
    Jim Lindner
    Email: [email protected]        Media Matters LLC.   450 West 31st Street 4th Floor   New York, N.Y. 10001
    eFax (646) 349-4475
    Mobile: (917) 945-2662
    www.media-matters.net
    Media Matters LLC is a technical consultancy specializing in archival audio and video material. We provide advice and analysis to media archives that apply the beneficial advances in technology to collection management."
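A practical way to see the point Jim makes about playback passes not being bit-for-bit identical is to checksum two captures of the same tape section and compare; this is a generic sketch (the file names are hypothetical, not from the thread):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks
    so multi-gigabyte captures don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Two captures of the same tape section (hypothetical names):
# if the digests differ, the passes were not bit-for-bit identical.
# print(file_sha256("pass1.dv") == file_sha256("pass2.dv"))
```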

  • (X-Fi Xtremegamer) Recording two streams of audio simultaneously via "What U Hear"

    Hello everybody.
    I'm running Windows 7 32bit and have downloaded and installed drivers from creative download page.
    Also performed an update. Integrated sound card (realtek high def) is disabled from BIOS. So my system
    detects only X-Fi Xtremegamer.
    I want to be able to record music in background and voice/guitar (input from FlexiJack) together.
    Currently I have set "Audio Creation Mode" for my sound card and have chosen FlexiJack input to be a microphone.
    So the problem is: when I set "What U Hear" in the Windows recording devices as the default recording device, in theory it should record everything together, the audio playing in the background and my voice over it from the microphone, all at once.
    The problem is, once I set "What U Hear" as the default recording device, my microphone immediately goes into the "currently unavailable" state. If I put the mic back as the default recording device, it becomes active again. This is very, very frustrating.
    I know people here are pros who may have very sophisticated setups with MIDI controllers, routing, etc. And you may tell me: "So what's the problem? Just set the music to play on one channel of Audition, for example, and record your voice, with the mic as the default recording device, on the other one."
    Yes, that is a valid point, but there are situations when I really need to record both the background music/sounds and the microphone input simultaneously. A good example is online karaoke sites, which need both sound streams recorded simultaneously and in real time.
    Could anybody please give me some advice on how to correct this problem? I don't want to throw away such a nice piece of hardware just because of this problem.

    Oh, boy... You have my sympathy with the 'What-U-Hear' thing...recording two audio streams at once (mic etc)...
    I just upgraded my home studio system to a Win7 computer...I use SONAR to do my digital/MIDI work...
    On my old WinXP machine (a Pentium 4 with 4 gig RAM), I had the luxury of actually being ABLE to record in 'What-U-Hear' mode...
    I figured I needed a new soundcard with my new Win7...so I went out and bought the well-recommended M-Audio Audiophile 192 PCI card...
    Big mistake...I CAN'T record ANY 'What-U-Hear' streams at all!!...I wanted to tear out the new M-Audio card and plunk my trusty old Audigy 2 back in...but, the problem is not that Creative doesn't have Win7 drivers...it's that the SOFTWARE that came with it is largely useless according to the Creative site.
    Funny, I transferred all my files from the old computer (including all the Creative apps) to the new Win7 computer, and guess what? Good old 'WAVE STUDIO' from 2001 - version 4:12:07 - works just fine (WITH the ****ed M-Audio card, too!)...
    However, I STILL can't get the M-Audio 192 to do 'What-U-Hear' recording...!!
    I wonder if I DID put the old Audigy in my new Win7 computer...would that most important bit of Creative software that I use all the time (Wave Studio)...work?
    What a minefield!
    But about your particular problem...I think it is something to do with memory addressing, that the mic won't work with other audio streams...
    There is a program called 'DIGITAL CABLES' available on ZDNET for free download (costs $30 in full version) that is SUPPOSED to get around the 'What-U-Hear' problem...but, I'm afraid to screw everything up...since I use my system regularly to do work for my church choir...
    cheers...
    SonarBoy
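Since the card refuses to expose the mic and the playback loopback at the same time, one software-side workaround (an illustration of the general idea, not a fix for the driver issue) is to capture the two streams separately and mix them afterwards. This sketch assumes both streams are available as equal-length sequences of 16-bit PCM samples at the same rate:

```python
def mix_pcm(a, b, gain_a=0.5, gain_b=0.5):
    """Mix two equal-length 16-bit PCM sample sequences into one,
    scaling each source by its gain and clipping to the 16-bit range."""
    if len(a) != len(b):
        raise ValueError("streams must be the same length")
    out = []
    for sa, sb in zip(a, b):
        s = int(sa * gain_a + sb * gain_b)
        out.append(max(-32768, min(32767, s)))  # clip to int16 range
    return out

# music = [...]; voice = [...]  # captured separately, same sample rate
# mixed = mix_pcm(music, voice)
```

The gains matter: summing two full-scale streams at unity gain clips, which is why the defaults halve each source.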

  • How to VIEW two tracks' materials SIMULTANEOUSLY and SYNCHRONIZED in MONITORS?

    The Program monitor shows the final edited material, and the Source monitor shows only one track at a time, NOT SYNCHRONIZED with the sequence. The Multi-Camera monitor shows the tracks as I need, BUT editing there is very stiff: I want to do edits and cross-fades by hand IN THE TIMELINE rather than by clicking in the monitor. (Some of the Multi-Camera monitor's other problems: 1) the timeline's current-time indicator doesn't move while clicking in the monitor, 2) vice versa, the monitor image doesn't roll when the space key is pressed to play back the timeline, 3) the monitor doesn't show the cross-fades.) I understand the Reference monitor doesn't help with this either.
    I would SO much like to SEE all tracks' materials simultaneously in some monitor, so I can edit efficiently in the timeline.
    Software: Premiere CS3 version 3.2.0
    Operating System: Mac OS X (v10.4)

    You can only do this in the Multi-Camera monitor; that's what it's there for. The proper procedure is to cut there, then go to the timeline and add transitions.
    If your Multi-Camera monitor playback isn't working correctly (I admit I have a hard time understanding your points here), then try solving those issues first.

  • Can we capture two types of Serial Numbers for a material?

    Hi,
    I have a scenario where I need to capture two different types of serial numbers for a material. The scenario is very similar to the one below:
    Say a CAR is a serialized material; I want to capture the engine no. and chassis no. for each car.
    I know that in standard serial number management we can capture one kind of serial no. for an item, but can we capture two types of serial numbers for an item? If yes, how can we do that?
    Please help.
    Thanks,
    Parimal.

    Hi,
    No, it is not possible to have two serial numbers for a material.
    The main attribute of a serial number is that it is UNIQUE.
    You can instead use the vehicle number as the serial number and keep the other numbers under the Manufacturer data section of the General data tab in transaction IQ01 (equipment master creation).
    You can also rename the manufacturer data fields, such as Model number to Engine number and Manufacturer serial no. to Chassis no., using the CMOD transaction.
    Regards

  • [svn:bz-trunk] 19769: Add destination and two streaming endpoints to test the invalidate-messageclient-on-streaming-close setting .

    Revision: 19769
    Author:   [email protected]
    Date:     2011-01-14 12:05:23 -0800 (Fri, 14 Jan 2011)
    Log Message:
    Add destination and two streaming endpoints to test the invalidate-messageclient-on-streaming-close setting.
    Modified Paths:
        blazeds/trunk/qa/apps/qa-manual/WEB-INF/flex/messaging-config.mods.xml
        blazeds/trunk/qa/apps/qa-manual/WEB-INF/flex/services-config.mods.xml

    Originally Posted by namal
    Hello,
    can you please re-upload the file? The link doesn't work. I really need this driver pack for the Audigy 2 NX. I have Windows 7, and I don't want to buy a new external sound card for my notebook. Many thanks for the great work.
    In case you still require your driver and Alchemy, here are the links for the Audigy 2 NX
    Downloads Page
    http://support.creative.com/Products...05&prodID=9103
    Driver
    http://support.creative.com/download...wnloadId=11994
    Alchemy
    http://support.creative.com/download...wnloadId=12579

  • Can't open two streams from twitch fullscreen on different monitors at the same time, why not?

    When I try to open two streams, each fullscreen on its own monitor, the one opened second just overlaps the existing one.

    Hi wilson1470,
    Only one window can be fullscreen at a time. Is it possible to keep the second one in a tab of the same fullscreen window?
    This may be a nice alternative to what you are looking for: [https://addons.mozilla.org/en-US/firefox/addon/monitor-master/]

  • Audio video capture and stream

    Hi everyone, I need some source code for my application.
    I've been given a task by my school to build an application that can capture and stream an audio/video file.
    If you can help me, please send me the code (the simpler the better, since I'm a beginner).
    By the way, I use JMF 2.1.1e.
    Thanks in advance.

    Hello, does anybody know how to read a .wmv file using JMF?

  • Capture video stream from network

    Hi all,
    I have an application that broadcasts video to the network,
    and I want to write an app that uses JMF to capture the video stream.
    If you have any sample source code, please send it to me
    (my mail: [email protected]).
    Thank you.

    Yep.
    AVTransmit2 is giving an ERROR (line 123: "Couldn't create DataSource"), so I think the MediaLocator is not being found and the DataSource is not created, even though the locator variable is not NULL.
    I am trying to transmit from a webcam (real time).

  • Use two streams to manage client video/audio. Can I combine the streams and record them with the server-side AS API?

    I use two streams to manage the client's video and audio. Can I combine the streams and record them with the server-side ActionScript API?
    I tried Stream.play()
    var s=Stream.get("combine");
    s.play("video");
    s.play("audio",null,null,false);
    s.record("append");
    It doesn't work!

    Thanks, that's what I had thought. Our domain.com zone is sourced internally and replicated to our advertisers for external users, so there's no way to change the result for internal vs external users.
    This is a rudimentary question that I should already know, but I sort of inherited this after it was built: can I have users sign in with their email address ([email protected]) while, under the hood, their SIP address is [email protected]? This would let users
    sign in with an address they expect, but it would take advantage of the local lyncdiscoverinternal record.
    Thanks,
    Matt

  • Capture RTP stream in a file

    I am developing a phone client application. One of the features it should include is recording conversations; my problem is that I can't find how to capture the audio stream sent via RTP and record it into a .wav file. I'm using JMF for streaming.
    I know that this is possible using Ethereal, which captures the stream into .au files that you then just have to convert to .wav.
    Isn't it possible to do this using JMF?
    Thank you in advance.

    Hello!
    Do you already know how to do it?
    I think you can create a media locator on the server and save the file on the client. For a single stream, you can create a DataSink to a file. But if there is more than one stream at the same time, you can create a SessionManager.
    In this SessionManager, implement the update method to catch the events of a new stream.
    For every new stream, you can play the stream and save it.
    But this is just what I've read; I'm planning to do it too. Do you have it done already?
    Thanks!
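Whatever JMF plumbing ends up delivering the audio, the final step of writing raw PCM out as a .wav file is simple; this is a stdlib Python illustration of that last step only (not JMF code), with the telephony-style defaults being an assumption:

```python
import wave

def write_wav(path, pcm_bytes, channels=1, sample_width=2, rate=8000):
    """Write raw little-endian PCM bytes to a .wav file.
    Defaults assume a typical telephony stream: mono, 16-bit, 8 kHz."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(channels)
        wf.setsampwidth(sample_width)   # bytes per sample
        wf.setframerate(rate)           # samples per second
        wf.writeframes(pcm_bytes)

# write_wav("call.wav", captured_pcm)  # captured_pcm: bytes from the RTP side
```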

  • Capturing and streaming

    Hi all.
    I've designed and installed an in-place video system for my university, which captures live from a camera to an iMac via Firewire. I am capturing in DV or HDV in FCP and Quicktime Player. I've been asked now to add streaming to the setup as well, for live broadcasts of performances.
    Here is one solution: capture as above, run the camera's analog output to a second computer with a capture device, and stream it with QT Streaming Server or something similar.
    But does anyone have a suggestion as to how to capture and stream with the FireWire signal? Is there a way to capture and stream from the same computer? Or a way of, in a sense, "splitting" the FireWire signal (an impossibility, I know)? It would be ideal if I could have a second computer passively capturing the FireWire video for streaming.
    Any ideas? Thanks!

    I'm going to answer my own question here.
    I just experimented and, contrary to everything I thought I knew, daisy-chaining FireWire devices will actually work. If I connect a camera to Computer A, and connect Computer B to Computer A's other FireWire port, then BOTH computers can see the video signal. I will have to try this with a hub as well.
    I would imagine that the computer with multiple ports has to have a single FireWire bus, but that is a guess. And I can't get this to work with iMovie; I think iMovie by default tries to control the device and doesn't play well with multiple hosts actually trying to control it. But I can run FCP (set for "non-controllable device") or QuickTime Player on both machines, and it works just fine.
    Next step: record with FCP on one computer, test streaming on the other with Quicktime Broadcaster.

  • Record two video streams to disk simultaneously

    I am recording long conference sessions, with each session up to 4 hours. I have two video cameras (FireWire Standard definition) connected to two computers. When a session begins I record each through QuickTime on each computer. This works beautifully, producing files of up to 50GB with no problem. (I am later combining these with SMIL.)
    I am travelling, and I am wondering if I can simplify the equipment by recording both streams on one computer. I know this is pushing it, but I would like to give it a try. I am using fast LaCie hard disks. I could connect two disks and put each stream on a different disk.
    QT will not handle more than one stream at a time. Is there another application that I might run concurrently to achieve this? I particularly like the simplicity of QT which silently pours the DV format onto the disk.
    I tried iMovie, but when I want to get the movie out it takes a while to convert. I am nervous that with big files it will crash out.
    MacBook Pro 17" 2Gb ram   Mac OS X (10.4.7)   30" Cinema display

    Recording video is one of QuickTime 7's new tricks (Mac version only). It is a very CPU-intensive process (capture and encode simultaneously), and even a very fast computer may drop frames.
    Version 6 was not allowed to do more than one task at a time. For example you couldn't view a file while exporting another. Version 7 doesn't have these limitations. It can do multiple exports and view multiple files. But this comes at a price as exports would slow down and viewed files would start dropping frames during playback.
    Back to your question. QuickTime 7 can only have one capture at a time.
    But what if you used more than one instance of QuickTime 7?
    Duplicate the QuickTime Player app (Option-drag the app from the Applications folder to your Desktop) to have two versions.
    Rename the Desktop version so you don't get confused, and open them both. You should now be able to open two recordings, but I don't think you'll be able to have two different save locations or camera recording settings (you might; I've never attempted this).
    Don't attempt to use high quality (device native), keep the capture dimensions small, and it might work.
    But it would be easier to record to tape on the camera and edit using iMovie.
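For the SMIL step mentioned in the question, a minimal side-by-side layout for the two capture files might look like the following; the file names, region names, and dimensions here are assumptions for illustration:

```xml
<smil>
  <head>
    <layout>
      <root-layout width="1440" height="540"/>
      <region id="left"  left="0"   top="0" width="720" height="540"/>
      <region id="right" left="720" top="0" width="720" height="540"/>
    </layout>
  </head>
  <body>
    <!-- <par> plays both clips in parallel, one per region -->
    <par>
      <video src="camera1.mov" region="left"/>
      <video src="camera2.mov" region="right"/>
    </par>
  </body>
</smil>
```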

  • Capturing two video streams via FireWire simultaneously

    I want to record two videos from two FireWire cameras simultaneously, straight to my hard drive. I have been able to record one stream in QuickTime and one stream in iMovie HD at the same time without any trouble; I just want to save the space the HD DV format video file takes up. Would buying another QuickTime Pro license allow me to set one QuickTime to one camera and the other QuickTime to the other camera? If I can do this, would the record setting be best set to MP4 or H.264 if I am editing this in FCP? If this isn't plausible, does anyone know of software out there that might solve my problem? Can you achieve this in FCP? Thanks, Dave

    Just click on the QuickTime Player icon in the Applications folder. Command-C to copy it, then Command-V to paste it. You now have two copies of QuickTime Player. Open one, set its input, then open the other and see if you can set its input separately.
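The duplicate-the-app trick above can also be scripted, since an application bundle is just a directory tree; this is a generic sketch, and the paths in the usage comment are assumptions:

```python
import shutil
from pathlib import Path

def duplicate_app(app_path, new_name):
    """Copy an application bundle (a directory tree) next to itself
    under a new name, so two copies can run side by side."""
    src = Path(app_path)
    dst = src.with_name(new_name)   # sibling path with the new name
    shutil.copytree(src, dst)
    return dst

# duplicate_app("/Applications/QuickTime Player.app", "QuickTime Player 2.app")
```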

  • Capturing video from multiple devices simultaneously using QTKit

    I have three cameras on my iMac: one is the iSight (FaceTime HD Camera, built-in), and the other two are USB cameras. My system version is Mac OS X 10.7.3 (11D2001); SDK: Mac OS X 10.7 (11E52).
    My QTKit Capture application uses multiple cameras to capture video with a frame size of 640x480 @ 30 fps simultaneously, but I've found I can only capture with one or two cameras at the same time, and it depends on which USB port is used. Is this a known limitation?

    Following is what I have done:
    _captureDevices = [[NSArray alloc] initWithArray:
                           [QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeVideo]];
    _captureDeviceCount = _captureDevices.count;
    NSLog(@"Found video devices: %lu", (unsigned long)_captureDeviceCount);
    _CaptureSessions = [[NSMutableArray alloc] initWithCapacity:_captureDeviceCount];
    _captureDeviceInputs = [[NSMutableArray alloc] initWithCapacity:_captureDeviceCount];
    _captureDecompressedVideoOutput = [[NSMutableArray alloc] initWithCapacity:_captureDeviceCount];
    _CaptureMovieFileOutputs = [[NSMutableArray alloc] initWithCapacity:_captureDeviceCount];

    // Create one capture session and one output set per device
    for (int i = 0; i < _captureDeviceCount; i++) {
        [_CaptureSessions addObject:[[QTCaptureSession alloc] init]];
        [_CaptureMovieFileOutputs addObject:[[QTCaptureMovieFileOutput alloc] init]];
        [_captureDecompressedVideoOutput addObject:[[QTCaptureDecompressedVideoOutput alloc] init]];
    }

    // Connect inputs and outputs to each session
    for (int idex = 0; idex < _captureDeviceCount; idex++) {
        BOOL success = NO;
        NSError *error = nil;

        // Find a video device
        QTCaptureDevice *tempCaptureDevice = (QTCaptureDevice *)[_captureDevices objectAtIndex:idex];
        char tempDeviceNameUniqueID[1024] = "";
        char tempCaptureDeviceName[1024] = "";
        [[tempCaptureDevice localizedDisplayName]
         getCString:tempCaptureDeviceName maxLength:1024
         encoding:NSUTF8StringEncoding];
        [[tempCaptureDevice uniqueID]
         getCString:tempDeviceNameUniqueID maxLength:1024
         encoding:NSUTF8StringEncoding];
        NSLog(@"\nCaptureDevice: %d \nDeviceName: %s \nUniqueID: %s\n", idex, tempCaptureDeviceName, tempDeviceNameUniqueID);

        success = [tempCaptureDevice open:&error];
        if (error != nil)
            NSLog(@"CaptureDevice open error: %@", error);
        if (!success) {
            tempCaptureDevice = nil;
            continue; // Handle error: skip this device
        }

        // Add the video device to the session as a device input
        [_captureDeviceInputs addObject:[[QTCaptureDeviceInput alloc] initWithDevice:tempCaptureDevice]];
        success = [[_CaptureSessions objectAtIndex:idex] addInput:[_captureDeviceInputs objectAtIndex:idex] error:&error];
        if (!success)
            NSLog(@"mCaptureSession addInput error!"); // Handle error

        // Create the decompressed video output and add it to the session
        success = [[_CaptureSessions objectAtIndex:idex] addOutput:[_captureDecompressedVideoOutput objectAtIndex:idex] error:&error];
        if (!success)
            NSLog(@"mCaptureSession addOutput error!"); // Handle error

        int _frameWidth = 640;
        int _frameHeight = 480;
        int _frameRate = 30;
        [[_captureDecompressedVideoOutput objectAtIndex:idex]
         setMinimumVideoFrameInterval:(NSTimeInterval)1 / (float)_frameRate];
        NSDictionary *captureDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithDouble:_frameWidth], (id)kCVPixelBufferWidthKey,
                                           [NSNumber numberWithDouble:_frameHeight], (id)kCVPixelBufferHeightKey,
                                           [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferPixelFormatTypeKey,
                                           nil];
        [[_captureDecompressedVideoOutput objectAtIndex:idex] performSelectorOnMainThread:@selector(setPixelBufferAttributes:) withObject:captureDictionary waitUntilDone:NO];

        // Start running the session
        [[_CaptureSessions objectAtIndex:idex] startRunning];
    } // end of for (int idex = 0; idex < _captureDeviceCount; idex++)
    Message was edited by: 08064140
