Question About HDV Capture / Timeline

Hello,
I am capturing HDV from an HV40. Whenever I bring a clip into the timeline, the audio is not a stereo-linked pair; instead it is two individual mono tracks. When I make it stereo-linked, I then have to set the pan back to zero. That is a lot of extra steps just to control audio volume. How do I make it stereo-linked at capture so I don't have to do all of this afterwards?
My capture settings are:
Sequence Preset: HDV1080i60
Capture Preset: HDV
Device Control Preset: HDV FireWire Basic
Thank you for your help.

Stereo linking is selected in the clip settings in the capture window.
You do not have to reset the pan to zero. -1 is correct for a stereo pair; 0 is not.

Similar Messages

  • From USB to IEEE-1394: question about HD capture, editing, sharing.

    I'm new to editing video but have a question about capturing video into Premiere Elements 7. I have been using a Canon HF11, which shoots only in HD and only has USB output for copying video to my desktop. I don't have a Blu-ray burner yet, but I have burned a few projects to DVD, shared video on YouTube, and saved files in .avi and .mpeg to view on computer screens. Obviously, though, I'm missing out on sharing HD on disc at this time. I just bought a Canon XH A1s, which shoots both HD and SD on miniDV tapes and uses an IEEE-1394 cable for capturing (into PE7). I did a 3-minute test project in SD and burned it to a DVD. I was very pleased with the results and the ease of editing. I don't have any HD tapes yet, but when I do, what differences will I experience from shooting in HD and then capturing, editing, and sharing? My current HD process with the HF11 is slow, with occasional low-resources warnings and less than pleasing results on DVD. Thanks for any and all comments and/or suggestions.

    Your HF11 is the AVCHD format and the Canon XH A1 is HDV (MPEG-2). AVCHD takes a lot more computer horsepower to edit than HDV, the main reason being that AVCHD is more highly compressed, giving smaller file sizes. This is also why you get the low-resource and memory alarms while editing.
    To shoot HD you do not need HD miniDV tapes; you can use standard miniDV tapes. The HD tapes are supposed to give fewer drop-outs than the standard tapes, but they are considerably more expensive. With DV-AVI a drop-out would hardly be noticed, but because HDV MPEG-2 is a compressed format and the picture relies on information spread across a number of frames, a drop-out can produce a second or so of bad video. That said, I have had very few issues with drop-outs. If you are shooting a wedding or something else important that you do not want to risk, it is probably better to use an HD miniDV tape.
    In the past the advice has been to shoot in high definition for the best quality, then down-convert in the camcorder and capture in DV-AVI, the reason being that older versions of Premiere Elements did not do a good job of the down-conversion. Reports on PE7 indicate it is better, so you could use an HDV workflow and burn to DVD as the last stage. This also lets you export a high-definition version of your video to view on a monitor where you can appreciate the higher resolution, or, in the future when you have a Blu-ray burner and player, burn the file to disc without having to redo all the editing.

  • Question about 24p capture

    I have a friend who shot with 24p dv cam and I was wondering, do I need to use his 24p dv cam to capture or can I capture using my Panasonic cam (which is not 24p) in Final Cut Pro? I realize I have to set my capture presets for 24p, but I don't have a regular dv deck.

    Sorry, Tim, but that brief article you read, I would not call research at all. It tells you in a very cursory fashion what to do to capture video shot at 24P Advanced. What it does not tell you is anything about what 24p is, how it's recorded, the various pulldown schemes, what a pulldown does, why it's needed, and why or when you should remove it. You need to know that 24p is actually vanilla DV NTSC with a pulldown, because the NTSC standard requires video to play at 29.97 fps, no ifs, ands, or buts about it. How you remove the pulldown to get back to 23.976 fps depends on which flavor of 24p was shot, and quite honestly, for years now we've been reading posts by people who have no clue what flavor they shot, or who "accidentally" shot both and need to combine them in the same sequence. The destination can be the same, but the routes taken to get there are not.
    Adam Wilt's article and Graeme's article are far superior to that Ripple Training blurb.
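    As a back-of-the-envelope illustration of the frame-rate relationship described above (this is standard NTSC arithmetic, not anything specific to this footage), a short sketch:

        public class PulldownMath {
            public static void main(String[] args) {
                // NTSC "24p" actually runs at 24 * 1000/1001 = 23.976 fps, and NTSC video at 29.97 fps.
                double filmRate = 24.0 * 1000.0 / 1001.0;   // 23.976...
                double ntscRate = 30.0 * 1000.0 / 1001.0;   // 29.970...
                // 2:3 pulldown spreads 4 progressive frames across 5 video frames (10 fields),
                // so the two rates differ by exactly 5/4; removing the pulldown reverses it.
                System.out.printf("23.976 * 5/4 = %.3f fps%n", filmRate * 5.0 / 4.0);
                System.out.printf("29.970 * 4/5 = %.3f fps%n", ntscRate * 4.0 / 5.0);
            }
        }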

  • This sounds like a silly question about podcast capture.

    To be honest, I'm not a native English speaker and I don't know much about computers. I got my new MacBook yesterday, and the whole point of getting it was to make podcasts. While I was in Account Setup, it asked me to enter a Podcast Producer server to connect to. What should I put there?
    Many thanks.
    Theresa

    I have the same question!

  • Edit material in DV re-capture timeline to HDV

    (FCE HD)
    Let's say I have recorded material in HDV. I'll capture all the tapes using the DV codec (in-camera Sony down-conversion HDV>DV via FireWire). All editing is done in DV.
    When the DV editing is done, is there any way to re-capture the timeline material in HDV? That way I could edit in DV but make the final version in HDV (offline > online).
    G5 dp2.5/2.5GB   Mac OS X (10.4.3)  

    Thanks again. I'm still using FCE 2.03, but I'll upgrade soon, when I get my HDV camcorder.
    So, because FCE HD can't edit HDV natively, does it use the Apple Intermediate Codec? And how much disk space does AIC take (e.g. for one hour)?
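    The disk-space part of that question never got an answer in the thread, but the arithmetic is easy once you plug in a data rate. HDV's 25 Mbit/s transport stream is a known figure; the AIC rate below is only an assumed ballpark for illustration, since AIC is variable-rate:

        public class DiskSpacePerHour {
            // Converts a video data rate in megabits per second to gigabytes per hour.
            static double gbPerHour(double megabitsPerSecond) {
                return megabitsPerSecond * 3600.0 / 8.0 / 1000.0;
            }

            public static void main(String[] args) {
                // HDV 1080i is a 25 Mbit/s MPEG-2 transport stream.
                System.out.printf("HDV 25 Mbit/s  ~ %.1f GB per hour%n", gbPerHour(25));
                // Assumed ballpark for AIC 1080i, just to show the scale of the intermediate files.
                System.out.printf("AIC 60 Mbit/s  ~ %.1f GB per hour%n", gbPerHour(60));
            }
        }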

  • A question about AIC

    Hello
    I have a question about Apple Intermediate Codec (AIC).
    At the moment I have an Intel iMac Core Duo and Final Cut Studio 1 (FCP 5.1.4), and I have been editing in SD so far.
    I have been looking at the new Sony CX6, which uses the AVCHD codec, and I am aware that my FCP 5.1.4 will not import that.
    However, I have just bought iMovie 08, and that program will accept AVCHD and convert it to AIC.
    At first it sounded like a good idea to capture in iMovie 08 and then open the files in FCP, since my FCP will accept AIC.
    When I looked for AIC in "Easy Setup", the only one I could find was "HDV-Apple Intermediate Codec 1080i50" (I'm in a PAL country), and this is where I got confused, since it says HDV.
    Is that another AIC that won't work with my iMovie AIC, or are they the same?
    To put this in one question: could I use my iMovie 08 to capture AVCHD into AIC and then import those files into FCP under AIC 1080i50 even though it says HDV-AIC?
    I hope someone will be able to help me with this question.
    Cheers
    Hans

    Your problem is you have iDVD 4 or earlier. Such versions ONLY supported internal SuperDrives.
    PatchBurn at http://www.patchburn.de/faq.html may help you.

  • Maybe a simple question about M2T ?

    Hi
    I record in HDV on the JVC GY 101.
    I store on the Focus Enhancements DR HD100 Hard disk recorder.
    The footage I have is in HDV, 25 frames progressive.
    The hard disk stores it as an .m2t file, and I seem to need to use MPEG Streamclip to convert it for Final Cut Pro HD.
    I have a few questions about this.
    1) What should I convert to with MPEG Streamclip? .mov, DV, .m2v, etc.?
    2) What settings should I use in FCP with regard to frame size? The converted clips are 1280x720. When I choose this setting in the sequence, I end up with small black bars to the left and right of the video in the Canvas.
    3) I will also later have to add some Digi Beta footage, which will be PAL, 720x576 anamorphic.
    4) What is the best resolution for me here?
    5) And will the quality of the original .m2t file be degraded by MPEG Streamclip?
    Thanks so much for your help
    All the best
    Simon

    HDV isn't an "edit-friendly" format, meaning: HDV uses inter-frame compression, combined with a long group-of-pictures (GOP) of 15 frames (12 frames in PAL-equivalent). In intra-frame compression (as in DV, DVCPRO HD, etc.), each video frame is a complete image that stands alone. Inter-frame compression (which is also used in DVDs) uses only a limited number of complete image frames, the GOP determines that number. So, in HDV for example, with a GOP of 15 frames, 1 second of 30fps video contains only 2 complete frames, while the other 28 frames are determined by calculations used in conjunction with the Bi-directional ('B') and Predicted ('P') frames, which contain partial information on the changes in the video in between the 'I' frames which contain whole images. This process, of course, requires much more computer processing power than when editing DV, DVCPRO HD, Uncompressed, etc.
    Editing video streams such as HDV (and XDCAM HD) is made difficult by the reduced number of I-frames. For example, if you make a cut between the I-frames in a video clip, the computer then has to calculate new I-frames to place on both sides of the cut. Converting on capture (or after import) to an intra-frame codec like DVCPRO HD, ProRes 422, or Uncompressed will reduce the processing load and increase real-time performance, as well as give better performance when adding effects and doing compositing work like chroma-keying.
    A common rule of thumb in professional applications is to acquire in HDV, XDCAM HD, etc., but capture and edit in DVCPRO HD, ProRes 422, or even Uncompressed. Native HDV editing is not taboo by any means; it's just not as easy or robust -- especially with the LONG render times. Depending on what kind of work you do (and of course personal preference), you may not see much benefit by transcoding to another format.
    Are you suggesting I have to convert each clip after capture?
    Unless you have an HDV deck with HD-SDI and a capture card, you would have to convert each clip. Batch-conversion would be very useful for that.
    Hope that helps.
    tim
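    To make the batch-conversion suggestion concrete, here is a minimal sketch that walks a folder of .m2t captures and hands each file to a command-line converter. The ffmpeg call is only one example of the idea, not a recommendation of specific settings; substitute whatever transcoder and intermediate codec fit your workflow (MPEG Streamclip has a batch list of its own, if I remember right):

        import java.io.File;

        public class BatchConvertM2T {
            public static void main(String[] args) throws Exception {
                // Folder of captured HDV transport streams; adjust the path to taste.
                File captureDir = new File("/Volumes/Media/HDV_Captures");
                File[] clips = captureDir.listFiles((dir, name) -> name.toLowerCase().endsWith(".m2t"));
                if (clips == null) return;

                for (File clip : clips) {
                    String out = clip.getAbsolutePath().replaceAll("(?i)\\.m2t$", ".mov");
                    // Example converter invocation; swap in your own tool and codec of choice.
                    ProcessBuilder pb = new ProcessBuilder(
                            "ffmpeg", "-i", clip.getAbsolutePath(),
                            "-c:v", "prores", "-c:a", "pcm_s16le", out);
                    pb.inheritIO();
                    int exit = pb.start().waitFor();
                    System.out.println(clip.getName() + " -> exit code " + exit);
                }
            }
        }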

  • Questions about video studio

    Hi. Not an Adobe question per se, but I'm going to be setting up a video studio / green screen room and thought some Adobe users might be able to point me to an appropriate place to ask some questions about how it should be built. (Planning a 16' x 16' room, but it could be up to 16' x 20' if necessary, with standard 8' ceilings; no way around that.)
    I would like to film a weekly show in the studio with anywhere from 1-3 people sitting at a greenscreen desk; I would also use the room to record singing and piano.
    I will be processing with Adobe Master Suite CS6 (and Sonar X2 for music, in conjunction with Audition).
    I've done some greenscreen work with a consumer-grade digital camcorder and hardboard painted green, keying with After Effects and using Premiere as the nonlinear editor. But I'd like to set up something more professional now, while trying to keep the budget under control. I'm perfectly fine with DIY softboxes and screens as long as they work well.
    I have questions about lighting, cameras, preferred room dimensions, and any other factors I should consider when having this built (it also needs to accommodate the recording of piano and vocals).
    Thanks for any suggestions.

    Thanks Steven.
    I've been worried about the 16' being a little too tight - so will go with the 20' (even bigger might be better, but that's the most I can really do).
    A few other questions:
    1) I thought I was going to be limited to 8' ceilings, but depending on how things lay out, it might not be a big deal to put in 9' ceilings. Would that help significantly with the lighting, or is it not a big deal?
    2) Should I have an electrician install lighting on the ceiling to illuminate the greenscreen (back wall) from above, or to shine down on the talent from above and behind for backlighting, or should I just do all the lighting with standing softbox lamps?
    3) I've read that sound in a rectangular studio can be a big problem. One recommendation was to hang black curtains on tracks that could cover all areas not in the shot; this would reduce light bounce and spill from the greenscreen while also minimizing standing waves / echo. Any advice on this or other economical things to consider as the room is being built?
    4) I have a Canon HV20 consumer-grade HDV camcorder, and it has an HDMI port. Does anyone have experience with the Blackmagic Intensity Pro, which should let you capture uncompressed video from the camera?
    5) Does anyone have advice on the Sony HVR-V1U MiniDV camcorder? I can get one used for under $1000. Would it be a giant step up and make green screening / keying much easier (using Adobe Premiere and After Effects CS6), or is it not worth the money?
    6) Any advice on what to do for microphones during a show where 2-3 people are seated at a rectangular table facing one camera and talking? I will also record some electric piano and vocals.
    Thanks for any suggestions (including advice about any other forums where I should post my questions)

  • Questions about editing with io HD or Kona 3 cards

    My production company is switching from Avid to Final Cut Pro. I have a few editing-system questions (not about ingesting and outputting, just about the systems for the actual editors; we will have Mac Pros with either a Kona 3 or an Io HD for ingest and output).
    1) Our editors work from home, so they will most likely be using MacBook Pros (Intel Core 2 Duo 2.6GHz, 4GB) with eSATA drives to work on uncompressed HD. Would they be able to work more quickly in FCP on the new 8-core Mac Pro (2 Quad-Core 2.8GHz Intel Xeon), or will the MacBook Pros be able to hold their own editing hour-long documentaries in uncompressed HD?
    2) Will having an AJA Kona 3 (if we get the editors Mac Pros) or an Io HD (for the MacBook Pros) connected be a significant help to the editors and their process? Will it speed up their work? Will it let them edit sequences without having to render clips of different formats? Or would they be just as well off editing without the Io HD?
    I'm just trying to get a better understanding of the necessity of the AJA hardware in terms of helping the editors do what they have to do with projects that have been shot on many formats: DVCPRO tapes, Aiptek cameras that create QTs, and P2 footage.
    Thanks

    1. With the Io HD, laptops become OK for working with ProRes and simple eSATA setups. Without the Io, they can't view externally on a video monitor (a must in my book). It will not speed up rendering a ton, nor will it save renders of mixed formats. The idea is to get all source footage to ProRes with the Io; the Io also relieves the CPU of having to convert ProRes into something you can monitor externally on a video monitor, and it can record back to any tape format you want... all in real time.
    2. Kona 3s in towers would run circles around render times on a laptop, no matter what the codec, but the Kona does not really speed renders up. That's a function of the CPU and how fast it is (more CPUs at faster speeds will speed up render times).
    I'd recommend you capture to ProRes with the Io's or the Kona 3 and not work in uncompressed HD. You gain nothing quality-wise by doing it, and you only use up a ton of disk space (6 times the size, in fact) capturing and working in uncompressed HD, which, from your post, you're not shooting anyway. The lovely thing about ProRes is that it's visually lossless, efficient, and speeds up the editing process. Mixing formats can be done, but it's better to go to ProRes for all source footage and edit that way.
    With either the Kona or the Io, you can then output to uncompressed HD tape... that's what they do for you, no matter what codec you've edited in. ProRes is designed to be the codec of choice for HD projects, especially when you're shooting several different formats: get them all singing the same tune in your editing stations and you'll be a much happier camper. The only reason to buy laptops is portability... otherwise you're much better off, speed-wise, with towers and the Kona 3.
    Jerry
    Message was edited by: Jerry Hofmann
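    To put rough numbers on the "6 times the size" point, here is a quick sketch. The data rates are approximate ballpark figures for 1920x1080 at 29.97 fps, quoted only to show the scale, not exact specifications:

        public class HdStorageBallpark {
            // Converts a data rate in megabits per second to gigabytes per hour.
            static double gbPerHour(double megabitsPerSecond) {
                return megabitsPerSecond * 3600.0 / 8.0 / 1000.0;
            }

            public static void main(String[] args) {
                // 8-bit 4:2:2 uncompressed: 1920 * 1080 pixels * 2 samples * 8 bits * 29.97 fps.
                double uncompressed = 1920.0 * 1080 * 2 * 8 * 29.97 / 1_000_000;  // ~995 Mbit/s
                // Apple quotes roughly 145 Mbit/s for ProRes 422 at this frame size and rate.
                double proRes422 = 145.0;

                System.out.printf("Uncompressed 4:2:2  ~%.0f Mbit/s  ~%.0f GB/hour%n",
                        uncompressed, gbPerHour(uncompressed));
                System.out.printf("ProRes 422          ~%.0f Mbit/s  ~%.0f GB/hour%n",
                        proRes422, gbPerHour(proRes422));
                System.out.printf("Ratio: roughly %.0fx%n", uncompressed / proRes422);
            }
        }

    That lands in the same ballpark as the 6x figure above, and 10-bit uncompressed is higher still.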

  • A Question about RAW and Previews

    I have just recently started shooting in RAW (mostly for the post-production editing abilities; I am an avid amateur photographer bent on learning as much as I can). I set my camera to capture in RAW + L. I don't know why I feel like I want it to capture both the RAW and JPEG files, which leads me to my first question: is it necessary to have the camera capture both the RAW and the Large JPEG? I am assuming the answer is no, since after importing the RAW file into Aperture you could always export a JPEG if you wanted one, so there is no need to fill up your internal storage (if using managed masters) with the extra JPEG. Is this thinking correct?
    Next, if you do import RAW-only files and then want to export certain images, do you have a choice to export the original RAW image? It seems that it only allows you to export a JPEG at the original size. To answer my own question: perhaps you have to export the Master in order to get the full RAW file, and if you want a JPEG you export not the Master but a version of the Master? Is this correct?
    Lastly, I wanted to ask a question about Previews. I have my preferences set so that previews have the highest quality with no limit on size. What is the significance of setting it this way? I just assumed that if I wanted to share an image at the highest quality without exporting it, this was the way to go. Is there any validity to that? The reason I ask is that I don't want all of these high-quality previews taking up internal disk space if I really don't need them. Is there a way to change the preview size once previews are created? Meaning, if you have it set to generate low-quality previews, can you change it dynamically to high, and vice versa?
    I know this is a lot in one post. Thanks for tackling it.
    Mac

    You can change the quality of the Previews in the Preferences -> Previews tab.
    You can regenerate Previews with the Delete and Update Previews under the Images menu.
    Regards
    TD

  • Questions about SRM PO in Classic scenario

    Hello All
    I have a number of questions about the SRM PO in the classic scenario.
    1) If the backend PO is changed in ECC (i.e. if any quantity is added), can we have an approval workflow for the change?
    We currently have release strategies for other POs in ECC. How do we accommodate only the PO changes?
    Our requirement is to have no approval when the PO is first created, but only for changes.
    2) If the PO is sent as XML to the vendor, is it possible to capture the PO response in ECC? What are the prerequisites for this to happen? Is SAP XI required for this?
    3) In case the PO is cancelled or reduced, does the balance go back to the SRM sourcing cockpit?
    We are using SRM 7.0.
    Regards
    Kedar

    Hi,
    1) If the backend PO is changed in ECC (i.e. if any quantity is added), can we have an approval workflow for the change?
    We currently have release strategies for other POs in ECC. How do we accommodate only the PO changes?
    Our requirement is to have no approval when the PO is first created, but only for changes.
    Sol: In ECC 6.0, if the PO is changed and a release strategy exists in ECC 6.0, then it follows the ECC 6.0 approval route.
    2) If the PO is sent as XML to the vendor, is it possible to capture the PO response in ECC? What are the prerequisites for this to happen? Is SAP XI required for this?
    Sol: XI is mandatory.
    3) In case the PO is cancelled or reduced, does the balance go back to the SRM sourcing cockpit?
    Sol: Once the PO is created in ECC 6.0 for the PR from the sourcing cockpit, cancelling or reducing it will not update the sourcing cockpit in SRM.
    E.g. a PR for 100 units is in the SRM sourcing cockpit and you have created a PO for 40 units in ECC 6.0;
    for the remaining 60 units of the PR, you can create another PO in ECC 6.0.
    Regards
    Ganesh

  • Question about "Native ISO" and Color Grading in PP

    I have a question about "Native ISO" in the real world and how it relates to color grading.  I was shooting 35mm film before all these digital cameras became flat-out amazing practically overnight.  Then the goal was always to shoot with the lowest ISO possible to achieve the least amount of grain (unless you were making an artistic decision to get that look).  If I was shooting outside plus had a nice lighting package I'd shoot 5201/50 ASA (Daylight) and 5212/100 ASA (Tungsten) 99 times out of 100.
    I've recently been shooting a lot with the Blackmagic 4K and have read that its "Native ISO" is 400. Because of my film background this seems counter-intuitive. Yesterday I was shooting for a client and had the camera at f/16 with ISO 200. Because of what I'd read, I was tempted to stop down to f/22 and change my ISO to 400... but the "little film voice in my head" just wouldn't let me do it. It kept telling me, "Higher ISO means more noise... stay at 200 and you will get a cleaner image."
    So how does it work with "Native ISO"? Should I really shoot at ISO 400 every chance I get in order to capture the best image for how the camera is calibrated? Will it really give me more latitude when color grading? Or would I still get a cleaner image staying at ISO 200? I've Googled around quite a bit, but haven't found any articles that answer this specific question. I'd love to hear from someone who knows a bit more on the subject or has a link that could point me in the right direction.
    Thanks much.

    Hey, shooter ... yea, interesting discussion and always nice to learn. Great pic, too!
    jamesp2 ...
    Great answer. I've followed quite a bit of the discussion about the BM cams as well, one does feel a need to check out the possibilities for that next beastie one will need to acquire. But ... which one?
    I've always been a bit of a hard case about testing, testing, testing. For instance, what happens with the dome down, or with a flat diffuser versus the dome in the up position, in metering? Back in the film days, we had our own lab and did our own printing, as well as the um ... difficult images ... from other studios. I needed to know how to get exactly the same diffuse highlight no matter whether it was a "standard" 3:1-lit studio shot or a near-profile with no fill that needs dark shadows. I tested and burned through boxes of medium-format Polaroid and 120 film and a lot of color paper. The finding? To get the same print time no matter the contrast or lighting style, the shot needed to be metered either with the flat disc (Minolta) or dome-down (Sekonic), held at the highlight location and pointed at the main light source. I could meter and nail the exposure every time. Ahh no, insist so many ... one must have the dome on/up and pointed at the camera! Right. Do that, change the contrast, and see what happens to your diffuse forehead highlight on a densitometer ... and see how your printing exposure times change. Oh, and you've just moved your center of exposure up or down on the film's H&D curve, which will also change the way the shadows and highlights print. In truth, though it was subtle, we realistically had no more latitude for a best-case image with pro neg film than one had with chromes. You could probably get away with being "off" more easily, but it still wasn't dead-on.
    So wading into video ... oi vey, you may have noticed the things claimed here there & everywhere ... this setting is God's Gift to Humanity but no, it's total crap ... this sensor is totally flawed but someone else is certain it's the finest piece out there. Yes, opinions will be all over ... but ... in film, it was the densitometer. In video, it's the scopes. Truth. And getting to that can be a right pain. I've seen quite a few contradictory comments about using the BM cams in film mode and also at ISO 200. Yours above gives the most ... comforting? ... explanation (for me) because of your reference to your scopes & the waveform patterns. Thank you.
    Love to learn ...
    Neil
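    For what it's worth, the trade described in the original post (f/16 at ISO 200 versus f/22 at ISO 400) is one stop in each direction, so the sensor sees essentially the same exposure either way; the real question is where the camera's noise and highlight headroom behave best, which is what the "native ISO" figure is about. The stop arithmetic, which is standard photography math and not specific to any camera:

        public class StopMath {
            public static void main(String[] args) {
                // One stop = a factor of two in light. ISO is linear in sensitivity;
                // the light passed by the aperture goes with 1 / (f-number squared).
                double isoStops = Math.log(400.0 / 200.0) / Math.log(2);         // +1.00 stop
                double apertureStops = 2 * Math.log(22.0 / 16.0) / Math.log(2);  // about 0.92 stop
                System.out.printf("ISO 200 -> ISO 400: +%.2f stops%n", isoStops);
                System.out.printf("f/16   -> f/22:     -%.2f stops%n", apertureStops);
            }
        }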

  • Questions about the content of download meeting recording .zip file

    I tried posting this on the resurrected Connect forum, but my Adobe ID wasn't recognized there....
    Concerning the files that are included in the .zip file of the meeting recording that can be downloaded:
    1) Is there any documentation describing the files and their contents (i.e. what each file represents, and what each XML element and attribute in those files represents)?
    2) Are there any files that capture mouse movement on a shared desktop?
    Thank you!

    Hi Sean,
    Regarding your first post:
    Thanks Jorma! I don't have access to an FMS build at the moment, but I'm quite certain it's there. As for contacting Jaydeep, I am 90% sure he authorized us to broadcast his email on here if folks had questions about the tool, but in case I'm wrong and he didn't, I'm going to double-check first.
    Regarding your most recent post..
    "To be clear, the most critical goal I'm trying to accomplish is to create an automated process that will download the recording meeting at its highest quality in a consistent and reliable manner".
    I personally believe this is possible; unfortunately, I haven't seen it done yet. If your recording contains:
    - audio
    - a camera feed
    - screensharing
    Then I think you might be able to get this going. If it contains shared content, like a shared PPT, this gets trickier.
    "To do this, of course, I have to reproduce some of the functionality that Connect provides, starting and combining video and audio streams according to the instructions in the control files."
    Exactly right. If your recording didn't contain shared content, then all you've got on your hands is a bunch of audio/video files that you could edit together however you want with your favourite video editing tool. If it does contain shared content, here's (at a high level) what's happening.
    For shared PPTs or FTContent files:
    First (for version 9 recordings only), Connect reads the information on the shared content's location and SCO within mainstream and indexstream and validates it before loading it. I don't recall this happening to the same extent with version 8 or earlier, but maybe it did. Now, if the content is validated (i.e. Connect can find it) the share pod will display as black; if it doesn't validate, you get an empty pod with a message like "No content is being shared" or something like that.
    Connect then looks at the actual FTContent file and loads the content that is to be shared using the file path and SCO ID listed in there. It's important to note that the SCO ID and file path in there will likely not be the same as those of the original file you uploaded to your room; it's a new SCO ID (I believe SCOs of this type are called referenced SCOs) and a new path.
    Now... if I were going to build some sort of player that would play all these files on one screen to recreate a recording... I might not want to use Connect's code here. If you know the file path to the shared content (from FTContent), you could easily view it with the content URL (conveniently also in FTContent). I'm not a coder, but I'm envisioning something like Presenter's GUI, where you've got the presentation's content in the main area and a video file (if there is one) playing back on the side.
    Anyway, food for thought if you want to try to go about this. Connect recordings are incredibly complex and they come with a big learning curve, but if you can make sense of them the knowledge is quite valuable.
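    For the second question in the original post (what the files in the downloaded .zip actually are), a cheap first step is simply to enumerate the archive and look at the names and sizes. A minimal sketch using the standard java.util.zip API; the file path is just a placeholder:

        import java.util.Enumeration;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class ListRecordingZip {
            public static void main(String[] args) throws Exception {
                // Placeholder path to a downloaded Connect recording archive.
                try (ZipFile zip = new ZipFile("meeting_recording.zip")) {
                    Enumeration<? extends ZipEntry> entries = zip.entries();
                    while (entries.hasMoreElements()) {
                        ZipEntry e = entries.nextElement();
                        // Stream data and the control/metadata XML files (mainstream, indexstream,
                        // ftcontent, etc., as discussed above) all show up by name here.
                        System.out.printf("%10d  %s%n", e.getSize(), e.getName());
                    }
                }
            }
        }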

  • Questions about ActiveX Bridge and SWT

    Hi !
    A few weeks ago I found the www.reallyusefulcomputing.com site, which answered my question about Java and MS Word macros. Now I've hit two problems that I am not able to overcome.
    First, when I build a simple application that uses internal libraries, am I compelled to place the jars in the lib/ext directory of the Java runtime directory? I have had a lot of difficulty getting a working class-path into the manifest file. Why does the application ignore the manifest entry?
    The second problem (and the biggest one) is that I am trying to build an SWT Java bean application, but I have problems at runtime. Is it possible to use this technology for my software, or am I compelled to use AWT and Swing?
    I really hope you can help me.
    Thank you in advance.
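    On the first question: no, the jars do not have to live in lib/ext. A Class-Path attribute in the jar's manifest works as long as the entries are space-separated paths relative to the application jar. A minimal sketch of a META-INF/MANIFEST.MF (the jar and package names here are made up):

        Manifest-Version: 1.0
        Main-Class: com.example.MyApp
        Class-Path: lib/swt.jar lib/helper.jar

    Common reasons the attribute appears to be ignored: the entries are comma-separated or absolute Windows paths instead of space-separated relative URLs, the file is missing its final newline, long lines are not continued with a leading space, or the manifest was never actually packed into the jar (build it with something like jar cfm app.jar MANIFEST.MF ...).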

    hi,
    I have to catch events from an Excel sheet in my Java code: events like a change in a cell value, a click on a user-defined button, and so on. I have written the following code, but it does not give me any events. It simply opens the specified file in Excel in a separate window, and when I click on the sheet or change a value, no event is captured by my code. Can anyone please tell me how to go about this?
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.SWTException;
    import org.eclipse.swt.layout.FillLayout;
    import org.eclipse.swt.ole.win32.OLE;
    import org.eclipse.swt.ole.win32.OleAutomation;
    import org.eclipse.swt.ole.win32.OleControlSite;
    import org.eclipse.swt.ole.win32.OleEvent;
    import org.eclipse.swt.ole.win32.OleFrame;
    import org.eclipse.swt.ole.win32.OleListener;
    import org.eclipse.swt.ole.win32.Variant;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Menu;
    import org.eclipse.swt.widgets.Shell;

    public class EventTry2 {
        private Shell shell;
        private static OleAutomation automation;
        static OleControlSite controlSite;
        // Excel/Worksheet event dispids the listener is registered for.
        protected static final int Activate = 0x00010130;
        protected static final int BeforeDoubleClick = 0x00010601;
        protected static final int BeforRightClick = 0x000105fe;
        protected static final int Calculate = 0x00010117;
        protected static final int Change = 0x00010609;
        protected static final int Deactivate = 0x000105fa;
        protected static final int FollowHyperlink = 0x000105be;
        protected static final int SelectionChange = 0x00010607;

        // Sets Application.Visible (dispid 558) and Application.EnableEvents (dispid 1212) to true.
        public void makeVisible() {
            Variant[] arguments = new Variant[1];
            arguments[0] = new Variant(true);
            automation.setProperty(558, arguments);               // Visible = true
            boolean b = automation.setProperty(1212, arguments);  // EnableEvents = true
            System.out.println(b);
        }

        public Shell open(Display display) {
            this.shell = new Shell(display);
            this.shell.setLayout(new FillLayout());
            Menu bar = new Menu(this.shell, SWT.BAR);
            this.shell.setMenuBar(bar);
            OleFrame frame = new OleFrame(shell, SWT.NONE);
            try {
                // Embed the Excel application itself (not a document) in the OLE frame.
                controlSite = new OleControlSite(frame, SWT.NONE, "Excel.Application");
                this.shell.layout();
                boolean a2 = (controlSite.doVerb(OLE.OLEIVERB_INPLACEACTIVATE) == OLE.S_OK);
                System.out.println("Activated::\t" + a2);
            } catch (SWTException ex) {
                System.out.println(ex.getMessage());
                return null;
            }
            automation = new OleAutomation(controlSite);
            // Make the application visible and turn events on.
            makeVisible();

            System.out.println("Going to create Event listener");
            OleListener eventListener = new OleListener() {
                public void handleEvent(OleEvent event) {
                    System.out.println("EVENT TYPE==\t" + event.type);
                    switch (event.type) {
                        case Activate:
                            System.out.println("Activate Event");
                            break;
                        case BeforeDoubleClick:
                            System.out.println("BeforeDoubleClick Event");
                            break;
                        case BeforRightClick:
                            System.out.println("BeforeRightClick Event");
                            break;
                        case Calculate:
                            System.out.println("Calculate Event");
                            break;
                        case Change:
                            System.out.println("Change Event");
                            break;
                        case Deactivate:
                            System.out.println("DeActivate Event");
                            break;
                        case FollowHyperlink:
                            System.out.println("FollowHyperlink Event");
                            break;
                        case SelectionChange:
                            System.out.println("SelectionChange Event");
                            break;
                    }
                    // Release the Variants delivered with the event.
                    Variant[] arguments = event.arguments;
                    if (arguments != null) {
                        for (int i = 0; i < arguments.length; i++) {
                            System.out.println("@@");
                            arguments[i].dispose();
                        }
                    }
                }
            };
            System.out.println("outside");

            // Open the workbook and hook the listener to the active sheet's events.
            OleAutomation sheetAutomation = this.openFile("C:\\Book1.xls");
            controlSite.addEventListener(sheetAutomation, Activate, eventListener);
            controlSite.addEventListener(sheetAutomation, BeforeDoubleClick, eventListener);
            controlSite.addEventListener(sheetAutomation, BeforRightClick, eventListener);
            controlSite.addEventListener(sheetAutomation, Calculate, eventListener);
            controlSite.addEventListener(sheetAutomation, Change, eventListener);
            controlSite.addEventListener(sheetAutomation, Deactivate, eventListener);
            controlSite.addEventListener(sheetAutomation, FollowHyperlink, eventListener);
            controlSite.addEventListener(sheetAutomation, SelectionChange, eventListener);

            shell.open();
            return shell;
        }

        // Opens the given workbook via Workbooks.Open and returns an automation
        // object for its ActiveSheet.
        public OleAutomation openFile(String fileName) {
            Variant workbooks = automation.getProperty(0x0000023c); // Application.Workbooks
            Variant[] arguments = new Variant[1];
            arguments[0] = new Variant(fileName);
            System.out.println("workbooks::\t" + workbooks);
            int[] rgdispid = workbooks.getAutomation().getIDsOfNames(new String[] { "Open" });
            int dispIdMember = rgdispid[0];
            Variant workbook = workbooks.getAutomation().invoke(dispIdMember, arguments);
            System.out.println("Opened the Work Book");
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            int id = workbook.getAutomation().getIDsOfNames(new String[] { "ActiveSheet" })[0];
            System.out.println(id);
            Variant sheet = workbook.getAutomation().getProperty(id);
            return sheet.getAutomation();
        }

        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = (new EventTry2()).open(display);
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) {
                    display.sleep();
                }
            }
            controlSite.dispose();
            display.dispose();
            System.out.println("-----------------THE END-----------------------------");
        }
    }

  • HDV capture card

    I'm looking for a capture card that supports HDV camcorders, for a G5.
    But it seems that no such card exists: I have found only HD or SD capture cards.
    The only HDV capture card that can natively edit HDV video in real time from HDV camcorders is the Canopus EDIUS NX (and the Canopus Velxus 300 in Japan); but the problem is that the Canopus HDV card is PC-only.
    I called Canopus and they told me that these cards will come to the Mac when Apple releases the new Intel-based Power Mac.
    So I read this on the Apple FCP 5 HD page:
    "Native HDV Support
    Final Cut Pro 5 supports native long-GOP MPEG-2 editing for working with camera-original HDV video. Unlike other solutions, Final Cut Pro 5 acquires HDV media via FireWire and keeps it in the original format, transferring it into the system without any generation loss. Output via FireWire back to an HDV camera or deck, or transfer your native HDV to DVD Studio Pro 4 for an end-to-end native HDV workflow."
    I know that it is not possible to capture video over FireWire from a camcorder without further compressing it, and I know that this is why pros use capture cards (which have dedicated processors for this).
    So I think there aren't any HDV capture cards for the Mac because the hardware companies won't make new cards for only a few months of sales; they're waiting for the new Intel-based Macs. If so, is this FireWire approach in Final Cut Pro 5 just a stopgap for the time being, or is it really as good as capture-card processing? And if it is, how is it possible to capture HDV video over FireWire without further compressing it?
    Do you know of an HDV capture card for the Mac, or is it really fine to use Final Cut Pro HD 5 over FireWire?
    Thank you.

    It all depends on what type of work you are doing...
    Capture cards are for use with other formats that cannot be captured via FireWire: BetaSP, Digibeta, HDCAM, D5. They are also required for outputting to those formats.
    Not many broadcast facilities will take a DVCAM master... some will, but most won't. And depending on your client, they may or may not take an HDV master. They may want an HDCAM or D5 master if they are producing HD content. For that you need a capture card and high-speed drive RAIDs.
    Why do other, pricier systems come with capture cards? Because they are sold that way. FCP, out of the box, is ready to edit DV, DVCPRO 50, DVCPRO HD and HDV... at FULL RESOLUTION, with no recompression. Anything higher-end requires a capture card.
    Shane
    "There's no need to fear, UNDERDOG is here!"
