HVX200 Ingest workflow?

Anyone have a good workflow for ingesting HVX200 footage? Once I import all my media from a given card or drive, I wrap the clips, then delete them from the Final Cut bins. I then find them via Bridge from Adobe CS2 and rename all my clips there. From there I import them back into Final Cut and save. I'm wondering if I can stay inside FCP for all of this.
I do all these steps because when you import or "wrap" your P2 clips, they all come into the bin with weird generic file names, and there are no options I am aware of to change them. Any help would be cool. Thanks!

Uh...if you don't like the file names, after you import them you can delete the CLIPS from the FCP Browser (this will not delete the media...unless you select MAKE OFFLINE). Then you can go to the Capture Scratch folder for that project and rename them...opening them in QuickTime to view them.
BUT...this is dangerous and I highly recommend that you avoid renaming the clips. Why? Well, what if you lose them? What if the drive they are stored on dies? (This has been known to happen.) When you go to re-import the clips from the .MXF files in the CONTENTS folder (which you should keep...this is, for all practical purposes, your source tape), you will be stuck, because the re-imports will have those numbers again and won't match up with the clips you renamed.
No. Just add a description to the file name in the Browser. So 0004T6 becomes 0004T6 - Man walks across street. And so on. Or you can just add your notes to the DESCRIPTION column of the Browser and sort your clips by that.
Why did you need to use CS2 to rename the clips?
Shane

Similar Messages

  • File & Transfer P2 Workflow

Do those of you who shoot and edit frequently with Panasonic's P2 find FCP's File & Transfer really useful? What's your ingest workflow from camera to Mac? Does it really save time when it comes to scene naming?

    You missed a vital part of the tutorial. Back up the footage from the P2 card to a hard drive first, then import. Don't import directly from the card...if your media drive fails, you're hosed.
    Yes, use the Duel Systems adapter and copy the footage off the card, then import. Then back up that footage and put the drive on the shelf.
    Shane
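    Shane's copy-then-import advice can be sketched in a few lines. This is only an illustration: the paths, the backup layout, and the card label are placeholders you would choose yourself, not anything FCP or the P2 format dictates.

    ```python
    import shutil
    from datetime import date

    def backup_p2_card(card_contents, backup_root, card_label):
        """Copy a P2 card's CONTENTS folder to a dated, labelled backup folder.
        'card_label' is whatever name you give the card; paths are examples."""
        dest = f"{backup_root}/{date.today():%Y-%m-%d}_{card_label}/CONTENTS"
        shutil.copytree(card_contents, dest)  # preserves the folder structure
        return dest

    # e.g. backup_p2_card("/Volumes/P2CARD/CONTENTS",
    #                     "/Volumes/Media/P2_Backups", "A001")
    ```

    Keeping one dated folder per card makes it easy to put the drive on the shelf and still know which "source tape" is which.
    
    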

  • FCP7 - media becomes disconnected on Xsan, unprovoked by user

    Hi folks,
    I ran across this issue a month or two ago with one of my student workers.  We were able to fix the problem on the fly and I just assumed it was user error that caused his files to go wonky, but now I can't see that as the answer.  I don't really know how to describe the problem without just saying what exactly happened:
    I exported a video yesterday morning.  The sequence was perfectly intact and there was no missing media anywhere in the project (which isn't big: 1 sequence and only about 15 GB of ProRes 422 movies in 18 clips).  After exporting I closed the project; throughout the rest of the day yesterday I worked on other projects, but never that one.
    Today, I opened the project and 6 of the 18 clips show up as offline.  When I click reconnect, FCP tells me it's looking in the root directory of the Xsan for the clips, where they have never been located.  I did a Log and Transfer from a Canon Vixia first thing when I started the project so the .mov's went straight to the Capture Scratch (which has never moved). 
    When I go to "Locate" the files, the clips in the Capture Scratch that I should be able to reconnect with are grayed out.  If I deselect "Matched Name and Reel Only" then I can select the file I want, but when I do I get the message: "File Attribute Mismatch - some attributes do not match the original, etc.."  If I say "Continue" then it reconnects and ends up being the wrong file (that threw me for a loop).
    So after this happened I went into my Finder into the Capture Scratch and went through the 18 clips.  It looks like what happened is some files were deleted, and then replaced with the same name but with a different file.  Specifically: I have clips 138-155.  Clip 146 is now the exact same as Clip 153, whereas yesterday morning they were completely different.
    Has this happened to anyone else?  It could very well be an issue with our Xsan too, but thought I'd ask around here since I've only noticed the problem in FCP.
    Running FCP 7.0.3, and OSX 10.6.8 on MacPro and Xsan.  Xsan is hooked up via fiber.

    We are having the exact same problem, but it's much more widespread.  For the record we are using the following:
    Mac OS X 10.6.8
    Xsan 2
    Final Cut Pro 7.0.3
    Sony NX5U AVCHD media logged and transferred to ProRes 422 (proxy)
    Sometimes our students can reconnect without issues, but often it's next to impossible to reconnect, so they have to batch capture their footage from their archived media.  For one or two clips, this isn't too bad, but sometimes it's almost an entire project.  I can confirm that this is the same issue because the clip paths in FCP now just list "Xsan:" (our Xsan volume name) or "Xsan:Clip ####.mov" (where there should be many other folders).  Permissions aren't an issue, because all the files show up in Finder and can play back in QT without issue.
    One thought I've had is about how FCP auto-names the ProRes clips as they are ingested.  All the clips are sequentially numbered as Clip #1, Clip #2, etc.  If a student ingests from more than one media archive (or SDHC card), they will have multiple Clip #1s, Clip #2s, etc.  In Capture Scratch, FCP will automatically increment the clip name until it finds the next available name.  Example:
    Card A
    Clip #1
    Clip #2
    Clip #3
    Card B
    Clip #1
    Clip #2
    If I ingest the following from Card A:
    Clip #1
    Clip #3
    And then try to ingest files from Card B, this is what I get:
    Clip #1 becomes Clip #2 (the next available name)
    Clip #2 becomes Clip #4
    You can see what a mess this can become.  The issue is further compounded when our students fail to uniquely name their archived cards.  In the logging information, all the reels are listed as "NO NAME", so it's impossible to figure out whether a clip is from one card or another.
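    The increment-until-free behavior described above is easy to simulate; this is only a sketch of the naming collision as Pete describes it, not FCP's actual code:

    ```python
    import re

    def next_available(name, existing):
        # Mimic the described FCP behavior: if a clip name is already taken
        # in Capture Scratch, bump the trailing number until one is free.
        base, n = re.match(r"(.*#)(\d+)$", name).groups()
        n = int(n)
        while f"{base}{n}" in existing:
            n += 1
        return f"{base}{n}"

    scratch = {"Clip #1", "Clip #3"}        # already ingested from Card A
    for clip in ("Clip #1", "Clip #2"):     # now ingesting from Card B
        new = next_available(clip, scratch)
        scratch.add(new)
        print(f"{clip} becomes {new}")      # prints "Clip #1 becomes Clip #2",
                                            # then "Clip #2 becomes Clip #4"
    ```

    With "NO NAME" reels on top of this, there is no way to tell afterwards which card a renamed clip came from, which is exactly the mess described.
    
    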
    Most recently, I've noticed the following behavior:  students reported that some clips suddenly started "flashing colors" and playing rapid random frames.  When restarting FCP, the clips in question (and several others) were offline.  This seems to be the first effect of this problem, but I can't trace it back to a specific cause.
    Lastly, today I discovered issues with students' clips that had been recaptured after going offline.  I saved the project with a new name in an attempt to distinguish the newly captured files from the old ones.  One would expect a new project folder within Capture Scratch, but instead, I got several new clips right in the main Capture Scratch folder, all named "[new project name] #".
    I'm thoroughly frustrated with all this, as are our students.  I think some of this just boils down to maintaining a very rigid ingest workflow, but the offline clips issue is very troubling.  Any other suggestions or similar experiences would be greatly appreciated!
    Pete

  • Help Me Configure New Hard Drive Setup

    I'm currently upgrading my 8-Core Mac to Snow Leopard, FCP Studio 3, and CS4. I've also archived all of my editorial projects to external hard drives (leaving me with blank, scratch drives) and I'm now, exclusively using an HVX200/P2 workflow. I'm hoping some forum members can help me configure the best possible hard drive configuration for media back-up and realtime edit.
    Here's what I'm working with:
    • 1 internal 250GB hard drive (OS drive)
    • 3 internal 500GB hard drives
    • 1 CalDigit VR 1.3TB external hard drive (eSATA)
    • 1 Glyph 1TB external hard drive (eSATA)
    • 1 G-RAID 1TB external hard drive (FW800)
    I also have several external FW800 drives that could be incorporated if need be, but I'm leaving those out for now since they would be good for archiving. I'm anxious to hear what configurations people suggest.

    Int Drive 1- OSX and apps. FCP project backups
    Int Drive 2 - project & media
    Int Drive 3 - project & media
    Int Drive 4 - temp scratch/render files
    1 TB G-raid 800 - clone of system drive + disk images of all professional software (perhaps 2 partitions to keep 2 separate back up versions)
    1.3 TB Caldigit -back up of Int Drives 2 & 3
    1 TB Glyph - used when needed for temp sneakernet, temp project backup, etc.
    x

  • XDCam HD/Prores/DigiBeta and my head hurts!

    I have been doing research and have drawn some of my own conclusions but some input from those more experienced than myself would be gratefully received.
    I have an interview shot on XDCAM disc at 1080/25p winging its way to me. We usually shoot in DigiBeta, and HDV is the only HD format we have had the pleasure?!? of using thus far.
    I will be hiring an XDCAM reader (probably a D1) so I can transfer the files.
    Q1/ Do I/Can I transcode straight to Prores using Log and Transfer or do I have to do this using Compressor afterwards?
    Q2/ Is it worth transcoding to Prores as this is a stand alone interview and shouldn't need to be mixed with any other footage?
    Q3/ We are going to have the situation more and more where crews say "sorry no Digibeta" or "that'll cost you more". I can see my HD footage coming to be in all flavors (but hopefully 1080/25p).
    Due to this I see Prores as our most likely way forward as quite often the different footage will be put into one program.
    I was looking at getting an AJA Io and then hiring decks depending on what the footage was shot on, and going directly to ProRes that way. But I understand that you might be stuck with a 29.97 frame rate when playing into the AJA through HDMI?
    Opinions on best workflow going forwards would be gratefully received, my head hurts.
    Thanks
    John E

    Hey John
    XDCAM HD is a fully supported FCP-native camera codec, no need to fear it at all ... if you like, you can edit it directly in a ProRes timeline and get full-quality realtime previews, or you can edit it native and set rendering to ProRes for faster results with better potential quality if doing a lot of effects work, or you can just edit and render native ... it's all very easy and has been since around FCP 5.1.2. When you've locked the edit you can export as Uncompressed, ProRes or whatever other delivery codec you prefer (including XDCAM HD, although that would be an odd choice).
    I will be hiring a XDCam reader (probably D1) so I can transfer the files.
    Not a good idea, as the D1 is SD only, I think ... get a U1 if you can, or rent a player "deck" like the PDW-F30 or better.
    Q1/ Do I/Can I transcode straight to Prores using Log and Transfer or do I have to do this using Compressor afterwards?
    No need whatsoever to do it at all (see above). But if you did want to do it anyway, you would have to do it post-import ... there is no direct-to-ProRes transcode on ingest with XDCAM HD (XDCAM EX yes, XDCAM HD no).
    Q2/ Is it worth transcoding to Prores as this is a stand alone interview and shouldn't need to be mixed with any other footage?
    Not worth it at all unless your goal is to hoover up disc space and man hours unnecessarily.
    Q3/ We are going to have the situation more and more where crews say "sorry no Digibeta" or "that'll cost you more". I can see my HD footage coming to be in all flavors (but hopefully 1080/25p).
    Due to this I see Prores as our most likely way forward ...
    ProRes as an intermediate is fine. As for hiring decks, if you are imagining your future needs then you should really be assuming tapeless ingest/workflows rather than anything tape-based. That said, an AJA Io HD (not an AJA Io) would be a great device to have on hand, as would the Matrox MXO2 and other I/O devices ... and no, I think you have misinterpreted that frame-rate restriction; it's not the case at all.
    Hope it helps
    Andy

  • Survey: Do you rename files before bringing them into Aperture?

    I'm curious what percentage of us rename the original files (so we don't have a bunch of images named something like _ACH0001) before final import into Aperture. This includes either editing them and renaming them outside of Aperture and then bringing the edited images in, or bringing them into Aperture, editing them, renaming the kept versions, exporting the masters with the new version names, and finally re-importing them back into Aperture.
    So the question is:
    Do you rename the original files before bringing them into Aperture?
    My answer is a definite...
    Yes

    Since this thread has generated some interesting discussion, let me add some further clarification on our approach:
    1) We use a standard file naming convention NNNddddddssss, where "NNN" = photographer's initials (we need to track multiple photographers); "dddddd" = a unique alpha-numeric date code that will handle any date from 1/1/1850 to the foreseeable future; and "ssss" is a unique image sequence number.
    2) This file naming convention was established long before Aperture was released, supporting a multi-photographer, multi-camera, Photoshop-based workflow. It was also designed to handle scanned images in addition to camera generated images, as we will be scanning and cataloging many historical family images (some copies of original photos over 100 years old) over the next few years.
    3) Since our large base of image files would be converted to Aperture with this naming scheme in place, it made sense to keep this file naming scheme for new files.
    Granted, we could just use this scheme for assigning Aperture version names, without worrying about the underlying file name. However, it is more consistent, for our uses, to use the file naming process and ingestion workflow we used pre-Aperture. And the approach has the added advantage that it doesn't matter whether we use image name or version name in our Aperture views. (It seems the default sort order for new projects/albums/etc. is image name, a bit annoying actually given the focus of Aperture on versions.)
    So whether one uses just version names, or version names and renamed image names, the really important thing is to design and use an open-ended naming scheme that can handle multiple uses, and multiple image sources, on the front end of any digital asset management strategy -- whether that asset management is done in Aperture, in Bridge, or in a cataloging application like iView Media Pro.
    Mike
    PowerMac G5 Dual 2.0GHz, Radeon X800XT Graphics Card; MacBookPro 2.0Ghz   Mac OS X (10.4.6)   iMac800 and Powerbook G4 also in household
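    Mike's NNNddddddssss convention can be sketched as follows. Note the date-code encoding here is hypothetical (his actual alpha-numeric scheme isn't spelled out); it is chosen only to show that 6 alphanumeric characters can cover dates from 1850 onward, as the convention requires.

    ```python
    import string
    from datetime import date

    ALNUM = string.digits + string.ascii_uppercase  # base-36 digit set

    def date_code(d):
        # Hypothetical 6-char "dddddd": two base-36 chars for years since
        # 1850 (covers 1850-3145), then MMDD. Not Mike's actual encoding.
        y = d.year - 1850
        return ALNUM[y // 36] + ALNUM[y % 36] + f"{d.month:02d}{d.day:02d}"

    def image_name(initials, d, seq):
        # NNN (photographer) + dddddd (date code) + ssss (sequence number)
        return f"{initials}{date_code(d)}{seq:04d}"

    print(image_name("MJS", date(2006, 5, 14), 27))  # -> MJS4C05140027
    ```

    The point is the open-endedness: any photographer, any source (camera or scanner), any date back to 1850, with no name collisions.
    
    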

  • Importing C100 AVCHD footage (timecode, media start, clip names)

    I primarily shoot and edit AVCHD footage from a Canon C100. My typical ingest workflow is to use the Data Import Utility by Pixela. This creates individual .MTS files on my drive with unique file names (GREAT)...BUT when I bring the files into PP CS6, it sets the media start time for each clip to 00:00:00:00 (essentially wiping out the timecode reference). Alternatively, if I simply copy the entire contents of the card to my RAID and then use the Media Browser in PP to import the footage, the timecode—and media start time—is maintained from the shoot (GREAT)...BUT the individual clip names it assigns always start with 00000.mts, 00001.mts and so on. This creates a problem when relinking footage...because now I've got multiple projects with the same file names. Any ingest workflow recommendations?
    Appreciate your perspective(s).
    ~ Brian

    Use the original media, which gives you the timecode, and upgrade to CC, which gives you the new Link and Locate feature, making relinking a breeze.
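    If upgrading isn't an option, one workaround sketch for the duplicate-name side of the problem is to copy each card's clips with a per-card prefix before importing. Two caveats: copy rather than rename in place (renaming files inside the card structure can break spanned clips), and like the Pixela route this leaves you with loose .MTS files, so the Media Browser's timecode handling no longer applies. The card_id label is something you invent yourself (e.g. shoot date plus card letter).

    ```python
    import os
    import shutil

    def copy_with_card_prefix(card_dir, dest_dir, card_id):
        """Copy each .MTS clip out of a card folder, prefixing a per-card ID
        so file names stay unique across cards and projects (a sketch)."""
        os.makedirs(dest_dir, exist_ok=True)
        for name in sorted(os.listdir(card_dir)):
            if name.lower().endswith(".mts"):
                shutil.copy2(os.path.join(card_dir, name),
                             os.path.join(dest_dir, f"{card_id}_{name}"))

    # e.g. copy_with_card_prefix("/Volumes/CANON/PRIVATE/AVCHD/BDMV/STREAM",
    #                            "/Volumes/RAID/Footage/130614_A", "130614A")
    ```
    
    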

  • Audio post for Film-Out

    Hi Guys,
    I have been researching the workflow for creating a film-out from HVX200 and have pretty much sorted the visual side of it all.
    HVX200 Filmout Workflow
    But I am now looking at the audio side of it all. I am totally new to this side of it and was wondering if someone could tell me the exact workflow to follow to accurately post the audio for a 23.98 project. Would it be better to use an audio post house?
    Your help would be very much appreciated!
    James
    Dual 2.3Ghz PowerMac G5, 1.5 Ghz Powerbook G4   Mac OS X (10.4.6)  

    spoke to my audio guys

  • Best workflow for FCP 7 HVX200 or XHA1?

    I know these are older technologies, but if you could get a great deal on a Canon XHA1 or an HVX200 with a 16GB card for roughly the same price (1500 for the Canon, 1600 for the Panasonic), which would you pull the trigger on?
    I need a camera to take on the road to do some simple interviews for a documentary. The more complicated shoots will be done by my DP and his much better equipment.
    But when I figure rental fees, convenience, and the likelihood I may need it to do some more simple projects by my lonesome in the future it could be a good investment if the price is right.
    Both cameras are being sold from (seemingly) reputable industry professionals who are upgrading their own inventory and have given histories of the respective cameras uses, environments, hours, and any repairs.
    My top priority is picture quality and workflow with FCP7. I know Canon has the better chips, but it's tape-based. How are P2 cards with FCP7? What's the best bang for the buck picture-wise? Of course other features will be taken into consideration. It's been a while since I shot on my own, so I am a little out of touch with the technology. Thanks for any insights!
    Thanks,
    BJ
    www.bjbarretta.com

    Both are very good cameras. The Panasonic uses P2 cards, and its workflow (DVCPRO) and the Canon's HDV are both supported in FCP7. The P2 is ingested via Log and Transfer using the built-in P2 plugin, which can easily be set to ProRes 422. The Canon uses basic FireWire and Log and Capture. They both have a 4-pin FireWire connection, and of the two, the Canon is more renowned for being fiddly and hard to connect. That said, there are lots of XHA1s out there (I have an XHA1S, no issues). Picture quality is fantastic on both, and you pays your money ... your question was workflows: ingest the captured product, either DVCPRO or HDV, to ProRes and away you go.
    Jim

  • Best workflow for creating logsheets before ingesting.

    I'm looking for the best workflow for creating detailed log sheets of my avchd footage before ingesting/transferring it to FCP. I have many hours of footage shot with my Sony NX5U and only want to ingest the necessary clips. I'm using FCP 6.5 on a MacPro tower. Thanks.

    >I'm using FCP 6.5
    NO you are not.  You might be using FCP 6.0.5.
    Well, the only way you can ingest only sections of tapeless media is by using Log and Transfer.  And you cannot log clips first and transfer later.  You cannot do the same thing you did with tape and make a Batch Capture list.  So you would look at your footage in the FCP L&T interface, and you can send one clip to be transcoded while you move on to the next one to watch.  That can happen at the same time.
    But there's another issue here.  The Sony NX5U uses an AVCHD format that is slightly different than normal...not that there is a normal.  I discovered that importing clips up to 10 min in length works fine, but anything over 10 min took 4-5 times longer to import, meaning a 30-min clip took just over two hours.  It was something that ClipWrap 2 addressed before FCP did...and FCP only addressed the issue with the 7.0.3 update.  FCP 6.0.5...or even the last update for that version, 6.0.6...doesn't address this.  There might be issues importing this footage into FCP 6.
    And the other options...Avid Media Composer and Adobe Premiere...don't have a "log and transfer" type interface.  They bring in the full clips only.  With Premiere, you are stuck with them being full length.  But with Avid, you access the footage via AMA (Avid Media Access), then make your selects on a sequence, and then consolidate/transcode only the footage on that sequence.
    And no, there's no way to make a logging sheet for these apps either.
    The best you can do is watch the footage, note the timecode, and manually enter those numbers when you go to bring in the footage.

  • Ingesting avchd from sony hdr-sr11-Editing 1080i 60 need a workflow

    Hello all,
    So I have successfully ingested some test shots into fcp for a cousins wedding that i am shooting in a couple of weeks. The footage looks great once transcoded to apple pro res 422. I do have a few questions.
    1) In the Log and Transfer window, do I need to check Remove Pulldown? I thought this was for progressive mode.
    2) Should I edit in a native 1440x1080i 29.97 sequence, then render and export that sequence?
    3) Do I need to worry about deinterlacing issues?
    I have monitored it out through my Canopus and it looks terrible. A lot of jagged edges. I have tried the deinterlace filter. No go. It looks fine on my computer monitor, though.
    4) I will be mastering to SD/DVD.
    5) Once complete, do I render out a reference QuickTime (Apple ProRes), import that into Compressor,
    then import that into DVD Studio Pro? What is the best way to downconvert this final locked sequence?
    6) What would be a good workflow? I do plan on doing some After Effects work for this.
    I have searched all over for a workflow and everyone seems to have their own opinion.
    Anyone?
    thanks in advance,
    darryl

    No. It doesn't work. The Log & Transfer window still shows "downmixing to stereo"!
    I have also read in the Apple HD User's Manual that ALL AVCHD 5.1 surround sound will be DOWNMIXED to stereo. Why they would do it, only they know! G H U !
    Most Gurus very silent on this issue ... ignorance is bliss ... As the Silfen said (a long time back) Ozzie's lost to the pathways - don't let the glitter of mobility pale your thoughts of the solidity of the apps!
    Satya - A truth

  • Ingesting HDV and AVCHD in same project - workflow?

    What's the workflow for ingesting HDV and AVCHD into the same project? I'm guessing converting both to Apple Intermediate Codec. I think this is straightforward for AVCHD, as it's in the prefs pane of Log and Transfer, but how do I do this for HDV?
    Also, should I set the AVCHD to record at 1080i, or does it not make much difference once captured / ProRes'd up?

    Bring both in as ProRes. Much more processor efficient.

  • Ingesting AVCHD workflow

    I'm coming from FCP, where I ingested AVCHD footage straight from my camera and only brought in the footage I needed using In and Out points during the ingest process. With CS6, I see lots of people copying their entire cards to their hard drives and ingesting into CS6 from there. This seems like it could quickly eat up a lot of hard drive space, copying large clips when you only need small segments. Is there a reason why it seems to be generally recommended to do it this way rather than ingesting just the parts you need straight from the camera?
    Thanks.

    Ingesting from a camera is really slow because of the slow speed of the cards. Second, you need the complete directory structure for reliable operation. Third, you need a reliable backup of your source material. Last, HDDs are cheap, so what's a couple hundred GB of disk space?
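    The copy-the-whole-card advice above can be sketched with a verification pass, since a backup you haven't checked isn't a reliable backup; the paths are placeholders:

    ```python
    import filecmp
    import os
    import shutil

    def backup_card(card_root, dest):
        """Copy an entire card, preserving the full directory structure,
        then verify every file byte-for-byte before reusing the card."""
        shutil.copytree(card_root, dest)
        for dirpath, _dirs, files in os.walk(card_root):
            rel = os.path.relpath(dirpath, card_root)
            for name in files:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dest, rel, name)
                # shallow=False forces an actual content comparison
                if not filecmp.cmp(src, dst, shallow=False):
                    raise IOError(f"backup mismatch: {src}")

    # e.g. backup_card("/Volumes/AVCHD_CARD", "/Volumes/RAID/Cards/130614_A")
    ```

    Only after the verify pass succeeds (and ideally a second copy exists) is it safe to format the card.
    
    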

  • Need Help Determining Least Common Denominator for Frame Rate, Codec, and Workflow

    I need help determining the best timeline setting and Compressor workflow to integrate footage with varying frame rates and codecs that I'm currently upres'ing for a multi-camera concert performance destined for HD broadcast output. I'm assuming the network needs 29.97.
    Thus far, I've been working with Apple ProRes Proxy files to create lo-res edits. Now, I've started the task of offlining and ingesting new, HD clips from the proxy references. The content originates from either Panasonic HVX200 or Panasonic GH1 cameras.
    Looking at the material, it appears the cameras were not shooting with the same settings and, somehow, a PAL GH1 got into the mix. Some of the performances have the PAL GH1 and others do not.
    Here's the breakdown of the varying sources. I got this info from the Log & Transfer columns.
    HVX Cameras
    Format: 1080p24
    Source Format: DVCPRO HD 1080i60
    Shooting Rate: 24
    Vid Rate: 29.97
    TC Format: Non-Drop
    GH1 NTSC
    Format: 1080i60
    Source Format: AVCHD 1080i60
    Shooting Rate: 30
    Video Rate: 29.97
    TC Format: Drop
    GH1 PAL
    Format: 1080i50
    Source Format: AVCHD 1080i50
    Shooting Rate: 25
    Video Rate: 25
    TC Format: Non-Drop
    ANOTHER GH1 NTSC
    Format: 1080p24
    Source Format: AVCHD 1080p24
    Shooting Rate: 24
    Video Rate: 23.98
    TC Format: Non-Drop

    Call the TV station/network and get their spec sheet first. You need to know more than frame rate.
    Once you have that, you can work backwards to arrive at a workflow.
    As a general principle, you'll get a more seamless translation of formats when you add frames rather than remove them (e.g. 24p to 29.97 rather than 29.97 to 24p).
    At least all the material starts out in the 1080 world.
    Do all your conversions before you start editing. (I'd use ProRes or ProRes LT for the editing codec).
    Budget a bunch of time to sync the material or figure out a quick cutting style that minimizes sync drift.
    What a nightmare.
    x

  • Remote Voice Over Workflows using FCP - thoughts?

    I am going to try and keep this as simple as possible.
    I am Producer and host for an Australian Drag Racing Show that we air nationally...
    We record these shows in Full 1080 HD using modern XFILE systems (EVS) using the Apple ProRes 422 (HQ) codec.
    Our ENG/EFP team use XDCAM (full 1080i HD).
    We edit these shows using FCP 7 amongst a few suites.
    I also do some offline content on my 17inch i7 MBP using FCP too.
    My question is thus:
    The Post Production Team are based in Sydney on the east coast, whilst I am based in Western Australia - this is like being in LA in comparison to New York!
    I want to reduce my travel commitment by air, as in the past I have travelled weekly to commentate these shows (40+ shows a year!)
    I have - as does the production house - a reasonably good ADSL Internet connection.
    I do a lot of Producing and program checks using iChat Theatre within FCP 7. It is quite good.
    What I want to get to, though, is a way to commentate these shows remotely from home more often, without needing to physically be across the country on every occasion - for my sanity's sake.
    My co-commentator also lives near me, however our editors are all based across country, together in Sydney.
    I see the following options as being the way to do this:
    OPTION 1.
    When we ingest (log and transfer) the vision, I will always be in Sydney at that time. There are usually several shows generated from this vision.
    I copy the Quicktime MOV files that are for use in the shows to my external (eSATA) HD.
    Editors in Sydney cut the racing components of our show. They then email me the FCP project file.
    (I have FCP, the required Audio and Mics)
    Then, once I receive it, I open this FCP project file, re-link and re-render as required, and we commentate the shows using the VO utility within Final Cut.
    I then do an Audio export only and send the editors the updated FCP Project file, for them to receive and re-link the vision as required, sound mix and print.
    OPTION 2.
    The Editors cut the shows, then export either within FCP or Compressor a reduced quality file size to send via FTP or sendspace.
    I then import this vision locally into FCP, render the vision and commentate it accordingly and just send back the audio.
    SO:
    What would you all suggest as the easiest and best way to do this? We do have tight work timeframes in terms of getting content to air.
    I think Option 1 is by far the best. What I am not sure about is, if I did Option 2, what settings to use for export - and I want to avoid bogging down the editors with cumbersome vision exports anyway.
    Thoughts?
    Dean...

    I would probably go with Option 2, myself - I usually export my cuts and run the movie through Compressor with the H.264 LAN or H.264 800KBPS presets. These are found under Apple/Other Workflows/Web/Streaming/Quicktime 7/
    These take up roughly 10MB per minute and 5MB per minute respectively.
    While I have used Option 1 (duplicating media and transferring a project file), this can get problematic when the editors have generated media - effects, color correction, etc...
    my 2¢
    Patrick
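    Patrick's per-minute rates make upload sizes easy to estimate; here is a quick back-of-envelope calculation for a hypothetical 44-minute show (the show length is an assumption, the MB-per-minute figures are the ones quoted above):

    ```python
    # Rough FTP-transfer size estimates from the quoted preset rates
    rates = {"H.264 LAN": 10, "H.264 800KBPS": 5}   # MB per minute (as quoted)
    show_minutes = 44                               # hypothetical show length

    for preset, mb_per_min in rates.items():
        print(f"{preset}: ~{mb_per_min * show_minutes} MB")
    # H.264 LAN: ~440 MB; H.264 800KBPS: ~220 MB
    ```

    Either size is practical to move over a reasonable ADSL connection, which is what makes Option 2 workable under tight turnaround times.
    
    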
