Best workflow for converting NTSC tv show to PAL?

The company I work for edits a TV show in DVCPRO HD 1080i60. We distribute it to a station in Europe, and for about a year we've sent them the programs on DVCAM tape, 16:9 letterbox, NTSC, rendered in a 720x480 sequence at 29.97 fps. I figured they took care of the process of switching it to PAL, since they never asked us to do anything differently.
Well now they have. Now any tape we send them needs to be in 16:9 anamorphic and PAL.
Regarding the first issue, changing to 16:9 anamorphic seems as simple as checking the Anamorphic box in the sequence settings. But the PAL issue has me stumped. An engineer friend told me that all I should have to do is print the program to a PAL DVCAM deck, and that should do the trick. So my thinking is that I export the program as a DVCPRO HD QuickTime file, put that into a PAL 16:9 anamorphic sequence, and print to tape using a PAL deck.
Anything I'm missing? Is it as simple as that or more complicated? Any thoughts are welcome.
jesse.

Sorry, fellow Missoulian, it is NOT that simple. FCP doesn't convert the frame rates properly...the proper pulldown removal and frame rate conversion don't happen well in FCP. If you have Avid 4.0...it would. That version does this VERY well. BUT, we are talking FCP.
Best option: output NTSC and send it to a dub house to have the conversion done on proper machinery. Bill the client for this dub...unless it is part of the deliverable and therefore part of the money they pay you. If they changed the deliverable to require this conversion, then they should be paying more money so that you can have this conversion done right.
Second best option: export a self-contained QT file and use Compressor to do the conversion, with frame settings set to FULL. The drawback? This takes a LOT of time. And you will have to do tests to see if it looks right, and adjust settings to match.
ME? I send it out of house to be dubbed. But you are in Missoula where that doesn't exist, and finding someone with a KONA/DECKLINK/MATROX card is a challenge.
Shane
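
For anyone who ends up having to do this conversion in software rather than at a dub house, here is a rough sketch of the same job scripted around ffmpeg from Python. Treat it as an assumption-heavy illustration, not the FCP/Compressor workflow above: the filenames are made up, ProRes is just one reasonable intermediate codec, and a hardware standards converter will still beat it on quality.

    # Rough NTSC-to-PAL standards conversion sketch using ffmpeg (not FCP or Compressor).
    # Assumes ffmpeg is installed and on the PATH; filenames are placeholders.
    import subprocess

    SRC = "show_1080i60.mov"         # hypothetical self-contained QT export of the program
    DST = "show_pal_anamorphic.mov"  # target: 720x576, 25 fps, 16:9 anamorphic

    cmd = [
        "ffmpeg", "-i", SRC,
        # Deinterlace, do a motion-compensated 29.97 -> 25 fps conversion,
        # then scale to PAL SD and flag the frame as 16:9 anamorphic.
        # minterpolate is very slow -- the same "takes a LOT of time" caveat as Compressor.
        "-vf", "yadif,minterpolate=fps=25,scale=720:576:flags=lanczos,setdar=16/9",
        "-r", "25",
        "-c:v", "prores", "-c:a", "pcm_s16le",
        DST,
    ]
    subprocess.run(cmd, check=True)

Print the resulting PAL QuickTime back to tape from a PAL sequence, and run a test pass on short material first, since the conversion settings usually need a round or two of adjustment.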

Similar Messages

  • Best workflow for converting HD footage for broadcast?

    I've been banging around the forums for a while tonight looking for someone who's been down this road before, but to no avail. Here goes...
    We shoot a weekly "Top 20" show using a Sony XDCAM EX-1 @ 1080p. It's a Top 20 video countdown show, which means that along with my footage (a DJ in a studio as your Top 20 host), I incorporate graphics and music videos ripped from DVDs. The final product from all of this winds up on a local cable channel, which broadcasts SD.
    I use the Sony XDCAM Xfer software to convert the data from the SxS card to QT files. (So far, so good.) I open up FCP and place these files into my project. Once I drag one of these clips into my timeline, FCP asks if I want to change the sequence settings to match the clip I'm putting onto the timeline. I answer "No," because I've preset the FCP sequence settings to 720x480, since the cable channel broadcasts in SD. (Could this be where I'm screwing up?) I bring everything into the timeline (including the ripped videos) and get everything edited nice and tight. But wait...
    I've been staring at my Mac monitor this whole time. When I play the project back and look at it using the broadcast monitor, well...it looks sorta sick. Certainly not what I would expect from HD-to-SD footage. Everything looks pixelated and sorta blurry. If it looks this way in the studio, you can imagine what it looks like when it airs on TV. It's not pleasing to the eye.
    I would ask about what the best way is to deliver a show like this to a cable channel, but since the stuff on my broadcast monitor looks so crappy, the problem's gotta be home-grown. What am I leaving out?

    If you know you are going SD, why film in HD at all?
    It's not like you are going to be re-airing your shows down the road in HD.
    Most likely, the videos you are ripping from DVD are not HD either...
    Depending on the deck you are exporting your show to, some of them have a down-convert feature; mine (a Sony HVR-M25U) does.
    I have a weekly show I do, but I do it all in SD until the channels it broadcasts on will take HD footage.
    Though if I am doing a commercial, I usually shoot it HD and leave it HD.
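
    (Side note: the soft, pixelated look is very often just the quality of the downscale. Doing the HD-to-SD downconvert outside the SD timeline with a decent scaler, then cutting or mastering from that, tends to hold up much better. A hedged ffmpeg sketch of that idea follows; the filenames, the DV output codec and the anamorphic flag are assumptions, and the deck's hardware downconvert mentioned above is another perfectly good route.)

        # Hedged sketch: downconvert a 1080p XDCAM EX clip to clean anamorphic NTSC SD
        # before the SD timeline (or for the final SD master). Filenames are placeholders.
        import subprocess

        SRC = "countdown_master_1080p.mov"   # hypothetical HD clip or export
        DST = "countdown_master_sd.mov"

        subprocess.run([
            "ffmpeg", "-i", SRC,
            # Lanczos scaling looks far better than a quick bilinear resize,
            # and setdar keeps the frame flagged as 16:9 anamorphic.
            "-vf", "scale=720:480:flags=lanczos,setdar=16/9",
            "-r", "30000/1001",               # keep NTSC 29.97 timing
            "-pix_fmt", "yuv411p",            # NTSC DV chroma layout
            "-c:v", "dvvideo",
            "-c:a", "pcm_s16le", "-ar", "48000",
            DST,
        ], check=True)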

  • Workflow for converting NTSC to PAL with Premiere/AME

    Hi everyone,
    After searching the forums, I see that to convert NTSC to PAL for standard-def DVD, most recommend using Canopus ProCoder software for the MPEG2 encoding, or using a hardware-based solution. I don't have the money for those options, so I was hoping you could clear something up for me. Which of these options is better?
    Option #1: import the NTSC footage into Premiere and edit on an NTSC timeline with settings matching the original footage. Then use Adobe Media Encoder to encode a PAL-format MPEG2 file for the standard-def DVD.
    Option #2: import the NTSC footage into Premiere and edit on a timeline set up with PAL resolution and frame-rate settings, and simply scale up the NTSC footage to match the larger PAL resolution. Then use Adobe Media Encoder to encode a PAL-format MPEG2 file for the standard-def DVD.
    Thanks for the help!!
    Mike
    Intel i7-930 2.8GHz
    12 GB RAM
    1 GB VRAM
    Adobe CS 5

    PAL to NTSC is not quite the same as NTSC to PAL.
    PAL has more scan lines, and therefore PAL-to-NTSC gives a better end result than NTSC-to-PAL.
    Usually NTSC converted to PAL does not look very good on a TV.
    That's why I recommend leaving it in NTSC; most PAL players can be switched to NTSC.
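
    (If you do end up having to deliver PAL anyway and can't spring for ProCoder or hardware, one software route is to let ffmpeg build the PAL DVD-spec MPEG-2 directly. A hedged sketch with made-up filenames; the quality caveat above still applies, since the simple 29.97-to-25 frame-rate change drops frames and judders on motion.)

        # Hedged sketch: software NTSC -> PAL DVD-spec MPEG-2 with ffmpeg.
        # "-target pal-dvd" sets the DVD-legal codec, size, frame rate and bitrates.
        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "edit_ntsc.mov",      # hypothetical NTSC timeline export
            "-vf", "scale=720:576:flags=lanczos",
            "-target", "pal-dvd",
            "-aspect", "16:9",                    # use 4:3 if the source isn't widescreen
            "pal_dvd.mpg",
        ], check=True)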

  • What is the best workflow for interpreting NTSC footage into a PAL project?

    Premiere Pro CC, cutting a film for HD broadcast TV. The majority of the footage was shot PAL at 1080p, so I made a 1920 x 1080 25p project. Is it okay to make a progressive project for broadcast, or should I switch to interlaced?
    I am supposed to deliver an NTSC tape to the network.
    Some footage is 29.97 and 23.97. Is it okay to interpret the footage in the bin as 25p PAL and then cut it into my sequence? It seems to play properly; I'm just concerned about problems in the final output. I've read that people duplicate the clip, interpret the duplicate as 25 fps PAL, and leave the original NTSC clip alone. This sounds like a good idea. Working on a MacBook Pro running Mountain Lion.
    When I output the final QT movie, what are the best settings and codec to use for the final hi-res output, which I will bring on a drive to a post house for final CC and layback to tape?
    Many thanks in advance.
    Patrice Shannon

    Convert NTSC <--> PAL http://forums.adobe.com/thread/1209530 may help
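
    (For what it's worth, interpreting a 23.976 clip as 25p amounts to the film-style speedup of roughly 4%, and the same retime can be done outside Premiere; a hedged sketch with made-up clip names is below. 29.97 material is a different story -- retiming it to 25 is a much bigger speed change, so that is normally handled as a frame-rate conversion rather than an interpret.)

        # Hedged sketch: conform a 23.976 fps clip to 25 fps by speeding it up ~4.3%
        # (what "interpret as 25p" does inside the NLE), keeping the audio in sync.
        import subprocess

        SRC = "interview_2398.mov"   # hypothetical 23.976p source
        DST = "interview_25p.mov"

        subprocess.run([
            "ffmpeg", "-i", SRC,
            # Video: rescale timestamps by 23.976/25 (~0.95904).
            "-vf", "setpts=PTS*0.95904",
            # Audio: matching tempo change (25/23.976 ~= 1.042708). atempo keeps the
            # pitch; a straight resample would give the classic PAL pitch shift instead.
            "-af", "atempo=1.042708",
            "-r", "25",
            "-c:v", "prores", "-c:a", "pcm_s16le",
            DST,
        ], check=True)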

  • Workflow for dual NTSC and PAL DVD project

    Hello,
    I am really in need of professional advice here, and any tips would be GREATLY appreciated. I am not an expert in Final Cut Studio, but I am OK to get things done on the project I am working on. Some things, despite cracking through multiple manuals, I can't properly figure out... THANKS AGAIN for any help!
    I am working on a DVD project which I want to release in two versions: an NTSC version for the US and a PAL version for Poland, with the same content. The content was created from DV-NTSC QuickTime files, made on import from an American mini-DV Sony camcorder into Final Cut Pro. All the titles were created in Photoshop at 720 x 540 and converted to 720 x 480 for the sequences.
    I made one NTSC DVD using Studio Pro, and it seems to work okay, with some minor tweaks. But I really need to know the best way to make a PAL DVD with minimal quality loss. I need to set up this project correctly, because I have a **** of a lot of work to do, and I don't want to screw it up with an incorrect setup.
    So many questions... Should I use separate Final Cut sequence files, with different settings, for each disc? Should I finish my work in sequences with NTSC settings in Final Cut, and then use Compressor to export to PAL from within Final Cut? Really, what is the best workflow for something like this? I would dearly appreciate any advice.
    Many thanks!
    Rad

    First off - you can do this. Finish your work in NTSC. Then go buy the Graeme Nattress plug-in for FCP. http://www.nattress.com/standardsConversion.htm
    Download and install the app. Then follow the instructions on his site. It's a short tutorial movie and you're done.
    Heads up - you'll need to first export NTSC QuickTime files of each of your movies.
    Hope this helps.
    Brian
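
    (Whichever route you take, it's worth sanity-checking both masters before authoring. A small, hedged ffprobe sketch with placeholder filenames:)

        # Hedged sketch: confirm the NTSC and PAL masters really are 720x480/29.97
        # and 720x576/25 before they go into the DVD authoring app.
        import json
        import subprocess

        def video_specs(path):
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-select_streams", "v:0",
                 "-show_entries", "stream=width,height,r_frame_rate,display_aspect_ratio",
                 "-of", "json", path],
                capture_output=True, text=True, check=True,
            ).stdout
            return json.loads(out)["streams"][0]

        for name in ("master_ntsc.mov", "master_pal.mov"):   # hypothetical filenames
            s = video_specs(name)
            print(name, f'{s["width"]}x{s["height"]}', s["r_frame_rate"],
                  s.get("display_aspect_ratio", "n/a"))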

  • Best workflow for burning Blu-ray and DVD

    Hi,
    What's the best workflow, for efficiency's sake, for burning both a Blu-ray and a DVD of the same project?
    Essentially I'd like to export and create a menu once, then be able to downrez to DVD. However, I'm not sure that is possible since Blu-ray and DVD use different codecs.
    Thank you

    The only way that you can get acceptable re-scaling of menus is to go from BR to DVD, and do the editing in Photoshop.  Simply re-scaling the menu here will not give acceptable results.
    I usually copy the various button layers from the BR menu to a new DVD menu file created using the provided Photoshop preset.  You will still have to do  some manual re-scaling and alignment.  The thing that has given me the most trouble over the years is round button highlights - a segment is often chopped off.  Font sizes will probably need adjusting too.
    Unless you are adept with handling layers and groups within Photoshop, it may be easier to start again.
    Any attempt to convert within Encore is unlikely to produce acceptable menus in my experience.

  • What is the best workflow for Final Cut Pro X across multiple computers?

    I travel on a team that is in a different location every week, so all of our video editing has to be fairly mobile. We have three MacBook Pros and would like to use Final Cut Pro X on all of them. What would be the best workflow for us all to work on the same project and sync it across the computers? (e.g., one of us editing the footage, one editing the sound, one working on visual effects?)

    My first question would be how well 5.1 would work with SL. People have reported problems.
    As far as cameras are concerned, I would guess most any tape-based camera would work and so much depends on the kind of projects you work on that I wouldn't want to start guessing at your needs.
    If you want broad access to tapeless and advanced formats try to locate and purchase an upgrade disk to 6 or 7.
    Good luck.
    Russ

  • Best workflow for working on a project on two computers (i.e. a desktop and a laptop)

    What is the best workflow for the following--
    I have a project where I want to do some work on my laptop, then work on the desktop, then back to the laptop?
    As a video guy, in Final Cut I do this by having two identical copies of my media, and simply transferring the FCP project file back and forth between the two computers.
    Is there an equivalent in Aperture of transferring your selections/edits between two copies of Aperture, with the original master files on both in identical projects?

    I don't know what you mean by "bandwidth options". If you mean the frequencies and bands the iPad can use with cell carriers, those are listed in the Wireless and Cellular area of the tech specs. For iPads sold in the US, see:
    http://www.apple.com/ipad-mini/specs/
    If that's not what you meant, please post back and clarify what you mean by "bandwidth options".
    Regards.

  • Best format for converting video

    What is the best format for converting videos for your iPod? I am using HandBrake but there are a lot of settings. What is the best one? MP4, AVI, OGM? And what about codecs? AAC audio, MP3 audio, H.264 video, MPEG-4 video, .mp4 video? Need help!

    See this Wikipedia article for some encoders:
    * http://en.wikipedia.org/wiki/Theora#Encoding
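
    (HandBrake's Apple presets already handle this, but for reference the usual iPod-friendly combination is an .mp4 wrapper with H.264 baseline video and AAC audio. A hedged ffmpeg sketch of the same idea; the filenames, frame size and bitrates are ballpark assumptions rather than an Apple spec sheet.)

        # Hedged sketch: iPod-style encode -- .mp4 container, H.264 baseline video, AAC audio.
        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "episode.avi",            # hypothetical source file
            "-c:v", "libx264", "-profile:v", "baseline", "-level", "3.0",
            "-vf", "scale=640:-2",                    # fit to 640 wide, keep aspect, even height
            "-b:v", "1200k",
            "-c:a", "aac", "-b:a", "128k",
            "-movflags", "+faststart",
            "episode_ipod.mp4",
        ], check=True)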

  • Best workflow for colour grading in CS5.5

    What is the best workflow for colour grading in Premiere Pro CS5.5? I don't like the three-way colour corrector in Premiere; I like to use Levels and Curves, so I normally use Color Finesse 3 LE in After Effects, but it's hard to export an entire timeline to AE because it doesn't import transitions and title text correctly.
    I see in CS6 there is now SpeedGrade available from Premiere Pro. Can you export your entire timeline into SpeedGrade and have it retain your titles and transitions?
    So how are most folks colour grading their finished timelines in Premiere Pro?
    Kevin

    If I can CC in Pr, I will. What prevents me from staying in Pr is if I need to do a lot of masking or secondary-type corrections. Ae is better for that, IMO.
    I guess my general rule is, "The more love I need to put on my footage, the more likely it is I'll need to take the project to Ae."
    When I do CC in Pr, my tools of choice are RGB Curves (which is CUDA friendly, and I have a CUDA card) for CC, and I use Three-Way or Fast CC mostly for increasing saturation.  I leave the other controls alone.
    The advantage to doing CC in Pr is the CUDA acceleration. If you use CUDA effects, you can work a lot faster, due to less rendering. Once you go to Ae, you lose CUDA, but you gain flexibility. Therefore, there's not a single one-size-fits-all answer.
    I also have a lot more available plugs in Ae. I like Frischluft Curves, which is an outstanding curves-based plug. And the native Hue/Saturation effect in Ae is extremely useful for single-hue corrections. Pr doesn't have a similar native effect.
    And then as a last step, after getting all my corrections "evened out," I'll add Magic Bullet Looks to an adjustment layer for an overall treatment.
    I only opened SpeedGrade once after installing it.  I don't see why anybody would use it, unless they were already familiar with it.  You can't go back to Pr easily.  And in my world, projects are rarely finished.  Clients think they can make massive changes at any stage of post, even after mastering and delivery.  I've had clients make changes to spots after they've been airing for several days.  I laugh at editors who name anything "final."

  • Best Workflow for Multicam Sequences?

    I have a project that includes video from 2 cameras and audio from 2 audio recorders. I synced everything up using PluralEyes, then imported it back into Premiere. Then I opened the synced sequence, deleted the camera audio, nested the video, and enabled multicam. After that, I started going through and editing the multicam footage by selecting the appropriate camera by pressing 1 or 2 (or command-clicking on the appropriate camera in multicam view). My question is, now that I've synced everything up and finished my rough multicam edits, how do I mark the selections of the sequence that I would like to keep?
    For example, part of the sequence is an interview. Instead of just cutting the sequence up, I would like to first mark and label the portions of the interview that I would like to keep. That way I can easily go through and know what the interviewee is speaking about in each clip without having to play the whole thing. I understand that to do this on the original clips in the source monitor, I would need to create subclips. But how do I do it on a synced multicam sequence? Is there a way to open the sequence in the source monitor? What's the best workflow for something like this?
    I've been using Premiere for all of two days (switched from FCPX), so any help would be much appreciated. Thanks.

    I use comment markers for this sort of thing as I prefer to keep my project bins thin. To open a sequence in the source monitor, right-click on the sequence in the project window and select Open in Source Monitor. Position the play head at a point of interest and press M. Go to Window>Markers to view a list of markers and add comments. When you want to review a marked section click the corresponding marker in the marker window and the play head will advance to that point. Btw, you can do this within the timeline and in any monitor, not just the source monitor.

  • Best workflow for doing a QC check?

    Hi all,
    A client wants me to do a technical QC check on video files I receive before editing them. The items they want checked are as follows:
    o   File format (GOP structure, bitrate, wrapper, codec, chroma subsampling)
    o   Macro blocking
    o   Artefacts induced during transfer from VT, frame rate conversion, size or bitrate conversion
    o   Field dominance issues
    o   Chroma/luminance levels
    o   Aspect ratio discrepancies
    o   Flashing/strobing
    o   Frame slicing
    o   Audio levels to the EBU R128 recommendation.
    o   Wide dynamic range
    o   Audio distortion, clipping, phasing and dc offset etc
    Most of these are fine, as they can be spotted during manual playback. But what would be the best workflow for monitoring the Chroma/luminance levels? I figured I could turn on the YC waveform and play the clip back through a monitor - but the main issue is keeping an eye on both the picture & the waveform at the same time.
    Can the scopes be set to give me an indication/alarm when a specified peak occurs? The same way audio does when it clips, where you get a red dot on top of the level meter?
    And is there anything else I'm overlooking?
    Thanks in advance - cheers!

    Thanks shooternz,
    The client has specifically requested each file be QC checked. They will be delivering files to us for editing, which will be broadcast after we return the files.
    So we need to manually watch each video we receive to check for any audio peaking or pixelation/slicing/artefacts, etc., that may have occurred during the encode - but I'm just wondering if the YC waveform can be set to raise an alarm or "peak" if the video file goes outside of legal limits at any stage. We will be viewing the video file, so it'll be awkward to also have to watch the YC waveform at the same time - hoping that part can be automated via a 'peak alarm' or similar!
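
    (I don't believe the built-in scopes will raise an alarm on their own, but this part can be scripted outside the NLE. Below is a hedged sketch that uses ffmpeg's signalstats filter to flag frames whose luma strays outside a nominal range, plus its EBU R128 meter for the loudness summary. The filename and thresholds are placeholders, and the delivery spec may define "legal" differently.)

        # Hedged sketch: per-frame luma "alarm" via signalstats, plus an EBU R128 summary.
        import json
        import subprocess

        SRC = "delivery_01.mxf"      # hypothetical incoming file
        Y_MIN, Y_MAX = 16, 235       # nominal 8-bit video levels; adjust to the spec

        def luma_alarms(path):
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-f", "lavfi",
                 "-i", f"movie={path},signalstats",
                 "-show_entries",
                 "frame=pts_time:frame_tags=lavfi.signalstats.YMIN,lavfi.signalstats.YMAX",
                 "-of", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            for frame in json.loads(out)["frames"]:
                tags = frame.get("tags", {})
                ymin = float(tags.get("lavfi.signalstats.YMIN", Y_MIN))
                ymax = float(tags.get("lavfi.signalstats.YMAX", Y_MAX))
                if ymin < Y_MIN or ymax > Y_MAX:
                    print(f"LEVELS at {frame.get('pts_time')}s: YMIN={ymin} YMAX={ymax}")

        def r128_summary(path):
            # The ebur128 filter writes its summary to stderr; just show the tail of it.
            err = subprocess.run(
                ["ffmpeg", "-nostats", "-i", path, "-filter_complex", "ebur128",
                 "-f", "null", "-"],
                capture_output=True, text=True,
            ).stderr
            print("\n".join(err.splitlines()[-12:]))

        luma_alarms(SRC)
        r128_summary(SRC)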

  • Best Workflow for Deleting?

    Hi Folks.
    Would anyone like to suggest the best workflow for deleting files after a project? Let's assume that i have no future need for any of the captured footage or PE7 files. It would be helpful to know what the default locations are for all the footage and files that are now on my computer.
    Thanks, Stan

    In my book I recommend that, whenever you start a new project, you create a new folder for it. That way, all of your captured video, project files, rendered video and scratch files are in the same folder. Removing it all from your hard drive is as simple as deleting the folder.
    If you didn't do that, things get a bit more complicated. Once you've backed up what you want to keep, you can delete the captured video, rendered files, etc., from the sub-folders -- but you'll need to do it manually. Hopefully the files will be recognizable by the names you gave your project and capture files when you created them.
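
    (If you did keep everything in one folder per project, the cleanup really can be scripted in a few lines. A hedged sketch below; the folder path is a made-up example, and there is no undo once it runs.)

        # Hedged sketch: remove a finished project's folder (clips, project file,
        # renders, scratch) in one go. Double-check the path before running.
        import shutil
        from pathlib import Path

        project = Path.home() / "Videos" / "2009_Smith_Wedding"   # hypothetical project folder
        if project.is_dir():
            shutil.rmtree(project)
            print(f"Deleted {project}")
        else:
            print(f"Not found: {project}")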

  • Best workflow for creating logsheets before ingesting.

    I'm looking for the best workflow for creating detailed log sheets of my AVCHD footage before ingesting/transferring it to FCP. I have many hours of footage shot with my Sony NX5U and only want to ingest the necessary clips. I'm using FCP 6.5 on a MacPro tower. Thanks.

    >I'm using FCP 6.5
    NO you are not.  You might be using FCP 6.0.5.
    Well, the only way you can ingest only sections of tapeless media is by using Log and Transfer.  And you cannot Log clips first, and transfer later.  You cannot do the same thing you did with tape, make a Batch Capture list.  So you would look at your footage in the FCP L&T interface, and you can send one to be transcoded while you move onto the next one to watch.  That can happen at the same time.
    But there's another issue here. The Sony NX5U uses an AVCHD format that is slightly different than normal...not that there is a normal. But I discovered that clips up to 10 min in length import fine, while anything over 10 min took 4-5 times longer to import - meaning a 30 min clip took just over two hours. It was something that ClipWrap2 addressed before FCP did...and they only addressed the issue with the FCP 7.0.3 update. FCP 6.0.5...or even the last update for that version, 6.0.6, doesn't address this. There might be issues importing this footage into FCP 6.
    And the other options...Avid Media Composer and Adobe Premiere...don't have a "log and transfer" type interface. They bring in the full clips only. With Premiere, you are stuck with them being full length. But with Avid, you access the footage via AMA (Avid Media Access), then make your selects on a sequence, and then consolidate/transcode only the footage on the sequence.
    And no, there's no way to make a logging sheet for these apps either.
    The best you can do is watch the footage, note the timecode, and manually enter those numbers when you go to bring in the footage.
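
    (One semi-automated alternative to hand-written notes: pull each clip's metadata into a spreadsheet first, then add your shot descriptions there while screening the footage. A hedged sketch follows; the card path, the .MTS extension and the timecode tag are assumptions, and AVCHD files don't always carry usable timecode or creation dates.)

        # Hedged sketch: build a CSV "log sheet" skeleton from clip metadata with ffprobe,
        # leaving a blank description column to fill in while watching the footage.
        import csv
        import json
        import subprocess
        from pathlib import Path

        FOOTAGE = Path("/Volumes/NX5U_CARD/PRIVATE/AVCHD")   # hypothetical card or folder path

        def probe(path):
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-show_entries",
                 "format=duration:format_tags=creation_time,timecode",
                 "-of", "json", str(path)],
                capture_output=True, text=True, check=True,
            ).stdout
            return json.loads(out).get("format", {})

        with open("log_sheet.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["clip", "duration_s", "creation_time", "timecode", "description"])
            for clip in sorted(FOOTAGE.rglob("*.MTS")):
                fmt = probe(clip)
                tags = fmt.get("tags", {})
                writer.writerow([clip.name, fmt.get("duration", ""),
                                 tags.get("creation_time", ""), tags.get("timecode", ""), ""])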

  • Best workflow for creating games for PC and Starling games in AIR

    Best workflow for creating games for PC and Starling games in AIR.
    I need to develop for tablets with Starling and AIR, as well as for PC without Starling, i.e., downloading from a server.
    In Starling you have to
    1. instantiate Starling
    2. use texture atlases for loading graphic assets - well, it's the recommended way
    3. use the juggler
    etc...
    i.e., with all these differences, is it best to:
    1. create completely different projects
    2. just create different classes for the Starling-specific parts
    3. use a conditional compile constant, e.g.:
        CONFIG::STARLING { /* then do this */ }
        CONFIG::CPU      { /* then do this */ }
    I currently have the latter but it feels a little messy. I would prefer separate classes for each platform where the image loading is concerned.
    There must be many game developers out there, in large gaming companies, who know exactly what to do. Perhaps you can point me in the right direction with a book or something.
    CHEERS

    The other thing is that I can't see the games being very different, as you suggest. I have downloaded Lee Brimelow's tutorial on creating a Starling game on Lynda.com, and the code is great for creating games using states for menu, play, game over, etc. The only difference is that he uses a texture atlas, and there is a great difference in loading them, so technically you would only need to handle that in a different class, as collision and the rest of the gameplay should be the same in both. I think!!!
    Well, actually not exactly, but near enough - it might be easier to create separate projects and just cut and paste the new code introduced.
