Seeking 'offline' workflow advice

Hi.
I'm currently sorting through 35,000 photos in my main catalog, which lives on my desktop Mac with the relevant drives attached. It's a long process that I'm doing over the span of many weeks, and I'd like to be able to move around the house while I do it.
I'd like to be able to do this on my MacBook laptop, ideally over the network, or maybe with just the catalog file. (I can't hook the external drives up to the laptop.)
Is there a fast and simple way to do this without mixing up my metadata? I'm sorting pictures into collections, rating them, adding keywords, and so on.
I'm assuming I can copy my main catalog to the laptop and copy it back after editing. Does that make sense? Are there any drawbacks? Obviously at that point I can't write changes to XMP...
The ideal way, it seems to me, would be to open my main catalog on the desktop machine over the network from my laptop. But I don't think that can be done.
If anyone knows of a simple method that isn't risky or too time consuming (more than about 10 minutes of setup and syncing), great; otherwise I'm willing to forget it as not worth the trouble. But I was wondering if anyone had already scratched their heads over this.
Thanks
Luc
PS: Lr is my favorite software ever. It's not perfect (what is?), but it's perfect for me. (Just wanted to say that, OK?)

File->Export as catalog
File->Import from catalog
That's my method.
If you select a large group of images and choose to export the raw files as well, it can take a while, but you don't have to sit there watching the progress bar!
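If you find yourself doing that round trip every day, the file copy itself can also be scripted. Just a sketch with placeholder paths and catalog name - it assumes the desktop's catalog folder is reachable as a mounted share, and Lightroom should be closed on both machines while copying:

```python
#!/usr/bin/env python3
"""Copy an exported Lightroom catalog between machines (sketch, placeholder paths)."""
import shutil
from pathlib import Path

# Catalog created with File->Export as catalog (negatives left behind)
SRC = Path("/Volumes/DesktopShare/Sorting/Sorting.lrcat")      # on the desktop's share
DST = Path.home() / "Pictures" / "Sorting" / "Sorting.lrcat"   # on the laptop

def copy_catalog(src: Path, dst: Path) -> None:
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                                    # the catalog file itself
    previews = src.with_name(src.stem + " Previews.lrdata")   # previews let you browse without the drives
    if previews.exists():
        shutil.copytree(previews, dst.with_name(previews.name), dirs_exist_ok=True)

if __name__ == "__main__":
    copy_catalog(SRC, DST)   # swap SRC and DST to push the edited catalog back
```

Run it before you move to the laptop, then run it with SRC/DST swapped (or just use File->Import from catalog on the desktop) when you're done.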
All the best!

Similar Messages

  • Film workflow advice

    Hello,
    I need some workflow advice for a feature film and I would really appreciate some input and technical expertise. I am working with telecined footage from 16mm film and I need to perform reverse telecine with Cinema Tools so that I can edit the footage in Final Cut. My end format will be a 35mm blow-up from my original 16mm negative, so I need Cinema Tools to track my footage correctly so that I can later export a cut list for the negative cutter.
    My digital footage is SD and is on DV cam tapes. The lab used a telecine speed of 23.98 when they transferred the footage. The lab provided flex files which I used to create a cinema tools database and to batch capture all of the footage.
    First, I need to confirm that the footage was captured correctly.
    To capture I used the FCP preset: DV NTSC 48 kHz (29.97 fps DV-NTSC Best Quality 720x480)
    I did not use the FCP capture preset "DV NTSC 48 kHz Advanced (2:3:3:2) Pulldown Removal" because my understanding is that this preset is meant for use with external hardware that removes pulldown during capture, correct? In this case I will be removing the pulldown with Cinema Tools instead.
    I am also assuming that setting the capture fps to 23.98 would have been wrong because my footage is coming from DVCAM tapes, which can only be 29.97, right? So even though the telecine speed was 23.98, the footage needs to be captured at 29.97 and then processed with Cinema Tools to make it 23.98 or 24 fps, correct?
    Second, I need to know if my goal should be to make my footage 23.98 or 24 fps with Cinematools. If my end goal is to export an EDL for a negative cutter to go back to my 16mm footage to cut from, should I edit at 24 fps? I think the answer is yes, but then I am worried that my telecine was done at a speed of 23.98. My audio was recorded at 30 fps - though my understanding is that the fps of the audio doesn't matter since audio is recorded in time and not frames. However, I did read in the Cinematools manual that if the footage is 23.98 as opposed to 24 fps, then it means that my footage is going to playback at a slightly slower rate, thereby affecting audio sync. If that is true, will I have to slow my audio slightly to maintain sync? This doesn't seem ideal.
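    (To put a rough number on that worry - a back-of-the-envelope calculation only, with the runtime just an assumed example, not my actual cut length:)

    ```python
    # 23.976 fps picture conformed to true 24 fps runs ~0.1% fast,
    # so real-time audio slowly slips against the picture.
    runtime_min = 90                      # assumed example runtime
    speed_ratio = 24 / (24000 / 1001)     # = 1.001
    drift_sec = runtime_min * 60 * (speed_ratio - 1)
    print(f"~{drift_sec:.1f} s of drift over {runtime_min} min")   # ~5.4 s
    ```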
    Third, I am having trouble when I do a reverse telecine test with Cinema Tools on my footage. I am looking at burned-in keycode and film frame numbers in my footage and observing that when I advance through the footage frame by frame in FCP, the film frame numbers of every frame are perfectly clear and legible (they advance thus: A2, B2, C1, D1, D3). However, when I view a clip in Cinema Tools (pre-reverse telecine), the frame numbers are blurred and have interframe flicker (the interlacing field pattern). The same is true after I perform reverse telecine on the clip. The Cinema Tools manual mentions this as something to be avoided in a clip that has been reverse telecined (in "Checking Your Reverse Telecine Results" on page 132) - however, I see this problem in clips that I have not yet reverse telecined.
    I am worried because another person worked with this footage before me and I fear that they performed reverse telecine on the footage already (and incorrectly) and directed Cinema Tools to overwrite the original file instead of creating a new one. If this is true then I would have to start over and recapture everything. On the other hand, it seems like if that were the case I would see the problem in FCP as well. Is there a way I can tell for sure?
    Thank you so much in advance for any advice you can give me! I really appreciate it.

  • Looking for some workflow advice

    I'm starting work on a project that has media coming from several different sources, and I'd like some workflow advice. I'm using the CS6 Production Premium suite on a Windows 7 computer. My primary video source is a Canon XF-100 shooting 24/1080p MXF video, but I'll also be incorporating AVC-HD shots taken with a 60D, XA10, GoPro, and iPhone. So, here are my main questions:
    1. Should I transcode or not? Most of the footage (at least 2/3rds or so) will be from the XF-100, which Premiere can edit natively. I'm inclined just to put the AVC-HD clips straight into an MXF-formatted project and let Premiere render them in the sequences as needed. But I'm also wondering if it would be better for overall quality and consistency to transcode everything into the same format before importing into the project and set the project to that format.
    2. If I do transcode, what format? Should I just convert the AVC-HD stuff to MXF for use in an MXF-native project? Or should I convert everything to another format entirely? If so, which one? (I'm thinking AVC-Intra 100 at 24/1080p, but I'm not sure.)
    Any experienced sages out there want to offer their $0.02? Thanks in advance,
    Stu

    Transcoding will not produce any better quality; in most cases you can actually degrade it, unless you transcode to a raw/uncompressed format. Transcoding is usually done for proxy editing, to allow faster editing, or for compatibility, when the original file cannot be read by the editing application. Premiere Pro usually handles mixed formats well; just make sure to either create the sequence from a clip that matches the destination format, or create a sequence manually and specify the format that will be used for export.
    If you do need to transcode to get some files to read, the Avid DNxHD codec is a popular choice because it takes very little CPU and GPU power to decode. It has several quality options to choose from, but it is limited to 1080p. If you transcode for faster editing (proxy editing), use DNxHD 36 8-bit, and relink to the originals when done to keep their quality.
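    If you do end up making a batch of DNxHD 36 proxies outside Premiere, here is a rough sketch of how that could be scripted - it assumes ffmpeg with DNxHD support is installed, the folder names are placeholders, and the scale/frame-rate values should be changed to match your actual footage:

    ```python
    #!/usr/bin/env python3
    """Batch-transcode AVCHD clips to DNxHD 36 proxies via ffmpeg (assumed installed)."""
    import subprocess
    from pathlib import Path

    SRC_DIR = Path("avchd_clips")   # placeholder source folder
    OUT_DIR = Path("proxies")       # placeholder output folder
    OUT_DIR.mkdir(exist_ok=True)

    for clip in sorted(SRC_DIR.glob("*.MTS")):
        out = OUT_DIR / (clip.stem + "_proxy.mov")
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            # DNxHD 36 profile: 1920x1080p, 23.976 fps, 8-bit 4:2:2, 36 Mbit/s
            "-vf", "scale=1920:1080,fps=24000/1001,format=yuv422p",
            "-c:v", "dnxhd", "-b:v", "36M",
            "-c:a", "pcm_s16le",
            str(out),
        ], check=True)
    ```

    Either way, relink to the original clips before the final export so their quality is preserved.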

  • InDesign (or other poster) workflow advice

    I have been looking at the "Aperture InDesign Integration" plugin, primarily in order to integrate a "poster workflow".
    http://www.macosxautomation.com/applescript/aperture/indesign/index.html
    I have used the InDesign demo for a project, and basically the linking of images was really fantastic; the ability to manipulate the image behind a "window" was great as well. Effectively it seemed like a seamless workflow.
    I'm curious to know if anyone has any advice as to whether I need the full CS5.5 - which I guess includes Illustrator and Photoshop? I do plan to do some work in a vector program such as AI as I do a bunch of 3D CAD modeling and I'd like to "bump" some of this into my presentations and vector seems like the way to go.
    I guess what I am wondering is if I should just go ahead and bite the bullet and buy this software for 700 clams or if I have options or other considerations here.
    Hoping to have it set up shortly so any advice is really welcome.
    Thanks
    Jon

  • Architecture Portfolio Website Workflow Advice (newbie)

    Hi all.
    I am looking at Adobe products and already have Creative Suite for a (primarily) poster-creation workflow, which I am very happy with. I need to redo an existing "iFrames"-based website after having created a nice blog using the WordPress blog themes.
    I am hoping someone can help me with advice on Adobe-specific software I might use, or other advice, suggestions, etc.
    Background:
    I have been really happy with my ability to organize a very large number of images in an Aperture database on the Mac and to publish them easily to Flickr (among other things). This has now freed me up to focus on creating a new website to replace an old one that needs updating. Right now I have just created a /link/ for the pictures on the webpages which takes the viewer to the Flickr albums.
    I am looking at "themes" such as the WordPress Photography theme (http://theme.wordpress.com/themes/photography/) and I am also looking at sites that are using the "autoviewer" or "autoviewer pro" software.
    http://www.iloveconcrete.net/drive1/drive1.html
    http://www.simpleviewer.net/autoviewer/
    Anyway, I am familiar with the WordPress UI, but I am desperately trying to find a method where I don't have to use the WordPress image uploader. I guess the holy grail would be to just "publish" album "code" out of Aperture that I would then embed into a WP theme website, which I am sure I could create in a week or so.
    Anyway, does anyone know of anywhere I can ask such questions, or get some leads on keeping this workflow simple while still creating a nice website?
    TIA and apologies for the off-topic post.
    jon

    Hi Nancy, dweb3d.
    This is really great help and I very much appreciate it.
    Can I please ask you some follow-ups here as I try to get this together?
    For instance, my thought at the moment is to use the Sidewinder theme [http://graphpaperpress.com/themes/sidewinder/], which I am sort of guessing uses a "parallel" or "plugin" (not sure how you would term this) called Photoshelter [http://www.photoshelter.com/about/wordpress].
    Am I correct in thinking that it is THIS plugin that makes the functionality of Sidewinder complete?
    Also, would something as straightforward as a WP plugin (such as NGG Gallery) allow me to basically export my Aperture album images to the desktop and then upload them by FTP to get them into the WP blog? Or are the answers here indicating that I would need a customized set of coding instructions embedded in the blog to make this happen?
    I mean, I guess in an ideal world at some time in the future there could be a link between my Aperture albums and the images displayed in my WP portfolio blog, but for the meantime, can anyone help me understand what it might take to do a reasonably fluid facsimile of this workflow?
    I guess I am not understanding how all the parts discussed here either relate or don't relate...
    THANKS

  • Workflow advice on matching different cameras and lenses

    Greetings. I'm currently trying to match six cameras and all their different lenses to produce identical images and I need advice on fine tuning my workflow on making the camera profiles.
    My team and I shoot weddings. We have a Nikon D3s, a Nikon D3, 2x Canon 5D Mk III, a Canon 1D Mk III, and a Canon 5D Mk II, with 17 lenses, and I'm in the middle of the process of matching their colors so that when we shoot together I don't go crazy trying to match the white balance between all the cameras in Lightroom.
    The issue I have is that I managed to make all the colors, contrast, saturation and brightness for all cameras and lenses on daylight shots identical (with minimal differences) but shots under 2800k light are very different.
    I'll explain what I've done so far so that you can find my error and hopefully decide to help me.
    I bought a 24-patch color checker and took photos with all cameras - once with a 2800 K halogen lamp and then with a 5500 K strobe (Nikon SB-800 @ 1/16 power). I shot in RAW. The exposures are good - no blown highlights or underexposed areas. In Camera Raw I applied the "Camera Neutral" color profile for all cameras. I exported the shots as DNG. For each camera and lens I imported the halogen-lit and flash-lit files into DNG Profile Editor.
    In the chart section I created a color table for the "2850 K only" halogen lit shot and then for the flash lit shot choosing "6500 K only".
    So far so good. The colors are good but to achieve them for the different cameras or lenses I need to input different color temperatures and tint. What I want is to set identical color temperature for all photos (say 5500k) and to get identical colors regardless of camera or lens.
    So the next step I took to get at least the daylight shots identical is to play with the "White balance calibration" under the "Color matrices" tab. For instance - DNG Profile editor determines that the flash shot should be set to 6050k +8 tint. I move the "white balance calibration" sliders until I reach the numbers I want above the flash shot (5500k). Because the colors shift I press the "Create color table" button again. My goal is to show the program that I want these (calibrated) colors to be achieved at 5500k +0 tint and not 6050k +8 tint. I get the result I want but that affects the 2850k shot. If I readjust the "white balance calibration" sliders for the 2850k shot and use "create color table" again, that ruins the 6500k adjustment.
    So using this workflow I get either the daylight shots right or the tungsten lit shots, but not both.
    So in short my question is - How do I create proper profiles that produce identical colors among all cameras and lenses under all white balance settings?
    Thanks in advance for your help and for your patience to read all this.
    PS
    I fear I may have put this topic in a wrong section. If you don't mind, I'd post it in the Lightroom section.

  • LR EXTERNAL HD WORKFLOW ADVICE?

    My hardware setup is pretty simple: a PowerBook G4 with 2 external HDs (250 GB and 500 GB). The LR application and libraries are on the laptop (so I assume all metadata and XMP files are stored on the laptop). Due to storage space limitations on the laptop, I'm planning to move the RAW image files to an EHD and point LR to the folder locations there. However, I will often need to work on images when not connected to the EHD (like when I travel). I'd like some advice on a good workflow to assure that when I reconnect to the EHD, my latest Develop work done remotely is synced back to my image folders on the EHD. I'm assuming this will require moving the images (or folders) I expect to work on to my laptop temporarily, in which case the workflow will include removing these images (or folders) from my laptop when I reconnect, so that they don't keep using up storage space there. The whole point of this is to keep my laptop lean and mean, while my image directories on the EHDs grow big and bad.
    A related question: when I am not connected to my image folders, I see that I can use most LR functions, but not the Develop module. That makes sense. However, if I use some of the basic develop functions in the Library module, is there a need to sync up any modifications made there when I reconnect to my image folders on the EHD (keep in mind that my LR app and library are on my laptop)?
    Thanks in advance; I've spent many hours searching the archive for an answer; found most of the answer but not all of it.
    - E!

    Simple in theory, but in practice it wasn't.
    When I exported the DNG files to the folder on my laptop labeled Mobile, they didn't show up in LR, although the Finder showed they were there. So I tried to add this folder to the Library and was prompted to Import these files. I'm assuming that I don't really want to import these files a second time on my laptop, since I already imported them to my EHD once. Is that correct?
    Once I've figured out how to get the files to my laptop, I'm curious whether the import procedure back to the EHD will recognize them and simply add my changes to the original DNG files.
    I further will need to design a workflow to suit my situation. Because in fact, my first import is usually to my laptop when I am in the field, after which files get moved to my EHD. But they stay on my laptop until I've sorted through them, and ideally I'd like to keep several months' images on my laptop with the originals on my EHD.
    Sounds like there's no elegant silver bullet, but just some workarounds I'll have to cobble together to suit my intended workflow. Any advice or links would be appreciated.
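    One thing I could probably script myself is the copy-to-EHD half; here is just a sketch with placeholder paths, and it deliberately copies rather than moves, since the actual relocation should be done from Lightroom's Folders panel so the catalog doesn't lose track of the files:

    ```python
    #!/usr/bin/env python3
    """Mirror new raw files from the laptop's image folder to the external drive (sketch)."""
    import shutil
    from pathlib import Path

    LAPTOP_DIR = Path.home() / "Pictures" / "Field"   # placeholder laptop folder
    EHD_DIR = Path("/Volumes/PhotoArchive/Field")     # placeholder external-drive folder

    for src in LAPTOP_DIR.rglob("*"):
        if not src.is_file():
            continue
        dst = EHD_DIR / src.relative_to(LAPTOP_DIR)
        if not dst.exists():                          # copy only what isn't on the EHD yet
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
    ```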
    - E!

  • VHS/Hi8 capture - futureproof workflow advice

    I have a massive project on my hands... I have hundreds of VHS tapes and tens of Hi8 tapes I want to convert to digital form. For the past several years I've been meaning to get around to this project, and have accumulated quite a bit of hardware and software. I've read a ton of forums and blogs on the topic, and quite a few questions were answered, but I was hoping to get some advice from people who've done this and have some retrospective advice.
    Here is my list of software/hardware
    MBP i7 with 10.7.2 (will prob downgrade to 10.6 for this for canopus software reasons)
    JVC HR-S7800U SVHS deck (with TBC)
    Sony RDR-VX500 VHS/DVD combo recorder
    Canopus ADVC-300
    Sony Hi8 Camera (S-Video Out)
    Sony HVR-A1U HDV camera
    Blu-ray burner (will put in an external case for use with the MBP)
    Final Cut Pro 7
    Final Cut Pro X
    I have many questions and would really like advice on setting up a workflow to capture these tapes. I want to future-proof this collection, meaning I want to capture the best uncompressed video possible while at the same time not becoming wed to Final Cut or Apple-only formats (a la the FCP7->X issues). Remember, RealMedia was really neat about 15 years ago?
    Which capture format is best:
    Uncompressed 8-bit NTSC 48kHz or Uncompressed 10-bit NTSC 48kHz
    Has anyone optimized the hardware settings on the ADVC-300? I wanted to just capture without the Canopus software to keep everything consistent. Where do I put all the DIP switches?
    IRE 0 or 7.5
    Video Audio adjust on or off
    Chroma filter on or off
    External or internal video sync mode
    Thank you very much!!

    Sequence Preset: DV NTSC 48kHz
    Capture Preset: DV NTSC 48kHz
    Not a forum expert here, but I completed a very similar project to the one you are describing. It was very tedious and had its issues and challenges.
    Here are a few things you should think about. One of the big issues with the VHS format is that it is analog and has no timecode like that found on modern DV tape-based and file-based cameras. Is this important to you? To me it was a big factor in using the material with other NLE suites down the road.
    What I did was to capture/record each hour of VHS to MiniDV tape to create an archival copy with timecode and 48 kHz audio, then capture with FCP. In addition to getting timecode and the ability to cleanly log and transfer clips, the audio track will be compatible upon capture in FCP. Batch capturing of non-digital tape may (or "will" is a better term here) create audio drift and other audio issues.
    I would recommend doing the same with the Hi8 tapes.
    If you proceed without the above process and just batch capture directly, do some research on the camera that was used to record the tapes you are working with. The audio track recordings vary widely from camera to camera and become an issue that must be addressed by altering sequence and capture presets in FCP.
    I know recapturing to DV essentially "doubles" the work, but down the road you will always have digital tape archival backups that can be easily recaptured if necessary. Use quality tape. I made sure my DV tape deck heads were clean before and during the transfer, and upon FCP capture there appeared to be absolutely no generational "loss" of video or audio quality.
    As Studio X answered above, DV file format stored in your project file on scratch disk for FCP will be fine for most NLE suites in the future.  Just remember to back up your scratch drive on site and off site. 
    Hope this helps and good luck!

  • Soundtrack re-edit workflow advice

    I cut online videos to production music tracks and would like some input on good workflow practice.
    I need to be able to quickly re-edit the production tracks to match the timing of my video tracks, so I currently do it in Final Cut Pro. Needless to say, I don't get much control or functionality in FC, so occasionally I send to Soundtrack Pro and re-compile. ST doesn't seem to be particularly well set up for re-editing music tracks either, though.
    So I'm wondering if anyone has advice on other software that will enable me to quickly establish beats/bars that can be cut out or added to match my audio to the video narrative, and that also works well with FC.
    Cheers,
    Jesse.

    John,
    I think this mail is sent by the workflow mailer using the System Mailer item type. This is an auto-generated notification sent when the notification itself is canceled or retired. You can customize the subject there based on the source.
    Thanks
    Nagamohan

  • Online/offline workflow PPRO

    I have experimented with 720x480 DV NTSC widescreen versions of my AVCHD files for editing/cutting on a laptop, so that I can then move the drive over to my 6-core tower and reconnect the AVCHD original files for final encoding at full res. I make my cuts/edits in a 720x480 DV NTSC widescreen sequence, then copy them and paste them into an AVCHD sequence and reconnect the .mts files without issue.
    Problem: I do a LOT of multicam work. I would like to be able to "cut" the multicam on my laptop in DV NTSC, but when I try to bring the nested multicam sequence over into an AVCHD sequence at 1920x1080, I can't get the "cut" multicam to resize, because I am unable to change the sequence settings for the existing DV NTSC sequence.
    Am I missing something?
    The only way I could get this to work was to import the NTSC files into the AVCHD sequence (which makes them appear smaller in the frame) and cut them there, then reconnect the originals with no problem.
    Is there a way to do multicam editing with lower-res files, without this resizing issue?
    Please help. Thanks! :o)

    Allow me to give you what I believe to be a valid reason why I am being "impatient". 
    That was part of a well thought-out post that deserves a thoughtful reply.  You have to realize that most people only spend a limited amount of time in the forum.  Of those that do visit, only a select percentage will probably have the same type of media and workflow that you do.  Of those, many will not have, or have had, the same problem(s) that you do.  Of those that do, only a few may have a solution that worked for them.  You have to give those few folks time to get to your post, read and understand it, and craft an answer.
    You see many posts getting almost immediate replies. Most of those have one of two things in common: either they're easy to answer by almost anyone because the folks posting are just getting started with Pr, or the folks answering are members of the forum "regulars" who spend much more time here than the average member, have a ton of experience, and come across those questions pretty quickly after they've been posted.
    Your question isn't run-of-the-mill, and these days, there aren't a whole lot of people who edit offline with Pr any more.  I don't, which is why I couldn't help you out with technical info.  It's just not part of my workflow.  Plus, you posted at a time of day when the usual company of regulars didn't appear to be online.  And once they got online, it's likely they had to wade through a ton of topics, further delaying any attention to yours.
    So if you have hard questions that you expect will need advanced users to answer, you should expect to have to wait longer than the new user who wants to know, "What sequence preset do I use?" And you have to be prepared for the possibility that no one knows the answer, or even where to start. Users like that will usually just not post anything, further adding to the appearance that you're being ignored.
    As a Community Professional, which badge often indicates a forum moderator also, I advocated patience and lowered expectations because demanding a response in all caps so soon after your OP is poor forum etiquette, and may alienate what few users remain that may actually be able to help you.
    -Jeff

  • Digitize resolutions offline workflow without rendering

    Hello Guys,
    One more question about offline resolutions: we are trying to digitize MiniDV material over SDI, or DigiBeta over SDI (one workstation has a Blackmagic DeckLink and the other is equipped with a Kona LH), to OfflineRT. Whatever we do to store our material offline and work offline, we have to render it completely and the audio is missing. With 10-bit PAL or NTSC there's no problem. What are we doing wrong? Thank you very much for an answer.

    Mmmmm... Try opening the capture preset editor for DV to OfflineRT. (Duplicate it), then select the digitizer to be your Kona card... it might just work like a charm! I can select it from there... I'd think you can too.
    You'd probably want the 8- or 10-bit digitizer set, and you simply leave your Kona control panel alone. Don't know about the DeckLink gear.
    Seems to me it might work just fine, and the Decklink gear might also work.
    You'll end up with a custom capture preset for 8 or 10 bit uncompressed to OfflineRT that way.
    Jerry

  • Does anyone here make movie trailers?  Looking for workflow advice.

    I'm pretty new to editing and I've mostly done montage/tribute videos on YouTube. My process is slow and laborious. I'll scout a bunch of clips in the genre or subject of the montage and then go through a fairly long trial-and-error process trying to put it together like a puzzle.
    I'd like to get better as an editor and I'm interested in making story-driven trailers. I can't really find any resources online that teach a workflow for something like this. I'm wondering how much is planned in the mind's eye of the editor and how much is just trial and error, pulling clips and trying to arrange them in some semblance of order. I understand most trailers have several key moments of exposition to clue the audience in a little bit about the plot, and then a bunch of enticing visual shots to make you curious to see the film.
    I'm wondering if there is an efficient way to plan out a trailer and pull clips to use without having to go through a long, random trial-and-error process of what looks good next to what. Pretty much any tips on the process of making a trailer would be helpful. I know there's a million different ways to approach it and creativity is key. Do some people just have what it takes creatively, or can it be learned?
    In general I plan to use a few songs from the subject movie; the tone of the song will reflect the tone of the clips, and I'll try to highlight a few emotional dimensions of the film with a few different songs.

    It's a big question, so here is my way of answering... graphically.
    (everything has "structure"...films, books, plays, songs...life)
    https://www.google.co.nz/search?q=film+plot+structure&espv=2&tbm=isch&tbo=u&source=univ&sa =X&ei=RIjNU_vZNMn08QXijYJo&ved…

  • Two-computer workflow advice?

    I have a desktop Intel iMac; soon I'll have a Mac Book Pro to replace my PB G4, and it will be practical to run Aperture on both (day to day, and for traveling with the MBP). But what is the best way? If I want to import and edit images, not just view them, on both machines, then I see three general methods:
    1. A single library on a portable external drive (the Library chosen in Preferences on both machines -- not just referenced masters). Advantage: automatic syncing of work on the two machines. Disadvantage: without the portable drive, nothing's available at all.
    The other alternatives use separate but identical libraries (with managed images) on the two machines. To synchronize the libraries:
    2. Maintain a vault on an external drive. When work on one machine is done, update the vault. Then restore the Library from that vault on the other machine. (I haven't actually tried this.) OR
    3. Work with Projects instead: when work is done on one computer, Export the Project(s) and Import it/them on the other.
    I'm assuming both machines have internal drives big enough for the whole library. Also that referencing images doesn't much help, or make a lot of difference, in these scenarios. Also that Previews are a separate question (they could be handled differently on the two computers). So my questions are:
    - do people use all these methods?
    - have I left something out, possibly something obvious?
    - are there big advantages or disadvantages I might be ignoring?
    This kind of workflow must be pretty common. It would be nice to have some word from Apple, in the shape of one of those useful online Documents perhaps, about the various approaches. (I've searched but haven't found anything like that.)
    Thanks for any counsel.
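    PS: For method 2, I suppose part of the copying could be scripted outside Aperture. This is only a sketch with placeholder paths - Aperture quit on both machines first, and note it is a plain file mirror, not Aperture's own vault mechanism:

    ```python
    #!/usr/bin/env python3
    """Mirror the Aperture library package to a portable drive with rsync (sketch)."""
    import subprocess
    from pathlib import Path

    LIBRARY = Path.home() / "Pictures" / "Aperture Library.aplibrary"   # placeholder
    PORTABLE = Path("/Volumes/Portable/Aperture Library.aplibrary")     # placeholder

    subprocess.run([
        "rsync", "-a", "--delete",   # mirror the package, removing files deleted locally
        f"{LIBRARY}/",               # trailing slash: copy the package's contents
        str(PORTABLE),
    ], check=True)
    ```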

    You may be right that exporting/importing Projects is the way to go. But this situation is only in the very narrowest sense an instance of multi-user access to a database.
    The Aperture license lets you install on two machines. (You can't run on both if they're networked together, which means in practice that you have to turn off Airport on one.) The user will presumably be using only one copy of Aperture at a time, and changes to the database will occur only from one at a time.
    It's a shame Aperture doesn't have a built-in way to sync these, which puts it off on the user: it's obviously vital to remember to update before you do new work on both machines, creating an impossible sync situation.
    Still, the question should only be how a user (with a desktop and a laptop) can most efficiently and reliably (rememberably!) do that.
    I fully agree. Way back at version 1.0 I made that feature request: an Aperture-specific documented/supported protocol for the (typical for photogs) use of Aperture on a laptop in the field and then later on a desktop.
    It is an enterprise-critical issue for a huge portion of Aperture's user base, but unfortunately Apple still has not accommodated it. Perhaps Apple is waiting for some other app to build a great field-laptop to office-desktop workflow and steal Aperture market share.
    -Allen Wicks

  • Workflow advice for 45 minute music film

    All the footage is HDV PAL 25 fps, around 15 hours of it. I also have 5.1 mixes of the material, and I would like the final result to be in 5.1.
    Final output: a normal DVD, but it should be NTSC.
    Would someone be so kind as to suggest a good, rational workflow for this?
    I have Final Cut Studio and a 3 GHz 8-core Mac Pro with 9 GB of RAM and an ATI X1900.
    It would be much appreciated.

    Thanks Jim. The project is sort of like an "extended music video". So there is a record, 45 minutes long, and using that as a guide we shot the footage. It's not gonna be just lip-syncing all the time; there will be other things inserted there as well.
    It is just the workflow that bothers me, mainly:
    1. Should I convert the HDV material to ProRes 422 after capturing, or stay native?
    2. When should I convert from PAL to NTSC?
    3. Should I start by importing the 5.1 surround mixes into FCP 6 and then work with that as a guide?
    I've done some music videos before, but this is more like a music movie and will be sold as a retail DVD.
    I would appreciate any help you guys can give.
    Thank you, timo

  • Workflow advice please -- NEF, DNG or NEG compressed

    Thanks to kgelner and others I see that the DNG file format is not the 'universal' format that I thought it was. It still takes camera specific knowledge to develop a DNG file.
    DNG does have two clear advantages over NEF or CR2 files - Adobe has provided a specification for where to embed metadata in a DNG file, something that Canon and Nikon don't provide for their RAW files. Aperture makes this advantage seem insignificant, since it keeps its metadata in its internal database - and sending RAW files out to clients or coworkers doesn't seem like something I want to make part of my workflow.
    The one advantage left to the DNG file for a committed Aperture user might be the fact that DNG supports lossless compression. When I use this option on my D2x files they shrink from about 19 MB to 10-11 MB. A big storage advantage, and it also speeds up hard-drive access time for loading files on my system! This does require an extra processing step before importing.
    Nikon has a compressed NEF option that they say "has almost no effect on image quality." That doesn't sound lossless to me, but it does double the number of images you can fit on a CF card.
    BIG QUESTION: Is there any reason to believe that Aperture will develop a better quality image if it begins with a NEF file rather than a DNG? Of course I can shoot a bunch of test images and compare for hours (I still may do that), but that doesn't always reveal the truth either. (Some test images may compress well; other important images shot later may compress badly.)
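    (If anyone wants to check the size numbers on their own files, this is roughly how they could be totaled up - just a sketch, and the folder names are placeholders:)

    ```python
    #!/usr/bin/env python3
    """Compare total size of original NEFs vs. converted DNGs (sketch, placeholder paths)."""
    from pathlib import Path

    NEF_DIR = Path("originals")   # placeholder: folder of .NEF files
    DNG_DIR = Path("converted")   # placeholder: same images as lossless-compressed DNG

    def total_mb(folder: Path, pattern: str) -> float:
        return sum(f.stat().st_size for f in folder.rglob(pattern)) / 1e6

    nef_mb = total_mb(NEF_DIR, "*.NEF")
    dng_mb = total_mb(DNG_DIR, "*.dng")
    print(f"NEF: {nef_mb:.0f} MB, DNG: {dng_mb:.0f} MB ({dng_mb / nef_mb:.0%} of original)")
    ```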
    I am eager to see what the collective brain has to say to this.

    BIG QUESTION: Is there any reason to believe that Aperture will develop a better quality image if it begins with a NEF file rather than a DNG?
    No reason to speculate on or to wait for Apple to do anything.
    Use the format you want to use, find an application that suits your needs, might not be Aperture.
    bogiesan
