Best approach to making Documentary in iM 6

I am making a documentary of my parents' life. I have around 5 hours of DV tape of them sitting on a couch and talking. Not terribly exciting to watch, so I want to import scanned photos (stills), using the audio as a bed for each still. It seems very cumbersome and slow to extract the audio, then dump the video, import the still, apply Ken Burns, and get the audio and still in synchronization . . . .
And then the audio gets out of sync with each subsequent edit.
My question is: what is the best process? Should I extract all the audio first, then splice in the photos? I am sure this can be done better with FCx or FCp, but I don't have those programs and want to stay with iM6.
Jerry

Extract all the audio first.
It may take quite a while (..the latest iMovie does the job differently from earlier versions..) so go and have a few cups of coffee while it's all extracting ..or a meal, or watch a film!
When it's all extracted, you'll have your "audio bed" below the video, and in sync with it.
[NOTE: 'Extracting' simply silences the original clip(s), and copies the audio onto the track below. So you can always "re-instate" the original audio, back into the video clip(s), just by raising the audio level of the relevant clip(s) from zero to 100%.]
When you use transitions, though, to dissolve the "talking heads" into a still photo, or something else, the entire video is shortened by the same duration as the transition you're using (except for the 'Overlap' transition).
This means that whenever you use a transition - instead of a straight cut - to change between the "talking heads" and a still photo, the video will become OUT OF SYNC with the separate audio track.
So just before and after any transition, you may want to split the relevant audio clip(s), so that after adding your transition you can jiggle the audio along the audio track to reposition it in sync with the video.
Note that there's an easy method in iMovie to paste a still over (..and remove..) unwanted video: click on iMovie HD and Preferences, and tick the box marked 'Extract audio when using "Paste Over At Playhead"'. Then import your photo from iPhoto (..using Ken Burns or not ..whatever you prefer). The photo will appear at your playhead position, or at the end of the movie.
If it's at the end of the movie, select it and use Apple+'X' to cut it (..it's copied to the clipboard..), then put your playhead where you want the pic to be and on the top-line menu use 'Advanced' and 'Paste Over at Playhead' ..or just the shortcut Shift+Apple+'V'.. to insert it at that point.
That substitutes your photo for the original video, but leaves the audio intact, instead of your rather long-winded "..extract the audio, then dump the video, import the still, apply Ken Burns, and get the audio and still in synchronization.."
Do your video in small chunks, so that you're not ceaselessly jumping through huge lengths of video ..but note especially that marking places in your video with 'Bookmarks' will let you jump instantly forward and backward to them, instead of having to scroll through all that material.
Edit your video in chunks of, say, 20 mins at a time. That makes it easier on your Mac, and lets you keep track more easily of what you're doing.
When you're satisfied with your 1st twenty mins, start a new project and work on the next 20 mins. You'll pick up speed and start to work far faster. Then you may want to finally go back to the first segments and "tweak" them a bit, when you've learned, by experience, a few more shortcuts!

Similar Messages

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to make use of the MappingLookupAPI classes to achieve this, or do those run in the mapping runtime environment only?
    3. Any other way?
    Has anybody ever tried this? Any pointers?
    Regards,
    Amol

    Hi ,
    The JCo lookup is internally the same as a JCo call; the only difference is that you are not hardcoding the system-related data in the code, so it's easier to maintain during transports.
    The JCo lookup code is also more readable.
    Regards
    Vijaya

  • Best approach to archival of large databases?

    I have a large database (~300 gig) and have a data/document retention requirement that requires me to take a backup of the database once every six months to be retained for 5 years. Other backups only have to be retained as long as operationally necessary, but twice a year, I need these "reference" backups to be available, should we need to restore the data for some reason - usually historical research for data that extends beyond what's currently in the database.
    What is the best approach for making these backups? My initial response would be to do a full export of the database, as this frees me from any dependencies on software versions, etc. However, an export takes a VERY long time. I can manage it by doing multiple concurrent exports by tablespace - this can be completed in under a day. Or I can back up the software directory plus the database files in a cold backup.
    Or is RMAN well-suited for this? So far, I've only used RMAN for my operational-type backups - for short-term data recovery needs.
    What are other people doing?

    Thanks for your input. How would I do this? My largest table is in monthly partitions, each in its own tablespace. Would the process have to be something like: alter table ... exchange partition-to-be-rolled-off with non-partitioned-table, then export that tablespace?
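    That is essentially the standard roll-off pattern. A rough sketch of the two steps, with hypothetical table and file names (table-mode export shown; the exact exp parameters available depend on your Oracle version):
    alter table big_table
      exchange partition p_2009_01
      with table big_table_2009_01
      without validation;
    Then, from the OS shell, export the now-populated standalone table:
    exp userid=... tables=(big_table_2009_01) file=arch_2009_01.dmp log=arch_2009_01.log
    The exchange is a data-dictionary operation, so it is nearly instantaneous; only the export itself takes real time.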

  • Best Approach to Compare Current Data to New Data in a Record.

    Which is the best approach for comparing current data to new data for every single record before making updates?
    What I have is a table with 5,000 records that needs to be updated every week with data from a data feed, but I don't want to update every single record week after week. I want to update only the records with different data.
    I can think of these options:
    1) two cursors (one for current data and one for new)
    2) one cursor for new data and one varray for current data
    3) one cursor for new data and a select into for current data.
    Thanks for your help.

    I don't recommend it over merge, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows and, if they differ, copy the new row into the destination table. Or you might be able to use rowscn to see if things have changed.
    Like I said, I don't know that I'd take either approach over merge, but they are options.
    Gaff
    Edited by: Gaff on Feb 11, 2009 3:25 PM
    Actually, rowscn between 2 tables may not be an option. You can turn a rowscn into a time value, so it is derived from when the row was last changed rather than from the column values for the row, and that would foil that approach.
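    Since merge is the route recommended above, here is a minimal sketch of an update-only-what-changed merge (table and column names hypothetical; the where clause on the update branch requires Oracle 10g or later):
    merge into weekly_data d
    using feed_data f
    on (d.record_id = f.record_id)
    when matched then
      update set d.col1 = f.col1, d.col2 = f.col2
      -- decode treats two nulls as equal, giving a null-safe "has it changed?" test
      where decode(d.col1, f.col1, 0, 1) = 1
         or decode(d.col2, f.col2, 0, 1) = 1
    when not matched then
      insert (record_id, col1, col2)
      values (f.record_id, f.col1, f.col2);
    Rows whose columns are all unchanged are skipped entirely, so only the genuinely different records get touched.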

  • Best approach for green screen over multiple clips?

    So I know this may fall under the category of After Effects, but starting with a project in Premiere with many clips and cuts, what would be the best way to apply a green-screen key over all of these clips? I'd think that making all of these clips After Effects compositions would be excessive and not the best approach. So is there a better way to go about doing this?

    In these cases it's best to test with one clip before going all in and having to change direction after much is done. You can get by with Ultra Key in PPro if the screens are done correctly and are clean enough. Much will also depend on the expected result. That is, you may wind up doing lots of tweaking for shots that might not pay off if they're quick/action, etc. I usually go the roundabout route through AE for complex shots that need to be really clean, but have opted for Ultra in PPro for quick clips with action. When I do use Ultra Key I use more than one instance of the key. So, test it both ways and use this as an opportunity to learn the pros of both methods.

  • Is Dynamic Link best approach toward building/integrating AE title sequences for PP TV Master?

    I'm attempting to create my first TV pilot using Premiere Pro and After Effects, using 4K 29.97 down-converted to 1080p 29.97 for the full-screen portions, which I have now roughly arranged in Premiere Pro. I would appreciate advice, and any useful links, on how I should best approach building the show's title sequences.
    Reading comments about workflow here, I'm thinking I should start my title builds in After Effects in an equivalent 4K 29.97 setup (I still find the AE settings for doing this confusing), then possibly create "a composition right in AE and then import it to the PrPro project via Adobe Dynamic Link -> Import After Effects Composition..."
    Does this sound about right? Any tips/links would be greatly appreciated by this FCP user transitioning over to PP/AE. I also have Mocha for AE but have yet to explore three-way workflows... Any links on that would be greatly appreciated as well.
    Thanks for your input guys. This forum has already proved so useful in making the FCP transition and learning these great new tools!
    Best,
    M. Workman

    AE is obviously an awesome tool for titling!
    When you create your brand-new AE comp, a dialog box with ALL the settings appears.

  • Migrating, Fresh Install, or Best Approach?

    I'm currently running Mountain Lion on this late 2008 MBP. For the past couple of years it's had issues where it goes off into never-never land, and I believe it's from corrupt system files or something. Sometimes different programs cause the issue: the computer will work fine unless I open one of them, like Preview or TextEdit, and then it just starts running totally sluggish and getting stuck. Closing the programs doesn't fix the issue; I have to restart and avoid those programs as long as I can. I've also noticed normal image files of mine, and files I created, going corrupt at times for no reason. I've tried everything in the book: upgraded operating systems, run repairs, purchased a new HD, etc. Nothing fixes it.
    So, I'm finally upgrading to Yosemite and figure now is the time... once and for all... to fix the issue by doing the first complete erase of the OS and a fresh install after all these years. So right now I'm cloning the drive to an external via Data Rescue 3. Once that is complete, what is the best approach here to be sure none of the corrupt system code or anything makes it over, so that it's like I have a brand new MBP (just one that's really old... LOL)?
    I'm thinking the migration option from the external backup clone I'm making would be best, but I'm not sure how it installs the old programs if you choose to have it copy the apps over. Won't it copy the apps' code from this install? If that's the case, it sounds like migration might be a bad choice for me, as it risks bringing possible corrupt code back in. At the same time I want my settings, images, passwords, email, iPhone backups, music, etc. So won't it be really difficult if I don't use the migration choice, or is it simple to copy that stuff over? Please advise on the best and easiest way for me to get what I need onto this computer without letting any possible old corrupt code in...
    Thanks!

    fluxxy1245 wrote:
    Hi,
    I recently purchased the latest Mountain Lion.
    I was wondering what the best thing to do is:
    A complete fresh install
    or just upgrading from the previous OSX Lion.
    I have 3 macs :
    - iMac 27" (2009)
    - Mac Mini (2010)
    - MacBook Pro 13" (2011)
    The 3 of them came with Snow Leopard.
    And I did an upgrade to Lion shortly after it was released.
    If we upgrade:
    - does it leave unnecessary files?
    - is there a difference in speed and hard-disk space, or does it manage it all very well?
    Thanks for the info.
    Maybe some links would be welcome that give an in-depth explanation of this.
    There is some house-keeping that you need to do before/after upgrading to Mountain Lion.
    For installed third-party applications (and device drivers where indicated), check whether the vendor offers OS X Mountain Lion support. Drivers that worked in Snow Leopard or Lion may not work well in Mountain Lion without upgrades.
    You may have to re-install some third-party applications that have system dependencies that have changed in Mountain Lion.
    You may have to update your Printer preferences if printers do not work after the upgrade.
    You should count on a minimum of 4GB of memory (8GB is better) in each Mac.
    Although the upgrade process will not touch /Users, it is sound due diligence to back up the user data on all these Macs beforehand.
    Upgrading is simpler, multiple Mountain Lion downloads notwithstanding. If these Macs have been through prior OS X updates, they may not operate as fast as a full, clean install would allow. If you want to do a full, clean install, download the free Lion DiskMaker. It will automatically build a bootable USB installer for Mountain Lion from your first ML purchase and download. I used it to make a clean install on my SSD. Works perfectly.

  • Cost center changes for open POs - best approach for how to do it

    Dear All,
    We are changing the cost center for open POs; kindly tell us the best approach for doing this for all open PO line items. There are 4,000 open POs.
    We will totally block the old POs and immediately change the POs with no GR and no IR, but what should we do, and how, if there is a GR, an IR, or one of them? And also, what if there are differences between the IR and GR? Kindly provide all the best possible approaches.
    below are the scenarios.
    Open PO without GR/IR
    Open PO only with GR
    Open PO only with IR
    Open PO with IR/GR without difference
    Open PO with IR/GR with differences
    Service entry sheet
    Kindly provide me all the best approaches to achieve this task. Keep in mind any approach besides reversal of the GR or IR.
    qsm sap
    Edited by: qsm sap on Feb 15, 2010 12:08 PM

    Hi,
    Open PO without GR/IR
    Open PO only with GR
    Make the account assignment changeable at the time of IR in SPRO for account assignment category 'K', so that you can change the cost center while doing MIRO, if you do not want to go with a mass change.
    Service entry sheet
    You can change the cost center while doing the SES. No issue.
    For the others, reversing the IR is one option.
    Regards,
    Pardeep Malik

  • R/3 4.7 to ECC 6.0 Upgrade - Best Approach?

    Hi,
    We have to upgrade R/3 4.7 to ECC 6.0
    We have to do the DB, Unicode, and R/3 upgrades. I want to know what the best available approaches are and what risks are associated with each approach.
    We have been considering the following approaches (but need to understand the risk for each approach).
    1) DB and Unicode in the 1st step, and then the R/3 upgrade after 2-3 months
    I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
    2) DB in 1st step and then Unicode and R/3 together after 2-3 months
    Does it make sense to combine Unicode and R/3, as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using 2 systems for import and export during the Unicode conversion.
    3) DB and R/3 in 1st step and then Unicode much later
    We had a discussion with SAP and they say there is a disclaimer on not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please let us know if this is correct, and also whether doing Unicode later will pose any key challenges apart from certain language characters not being available?
    We are on single code page 1100 and the database size is about 2 terabytes.
    Thanks in advance
    Regards
    Rahul

    Hi Rahul
    regarding your 'Unicode doubt', some ideas:
    1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among other things, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
    2) In Unicode@SAP you can find several comprehensive FAQs.
    Conclusion from the FAQs: first of all, your strategy needs to follow your business model (which we cannot see from here).
    Example: the "Upgrade to mySAP ERP 2005" FAQ includes interesting remarks in the section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
    "...The Unicode conversion depends on the customer situation....
    ... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
    - If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
    In the Technical Unicode FAQ you read under "What are the advantages of Unicode ...", that "Proper usage of JAVA is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal). ....
    => Depending on whether your systems support global processes, and depending on your use of Java applications, your strategy might need to look different.
    3) In particular in view of your 3rd option, I recommend you to take a look into these FAQs, if not already done.
    Remark: mySAP ERP 2005 is the former name of the application, which is now named SAP ERP 6.0.
    regards, and HTH, Andreas R

  • I have a MacBook Pro 5,4 running OSX 10.6.8 and Safari 5.1.10. A website i like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    Unfortunately, Safari cannot be updated past 5.1.10 on a Mac running v10.6.8.
    So, the options are to upgrade to a newer OS X, or to use Firefox or Chrome.
    Be aware, Apple no longer supports Snow Leopard v10.6 > www.ibtimes.com/apple-kills-snow-leopard-os-x-106-no-longer-receives-security-updates-1558393
    See if your Mac can run v10.9 Mavericks >  OS X Mavericks: System Requirements
    If so, you can download and install Mavericks for free from the App Store.
    Read prior to upgrading >   Upgrading to 10.7 and above, don't forget Rosetta! | Apple Support Communities

  • What is the best approach to handle multiple FKs to a single table?

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with FKs into an EUL and Discoverer Plus is used to create the above report, then on first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join, it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select col1, col2,...coln,
    pc.name creator_name, pc.address creator_address,.... pc.phone creator_phone,
    pm.name modifier_name, pm.address modifier_address,.... pm.phone modifier_phone
    from main m,
    person pc,
    person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    A second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with the PERSON_MODIFIER folder using person_modifier_fk.
    Now Discoverer Plus will let you drag Name from each person folder without needing to resolve multiple joins.
    The question is: which approach is better, or is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is EUL design overhead from including the same object multiple times and then manually defining all the joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too. (For instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address... now you will have to create multiple LOCATION folders.)
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed, as sketched below. This will work perfectly for the above requirement, but a downside is that the report will run more slowly if filters are needed on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (say the person table contains 50 attributes; then it's not a good idea to register 50 functions).
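    For illustration, a minimal sketch of such a lookup function, using the table names above (error handling kept deliberately simple):
    create or replace function get_person_name (p_person_id in number)
      return varchar2
    is
      v_name person.name%type;
    begin
      select name into v_name from person where person_id = p_person_id;
      return v_name;
    exception
      when no_data_found then
        return null;
    end get_person_name;
    Once registered with Discoverer, it could be called as get_person_name(person_creator_id) or get_person_name(person_modifier_id) in a calculation.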
    Any comments/suggestions will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join going to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or to create a complex folder. Either will work; in both cases you would need to join on some other column than the ones you referred to earlier. A sketch of the view idea follows.
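    For what it's worth, a minimal sketch of the database-view alternative, folding both joins into a single object (column lists abbreviated; names taken from the thread):
    create or replace view main_with_persons as
    select m.col1, m.col2,
           pc.name as creator_name,
           pm.name as modifier_name
      from main   m,
           person pc,
           person pm
     where m.person_creator_id  = pc.person_id
       and m.person_modifier_id = pm.person_id;
    You would then load MAIN_WITH_PERSONS into the EUL as one folder, so Discoverer Plus never has to choose between the two joins at all.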
    I hope this helps
    Best wishes
    Michael

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables; but then we have to maintain all the data in another schema as well, so we would need to update two schemas in a given session - a separate schema for each user and another schema for all the data - and then there may be updating problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best option?
    Please give your valuable ideas.

    That is true, but "I want a solution from you all" is like my saying: I want you to tell me how to fix my friend's car.

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC to send the data to SQL Server. I need to insert data into two tables (header & details). Is this possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for the header and one for the detail).
    Do a multimapping, where your source is mapped into the header and detail data types, and then send the data using the JDBC receiver adapter.
    For the structure of your data type for the insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • Best approach to return large data (> 4K) from a stored proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure with an OUT param which is a REF CURSOR over a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All the references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
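    A CLOB is generally the cleaner fit here. As a minimal sketch (procedure and parameter names hypothetical), the computation can append each block of data to a temporary CLOB and hand it back through an OUT parameter, with no temp table and no extra cursor:
    create or replace procedure get_large_data (
      p_param  in  varchar2,
      p_result out clob
    ) is
      v_chunk varchar2(4000);
    begin
      -- session-duration temporary LOB; available from Oracle 8i onward
      dbms_lob.createtemporary(p_result, true, dbms_lob.session);
      for i in 1 .. 10 loop
        -- stand-in for the real computation producing each block of data
        v_chunk := 'chunk ' || i || ' for ' || p_param;
        dbms_lob.writeappend(p_result, length(v_chunk), v_chunk);
      end loop;
    end get_large_data;
    Whether MSDAORA and the Oracle ODBC driver of that era handle CLOB OUT parameters cleanly is worth verifying against your exact client versions; if they do not, opening a REF CURSOR over a query that returns the CLOB is another route that avoids the LONG limitations.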

