Timing beats for music video - workflow best practices?

Hi all,
I am currently working on a short video that needs to be timed to music. In some respects a tool like Final Cut or Premiere would be better suited to the task, but because of the technical specification of the output, it has to be done in After Effects.
So I am trying to nail down a good workflow for timing various elements of the project to the beat of the soundtrack. So far, I have expanded the waveform of the soundtrack so I can get a rough visual indication of where obvious changes happen; see the screenshot to the left. I placed a marker on the exact frame where the drum kick happens, where I would want to do something cool with the video.
This is of limited utility: although the beat does coincide with one of the peaks of the waveform, it's not apparent from looking at the waveform which peak it is. Nor is it possible to pinpoint the exact frame on which the beat happens this way, since each peak spans several frames and includes several elements of the audio.
So the next thing I tried was simply listening to the track with a RAM preview and pressing the * key to place markers on the soundtrack on the beat. Maybe some of you are better at rhythm games than I could hope to be, but I find that while this lets me put down a lot of markers quickly, it's very inexact.
Which, alright, is to be expected - I might not get every marker on the exact frame on the first try, but I can go back and slide them a few frames, right? The problem here is with scrubbing audio and with visual feedback from the playhead. During a RAM preview the playhead does not update on every frame - only every seven or eight frames or so - which means that if I watch the timeline to see whether my markers are in the right place, I can't tell when the playhead passes over them.
So instead, I hold down the Ctrl key and scrub. This kind of works, but the audio playback is quite choppy and discontinuous. As such it's hard to get a sense of each frame as part of the song, and even when I do hear a beat it's hard to recognize it as such (this technique works better in Final Cut, which somehow manages to scrub while sounding less choppy). So then I stop on a frame I think might be the beat, and I Ctrl+click on the playhead, which loops a short segment (five frames?) of the audio. Then I do the same for a few frames before and after, just to make sure I targeted the right frame. Now, finally, I have the marker in the right place - for a single beat.
There's got to be a better way.
I wonder if anyone who has been faced with this problem has figured out a more effective workflow? I am thinking there might be a separate piece of software I might use that lets me go through the audio and place markers better - and then allows me to somehow export marker data to After Effects? Or maybe another workflow I haven't thought of.
Your thoughts?

Cutting music accurately to beats is usually a simple matter of math. Most music, especially music with a strong beat, runs at a specific number of beats per minute, and that's usually easy to figure out. There are times when you have a tempo shift, a fermata, an accelerando or ritardando, or some other change, but these are usually short and pretty easy to fix. Figure out the tempo and you're about 90% there.
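As a sketch of that math (the 120 BPM tempo, 29.97 fps frame rate, and duration below are illustrative assumptions, not values from the thread):

```python
# Sketch: convert a song's tempo to comp frame numbers for beat markers.
def beat_frames(bpm, fps, duration_s, offset_s=0.0):
    """Return the frame number of each beat within duration_s seconds."""
    seconds_per_beat = 60.0 / bpm
    frames = []
    t = offset_s
    while t < duration_s:
        frames.append(round(t * fps))
        t += seconds_per_beat
    return frames

# A 120 BPM track in a 29.97 fps comp: one beat every 0.5 s, about every 15 frames.
print(beat_frames(120, 29.97, 4.0))  # [0, 15, 30, 45, 60, 75, 90, 105]
```

Once you have these frame numbers, you only need to handle the occasional tempo change by restarting the count with a new `offset_s` at the change point.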
Another tool, one that I use quite often, is Sound Keys from Trapcode. It is much more versatile than the built-in Convert Audio to Keyframes.
If I have a fairly long segment with a bunch of quick beats, I will sometimes create a small solid, add a Hue/Saturation effect set to Colorize, add the expression index * 140 to the Colorize Hue property, trim the length of the layer to the length of one beat, duplicate the layer for as many beats as I want to cover, sequence the layers, and then use that layer sequence as a guide. The setup looks like this:
You could also use a loopOut expression on Rotation to rotate a small layer, say, 45º on every beat. For example, if you had a beat every 20 frames, set 3 hold keyframes - one at the first frame for 0º, one at frame 20 for 45º, and one at frame 40 for 0º - and simply add the expression loopOut("cycle") to Rotation. Now the little cube snaps to a different angle on every beat.
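A minimal sketch of the keyframe schedule behind that trick, assuming a hypothetical 20-frame beat interval and 45º step (the values are illustrative; loopOut("cycle") then repeats the cycle for the rest of the layer):

```python
# Sketch: generate the hold-keyframe schedule for the loopOut("cycle") trick.
def rotation_keyframes(beat_interval, step_deg, cycle_len=2):
    """One cycle of (frame, angle) hold keyframes.

    The last keyframe restates the first angle so the cycle wraps
    cleanly when loopOut("cycle") repeats it.
    """
    keys = [(i * beat_interval, (i * step_deg) % (cycle_len * step_deg))
            for i in range(cycle_len)]
    keys.append((cycle_len * beat_interval, 0))
    return keys

# A beat every 20 frames, snapping 45 degrees per beat over a two-beat cycle:
print(rotation_keyframes(20, 45))  # [(0, 0), (20, 45), (40, 0)]
```

Raising `cycle_len` gives a longer cycle (0º, 45º, 90º, ...) before the rotation snaps back to the start.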
These tricks will help you with some fast cuts right on the beat, but let me give you some advice on cutting to music that I learned 42 years ago (yes, I am an old guy) when I was learning to edit to music at my first job in film. My mentor had been in the business since 1952 and had worked as an editor on a ton of feature films and a zillion commercials. He told me that if you precisely match the beat - which was pretty easy to find and mark with a wax pencil on mag stock - your cuts will look out of sync. The problem with that precision is that the eye and the ear are not precisely in sync. The lead or lag time depends on the shot, where the shot is taking your eye, where the cut moves your eye, and the mood the music is creating. The best technique is to listen to the music and place markers as you tap out the beat. In the old days I did this by tapping a wax pencil on the mag stock as it ran through the Steenbeck or Moviola; today I do it by tapping the * key. Then I cut and adjust the cut point, letting the shot dictate when the music and beat line up emotionally. You'll get a better result and your work will go much faster than fussing around trying to precisely match the beat with every cut. I have shifted cuts as many as 4 or 5 frames ahead of or behind the beat to get a sequence to feel right.
I do use the two techniques listed above to cut short segments of very tight cuts, and they help save time, but setting your comp resolution to Auto, your comp zoom factor to 25%, and RAM preview to "From Current Time," and just cutting to the beat by feel, will give you a much more pleasing and emotional sequence than precisely and mathematically matching the beat. The final piece of advice I can give you about cutting music is to make the first cut, then walk away for at least half an hour, then come back and look at what you have done. You'll instantly see where you need to make adjustments, and it will take you less than half the time to cut your sequence.

Similar Messages

  • Workflow Best Practices for 11i and R12

A workflow best practices document targeted more towards System Administrators has been published; please read Metalink note 453137.1. Please note that this is not a troubleshooting guide; it is more a set of practices that can lead to a healthier workflow system.
Any suggestions/improvements from the field are welcome.

    Hi Narayan;
    Please see:
    Interesting Documents Concerning E-Business Suite 11i to R12 Upgrades [ID 850008.1]
    Upgrade Advisor: E-Business Suite (EBS) Technology Stack Upgrade from 11.5.10.2 to 12.1.2 [ID 253.1]
Those should help you.
I also suggest using the search mechanism on the EBS forum; you can find previous topics that mention the same or similar issues.
Regards
    Helios

  • Building complex flash game in Flash Builder 4 - Workflow/Best Practices

I'm investigating switching to Flash Builder 4 for building a complex game that currently lives purely inside Flash CS4. CS4 is a pretty terrible source code editor and debugger. It's also quite unstable. Many crashes caused by bad behavior in the SWF will take out the entire IDE, so they are almost impossible to debug. And I've heard other horror stories. To be clear, for this project I'm not interested in the Flex API, just the IDE.
    Surprisingly, it seems Flash Builder 4 isn't really set up for this type of development.  I was hoping for an "Import FLA" option that would import my Document Class, set it as the main entry point, and figure out where other assets live and construct a new project.  What is the best workflow for developing a project like this?
    What I tried:
-Create a new ActionScript project in the same directory where my CS4 project lives
    -Set the primary source file to match the original project's source file and location
-Set my main FLA to "export to SWC", and added the SWC path to my Flash Builder 4 project.
-Compile and run... received many errors due to references to stage instances. I changed these to getChildByName("stagename"). Instead, should I declare them as members of the main class? (This would mimic what Flash CS4 does.)
-My project already streams in several external SWFs. I set these to "Export SWC" to get compile-time access to classes and variables. This works fine in CS4; the loaded SWFs behave as if they were in the native project. Is the same recommended with FB4?
-Should I also be setting the primary FLA to "export to SWC"? If not, how do I reference it from Flex, and how does Flex know which FLA it should construct the main stage with?
    Problems:
-I'm getting a crash inside a class that is compiled into one of the external SWFs (with SWC). I cannot see source code for the stack inside this class at all. I CAN see member variables of the class, so symbol information exists. And I do see the stack with correct function names. I even see local variables and function parameters in the watch window! But no source. Is this a known bug, or "by design"? Is there a workaround? The class is compiled into the main project, but I still cannot see source. If Flex doesn't support source-level debugging of SWCs, then it's pretty useless to me. The project cannot live as a single SWF; it needs to be streaming and modular for performance and also workflow. I can see source just fine when debugging the exact same SWC/SWF through CS4.
-What is the expected workflow with artists/designers working on the project? Currently they just have access to all the latest source, and to test changes they run right through Flash. Will they be required to license Flash Builder as well so they can test changes? Or should I be distributing the main "engine" as a SWF and having it reference other SWF files that artists can work on? Then they compile their SWF in CS4, and to test the game, they load the SWF I distribute.
    A whitepaper on this would be awesome, since I think a lot of folks are trying to go this direction.  I spent a long time searching the web and there is quite a bit of confusion on this issue, and various hacks/tricks to make things work.  Most of the information is stale from old releases (AS2!).
If there's a clean workflow, I would happily adopt Flash Builder 4 as the new development tool for all the programmers. It's a really impressive IDE with solid performance, functional IntelliSense, a rich and configurable interface, a responsive debugger... I could go on and on. One request: ship with a "Visual Studio keyboard layout" for us C++ nerds.
    Thanks very much for reading this novel!

Flash Builder debugging is a go! Boy, I feel a bit stupid - you nailed the problem, Jason: I didn't have "Permit Debugging" set. I didn't catch it because debugging worked fine in CS4, since CS4 doesn't obey this flag, even for externally loaded SWF files (I think as long as it has direct access to the SWC). Ugh.
I can now run my entire multi-SWF, complex project through FB with minimal changes. One question I do have:
    In order to instantiate stage instances and call the constructor of the document class, I currently load the SWF file with LoaderContext.  I'm not even exporting an SWC for the main FLA (though I may, to get better intellisense).  Is this the correct way of doing it?  Or should I be using , or some other method to pull it into flex?  They seem to do the same thing.
The one awful part about this workflow is that since almost all of my code is currently tied to symbols and lives in the SWF, any change I make to code must first be recompiled in CS4, then I have to switch back to FB. Over time I'm going to restructure the whole code base to remove the dependency on having library symbols derive from my own custom classes. It's just a terrible workflow for both programmers and artists alike. CS5 will make this better, but still not great. Having a clean code base and assets abstracted away with no dependencies on the code seems like the way to go with Flash. Realistically, in a complex project, artists/designers don't know how to correctly set up symbols to drive from classes anyway; it must be done by a programmer. This will allow for tighter error checking and less guesswork. Any thoughts on this?
Would love to beta test CS5, FYI, seeing as it solves some of these issues.
    Date: Thu, 21 Jan 2010 15:06:07 -0700
    From: [email protected]
    To: [email protected]
    Subject: Building complex flash game in Flash Builder 4 - Workflow/Best Practices
    How are you launching the debug session from Flash Builder? Which SWF are you pointing to?
    Here's what I did:
    1) I imported your project (File > Import > General > Existing project...)
    2) Create a launch configuration (Run > Debug Configuration) as a Web Application pointing to the FlexSwcBug project
3) In the launch config, under "URL or path to launch" I unchecked "use default" and selected the SWF you built (I assume from Flash Pro: C:\Users\labuser\Documents\FLAs\FlexSwcBug\FlexSwcBugCopy\src\AdobeBugExample_Main.swf)
    4) Running that SWF, I get a warning "SWF Not Compiled for Debugging"
    5) No problem here. I opened Flash Professional to re-publish the SWF with "Permit debugging" on
    6) Back In Flash Builder, I re-ran my launch configuration and I hit the breakpoint just fine
It's possible that you launched the wrong SWF here. It looks like you set up DocumentClass as a runnable application. This creates a DocumentClass.swf in the bin-debug folder, and by default that's what Flash Builder will create a run config for. That's not the SWF you want.
    In AdobeBugExample_Main.swc, I don't see where classCrashExternal is defined. I see that classCrashMainExample is the class and symbol name for the blue pentagon. Flash Builder reads the SWC fine for me. I'm able to get code hinting for both classes in the SWC.
    Jason San Jose
    Quality Engineer, Flash Builder

  • Iso, Shutter Speed, Aperture, White balance for music videos or film?

Hello everyone. I just started making music videos. I just want some tips - or can someone direct me to a good guide for music videos? I know the basics of ISO, shutter speed, and aperture already. I read some guides, and so far I know I've got to put my shutter at 1/50, or if I'm recording something at high speed, I increase it. Is that right? Also, I always put my aperture at the lowest. I have a Canon T2i and Canon lenses, a 50mm f/1.8 and the kit lens, and I always put my aperture at the lowest: 3.5 (kit lens) or 1.8. The tricky part is the ISO. I don't know where to put my ISO. I know the lowest ISO is good, but sometimes the video looks dark. Do I fix this in video editing programs? Like, should I always stay at ISO 100? What about white balance? I use Final Cut 7.
Also, what about during the day and at night? Indoors? I also have cheap LED lights.

Your questions are about the very basics of photography.
I would consider buying a book which explains photography from the ground up ("...for dummies" is not meant as an offence!)
    in short:
    a scenery offers a defined amount of light.
    Your cameras 'film' aka sensor, needs a defined amount of light to deliver a picture.
So, on recording, you have to control the amount of light, and the controls are:
    aperture - opened or closed
    shutter - shorter or longer
    speed (ISO) - higher or lower.
Imagine the light as a stream of tiny balls coming from your scenery: a bigger hole (aperture) lets more balls into the cam. A shorter shutter speed reduces the number of balls. And a higher ISO catches more balls than a lower one.
Understanding that, you realize that those parameters are reciprocally related: an open aperture plus a shorter shutter speed results in the same amount of light on the sensor as a closed aperture plus a longer shutter speed.
    If the amount of light is still too much, lower the sensitivity = ISO.
    If the amount of light is too little, open the aperture, or use a longer shutter speed or raise the sensitivity.
When you set one or two parameters in this equation, you have to adjust the other ones. Video makers start with the shutter speed, usually set so the denominator is twice the frame rate - 1/50 for 25 fps, 1/60 for 30 fps. Why? Because it allows a nice-looking 'smear', aka motion blur, aka 'film look'.
Now, with a given shutter speed, you can control the amount of light reaching the sensor only by aperture, and when you reach its limits, by setting ISO.
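That reciprocity can be made concrete with the standard exposure value formula, EV = log2(N²/t); the particular f-numbers and shutter speeds below are illustrative:

```python
# Sketch: aperture/shutter reciprocity expressed as exposure value (EV).
# Settings with the same EV put the same amount of light on the sensor;
# each +1 EV halves that amount.
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t) for f-number N and shutter time t in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

# f/2 at 1/50 s admits the same light as f/4 at 1/12.5 s:
# stopping down two stops is offset by a two-stops-longer shutter.
ev_a = exposure_value(2.0, 1 / 50)    # log2(4 / 0.02)  = log2(200)
ev_b = exposure_value(4.0, 1 / 12.5)  # log2(16 / 0.08) = log2(200)
print(round(ev_a, 2), round(ev_b, 2))  # 7.64 7.64
```

Since video locks the shutter to the frame rate as described above, in practice only the aperture term is free to move, which is exactly why ISO (and ND filters) become the remaining controls.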
When you have understood this BY PRACTICING, your next step is the concept of shallow focus / depth of field. And when you have understood that BY PRACTICING, you will understand why movie makers need Neutral Density filters...
But first things first: buy a book; read your camera's manual; practice the settings and watch carefully how different settings change the results.
    ... and it's really just about basics - independently of your device and/or your software.
    None of the things mentioned above can be done or 'fixed' in post... (basically...)
    happy movie making!

  • I have an iphone 5s and when I make a playlist on itunes for music videos and sync my iphone the playlist does not show up. What can I do to fix the problem?

    I have an iphone 5s and when I make a playlist on itunes for music videos and sync my iphone the playlist does not show up. What can I do to fix the problem?

    Version 11.1.5.5 of iTunes was just released today. If you update to that, does that help with the recognition troubles?
    http://www.apple.com/itunes/download/

  • Text pluggin for music video text

    I am looking for a text plugin for music video text. Can anyone recommend one?

    You mean the typical text insert that announces the song title and artist?
    It's already there in FCP: Effects > Generators > Text > Lower Third.

  • Best export option for music video?

    Hello,
I'm trying to get a music video that my friends' band made into a suitable format (to get it transferred to Betamax and handed over to local television stations). The video was actually made in Flash and exported to uncompressed QuickTime video. The problem we encountered is that the guys who handle the transfer to Betamax couldn't play the file on their Windows PCs (not even in QuickTime for Windows).
So, my question to you guys is: which format should I export the uncompressed video to in QuickTime, keeping in mind that I want maximum quality (preferably no loss at all) and compatibility on a Windows machine?
    Help greatly appreciated; we're on a deadline .
    Thanks in advance,
    Pieter-Jan Beelaerts

Open it in iTunes and export it in DV to a DV tape camera, and give the tape to the station. Or buy Flip4Mac - the $49 or $99 version - and export it to Windows Media.
    Or open in MPEG Streamclip and export to DV, and provide it on a DVD if it is short enough (DV is 13GB per recorded hour).
    If they want a tape, then export to DV and then open that in iMovie, and export to tape.
Any TV station will be OK with DV; they must get viewer submissions in all sorts of formats.
    I would worry about quality degradation as well.

  • Best compression for music video.

    hello...
    i'm currently working on a music video using my DV cam, and flash animation.
I've exported the animation in DV PAL format (since MPEG-2 takes ages to render). What I am asking is: which compressor should I use to export the whole thing onto a DVD? I usually do it with the 3ivx compressor, but I was wondering if you could offer any tips...
    thanks in advance.

If you are creating a DVD, start with your DV footage and encode it in Compressor, BitVice or whatever encoder you have to the MPEG-2/m2v video format.

  • Adding Audio from a Premiere Layer into AE for music video

    Hello All,  My situation is this.
I'm doing a dance remix music video. The original song was a ballad. The footage used for the dance mix consists of both the ballad clips sped up (the ballad was at 61 beats per minute) and footage shot to the dance track (127 BPM), all of which is green screen.
The way I approached it was to put the .wav song on one track in Premiere Pro and then sync up the rough-mix edit clips to the main music track, time-stretching the ballad clips, making the mouth movements match and the body movements move in rhythm. After I got all my rough edits done I discarded their audio tracks, as I thought I would not need them further and it was too many tracks to look at (Premiere Pro could use a Shy switch). So I ended up with one audio track and 12 video tracks, each track containing one pass of the filming.
Then I selected each rough-edited clip and used "Replace with an AE composition". In AE I then did my green-screen keying. One scene I created was a composited club scene in AE where the artist is on a stage and there are flashing lights behind her, using several instances of CC Light Sweep on a solid layer. When I dynamically linked back to Premiere, the light sweeps were not in rhythm with the music, as I had no audio to listen to in AE.
It's a lot of render time tweaking in AE and then going to Premiere Pro and having to wait for more rendering to see the results.
My question is: how would I go about putting the audio track into the AE linked comp? The song is over 5 minutes, and most of the clips are a few frames to a couple of seconds long.
I think my approach was wrong in the first place and I should have made an AE project the length of the song and done all my editing and compositing in that one comp instead of putting things in Premiere Pro first - which now makes sense, since I don't see any benefit to being in Premiere Pro at this time. And I don't see any way of including part of the audio track when making the AE linked comp.
    Thanks

I don't really get the problem, I must admit. If the final audio exists and the clips have been handled correctly in Premiere, the timing should match in any case. Why would you even need the clips' source audio? The way to work in AE is to simply create an audio preview and place markers on important cues by hitting the multiply key on the numpad while the sound plays... Everything else shouldn't matter.
    Mylenium

  • Workflow best practices

    Hi all,
I am new to workflow, and I would like to know the best practices in WF design - for example locking objects, checking existence... It would be great if one of you could provide me with snapshots of the skeleton of a generic WF, for example for a change or approval scenario. Thank you in advance.
    Best regards,
    Haku

    Hi,
Normally there are lots of business objects related to the business-specific scenario for which you're modelling a workflow.
    For example for employees (Person administration module), there are the business objects BUS1065 and EMPLOYEET (there are of course other such as FAMILY).
Always look at these first to check whether a method can be used, or which BO to extend (delegation).
In many cases there are also standard workflows supplied by SAP which can cover the functional requirement, or which can serve as a basis.
As for locking objects in your specific case (I'm not at work now, so this is from the top of my head): enqueueing and dequeueing an employee is a method of BUS1065/EMPLOYEET (there should also be a task for it). When making HR-PA flows, you could include an enqueue of the person as a first step; if this fails, you could create a loop around it which waits for, let's say, 5 minutes and then tries again. If after 20 attempts (or fewer) you still can't enqueue the employee, you can send a message to someone. This could be implemented in every workflow you make, and I recommend it.
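The enqueue-retry loop described above can be sketched generically; everything here (the `try_enqueue` callable, the wait, and the attempt cap) is an illustrative stand-in for the SAP enqueue task, not a real SAP API:

```python
# Sketch: generic retry loop for acquiring a lock on a business object,
# mirroring the advice above (wait ~5 minutes, give up after ~20 attempts).
import time

def acquire_with_retry(try_enqueue, attempts=20, wait_s=300, sleep=time.sleep):
    """Try to lock; on failure wait and retry, giving up after `attempts`."""
    for _ in range(attempts):
        if try_enqueue():
            return True
        sleep(wait_s)
    return False  # at this point the workflow notifies an administrator

# Example with a fake lock that only frees up on the third attempt:
state = {"calls": 0}
def fake_enqueue():
    state["calls"] += 1
    return state["calls"] >= 3

print(acquire_with_retry(fake_enqueue, attempts=5, wait_s=0, sleep=lambda s: None))
# True, after 3 attempts
```

In a real flow the wait step would be a workflow wait/deadline rather than a blocking sleep, but the control structure is the same.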
    Kind regards, Rob Dielemans

  • IP Video conferencing best practice - Tanberg/Cisco hardware

We are currently experiencing intermittent issues with our video conferencing on our internal and external network, with intermittent screen fragmentation. We have separate VLANs configured on our internal network for the video traffic only. We use the Movi client with the majority of our remote users. I'm wondering what the best practice is for setting up and supporting a video conference network.

    Have you set up QoS policies for video?
    I have a very good network readiness document you are welcome to, if you want to ping me your email address?
    I can't seem to copy it properly into the tech support app on my iPad, drop me an email to [email protected] and I'll send over the info - should help!
    Sent from Cisco Technical Support iPad App

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain) and Lync Phone Edition devices.
We want to deploy a custom music-on-hold file. What's the best way of doing this? I'm thinking of placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? Site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at that site to use that UNC path? Also, how do clients know what site they are in?
Alternatively, I was thinking of pushing out the WMA file to local devices via a Group Policy and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
Any help with this would be appreciated, particularly around how users are assigned to sites and the syntax used to create a site policy for the first option. Any best-practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
If you use Lync Phone Edition or another device that doesn't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about music on hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
    Best regards,
    Eric

  • Highest Quality Live Video Streaming | Best Practice Recommendations?

    Hi,
When using FlashCollab Server, how can we achieve the best quality when publishing a live stream?
    Can you provide a bullet list for best practice recommendations?
    Our requirement is publishing a single presenter to many viewers (generally around 5 to 50 viewers).
    Also, can it make any difference if the publisher is running Flash Player 10 vs Flash Player 9?
              Thanks,
              g

    Hi Greg,
For achieving the best quality:
a) You should use an RTMFP connection instead of RTMPS; RTMFP has lower latency.
b) You should use the Player 10 SWC.
c) If bandwidth is not a restriction for you, you can use the highest quality values. The WebcamPublisher class has a property for setting quality.
d) You can use a lower keyframeInterval value, which will send full keyframes more often rather than relying on interframe compression.
e) You should use the Speex codec, which is again provided with the Player 10 SWC.
These are some suggestions that can improve your quality, depending on your requirements.
    Thanks
    Hironmay Basu

  • Dropbox workflow best practices?

I'm looking for best practices on using Dropbox for an InDesign/InCopy workflow. Specifically, in my own testing with two computers, I've realized that the lockout function does not work when using Dropbox, so multiple people can edit the same file simultaneously... which is obviously problematic. Suggestions for how to avoid this? If I create a "lock" folder and move files in there before editing them, will my designer's Dropbox refresh fast enough to prevent him from also opening that file?
How are your own Dropbox workflows set up, for those of you who use it? Are there other hiccups, hang-ups or landmines I should watch out for?

Well, the issues are what you stated yourself - namely that Dropbox doesn't copy the lock file across, and if you manually "locked" a file by shifting it into a different folder, another user could still grab it before Dropbox had shifted their copy.
In that scenario, though, you could speed up the shift by making sure that the InDesign file was updated on both computers first. To do that, you'd have to save the file, wait until it was uploaded (you get the green tick), wait at least the same amount of time again (for it to download on the other computer), then shift it. This means Dropbox would shift it quicker, because that is the only action it is transferring between computers.
Another thing to be aware of, though, is that if your Dropbox is slogging away at another task (say, uploading a folder of linked images), it might put that shift down the list.
In conclusion, I don't think it's realistic to expect Dropbox to stop two users editing the same file; you need some other project structure to achieve that.

  • Getting audio from Pre Pro into AE for Music Video

    Hello All,  My situation is this.
I'm doing a dance remix music video. The original song was a ballad. The footage used for the dance mix consists of both the ballad clips sped up (the ballad was at 61 beats per minute) and footage shot to the dance track (127 BPM), all of which is green screen.
The way I approached it was to put the .wav song on one track in Premiere Pro and then sync up the rough-mix edit clips to the main music track, time-stretching the ballad clips, making the mouth movements match and the body movements move in rhythm. After I got all my rough edits done I discarded their audio tracks, as I thought I would not need them further and it was too many tracks to look at (Premiere Pro could use a Shy switch). So I ended up with one audio track and 12 video tracks, each track containing one pass of the filming.
Then I selected each rough-edited clip and used "Replace with an AE composition". In AE I then did my green-screen keying. One scene I created was a composited club scene in AE where the artist is on a stage and there are flashing lights behind her, using several instances of CC Light Sweep on a solid layer. When I dynamically linked back to Premiere, the light sweeps were not in rhythm with the music, as I had no audio to listen to in AE.
It's a lot of render time tweaking in AE and then going to Premiere Pro and having to wait for more rendering to see the results.
My question is: how would I go about putting the audio track into the AE linked comp? The song is over 5 minutes, and most of the clips are a few frames to a couple of seconds long.
I think my approach was wrong in the first place and I should have made an AE project the length of the song and done all my editing and compositing in that one comp instead of putting things in Premiere Pro first - which now makes sense, since I don't see any benefit to being in Premiere Pro at this time. And I don't see any way of including part of the audio track when making the AE linked comp.
    Thanks

    Export just the audio track from Pr as a .wav file and then add it as a layer in your AE comp.
    Cutting is much easier in Pr, especially the audio.  The first time you try cutting audio to video in AE, you'll know what I mean.
    -Jeff
