Boilerplate render pipeline

I figured out the answer to the question that I added to my answer in the footage properties post.
With the goal of loading a video and processing it in batch, I had gotten this far:
var footage = app.project.importFile(new ImportOptions(File("c:\\some.avi")));
var comp = app.project.items.addComp(footage.name, footage.width, footage.height, footage.pixelAspect, footage.duration, footage.frameRate);
var layer = comp.layers.add(footage);
layer.openInViewer();
This loads footage, creates a suitable composition and adds the video to a layer. I wasn't sure how to then get it into the timeline. There is just one more step:
comp.openInViewer();
Do you know of a skeleton render pipeline, either all ExtendScript or perhaps a combination of aerender.exe and scripts? I think that scripts on redefinery, for example, expect there to be a human waving a mouse around who then starts the script. I would like to achieve this:
Video file -> Load video and do stuff to it in After Effects -> Render
Do you know how to do that, or what I might be overlooking in the scripting guide? I've read through everything in the guide but haven't necessarily grasped the implications of what it says.
I should clarify: the middle part where you do stuff in After Effects should be accomplished by scripting, i.e. not:
1. Create a project
2. Create a composition
3. Create layers
4. Set up transformations
5. Save
and then process the prepared project. I want to automate steps 1-5.
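For what it's worth, here is roughly what I imagine the whole chain (steps 1-5 plus a render) would look like in one script. This is only an untested sketch: the paths, the example scale change and the output file are placeholders, and I'm only guessing that afterfx.exe -r script.jsx (rather than aerender.exe, which renders an already-prepared project) is the way to start it without a human.
// process_one_video.jsx -- untested sketch, all paths are placeholders
var footage = app.project.importFile(new ImportOptions(File("c:/in/some.avi")));
// Create a comp that matches the footage and add the footage as a layer.
var comp = app.project.items.addComp(footage.name, footage.width, footage.height, footage.pixelAspect, footage.duration, footage.frameRate);
var layer = comp.layers.add(footage);
// "Do stuff": transformations, effects, trims, etc. go here.
layer.property("Transform").property("Scale").setValue([50, 50]); // example only
// Queue the comp, point the output module at a file, and render.
var rqItem = app.project.renderQueue.items.add(comp);
rqItem.outputModule(1).file = new File("c:/out/some_processed.avi");
app.project.renderQueue.render();
// Save the project so the batch run leaves something to inspect.
app.project.save(new File("c:/out/some_project.aep"));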

Thank you. Looking at it again, my question is inarticulate. Really, I'm asking for a skeleton script for transforming video, which I think boils down to some set of actions that I normally do by hand.
I usually go through this routine:
Import media
Open in footage panel and set in and out points
Create a composition from the footage, which creates a layer and adds it to the timeline, with the clip trimmed to the in and out points.
Then I "do stuff".
Then I render and save the project.
Here is my revised code in caveman syntax, as opposed to maintainable code like http://www.crgreen.com/boethos/
(In other words, I suppose I'm not actually looking for "boilerplate" code, though that is interesting too. I mean the types of API calls needed to do basic setup and teardown for transforming and rendering video in batch.)
var footage = app.project.importFile(new ImportOptions(File("c:/some.avi")));
var comp = app.project.items.addComp(footage.name, footage.width, footage.height, footage.pixelAspect, footage.duration, footage.frameRate);
var layer = comp.layers.add(footage);
layer.inPoint = 1.0;   // in composition time
layer.outPoint = 2.0;  // in composition time
comp.displayStartTime = 0.0;
comp.openInViewer();
This still doesn't work as I expect though. The timeline starts at 0 and lasts for the duration of the footage. I'm not sure how you trim.
I'm also not sure if trimming in the timeline is different from trimming in the footage panel and then adding trimmed footage to a composition.
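My best guess at getting the trim I actually want (a timeline that covers only the 1.0 to 2.0 second slice of the source) is something like the following, untested, and I'm not sure whether the work area is the intended tool instead:
var trimIn = 1.0;   // seconds into the source footage
var trimOut = 2.0;
// Slide the layer left so the trimmed-in frame lands at comp time 0,
// then trim the layer and shorten the comp (or just its work area) to match.
layer.startTime = -trimIn;
layer.inPoint = 0;
layer.outPoint = trimOut - trimIn;
comp.duration = trimOut - trimIn;
// Alternative: leave comp.duration alone and render only the work area:
// comp.workAreaStart = 0;
// comp.workAreaDuration = trimOut - trimIn;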

Similar Messages

  • SetVector and GPU render mode

    Hello,
    If I use the setVector method on a BitmapData in GPU render mode on a mobile device, it works very slowly.
    This changes if the render mode is CPU. The method is fast enough.
    The problem is that if I use CPU render mode the app I'm developing gets very slow, while on GPU it works great.
    setPixel32 is even slower; it won't even work with lock/unlock.
    Where does the problem lie? How can I bypass this, since setPixel32 or setVector is a must-have when working with bitmaps?
    UPDATE:
    I've tested a lot, and it seems the problem lies not with the setVector method but with copying transparent pixels.
    You can find the code here, since I've asked the question on Stack Overflow too:  http://stackoverflow.com/questions/21827049/gpu-vs-cpu-render-mode-adobe-air

    Hi Lucian,
    Unfortunately, I am unaware of any tutorials on the net I could recommend, though I am sure there are a few -- most likely not on the subject of setting up render targets specifically, but on setting up Stage3D + shaders.
    One source I would recommend, though, if this is your first time setting up a Stage3D render pipeline, besides the code examples in the Stage3D Adobe reference, is this great book:
    http://www.amazon.com/Adobe-Stage3D-Molehill-Programming-Beginners/dp/1849691681/ref=sr_1_sc_1?ie=UTF8&qid=1392715725
    It's a quick read.  Nicely presented and explained.  I think in 3-5 days you could be up and running.
    I would also browse this site.  You might get some ideas:
    http://www.flashandmath.com
    Regarding using render targets to identify areas that have changed and need further processing:
    This is a fairly common practice when interactivity is key, and works well in some use-cases.
    For example, in Photoshop, there is a filter called "Liquify", with which you can stretch and compress an image in real-time.  There are 2 ways to do this effect, live: one is to map the source -- undistorted-- image onto a grid of polygons, and start pushing and pulling the vertices.  The other method -- same principle, but finer -- is to paint the compression field into a render target ( a texture ), temporarily, while you work in real time by 'painting' the effect ( note: the displacement vectors are based off of the gradient of the compression field ).  Conceivably, you could also paint the vectors directly, as if painting a normal map.  Once the user is happy with the look, then the user clicks 'apply', and the render target is used one last time to bake the displacements into the source image.
    Other example:  I once worked on a game where the player could use a laser beam to scar all the buildings of a 3D city.  The way this was done was that each building had its own render target ( usually a 256x256 ), into which the user could 'draw' ( even though the player never saw this render target, which was just a grayscale mask ).  The render target was then fed into the building's exterior shader, which would use it to identify which areas to show the destruction effect into.
    Overall, the idea is to use a render target ( usually smaller than the source texture it will end up modifying ) to paint either a mask, or vectors ( like a normal map ), which is then used along with the source texture in the final shader to create the look you want.
    Although I haven't had to use render targets for this particular situation lately, I do use render targets frequently otherwise: to draw my assets into and apply shaders to, and then draw the render target into the backbuffer at the very end.
    The reasons for this are explained here ( along with other tips ):
    http://forums.adobe.com/thread/1399727?tstart=30

  • After Effects layer render breaks

    Adjustment layer breaks 3d render for layers below.  Is there a way to break Stencil mode at a layer (other than precomposing)?

    No. That's inherent in AE's render pipeline.
    Mylenium

  • Render times increasing progressively, exponentially

    I am working with some high-res QuickTimes (1280x1024) from SnapZ. I have pre-processed the clips to be native to my final frame rate (23.98). I have cut the basic full-frame clips to VO in a sequence matching the clips' settings, then pasted them into a DV sequence, where I'll be adding SD cutaways and zooming in on parts of the interface. (It's software training stuff.)
    I'm noticing some very strange behavior when rendering just the backgrounds in the SD sequence. If I render all, it starts out rendering at what I would consider normal speed, estimating about an hour, but then starts slowing down until the estimated render time is over 10 hours for about 9 minutes of content. All the clips have identical motion settings, no overlays, filters, etc.
    If I cancel the render, and select everything but the first partially-rendered clip, now it estimates about 14 minutes to render, and in fact renders much more of the timeline, but then it starts slowing down again. Removing the partially rendered clip from the selection will speed it up again, but once one clip has slowed its render down, it will no longer render at normal speed.
    I suspect that there is some cache file that is getting clogged up with data from my hi-res media, and slowing down the process. I am running FCP 5.1.4, OS 10.4.9, on a 2.66 Intel Mac Pro with 5GB of RAM. My processor usage is at around 14% on all four processors during render.
    Any suggestions for which files to trash to clear the clog from my render pipeline?

    nope, just a series of hi-res quicktimes scaled down in the DV sequence, no filters, graphic overlays, etc. All the clips are handled the same way.
    The clips themselves are from SnapZ, and I had to nest them into sequences and re-export them to fix the frame rate issue. The whole 10fps issue was very stubborn, but I found that I could nest them into sequences and batch export them using item settings, which resulted in clips I knew would match the settings of my custom hi-res sequences exactly.
    FCP gets pretty sluggish when I am working in the native resolution, but I need to keep the full-res clips so I can add the zooms later. To share the content between sequences, I had to get the frame rates all locked down to 23.98. I just don't understand how two clips with identical settings in one sequence can require longer render times by a factor of 40.
    I haven't been able to find any obvious settings or files I can modify to un-clog the rendering so far. Perhaps there is some master/affiliate relationship between the hi-res instance and the DV instance. I'll try a replace edit with the master clip and see if it speeds things up at all.

  • Render bender

    I've done a search on strange render results and can't find anything specific to what I'm getting on my FCP 5.1.4 on my Intel.
    I've got a 1080 50i sequence with a still image import from a 1080 50i clip that's copied onto 2 layers directly above the original. It's not pretty and I appreciate someone might know a better way of doing it, but, I key out a range of luminance on layer 2 and set the output of the luma keyer to 'matte', go on to layer 3 and set the composite mode to 'travel matte - luma', then colour correct (in this instance using the 3-way) layer 3 and layer 1. This is to give me complete control over the areas that are classed as highlights and shadows and then correcting them differently.
    When I'm sitting on the clip unrendered, with the look I want everything's fine. Render it and the result is different to the unrendered version - the colours are changed randomly on the render clip. It seems to be associated with the sequence's codec. If I change the codec I get different results. I've seen the comments about flakey 10bit rendering but this seems to get a dodgy result whether 8 or 10 bit. I've re-booted, started from scratch on completely different projects and still there's a problem, it just looks a little different to last time!
    In case any of you are interested, I've put the xml of the sequence on a public folder at:
    http://www.box.net/public/dt5x7eaj01
    (I think you'll be presented with an advert first, press 'my file' to see the xml)
    Anyone have any insight on this? Thanks.
    beautiful intel mac pro 3ghz and powerbook g4   Mac OS X (10.4.7)   8gb ram, blackmagic decklink hd, 1tb firewire 800 external g-tech

    Nope. I can't make it work... Despite the fact we can see the matte that's generated by the luma key in the monitor, it doesn't seem to be actually rendered. It's like it only gets displayed for the user and doesn't get passed on to the render pipeline, meaning that just a full black screen gets processed. I couldn't even export the matte as a still frame or movie to re-import it.
    I'm writing to Apple about it. Thanks for your help, Jerry.

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben

  • Animated GIF not cycling frames in Tomcat

    I have pages that use animated gif files to get across a point.
    In the IDE, they cycle the images as they should
    SunAppserver cycles them properly.
    I only get the first frame in Tomcat (5.5.7)
    Over on java.net, I saw this on a page concerning JAI:
    GIF
    The decoder supports animated GIF files and GIF files with transparent background. Only the first frame of an animated GIF file may be loaded via JAI; subsequent frames must be obtained via direct use of the ancillary codec classes.
    Um, am I missing something, or is this a Tomcat problem?

    You just need transparency in your file. Not sure if it has to be pre-multiplied. In GraphicConverter you need to remove the backgrounds. I haven't used GC in years so I can't say how you can be sure it's gone, but in Photoshop, Illustrator and After Effects you can choose to have a checkerboard background to tell you when you are seeing 'thru' the image and by how much (the overall opacity or mask of the whole image, if that doesn't confuse the issue).
    If you are using PS images they will almost certainly be generated with a set of rectangles including a perimeter cropping box and white background, and these need to go either before the image is bitmapped in a vector-based app, or the relevant pixels erased after it has been bitmapped in GC. As previously mentioned, if you have a unique background colour, colour selecting/keying are some ways to do this. It is much faster for >10 images to generate the plots without the background, but this may not be possible; what software is the source material coming from?
    I regularly bring vector-based moving artwork into Keynote with alpha, from Adobe AE, Apple Quartz and rendered .mov files. Some codecs (H.264) don't support a separate alpha channel (RGBA) but PNG (slow to render), Animation and a bunch of others do. Pre-multiplying removes the A channel in the RGBA (by multiplying the R/G/B by the A values), thereby speeding things up and making the file compatible with some GPUs/render pipelines.
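    Just to illustrate the arithmetic of pre-multiplying (plain JavaScript, not any particular API; channel values normalised to 0..1):
    // Fold the alpha into the colour channels (effectively compositing against black),
    // which is why the separate A channel can then be dropped.
    function premultiply(r, g, b, a) {
        return [r * a, g * a, b * a];
    }
    premultiply(1.0, 0.5, 0.0, 0.5); // -> [0.5, 0.25, 0]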

  • Transparent Animated Gif not possible anymore in Keynote 09?

    I have created an animated GIF (a series of transparent GIFs) with Adobe ImageReady, which was showing and playing OK in Keynote 2.0.2. I have tried the same animated GIF in 5.0, and it seems the transparency of the GIF is gone, though it plays OK. It would seem silly to lose this feature in later versions of Keynote.
    Any ideas why Keynote 5.0 shows the GIF without the transparency?
    Any ideas are appreciated!
    -AstroGrad

    You just need transparency in your file. Not sure if it has to be pre-multiplied. In GraphicConverter you need to remove the backgrounds. I haven't used GC in years so I can't say how you can be sure it's gone, but in Photoshop, Illustrator and After Effects you can choose to have a checkerboard background to tell you when you are seeing 'thru' the image and by how much (the overall opacity or mask of the whole image, if that doesn't confuse the issue).
    If you are using PS images they will almost certainly be generated with a set of rectangles including a perimeter cropping box and white background, and these need to go either before the image is bitmapped in a vector-based app, or the relevant pixels erased after it has been bitmapped in GC. As previously mentioned, if you have a unique background colour, colour selecting/keying are some ways to do this. It is much faster for >10 images to generate the plots without the background, but this may not be possible; what software is the source material coming from?
    I regularly bring vector-based moving artwork into Keynote with alpha, from Adobe AE, Apple Quartz and rendered .mov files. Some codecs (H.264) don't support a separate alpha channel (RGBA) but PNG (slow to render), Animation and a bunch of others do. Pre-multiplying removes the A channel in the RGBA (by multiplying the R/G/B by the A values), thereby speeding things up and making the file compatible with some GPUs/render pipelines.

  • Transparent Animated Gif in Keynote 9

    Hello,
    I have been trying to insert some of my transparent animated GIFs in Keynote 9 with zero success. I have checked that the background of the file is actually transparent - it is - and I have attempted to use the Alpha tool - it does not work with GIF images. Should I give up completely?
    Previous posts on the subject have not provided any solution, so I would immensely appreciate any further suggestions on this!
    Thanks!

    You just need transparency in your file. Not sure if it has to be pre-multiplied. In GraphicConverter you need to remove the backgrounds. I haven't used GC in years so I can't say how you can be sure it's gone, but in Photoshop, Illustrator and After Effects you can choose to have a checkerboard background to tell you when you are seeing 'thru' the image and by how much (the overall opacity or mask of the whole image, if that doesn't confuse the issue).
    If you are using PS images they will almost certainly be generated with a set of rectangles including a perimeter cropping box and white background, and these need to go either before the image is bitmapped in a vector-based app, or the relevant pixels erased after it has been bitmapped in GC. As previously mentioned, if you have a unique background colour, colour selecting/keying are some ways to do this. It is much faster for >10 images to generate the plots without the background, but this may not be possible; what software is the source material coming from?
    I regularly bring vector-based moving artwork into Keynote with alpha, from Adobe AE, Apple Quartz and rendered .mov files. Some codecs (H.264) don't support a separate alpha channel (RGBA) but PNG (slow to render), Animation and a bunch of others do. Pre-multiplying removes the A channel in the RGBA (by multiplying the R/G/B by the A values), thereby speeding things up and making the file compatible with some GPUs/render pipelines.

  • In After Effects... How do I add a ramp to a stroke so it fades to the background as a reflection?

    I am trying to create a reflection on a solid black background with both text and an image/movie. Whenever I use Linear Wipe it wipes out the inside but leaves the stroke solid.
    In simple step-by-step terms, how do I apply a ramp/gradient to a stroke on either text or an image in After Effects?

    What kind of stroke are you applying?  It sounds like you're using Layer Styles, but Layer Styles occur AFTER plugin effects in the render pipeline, meaning the stroke is being applied to the layer AFTER the linear wipe, rather than before it.
    You can fix the issue in a number of ways; the most obvious is to precompose the layer and add your Linear Wipe to the precomp.

  • Pre-Comp with Collapse Transformation / Masking / Expression Controls / Etc

    Is there an issue with Pre-Comps and with the Collapse Transformation enabled?
    I have a Pre-comp that has various layer modes in it and in the main comp, the Pre-Comp has collapse transformation enabled to see the layer modes so the text can blend into the background (otherwise the text is just a flat color).
    The pre-comp works completely fine until I add a mask to it, or say an expression control, and then all of a sudden the collapse transformation stops working and the text loses its properties.
    This cannot be working as intended...otherwise I can't do any sort of reveal with a mask with anything that is pre-composed and has any sort of blend modes within the pre-comp.
    The pre-comp also loses its collapse transformation if you even put an expression control on it.

    Contrary to what you think, it works as intended. Effects require rasterization, as do masks. This is inherent in how AE's render pipeline works. You need to restructure your project.
    Mylenium

  • Again with the Continuously Rasterize of 3D Shape Layers and Effects Problem in CS3

    Hi! I have been looking in the forum for some help with this issue, and although I have found a lot of related threads I really can't make this combination of features work well...
    I mean, I can't apply any "Distort" effect to a 3D "Shape Layer", because the effect ultimately causes unexpected results (basically the effects lose their "alignment" with the layer as soon as I transform that layer, even if I make a pre-comp and apply the transformation there, since I still need rasterization activated on it to maintain quality in zooms and scales). So, is there really a way that would allow me to apply these effects to a continuously rasterized layer (or pre-comp) and have them work as well as if they were applied to a Solid/Image layer when some kind of transformation or camera movement is used over it? I still really hope so...
    Well, I'm now a little desperate, so I'll be very grateful for any help; and sorry if I'm missing something obvious, but I think I have tried all the possibilities (pre-composing and turning off the 3D Layer option on both items in any combination, turning the rasterize option on or off here and there, applying the effect in the pre-comp layers or in the comp ones, etc...) and at this point I'm starting to lose my faith...
    PS: Ouh, and I hope you understand my "English" too...

    >So, is there really a way that would allow me to apply these effects to a
    >continuously rasterized layer (or pre-comp) and have them work as well as if
    >they were applied to a Solid/Image layer when some kind of
    >transformation or camera movement is used over it?
    That's completely impossible. It goes against how AE effects (and ultimately the entire render pipeline) work. I think I know what you are hoping to achieve, but it is mathematically and technically illogical. Displacement effects work on pixels, not the original vector data, anyway. Only dedicated vector effects can manipulate path contours. AE is no different than Illustrator or Freehand in that regard.
    Also: whenever you change the scale of a Shape Layer, you change the size of the vector data, which is then rasterized based on the bounding box and in some cases the cropping area defined by the composition borders. Therefore the layer size would steadily increase anyway, and the result of your displacement would be different for each frame - what looks okay with 5px displacement values at frame 1 would barely be noticeable on a layer that is "blown up" to 10 times the size at frame 100.
    So for all intents and purposes you need to work with big layers and without continuous rasterization, no matter how much you may dislike it.
    Mylenium

  • Effects blurring layers

    OK, so I didn't want to jack the "Blurs" thread started by Darrell Toland, but I have a similar problem...
    I have two examples... Motion Problem 1 and Motion Problem 2. Just download the QT file and take a look...I don't have a website to post on, and FTPs change it to .flv and look like crap. Sorry.
    Anyway, it's a map that zooms in to an area (Atlantic Ocean). Atlantic Ocean text has Volumetrix effect on it (glow). Problem is, the effect makes the text blurry and it doesn't move correctly with the rest of the zoom. You can see that in Motion Problem 1. With Motion Problem 2, I split the text at the point where the glow starts (making it two layers) and put the effect on the second layer. So it goes from looking good to jumping and getting blurry right when it gets to the layer with the effect.
    I keep thinking back to specialcase's render pipeline comments about applying effects to top (parent) layers that have been nested rather than individual layers, but that wouldn't work here.
    Any suggestions how to keep the text from blurring when an effect is applied? Thanks.
    Jonathan

    Thanks, Peter. I don't know why I didn't just post this on your forum...since you developed the effect! I've got it bookmarked and all. I'll mess with the fixed layer stuff. But what about the effect making the text blurry, or "soft." You can see that in Problem 2...where it goes from crisp (no effect) to blurry (just as the effect starts) or in Problem 1, where the "Atlantic Ocean" text is softer than the rest since it has the effect applied.
    Jonathan

  • GPU Performance with Illustrator - New Notebook Specs

    I'm looking at purchasing a new notebook for Illustrator use.  The system requirements for Illustrator list only certain models of NVIDIA video adapters.  The notebooks I'm looking at have the NVidia Optimus K1100M 2GB video adapter.  I would like confirmation that the Optimus series of adapters will not provide Windows GPU performance and that I should be looking at only the adapters listed within this link:
    System requirements | Illustrator
    Thanks!
    Brent

  • Keylight pre-comp not respecting garbage mattes?

    Hi all!
    I'm using a keying method I saw on an old tutorial from Aaron Rabinowitz and Andrew Kramer. With it, you use many copies of the Color Key effect to get a noisy rough key. Then use Simple Matte with a negative value to get a tight green halo around your subject.
    Then you use your high-end keyer (like Keylight) to only key the halo out. It gives better results than trying to build multiple keys to get rid of dark areas or uneven lighting in the entire scene.
    In this example, I've drawn a couple garbage mattes to get rid of large sections of the scene. I then do the Color Key and Simple Matte trick. Then all that is precomped. In my new comp, the only effect is Keylight. However, when I switch over to Screen Matte view, I still see the objects I masked out.
    Theoretically, those objects shouldn't even exist in the render pipeline if they are removed in a precomp, right? Any idea what's going on?

    I'm still having the same confusion. Maybe this image will help.
    If I pre-rendered the precomp with the super tight green border, there's no way Keylight would ever see source footage, so why does it here?
