Batch processing and rendering multiple clips in SpeedGrade CC?

I'm new to SpeedGrade CC, just watched 2 hrs of Lynda training, and I'm just about ready to go. Before people jump on my question, let me walk through what my intended use will be.
Unlike most of the content and workflow discussed in the training, I'm not color grading a sequence of clips stitched together in a timeline, but multiple clips that have been pre-edited to length, to which I want to apply the same color correction. This will only be done to small groups of clips, maybe 4-5 at a time, but since I'm all about efficiency, I wanted to ask what the best workflow for doing this is.
Let's assume that I've taken one of the clips and adjusted everything natively in Sg (no Dynamic Link from Pr). I like where I ended up with the settings, so I saved a .look preset file.
So what is the best way to apply these settings to the other files? Creating multiple, separate Sg projects doesn't seem efficient, and having to cue each one up successively for rendering seems equally slow. In the lessons the instructor alluded to working with and processing "dailies", which I assume would also be achieved through a batch process, but that isn't covered.
I appreciate the advice!
Steve

Interesting ... process ... you have there. Hmmm. I can't think of any way you could work in Sg that isn't on a timeline. Whether made in PrPro or there in Sg (native) ... it's a video editing program, and that's done on a timeline. Plus, the way both PrPro and Sg are designed, you MUST define and name a project before you can start to work.
Now, other than where the working files for the project will be kept, you don't really have to fill the forms out completely, especially in PrPro. After you give your project a name and say where its files will be kept, you can simply skip the rest, and when you create a new sequence and drop a clip onto it, the sequence settings will be set to match your footage.
Now ... do you have all one type of footage (codec, frame size & rate) or different kinds, say some 1080p-24fps, some 720p-60fps, some 480i-29.97fps, that sort of thing?
You know, what I'm thinking ... might actually be the easiest. Create a project in PrPro ... and a new sequence for each type of footage. Use the Media Browser panel to import all your footage into the Project panel ... drag & drop a few similar clips onto a sequence, then Dynamic Link that over to Sg (takes a couple seconds) to grade/look 'em. Save 'em back to PrPro, then render that sequence out. Then, when you know you've got a good render, either delete the clips from that timeline & re-use it, or create a new one. Do your next group. Rinse and repeat, so to speak.
I take it you've no reason to keep the sequences of graded clips past rendering them, so you should be able to use just the one "project" and import folders as necessary, removing them as you will. You'll spend hardly any time on the "project" details, but the programs will be happy.
Again, as noted above, you can either copy a grade to other clips on a sequence or put an "adjustment layer" over the clips of a sequence in PrPro (Project panel: New Item -> Adjustment Layer) and then grade that ... it will automatically be applied to all clips under it.
And before you ask again, there isn't any way to work on a single clip without it being a "project" with a timeline. These aren't Photoshop, where you can open a single image.
Neil

Similar Messages

  • Selecting and Dragging multiple clips - CS6 Sniper mode

    With the introduction of the ability to select edit points, it's now inconvenient to drag multiple selected clips or add clips to a selection, because in some cases (depending on timeline zoom state and/or clip duration) the edit points are altered instead of the clips. That behaviour drives me nuts. It forces you to do one of the following before you can select and/or drag multiple clips in comfort:
    sniper mode - put the cursor over a place in the selection where the cursor will not indicate edit-point trimming mode
    to avoid losing the selection when using sniper mode - group the clips after adding every clip to the selection and/or after a dragging operation
    dynamically change the zoom state so that you will not have to use sniper mode
    use copy/cut/paste - may require additional pre-pasting steps
    IMHO, this behaviour should be rethought.
    How fast it was in CS5.5 and older versions:
    Please, Premiere Pro Programming Team, especially Tim Gogolin, Steve Hoeg, Peter Lee, Gerry Miller, James Mork, Vivek Neelamegam, Axel Schildan, Jerry Scoggins, Sven Skwirblies, Tod Snook, Jesse Zibble, rethink the behaviour of selecting and dragging clips. Start giving links to beta-versions to other editors too, not only to editors like Philip Bloom.
    Selecting edit points is sometimes helpful and sometimes distracting.
    To get the best of both worlds (CS6 edit points vs. CS5.5 and older), a switch could be added to the Keyboard Shortcuts list, like the "hidden" Add Clip Marker shortcut.
    Let's name it "Application > Sequence > Select Edit Points".
    When the Shift or Shift+Alt key is held and some clip is selected, clicking on an edit point of another clip should add that clip to the selection in the same way previous versions of Premiere Pro did.
    To drag clips in the legacy, sniper-free mode, only a switch can help. Let's name it "Application > Sequence > Select Edit Points when Multiple Clips are Selected".
    To bring comfort back to selecting and dragging multiple clips, additional switches cannot be avoided. You might also add these options to Preferences > Trim:
    Allow Selection tool to choose Roll and Ripple trims without modifier key
    Allow Selection tool to select Edit Points while multiple Clips are selected
    Feature Request/Bug Report Form
      ******BUG******
    Concise problem statement: Inconveniences when selecting and dragging multiple clips
    Steps to reproduce bug:
    1. Create a selection of multiple clips with Shift+Click
    2. Try to drag the clips
    Results: With the introduction of the ability to select edit points, in some cases (depending on timeline zoom state and/or clip duration) it's now inconvenient to select and drag multiple selected clips, because the edit points are selected instead of the clips
    Expected results: Clips should be selected/dragged as in CS5.5 and older
    Link: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform

    With all due respect Kevin, nobody at Adobe has the right to change a program SO drastically.  I am also a mouse person, and am MUCH faster using the mouse vs. the keyboard.
    When people have invested YEARS in a program like Adobe's, they get used to things - how things work, the feel of a program.  I really don't see how you all can just change it on a whim.  IF you are going to do this, BOTH methods should still work.  You cannot just stop the way people are used to editing.  You can't just decide to change a program so drastically that it doesn't feel like the program we have grown to love over many long years.  If Adobe as a whole is changing things just because a few people in your company voice opinions like the one I've quoted below, it's not the way to go.  Adobe should really be putting feelers out there and seeing how people are actually using the program.  Things should be getting easier and more powerful, sure, that's natural, but you can't just decide to remove things that mouse users do every day. :-(
    Upgrading a program, and changing it to this degree, are two different things in my book.  I have not yet upgraded, but the things I'm seeing are very scary.  For one, way, way too many bugs.  The number of bugs being reported on so many boards is just beyond comprehension.
    And a final note: I, for one, am very glad Steven is posting these videos (I'd like to hear him go through the steps, though - just watching the video with no audio of what he is doing is not so great; however, after watching some of his posts, he knows what he's doing, and has a good head to think with).
    I mostly read posts, but I thought I would voice my opinion; if none of us speak up, nothing will get changed.  I'm really hoping that Adobe implements some changes in the next revision - .5 or whatever it will be called.  Reading posts like this one is really depressing.  Did you watch Steven's video on a task as simple as moving multiple clips around on a timeline?  Doing this with a mouse just isn't going to work for many; removing simple tasks done by way of the mouse would stop me from upgrading.
    Please address this in upcoming updates.
    Dave.
    Kevin Monahan wrote:
    I went to NLE school in Hollywood in the '90s. Their mantra was, "never touch the mouse." I rather like the fact that I need to touch the mouse much less in CS6, but this is my personal opinion and the way I was trained.

  • How Do You Apply Process Effects To Multiple Clips In A Multi-Track?

    In a multitrack sequence brought over from Premiere, I have over a dozen clips from a funeral service video that need noise reduction.
    But, because it's a process effect, I can only work on one clip at a time. Furthermore, I may need to apply several passes of the NR effect to eliminate the noise without sounding weird.
    How then do I apply that effect (and its parameters) to the rest of the clips in the sequence?
    In Premiere, you can copy and paste effects (and their parameters) from one clip to another. There is no such thing in Audition that I can see.

    Hi, Alex.
    It seems you want to do some broadband de-noise processing. While one can take a noisefloor sample from one clip and apply it to others (great for when you have multiple takes from the same setup), it may be best to noise-sample each clip individually if the mic has moved position enough for the tonal characteristics to change.
    An example would be if on-screen talent is walking in an outdoor location. In one part of the location, a particular section of the clip has a lot of BG traffic noise. Another recording has a fountain in the BG - while it still may have some of the traffic in it, the predominant BG noisefloor is "strong fountain/weak traffic", so like others have mentioned, it makes more sense to make a new BG sampling to match the characteristics of this clip.
    All that being said, here is how you BATCH PROCESS / broadband de-noise a bunch of clips; it's a 4-tier, 21-step process when starting from scratch.
                                  NOTE: there is a manual cheat after the Batch Processing lesson - skip to that if you wish
    (WARNING - this type of processing is most successful if the file has been EQ'd or gain-boosted (if needed) before processing. This is also true for the noisefloor sample: it should be taken from the pre-adjusted clip so that everything matches in level.)
    1) find / open a source clip that best represents the BG noisefloor you wish to sample as your broadband source for processing, and select a stretch of that noisefloor;
    then in the WAVEFORM editor window press Alt-Shift-C to save this selection as a new file - give it a unique name that represents its function
         a. ideally, this selection will be more than 5000 samples
         b. should be free of non-BG noise sounds (Dx, footsteps, mouth sounds); it should sound like a very short ambiance loop
    2) with your new sample saved and your source clip still open in the UI:
         a. open the Noise Reduction (process) UI and load your noise print file
         b. adjust the Noise Reduction and Reduce by sliders (I suggest you start with a noise reduction value of 100% and a reduce value of 3 to 6 dB)
         c. under Advanced Settings, set Spectral to 0%, Smoothing to 20, Precision to 7, and Transition width to 2dB - these settings
             are like painting not with a wide brush but with a thinner brush, which means less cancellation of frequencies you want to keep and less artifacting
         d. toggle on/off the Output Noise Only function to see just what you are subtracting in your process - a great way to make adjustments until your
             noisefloor-only sound is free of Dx and other Production sounds
         e. optional advanced step - in the process UI Frequency window, you can draw a curve to select the frequency ranges that will be processed - this is another
             way of "thinning the brush" so that you are NOT processing where it's not needed in the broadband range - i.e. hiss from a low-quality mic,
             open the FA window [alt-Z] to see realtime Freq. Analysis on playback of your clip with the processing;
             adjust your settings and the curve in the window to counter the frequency range of the noisefloor - this process is like alchemy, wicked awesome
         f. once you're happy with your settings, click the Save Effects Preset button (down-arrow-on-hard-drive icon) next to the Presets pulldown button.
                   do NOT apply your process yet - that's the next step
    3) now, you're going to create the Favorite that the Batch Process will use to process your clips
         a. select Favorites / Start Recording Favorite (every user function of the software is now being recorded into a macro)
         b. in the Noise Reduction UI, click the Load Noise Print button and select your saved file
         c. now, select your preset from the pulldown menu
         d. click the Select Entire File button below the UI Freq. window
         e. click Apply
         f. click Favorites / Stop Recording Favorite
    4) now, under EDIT, select Batch Process
         a. select your new Favorite from the pulldown menu
         b. drag and drop the files into the Batch window or load them from the Load button in the upper left of the Batch window
         c. click the Export Settings button in the bottom left
         d. set any pre/postfix labeling additions (this works a lot like Adobe Bridge's Batch Rename tool - amending the original filename to differentiate it)
         e. set a NEW save location to isolate the files
         f. set the format / type / bitrate  (best to keep the original format or a higher quality); when finished, click OK
         g. in the lower right corner, click the RUN button and watch the magic right before your eyes
    [HERE IS THE CHEAT]
    Now that you've learned the magic of batch processing (FYI, a new Favorite can contain / execute multiple existing effect processes), here is what I recently did in a doc with interview footage - guys in front of a green screen with a less-than-perfect sound environment.
    1) in Multitrack mode, double-click your clip to process
    2) in Waveform mode, select the noisefloor area to sample and type shift-P
    3) type ctrl-shift-P to open the UI - adjust your De-noise settings for ALL of your processing
    4) type ctrl-A to select the entire file
    5) click Apply; type ctrl-shift-S to save this processed file (I add "_PROC") as an alternate to the original, which still exists should you want to go back to it easily - the ALT file will be in the multitrack timeline
    6) type G (de-select the I/O area), then press F12 - this returns you to Multitrack mode
    7) double-click your next clip to process and repeat steps 3-6 until finished (you won't need to tweak the settings in step 3 after the initial pass)
    In 3-5 minutes, you're done!
    I discovered another layer of coolness within Audition regarding CLIP Effects Racks, where one can set up a stack of effects, save it as a User Preset, and apply it to an individual clip, a set of clips, or a track.  This is great when you want to apply, for example, a Parametric EQ for filtering an outdoor location or boosting a lav mic's high-frequency shelf for additional sibilance PLUS a tube-modeled compressor to handle dynamics... you spend a lot of time tweaking the settings "just right", and now you want to apply them to more than one clip on a track.
    In a short film I'm working on, I use this technique to apply EQ and room ambiance (reverb) matching camera-perspective changes on a single track of ADR for off-camera Dx. I place the Dx on a track, splice at the camera edits to make unique clips, select the Audio Perspective preset I made for each camera angle, and voila!
    Have fun!
    -CS

  • Batch Processing and Putting Two files together?

    Hello,
    I'm trying to find out if there is a way, in Photoshop, to automate placing a logo and border from another file into a set of photos. Basically, I have a folder of, let's say, 4x6 images, and I have a file that has two layers: a thin transparent border layer, and a layer housing the logo. I would like to find out if it's possible to automate the process so I can batch a lot of files, placing this file (or the two layers) onto the original image, then saving and closing and going on to the next file. Any ideas how to accomplish this? Thanks!
    Regards,
    Dave

    Here is a simple script I made a while back that places one of two different logo files on the image, depending on whether the image is vertical or horizontal in orientation.
    All you need to do is put your two logo files in a folder and tell the script where they are. After that, when you run the script, it will place the appropriate logo file onto your image depending on the orientation. I used "C:\\MyLogoA.tif" and "C:\\MyLogoB.tif" for this script.
    You can run this script from a batch process.
    var doc = app.activeDocument; // the active document
    var width = doc.width.value; // width of the original image, in pixels
    var height = doc.height.value; // height of the original image, in pixels
    // Call the placeLogo function with whichever logo matches the orientation
    if (width > height) {
        placeLogo("C:\\MyLogoA.tif"); // horizontal (landscape) image
    } else {
        placeLogo("C:\\MyLogoB.tif"); // vertical (portrait) image
    }
    // This is the placeLogo function. It places the file at "path" into the
    // active document as a new layer, with zero offset, via the Place event.
    function placeLogo(path) {
        // =======================================================
        var id35 = charIDToTypeID( "Plc " );
        var desc8 = new ActionDescriptor();
        var id36 = charIDToTypeID( "null" );
        desc8.putPath( id36, new File( path ) );
        var id37 = charIDToTypeID( "FTcs" );
        var id38 = charIDToTypeID( "QCSt" );
        var id39 = charIDToTypeID( "Qcsa" );
        desc8.putEnumerated( id37, id38, id39 );
        var id40 = charIDToTypeID( "Ofst" );
        var desc9 = new ActionDescriptor();
        var id41 = charIDToTypeID( "Hrzn" );
        var id42 = charIDToTypeID( "#Pxl" );
        desc9.putUnitDouble( id41, id42, 0.000000 );
        var id43 = charIDToTypeID( "Vrtc" );
        var id44 = charIDToTypeID( "#Pxl" );
        desc9.putUnitDouble( id43, id44, 0.000000 );
        var id45 = charIDToTypeID( "Ofst" );
        desc8.putObject( id40, id45, desc9 );
        executeAction( id35, desc8, DialogModes.NO );
        // =======================================================
    }
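    The script above assumes a document is already open and active, which is exactly what Photoshop's Batch command (File > Automate > Batch, pointed at an action that runs the script) provides. If you'd rather not set up a Batch at all, here is a minimal, hypothetical standalone variant: keep the placeLogo() function as-is, but replace the top-level lines with a folder loop like this. The folder path and JPEG-only filter are assumptions, so adjust them to your setup:
        // Hypothetical wrapper: runs the orientation check and placeLogo()
        // on every JPEG in a folder. "C:/MyImages" is a placeholder path.
        var srcFolder = new Folder("C:/MyImages");
        var files = srcFolder.getFiles("*.jpg");
        for (var i = 0; i < files.length; i++) {
            var doc = app.open(files[i]); // becomes app.activeDocument
            if (doc.width.value > doc.height.value) {
                placeLogo("C:\\MyLogoA.tif");
            } else {
                placeLogo("C:\\MyLogoB.tif");
            }
            doc.flatten(); // merge the placed logo layer before saving
            doc.close(SaveOptions.SAVECHANGES); // overwrites the original - work on copies if unsure
        }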

  • Batch processing and replication

    Oracle 11gR2 (11.2.0.3), Linux x86_64
    I wanted to know if anyone has come up with a solution for replicating batch-process data. Oracle recommends in the documentation (as a best practice) not to replicate batch-process data through Streams, but rather to run the batch process on the source and then on the destination database. If we cannot do that, what are our options?
    Thanks all.

    Anyone have any ideas/thoughts?

  • Batch processing and parallelism

    I have recently taken over a project that is a batch application that processes a number of reports. For the most part, the application is pretty solid from the perspective of what it needs to do. However, one of the goals of this application is to achieve good parallelism when running on a multi-CPU system. The application does a large number of calculations for each report, and each report is broken down into a series of data units. The threading model is such that only, say, 5 report threads are running, with each report thread processing, say, 9 data units at a time. When the batch process executes on a 16-CPU Sun box running Solaris 8 and JDK 1.4.2, the application utilizes on average 1 to 2 CPUs, with some spikes to around 5 or 8 CPUs. Additionally, the average CPU utilization hovers around 8% to 22%. Another oddity is that when the system is processing the calculations, and not reading from the database, the CPU utilization drops rather than increases. So the goal of good parallelism is not being met right now.
    There is a database involved in the app, and one of the things that does concern me is that the DAOs are implemented oddly. For one thing, these DAOs are implemented as either singletons or classes with all static methods. Some of these DAOs also have a number of synchronized methods. Each of the worker threads that processes a piece of the report data makes calls to many of these static and single-instance DAOs. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and writes the status of the completed work. This master DAO does not handle writing the results of the data processing. When each data unit completes, the "master DAO" is called to update the status of the data unit and get the next group of data units to process for this report. This "master DAO" is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons, and their accessor methods are synchronized.
    My gut is telling me that having each thread call a singleton, or a series of static synchronized methods, is not going to help you gain good parallelism. Being new to parallel systems, I am not sure that I am right in even looking there. Additionally, if my gut is right, I don't know quite how to articulate the reasons why this design will hinder parallelism. I am hoping that anyone with experience in parallel system design in Java can lend some pointers here. I hope I have been able to be clear while trying not to reveal much of the finer details of the application :)

    Quoting the description above: "...there is what I'll call a 'master DAO' that handles the logic of what work to process next... This 'Master DAO' is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons and their accessor methods are synchronized."
    What I've quoted above suggests to me that what you are looking at may actually be good for parallel processing. It could also be an attempt that didn't come off completely.
    You suggest that these synchronized methods do not promote parallelism. That is true, but you have to consider what you hope to achieve from parallelism. If you have 8 threads all running the same query at the same time, what have you gained? More strain on the DB and the possibility of inconsistencies in the data.
    For example:
    Scenario 1:
    say you have a DAO retrieval that is synchronized and caches its result. The query takes 20 seconds (for the sake of the example). Thread A comes in and starts the retrieval. Thread B comes in and requests the same data 10 seconds later. It blocks because the method is synchronized. When Thread A's query finishes, the same data is given to Thread B almost instantly.
    Scenario 2:
    The method that does the retrieval is not synchronized. When Thread B calls the method, it starts a new 20-second query against the DB.
    Which one gets Thread B the data faster while using fewer resources?
    The point is that it sounds like you have a bunch of queries whose results are being used by different reports. It may be that the original authors set it up to fire off a bunch of queries and then start the threads that will build the reports. Obviously the threads cannot create the reports unless the data is there, so the synchronization makes them wait for it. When the data gets back, the report thread can continue on to get the next piece of data it needs; if that isn't back, it waits there.
    This is actually an effective way to manage parallelism. What you may be seeing is that the critical path of data retrieval must complete before the reports can be generated. The best you can do is retrieve the data in parallel and let the report writers run in parallel once the data they need is retrieved.
    I think this is what was suggested above by matfud.

  • Is there a batch process for opening multiple raw images and applying the same, stored preset?

    I often have multiple exposures of the same subject, which I would like to treat identically. At present I have to open each one individually in Camera Raw, apply the preset, save, go on to the next, etc.
    I'm looking for a way, if there is one, to select them all and apply the preset to all of them at once. Or perhaps there is a scripting method that would allow this?

    Which version of photoshop, camera raw and operating system are you using?
    You should be able to open all the images inside Camera Raw, click Select All (upper left of the Camera Raw dialog, above the filmstrip of open images) and then apply the settings to all the selected images.
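    If you do want to script it, one hedged possibility for raw files - assuming your Camera Raw preferences save image settings in sidecar .xmp files, which is the default for proprietary raw formats - is to copy the graded exposure's sidecar alongside each sibling raw, so Camera Raw picks up the same settings when those files are opened. A rough ExtendScript sketch; the paths, filenames and raw extension are all placeholders:
        // Hypothetical sketch: reuse one exposure's Camera Raw sidecar settings
        // for every other raw file in the folder. Paths/extension are placeholders.
        var master = new File("C:/Shoot/IMG_0001.xmp"); // sidecar of the graded frame
        var raws = new Folder("C:/Shoot").getFiles("*.cr2");
        for (var i = 0; i < raws.length; i++) {
            var base = raws[i].name.replace(/\.[^\.]+$/, ""); // strip the extension
            if (base != "IMG_0001") {
                master.copy("C:/Shoot/" + base + ".xmp"); // clone the settings sidecar
            }
        }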

  • JSF component processing and rendering order

    Hi,
    I have a request-scoped managed bean that has two list boxes on it.
    The first is always displayed, and the second should only ever be displayed when the first one has had a value selected in it.
    So, I use something like this:
    <h:selectOneListbox styleClass="selectOneListbox" value="#{moveBean.sourceListIndex}" size="1" onchange="javascript:submit();" >
        <f:selectItems value="#{moveBean.sourceLists}" />
    </h:selectOneListbox>                               
    <h:selectOneListbox styleClass="selectOneListbox" value="#{moveBean.destinationListIndex}" size="1" onchange="javascript:submit();" rendered="#{moveBean.sourceListIndex != null}">
        <f:selectItems value="#{moveBean.destinationLists}" />
    </h:selectOneListbox>
    The second list is correctly only ever displayed when the first one has been selected.
    However, the value of the second list ("#{moveBean.destinationListIndex}") is never applied.
    Given that this is essentially a request-scoped operation, I'm loath to make it into a session bean, but I don't seem to have many options.
    Help!
    Suggestions?
    -Chris

    You have four options:
    1) Make it a session bean.
    I did half of this. It was the values of the selects that I moved to a session object, while keeping the retrieval of the list-box contents in request scope:
    <h:selectOneListbox styleClass="selectOneListbox" value="#{sessionObject.sourceListIndex}" size="1" onchange="javascript:submit();">
        <f:selectItems value="#{databaseBean.sourceLists}" />
    </h:selectOneListbox>
    The databaseBean is the request-scoped object. This has worked for me in this instance.
    2) Put the data to be stored in session in context.getExternalContext().getSessionMap().
    3) Put the data to be transferred from page to page in h:inputHidden.
    4) Use component bindings.
    How would this solve the problem? An example, please.

  • Batch processing a bunch of clips?

    hi,
    I have a couple of dozen interview clips (m2t) to which I need to add visible timecode for client appraisal.
    I looked in AME but saw no ability to do this. Is there any way of doing it other than individually on the timeline and then exporting to MP4?
    thanks

    Thanks shooternz. Unfortunately, I probably should have added in my original post that I want them produced as individual files rather than all off the timeline in one hit.
    Each interview has to go to the interviewee for release, so obviously I can't send hours and hours of interviews to each interviewee and expect them to find, let alone check, theirs (heck, an hour or so is already sending me to sleep ;-) ). I had a similar job years ago and used an extension in Sony Vegas called Production Assistant which did this sort of thing almost automatically.
    Maybe if there's no alternative I'll jump back into Vegas for this project.

  • Batch processing and maplisteners -detecting when the maplisteners complete

    Hi
    I have a scenario.
    Get data from data source 1 and load it into a cache (say, the datasource cache). On this cache I have set a MapListener that does some transformation and puts the data into another cache. I want to start another process when the data transformation is complete.
    Since I have not implemented a synchronous listener, I have no way to know when the transformation is complete (as we may not know when all the MapListener threads for the datasource cache have completed).
    Is there a way to find out when all the transformation is complete? (Please note that N records/objects in the datasource may form one transformed record, hence counting records may not be a good idea.)
    It is possible to update a flag in each record in the datasource cache and have another thread check whether all records are transformed before starting the process. But I would like to hear from your experience if you have any better solution that I can leverage from Coherence itself. (In other words, if I sense some inactivity in the transformed cache, I can safely assume that the transformation process is over.) Views welcome!!!!
    regards
    Ganesan

    Hi Ganesan,
    Why don't you fire off some events from the transformation threads when they finish?
    You should be able to know how many transformation operations were to be done. When you have received that many completion events, you are done.
    Best regards,
    Robert

  • How to batch process (color correct) multiple MXF files?

    Newb here with a question I cannot find an answer for. I'm very new to CS5.5 and am very frustrated with something that should be a simple procedure, at least it was with my last NLE.
    I am shooting amateur hockey games using a Canon XF100. Typically I shoot at 720 60p, and with all the starts and stops of the game there may be 30-40 MXF files for a game. I'm having no problems importing into CS5.5; the files all show up on the timeline.
    I want (need) to be able to color correct ALL these files at once using the fast color corrector and/or the 3 way color balance. For the life of me I cannot seem to do this and I cannot find any answers despite many hours of searches.
    Last night I did (after much mucking around) manage to 'nest' the files and do the color correction, but when I went to export via Media Encoder (to render an MP4 file) I kept getting error messages and was unable to proceed. This happened 6 times; the encoding would start and then, after about 20 minutes and some progress, it would give me an error message.
    Can someone please, pretty please!, explain a simpler way to do this? I feel like I am missing a very obvious step and I'm probably just phrasing the question incorrectly.
    Many thanks in advance.
    Dave
    Adobe CS5.5, Windows 7 Premium
    Intel i7
    9 GB Ram
    500 GB Velociraptor HDD
    Dual 27" monitors

    Unfortunately, you're doing it exactly the right way. I just nested a sequence, applied the Fast Color Corrector to the nest, and it exported beautifully. Could you try nesting and color correcting a short sequence with clips other than the MXF files generated by the Canon XF100, using the same methodology? I'm trying to see whether your footage is the issue or something else.

  • How to batch process and rename JPGs

    Hello,
    I am sort of new to PS CS5 Extended and I am trying to automate a bunch of JPG files: saving them at high quality into another folder and adding the suffix _hr.jpg. Is this possible?
    Scenario:
    Source Folder: Original
    Original File Name: DSC_xxxx.jpg
    Target Folder: hr
    Save As: DSC_xxxx copy_hr.jpg
    Quality: 12
    Thanks for your help,
    G

    Yes, use Bridge or Camera Raw.
    Benjamin
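    If you'd rather script it in Photoshop directly, here is a minimal ExtendScript sketch matching the scenario above (run it from File > Scripts > Browse). The folder paths are placeholders and the " copy_hr" naming is produced by an explicit rename, so treat this as a hedged starting point rather than a finished tool:
        // Hypothetical sketch: save every DSC_*.jpg in "Original" into "hr"
        // at quality 12, adding the " copy_hr" suffix. Paths are placeholders.
        var srcFolder = new Folder("C:/Original");
        var dstFolder = new Folder("C:/hr");
        if (!dstFolder.exists) dstFolder.create();
        var files = srcFolder.getFiles("DSC_*.jpg");
        for (var i = 0; i < files.length; i++) {
            var doc = app.open(files[i]);
            var opts = new JPEGSaveOptions();
            opts.quality = 12; // maximum JPEG quality
            var base = doc.name.replace(/\.[^\.]+$/, ""); // e.g. "DSC_0001"
            doc.saveAs(new File(dstFolder + "/" + base + " copy_hr.jpg"), opts, true); // true = save a copy
            doc.close(SaveOptions.DONOTSAVECHANGES); // leave the original untouched
        }
    Bridge's Image Processor (Tools > Photoshop > Image Processor) can also do the save-to-another-folder-at-quality-12 part without any code, though it won't add a custom suffix.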

  • How to create multiple PDF files from multiple batch processes

    I have several file folders of JPEG images that I need to convert to separate multi-page PDF files. To give a visual explanation:
    Folder 1 (contains 100 jpeg image files) converted to File 1 pdf (100 pages; 1 file)
    Folder 2 (contains 100 jpeg image files) converted to File 2 pdf (100 pages; 1 file)
    and so on.
    I know I can convert each folder's contents as a batch process, but I can only figure out how to do so one folder at a time. Is it at all possible to convert multiple folders (containing JPEGs) to multiple PDF files? Put differently, does anyone know how to process a batch of folders into multiple (corresponding) PDF files?
    Many thanks.

    There are two approaches to do this:
    - First convert all JPG files to PDF files using a "blank" batch process. Then combine all the PDF files in the same folder using another batch process and a folder-level script (to do the actual combining). I developed this tool in the past and it's available here:
    http://try67.blogspot.com/2010/10/acrobat-batch-combine-all-files-in.html
    - The other option is to do everything in a single process, but that requires an application outside of Acrobat, which I'm currently developing.
    If you're interested in either of these tools, feel free to contact me personally.
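    For a sense of what the combining step looks like, here is a minimal, hypothetical Acrobat JavaScript sketch. It assumes you already have an array of the converted PDF paths for one folder (Acrobat's JavaScript API has no documented way to list a folder's contents, which is exactly why a folder-level trusted script or an external tool is needed), and it uses only the documented openDoc, insertPages and saveAs calls, which need a privileged context:
        // Hypothetical combining step: merge a known list of PDFs into one file.
        // Paths are placeholders; run from a trusted (folder-level) script.
        var paths = ["/c/scans/Folder1/img001.pdf",
                     "/c/scans/Folder1/img002.pdf"]; // ...one entry per converted JPEG
        var merged = app.openDoc(paths[0]); // start from the first file
        for (var i = 1; i < paths.length; i++) {
            // append each remaining PDF after the current last page
            merged.insertPages({ nPage: merged.numPages - 1, cPath: paths[i] });
        }
        merged.saveAs({ cPath: "/c/scans/File1.pdf" }); // one combined PDF per folder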

  • Acrobat Pro X batch processing different in Win 7

    Greetings,
    My company just completed moving everyone from Win XP to Win 7 (yes, I know, but better late than never). I regularly use Acrobat Pro X to batch process and password-protect large numbers of a variety of documents (Word, Publisher, Excel and PowerPoint 2-up printouts).
    Under Windows XP the process was pretty straight forward:
    1) Open Acrobat Pro
    2) Select "Batch Process": File -> Create -> Batch Create Multiple Files... (a window appears).
    3) Drag a large group of documents into the window (you can drag over 50 docs, it doesn't matter), then begin the process (which automatically walks through all the documents and PDFs them).
    4) When the process is complete, open up the Action Wizard: File -> Action Wizard -> select the appropriate action.
    5) A window appears; drag the PDFs into the window and go. All PDFs quickly get the security applied.
    Done
    With Windows 7 the functionality is different (I really don't know if it is the operating-system change or some type of policy change that I am unaware of). The process is far slower because I literally have to PDF and apply security to each document one at a time. The process goes like this:
    Under Windows 7:
    1) Select a group of documents, but NO MORE THAN 15.
    2) Drag the documents over the "Acrobat Pro X" icon and launch them this way. Each document creates a "temporary PDF". Multiple windows open up, stacked on top of each other.
    3) Go to each window individually and first save the file (so the file name is preserved), then go to the Action Wizard and apply the security. Then close the PDF.
    4) Repeat this process for each open window (document).
    5) Repeat as necessary until you have the several hundred documents processed.
    Literally this is a "hands-on" process for each document. Is there a better way? Am I missing something in the Acrobat or Windows 7 settings?
    If I try to batch process the old way under Windows 7 I get a series of error messages for each document. (I can't even get to the action wizard process.)
    Any suggestions?
    Is there a third party app that will work without having to administer it so much?
    Thank you,
    TPK

    Hi Test Screen Name,
    While reproducing the problem I realized I was in error as to how far into the sequence the problem occurred. I actually do get as far as batch creating PDFs. The only difference there is that I can no longer "drag and drop" files into the batch-create window; I have to use the "Add files..." command in the upper left of that window.
    So, the application batch creates the files. Afterward, I use the Action Wizard to batch "Password Protect" the files. It is during this run that the error occurs. (Note: I am trying to save over the old files by having them saved to the same directory under the same name, just as I used to be able to do.) The error I get is:
    Action Completed.
    Saved to: \\HOME\path-to-original-files\
    Warning/Errors
    The file may be read-only, or another user may have it open. Please save the document in a different
    Document: Name of document1.pdf
    Output: Name of document1.pdf
    The file may be read-only, or another user may have it open. Please save the document in a different
    Document: Name of document2.pdf
    Output: Name of document2.pdf
    The error message loops through all the documents. I don't have the documents open or in use. By default they shouldn't be "read-only". None of this occurred when I previously used the application on Windows XP.
    I have not yet tried saving them to a different directory. I will try that later today. (I didn't want to have a lot of versions of the same documents; it tends to be confusing.)
    Thank you for your reply,
    TPK

  • Architectural design for FTP batch processing

    Hello gurus,
    I would like your help in determining the design for the following.
    We receive several HL7 messages as text files copied to a shared network folder. These files are created in several different folders depending on the region and message type. We need to come up with a B2B process to read all the files from the network folder using FTP (batch process), translate them if needed (depending on the scenario), and transfer the files over to another destination folder on the network (using FTP).
    For this, we can create TPs with a Generic FTP channel, and this works without any issues. But done this way, we need to create a TP for each and every type of message, each reading its files from its own specified directory location on the network based on the polling interval.
    My question is: instead of creating TPs for each and every type of file, is there a way I can write a common web service that reads the source files from the network and, based on the type of the file, routes them to the proper destination folders? If it is possible, I would like to know the architecture for accomplishing this task.
    I really appreciate your kind help on this.
    Thanks and regards,
    Raghu

    Hi Raghu,
    Is it a B2B communication scenario?
    "But done this way, we need to create a TP for each and every type of message, each reading its files from its own specified directory location on the network based on the polling interval."
    Why can't you have only one TP with multiple documents, channels and agreements?
    "...is there a way I can write a common web service that reads the source files from the network and, based on the type of the file, routes them to the proper destination folders?"
    That depends on your use case and the products you want to use. You can very well use the FTP adapter with BPEL and poll for files. Use a DVM in the composite to figure out the destination and send the file there. You may use OSB if it is a typical routing case with heavy load where performance is a concern. You may use B2B here as well. So ultimately you need to figure out what you want and which tools you want to use.
    Regards,
    Anuj
