Best practice for smooth workflow in PrE?

Hi all.  I'm an FCP user for many many years, but I'm helping an artist friend of mine with a Kickstarter video...and he's insistent that he's going to do it himself on his Dell laptop running Win7 and PrE (I believe v11, from the CS3 package)...so I'm turning to the forum here for some help.
In Apple Land (that is, those of us still using FCP 7), we take all our elements in whatever format they're delivered to us and transcode them to ProRes, DVCPro HD or XDCAM...it just makes it easier not to deal with mixed formats on the timeline (please, no snarky comments about that, OK, I turn out broadcast work every week doing this so this method's got something going for it...).  However, when I fired up PrE I see that you can edit in all sorts of formats, including long-GOP formats like .mts and mp4 files that I wouldn't dream of working with natively in FCP...I don't enjoy staring at spinning beachballs that much. 
Now, remembering that he's working with a severely underpowered laptop, with 2 GB of RAM and a USB 2 connection to his 7200 rpm "video" drive...and also considering that most of the video he'll be using will come in two flavors (AVCHD from a Canon Vixia 100, and HDV from a Canon EX-something or other), what would be the best way to proceed to maximize the ease with which he can edit?  I'm thinking that transcoding to Motion-JPEG or some other intraframe-compressed AVI format would be the way to go...it's a short video and he won't have that much material, so file-size inflation isn't an issue...speed and ease of processing the video files on the timeline (or do they call it a "Sceneline"?) is.
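If transcoding does turn out to be the way to go, I imagine something like the sketch below would handle the batch conversion - assuming ffmpeg is installed and on the PATH (it isn't part of PrE), and with the quality and audio settings purely illustrative; I'd test one short clip in PrE before converting everything.

# Sketch: batch-transcode AVCHD (.mts) clips to intraframe MJPEG AVI files.
# Assumes ffmpeg is installed and on the PATH; the quality and audio settings
# are illustrative only -- test a short clip in PrE before converting it all.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("D:/footage/avchd")   # hypothetical source folder
DEST_DIR = Path("D:/footage/mjpeg")     # hypothetical destination folder
DEST_DIR.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.mts")):
    out = DEST_DIR / (clip.stem + ".avi")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "mjpeg", "-q:v", "3",    # intraframe video, high quality
        "-c:a", "pcm_s16le",             # uncompressed audio keeps things simple
        str(out),
    ], check=True)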
Any advice (besides "buy another computer") would be appreciated...

Steve, thanks, this is helping me now.
I mention MJPEG because, as an intraframe compression method, it's less processor-intensive than GOP-style MPEG compression.  Again, my point of reference is the Mac/FCP7 world (so open my eyes as to how an Intel processor running Win7 would work differently), but over there best practice says NOT to edit in a GOP-based codec (XDCAM being the exception that proves the rule, e.g., render times), but to transcode everything from, say, H.264 or AVCwhatever into ProRes.  YES, I know PrE (and PPro) doesn't use ProRes...not asking for that.  But, at least at this juncture, any sort of hardware upgrade is out of the question...this is what he's gonna be using to edit.  Now if I were going to use an underpowered Mac laptop for editing, I most certainly would not try to push native AVCHD .mts files or native H.264 files through it...those don't even work well with the biggest Mac Pro towers.  What is it about PrE that allows it to work efficiently with these processor-intensive formats?  This is the crux of the issue, because I'm going to advise my friend to "work this way" and I don't want to send him down the garden path of render hell...
And finally, your advice to run tests is well taken...since I have no experience with PrE or his computer, I guess that's where we'll start...

Similar Messages

  • Best Practice for Conversion Workflow

    Hello,
    I'm converting video files from our "home grown" virtual media reserve to iTunes U. Some of the files are in RM format, some are already compressed .mov's (not H.264) and some I have the original DV files for.
    Anyone out there have a best practice for converting these file types for posting to iTunes U? I have Final Cut Studio (Compressor), QT Pro and Squeeze available to me.
    Any experience you have with this would be helpful.
    Thanks,
    Jeana

    For converting old files to podcast-compatible video, and based on the machine you have, consider the Elgato Turbo.264. It is a fairly priced "co-processor" for video conversion, consisting of an application and a small USB device with an encoder chip in it. In my experience, it is the fastest way to create podcast video files. The amount of time you will save will pay for the device quickly (about $100). Plus it does batch conversion of any video that your system currently plays through QuickTime. It has all the necessary presets and you can create your own. It has a few minor limitations, such as not supporting (at the moment) enhanced podcasting features like chapter markers and closed captions, but since you have old files for conversion, that won't matter.
    For creating new content, the workflow varies a lot. Since you mention MP3s, I guess you are also interested in audio files. I would stick with GarageBand, especially if you are a beginner, plus it supports enhanced podcasts.
    In any case the most important goal is to have the simplest and fastest way to go from recording to publication. The less editing the better. To attain that, the best methods will require the largest investments. For example, for video production the best way is to produce the content live, so that when you finish recording it is only a matter of encoding and publishing. That will require a video switcher that can ingest at least one video camera and a computer output to properly capture presentation material. That's the minimum. There are several devices that can do this for you. Some are disguised PCs and some connect to a PC for tapeless recording. You can check out the TriCaster, which I like but wish were a Mac and not a Windows XP PC. Other routes may include video mixers from manufacturers like Edirol, Panasonic or Sony connected to a VTR or directly to a Mac for direct-to-disc capture. If you look at some of the content available in iTunes U, you will see what I describe here. This workflow requires preparation and sufficient live support, but you will have your material ready for delivery almost immediately after the recorded event. No editing required. Finally, the most intensive workflow is to record everything separately and edit it later, which is extremely time consuming.

  • Best Practices for smooth video playback

    I'm using a low-res video as a reference movie in a 5.1 project: 23.98 fps, H.264. Video playback is very choppy. In the video playback window the Resolution setting in the pulldown menu is greyed out; it's set to Full Res. How do I change this setting? Also, any tips for achieving smooth video playback?
    Thanks.

    jasonjantzen wrote:
    Ha, typical. No response. Where's Durin when you need him?
    Give the man a break - he's probably asleep, and now you're expecting him to work over the weekend...
    I'm sorry, but I can't help you with the video problem. The only time I tried this recently, I couldn't get the damn video to play in Audition at all, so I gave up; fortunately it didn't matter. The only thing I know is that the video engine it uses has been changed at some stage (probably CS5.5) - heaven only knows what the implications of this are.

  • Workflow not completed, is this best practice for PR?

    Hi SAP Workflow experts,
    I am new to workflow and am now responsible for supporting an existing PR release workflow.
    The workflow is quite simple and straightforward, but the issue here is that the workflow never seems to be completed.
    If the user releases the PR, the next activity is "Requisition released", which uses task TS20000162.
    This sends a work item to the user's (PR creator's) SAP inbox, which, when they double-click it, completes the workflow.
    The thing here is that, in our organization, users do not access the SAP inbox, hence there are thousands of work items that have not been completed (our procurement system has been running since 2009).
    Our PR creators receive notification of the PR approval in their Outlook mail, handled by a program that is scheduled every 5 minutes.
    Since the documentation is not clear enough, I can't work out why the implementer used this approach.
    May I know whether this is the best practice for a PR workflow or not?
    Now my idea is to modify the send-email program to complete the work item after the email has been sent to the user's Outlook mail.
    Not sure whether or not that is common in the workflow world, though.
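    To make the idea concrete, here is a minimal sketch of what I have in mind - assuming the PyRFC connector and the standard function module SAP_WAPI_WORKITEM_COMPLETE, with placeholder connection details; the real change would of course live in the ABAP send program itself.

    # Sketch only: complete a workflow work item after the notification e-mail
    # has gone out. Assumes the PyRFC connector and the standard function module
    # SAP_WAPI_WORKITEM_COMPLETE; connection details are placeholders.
    from pyrfc import Connection

    conn = Connection(ashost="sapserver", sysnr="00", client="100",
                      user="WF_BATCH", passwd="secret")

    def complete_work_item(workitem_id: str, agent: str) -> None:
        result = conn.call("SAP_WAPI_WORKITEM_COMPLETE",
                           WORKITEM_ID=workitem_id,
                           ACTUAL_AGENT=agent)
        # RETURN_CODE 0 means the work item was completed successfully.
        if result.get("RETURN_CODE") != 0:
            raise RuntimeError(f"Could not complete work item {workitem_id}: {result}")

    complete_work_item("000001234567", "PR_CREATOR")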
    Any help is deeply appreciated.
    Thank you.

    Hello,
    "This will send work item to user (pr creator) sap inbox which when they double click it will complete the workflow."
    It sounds liek they are sending a workitem where an email would be enough. By completing the workitem they are simply acknowledging that they have received notification of the completion of the PR.
    "Our PR creator will receive notification of the PR approval to theirs outlook mail handled by a program that is scheduled every 5 minutes."
    I hope (and assume) that they only receive the email once.
    I would change the workflow to send an email (SendMail step) to the initiator instead of the workitem. That is normally what happens. Either that or there is no email at all - some businesses only send an email if something goes wrong. Of course, the business has to agree to this change.
    Having that final workitem adds nothing to the process. Replace it with an email.
    regards
    Rick Bakker
    hanabi technology

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow.  I guess I didn't realize it but there are multiple places to apply sharpening.  Which are best?  Are they additive?
    My typical workflow involves capturing an image with a professional digital SLR in either RAW or JPEG or both, importing it into Lightroom, and exporting it to a JPEG file for screen or for printing, both lab and local.
    There are three places in this workflow to add sharpening: in the SLR, manually in Lightroom, and during the export to a JPEG file or when printing directly from Lightroom.
    It is my understanding that sharpening is not added to RAW images even if you have enabled sharpening in your SLR. However, sharpening will be added to JPEGs by the camera.
    Back to my question: is it best to add sharpening in the SLR, manually in Lightroom, or to wait until you export or output to your final JPEG file or printer? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?

    You should treat the two file types differently. RAW data never has any sharpening applied by the camera; only JPEGs do. Sharpening is often considered as a workflow with three steps (see here for a founding article about this idea).
    I. A capture sharpening step that corrects for the loss of sharp detail due to the Bayer array and the antialias filter and sometimes the lens or diffraction.
    II. A creative sharpening step where certain details in the image are "highlighted" by sharpening (think eyelashes on a model's face), and
    III. output sharpening, where you correct for loss of sharpness due to scaling/resampling or for the properties of the output medium (like blurring due to the way a printing process works, or blurring due to the way an LCD screen lays out its pixels).
    All three of these are implemented in Lightroom. I. and III. are essential and should basically always be performed; II. is up to your creative spirits. I. is the sharpening you see in the Develop panel. You should zoom in at 1:1 and optimize the parameters. The default parameters are OK but fairly conservative. Usually you can increase the mask value a little so that you're not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding optimal parameters here. This is for ACR, but the principle is the same. Most photos will benefit from a little optimization. Don't overdo it; just correct for the softness at 1:1.
    Step II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for this. Step III, however, is very much essential. This is done in the Export panel, the Print panel, or the Web panel. You cannot really preview these things (especially the print-directed sharpening) and it will take a little experimentation to see what you like.
    For JPEGs, the sharpening is already done in the camera. You might add a little extra capture sharpening in some cases, or simply lower the sharpening in camera and then have more control in post, but usually it is best to leave it alone. Steps II and III, however, are still necessary.
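    Purely as an illustration of keeping the capture and output passes separate (this is not Lightroom's internal algorithm, and the radius/percent values below are arbitrary), a small Pillow sketch would look like this:

    # Illustration only: separate "capture" and "output" sharpening passes.
    # Not Lightroom's algorithm; radius/percent/threshold values are arbitrary.
    from PIL import Image, ImageFilter

    img = Image.open("raw_export.tif").convert("RGB")  # hypothetical 1:1 export

    # Step I: mild capture sharpening to recover detail lost to the AA filter.
    capture = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))

    # (Step II, creative sharpening, would be applied selectively with a mask.)

    # Step III: resample for the output medium, then sharpen again at that size.
    web_size = capture.resize((1200, 800), Image.LANCZOS)
    final = web_size.filter(ImageFilter.UnsharpMask(radius=0.6, percent=80, threshold=0))
    final.save("web_ready.jpg", quality=90)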

  • Best Practices for Workflow Development

    Hi,
    I'm compiling best practices for developing workflows in TEO/CPO and am accepting input. The result will be made available for the community to use and continuously improve. If you have any best practices of your own, please let me know.
    Thank you in advance,
    Renato Fichmann

    Good ones, thanks! Incorporated into the document.
    I currently have the following items in my index (yet to be cleaned up):
    5    Guidelines
    5.1    Style
    5.1.1    Naming Conventions
    5.1.2    Capitalization Conventions
    5.2    Usability
    5.2.1    Targets vs Target Groups
    5.2.2    Global Variables and Extended Target Properties
    5.2.3    Create Alerts, Incidents and Other Tasks
    5.2.4    Classify Processes using Categories
    5.2.5    Provide Automation Summary
    5.2.6    Provide Descriptions to Processes, Global Variables and Target Groups
    5.2.7    Process with trigger based on tasks must have tape name as trigger condition
    5.2.8    Approve condition activities use approval index instead of approval choice
    5.2.9    Do not put test data into global variables
    5.2.10    Add Knowledge Base to Alerts and Incidents
    5.2.11    Disable “Resume execution if interrupted” and “Archive complete instances” for monitoring/problem detection processes
    5.2.12    Make Incident Classes unique in the tap
    5.2.13    Make Alert Classes unique in the tap
    5.2.14    Make sure fail activities are not failing the process if the process has to create an incident even when the activities fail
    5.2.15    Do not include targets in TAP, only target groups based on target type
    5.3    Error Handling
    5.3.1    Assignments and Notification
    5.3.2    Error handling outside the step
    5.3.3    Incidents generated as result of error
    5.4    Non-Functional
    5.4.1    Cross-Environment Practices
    5.5    Performance
    5.5.1    To Archive or Not to Archive
    5.5.2    Parallelize when possible
    5.5.3    Tables and Loops
    5.5.4    Processing Data Tables
    5.5.5    Scripting
    5.5.6    Prefer XPath over regular expressions or text parsing when multiple values need to be retrieved in a single activity
    5.5.7    Prefer XML transforms rather than sequences of activities or scripts
    5.5.8    Set the number of active sessions on Terminal targets appropriately
    5.5.9    Optimizing the Number and Lifecycle of Tasks
    5.5.10    Optimizing Process Database Grooming
    5.5.11    Spacing Scheduled Automation Loads
    5.6    Generic BPEL Best Practices
    Waiting for more feedback before releasing the first draft for community review.

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining an agent for each?
    A gateway for all?
    BPEL security extensions?
    The top-level service that is defined as the business process is itself secured through OWSM with username and password, but what is the best practice for securing each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for "who should be creating" the GRC request in the ARM workflow below, in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    Options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it's a lot of work. Please point me to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best practice for upgrading task definition without deleting task instances

    Best practice for upgrading a task definition in a production system without deleting or terminating task instances.
    If I try and update a task definition with task instances running I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice for handling this? I tried to force an update through the console, but that didn't work. I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.
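    The rename-then-upload pattern is generic enough to script; the sketch below uses a purely hypothetical admin_api client just to make the ordering of the steps explicit - the real console or CLI calls will differ by product and version.

    # Sketch of the rename-then-upload upgrade pattern. The "admin_api" client
    # and its methods are hypothetical placeholders; only the ordering matters.
    from datetime import date

    def upgrade_task_definition(admin_api, name: str, new_definition_xml: str) -> str:
        archived_name = f"{name}-old-{date.today().isoformat()}"

        # 1) Rename the original definition so running instances stay linked to it.
        admin_api.rename_task_definition(name, archived_name)

        # 2) Upload the new definition under the original name; new tasks use it.
        admin_api.upload_task_definition(name, new_definition_xml)

        # 3) Delete the archived definition later, once its instances have finished.
        return archived_name  # the clean-up job needs to know what to remove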

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? I.e., if we generate a deployable file containing the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, including the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs - though not if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and implements an SSIS package internally that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align with different business requirements.
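    As one concrete example of the backup route, a sketch driven from Python with pyodbc might look like the following - driver name, server and paths are placeholders, and BACKUP DATABASE cannot run inside a transaction, hence the autocommit connection.

    # Sketch: distribute a database by taking a full backup via pyodbc.
    # Driver, server and paths are placeholders; BACKUP DATABASE cannot run
    # inside a transaction, so the connection must use autocommit.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=source-server;"
        "DATABASE=master;Trusted_Connection=yes",
        autocommit=True,
    )
    cursor = conn.cursor()
    cursor.execute(
        r"BACKUP DATABASE [MyAppDb] TO DISK = N'C:\Backups\MyAppDb.bak' WITH INIT"
    )
    while cursor.nextset():  # drain informational messages until the backup finishes
        pass
    conn.close()
    # Copy the .bak file to the target server and restore it there with
    # RESTORE DATABASE ... WITH MOVE to relocate the data and log files.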

  • Best practice for a deplomyent (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a production environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development but also for the production system.
    But I found some hints in some books, and those authors prefer static deployment for the production system.
    My questions now:
    Could anybody provide me with some links to whitepapers regarding best practices for deployment into a production system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    Thanks in advance
    -Martin
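    For reference, the dynamic-deployment route mentioned in the question usually boils down to an invocation of weblogic.Deployer; a rough sketch driven from a script rather than ANT (admin URL, credentials and target names are placeholders, and the WebLogic classes are assumed to be on the classpath) might look like this:

    # Sketch: dynamic deployment via the weblogic.Deployer tool.
    # Admin URL, credentials and target names are placeholders; weblogic.jar
    # (or an equivalent classpath setup) is assumed to be available.
    import subprocess

    subprocess.run([
        "java", "weblogic.Deployer",
        "-adminurl", "t3://adminhost:7001",
        "-username", "weblogic",
        "-password", "secret",
        "-deploy",
        "-name", "myapp",
        "-source", "myapp.ear",
        "-targets", "managedServer1",
    ], check=True)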

    Hi Siva,
    What best practice are you looking for? If you can be specific in your question we could provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The production landscape should offer high availability to the business. For this you may set up DR or HA or both.
    2) It should have backups configured, for which restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The production client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transport to production should be highly controlled. Any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested and enabled before go-live.
    8) Pre-GoLive and Post-GoLive checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • SAP Best Practices for CRM 5.0 is available

    Hello,
    I would like to announce the availability of SAP Best Practices for CRM 5.0.
    SAP Best Practices for CRM allows a fast, safe and predictable implementation of pre-configured CRM business scenarios.
    It can be used to accelerate customer implementation projects as well as setting up demo or evaluation systems.
    For details about SAP Best Practices in general please see:
    <a href="http://www.service.sap.com/bestpractices">http://www.service.sap.com/bestpractices</a>
    For the concrete content of Best Practices for CRM 5.0 please see:
    <a href="http://help.sap.com/bp_crmv150/CRM_DE/index.htm">http://help.sap.com/bp_crmv150/CRM_DE/index.htm</a>
    Best regards,
    Joerg

    Hi Devendra!
    For the Best Practices you can find all the useful installation and information guides here.
    <a href="http://help.sap.com/">Best Practices for SAP</a>
    Choose the Best Practices tab in the menu there.
    You have to be careful while installing the BP - you always have to use the right BP release according to your SAP release.
    I hope this helps you!
    Best regards,
    Zsolt

  • Best practice for phone number column data type

    Hi,
    I hope I have posted this in the right forum; if not, just notify me and I will move it to the correct area.
    What would be considered best practice for storing a phone number:
    Number or Varchar2?
    Ben

    Well I was thinking that Number would have the following disadvantages,
    1. Users entering phone numbers into a form may be tempted to pre-format them with spaces, thereby throwing up an error.
    2. Mobile phone numbers may have leading zeros.
    3. Calculations are not carried out on phone numbers.
    I was leaning towards a varchar2 type (points 1 and 2 are illustrated in the sketch below).
    Ben
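    A quick illustration of points 1 and 2 - in Python just for the sake of the example:

    # Quick demo of why a numeric type is awkward for phone numbers.
    raw_value = "0172 555 1234"

    # 1) Formatted input does not parse as a number at all:
    try:
        int(raw_value)
    except ValueError as exc:
        print("Rejected as a number:", exc)

    # 2) Even with the formatting stripped, the leading zero is lost:
    digits_only = raw_value.replace(" ", "")
    print(int(digits_only))   # 1725551234 -- the leading 0 is gone
    print(digits_only)        # "01725551234" -- as text it survives intact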

  • Best Practice for the Service Distribution on multiple servers

    Hi,
    Could you please suggest the best practice for the above.
    Requirements: we will use all the features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.).
    Capacity: we have 12 servers, excluding SQL Server.
    Please do not just refer me to a URL; please suggest something based on the requirements.
    Thanks 
    srabon

    How about a link to the MS guidance!
    http://go.microsoft.com/fwlink/p/?LinkId=286957

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced in working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be under Help - I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party check box and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James
