GPS/Places Workflow Question

OK, so I'm new to this GPS thing (bought a Dakota 10) and I'm heading off to Scotland for a couple of weeks where I'll be taking a ton of photos. I plan on importing everything into Places, including, obviously, the tracks I generate each day. My question concerns the optimal way to record the tracks. Should I just create one long track each day? Is it easier to manage multiple smaller tracks?
I will also be loading the tracks into Google Earth to show where we went on our web pages, so the same question applies there: is it easier to use one big track or several smaller ones per day?
I realize this isn't rocket science, but I figured that this community may have some tips/hints that I could leverage.
Thanks.

You can check out earlier comments I made about importing tracks - search this forum for "Places". The latest update, Aperture 3.0.3, added support for seconds (previously only hours and minutes were shown), so this addresses one issue I had when importing tracks.
As for one versus multiple tracks per day, if data storage on the GPS is not an issue I'd go for as few tracks as possible (i.e. one per day). It took me more time to sync them correctly in Aperture than I expected, so having it link to hundreds of images at once (i.e. all of a day's images) will speed things up.
FYI, choose carefully the image you drag onto the imported track (this is the one you are using to sync the time at a particular location). If you spend a lot of time in one place, the track turns into a spaghetti mess of GPS data points and it's almost impossible to find the right sync point. So sometimes I synced using the first photo, sometimes the last photo, and sometimes another image that was taken at a distinct spot on the map...
One other thing - if you use Places to add the GPS data to images and it doesn't look right, you can select all the images, remove the longitude/latitude data, and try linking to the GPS data again. It sometimes took a few tries, but I ended up happier this way. Hopefully the new seconds display will help, assuming you've set your camera's clock as accurately as possible. I used http://wwp.greenwichmeantime.com/ to sync my cameras before I left, and probably got to within half a second.
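To make the time-sync idea above concrete, here is a minimal sketch (not from the original posts) of matching photo timestamps to GPX track points once you know your camera-to-UTC offset. The GPX file name, the 37-second offset, and the GPX 1.1 namespace are illustrative assumptions; this is not how Aperture's Places does it internally.

```python
# Minimal sketch: match photo timestamps to GPX track points by nearest time.
# Assumes the camera clock offset from UTC is known (here a hypothetical +37 s).
from datetime import datetime, timedelta
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"   # GPX 1.1; adjust if your device writes 1.0
CAMERA_OFFSET = timedelta(seconds=37)            # hypothetical camera-vs-UTC drift

def load_trackpoints(gpx_path):
    """Return a sorted list of (utc_time, lat, lon) tuples from a GPX file."""
    points = []
    root = ET.parse(gpx_path).getroot()
    for trkpt in root.iter(GPX_NS + "trkpt"):
        t = trkpt.find(GPX_NS + "time")
        if t is None:
            continue
        when = datetime.strptime(t.text, "%Y-%m-%dT%H:%M:%SZ")
        points.append((when, float(trkpt.get("lat")), float(trkpt.get("lon"))))
    return sorted(points)

def locate_photo(points, photo_time):
    """Find the track point closest in time to a photo's clock-corrected timestamp."""
    utc_time = photo_time - CAMERA_OFFSET
    return min(points, key=lambda p: abs(p[0] - utc_time))

# Example: where was the camera when a photo was taken at 14:03:21 camera time?
track = load_trackpoints("day1.gpx")                     # hypothetical file name
print(locate_photo(track, datetime(2010, 6, 1, 14, 3, 21)))
```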

Similar Messages

  • RED Workflow questions with Mac Pro (including third party plugins)

    Hello all,
    I’ve been searching many forums for the better part of a day trying to get some workflow questions sorted. I’m experiencing (very) slow export times, and mediocre playback for a machine that should be screaming fast.
    Here is what I’m working with:
    2014 Mac Pro
    -2.7 GHz 12-core Intel Xeon E5
    -64GB Ram
    -Dual AMD FirePro D700 6GB
    -1TB Flash Storage
    Editing all footage off a 96TB RAID 6 mini-SAS server (getting about 1,100 MB/s read/write according to the AJA System Test), which is faster than any Thunderbolt/TB2 drive array I have.
    Media I work with is footage from the RED Epic (normally 5K) as well as DSLR footage from the 5d.
    Software:
    -PrPro CC 2014 (8.1)
    -Magic Bullet Looks 2.5.2
    My questions pertain to a RED post-production workflow in combination with third-party plug-ins, and the different approaches to making it more efficient.
    Right now, the majority of clients need a 1080p HD master, and the videos are generally anywhere from 2-8 minutes long. So my sequence settings are as follows:
    Video:
    Editing Mode: RED Cinema
    Size: 1920 x 1080
    Audio: 48 kHz
    Video Previews
    Preview File Format: I-Frame Only MPEG
    Codec: MPEG I-Frame
    1920x1080
    Maximum Bit Depth unchecked
    Maximum Render Quality unchecked
    Composite in Linear Color checked
    Export Settings
    H.264
    1920x1080
    VBR 1 pass
    Target Bitrate: 12 Mbps
    Maximum Bitrate: 12 Mbps
    Maximum render quality/depth/previews unchecked
    Issues I have:
    -Playback is fine at 1/2 or even full resolution, but once effects (especially Magic Bullet Looks) start to go on the clips, it's very choppy and struggles to play back even at 1/4
    -Export times (especially with Magic Bullet Looks) will take the better part of 1-4 hours for a video that is 3-6 minutes long. This doesn't seem like it should be the case for a maxed-out Mac Pro
    So my questions are:
    Do these seem like the right sequence/export settings for mastering at 1080p? If not, what would you suggest?
    Would using offline editing help at all?
    Do you place your effects on adjustment layers?
    Is there any way to improve export settings when using an array of filters?
    Have you stopped using third-party plugins because of their inefficiency and unreliability and switched to more integrated applications like SpeedGrade?
    Are there any other tweaks that you would suggest for a RED workflow with PrPro?
    Should I consider switching to FCPX or (besides the iMovie-likeness) does it carry problems of its own?
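    As a quick sanity check on the export settings above (an editorial aside, not from the thread): a 12 Mbps H.264 target for a 2-8 minute 1080p master implies fairly modest file sizes. The sketch below is simple arithmetic; the 320 kbps audio figure is an assumption.

```python
# Rough file-size estimate for an H.264 export at a given target bitrate.
def estimated_size_mb(minutes, video_mbps=12, audio_kbps=320):
    """Approximate output size in megabytes (using 1 MB = 8,000,000 bits)."""
    total_bits = (video_mbps * 1_000_000 + audio_kbps * 1_000) * minutes * 60
    return total_bits / 8 / 1_000_000

for minutes in (2, 4, 6, 8):
    print(f"{minutes} min at 12 Mbps is roughly {estimated_size_mb(minutes):.0f} MB")
# Prints about 185 MB for 2 minutes up to about 739 MB for 8 minutes.
```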

    Hi This Is Ironclad,
    thisisironclad wrote:
    Hello all,
    I’ve been searching many forums for the better part of a day trying to get some workflow questions sorted. I’m experiencing (very) slow export times, and mediocre playback for a machine that should be screaming fast.
    The biggest issue most people have is that updating OS X causes certain folders to be set to read-only. See this blog post: Premiere Pro CC, CC 2014, or 2014.1 freezing on startup or crashing while working (Mac OS X 10.9 and later).
    thisisironclad wrote:
    Here is what I’m working with:
    2014 Mac Pro
    -2.7 GHz 12-core Intel Xeon E5
    -64GB Ram
    -Dual AMD FirePro D700 6GB
    -1TB Flash Storage
    It's a nice base system. How about adding a speedy disk for media cache files? You also didn't mention which version of OS X you are running.
    thisisironclad wrote:
    Software:
    -Magic Bullet Looks 2.5.2
    The Red Giant website does not indicate that this software has been updated yet to work with Premiere Pro CC 2014.1 (8.1). Proceed with caution here.
    thisisironclad wrote:
    Issues I have:
    -Playback is fine at 1/2 or even full resolution, but once effects (especially Magic Bullet Looks) start to go on the clips, it's very choppy and struggles to play back even at 1/4
    I would not use this plug-in until you get the OK from the manufacturer.
    thisisironclad wrote:
    -Export times (especially with Magic Bullet Looks) will take the better part of 1-4 hours for a video that is 3-6 minutes long. This doesn't seem like it should be the case for a maxed-out Mac Pro
    Again, I suspect your plug-in.
    Keep in mind that exports are largely CPU-based, but you can make sure that GPU acceleration is enabled for AME at the bottom of the Queue panel.
    thisisironclad wrote:
    So my questions are:
    Do these seem like the right sequence/export settings for mastering at 1080p? If not, what would you suggest?
    It's OK.
    thisisironclad wrote:
    Would using offline editing help at all?
    No need when you should be able to edit natively. Relinking might also be an issue.
    thisisironclad wrote:
    Do you place your effects on adjustment layers?
    That's one way you can do it with the benefit of being more organized.
    thisisironclad wrote:
    Have you stopped using third-party plugins because of their inefficiency and unreliability and switched to more integrated applications like SpeedGrade?
    I do. Of course, that's a preference.
    thisisironclad wrote:
    Are there any other tweaks that you would suggest for a RED workflow with PrPro?
    Try the following:
    Sign out from Creative Cloud, restart Premiere Pro, then sign in
    Update any GPU drivers
    Trash preferences
    Ensure Adobe preference files are set to read/write (hopefully you checked this out already)
    Delete media cache
    Remove plug-ins
    If you have AMD GPUs, make sure CUDA is not installed
    Repair permissions
    Disconnect any third party hardware
    If you have a CUDA GPU, ensure that the Mercury Playback Engine is set to CUDA, not OpenCL (you have AMD GPUs, so this one doesn't apply)
    Disable App Nap
    Reboot
    thisisironclad wrote:
    Should I consider switching to FCPX or (besides the iMovie-likeness) does it carry problems of its own?
    I really shouldn't answer that question.
    Hope this helps.
    Thanks,
    Kevin

  • Hi, new iPad 2 user. I receive some emails that should have images, but in their place are question marks. Any advice on how to fix this would be great. Thanks a lot..brad

    Hi, new iPad 2 user. I receive some emails that should have images, but in their place are question marks. Any advice on how to fix this would be great. Thanks a lot..brad

    On your iPad - go to Settings>Mail, Contacts, Calendars>Load Remote Images>On. Try that first.
    You can also try a reset. Reset the iPad by holding down on the sleep and home buttons at the same time for about 10-15 seconds until the Apple Logo appears - ignore the red slider - let go of the buttons.
    Quit Mail and all other apps. Go to the home screen first by tapping the home button. Quit/close open apps by double-tapping the home button; the task bar will appear with all of your recent/open apps displayed at the bottom. Tap and hold down on any app icon until it begins to wiggle, then tap the minus sign in the upper left corner to close the apps. Then restart the iPad by holding down on the sleep button until the red slider appears and sliding to shut off. To power up, hold the sleep button until the Apple logo appears and let go of the button.

  • Yet Another Workflow Question

    Ok I too, like many others here, am new to the Mac (thanks to Apple's I'm a Mac, I'm a PC ads that my wife couldn't get enough of). I have done some searching around and I see that there are quite a few iMovie workflow questions out there. I have not quite found what I am looking for however, so I thought I would make my first post tonight. So here it goes...
    I have 3 different ways I capture video:
    1. Canon Vixia HF10 (HD)
    2. Canon Powershot (SD)
    3. Blackberry Storm (SD...I know it isn't a good phone)
    I record everything to SD cards. I want to know the best way to store my raw video so I can edit it at any time. Do I copy the AVCHD file structure (for the Vixia) and the .avi files (for the non-HD devices) to my hard drive, do I just import into iMovie '09 and let the video reside there, or both? I noticed that iMovie has an archive option (which appears to just copy the AVCHD structure to my hard drive), which is why I ask. I want to always keep my raw video in case I decide to go back later and create a new video.
    After I have the raw video archived, I would like to know the best way to use iMovie. Depending on where I end up storing the raw video, should I keep the imported video in iMovie once I am finished with a project, and then re-import it at a later date if need be? Or do I leave it in iMovie as Events? I guess this all really depends on the first question: where do I store the raw video for archival purposes?
    Finally, when exporting my iMovie project, should I store that in more of a, pardon the Windows reference, "My Videos" folder with an original size, a web-optimized size, and an iPod-optimized size, thus keeping the actual exported version of the project separate from the raw video?
    I hope I have asked the right questions here. I appreciate any and all help I can get!
    Ron

    Welcome Ron to the iMovie boards..
    very interesting: 'switchers' care so much about 'storage strategies'..
    the workflow/concept Apple intends for the iApps is:
    any 'photo cam' related material (still or moving) comes in via iPhoto, and is stored in an iPhoto Library (you can tell iPhoto to create two or many Libraries, if you prefer to organize manually....)
    any 'camcorder' related material HAS to be imported by iMovie - why? Because iMovie has some internal routines to make such material editable (codecs, thumbnails, stuff....). The same material copied as a 'file by Finder' does not import, in most cases!
    storage..
    iPhoto stores in its Library (local/internal HDD and/or ext. HDD)
    iMovie stores in Events (local/internal HDD and/or ext. HDD)
    to keep Projects/Albums accessible to any iApp, you should keep your fingers off that structure.
    Erasing Events 'kills' projects.
    although, once 'shared to the Media Browser', there's a 'copy' of your project WITHIN the project file (the so-called Media Browser is not a single folder hidden somewhere in the system).
    there's the Space Saver feature to erase any Event content that is not in use in any project, to keep Events lean.
    use the Archive feature from within iMovie to keep things easy and convenient.. if you miss a single file of the SD card's file structure, the whole card's content is kaputt..
    summary:
    • use the iApps as intended.
    • use iPhoto for cameras; it stores the 'raws' (the .avi files too)
    • use iMovie for camcorders; use Archive to store raws..
    • purchase a dozen HDDs to store your material..

  • Sharpening export workflow question

    I have a sharpening workflow question. Say I have pictures from a portrait session I just finished. I have to send the 10 pictures the client ordered to a print lab, and I will also make some small Facebook-sized pictures and upload them to my business Facebook page. The level of sharpening needed for large prints (I upload to the print lab as RGB JPEGs) and the sharpening needed for the very small sRGB Facebook-sized pictures is different. In Lightroom I have the option to set the sharpening on export, and I have a bunch of presets that alter the export size, color space, sharpening, etc. (WHCC print lab, Facebook, Client CD, and so on). I don't see how to do that in Aperture. I see there is an option if you have a printer, but not on normal export.
    For those of you who have to export batches of pictures in multiple sizes (with different levels of sharpening), what is your workflow? I could use some Photoshop droplets/actions after the Aperture export, but I was hoping there was a way to avoid the extra step. Am I overlooking an export feature? The BorderFX plug-in looks like the only other option.
    Thank you in advance for your time and help!
    Scott
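    One way to script the "extra step" Scott mentions (resizing plus output sharpening after an Aperture export) is sketched below using the Pillow library. The folder names, the 960 px target, and the UnsharpMask values are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: resize exported JPEGs for web use and apply output sharpening.
from pathlib import Path
from PIL import Image, ImageFilter  # Pillow

SRC = Path("aperture_export")       # hypothetical folder of full-size exports
DST = Path("facebook_ready")
DST.mkdir(exist_ok=True)

for jpg in SRC.glob("*.jpg"):
    img = Image.open(jpg)
    img.thumbnail((960, 960))       # fit within 960 px, preserving aspect ratio
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=3))
    img.convert("RGB").save(DST / jpg.name, "JPEG", quality=85)
```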

    Frank Scallo Jr wrote:
    The thing is, guys - once a file is sized down it WILL lose sharpening - what we are doing is sharpening the full-size RAW file, or rather what the full-size output would be like. Once we export a version sized down, it will lose some of the 'bite'. LR has sharpening options on 'output', which is not only smart but a necessity. Adobe realizes that output for screen needs another sharpen. Apple either doesn't know or didn't bother. It makes ANY output for screen less than best.
    Bear in mind that there seem to be two separate issues going on here - sharpening adjustments not being applied on export, and resizing.
    As far as resizing is concerned, Aperture appears to use something roughly equivalent to Photoshop's Bicubic Sharper setting. Because of this I've never had much problem with Aperture's exports when used for the web, but obviously everyone's taste for sharpening differs which is why an option for output sharpening would be good.
    Sharpening adjustments not being applied on export is a separate issue and should be reported via the feedback form ASAP by everyone who is experiencing the bug.
    Now printing is another animal - I wouldn't print directly from RAW in aperture either if I'm printing small. Again, LR beats Aperture here as well since they include output sharpening for print.
    Aperture has had output sharpening for printing since 2.0 came out (or maybe it was 1.5). In A3 you need to turn on 'More Options' and scroll down; I can't remember where it is in A2. I don't know how effective it is as I print via a lab, but it's there and it's been there for a long time...
    Ian

  • Problems installing iWork 9.0.3. Not sure where to place my question, but I'm having problems installing iWork 9.0.3

    Not sure where to place my question, but I'm having problems installing iWork 9.0.3. My MacBook Pro doesn't read the disc and ejects it after a minute. I just bought the Mac and iWork. Any thoughts?

    https://discussions.apple.com/community/iwork

  • Nokia 6220 - GPS,Maps,Navigation Question

    This may be a silly question but I'm a little confused over this
    I am considering buying a 6220 (SIM-free), mainly because it's an 'upgraded' version of my existing 6300.
    The idea is that instead of having a SAT NAV (for the occasional trips to places I've never been before), a mobile phone, an MP3 player, etc., I get an all-in-one in a small package.
    Now to the question - what is the difference between 'Nokia Maps' and 'Navigation'?
    Apparently 'Nokia Maps' is free but 'Navigation' isn't.
    Am I right in assuming that 'Nokia Maps' only shows you where you are and where you want to go, BUT doesn't give constant updates as you drive/walk between the two - a bit like having a map and asking a passerby to show you where you are and where you want to go on the map, but not how to get there (you have to figure that out yourself)?
    BUT 'Navigation' is where you can have it in the car and it tells you the directions ('take next left', '3rd right at next roundabout', etc.) each step of the way, i.e. just like a car SAT-NAV equivalent?
    I don't like the idea of paying for a 'satnav/phone/media player' combi-device and then paying EXTRA so that you can use it as a SAT-NAV, especially if you're only going to use it maybe once or twice a year.
    Motto: Illegitimis non carborundum

    25-Apr-2007 12:11 PM
    luckieb wrote:
    I think I have worked out what is going on here...
    I hard reset my N95, installed the maps fresh (which took a very long time, but that is a different subject, so I won't get going on that)
    I then checked whether I could do a full 7-digit postcode search while at home, and still no, with "Network Use" set to "Never".
    However, I have now come into the office, defined the WLAN access point (mine is broken at home) and set "Network Use" to "when needed", and I can do a full 7-digit postcode search. If I then change that setting to "never" it reverts back to truncating the postcode.
    I would appreciate it if other N95 owners could confirm that this happens on their N95 too, and if so, it looks like Maps is pulling additional info from somewhere to enhance the downloaded map.
    You seem to be right. I cleared all the maps, then downloaded the maps for England using Map Loader and tested it. With a network connection you can do a full postcode search, but as soon as the internet connection is lost or turned off the system reverts back to truncating the postcode.
    If you use the route planner while you have the network connection, you can use the full postcode to set the destination point while having the GPS position as your starting point, and the destination point gets stored as your destination.
    Cheers,
    -jh
    hemmo

  • IPhoto 6 workflow questions -- iPhoto & Elements

    All -- just installed iLife 6, but have not used it yet. The combination of iWeb & .Mac looks very promising for finally being able to easily share my photos with my family and friends. But I am still not clear on the best file workflow to take advantage of all the iPhoto/iWeb/.Mac functionality, and also the editing power of Adobe Elements (the editing software I am using). If you all could help me with this I would be so grateful.
    I shoot JPEG.
    I move a photo from iPhoto to Elements to edit it.
    If my goal is to move it back to iPhoto to share, how should I resize it? How should I optimize it for the best quality while viewing on a monitor, but with enough quality that if folks want to make a print, they can?
    What format should I save it back to iPhoto in? JPEG? Or can iPhoto handle TIF or PSD and is that recommended?
    What about a master, unedited image? Should I worry about that? And if so, where will it live?
    If I plan on printing a photo, should I optimize the file for that purpose and print from Elements instead of bringing the file back into iPhoto?
    As you can see from my questions, I am still unclear on the relationship of iPhoto and Elements (or any other editing software). I would be so grateful for any help you can give in this discussion, or other places you can point me to get this information.
    Thanks everyone!
    Greg

    haha, the way I work is, if a program doesn't do what I want the way I want it, then I find a program that does. For many, just iPhoto will fit the bill.
    This is what I do. I take a photo shoot. If they are important photos, I upload them to ClubPhoto so others can download and print them. Most often I do not print my 4x6 photos. I wait till there is a special at ClubPhoto for half-price prints and order all my yearly prints at that time. I buy one photo album that fits 200 photos and I order 200 photos from my online albums to be printed. My year in photos is then done.
    If someone comes over and says, hey, I love that photo, can you print it out for me? Then I use my printer's software to print the 4x6 or 8x10.
    I used .Mac and now iWeb to keep family and friends informed of what's going on, in photos and videos. It's more or less just for fun.
    When I said I have an iWeb site, I mean that I have used iWeb, a new application in the iLife suite, to publish a web site with photos, movie clips and a blog.
    You do not have to do any resizing if you are going to use iWeb, as the software does it for you behind the scenes.
    If you are resizing to upload anywhere else or for another purpose, it all depends on what size you need for your purpose. For album sites, 800x600 is a good size.
    For emailing a photo, either do it right within iPhoto and you will get a choice on what size you want to send, or export to the desktop, at which time you can input the size you want to export the photo at.
    I always keep full resolution photos in my library. If I want a smaller size, I always export the image at a smaller size and keep the full size in the library.

  • Workflow questions

    I have a few questions related to Workflow:
    1. Can a workflow be triggered by receiving an e-mail with an attachment in the inbox? What configuration or Basis settings are needed? E.g. various business areas submit their expense journal postings via Excel spreadsheet to Accounting for posting to SAP after various levels of approval.
    2. Will the attachment be saved in SAP? Where, and what configuration or Basis settings are needed?
    Thanks in advance for your answers - points will be awarded.

    Hi,
    The best way to go forward with this is to let XI receive the mails through the mail adapter (you can connect the Integration Server to an e-mail server and exchange messages). The mail adapter can pick up the message from the e-mail server, convert the e-mail to the XI message protocol, and then send the message to the SAP backend.
    From there you can start your workflow with SAP_WAPI_CREATE_EVENT, for instance (place the code in the server proxy).
    Hope it helps ,
    Bert

  • Two workflow questions, (1) importing and (2) saving/exporting

    Apologies if the below questions are too elementary. I've just started using Lightroom. I've been trying out various editing/organizing programs and I haven't yet found the perfect one. Can someone tell me if Lightroom is capable of the following:
    1. I shoot in Raw+JPEG format. When I copy images from my camera to my computer, I always create a new folder that represents the event and put the JPEGs in \Pictures\[Event]\ and the raw images in \Pictures\[Event]\Raw\. Is there a way to instruct Lightroom to perform this through the import module, i.e. have a single import operation that copies the JPEGs into an \...\[Event]\ folder, copies the DNGs to \...\[Event]\Raw\, and imports only the DNG files into my Lightroom catalog? Is this possible?
    2. Also, can someone explain how Lightroom handles edits? Say I've edited a DNG in my library. Is the DNG file itself changed on my computer? How about the JPEG (remember, I copy JPEG+DNG to my computer)? Is there a way to instruct Lightroom to automatically (i.e. as default behavior) overwrite the JPEG file when any changes are made to the DNG file, but prevent the DNG file from being overwritten? What about instances where I also want to overwrite the DNG - how do I accomplish that?
    Thanks.
    =|
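    If Lightroom's import module can't be made to split the pair the way question 1 asks, one common workaround is to pre-sort the card before importing and then point the import at the Raw folder only. Below is a minimal sketch of that idea; the card path, event name, and extension lists are assumptions, not part of the original thread.

```python
# Minimal sketch: copy JPEGs to Pictures/<Event>/ and raw files to Pictures/<Event>/Raw/
# before importing only the raw files into Lightroom.
import shutil
from pathlib import Path

CARD = Path("/Volumes/CARD/DCIM")                     # hypothetical card mount point
EVENT = Path.home() / "Pictures" / "My Event"         # hypothetical event folder
RAW_EXTS = {".dng", ".cr2", ".nef"}                   # adjust to your camera
JPG_EXTS = {".jpg", ".jpeg"}

(EVENT / "Raw").mkdir(parents=True, exist_ok=True)

for f in CARD.rglob("*"):
    ext = f.suffix.lower()
    if ext in JPG_EXTS:
        shutil.copy2(f, EVENT / f.name)
    elif ext in RAW_EXTS:
        shutil.copy2(f, EVENT / "Raw" / f.name)
# Then point Lightroom's import at the Raw/ folder only (using "Add" rather than "Copy").
```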

    theNE0one wrote:
    I completely understand and appreciate how Lightroom treats raw images. However, if I shoot Raw+jpg, what difference does it make whether or not the jpg file changes? If my raw file is untouched, I don't also need to make sure that the jpg remains in its virgin state...even if the majority of users prefer to keep both untouched, at least allow the option for people that want a different method of handling their files!
    Lightroom has no feature to overwrite your JPGs automatically.
    Might I suggest a workflow change ... shoot raw only, when you need a .jpg, have Lightroom export one for you.
    If Lightroom handled Face/Places/Sharing well, I wouldn't need to mess with a second program (i.e. Picasa), but right now it doesn't look like it can handle what I want. I understand that I can use Metadata/Keyword tags to identify People and Places, and that there are plug-ins to upload to Picasa web albums, but these options aren't executed well enough in LR3 for me to give up Picasa. The plug-in doesn't work nearly as cleanly as the native Google program, and "Faces" is a chore in Lightroom. Picasa can automatically detect faces and makes tagging/organizing/working with images much easier. While I'm on the topic, can anyone explain why such a useful feature (that's available on free and basic image software) is not included in Lightroom? Is there any possibility that it'll be included in the final version of LR3? What about the upcoming Aperture 3? Is it that "Faces" is perceived as a basic consumer feature that isn't used in higher-end software? Will this always be excluded from this "class" of software?
    Using LR3 Beta to decide whether the software works the way you want isn't a good idea. I'm guessing, but I don't know for sure, that the uploading to Picasa will work a lot better in the finished product. Hard for me to imagine a program being easier to tag/organize than Lightroom, and while I don't use Picasa, I can't see how it could be easier in tagging and organizing. It is my opinion (and you might think differently) that using two different organizing software programs on your photos is more trouble than it is worth, especially since you have to constantly "synchronize" the two different organizers.
    No one here works for Adobe, and even if there was someone here from Adobe, they can't tell you what will be in future releases. We can only guess, and we don't know.
    Finally, can you explain xmp? I've done some keyword tagging on my images within Lightroom. If I were to later use another program, would the keywords be picked up? From your explanation it sounds like there are two aspects to a file, the image section and the xmp section. Is the xmp section the part that includes information about the photo? I.e. settings, metadata, etc.??
    Keywords that Lightroom writes to XMP should be readable by almost every other photographic application. But please be aware that Lightroom, by default, does not write keywords to the photo files. You have to specifically tell Lightroom to do so, either by turning on an option or by using Ctrl-S on selected photos. XMP includes all the metadata: captions, keywords, Lightroom edits, and basically any information supplied by the user rather than by the camera.
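    To illustrate that last point (an editorial sketch, not from the thread): keywords written to XMP normally end up in the dc:subject bag, which other applications can read. Here is a minimal example of pulling them out of an XMP sidecar file with the standard library; the sidecar file name is hypothetical.

```python
# Minimal sketch: read keywords (dc:subject) from an XMP sidecar file.
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DC = "{http://purl.org/dc/elements/1.1/}"

def read_keywords(xmp_path):
    """Return the list of keywords stored in the sidecar's dc:subject bag."""
    root = ET.parse(xmp_path).getroot()
    return [li.text for subj in root.iter(DC + "subject")
            for li in subj.iter(RDF + "li") if li.text]

print(read_keywords("IMG_0001.xmp"))   # hypothetical sidecar name
```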

  • Workflow Question: 24p (Varicam) Masters, 29.97 down converts for offline

    I know in part this question has been raised here before, and I know the Cinema Tools method for getting back to 24p... but I would like to discuss briefly the pending workflow on a project I am about to begin with someone who has tackled this before.
    I am cutting a feature, shot on Varicam at 24p. The film has already been rough cut in FCP and I am inheriting the project to re-work the rough cut and "on-line" finish the film. The initial editor did not reverse telecine the clips before cutting, so all the clips and the timeline are in 29.97. I know I could finish the off-line using the DV footage, then reverse telecine the EDL to on-line back to the 24p HD masters, but there are a few other issues.
    First, there are over 19 hours of original scenes that currently make up a 2-hour rough cut. This cut needs a pretty extensive re-edit before it's ready for the sound designer to step in, maybe 2-4 weeks of work. The FCP project has not been broken down by scenes, just master clips for each reel (yikes!). I am not confident in the audio that is on these DV tapes, and I would prefer to work with the original HD source to finish the job, as the audio editing will fall mostly to me to accomplish. This project will need extensive color timing as well. From a general workflow standpoint, I feel that if I had all the HD loaded and re-organized by scene, I could work more efficiently to finish all the aspects of this film.
    So the nuts and bolts of my question is this. Can I convert both the timeline AND the already logged master clips to 24p so that I can load in the HD as whole reel clips as they currently are logged and refer back to the timeline? I want to work from the existing timeline, but also have ALL the footage loaded in as HD 24p so I can sub clip by scene and rework the cut.
    Thanks in advance.

    HD isn't as simple as SD...not by a long shot. With SD, we captured offline at 29.97, and then onlined at 29.97. The biggest thing to worry about was that our shoot masters were 29.97 NDF but stock footage was 29.97DF, and that tended to mess up EDLs for the online.
    But then comes HD. 720p...1080i...1080p. 23.98, 24, 29.97, 59.94. So many options...things got really complex. Now with HD you really need to know what your deliverable is...even before you shoot. Because everything is geared toward getting that final master. You CAN proceed without knowing that, but then getting the proper master will require more hoop jumping. So, at this point, you need to figure out what you want to deliver and work towards that. 1080 23.98psf HDCAM? 720p 59.94 D5? 1080i 29.97 HDCAM? You have to figure this out.
    Editing 24p footage from the Varicam (which translates to 23.98) at 29.97 DV was a poor choice. DVCPRO HD 720p24 isn't even twice the file size of DV. DV is 3.6 MB/s, and DVCPRO HD 720p24 is 5.4 MB/s...so editing natively would have been the wise choice. But that is moot. Moving on.
    When I edited a 16mm film...shot on 16mm, telecined to HDCAM at 23.98...I captured DV downconverts that ran at 29.97. I asked the post facility doing the online if I should do it at 29.97, or should I reverse telecine to 23.98...they said stick with 29.97, so I did. Then we changed post facilities and they said I really should have been editing at 23.98...because I have now made the online rather tricky. But this is what they had me do. Export an EDL of the 29.97 timeline. Use Cinema Tools to convert that from a 30fps EDL to a 24fps EDL. Then give them this and the final project. They were able to rebuild the cut, with some issues where I had speed changes, but they did it.
    So what you need to do now is find the place that will be doing the online, and get someone there to get this thing where it needs to be. It isn't as simple as recapturing at 29.97 or 59.94. The footage was shot at 59.94, but with flags set to make it 23.98 when captured. If you had a Kona or Decklink capture card, you could capture it at 59.94, but that format won't match up with the 29.97 EDL...different time base. And you cannot capture 59.94 as 29.97...not unless the footage was flagged in the camera as 720p30.
    This is a very complex situation and I have barely scratched the surface. This is why you need help from an HD expert, preferably someone from the post facility where you will be doing the online.
    Shane
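    As a rough, editorial illustration of why the 30 fps to 24 fps EDL conversion Shane describes is possible at all when there are no speed changes: removing 2:3 pulldown maps every 5 video frames back to 4 film frames. The sketch below shows only that 4/5 frame-count relationship; it is not what Cinema Tools actually does internally, and the timecode is made up.

```python
# Toy illustration of remapping a 30 fps (2:3 pulldown) frame count to 24 fps.
# NOT what Cinema Tools does internally; it only demonstrates the 4/5 ratio that
# makes a 30->24 EDL conversion possible when no speed changes are involved.

def tc_to_frames(tc, fps):
    """Convert HH:MM:SS:FF timecode to an absolute frame count (non-drop)."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps):
    """Convert an absolute frame count back to HH:MM:SS:FF (non-drop)."""
    ff = frames % fps
    ss = frames // fps
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

def remap_30_to_24(tc):
    """Every 5 pulldown frames at 30 fps correspond to 4 original film frames."""
    return frames_to_tc(tc_to_frames(tc, 30) * 4 // 5, 24)

print(remap_30_to_24("01:00:10:15"))   # made-up source timecode; prints 01:00:10:12
```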

  • Procedural/Workflow question

    Hi all,
    After finishing the CS6 Production Premium book, I decided to start afresh with the projects I am busy with. Last Friday we shot some more footage of one of the ships we are using. I have around 11 scenes shot, with a few takes of each. Following the book, I checked the files with Bridge, ingested them with Prelude and used it to perform a rough cut - including metadata notes and markers stating the in and out points of the footage (and any pieces in between I wanted cut out). I then took that cut to Premiere, where, using the razor tool, I cut out the unwanted sections and was left with a series of scenes, back to back, of the ship against the bluescreen. I then took each shot separately and replaced it with an After Effects composition. In After Effects, I proceeded to use the keying techniques I have just picked up from the other book I am reading through (Mark Christiansen...you are a genius!) and have keyed out each of the shots rather well (I hope to get better at keying as I get more practice).
    My question now is as follows...
    I need to place these ships against various background plates. Some of them will be created from scratch in After Effects (space scenes) and some are live footage shots of vistas and fields etc. In each of the shots, the models have real movement (done camera-side). In some of the composites, the models' movement will be sufficient; in others, I will have to animate them across the background. The procedure in the book was to put the foreground and background together in Premiere. As I see it, I have three choices, and I want to know if there is a reason to choose one above the others, or if this is a call I can make based on which I enjoy more!
    1) Place the background on a separate layer in Premiere
    2) Add the background in After Effects, include the models' animation and timing in the composition, and have it linked to Premiere (as the model layers currently are)
    3) Add the background in After Effects (for reference only), do the animation and compositing for the models, then remove the background layer and re-add it in Premiere
    Logically, I assume that the 2nd choice makes the most sense; I just want to confirm this, as, having worked with the full suite for the first time now, I am very excited to see how all the products integrate so nicely. Since the choices are there, I want to make sure I am not missing a great workflow for the sake of shortcuts!
    Thank you
    Pierre

    Hi Fuzzy,
    Thanks for the confirmation!
    I started some of the compositing work yesterday while waiting for replies, and did it this way anyway. It works nicely, and then, once I am done in AE, if I already have a rendered preview, Premiere does its render nice and fast. 3 scenes done yesterday, 2 completed, one to finish touch-ups on today. It is so exciting to be at this point finally after 10 months! I have "final quality" footage and "final quality" backgrounds - let the compositing begin!
    Dave,
    Thanks for the tip. I am familiar with the light wrap, and a few techniques that I picked up from Todd, as well as from some of the other books in my ever-growing Adobe library. I am currently experimenting with a sliver-thin edge matte, leaving it quite transparent so that the background bleeds through. I want to see how that looks; otherwise, light wrap it shall be.
    Todd,
    Thanks for the link. I have not got the new CS6 Learn by Video yet, still the CS5 one, and some of the other books. But I do recall the light wrap techniques quite well. I'll put a few together for the team to evaluate. I am under the impression that light wrap will work better for the characters and their backgrounds, rather than the ships, as they travel rather fast and spend a lot of time against dark space backgrounds.
    Anyway,
    Thanks again all (oh, and on a personal note - STILL no teeth on young master Devereux! *sigh*) and I hope you all have a great holiday season. If you can stay away from the forum for more than 2 days, it's a good start! I'll be away for a week, but I doubt I'll be able to stop myself checking in every other day!
    Pierre

  • Some workflow questions

    Hello All,
    I have a couple of seemingly simple questions:
    I have a session bean method with a void return type, and I have an EJB Control for the session bean. I can invoke this method only with a Control Send node, since a Control Send and Return node does not see this method, and when I drag the method from the EJB Control and drop it onto a Control Send and Return node, it gets converted to a Control Send node.
    My questions are:
    1. Is the method invoked synchronously, meaning, does it wait until the actual EJB method returns, or is it just put into a message buffer?
    2. Does this method participate in the implicit transaction that the previous (Control Send and Return) node is part of?
    3. If the answer to either of the previous two questions is no, what can I do to ensure the synchronous invocation and the participation in the implicit transaction?
    A fourth question:
    When I start a workflow with a Client Request and Return node, and I have a couple of synchronous invocation nodes after the Client Request node, to a Worklist Control and a Session EJB, they form an implicit transaction before the Client Return node. If I throw an exception in any of these nodes and don't catch the exception, what happens?
    1. The workflow aborts, and all method calls on the previous nodes are rolled back.
    2. The method calls on the previous nodes are rolled back, but the workflow does not abort.
    Which one? If the second, what can I do with that workflow?
    A fifth question:
    What exactly happens in the following case: my workflow processing is blocking (whether actively or passively does not matter) at a Control Receive node waiting for a callback. What happens if an invocation of a Client Request related method, or a callback on another Control, or another callback on the same Control comes in?
    1. Will it be buffered in a Message Buffer and be retrievable when the processing reaches a place where it will be read from the buffer (an appropriate node for the method call, or an event choice containing the appropriate call), and the calling process returns?
    2. Will the calling process block until the method call can be processed at such a node?
    3. Will it be silently dropped?
    4. Will an exception be thrown in the calling process? If yes, what exception?
    5. Will something weird happen, and the receiving process jump to some totally improper place in the execution of the workflow?
    Thanks for the (hopefully fast) answers in advance,
    Regards,
    Robert Varga

    Hello Steven,
    Some of my questions were answered in the documentation in the web service areas. Most of them were not.
    When I state here that something is not covered in the documentation, that means I was not able to find it stated clearly for the thing I was searching for. It may be covered in other places of the documentation regarding a similar thing. It may also be that I simply overlooked it, after days of browsing the documentation. I would quite happily accept it if someone just pointed me to the particular place in the docs that answers the questions mentioned here.
    So let me just explain what I did not find in the documentation:
    Unfortunately the documentation is geared more towards a web-service viewpoint, which, I agree, has much in common with business processes (.jpd files), but the business process design view also has some differences from a normal .jws web service.
    The biggest difference is that when you design a .jpd, you assume you have something that behaves like an instruction pointer, which is supposed to keep track of which stateful node the processing is currently blocked at. The .jws web services do not have this in the design view.
    Now, the fourth and fifth questions were specifically about the behaviour and existence of this instruction-pointer-like behaviour, and these questions were not answered in the documentation at all.
    It is nowhere specified what happens to a business process instance if it throws an exception and has no exception path handling that exception. I assume that it remains in the state it was in before the invocation that triggered the operations that resulted in the exception being thrown.
    It is also not covered in the documentation whether this instruction pointer is really there and how exactly it behaves. The only thing I was able to find in the documentation was a side note that if a message buffer is in place, it is not guaranteed that the retrieval of invocations from the message buffer will be in the order of the invocations themselves. It is even a bit vague whether message buffers are per method or per jws/jpd instance.
    But it is definitely not covered what happens in the case of an out-of-order invocation of a business process. In the case of a jws, it is clear that you are supposed to handle an invocation that is not supposed to come at all.
    But in the case of a business process (jpd), it is not covered in the docs what would happen with the invocation, what would happen with the calling process, and what would happen with the called process. And this is important information for being able to plan for unexpected things, or even expected things, for example when a callback has timed out but comes in later, when we are blocking at a different control, or even at the same control but in a different iteration.
    It is also not clearly covered if, and if yes, how, one can create a synchronous callback, and what will be in the same transaction with the callback invocation (only the Control Receive node receiving the callback, or the following nodes as well?). Is disabling the message buffer on the Client Response node enough?
    It is possibly covered somewhere what the case is with void EJB methods, but I was not able to find it at all. EJB method invocations are supposed to be synchronous, but void methods can only be called with a Control Send, which is asynchronous as far as I was able to determine. I was not able to ascertain what the real behaviour is, and neither was I able to find a properly documented way to ensure synchronous invocation of a void method on an EJB Control (one which explicitly states that this happens).
    Regards,
    Robert Varga
    "Steven Ostrowski" <[email protected]> wrote in message
    news:[email protected]...
    I'm pretty sure the answer to each of these questions is in the WLI documentation.
    There is a section on how to enable message buffers to accomplish exactly what you are talking about. There is also a section on what creates an implicit transaction and what does not. I believe control sends start one, but you should reference the docs.

  • A Few Workflow Questions

    I shoot Raw, or Raw plus JPEG, and I am not yet clear on how to include Lens Correction in my workflow. In the past, I have typically opened the image in Lightroom and set white point, exposure and black level to start. Occasionally a crop and straighten might be required. Then I'd typically reduce noise and sharpen the image. In this new era, where exactly do I use Lens Profile Correction?
    Another question: there seems to be a lack of richness in the ACR 6.1 version of lens correction, so I much prefer to use the CS5 filter to make those corrections. From LR this would require an export to CS5; when and how should this be done?

    The lens correction happens at a very early stage of the raw processing pipeline. The subsequent raw processing steps are all made "lens correction" aware, so that you can combine them any way you want. Precisely because ACR/LR is able to handle the lens correction at such an early stage, it is the best place to do lens correction (for vignetting, for example), compared with the LC filter in PS CS5. When processing raw, the LC is also more predictable and less subject to what the user might have done to the image in the past, such as cropping, scaling, color transforms, or on-camera LC.
    The LC in ACR/LR emphasizes auto/batch operation for its target users; the LC filter in PS CS5 gives more control to the user. Over time, I would expect the two to take the best from each other.

  • Workflow question: removing images from project

    How can I tell Aperture to remove a file from its database without removing it from the hard drive? My workflow is:
    1. Import images to a Transfer folder
    2. Edit images
    3. Move remaining images to a folder with the shoot name, under a Photos folder
    In Aperture, I've got a Photos project with Albums for each shoot-name folder, and I've got a Transfer project that the unedited shoot is in. After I edited my first shoot in 1.5, I used the Relocate Master command to move the files of the shoot to the new folder in the Finder. I then made a new Album under the Photos project and copied the photos from Transfer to the new Album. That all worked great. As a last step, I wanted to remove the photos from the Transfer project. However, I can't: I can only delete the original file, which will delete it from the hard drive. I can't figure out how to tell Aperture to simply stop managing a file. It's a really basic issue, but it's stopping me from using Aperture to manage my library (or at least do it efficiently).

    Wow, I was really tired when I posted that. No, you were correct the first time: I was looking to delete images and for that your suggestion worked perfectly.
    What I was doing when I asked the second question was re-arranging my library in Aperture. I decided to remove everything and to use a structure that has a Folder labelled "Photos" at the top level, then multiple Projects below that, each named with a year. In each Project, there are albums for that year's shoots.
    Before I removed the old structure, I exported the last shoot I edited as a Project. Pre-1.5, I never bothered keeping edited images in Aperture - I just used it as a tool for editing the shoot. However, now that Aperture can reference external files (i.e., I'm not worried about its database), I can keep the edited images and their modifications in Aperture (modifications like keywords and image adjustments). So, after creating my new structure, I wanted to import all the edited images from my last shoot, sort of kicking off my greater use of Aperture. However, after importing the Project, I couldn't figure out how to move the images into the new shoot Album in my "2006" Project. Hence my "can't move images" post to you.
    Your tip about dragging into the Project, not the Album, worked great, moving the images into their new place. Last night (this morning, actually), my clumsy workaround was to drag the edited images into the new shoot album, then import the originals again, then use the stamp tool to copy the changes from the existing images to the new ones. I then deleted the images I'd imported via the existing Project. Much more of a pain than knowing what you told me!
    Thanks again.
    fh
