AE not reading all the XMP data from DNG files

I'm working on a very large time-lapse for a construction company. I have a sunset sequence that is broken between two folders, and rendering the folders separately makes QuickTime unhappy during playback: I get an ugly jump in sunset exposure.
So I combined the two folders in Lightroom (LR) from my RAW files (NEF) and exported the entire sequence as DNG files with simpler numbering. The sequence is about 1,560 files and constitutes about 22 GB of data. I created a new LR catalog from the DNG sequence to make sure it all looked good, and it does.
I imported the same DNG sequence into AE, checked a few frames in the preview, and sent it out to render. Two-thirds of the way through the QT movie, exported in H.264 at 30 fps, I start seeing problems: huge sections look like the XMP data has not been read, then there's a good section, and then bad again. I matched the movie time to the preview in AE and indeed saw the same problems in the AE preview. So the problem began with AE reading the files from the folder, not in rendering.
I went back to LR for file comparison. There was no problem in LR. All the DNGs look great.
I'm on a MacBook Pro Retina with a 2.8 GHz Intel processor, 16 GB 1600 MHz DDR3 RAM, and NVIDIA GeForce GT 650M 1024 MB graphics.
I have the standard 69 GB cache on an external drive and 11 GB of RAM available for AE.
Installed CPUs =8
CPUs reserved for other apps = 2
RAM allocation per background CPU = 1GB
Actual CPUs that will be used = 6
I have the latest AE from the Adobe Cloud, although I haven't done the most recent update yet.
My thought was to break the folder up into smaller segments for rendering, but that's not the problem: I can see the XMP hasn't been consistently read in the preview. It doesn't make sense to me that the folder is too large for preview, because AE isn't even rendering there, just reading the files...?
Any insight, solutions?
Thanks,
Dennis

Similar Messages

  • Reading all the Exif data from TIFF Image

    Hi!
    I am trying to get EXIF information out of an image. I succeeded partly: I get the Baseline (EXIF IFD0) and the EXIF SubIFD, but not the rest (the MakerNote, EXIF Interoperability IFD, and IFD1). Can anyone tell me what I could try to get all the data out? Any help is appreciated.
    URL url = new URL(paramName);
    ImageInputStream stream = ImageIO.createImageInputStream(url.openStream());
    Iterator<ImageReader> iter = ImageIO.getImageReaders(stream);
    ImageReader reader = iter.next();
    reader.setInput(stream);
    // No need to decode the pixels just to read metadata, so the
    // "BufferedImage image = reader.read(0);" call has been dropped.
    IIOMetadata metadata = reader.getImageMetadata(0);
    org.w3c.dom.Node metadataRoot = metadata.getAsTree(metadata.getNativeMetadataFormatName());
    DOMSource doc = new DOMSource(metadataRoot);
    return doc;

    There is a separate one for Firefox, but it appears to only be compatible with Firefox versions 0.7-3.0.x, so Opanda has not made it compatible with Firefox 3.5.x and 3.6.x versions, I suppose:
    http://www.opanda.com/en/download/
    See this and the link in that post for a possible solution: http://forums.steves-digicams.com/1048665-post6.html
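    The standard ImageIO readers generally stop at IFD0 and the EXIF SubIFD, so the MakerNote, Interoperability IFD, and IFD1 never show up in the metadata tree. One possible workaround, sketched here on the assumption that you can add the open-source metadata-extractor library to the project (it is not part of the JDK), is to let it enumerate every directory it can parse:
        import java.io.File;
        import com.drew.imaging.ImageMetadataReader;
        import com.drew.metadata.Directory;
        import com.drew.metadata.Metadata;
        import com.drew.metadata.Tag;
        public class DumpAllExif {
            public static void main(String[] args) throws Exception {
                // metadata-extractor walks every IFD it can parse, including
                // the MakerNote, the Interoperability IFD, and IFD1 (thumbnail)
                Metadata metadata = ImageMetadataReader.readMetadata(new File(args[0]));
                for (Directory dir : metadata.getDirectories()) {
                    for (Tag tag : dir.getTags()) {
                        System.out.println(dir.getName() + ": " + tag.getTagName()
                                + " = " + tag.getDescription());
                    }
                }
            }
        }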

  • AP PAYABLES - Not getting all the DUE DATEs with split schedule payments

    Hello All,
    We have some issues with AP data loads into our DW from EBS 11.5.10 AP - PAYABLES.
    One of our customers is using split schedules, sharing a payment across several installments. Our
    SQL is not reading all the due dates for PAYABLES.
    We are using PAYMENT_NUM=1 from the ap_payment_schedules_all table as a condition when loading the data, to avoid duplicate rows for Payables.
    Some hints: removing the "PAYMENT_NUM=1" from the where clause gives all the due_dates, but then we have duplicate rows for Payables.
    Please help us modify our query so that it will work for split schedule payments.
    select
    inv.invoice_num,
    inv.doc_sequence_value,
    sob.currency_code,
    inv.invoice_date,
    'EH'||inv.vendor_id vendor_id,
    'EH'||inv.vendor_site_id vendor_site_id,
    ael.ae_line_number distribution_line_number,
    inv.invoice_currency_code,
    aeh.accounting_date,
    'EH'||ael.code_combination_id code_combination_id,
    nvl(ael.entered_dr,0)-nvl(ael.entered_cr,0) accounted,
    nvl(ael.accounted_dr,0)-nvl(ael.accounted_cr,0) amount,
    fuser.user_name,
    fuser2.user_name user_name2,
    inv.payment_status_flag,
    'PAYABLES' rowtype,
    inv.discount_amount_taken,
    inv.invoice_type_lookup_code invoice_type,
    inv.exchange_rate,
    inv.exchange_date,
    tax.name,
    inv.source,
    inv.attribute6 eflow_doc_id,
    sysdate transfer_date,
    sch.hold_flag,
    inv.cancelled_date,
    sch.due_date
    from
    ap.ap_invoices_all inv,
    apps.ap_ae_headers_all aeh,
    apps.ap_ae_lines_all ael,
    ap.ap_tax_codes_all tax,
    ap.ap_payment_schedules_all sch,
    gl.gl_sets_of_books sob,
    applsys.fnd_user fuser,
    applsys.fnd_user fuser2
    where
    aeh.ae_header_id=ael.ae_header_id and
    inv.set_of_books_id=sob.set_of_books_id and
    inv.invoice_id=sch.invoice_id and
    sch.payment_num*1=1 and ---------------------------------------------- *
    fuser.user_id=inv.last_updated_by and
    fuser2.user_id=inv.created_by and
    ael.tax_code_id=tax.tax_id(+) and
    ael.ae_line_type_code='LIABILITY' and
    inv.invoice_id=ael.source_id and
    ael.source_table='AP_INVOICES' and
    aeh.gl_transfer_flag='Y'
    Thanks,
    Aman

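    There is no definitive fix without knowing the target grain of the DW table, but here is a hedged sketch of one approach: pre-aggregate the schedule rows to one row per invoice before joining, so the liability lines are not multiplied while every installment still feeds the load (the aggregates chosen are illustrative; names come from the original query):
        select inv.invoice_num,
               sch.due_date,
               sch.hold_flag
          from ap.ap_invoices_all inv,
               (select invoice_id,
                       min(due_date)  due_date,   -- earliest installment
                       max(hold_flag) hold_flag   -- 'Y' if any installment is held
                  from ap.ap_payment_schedules_all
                 group by invoice_id) sch
         where inv.invoice_id = sch.invoice_id
    If the DW actually needs one row per installment, the cleaner alternative is to load ap_payment_schedules_all into its own table at invoice_id + payment_num grain and drop the schedule join from this query entirely.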

  • Multiple reads of the same data from a MultiProvider by a query (BEx)

    Hello, guys!
    We're having an issue with the performance of a query built on a MultiProvider. During our investigation, we found out that within one run of a query, it refers to the InfoProvider several times for the same data (see image attached).
    Do you have any ideas what could be the reason for multiple reads of the same data from the MultiProvider?

    Hello Nikita,
    By "copy of a query" i meant something like this as shown below :
    *Kindly click on the screenshot for a better view.
    1) See the highlighted portions below in the screenshot . See Query 2 highlighted and name of the BEx query highlighted.
    2) See the highlighted portions . See Query 3 highlighted and name of the BEx query highlighted.
    As you can see from the above screenshots i have used the same BEx query 2 times by the name of Query 2 & Query 3 . Infact i have not attached the complete screenhsot . In that i have used it 6 times.
    I have to analyze this a bit in detail but what i am guessing is that when this WEBi is called the single BEx is also called multiple times. And hence it hits the Info Provider multiple times resulting in a decreased performance.
    But this does not mean that this is wrong approach. There are various areas where you can improve for example :
    1) Either improve your BEx query if possible or use aggregates or something like that .
    2) Use the  Query stripping setting in WEBi so that unused dimensions and measures are not pulled resulting in an improved performance. It's switched on by default.
    Thanks!!
    Regards,
    Ashutosh Singh

  • Removing all the hidden data from photos

    If I export a photo to share, I often want to remove all the extra hidden stored data, such as date taken, exposure, etc. Is this easy?
    Rod

    Hi Rod,
    By accident I found that a little app I use (ImageWell) to add annotations and text to my screenshots also strips most, if not all, of the EXIF data from an image that you drag out of the image well.
    I just checked it out again. I dragged an image into the well, changed the name of the image, and dragged it out. The EXIF data was gone when I opened it with Preview and did Tools/Get Info.
    ImageWell

  • Problems reading XMP data into DNG files

    Hi there,
    I am working on a Mac and have exported files as lossy DNG for colour correction externally. They have done the work for the LR 5 process version (I am using CC) and returned XMP files. I cannot seem to read this data into the DNG files for further editing in Lightroom. I have tried re-importing them, and I have tried reading metadata from file. Any help would be appreciated, as I usually simply read the XMP into the original RAW, but the RAW files are currently on an external drive in another country!
    Thanks,
    Andy

    Moominman wrote:
    I am basically trying to export xmp files from a set of low resolution dng files so that I can access my Lightroom edits in the RAW files. I have separated the RAW and dng files in different folders
    Hi Andy,
    I dunno how best to get extracted XMP files into the raw folders, but if you are comfortable with exiftool, you can use it to extract XMP sidecars from DNG files.
    If you want a turn-key solution which does not require you to futz with exiftool, then consider a free plugin I wrote:
    robcole.com - xEmP
    It will allow you to create xmp sidecars with all your DNG adjustments and metadata (which can then be applied to the non-dng raw files).
    However, if you won't need the DNGs in your catalog afterward, then the easiest way is to convert them back to proprietary raw format using this plugin (also free, and I wrote it):
    robcole.com - UnDNG
    Conceptually, you can think of it as converting the DNGs to proprietary raw format, but note: it doesn't convert anything, it just allows existing raw files that are NOT in the catalog, to replace the DNGs that are in the catalog. All adjustments and metadata and everything else will be preserved (just like when you convert a proprietary raw to DNG format).
    Rob
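    For reference, the exiftool route Rob mentions can be a single command. A sketch (the folder path is a placeholder; -b extracts the raw XMP block and -w writes one .xmp sidecar per DNG):
        exiftool -xmp -b -w %d%f.xmp -ext dng /path/to/dng/folder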

  • Error 105, Could not read full block (2048 bytes) from checkpoint file ~/dirchk/sdfsdj.cpe

    Hi expert,
    I am getting the below error in GoldenGate. A mount point filled up; I released the space, but the same error persists for all GG processes. I can see the *.cpe and *.cpr files have become 0 bytes, so I deleted and re-added the extract and replicat. While adding the replicat I used ADD REPLICAT with a checkpoint table, and because of that, multiple entries for the same replicat appeared in the checkpoint table. My checkpoint details are also present in ./GLOBALS. My doubt now is: if I add a replicat specifying the checkpoint table name, will a duplicate entry be created, and what is the workaround for this?
    MANAGER RUNNING
    Invalid checkpoint for EXTRACT  qqqq   (error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXX)
    Invalid checkpoint for EXTRACT  qqq(error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXXX)
    Invalid checkpoint for REPLICAT qqq  (error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXXXX)

    Hi Kariyath,
    Increase the page size of your Windows machine. Check for the recommended page size, and remember that the recommended page size should be the lower limit. If you have any issue, feel free to ask. Your problem will be solved.
    Award suitable points
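    In case it helps anyone else hitting the 0-byte checkpoint files: DELETE REPLICAT (issued after DBLOGIN) also removes the group's row from the checkpoint table, so re-adding the group should not leave duplicate entries behind. A hedged GGSCI sketch, with the group, trail, and table names as placeholders:
        GGSCI> DBLOGIN USERID ggadmin, PASSWORD ********
        GGSCI> DELETE REPLICAT rqqq
        GGSCI> ADD REPLICAT rqqq, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ggschkpt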

  • Not able to extract performance data from .ETL file using xperf commands. Getting error "Events were lost in this trace. Data may be unreliable ..."

    Not able to extract performance data from an .ETL file using xperf commands.
    Xperf command:
    xperf -i C:\TempFolder\Test.etl -o C:\TempFolder\BootData.csv -a process
    Getting the following error after executing the above command:
    "33288636 Events were lost in this trace. Data may be unreliable. This is usually caused by insufficient disk bandwidth for ETW logging. Please try increasing the minimum and maximum number of buffers and/or the buffer size. Doubling these values would be a good first attempt. Please note, though, that this action increases the amount of memory reserved for ETW buffers, increasing memory pressure on your scenario. See "xperf -help start" for the associated command line options."
    I changed the page file size but it did not work for me.
    Does anyone have an idea how to solve this problem and extract the ETL file data?

    I want to mention one point here: I have 4 machines in total, and on 3 of them the above
    commands work properly. Only one machine has this problem.
    Hi,
    You could try using xperf to collect the trace ETL file and see whether it can be extracted on this computer.
    Refer to following articles:
    start
    http://msdn.microsoft.com/en-us/library/windows/hardware/hh162977.aspx
    Using Xperf to take a Trace (updated)
    http://blogs.msdn.com/b/pigscanfly/archive/2008/02/16/using-xperf-to-take-a-trace.aspx
    Kate Li
    TechNet Community Support
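    For what it's worth, the buffer settings the error message refers to are specified when the trace is started. A hedged sketch (the DiagEasy kernel group and the values are illustrative only; see "xperf -help start" for the real options):
        xperf -on DiagEasy -BufferSize 1024 -MinBuffers 128 -MaxBuffers 256
        (run the scenario being measured)
        xperf -d C:\TempFolder\Test.etl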

  • How to read and write data from an external file

    Hi..
    How can I read and write data from an external file using PL/SQL?
    Is it possible with dynamic SQL, or some other way?
    Regards
    Raju

    utl_file
    Re: How to Create text(dat) file.
    Message was edited by:
    jeneesh
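    To expand on the UTL_FILE pointer, here is a minimal PL/SQL sketch. It assumes a DIRECTORY object (called DATA_DIR here, a placeholder) has already been created and that you have read/write grants on it:
        declare
          f_in   utl_file.file_type;
          f_out  utl_file.file_type;
          l_line varchar2(32767);
        begin
          f_in  := utl_file.fopen('DATA_DIR', 'input.txt',  'R');
          f_out := utl_file.fopen('DATA_DIR', 'output.txt', 'W');
          loop
            utl_file.get_line(f_in, l_line);   -- raises NO_DATA_FOUND at end of file
            utl_file.put_line(f_out, l_line);
          end loop;
        exception
          when no_data_found then
            utl_file.fclose(f_in);
            utl_file.fclose(f_out);
        end;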

  • My laptop with synced Firefox crashed and I had to do a fresh install; how can I retrieve all the synced data from the Mozilla servers?

    I had set up Firefox Sync on my Windows 7 laptop, which crashed, and I had to reinstall Windows. I used to be able to share content with my Firefox-synced Android too. Is there a way I can retrieve the synced data from the Mozilla servers? I tried to set up Sync on my new installation on a Mac 10.6, but when I click on Set Up Sync it asks me to either:
    # Set up a new sync account (I already have one)
    # I already have a sync account
    When I select the second option it tells me to add a device. I went into the Sync options and tried "Replace all data on this computer with my sync data", but nothing happened. When I try to connect from my Android it also asks me to add a device, with the 3 boxes with letters in them.
    So in both cases it is asking me to "Add a device", but I cannot connect to my sync account in either case to sync data from there.

    Please search the forums. This has been covered here extensively.

  • All the creation dates for my files are wrong?

    I've had this problem for a long time, but I finally want to fix it once and for all. All the documents, pictures, and other files on my computer have the same creation date: 1/5/09. When I right-click a file it shows the correct creation date, but if I just left-click it shows the wrong date. Is there a way I can fix it?

    The creation date of your system will be the same for most of your system documents; there's no easy way to change this information. However, you might also see a more current date, indicating Modified information, and that will be more current.
    There are computer commands that will blank out much of your system data, but that's to be done only in cases of major problems. Don't worry about the creation date of your computer/data.
    Post here if you have additional questions, comments, praise!

  • Hot Sync does not bring in all the contact data from my Centro

    I have a Sprint Palm Centro that has worked well and synced fine with the previous version of Palm Desktop. I recently downloaded and installed Palm Desktop by ACCESS v6.2.2. After several tries, I was finally able to get a fairly good sync by selecting Handheld Overwrites Desktop. BUT, it will not bring in the phone numbers or email addresses in the 6th and 7th contact fields (I have these all set to Mobile and Email). The numbers are on the phone but will not pull into my desktop. Any suggestions?
    Post relates to: Centro (Sprint)

    Are you sync'ing the new device to the same username in Palm Desktop as an older device?
    If you are, this will cause problems on the new device. When you sync a device, it saves the device settings, 3rd-party apps, preferences, etc. in the backup folder within the username folder on the PC. When you sync the new device, it copies the contents of the old device's backup folder onto the new device. This corrupts the settings on the new device, which causes its erratic behavior.
    If you did not have the issue with the previous version, you may want to revert back to the version you had on the PC before upgrading. I can supply the instructions for a "clean" uninstall of Palm desktop.
    For reference purposes, click on the following link for the support page for your device on the kb.palm.com webpage.
    http://www.palm.com/us/support/centro/centro_sprint/
    There are links on the page to the user guide, troubleshooting, how to's, downloads, etc.
    Post relates to: Palm i705

  • Updating all the changed data from a VO-based table into the database

    Hi,
    There is an advanced table on my page with 14 columns in it. The data for these columns comes from 8 different tables, and the table is based on a VO (SQL query). There is also an Apply button on the page. When I click Apply, which is a submit button, I want all the changes made in the advanced table to be captured and updated in the respective columns in the database.
    Can anyone help me out with doing this?
    Thanks
    Chandu

    A SQL-query-based VO is not updatable. You have to base your ViewObject on EntityObjects representing your 8 different tables.
    It's good practice to base all your VOs on EOs, even for read-only use, unless your query contains a DISTINCT, UNION, or GROUP BY clause.
    Setting the "updatable=false" property of an EO gives it read-only behavior.
    Is your VO based on an EO?
    See this- http://howtolearnadf.blogspot.com/2012/04/how-to-create-oracle-adf-view-object.html
    http://weblog.singhpora.com/2010/02/read-only-view-object-also-be-based-on.html

  • Thunderbird not able to import all the previous RSS feeds from an OPML file

    I'm using Thunderbird (Linux) and am moving distros. I exported all the RSS feeds with the option available and got the OPML file, then imported the OPML file back into the new Thunderbird. But Thunderbird didn't retrieve all the RSS feed lists from the OPML file and only displayed some. I know because I checked what was displayed against what was listed in the OPML file. Is there a limit to how many feed lists you can import?

    Hi,
    it took me some time to understand what the code should do.
    First of all, please let me say that wiring the error cluster through is essential for debugging. One problem I see is that the refnum from the document element is zero, so all the subsequent calls for children are failing. You will only notice this by debugging or execution highlighting (see James' hint). You will also see a hint of a problem in this region of the code when you look at your loadXml indicator: this one was always False.
    I have changed two things:
    LoadXml.vi -> Read from Text File instead of binary
    Main.vi -> Using the read text as input for the LoadXml invoke method, not the pretty-printed one.
    Now LoadXml performs the action and your 2D array is filled with some content. My advice would be to carefully debug your code and look where references get lost and where errors occur.
    Good Luck
    Tyler
    Certified LabVIEW Architect

  • IDocReceiver not processing all the IDocs produced from PFAL in OrgModeler

    Dear experts,
    We are on below Nakisa version.
    Name Nakisa OrgModeler
    Version 4.0 SP1
    Build 0910021700
    I ran the initial load for OrgModeler and it produced around 500 IDocs. When I followed the command line interface, it processed around 56 IDocs. The IDoc status becomes 03 (Data Passed to Port OK) at the R/3 end for all the IDocs. However, the IDocReceiver is skipping many IDocs in between and processing only a few. I checked the database for HRP1000 and HRP1001, and the records loaded are not even 20% of the records in R/3 for the objects and relationships (HRP1000 object types C, O, Q, S and HRP1001 002/003/007/008/011/012 etc.) we filter on in the distribution model.
    I think the issue might not be an authorization problem, as it is fetching records from SAP R/3 and loading a few of them into the database.
    I checked with my Basis consultant about another receiver running and grabbing the missing records. He confirmed that only one IDocReceiver is running at the Nakisa end. I even checked the SAP gateway server (SMGW): with TP name NAKIDOC and symbolic destination NAK_RFC_DEST I can see only one record, which appears on and off. When the record appears on the gateway server, the corresponding IDoc is processed at the command line interface. Until the other record appears on the SAP gateway server, the IDocs are missing at the command line interface.
    Below is the screenshot from the IDocReceiver. (After IDoc 1695358 it processed 1695366, then 1695372, etc.)
    Any suggestions would be a great help.

    Dear All,
    My Basis guy finally realised that he had started a JCo provider at the NetWeaver end, which was causing the issue. He stopped it and all the IDocs processed successfully.
    I learned the points below, which might be useful for a few folks.
    1. No IDocs are processed and all fail with 'Bean IDOC_INBOUND_ASYNCHRONOUS not found on host'. This should be an authorization error, and assigning the authorization object S_IDOCDEFT (EDI_TCD = "WE30"; ACTVT = "03") will resolve the issue.
    2. Only a few IDocs are processed at the IDocReceiver end:
    i. Check for multiple IDoc receivers which might be active and running.
    ii. Check the number of connections and the timeout parameters, and set them to the maximum.
    Thanks,
    Manohar
