Perform Recover Edges automatically

Sorry if this has been asked before, but my search didn't turn up usable results.
At the moment, I convert my RAWs to DNG upon import into LR. Then I drag the DNGs onto a shortcut to the Recover Edges application. They gain about 0.2 megapixels with no visible degradation (NEFs from a Nikon D300).
This is a time-consuming process (in Win XP, I get an error when I try to drag more than about 25 images; also, I have to confirm the conversion for each file).
I am aware that not all RAW formats see an improvement from RE, but for all those that do, an option in the DNG Converter, "Apply Recover Edges after conversion", would be welcome.
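For anyone else stuck on the same ~25-file drag limit, the per-batch invocation can be scripted. A minimal Python sketch; the executable path and the batch size here are assumptions (a hypothetical install location), so adjust both for your system:

```python
import subprocess
from pathlib import Path

# Hypothetical install path -- point this at your actual Recover Edges app.
RECOVER_EDGES_EXE = r"C:\Program Files\Adobe\DNG Recover Edges\RecoverEdges.exe"

def build_batch_commands(folder, batch_size=20):
    """Group the DNGs in `folder` into command lines of at most
    `batch_size` files each, staying under the ~25-file drag limit."""
    dngs = sorted(Path(folder).glob("*.dng"))
    return [
        [RECOVER_EDGES_EXE, *map(str, dngs[i:i + batch_size])]
        for i in range(0, len(dngs), batch_size)
    ]

# To actually run the batches (untested against the real tool):
# for cmd in build_batch_commands(r"D:\photos\import"):
#     subprocess.run(cmd, check=True)
```

This would not remove the per-file confirmation dialog, but it avoids re-dragging files by hand.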

Hi,
Yes, I also would have liked Adobe to put this functionality in.
I am sure they could, but perhaps are afraid of the problems and confusion
it might cause...
As to my utility, this is what I do:
Convert all my raw files to DNG in the same folder, and (automatically)
delete the original RAW files.
Then, I import into Lightroom.
Not nice, but hey, it works.
Greetings,
Auke
Auke, this really is what I would like the Adobe DNG Converter to look
like.
Acting outside of Lightroom would still rather confuse my workflow (e.g.
concerning copying images to another, predefined folder).
And I would prefer Adobe themselves to take sole responsibility for the
conversion of the invaluable originals.
 
But thanks very much for the invitation.

Similar Messages

  • RECOVER DATABASE AUTOMATIC UNTIL SCN

    Hi,
    in 10gR2 on Win 2003, I receive the following error :
    RECOVER DATABASE AUTOMATIC UNTIL SCN 584413;
    ORA-00905
    Unfortunately the documentation site is not available:
    Gateway Timeout
    The proxy server did not receive a timely response from the upstream server.
    Reference #1.24e70cc3.1270721326.3a6c8f5
    Do you know the correct syntax?
    Thank you.

    Hi,
    The link below can help you:
    http://ss64.com/ora/recover.html
    Note that in SQL*Plus the clause is UNTIL CHANGE rather than UNTIL SCN (UNTIL SCN is RMAN syntax), and AUTOMATIC precedes DATABASE, so try: RECOVER AUTOMATIC DATABASE UNTIL CHANGE 584413;
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com/

  • Enable "Recover Edges Option" in preferences for DNG

    Enable a "Recover Edges Option" in preferences for DNG Conversion.  This is particularly useful when applying lens distortion corrections.
    P.S.  You're always encouraging people to adopt DNG, well... duh!

    Can anyone think of any reason why this option is entirely absent from my Trackpad preferences?
    Hi, David. You seem surprised that you don't have this feature. Was it once there, but now you've found that it's gone missing, or has it never been there?
    Which 1.5GHz PB do you have? The one that was released in April '04, or the one that was released in January '05?
    The earlier 15" 1.5GHz model did not have a scrolling trackpad, though the later 1.5GHz model did....

  • Help! Can't get DNG Recover Edges to work with RAW files from Olympus E-M5.

    I've imported the RAWs into LR 4.4 (on a Mac), converted to DNG and run the plug in as directed.
    Then I go into the develop module and select crop, then click "Reset."
    Nothing happens!
    What am I doing wrong?
    I can see the missing image data in the import module, but can't recover it.

    Thanks for your link, but I don't think it has anything to do with relinking libraries, since I never had to do that on my first machine. I'm currently pursuing a different approach: I'm comparing the files used by ccpd and cups on both machines with each other. These are the results:
    On the machine with working capt service:
    http://pastebin.com/Fyif7zxz
    http://pastebin.com/n0KQj14z
    On the machine with the defective capt service:
    http://pastebin.com/rtUW5Gfm
    http://pastebin.com/vh94E3Jv
    One thing I noticed is the two libraries, resolv and nss, which are used on the defective machine. Could it be that?

  • Errors using Lightroom DNG Recover Edge plug-in

    I'm trying the new Adobe Labs' DNG Recover Edge plug-in with Lightroom 4.2, but I'm having trouble with Canon 5D Mark II and Mark III images.
    The plug-in:
    http://labs.adobe.com/downloads/lightroomplugins.html
    http://labsdownload.adobe.com/pub/labs/lightroomplugins/lightroomplugins_dngre_docs.pdf
    When I apply it to an uncropped image, I often get this error message: "An internal error has occurred. ?:0: attempt to compare nil with number". After that, I have to cancel the process manually or else it apparently runs indefinitely with no result.
    On my first attempt to get this working, I got that error message every time but one.
    The one exception was when it appeared to work, creating a new DNG with the suffix "_full" on the filename. However, it only increased the DNG size by a vanishingly small amount. Lightroom still reports its dimensions as the same, 5760x3840 pixels, in the frame around the image in the Library and in the overlaid info text (hit "I" to reveal). The metadata panel shows the larger 5796x3870 size.
    After that, the plug-in appeared to work, but it created an image that was labeled "2 of 2," with a dark frame around it in the Library module. However, I couldn't find 1 of 2.
    Then I quit and restarted Lightroom, and both of the images appeared, grouped as a stacked image. Comparing the two images, though, there was no difference, though the metadata panel said the one with the recovered edges should have more pixels.
    Anybody else seeing these errors? They seemed to be less of an issue after my restart of Lightroom but the plug-in still seems to be misbehaving in some ways.

    Regarding the plug-in problems, I also haven't been able to reproduce it since the initial failure. This includes trying it with the initial images for which the plug-in didn't work; I suspect the problem wasn't with the specific images but with some other issue about Lightroom's state. Lightroom seemed to be working OK aside from the plug-in issue. I first used the plug-in immediately after installing it, with no Lightroom restart.
    Regarding the cropping issue, it finally occurred to me that the "full" DNG created with the plug-in was cropped. So yes, once I reset the crop, I got the full 5796x3870 view. It was an expectations management issue for me -- I'd expected to see the full uncropped version as soon as I created the full DNG. This is an RTFM failing on my part, since the documentation (I now see) says, "This results in new DNG images (automatically imported) where clearing the crop will give you the full image area..." I'm sure you thought about this software behavior carefully (for example in situations where somebody had cropped and rotated an image already before creating the full DNG), but FWIW I expected to see the full view when I created the full DNG. (As an aside, at least to me, the phrase "a.k.a., active area, stage 3 image area" in the documentation doesn't help to explain anything.)
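For reference, the two sizes quoted above can be checked directly; the recovered margin is real but small:

```python
# Image dimensions quoted above: metadata panel (full DNG) vs. the
# default-cropped view in Lightroom's Library module.
full = 5796 * 3870      # after DNG Recover Edges, crop cleared
cropped = 5760 * 3840   # default crop

recovered = full - cropped
print(recovered)                   # 312120 extra pixels under the crop
print(round(recovered / 1e6, 2))   # ~0.31 megapixels
```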
    Regarding the user crop with the 5D3, yes, again a case of RTFM failure. You do need to change "add cropping info" to on.
    Thanks for your responses.

  • Recover OCR and VOTE disk after complete corruption of ASM disk groups.

    Hi Gurus,
    I am simulating a recovery situation to perform recovery of the OCR and Vote files after complete corruption of the ASM-related disks and diskgroups. I have set up my environment as follows:
    Environment: RAC
    OS: OEL 5.5 32-bit
    GI Version: 11.2.0.2.0
    ASM Disk groups: +OCR, +DATA
    OCR, Vote Files location: +OCR
    ASM Redundancy: External
    ASM Disks: /dev/asm-disk1, /dev/asm-disk2
    /dev/asm-disk1 - mapped on +OCR
    /dev/asm-disk2 - mapped on +DATA
    With the above configuration in place, I manually corrupted the +OCR and +DATA diskgroups with the dd command. I used this command to completely corrupt the +OCR disk group:
    dd if=/dev/zero of=/dev/asm-disk1. I have manual as well as automatic backups of the OCR and Vote disk. I am not using ASMLib.
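The dd step above zero-fills the device from the start, destroying the ASM disk header. For rehearsing the workflow safely, the same effect can be simulated on an ordinary file with a short Python sketch (the path is purely illustrative; never point this at a real device):

```python
def zero_header(path, nbytes=1024 * 1024):
    """Overwrite the first `nbytes` of `path` with zeros, mimicking what
    `dd if=/dev/zero of=<disk>` does to the ASM metadata at the start of
    the device (dd without a count would continue to the end, but wiping
    the header alone already makes the disk group unmountable)."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * nbytes)

# zero_header("/tmp/fake-asm-disk")  # use a scratch file only!
```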
    I followed this link:
    http://docs.oracle.com/cd/E11882_01/rac.112/e17264/adminoc.htm#TDPRC237
    When I tried to recover the OCR file, I could not do so, as there is no diskgroup to which ASM can restore the OCR and Voting disk. I could not re-create the OCR and DATA diskgroups because I cannot connect to the ASM instance. If you have a solution or workaround for my situation, please describe it; it will be greatly appreciated.
    Thanks and Regards,
    Suresh.

    Please go through the following document, which has the detailed steps to restore the OCR:
    How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems [ID 1062983.1]

  • DB Performance problem

    Hi Friends,
    We are experiencing performance problem with our oracle applications/database.
    I ran OEM and got the following report charts:
    http://farm3.static.flickr.com/2447/3613769336_1b142c9dd.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
    Are there any clues these charts can give regarding the performance problem?
    What other charts in OEM can help solve, or give assistance with, a performance problem?
    Thanks a lot in advance

    ytterp2009 wrote:
    Hi Charles,
    This is the output of:
    SELECT
    SUBSTR(NAME,1,30) NAME,
    SUBSTR(VALUE,1,40) VALUE
    FROM
    V$PARAMETER
    ORDER BY
    UPPER(NAME);
    (snip)
    Are there parameters that need tuning?
    Thanks

    Thanks for posting the output of the SQL statement. The output answers several potential questions (note to other readers: shift the values in the SQL statement's output down by one row).
    Parameters which I found to be interesting:
    control_files                 C:\ORACLE\PRODUCT\10.2.0\ORADATA\BQDB1\C
    cpu_count                     2
    db_block_buffers              995,648 = 8,156,348,416 bytes = 7.6 GB
    db_block_size                 8192
    db_cache_advice               on
    db_file_multiblock_read_count 16
    hash_area_size                131,072
    log_buffer                    7,024,640
    open_cursors                  300
    pga_aggregate_target          2.68435E+12 = 2,684,350,000,000 = 2,500 GB
    processes                     950
    sessions                      1,200
    session_cached_cursors        20
    shared_pool_size              570,425,344
    sga_max_size                  8,749,318,144
    sga_target                    0
    sort_area_retained_size       0
    sort_area_size                65536
    use_indirect_data_buffers     TRUE
    workarea_size_policy          AUTO

    From the above, the server is running on Windows, and based on the value of use_indirect_data_buffers is running a 32-bit version of Windows, using a windowing technique to access memory (database buffer cache only) beyond the 4GB upper limit for 32-bit applications. By default, 32-bit Windows limits each process to a maximum of 2GB of memory utilization. This 2GB limit may be raised to 3GB through a change in the Windows configuration, but a certain amount of the lower 4GB region (specifically in the upper 2GB of that region) must be used for the windowing technique to access the upper memory (the default might be 1GB of memory, but verify with Metalink).
    By default on Windows, each session connecting to the database requires 1MB of server memory for the initial connection (this may be decreased, see Metalink), and with SESSIONS set at 1,200, 1.2GB of the lower 2GB (or 3GB) memory region would be consumed just to let the sessions connect, before any processing is performed by the sessions.
    The shared pool is potentially consuming another 544MB (0.531GB) of the lower 2GB (or 3GB) memory region, and the log buffer is consuming another 6.7MB of memory.
    Just with the combination of the memory required per thread for each session, the memory for the shared pool, and the memory for the log buffer, the server is very close to the 2GB memory limit before the clients have performed any real work.
    Note that the workarea_size_policy is set to AUTO, so as long as that parameter is not adjusted at the session level, the sort_area_size and sort_area_retained_size have no impact. However, the 2,500 GB specification (very likely an error) for the pga_aggregate_target is definitely a problem as the memory must come from the lower 2GB (or 3GB) memory region.
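The budget described above can be tallied directly from the posted parameters; a back-of-the-envelope check (figures taken from the parameter output, with the ~1 MB-per-session figure assumed as described):

```python
# Rough budget for the lower 2 GB process address space on 32-bit Windows.
MB = 1024 * 1024

sessions    = 1200 * 1 * MB   # SESSIONS x ~1 MB per initial connection
shared_pool = 570_425_344     # shared_pool_size
log_buffer  = 7_024_640       # log_buffer

used = sessions + shared_pool + log_buffer
print(round(used / (1024 ** 3), 2))  # ~1.71 GB of the 2 GB limit
```

That leaves very little headroom before any real work is done, which is the point made above.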
    If I recall correctly, a couple years ago Dell performed testing with 32 bit servers using memory windowing to utilize memory above the 4GB limit. Their tests found that the server must have roughly 12GB of memory to match (or possibly exceed) the performance of a server with just 4GB of memory which was not using memory windowing. Enabling memory windowing and placing the database buffer cache into the memory above the 4GB limit has a negative performance impact - Dell found that once 12GB of memory was available in the server, performance recovered to the point that it was just as good as if the server had only 4GB of memory. You might reconsider whether or not to continue using the memory above the 4GB limit.
    db_file_multiblock_read_count is set to 16 - on Oracle 10.2.0.1 and above this parameter should be left unset, allowing Oracle to automatically configure the parameter (it will likely be set to achieve 1MB multi-block reads with a value of 128).
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Automatic creation of applicant activities

    Dear Experts,
    I have created an application status and an applicant activity type, and have also configured automatic creation of applicant activities. But after performing an applicant action, the automatic applicant activity is not reflected.
    Did I miss any step?
    Please help me solve this issue.
    Regards,
    Goldy

    Have you maintained the feature PACTV?

  • How to measure the performance of a SQL query?

    Hello,
    I want to measure the performance of a group of SQL queries to compare them, but I don't know how to do it.
    Is there any application to do it?
    Thanks.

    You can use STATSPACK (in 10g it is called AWR, the Automatic Workload Repository).
    Statspack -> A set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. This feature has been replaced by the Automatic Workload Repository.
    Automatic Workload Repository - Collects, processes, and maintains performance statistics for problem detection and self-tuning purposes
    Oracle Database Performance Tuning Guide - Automatic Workload Repository
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
    or
    you can use EXPLAIN PLAN
    EXPLAIN PLAN -> A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN PLAN causes the optimizer to choose an execution plan and then to put data describing the plan into a database table.
    Oracle Database Performance Tuning Guide - Using EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm#PFGRF009
    Oracle Database SQL Reference - EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9010.htm#sthref8881

  • Catalog Problem - Automatic vendor assignment problem...

    Hello All
    One of the procurement catalogs (Green Flow items) was corrupted by uploading a supplier catalog schema. We corrected this by once again uploading the correct schema to this catalog and mapping it to the master catalog. After that, the corrected schema was published.
    Since then, when users create shopping carts with items from this recovered catalog, the 'Automatic Vendor Assignment' facility is not working. All these shopping carts are being directed to the professional purchaser, even though they are Green-route shopping carts.
    Due to the 'Automatic Vendor Assignment' failure, if the requisitioner manually enters the vendor details when creating the cart, the vendor details will appear in the professional purchaser's inbox.
    Otherwise, the vendor details will not appear.
    Did I miss anything while correcting the procurement catalog?
    Any help or guidance on this is very much appreciated.
    Thank you in advance.
    Wishes & Regards,
    Mahesh. J

    Hi,
    Please check the OCI value returned from the catalog. Did you get the correct business partner?
    https://service.sap.com/sap/support/notes/395312
    Implement Note 487917 and check the log in the SLG1 transaction.
    Regards,
    Masa

  • Header Text field manadatory in Automatic Inv. settlement plan

    Hi Friends,
    We have made the header text field mandatory (RBKP-BKTXT), so that when you perform LIV you must enter the details.
    Now we are planning to use invoice plans, where the system automatically creates the LIV when MRIS is performed, according to these settings. Because of the above setting, we are unable to perform MRIS: the system throws the error "Please Enter Text in Doc. Header Text Field".
    How can we overcome this?
    Regards,
    Sai Krishna

    Hi,
    We have to do it through a user exit or do validations through FI.
    Regards,
    Sai Krishna

  • How to Perform Loose Routing using Proxy?

    Dear all,
    It seems that proxyTo() changes the Request-URI of the SIP message. How can I proxyTo() the next hop without changing the Request-URI?
    Proxy p;
    p.proxyTo(nextHop);

    You could push a route using the following SipServlet API on the SipServletRequest:
    public void pushRoute(SipURI uri)
    If the sipURI has a lr parameter, WLSS will perform loose routing automatically.
    This will add a Route header to the req with the value you have provided and send the req to that hop.
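To illustrate the distinction, here is a toy sketch in Python (not a real SIP stack; the URIs are made up): pushing a Route header leaves the Request-URI alone, whereas proxyTo() would rewrite it.

```python
def push_route(request_lines, next_hop):
    """Prepend a Route header for `next_hop` (with the ;lr parameter)
    right after the request line; the Request-URI on line 0 stays as-is,
    which is exactly what loose routing requires."""
    route = f"Route: <{next_hop};lr>"
    return [request_lines[0], route, *request_lines[1:]]

msg = ["INVITE sip:bob@example.com SIP/2.0", "To: <sip:bob@example.com>"]
routed = push_route(msg, "sip:proxy.example.net")
print(routed[0])  # INVITE sip:bob@example.com SIP/2.0  (unchanged)
print(routed[1])  # Route: <sip:proxy.example.net;lr>
```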
    cheers,
    Mihir

  • Automatic UNDO

    If we use automatic undo management in an Oracle database, can we still get the error "Snapshot too old", or is it avoided because Oracle manages the undo segments itself?
    Regards

    If you are on 9i, this note is worth a look: Note 301432.1, "Performance Slowdown During Heavy Undo Segment Onlining".
    I have discussed about this topic at my blog:
    http://jaffardba.blogspot.com/
    The question was about the 'Snapshot too old' error, not about performance related to automatic undo.
    One more parameter that may cause it is undo_retention.
    Another workaround is to run the longest query, then use the v$undostat view to size the undo tablespace and set the undo_retention parameter accordingly.
    Jaffar

  • Oracle Text ALTER INDEX Performance

    Greetings,
    We have encountered some enhancement issues with Oracle Text and really need assistance.
    We are using Oracle 9i (Release 9.0.1) Standard Edition.
    We are using a very simple Oracle Text environment, with the CTXSYS.CONTEXT indextype on domain indexes.
    We have indexed two text columns in one table, one of these columns is CLOB.
    Currently if one of these columns is modified, we are using a trigger to automatically ALTER the index.
    This is very slow, it is just like dropping the index and creating it again.
    Is this right? should it be this slow?
    We are also trying to use the ONLINE parameter for ALTER INDEX and CREATE INDEX, but it gives an error saying this feature is not enabled.
    How can we enable it?
    Is there any way in improving the performance of this automatic update of the indexes?
    Would using a trigger be the best way to do this?
    How can we optimize it to a more satisfactory performance level?
    Also, are we able to use the language lexers for indexes with Standard Edition? If so, how do you enable the CTX_DLL?
    Many thanks for any assistance.
    Chi-Shyan Wang

    If you are going to sync your index on every update, you need to make sure you are optimizing it on a regular basis to remove index fragmentation and deleted rows.
    You can set up a dbms_job to do a ctx_ddl.optimize and run a full optimize periodically.
    Also, depending on the number of rows you have and the size of the data, you might want to look at using a CTXCAT index, which is transactional, stays in sync automatically, and does not need to be optimized. CTXCAT indexes do not work well on large text objects (they are good for a couple of lines of text at most), so they may not suit your dataset.

  • Who start/stop Performance Monitor

    Hi all,
    Is there any audit trail I can check to see who started/stopped the Performance Monitor tool?
    I set Performance Monitor to run for a week, but on day 3 I found it had stopped.
    I ran it again and it has now completed.
    So, how can I tell who stopped the tool on day 3? I just can't seem to find it in Event Viewer.
    Thank you in advance!
    ---Pat

    Hi Patrick M,
    Based on your description, I want to confirm whether Performance Monitor was stopped automatically. Please check whether it stops working on a cycle; in other words, check whether Performance Monitor stops automatically every 3 days. Just a guess.
    There is a similar question, it may provide some thoughts.
    http://social.technet.microsoft.com/Forums/en-US/69d209eb-6717-499f-9b8e-0de09eb11695/performance-monitor-every-weekend-is-stopped-automatically?forum=winservergen
    Hope this helps.
    Best regards,
    Justin Gu
