Performance improvements to LR3 features needed in LR4

There are other threads already in existence which discuss how performance appears to have decreased in the LR4 beta, and I'm sure that's partly explained by the fact that it IS a beta containing unfinished & unoptimized code, and partly because it is actually doing more work than before.
However, I have seen no mention from Adobe of any attention being given to the various performance issues that were already present in LR3.x, and I think such performance tuning/optimization is sorely needed.
In particular:
Threading & parallelization - I recently upgraded my dual-core, 8GB RAM machine to quad-core (by changing only the CPU), but this gave a disappointingly small boost in performance; it was barely noticeable during general use and exporting. Surely something can be done to make better use of multiple cores, especially to ensure that the main application/editing UI *never* becomes sluggish (i.e. any LR tasks should run in the background and concede resources to the UI thread). And during export, couldn't N-1 cores be used to process and export one image per core (in parallel), leaving one core for the UI?
Poor performance and user experience when deleting files - there's no visual feedback at all (a progress bar is needed), and it takes MUCH longer than is reasonable; deleting a few hundred files sometimes takes 5 minutes, during which time LR is completely unusable.
Importing images - even for RAW/NEF/DNG files, the perceived performance could surely be improved by populating the grid with thumbnails almost instantly: scan the headers of the files to extract the metadata (including the embedded thumbnails), THEN go back for a second pass which actually transfers/copies/moves the full-size images. This approach would permit more useful work to be done while the import is in progress.
Module switching latency - a necessary evil because of the separation between Library and Develop, but switching between them often takes seconds; this should be instant!
That's far from an exhaustive list, but PLEASE could Adobe devote some resources to making these existing features perform better and in doing so alleviate some of the frustration & wasted time experienced in using LR day in, day out?
Others are of course very welcome to list other important performance issues which I've omitted / not aware of...

Agreed. On my list, some metadata fields (Title, Caption) have been replicated in the new Map Module, proving in principle that these fields could exist in other modules. Given the option to have metadata and keyword panels present in Develop mode, I'd never use the Library again.
I feel that LR performance/productivity has gotten shortchanged this time around. Let us go well beyond hiding and showing modules to customize our LR workspace across multiple displays. GPU-accelerate operations. Optimize the application, catalog and cache to benefit more from SSDs. Make all operations lightning fast, then worry about superfluous stuff like Maps.
Other than the book module (which is hamstrung by the Blurb-only initial limitation) nothing in LR4 is going to save me time. It feels like LR is drifting from its original high-volume mission. There are plenty of consumer-grade photo management apps out there, I don't want LR to turn into another one of them. 20 months till LR5 is a long time to wait to see if LR priorities embrace speed again.

Similar Messages

  • Performance improvement in a function module

    Hi All,
    I am using SAP version 6.0. I have a function module to retrieve the POs; for just 10,000 records it is taking a long time.
    Can anyone suggest ways to improve the performance?
    Thanks in advance.

    Moderator message - Welcome to SCN, but please see "Please Read before Posting in the Performance and Tuning Forum" before posting.
    Just 10,000 records? The first rule in performance improvement is to reduce the amount of selected data. If you cannot do that, it's going to take time.
    I wouldn't bother with a BAPI for so many records. Write some custom code to get only the data you need.
    Tob
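    Picking up on the "get only the data you need" advice, a restricted selection might look roughly like this (a sketch only; it assumes the standard purchasing header table EKKO, and the filter values are invented):
    -- Sketch: select only the fields you need, for a restricted range,
    -- instead of pulling all PO records through a BAPI.
    SELECT ebeln,                      -- purchasing document number
           bukrs,                      -- company code
           aedat                       -- creation date
      FROM ekko
     WHERE bukrs = '1000'              -- hypothetical company code
       AND aedat >= '20120101';        -- hypothetical date restriction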

  • MV Refresh Performance Improvements in 11g

    Hi there,
    the 11g New Features Guide says in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
    4. PCT refresh possible for UNION ALL MV"
    While I understand (3.) and (4.), I don't quite understand (1.) and (2.). Has there been a change in the internal implementation of the refresh (away from a single MERGE statement)? If yes, which? Is there a Note or something in the knowledge base about these enhancements in 11g? I couldn't find any.
    These points matter for our decision on whether or not to migrate to 11g...
    Thanks in advance.

    I am not quite sure what you mean. Do you mean, perhaps, that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
    And where is the performance improvement? What is the refresh hint?
    Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, meaning that performance drops very quickly as the changed data set grows.
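    For reference, a minimal fast-refresh setup looks roughly like the following (a sketch only; table and column names are invented, and whether the 11g refresh improvements apply depends on the MV type):
    -- MV log with the columns needed for fast refresh of an aggregate MV
    CREATE MATERIALIZED VIEW LOG ON sales
      WITH ROWID, SEQUENCE (prod_id, amount_sold)
      INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW sales_summary_mv
      REFRESH FAST ON DEMAND
      AS
      SELECT prod_id,
             SUM(amount_sold)   AS total_sold,
             COUNT(amount_sold) AS cnt_sold,
             COUNT(*)           AS cnt
        FROM sales
       GROUP BY prod_id;
    -- Fast ('F') refresh driven by the MV log; this is the internal DML that
    -- 11g claims to combine (merge and delete) more efficiently.
    -- (EXEC is SQL*Plus shorthand for an anonymous PL/SQL block.)
    EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'F');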

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something different from the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
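    To make the comparison concrete, a hash-partitioned column table might look like the sketch below (table name, columns and partition count are hypothetical):
    -- One table, hash-partitioned over 4 partitions; each partition behaves
    -- much like an independent table (own delta store, independent load/unload).
    CREATE COLUMN TABLE sales_data (
        sale_id   BIGINT        NOT NULL,
        region    NVARCHAR(2),
        amount    DECIMAL(15,2),
        PRIMARY KEY (sale_id)
    )
    PARTITION BY HASH (sale_id) PARTITIONS 4;
    -- The multi-table alternative would simply be sales_data_1 .. sales_data_4,
    -- with the application itself deciding which table to read or write.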

  • Performance improve using TEZ/HIVE

    Hi,
    I'm a newbie to HDInsight, sorry for asking simple questions. I have queries around improving the performance of my HIVE query on file data of 90 GB (15 GB * 6).
    We have set the execution engine to TEZ. I heard the AVRO format improves execution speed; is the AVRO SerDe enabled for TEZ queries, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
    In TEZ, will ORC column format and Avro compression work together when we set the ORC compression level on Hive to Snappy or LZO? Is there any limitation on the number of columns for ORC tables?
    Is there a best compression technique for uploading data files to Blob (I mean compress and upload)? I used *.gz, which compressed the file to 1/4 of its size, but the problem is that *.gz is not splittable and will always use fewer (single) mappers. Or should I use Avro with Snappy compression? Does the Microsoft Avro Library perform Snappy compression, or is there any compression which can be split and compressed?
    If the data structure of the file changes over time, will there be any need to reload older data? Can the existing query work without changes in code?
    It has been said that TEZ has real-time reporting capability, but when I query the 90 GB file (including GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
    Mahender

    -- Tez is an execution engine; I don't think you need any additional jar file to get the Avro SerDe working on Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
    -- I tried creating a table with about 220 columns; although the table was empty, I was able to query from the table. How many columns does your table hold?
    CREATE EXTERNAL TABLE LargColumnTable02 (t1 string, .... t220 string)
    PARTITIONED BY (EventDate string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS ORC
    LOCATION '/data'
    TBLPROPERTIES ("orc.compress"="SNAPPY");
    -- You can refer to http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/ ("Getting Avro data into Azure Blob Storage" section).
    -- It depends on what data has changed, and whether you are using Hadoop, HBase, etc.
    -- You will have to monitor your application and check the node manager logs if there is any pause in execution. It depends on what you are doing; I would suggest opening a Support case to investigate further.
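    On the splittability point, here is a rough sketch of an Avro-backed table with Snappy compression (assuming a Hive version where STORED AS AVRO is available; table, columns and paths are made up):
    -- Avro files are block-compressed, so Snappy-compressed Avro stays
    -- splittable across mappers, unlike a single .gz file.
    SET hive.exec.compress.output=true;
    SET avro.output.codec=snappy;
    CREATE EXTERNAL TABLE events_avro (
        event_time  string,
        user_id     string,
        payload     string
    )
    STORED AS AVRO
    LOCATION '/data/events_avro';
    -- Populate from an existing staging table (text, ORC, ...):
    INSERT OVERWRITE TABLE events_avro
    SELECT event_time, user_id, payload FROM events_staging;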

  • Performance Improvement between GDK and EDK portlets

    Are there any performance improvements to be expected by migrating a portlet from the GDK library to the EDK library? I'm not looking at what GDK and EDK offer, more at whether we would improve the load time of a portal page if we change a portlet from GDK to EDK.

    With GDK, my pages inherit from "Plumtree.Remote.Csp.UI.Page" and under the hood, the context is created (SettingsManager) automatically. Apparently, this is not the case anymore with the EDK. Am I correct?
    According to the EDK doc, I need to call "PortletContextFactory.CreatePortletContext(Request,Response)" for such purpose. Still correct?
    -- Yes, correct. In the EDK, no SettingsManager is used, and the functionality is wrapped into IPortletRequest and IPortletResponse.
    The other more important change is that with the GDK, the language of the current thread is automatically set to the language passed by the portal in the "Accept-Language" HTTP header. This is not the case anymore, to my knowledge, and I found out that I need to insert this:
    String sLanguage = HttpContext.Current.Request.UserLanguages[0];
    System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(sLanguage);
    Is this correct or did I miss something?
    -- You do not need to use the HttpContext object of .NET. The Plumtree EDK allows you to retrieve the language as follows: the portal language is stored in a User Pref named "strLocale", and a remote portlet can read this User Pref. The only point to note is that, as with all User Prefs, you must ensure that the specific prefs are sent to the portlet in the Portlet Web Service registration.
    PortletRequest.GetSettingValue(Plumtree.Remote.Portlet.SettingType.User, "strLocale")

  • Tabular Model Performance Improvements

    Hi !
    We have built an inline tabular model which has a fact table and 2 dimension tables. The performance of the SSRS report is very slow, and this is a bottleneck in deciding on SSRS as the reporting tool.
    Can you help us with performance improvements for the inline Tabular model?
    Regards,

    Hi Bhadri,
    As Sorna said, it is hard to give you detailed tips to improve the tabular model performance with the limited information available. Here is a useful link about performance tuning of Tabular Models in SQL Server 2012 Analysis Services; please refer to the link below.
    http://msdn.microsoft.com/en-us/library/dn393915.aspx
    If this is not what you want, please elaborate with more detail so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Error message while importing upgraded LR3 catalog to new LR4 catalog

    I'm using a Win7 x64 system (3 GHz Xeon dual core, 8 GB RAM).  After upgrading an LR3 catalog to LR4, it would not display all images in GRID VIEW in the upper-level folders, i.e. D:/Photos/2011.  It will display the images in the 2011 folder but not in the Photos folder.  And yes, I do have the "Show Photos in Subfolders" option selected. 
    By importing this updated LR4 catalog to a new LR4 catalog the display problem disappears.  However, I get an error message at the end of the import saying that "Lightroom could not import this catalog because of an unknown error."  This occurs after it says it is importing updated keywords.  But, as compared to the LR3 catalog, the new LR4 catalog shows the correct number of images in the catalog, shows the correct number of collections and seems to have all the keywords in the keyword list panel. 
    Because there seems to be some sort of catalog database upgrade error, I am hesitant to use LR4 for production at this time.  Does anyone have any suggestions as to what this error might be, or a work around for the error so I can be confident in my LR4 catalog?
    Thanks,
    Tony...

    I've upgraded to LR4.1 RC1 and then created a new catalog, imported from my original LR3 cat. It updated that temporary cat and then imported to the new LR4 cat without issues or error messages. I then just did an upgrade of my original LR3 cat and saved it in another location. The upgrade went without a hitch or error messages and I am now using that catalog.  So far, so good with the new candidate.
    Tony...

  • I have a 27" iMac and want to play Windows games with optimal performance.  what do I need to do?

    I have a 27" iMac and want to play Windows games with optimal performance.  What do I need to do?

    Install Windows with Boot Camp. http://www.apple.com/support/bootcamp/

  • DS 5.2 P4 performance improvement

    We have +/- 300,000 users that regularly authenticate using our DS. The user ou is divided into ou=internal (20,000 uids) and ou=external (280,000 uids). Approximately 85-90% of the traffic happens on the internal ou. The question is: could I get any performance improvement by separating the internal branch into its own suffix/database? Would running two databases adversely affect performance instead? We see performance impacts when big searches are performed on the ou=external branch. Would the separation isolate the issue, or would those searches most likely affect the DS as a whole?
    Thanks for your help!
    Enrique.

    > Thank you for the info. Are you a Sun guy - do you work for Sun?
    Yes I am. I'm the Architect for Directory Server Enterprise Edition 6.0. Previously I worked on all DS 5 releases (mostly on Replication).
    > You are getting the Dukes!
    Thanks.
    Ludovic.

  • Please help me to modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit = 1;
    DECLARE @id int;
    SELECT @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END FROM dbo.Students;
    I have to change this query so that the conditions id = @id and [Rank] = 'Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to return 1 or 0? Can you please help me modify this initialization?

    I don't understand your query... Maybe the below? Or provide us with sample data and your expected output...
    SELECT * FROM dbo.Students
    WHERE @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END
    But I doubt you will see a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with [Rank] = 'Good', then use the below. Make sure you have an index on the id, Rank combination.
    SELECT * FROM dbo.Students
    WHERE id = @id
      AND [Rank] = 'Good'
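    A sketch of the supporting index mentioned above (the index name is made up; verify the column order against your actual predicates):
    -- Hypothetical index to support WHERE id = @id AND [Rank] = 'Good'
    CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
        ON dbo.Students (id, [Rank]);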

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5 reports take a long time to load. Kindly provide me with some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.INI parameters
    Note: calculate all the aggregates in the Repository itself and create a fast refresh for MVs (materialized views).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when the user runs the report, the BI Server then serves the data from the cache.
    This is the latest tuning guide for OBIEE 11g:
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable caching -- in NQSConfig.INI, change the cache ENABLE setting from NO to YES.
    2. Go to the Physical layer --> right-click the table --> Properties --> check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
    There are multiple other ways to fine-tune reports from the OBIEE side itself:
    1) You can check the granularity of your measures in reports and have level-based measures created in the RPD using the OBIEE Aggregate Persistence Wizard.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will pick your aggregate tables instead of the detailed tables.
    2) You can use cache seeding options, using an iBot or the NQCMD command-line utility:
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above 2 methods, you can fine-tune your reports and reduce the query time.
    Also, on the safer side, just take the physical SQL from the query log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps
    Thanks,
    Satya
    Edited by: Satya Ranki Reddy on Aug 12, 2012 7:39 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:12 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:20 PM
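    To make the last suggestion concrete, checking the physical SQL directly on the database could look roughly like this (a sketch only, assuming an Oracle backend; the statement and table name are placeholders for the SQL you copy out of nqquery.log):
    EXPLAIN PLAN FOR
      SELECT /* paste the physical SQL from nqquery.log here */ COUNT(*) FROM sales_fact;
    -- Show the execution plan just captured:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);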

  • Why GN_INVOICE_CREATE has no performance improvement even in HANA landscape?

    Hi All,
    We have a pricing update program which is used to update the price for a Material-Customer combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
    The logic loops over customers; for each customer the FM is called, passing all the materials valid for that customer.
    This process takes days (approx. 5 days) to execute and update the CMC for 100 million records.
    Hence we are planning to move to HANA for a better improvement in performance.
    We built the same program in the HANA landscape and executed it in both systems for 1 customer and a 1,000-material combination.
    Unfortunately, both systems gave the same runtime of around 27 seconds.
    This is very disappointing considering the performance improvement we expected on the HANA landscape.
    Could anyone throw light on any areas we are missing and why no performance improvement was obtained?
    Also, are there any configuration changes to be made on the HANA landscape for better performance?
    The details regarding both the systems are as below.
    Suite on HANA:
    SAP_BASIS : 740
    SAP_APPL  : 617
    ECC
    SAP_BASIS : 731
    SAP_APPL  : 606
    [Screenshots of the HANA and ECC system details were attached to the original post.]
    Thanks & regards,
    Naseem

    Hi,
    just to fill in on Lars' already exhaustive comments:
    Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain on performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on the functionality of the code and how well it "translates" to a HANA environment. The key to really minimizing run time is to replace DB calls with specific HANA views or procedures, then call these from your code.
    I wrote a blog on this; you might find it useful as a general introduction:
    A practical example of ABAP on HANA optimization
    When it comes to SAP standard code, like your mentioned FM, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those being initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable due to the fact that this might violate data integrity) or just be patient.
    But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
    Also - as a good starting point - check out the HANA developer course on open.sap.com.
    Regards,
    Trond
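    As a very small illustration of the code-pushdown idea described above (a sketch only; the procedure, table and columns are invented and are not related to GN_INVOICE_CREATE itself):
    -- Instead of looping over customers in ABAP and calling an FM per customer,
    -- push the set-based work into a SQLScript procedure on the DB server.
    CREATE PROCEDURE update_customer_prices (IN iv_customer NVARCHAR(10))
      LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
    BEGIN
      UPDATE prices
         SET net_price = net_price * 1.05   -- hypothetical price adjustment
       WHERE customer_id = :iv_customer;
    END;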

  • The new 'improved text selection feature' is too sensitive.

    The new 'improved text selection feature' is too sensitive. It tends to highlight a whole page whenever I try to highlight just a single line. And then it keeps on crashing!
    Please fix!

    Could you please share your pdf file with us at [email protected] so that we can investigate this issue?
    Thanks,
    Ankit
    Adobe Reader Team

  • DMA Performance Improvements for TIO-based Devices

    Hello!
    DMA Performance Improvements for TIO-based Devices
    http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
    Can I apply the procedure to NI-DAQmx 9? These ini-files don't seem to exist anymore in the newer version.
    Best, Viktor

    Hi Viktor,
    this page is 7 years old and doesn't apply to the DAQmx.
    Regards, Stephan
