Aperture 3.2.3 performance improvement

Hello,
Yesterday I installed the Aperture 3.2.3 update.
I noticed a performance improvement.
I'm curious what your experiences are with this update.

I had thought of starting my own post, but to ask just how many issues people were experiencing - rather than what improved performance, if any, people have seen.
Since updating to 3.2.3, I've been completely afraid to use it!  Right out of the gate, it completely hung up my Mac and I had to force restart while using it, which then of course required repairing permissions, and I eventually had to do the more invasive repair of the Aperture library.  I could be having some other kind of issue, but it's funny - I only start having problems after an Aperture update!
Things in Aperture are now VERY slow.  I just imported a large project of over 2,000 photos, which might be affecting my library and Aperture's speed, but it shouldn't affect it this much.  And now anything, and I mean ANYTHING, that has to do ANY kind of processing takes forever.  Even the simplest of edits will give me the spinning beach ball.
Here's the thing that JUST freaked me out, and why I came to the forums now.  I double-clicked on a photo I had been working on so that I could get back to the full browser view showing all the pics in the project, and all of Aperture went weird and broke up.  I can only describe it as Aperture's interface breaking up into a puzzle on my display.  I'm afraid to try duplicating what happened.
Suffice it to say, I've only skimmed the support forums, but enough to know there's something seriously wrong with the recent 3.2.3 update.  3.2.1 was just as horrible, so I was happy running along fine on 3.2.2, but now 3.2.3 is horrible - crashes, hangs, and it takes forever to brush any kind of effect into shots.
I saw that Michael L. was going to try to revert to 3.2.2.  I'm also using a 2009 iMac 27" Core i7 on 10.6.8 with 16 GB of RAM.  I refuse to upgrade to buggy Lion.  If you did revert, what process did you use - just Time Machine, or something else?  And how has Aperture been running since you reverted?

Similar Messages

  • Tabular Model Performance Improvements

    Hi !
    We have built an inline tabular model which has a fact table and 2 dimension tables. The performance of the SSRS report is very slow, and this is a bottleneck in our decision on SSRS as a reporting tool.
    Can you help us with performance improvements for the inline tabular model?
    Regards,

    Hi Bhadri,
    As Sorna said, it's hard to give you detailed tips to improve the tabular model's performance based on the limited information. Here is a useful link about performance tuning of tabular models in SQL Server 2012 Analysis Services; please refer to the link below.
    http://msdn.microsoft.com/en-us/library/dn393915.aspx
    If this is not what you want, please elaborate with more detail so that we can analyze further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • DS 5.2 P4 performance improvement

    We have +/- 300,000 users that regularly authenticate using our DS. The user OU is divided into ou=internal (20,000 uids) and ou=external (280,000 uids). Approximately 85-90% of the traffic happens on the internal OU. The question is: could I get any performance improvement by separating the internal branch into its own suffix/database? Would running two databases adversely affect performance instead? We see performance impacts when big searches are performed on the ou=external branch. Would the separation isolate the issue, or would those searches most likely affect the DS as a whole?
    Thanks for your help!
    Enrique.

    "Thank you for the info. Are you a Sun guy - do you work for Sun?"
    Yes I am. I'm the Architect for Directory Server Enterprise Edition 6.0. Previously I worked on all DS 5 releases (mostly on Replication).
    "You are getting the Dukes!"
    Thanks.
    Ludovic.

  • Performance improvement in a function module

    Hi All,
    I am using SAP version 6.0. I have a function module to retrieve POs; for just 10,000 records it's taking a long time.
    Can anyone suggest ways to improve the performance?
    Thanks in advance.

    Moderator message - Welcome to SCN. But please see 'Please Read before Posting in the Performance and Tuning Forum' before posting.
    Just 10,000 records? The first rule of performance improvement is to reduce the amount of selected data. If you cannot do that, it's going to take time.
    I wouldn't bother with a BAPI for so many records. Write some custom code to get only the data you need (a rough sketch follows this reply).
    Tob
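    To illustrate the "get only the data you need" advice, here is a minimal sketch, assuming the standard EKKO purchasing document header table; the field list and filter values are hypothetical, not taken from the original post:
    -- select only the PO fields actually needed, with a selective filter,
    -- instead of calling a wide BAPI for every record:
    SELECT ebeln,        -- purchasing document number
           bukrs,        -- company code
           lifnr,        -- vendor
           aedat         -- creation date
      FROM ekko
     WHERE bukrs = '1000'            -- hypothetical company code
       AND aedat >= '20120101';      -- hypothetical date filter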

  • Please help me modify the query for performance improvement

    Hi,
    I have the below initialization:
    DECLARE @Active bit = 1;
    DECLARE @id int;
    SELECT @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END FROM dbo.Students
    I have to change this query so that the conditions id = @id and [Rank] = 'Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to retrieve 1 or 0? Can you please help me modify this initialization?

    I don't quite understand your query... maybe the below? Or provide us with sample data and your expected output...
    SELECT * FROM dbo.Students
    WHERE @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END
    But I doubt you will get a performance improvement here. Do you have an index on id?
    If you are looking to get the data for @id with [Rank] = 'Good', then use the below; make sure you have an index on the (id, [Rank]) combination.
    SELECT * FROM dbo.Students
    WHERE id = @id
    AND [Rank] = 'Good'
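    For the original goal of initializing @Active itself while keeping the conditions in a WHERE clause, a minimal sketch using EXISTS - assuming the intended semantics are "0 when a matching 'Good' row exists, else 1":
    DECLARE @Active bit = 1;
    DECLARE @id int = 42;   -- hypothetical value
    -- the conditions now live in the subquery's WHERE clause:
    SELECT @Active = CASE WHEN EXISTS (SELECT 1 FROM dbo.Students
                                       WHERE id = @id AND [Rank] = 'Good')
                          THEN 0 ELSE 1 END;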

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5, reports take a long time to load. Kindly point me to some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve performance:
    1. implement a caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in the database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.INI parameters
    Note: calculate all the aggregates in the repository itself and create a fast refresh for MVs (materialized views).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server extracts the data from the cache when the user runs the report.
    This is the latest version for OBIEE11g.
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable caching -- in NQSConfig.INI, change the cache ENABLE setting from NO to YES (a sketch follows this list).
    2. Go to the Physical layer --> right-click the table --> Properties --> check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
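    As an illustration of step 1, a minimal sketch of the relevant [CACHE] section of NQSConfig.INI; the path and sizes below are placeholder assumptions, not recommended values:
    [CACHE]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "C:\OBIEE\cache" 500 MB;   # hypothetical path and size
    MAX_ROWS_PER_CACHE_ENTRY = 100000;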
    There are multiple other ways to fine-tune reports from the OBIEE side itself:
    1) You can check the granularity of the measures in your reports and create level-based measures in the RPD using the OBIEE utility:
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will make queries hit your aggregate tables rather than the detail tables.
    2) You can use cache seeding options, using an iBot or the NQCMD command utility:
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above two methods, you can fine-tune your reports and reduce the query time.
    Also, on the safer side, just take the physical SQL from the log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA (a quick sketch follows this reply).
    Hope this helps.
    Thanks,
    Satya
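    For example, a minimal sketch of checking the physical SQL on the database side; the table name here is a placeholder standing in for the physical SQL taken from nqquery.log:
    EXPLAIN PLAN FOR
    SELECT * FROM sales_fact;   -- paste the physical SQL from the log here
    -- then display the plan the optimizer chose:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);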

  • MV Refresh Performance Improvements in 11g

    Hi there,
    the 11g New Features Guide says in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
    4. PCT refresh possible for UNION ALL MV"
    While I understand (3) and (4), I don't quite understand (1) and (2). Has there been a change in the internal implementation of the refresh (away from a single MERGE statement)? If yes, what changed? Is there a note or something in the knowledge base about these enhancements in 11g? I couldn't find any.
    This matters for our decision on whether or not to migrate to 11g...
    Thanks in advance.

    I am not quite sure what you mean. Perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
    And where is the performance improvement? What is the refresh hint?
    Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, meaning that performance drops quickly as the changed data set grows.
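    For context, a minimal sketch of the MV-log-based fast refresh setup being discussed; the table and column names are illustrative assumptions (the COUNT columns are required for an aggregate MV to remain fast-refreshable after deletes):
    CREATE MATERIALIZED VIEW LOG ON orders
      WITH ROWID, SEQUENCE (order_id, amount)
      INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST ON DEMAND AS
      SELECT order_id, SUM(amount) AS total_amount,
             COUNT(amount) AS cnt_amount, COUNT(*) AS cnt
      FROM orders
      GROUP BY order_id;
    -- fast refresh ('F') after DML on the detail table:
    EXEC DBMS_MVIEW.REFRESH('ORDERS_MV', method => 'F');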

  • Why does GN_INVOICE_CREATE show no performance improvement even in a HANA landscape?

    Hi All,
    We have a pricing update program which is used to update the price for a material-customer combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
    The logic is designed to loop over customers, with the FM called once per customer, passing all the materials valid for that customer.
    This process takes days (approx. 5 days) to execute and update a CMC set of 100 million records.
    Hence we are planning to move to HANA for a better improvement in performance.
    We built the same program in the HANA landscape and executed it in both systems for 1 customer and 1000 material combinations.
    Unfortunately, both systems gave the same runtime of around 27 seconds.
    This is very disappointing considering the performance improvement we expected from the HANA landscape.
    Could anyone shed light on any areas we are missing, and why no performance improvement was obtained?
    Also, are there any configuration changes to be made on the HANA landscape for better performance?
    The details regarding both the systems are as below.
    Suite on HANA:
    SAP_BASIS : 740
    SAP_APPL  : 617
    ECC
    SAP_BASIS : 731
    SAP_APPL  : 606
    (Screenshots of the HANA and ECC system details were attached to the original post.)
    Thanks & regards,
    Naseem

    Hi,
    just to fill in on Lars' already exhaustive comments:
    Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on what the code does and how well it "translates" to a HANA environment. The key to really minimizing run time is to replace DB calls with specific HANA views or procedures, then call these from your code.
    I wrote a blog on this; you might find it useful as a general introduction:
    A practical example of ABAP on HANA optimization
    When it comes to SAP standard code, like the FM you mentioned, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable, since it might violate data integrity) or just be patient.
    But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
    Also - as a good starting point - check out the HANA developer course on open.sap.com.
    Regards,
    Trond
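    To make the "push down to the DB server" idea concrete, a minimal SQLScript sketch; the table and column names here are hypothetical, not SAP standard objects:
    CREATE PROCEDURE get_customer_prices (
        IN  iv_customer NVARCHAR(10),
        OUT et_prices   TABLE (material NVARCHAR(18), price DECIMAL(15,2)) )
      LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
      -- one set-based SELECT instead of an ABAP loop over customers:
      et_prices = SELECT material, price
                  FROM price_conditions        -- hypothetical table
                  WHERE customer = :iv_customer;
    END;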

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something else than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded into and displaced from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
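    As a concrete (hypothetical) illustration of the two alternatives being compared; the names and the partition count are assumptions for the sketch only:
    -- one table, hash-partitioned:
    CREATE COLUMN TABLE sales_all (
        id     INTEGER,
        region NVARCHAR(2),
        amount DECIMAL(15,2) )
    PARTITION BY HASH (id) PARTITIONS 4;
    -- vs. separate tables, where the split itself carries the semantics:
    CREATE COLUMN TABLE sales_internal (id INTEGER, amount DECIMAL(15,2));
    CREATE COLUMN TABLE sales_external (id INTEGER, amount DECIMAL(15,2));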

  • DMA Performance Improvements for TIO-based Devices

    Hello!
    DMA Performance Improvements for TIO-based Devices
    http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
    Can I apply the procedure to NI-DAQmx 9? These .ini files don't seem to exist anymore in the newer version.
    Best, Viktor

    Hi Viktor,
    this page is 7 years old and doesn't apply to DAQmx.
    Regards, Stephan

  • Aperture 3 with ridiculous performance

    I started using Aperture at version 2 and I loved it. The main reason was that it was super fast at handling my 30 thousand photos.
    After I upgraded to Aperture 3, though, the performance is pathetic. I can barely scroll through photos. Sometimes creating a simple Smart Album takes more than 60 seconds -- and I'm not exaggerating.
    I really don't like Adobe, but I think I'll have to switch to Lightroom if I can't get this fixed. I've read some posts about disk fragmentation being the cause of the problem, but I find that very hard to believe. Why wouldn't the same thing happen to Aperture 2?
    Turning off Faces didn't help.
    Any other ideas on how to fix this?

    First, are you using "high performance" for the graphics settings on your MacBook Pro? You definitely want to do that.
    Second, some here have reported that deleting the Aperture preferences file has helped. I can't say, as I never had real performance issues, so I'm not sure.
    Finally, is Aperture 3 still doing anything in the background (updating thumbnails, etc.)? This can slow things down.
    I'll note that I find A3 to be faster on Snow Leopard than A2 was. I also think the fragmentation discussion is a red herring. But then I have good performance... it would be nice if there were a silver bullet explaining the difference between machines, but I'm not sure there is.

  • Performance improvement using Tez/Hive

    Hi,
    I'm a newbie in HDInsight; sorry for asking simple questions. I have queries around performance improvement of my Hive query on file data of 90 GB (15 GB * 6).
    We have enabled Tez as the execution engine. I heard the Avro format improves execution speed. Does Tez support the Avro SerDe out of the box, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
    In Tez, will the ORC column format and Avro compression work together when we set the ORC compression level in Hive to Snappy or LZO? Is there any limitation on the number of columns for ORC tables?
    Is there a best compression technique for uploading data files to Blob storage, i.e., compress and then upload? I used *.gz, which compressed to 1/4th of the file size, but the problem is that *.gz is not splittable and will always use a single mapper. Or should I use Avro with Snappy compression? Does the Microsoft Avro Library perform Snappy compression, or is there any codec that is both splittable and compressed?
    If the data structure of a file changes over time, will there be a need to reload older data? Can existing queries work without code changes?
    It has been said that Tez has real-time reporting capability, but when I query the 90 GB file (including GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
    Mahender

    -- Tez is an execution engine; I don't think you need any additional jar files to get the Avro SerDe working in Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
    -- I tried creating a table with about 220 columns; although the table was empty, I was able to query it. How many columns does your table hold?
    CREATE EXTERNAL TABLE LargColumnTable02 (t1 string, .... t220 string)
    PARTITIONED BY (EventDate string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS ORC LOCATION '/data'
    TBLPROPERTIES ("orc.compress" = "SNAPPY");
    -- You can refer to
    http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/
    (see the "Getting Avro data into Azure Blob Storage" section).
    -- It depends on what data has changed, and whether you are using Hadoop, HBase, etc.
    -- You will have to monitor your application and check the node manager logs if there is a pause in execution again. It depends on what you are doing; I would suggest opening a support case to investigate further.
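    To tie the splittability and compression points together, a minimal HiveQL sketch; the table and column names are illustrative assumptions:
    SET hive.execution.engine=tez;
    CREATE TABLE events_orc (id bigint, payload string)
    STORED AS ORC
    TBLPROPERTIES ("orc.compress" = "SNAPPY");
    -- ORC is splittable and column-oriented, so an aggregation like this
    -- can fan out across many mappers, unlike a single .gz text file:
    SELECT id, count(*) FROM events_orc GROUP BY id;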

  • Performance improvement from OAS 4.0.7 to 9iAS

    Is there any performance improvement from OAS 4.0.7 to 9iAS? What parameters need to be changed when upgrading from 4.0.7 to 9iAS?
    Regards,

    Customers generally see performance improvements when upgrading to 9iAS. How the upgrade process works is explained in the documentation at http://otn.oracle.com/docs/products/ias/index.html

  • Performance Improvement between GDK and EDK portlets

    Are there any performance improvements to be expected by migrating a portlet from the GDK library to the EDK library? I'm not looking at what the GDK and EDK offer, but rather at whether we would improve the load time of a portal page by changing a portlet from GDK to EDK.

    With the GDK, my pages inherit from "Plumtree.Remote.Csp.UI.Page" and under the hood the context (SettingsManager) is created automatically. Apparently, this is no longer the case with the EDK. Am I correct?
    According to the EDK doc, I need to call "PortletContextFactory.CreatePortletContext(Request, Response)" for that purpose. Still correct?
    -- Yes, correct. In the EDK, no SettingsManager is used, and the functionality is wrapped into IPortletRequest and IPortletResponse.
    The other more important change is that with the GDK, the language of the current thread is automatically set to the language passed by the portal in the "Accept-Language" HTTP header. This is not the case anymore, to my knowledge, and I found out that I need to insert this:
    String sLanguage = HttpContext.Current.Request.UserLanguages[0];
    System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(sLanguage);
    Is this correct or did I miss something?
    -- You do not need to use the HttpContext object of .NET. The Plumtree EDK allows you to retrieve the language as follows: the portal language is stored in a User Pref named "strLocale", and a remote portlet can read this User Pref. The only point to note is that, as with all User Prefs, you must ensure that the specific prefs are sent to the portlet in the Portlet Web Service registration.
    PortletRequest.GetSettingValue(Plumtree.Remote.Portlet.SettingType.User, "strLocale")

  • Performance Improvements (Follow-Up from P2P)

    Business One response time has a great impact on customer satisfaction. We want to invest further in this area and are therefore asking for your feedback.
    We would like to further improve performance in areas that are important for a typical customer profile. With this goal in mind, we are looking for your advice on specific performance bottlenecks that you recommend we focus on.
    This thread is a continuation of the 'Performance Improvements' discussion in the P2P SAP Business One Forum (Product Development Collaboration): http://p2p.sap.com/businessoneforum?type=join&login=1&uid=41FB661A76CED536C825C4E2B6FF4397&cid=91&go=z37225. The previous discussion on P2P is summarized in the attached file.

    Peter,
    We are also having intermittent performance issues with SBO. Our main usage is A/R invoicing: we have approx. 10 users processing A/R invoices in remote locations (WAN, via 2 Citrix servers - both dual Xeon 3.06 GHz with 4 GB DDR RAM). We process some 9,000+ A/R invoices per month, with a total of approx. 250,000 rows per month. We have recently upgraded from SBO 2004A to SBO 2005 SP01 PL18 and have noted a huge step backwards in system response time (performance). Some recently recorded times are copied below.
    We also have local users (connected directly to SBO, not via Citrix) and their performance is good.
    Looking forward to your feedback on how we can improve this situation, which is costing us valuable time (and money).
    P.S. Can you also send me a copy of the Performance Improvement discussion? I am unable to download the file from the link in the forum.
    Thanks, Ron
    [email protected]
    "12.29 seconds just to open an A/R invoice (from when you click the A/R Invoice button to when it actually opens)
    43.28 seconds from when you click the final Add on a credit to when the credit actually processes
    18.57 seconds from when you click the final Add on an invoice to when the invoice actually processes
    58.08 seconds from when you click the first Add on an invoice to when the invoice actually processes"
    When you multiply these times by 9,000+ A/R invoices, the problem is quite serious!
