Much Improved Performance w/Leopard!

I am surprised I have not seen more about the performance improvement. I was worried my G4 PowerBook would take a hit, but I have been editing photos in iPhoto and Photoshop CS3, and it's noticeably faster than in Tiger. My Archive and Install was done in 45 minutes with no problems, and I'm enjoying Leopard. I read in another post about creating a 00.jpg image that sorts first in the stack, so you can put whatever image you want there to show through. There's some room for future updates, but there must have been some serious back-end work to give such a serious boost to performance on an old PowerBook.

I agree. With my old 1GHz PowerBook, I found an increase in responsiveness and in the loading of folders and webpages. It surprised me: I expected browsing my files with Cover Flow to slow the machine down, but I find it much more responsive than large directories loaded in Tiger's Finder.

Similar Messages

  • Upgraded both computers in the household and found Lion is too disruptive to workflow. Do I trade in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the computer and get upgraded memory to improve performance of the existing MacBook Pro?

    Upgraded both computers in the household and found Lion is too disruptive to workflow.
    Do I trade in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the new computer and get upgraded memory to improve performance of the existing MacBook Pro? I'm mostly still happy with the existing MacBook Pro, but Aperture doesn't work; the computer can't handle it.
    Another possibility is setting up a virtual machine with Snow Leopard Server software on the new computer.
    Any opinions on what would allow moving forward with the least hassle and the best workflow continuity?

    Hi,
    What year and specs does the MBP have?

  • How to improve performance when there are many TextBlocks in ItemsControl items?

       Hi,
       I'm trying to find a way to improve performance in a situation where there is an ItemsControl using UI and data virtualization, and each item in that control has 36 TextBlocks. Basically the item is a single string; there are so many TextBlocks
    to allow assigning different brushes to different parts of the string. Performance of this construction is terrible. I have 37 items visible on the screen, and if I try to scroll up or down it scrolls into black space and then takes a second or two to
    show the items.
       I tried different things. The most successful performance-wise was to replace the TextBlocks with Borders and draw bitmaps: I prepared 127 bitmaps, one per character (I need ASCII only), and used those bitmaps
    to set Border.Background. It improved performance about 1.5-2 times, but it consumed much more memory (which is not surprising, of course). The required amount of memory is so big that it throws OutOfMemoryException on the 512MB emulator, though it works on 1GB. As a result,
    I don't think it is a good solution.
       Another thing that worked perfectly was to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5-10 times, but I lose the ability to set different colors for different parts of the string. It seems that
    performance degrades dramatically as the number of TextBlocks increases. Is there another technique to draw strings, where literally each character can be a different color, with decent performance?
    Thank you
    Alex

       Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5-2 times faster), but it is not even close to the case with just a couple of TextBlocks in the ItemsControl item. Any other ideas?
    Alex
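One further idea, sketched here in JavaScript for brevity (the actual WPF code would be C#/XAML, and the names below are illustrative, not from any WPF API): instead of one TextBlock per character, collapse adjacent same-colored characters into runs, so the number of UI elements tracks the number of color changes rather than the string length.

```javascript
// Collapse a per-character color array into contiguous runs.
// Each run can then become a single Run/TextBlock instead of one per character.
function toRuns(text, colors) {
  const runs = [];
  for (let i = 0; i < text.length; i++) {
    const last = runs[runs.length - 1];
    if (last && last.color === colors[i]) {
      last.text += text[i]; // same color: extend the current run
    } else {
      runs.push({ text: text[i], color: colors[i] }); // color change: new run
    }
  }
  return runs;
}

// 11 characters, but only 3 color segments -> 3 UI elements instead of 11.
const demo = toRuns("HELLO WORLD", [
  "red", "red", "red", "red", "red",
  "white",
  "blue", "blue", "blue", "blue", "blue",
]);
console.log(demo.length); // 3
```

In the worst case (every character a different color) this degenerates to one element per character, but for typical strings it should land close to the 6-TextBlock case that performed 5-10 times better.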

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that executing and fetching records from the database takes a long time. I have created statistics too, but with no improvement. What do I have to do to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine, and Windows XP with 512 MB RAM on the client machine?
    Please give me advice on improving performance.
    Thank you...!

    What and where to change parameters and values? Well, maybe my previous post was not clear enough, but if you want to keep your job you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.

  • Please help! I should be getting much better performance! What am I doing wrong?

    Hi, there!
    I've recently started using After Effects CC and I am getting extremely poor performance, especially given my machine specs (or so I believe).
    Everything is very slow, including the actual software interface. Even with the Live Update switched off:
    - Moving components in the composition view is practically impossible: click, drag, wait about 15 seconds, release, and the component has moved somewhat in the general direction required, but not even to the place where the mouse stopped.
    - Scrubbing doesn't work at all, even after the render bar on the timeline is already fully green.
    - Standard preview (spacebar) can never play in realtime. The only way to play footage in realtime is RAM preview (numpad 0), but even then it takes ages to render small parts (10 seconds of footage takes anywhere between 5 and 10 minutes).
    It's infuriating! I am convinced that my machine should perform SO much better than this!
    I have tried everything I can think of and everything I have found on this forum and others throughout the internet. I've spent the last week googling this issue and have not made much progress!
    I have even checked whether the raytracer_supported_cards.txt file actually includes the GeForce GTX 590. It does.
    Any help anyone can provide would be much appreciated, because I am stumped. Many thanks!
    The following are the specs of my pc and the settings of my After Effects (relevant settings and settings I have changed from default):
    PC:
    - Case: Cooler Master HAF X
    - PSU: Silverstone ST1500 (1500W)
    - Motherboard: Asus Rampage IV Formula
    - CPU: Intel Core i7-3960X (6 physical / 12 virtual cores) @ 3.3GHz
    - CPU Cooler: Noctua NH-D14 SE2011
    - RAM: 4x4GB G.Skill Ripjaws
    - GPUs: 2x Zotac GeForce GTX 590 3GB (4 physical GPUs with 1.5GB RAM each); latest drivers installed
    - SSD: Crucial Technology M4 256GB (used for OS, applications and AE cache)
    - HD1: Seagate Barracuda 2TB SATA III 7200RPM
    - HD2: Western Digital VelociRaptor HDD 10,000RPM
    - Monitor: Samsung SyncMaster 305T (2560x1600)
    - OS: Windows 7 Ultimate, all up to date
    After Effects: CC, updated to latest version 12.2.1
    After Effects preferences (relevant settings and settings changed from default; comments in parentheses):
    General:
    - Allow Scripts to Write Files and Access Network: Ticked (don't think this has anything to do with it, but it is something I changed from the default values)
    Previews:
    - Adaptive Resolution: 1/8
    - Show Internal Wireframes: Unticked
    - Zoom Quality: Faster
    - Color Management Quality: Faster
    Previews / GPU Information:
    - Fast Draft: Available
    - Texture Memory: 1115MB (maximum allowed)
    - Ray-tracing: GPU
    - OpenGL version: 2.1.2 (even though my GPUs (GTX 590) are capable of OpenGL 4.4)
    - OpenGL Shader Model: 3.0
    - CUDA Driver Version: 6.3
    - CUDA Devices: 4 (GeForce GTX 590, GeForce GTX 590, GeForce GTX 590, GeForce GTX 590)
    - CUDA Current Usable Memory: 1150MB
    - CUDA Maximum Usable Memory: 1500MB
    Display:
    - Hardware Accelerate Composition, Layer and Footage Panels: Ticked
    Media & Disk Cache:
    - Enable Disk Cache: Ticked
    - Maximum Disk Cache Size: 60GB
    - Cache Folder: C:\AEcache
    - Conformed Media Database Folder: C:\AEcache
    - Conformed Media Cache Folder: C:\AEcache
    - Write XMP IDs to Files on Import: Ticked
    - Create Layer Markers from Footage XMP Metadata: Ticked
    Video Preview:
    - Output Device: None
    Auto-Save:
    - Automatically Save Projects: Unticked
    Memory & Multiprocessing:
    - Installed RAM: 16GB
    - RAM reserved for other applications: 6GB
    - RAM shared by AE: 10GB
    - Reduce cache size when system is low on memory: Unticked
    - Render Multiple Frames Simultaneously: Ticked
    - Only for Render Queue, not for RAM Preview: Unticked
    - Installed CPUs (processor cores): 12
    - CPUs reserved for other applications: 6
    - RAM allocation per background CPU: 1.5GB
    - Actual CPUs that will be used: 6
    Other Options:
    - Pixel Correction: OFF
    - Fast Preview: Adaptive Resolution (have played with Fast Draft too, but rendering was quicker with Adaptive Resolution; A.R. uses multiprocessing, while for Fast Draft MP is disabled, "incompatible mode", it says)
    - Renderer: Classic 3D
    Thanks again!

    Hi, Mylenium,
    Thanks for your prompt reply, I'm afraid it hasn't helped. I don't see anything wrong in my device manager.
    I have also gone and updated every driver, especially the motherboard chipset and the Sata Intel Rapid Storage Technology driver.
    Unfortunately, this hasn't made any difference.
    Any other suggestions?
    Many thanks in advance!
    Hugo

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings everyone is referring to when they say "CL2.5-3-3-7"; information on them is widely available, and everyone knows that for these settings lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings, I'll do the important things on the left-hand side first, then the things on the right-hand side; the things at the bottom are HTT settings that I'm not going to muck with:
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
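For a rough sense of why Tref and Trfc matter together, here is a back-of-envelope calculation (a sketch in JavaScript; it assumes, in place of the guess above, the more common reading that the microsecond figure is the interval between row refreshes, and that Trfc, in memory-clock cycles, is how long each refresh takes):

```javascript
// Fraction of time the RAM is busy refreshing (and thus blocking requests),
// assuming Trfc cycles per refresh and one refresh every `intervalUs` microseconds.
function refreshOverhead(trfcCycles, clockMHz, intervalUs) {
  const refreshNs = (trfcCycles / clockMHz) * 1000; // cycle time in ns * cycles
  return refreshNs / (intervalUs * 1000);           // busy time / refresh period
}

// With the defaults from this post (Trfc = 24, ~200 MHz memory clock,
// Tref = 3.9 us): 120 ns spent refreshing out of every 3900 ns, roughly 3%.
console.log(refreshOverhead(24, 200, 3.9)); // ~0.031
```

Under that reading, shortening the refresh (lower Trfc) or spacing refreshes out (a longer interval) both shrink that busy fraction, which matches the direction of the gains measured above.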
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2'; selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid: to access a location in a grid, you need both a row and a column number.  The way memory accesses work is that the system first asserts the row it wants (the row address strobe, or RAS), and then asserts the column it wants (the column address strobe, or CAS).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple locations from the same row at once to improve performance (so you get one RAS, followed by several CAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely, however, that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually post a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself.  From my understanding, it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather computing a mathematical representation from a few numbers it has gathered and one clock frequency (HTT maybe?), and the unsupported dividers mess up the math.  But I think the tweaker just extracts hex values straight from the registers and displays them in "English".  I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when set to Auto with aggressive timings disabled, I actually want to side with A64 Tweaker in this case.
    Hey, anyone know what Tref in A64 Tweaker relates to in the BIOS?  i.e., 200 1.95us = what in the BIOS?  1x4028, 1x4000 (I'm just making up numbers here), but it's different than "200 1.95".  Last time I searched I didn't find anything.  Well, I found a LOT, but not what I wanted.

  • Improving performance of scripts in a PDF form

    Hello there.  I'm a bit new to scripting in Acrobat and find myself with a complex form that has a number of instances (about 70) of the following script (specific to individual fields):
    event.value = this.getField("Ranks").value + this.getField("Mod").value + this.getField("MiscMod").value;
    var trained = "Untrained";
    var skill = "Ranks";
    if ((this.getField(trained).value != "On") && (this.getField(skill).value < 1)) {
        event.target.textColor = ["G", 1];
    } else {
        event.target.textColor = ["G", 0];
    }
    The code works fine, but with this many instances, performance of the form is sluggish while updating. In addition, there are approximately 200 simple addition fields being used.
    I'm looking for any advice, scripting or otherwise, to help improve performance.
    Thanks so much in advance.

    If you create a function in the document scripts :
    function updateField() {
        event.value = this.getField("Ranks").value + this.getField("Mod").value + this.getField("MiscMod").value;
        var trained = "Untrained";
        var skill = "Ranks";
        if ((this.getField(trained).value != "On") && (this.getField(skill).value < 1)) {
            event.target.textColor = ["G", 1];
        } else {
            event.target.textColor = ["G", 0];
        }
    }
    you could replace the code in each field with the following function call, and they would all use the same code:
    updateField()
    Without knowing exactly what 'slightly different code' entails, you can add parameters to the function:
    function updateField(Rank, Mod, MiscMod) {
        event.value = Rank + Mod + MiscMod;
        var trained = "Untrained";
        var skill = "Ranks";
        if ((this.getField(trained).value != "On") && (this.getField(skill).value < 1)) {
            event.target.textColor = ["G", 1];
        } else {
            event.target.textColor = ["G", 0];
        }
    }
    and you could call it in each field with different values for 'Rank', 'Mod', and 'MiscMod' (or whatever parameters you use) like this:
    updateField(this.getField("Ranks").value, this.getField("Mod").value, this.getField("MiscMod").value)
    or
    updateField(this.getField("aDifferentRanks").value, this.getField("aDifferentMod").value, this.getField("aDifferentMiscMod").value)
    or
    updateField(47, 12, 45)
    If you need to make changes, you only need to change the 'updateField' function once, and any field which calls it will use the updated code.

  • Improve performance of SELECT in a parameterized cursor

    DECLARE
        CURSOR cur_inv_bal_ship_from(
            l_num_qty_multiplier gfstmr4_eop_transaction_type.inventory_multiplier_num%TYPE,
            l_str_inventory_type gfstmr9_eop_txn_rule.inventory_type_code%TYPE,
            l_str_type_code      gfstmr9_eop_txn_rule.txn_type_code%TYPE) IS
            SELECT /*+ USE_NL(EPP PI EPC) */ epc.currency_code,
                   SUM(ROUND(l_num_qty_multiplier * pi.inventory_qty * epc.cost_amt, 2)) cost_amt
              FROM gfstm62_eop_plant_part epp,
                   gfstm64_plant_inventory pi,
                   gfstm60_eop_part_cost epc
             WHERE epp.gsdb_site_code = i_str_gsdb_site_code
               AND epp.end_of_period_date = i_dt_end_of_period_date
               AND pi.inventory_type_code = l_str_inventory_type
               AND pi.txn_type_code = l_str_type_code
               AND pi.gsdb_shipped_from_code = i_str_gsdb_site_code
               AND epc.rate_set_code = i_str_rate_set_code
               AND epc.financial_element_type_code = i_str_financial_element_code
               AND pi.plant_eop_part_sakey = epp.eop_plant_part_sakey
               AND pi.plant_inventory_sakey = epc.plant_inventory_sakey
             GROUP BY currency_code;
    BEGIN
        FOR l_num_index IN i_tab_inv_txn_rule.FIRST .. i_tab_inv_txn_rule.LAST
        LOOP
            -- Check for ship-from flag equal to 'Y'
            IF i_tab_inv_txn_rule(l_num_index).ship_from_flag = g_con_y THEN
                -- Loop through the ship-from cursor
                FOR l_rec_inv_bal_from IN cur_inv_bal_ship_from(
                        i_tab_inv_txn_rule(l_num_index).qty_multiplier_num,
                        i_tab_inv_txn_rule(l_num_index).inventory_type_code,
                        i_tab_inv_txn_rule(l_num_index).txn_type_code)
                LOOP
                    -- Increment the index value
                    l_num_index1 := l_num_index1 + 1;
                    -- Assign cursor values to the PL/SQL table
                    l_tab_inv_bal(l_num_index1).currency_code := l_rec_inv_bal_from.currency_code;
                    l_tab_inv_bal(l_num_index1).cost_amt := l_rec_inv_bal_from.cost_amt;
                END LOOP; -- close ship-from cursor loop
            END IF;
        END LOOP;
    END;
    The SELECT query in the parameterized cursor is taking a long time. Below is a link where I have posted the trace. Please let me know how to improve performance.
    http://performancetuning1978.blogspot.com/p/performance-tuning.html
    thanks,
    Vinodh

    Hello,
    Your performance-tuning picture doesn't say much. What do your tables look like? How many rows? Which Oracle version?
    Why do you use nested loops as a hint?

  • Best way to improve performance?

    I'm using a Dual Core Intel Xeon and starting to do video work on it with Final Cut Pro, etc.
    I've got 8GB of DDR2 FB-DIMM at 667MHz.
    I'm using a Cinema HD display, driven by a NVIDIA GeForce 7300 GT.
    What would be your best advice for improving performance (if there is any)? I doubt more RAM would help; I'm not sure there is anything faster than 667MHz for this machine. Could a newer video card help rendering speed?
    I'm generally very happy with how things work, can't afford a new MacPro, and am simply wondering if I should invest in something more affordable that could give me a slightly better kick?
    Many thanks,
    Czet

    One thing that's always useful is to make sure you've got iStat installed and see how much of your RAM is actually being used, because 8GB is a lot, even for HD, but it doesn't rule out a lack of RAM.
    The biggest increase in speed I've ever seen on any of my kit was an SSD. I've set up my Mac Pro with 2 x 128GB SSDs in the 2nd optical bay (in a 3.5" RAID 0 caddy) and set up Final Cut Pro to use 2 x 1.5TB Seagate 7200.4s in software RAID 0, and everything flies. A 256GB SSD is far more expensive (and slower) than 2 x 128GB in a RAID 0 caddy, so it's a no-brainer.
    Graphics cards always help but in my experience the only place you'll notice significant improvements are Games and software such as Motion/Aperture where you really must have a decent card.

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can anyone explain the steps to improve performance of dimension and fact tables?
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table, and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects: the ones that have low cardinality, that is, relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension, and separate the dimension accordingly.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • Improving performance of query with View

    Hi ,
    I'm working on a stored procedure where certain records have to be eliminated. Unfortunately, the tables involved in this exception query are in a different database, which will lead to a performance issue. Is there any way in SQL Server to store this query
    in a view, store its execution plan, and make it work like a stored procedure? I believe it's a kinda crazy thought, but is there any better way to improve the performance of a query when accessing data across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist.  There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB - DATABASE, not INSTANCE) will perform poorly.
    As a suggestion, write a working query using a duplicate of the table in the current database.  Once it is working, then worry about performance.  Once it is working as efficiently as it can, change the query to use the "remote" table rather
    than the duplicate.  Then determine whether you have an issue.  If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address.  In that case, perhaps you need to change your perspective
    and approach to accomplishing your goal.

  • Using Lightroom and Aperture, will a new ATI 5770/5870 vs. GT 120 improve performance?

    I have a MP (2009, 3.3 Nehalem Quad and 16GB RAM) and want to improve performance in Aperture (I see the spinning clock wheel during processing all the time) with edits; I'm also using Lightroom, and sometimes CS5.
    Anyone with experience that can say upgrading from the GT120 would see a difference and how much approximately?
    Next, do I need to buy the 5870 or can I get the 5770 to work?
    I am assuming I have to remove the GT120 for the new card to fit?
    Thanks

    Terrible marketing. ALL ATI 5xxx work in ALL Mac Pro models. With 10.6.5 and later.
    It really should be up to you to check AMD and search out reviews that compare these cards to others. Did you look at the specs of each, or at Barefeats? He has half a dozen benchmark tests, but the GT120 doesn't even show up, or isn't in the running, on most.
    From AMD 5870 shows 2x the units -
    TeraScale 2 Unified Processing Architecture   
    1600 Stream Processing Units
    80 Texture Units
    128 Z/Stencil ROP Units
    32 Color ROP Units
    ATI Radeon™ HD 5870 graphics
    That should hold up well.
    Some are on the fence or don't want to pay $$.
    All they or you (and you've been around for more than a day!) need to do is go to the Apple Store:
    ATI Radeon HD 5870 Graphics Upgrade Kit for Mac Pro (Mid 2010 or Early 2009)
    ATI Radeon HD 5870 Upgrade

  • Improve performance MAX HEAP?

    Hello,
    We are trying to improve performance of one of our interfaces FILE->SQL STAGING->TARGET.
    Here are the steps we took;
    1. We are indeed using LKM SQL BULK INSERT (which brought the run time down from 70 min to 20 min).
    2. The ARRAY and FETCH sizes do NOT seem to improve performance at all, probably because we are using the FILE technology. Experts, please confirm this.
    3. Finally, we changed the Java initial and max heap sizes from:
    INT: 32m to 1024m
    MAX: 256m to 2g
    a. How do we ensure that the odiparams changes have taken effect? We have just been closing all ODI Navigators, stopping the agent, and restarting the agent. Is this correct?
    But it didn't improve performance either.
    Here is what we noticed:
    Total Physical Memory is 8G
    Physical Memory used is 3.6G
    ODI64.exe is using 0.5 G
    Java.exe is using 0.5 G
    We noticed that Java.exe was not using any CPU.
    b. Is ODI64.exe what we should be looking at to monitor performance? Shouldn't it be at 2G?
    c. Is there a way to assign more than one CPU, for parallel processing?
    Please let me know your thoughts.
    Thanks!
    Edited by: user10678366 on May 18, 2013 9:09 AM
    Edited by: user10678366 on May 18, 2013 9:17 AM

    #a Changes in odiparams apply only to the standalone agent. Bouncing the standalone agent will pick up the latest settings from odiparams.
    #b The min heap size you have specified is 1G, so your JVM should have initialized with at least that much memory. Since the processes you mentioned are using less memory, it seems that either your memory settings are not being picked up or you are looking at the wrong processes.
    #c Depending on your interface, if there is data to be moved row by row through ODI, then the reading and writing automatically happen in parallel.
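On point (a): a quick cross-check of whether the odiparams heap settings were picked up is to translate them into MB and compare against the agent JVM's footprint in Task Manager. A small sketch (the function name is made up for illustration):

```javascript
// Convert an odiparams-style heap size ("1024m", "2g") into MB, so the
// configured ODI_INIT_HEAP / ODI_MAX_HEAP values can be compared with what
// Task Manager reports for the agent's java.exe process.
function heapToMB(size) {
  const m = /^(\d+)\s*([mg])$/i.exec(size.trim());
  if (!m) throw new Error("unexpected heap size: " + size);
  return Number(m[1]) * (m[2].toLowerCase() === "g" ? 1024 : 1);
}

console.log(heapToMB("1024m")); // 1024 (the configured min heap, ~1GB)
console.log(heapToMB("2g"));    // 2048 (the configured max heap)
```

A java.exe sitting at ~0.5GB with a 1GB minimum heap configured is a strong hint that, as noted in #b, the settings were not picked up or the wrong process is being watched.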

  • Brush Lag? - Here are OpenGL settings that improve performance

    For anyone having horrible Brush lag, please try the following settings and report your findings in this thread.
    After much configuring and comparing CS4 vs CS3, I found that these settings do improve CS4's brush lag significantly. CS3 is still faster, but these settings made CS4 brush strokes a lot more responsive.
    Please try these settings and share your experiences.
    NOTE: these do not improve clone tool performance; the best way to improve clone performance right now seems to be to turn off the clone tool's overlay feature.
    Perhaps Adam or Chris from Adobe could explain what is happening here. The most significant option for improving performance appears to be "Use for Image Display - OFF". I have no idea what this feature does or does not do, but it seems to be the biggest performance hit. The next most influential setting seems to be "3D Interaction Acceleration - OFF".
    Set the following settings in Photoshop CS4 Preferences:
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - OFF
    Color Matching - ON

    Hi guys,
    As I am having very few problems with my system, I thought I should post my specs and settings for comparison.
    System - Asus p5Q deluxe,
    Intel quad 9650 3ghz,
    16gb pc6400 800Mhz ram,
    loads of drives ( system drive on 10K 74gb Raptor, Vista partition on fast 500gb drive, Ps scratch on an another fast 500gb drive, the rest are storage/bk-ups ),
    Gainward 8800GTS 640mb GPU,
    30ins monitor @2560x1600 and a 24ins 1920x1200,
    Wacom tablet.
    PS CS4 x64bit
    Vista x64
    All latest drivers
    No Antivirus. Index and superfetch is ON, Defender is ON
    No internet connection except for updates
    No faffing around with vista processes
    Wacom Virtual HID and Wacom Mouse Monitor are disabled
    nVidia GPU set to default settings
    On this system I am able to produce massive images; the last major size was 150x600cm@200ppi. The brushes are smooth until they increase to around 700+ pixels, at which point there is a slight lag of around 1 second if I draw fast; if I take it slow, there's no lag. All UI is snappy.
    I have the following settings in Photoshop CS4 Preferences:
    Actual Ram: 14710MB
    Set to use (87%): 12790MB
    Scratch Disk
    on a separate fast 500gb - to become a 80gb ssd soon
    History state: 50
    Cache: 8
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - ON
    Color Matching - ON
    I hope this helps in some way too...
    EDIT: I should also add that I defrag all my drives with third-party defrag software every night due to the large image files.

  • While browsing cube data in Excel, the spinning pointer appears and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?

    hi,
    while browsing the cube data in Excel, the spinning pointer appears and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?
    I have 20 measures and 8 dimensions.
    Filtering data by dimensions in Excel also takes a long time.
    Ex:
    I browsed 15 measures in Excel and filtered the data based on time (quarter-wise) and another dimension, product. It takes a long time to get the data.
    Can you please help on this issue.
    Regards,
    Samba

    Hi Samba,
    What are the versions of your Office Excel and SQL Server Analysis Services? It would be helpful if you could share detailed information about your computer's resources at the time you encountered this issue.
    In addition, we don't know your cube structure and the underlying relationships. But you can take a look at the following articles to troubleshoot the performance issue:
    Improving Excel's Cube Performance:
    http://richardlees.blogspot.com/2010/04/improving-excels-cube-performance.html
    Excel Against a SSAS Cube Slow: Performance Improvement Tips:
    http://www.msbicentral.com/Blogs/tabid/131/articleType/ArticleView/articleId/136/Excel-Against-a-SSAS-Cube-Slow-Performance-Improvement-Tips.aspx
    Regards, 
    Elvis Long
    TechNet Community Support
