JavaScript and PS CS6: Performance issue

Hi folks
I have to admit we are stuck in our development: we have written a Photoshop plugin using extensive JavaScript and Flash panels / ActionScript.
The JavaScript will, e.g., select a given layer. When running the script in PS CS5 or 5.1 everything is smooth and snappy, but we've noticed that the same JavaScript running in PS CS6 takes up to 300% more time.
Has anyone observed the same performance issues?
Would it be faster to address the specific layers by their native layer IDs rather than by their names?
Why is there such a performance slowdown with the same JavaScript / ActionScript Flash panel between CS5 and CS6?
We have already contacted [email protected] (we are a Silver solution partner) but they won't act if you are using your own JavaScript....
You are our last hope :-( 
I can send you some of the code but I don't want it to be publicly exposed here.
Thanks in advance,
Andrash

Hi, since nobody bothers to answer, we might have to find out ourselves.
Maybe it is caused by the way we address layers through the script?
Which method are you using?
Are you addressing the layers directly, or are you cycling through an array of layers?
Are you referencing the layers by their native ID or by their layer names?
How do you trigger the script: by another script? From a flash panel (Flex / Action Script)?
We are using Flash panels to start the script. The script simply calls a layer by its name (a numerical ID that we apply to the layer), looks up that specific layer, and checks whether there is any content on it. We created a logger to see where the heavy amount of time is consumed, and it seems that the time is lost while jumping to the layer.
In CS5 that was all a matter of a split second. In CS6 it now takes a couple of seconds (about 4 s). We asked Adobe tech support for help, but they didn't even bother to look at the problem since we are working with self-written code (as every developer does...?!). I wonder what tech support is good for if not answering technical problems like this one.
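For reference, the two addressing styles in question look roughly like this in ExtendScript. This is only a sketch (the function names are ours, and the Photoshop host calls are untested here); the Action Manager route via putIdentifier is commonly reported to be much faster than getByName, which walks the layer collection linearly:

```javascript
// Sketch only: the app/ActionReference calls run inside Photoshop's
// ExtendScript engine, not in a browser or shell.

// Addressing by name: the DOM lookup scans the layer collection.
function selectLayerByName(name) {
    app.activeDocument.activeLayer =
        app.activeDocument.artLayers.getByName(name);
}

// Addressing by the layer's native ID via the Action Manager,
// which skips the per-layer walk.
function selectLayerById(id) {
    var ref = new ActionReference();
    ref.putIdentifier(charIDToTypeID("Lyr "), id);
    var desc = new ActionDescriptor();
    desc.putReference(charIDToTypeID("null"), ref);
    executeAction(charIDToTypeID("slct"), desc, DialogModes.NO);
}

// Our layer names are numeric strings that we assign ourselves, so
// recovering our own number from a name is trivial. (Photoshop's
// native layer IDs are a separate thing and would have to be read
// once, e.g. via executeActionGet, and cached.)
function layerIdFromName(name) {
    return parseInt(name, 10);
}
```

Timing both variants with the logger should show whether the name lookup is where CS6 loses its time.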
I hope that, with your answers, we might close in on the cause of the problem!
Cheers,
Andreas

Similar Messages

  • After Effects CS6 Performance issues

    Hello all. Well, we got three new Mac Pro 12-core systems, all running 65-128 GB of RAM with Quadro 4000 GPU cards, up-to-date drivers, up-to-date AE CS6, and dedicated 500 GB solid-state drives for the global cache. They're all set up according to Adobe specifications/instructions. And no, we are not working with ray tracing activated in comps.
    We have all noticed that performance on some projects is painfully slow. In fact, some projects which run fine on an older Mac Pro with AE CS5.5 creep on the new boxes running CS6. So slow, in fact, that we have had to abort and jump to the older machines to get things done in a timely manner. Simple things like typing text, navigating the GUI, and scrubbing the timeline are frustrating. We are not new to Mac boxes or After Effects; we have all been using AE since version 4. It also seems that projects with a lot of footage, or a lot of higher-resolution stills, really bog down. Something is definitely wrong here. Soloing doesn't seem to help, reducing resolution doesn't seem to help, and we have tried about every preference option to free up performance.
    If anyone else is experiencing these same issues, I would love to hear about it, or about possible solutions. The global cache is really nice when all hardware/software is hitting on all cylinders, but something seems amiss. Not getting why there are performance issues on some projects and in basic operations like typing text.
    Thanks in advance for any help that can be provided.
    Chris
    Chris Abolt
    Motion designer
    Abolt Media

    See images of all settings for the fastest of the 3 machines.
    12-core Mac Pro
    OS X 10.7.4
    128 GB of RAM
    2 Quadro 4000 Mac GPU cards running in parallel in a Cubix break-out box
    Monitors run one each off of each graphics card
    Internal GPU card not being used for monitors
    CUDA drivers up to date (5.0.17)
    AE update version 11.0.1.12

  • Query performance and data-loading performance issues

    What query-performance issues do we need to take care of? Please explain and let me know the relevant transaction codes (T-codes).
    What data-loading performance issues do we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
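    Tip 5's advice to grow the packet size until throughput stops improving is essentially a hill-climbing loop. Sketched generically below (illustration only, not SAP code; measureThroughput is a hypothetical stand-in for running a timed test load at a given packet size):

    ```javascript
    // Generic sketch of tip 5: grow the packet size in steps until the
    // measured throughput stops improving, then keep the last size that
    // still helped. measureThroughput(size) is a hypothetical callback
    // that runs a timed test extraction and returns its throughput.
    function findOptimalPacketSize(measureThroughput) {
        var size = 50000;               // starting point suggested above
        var step = 50000;
        var best = measureThroughput(size);
        while (size + step <= 1000000) {
            var next = measureThroughput(size + step);
            if (next <= best) {
                break;                  // threshold reached: no further gain
            }
            best = next;
            size += step;
        }
        return size;
    }
    ```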
    Hope it Helps
    Chetan
    @CP..

  • 3 TB Time Capsule and Apple TV performance issues

    I have just upgraded my AirPort Extreme 1st gen to a 3 TB 4th gen Time Capsule. I did it because of growing performance issues with Apple TVs (1st and 2nd gen). I thought the dual band and improved power would solve the issue. I've tried all sorts of setups on the Time Capsule and I'm getting just as erratic performance. Has anyone got any advice on the best setup for optimum performance, please?

    To successfully stream, either audio or video, the key for uninterrupted performance is adequate bandwidth between the media source and the playback device. Your current network is primarily dependent on wireless connections. The extended network alone loses some bandwidth to maintaining the extended network itself. Also, any non-"n" clients (including the 802.11b/g AXs) will "slow down" the "n" network in order for them to be able to communicate at their top speed.
    Of your 802.11n gear, only the 4th generation TC (& I believe the ATV2) are full implementations of the 802.11n standard. Both 802.11n AXns are draft "n." This too would have an overall effect on the available bandwidth.
    Again since your network is primarily wireless, there are at least four areas to go after to look for improvements:
    You must have enough available bandwidth between the media source and destination. You can use utilities like iperf or jperf to measure this: make data-transfer (throughput) measurements both end-to-end between the media source and destination and across the segments between them, to find out where the actual bottlenecks occur.
    You must have a clear wireless radio channel. Use a utility, like iStumbler to determine the other Wi-Fis operating in the vicinity that may be competing with yours. Specifically look for those with the strongest signal value and note which channels they are running on. Then change yours to be at least 3-5 channels away to prevent Wi-Fi interference.
    Know the media's bandwidth requirements. Look at the worst-case scenario of what your ATVs can support, which would be 720p 30fps HD format. This equates to a minimum of around 6-7 MBps (48-56 Mbps) of bandwidth between the video file source and the video player. Actually, I would recommend 20+ MBps as the minimum. 802.11g (at its best) offers 6.75 MBps (54 Mbps), so 802.11n would be required for anything beyond SD video.
    Re-code the media if possible. If the data transfer peaks exceed the available bandwidth, the audio/video will experience drop-outs. You can always try re-encoding the media source with one that uses a tighter compression schema. However, as you can imagine, the greater the compression, the poorer the audio/video quality.
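    The unit conversions in the figures above are just bits-to-bytes arithmetic (8 bits per byte). A quick sketch for checking a link against a stream's requirement (function names are ours; the efficiency factor is an assumption standing in for real-world Wi-Fi overhead):

    ```javascript
    // 8 bits per byte: convert link rates quoted in megabits/s to the
    // megabytes/s used in the bandwidth figures above.
    function mbpsToMBps(mbps) {
        return mbps / 8;
    }

    // Does a link with the given nominal rate cover a stream's need,
    // after an assumed real-world efficiency factor? (Wi-Fi rarely
    // delivers its nominal rate.)
    function linkCovers(linkMbps, neededMBps, efficiency) {
        return mbpsToMBps(linkMbps) * efficiency >= neededMBps;
    }

    // 802.11g's nominal 54 Mbps is 6.75 MBps, already marginal for a
    // 6-7 MBps HD stream even before any efficiency loss.
    ```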

  • iTunes 7.3.2.6 and Windows Vista Performance Issues

    I recently purchased an 80GB iPod, and downloaded iTunes promptly, disregarding all the negative attention about how "iTunes crashes your computer" or "iTunes kills Windows". So far, the only real problem I've had with iTunes is the amazingly poor performance it has on my computer. When iTunes is minimized to the system tray, or compressed into Mini Player mode, it runs better, but not well.
    Running in the full mode, the player bogs down my computer substantially. Knowing my laptop isn't a powerhouse, I expected a slight bit of slowdown, since there is some evident with QuickTime.
    UPDATE: I've just stumbled across something. Apparently, while I wiggle my mouse over the iTunes window, iTunes runs about two or three times faster. The moment I stop moving the mouse cursor over the window, the speed returns to slow. It seems to only do this during the "Processing Album Artwork" phase. Bizarre? Very. Any explanations?
    I find this to be very annoying, especially when syncing. Syncing my library (which is about five or six gigabytes) takes an amazingly long time. Are there any workarounds, registry hacks, or anything else that can be done? Thanks!

    Do you have ReadyBoost enabled on the PC? If so, by way of experiment, try disabling that as per the instructions from the following document:
    Troubleshooting iTunes for Windows Vista video playback performance issues
    ... any help at all with the performance issues?

  • AIR with JavaScript and AJAX (noob design issues)

    Okay, so as the subject states, I'm a noob when it comes to designing an AIR app. My question is kind of two fold:
    First, as a matter of design, I've got a main window that has several drop-down type menus: "File", "Preferences", "Help", that kind of thing. My plan was to keep reusing the main window for each of my different screens (unless a pop-up/dialog-type screen was called for).
    Anyway, the application I'm writing will, in part, handle a database of patrons. So under one of the menus (File, in this case) I've got a "Patron" option. Clicking on "Patron" fires a function called newPatron(), which in turn calls my function ebo.displayScreen('patron.htm'). This latter function takes the filename passed in, reads that file, then dumps its contents out to the main screen.
    So, my main window consists (in part) of the following html:
    <body onload="ebo.doLoad();">
         <div id="content"></div>
    </body>
    then my displayScreen function looks like this:
    function displayScreen(filename){
         var my = {};
         // get a handle on the file...
     my.file = air.File.applicationDirectory.resolvePath(filename);
         // get a handle on the stream...
         my.stream = new air.FileStream();
         //open the stream for read...
         my.stream.open(my.file, air.FileMode.READ);
         // read the data...
         my.data = my.stream.readUTFBytes(my.stream.bytesAvailable);
         // close the stream...
         my.stream.close();
         // update the screen (I'm using jQuery here)
     $("#content").empty().append(my.data);
    }
    So anyway, this works like a champ. I click on "Patron" from my file menu and the screen changes to display the contents of patron.htm.
    Currently, patron.htm just contains the following:
    <div style="text-align:left;">
         <input type="button" value="add" onclick="ebo.add(1,2);" />
    </div>
    <div id="result"></div>
    ebo.add looks like this:
    function add(a,b){
         var my = {};
         my.result = a + b;
     $("#result").empty().append(my.result + "<br />");
    }
    So, if anyone hasn't guessed by now, the code contained in the ebo namespace gets included on the main screen when the application loads. My problem is that once the patron.htm file is loaded into the content div by clicking on the menu option, the button on that screen refuses to work. I've even tried just having the button do an alert directly,
    <input type="button" value="add" onclick="alert('AIRRocks!');" />
    but even that fails!
    So, I added some code to the main page to test a button from there...
    <body onload="ebo.doLoad();">
         <input type="button" value="add" onclick="ebo.add(1,10);" />
         <div id="result"></div>
         <div id="content"></div>
    </body>
    So, now when the main screen loads, I get an "add" button and when I click it the number 11 appears in the "result" div. When I click on the Patron menu option, the new html and javascript are loaded into the "content" div, but the add button in that area refuses to work!
    What gives? I'm sure I'm missing something. So I guess the two questions are: is my scheme of loading new content into the main window by reading the contents of a file flawed in some way? Or am I just missing something about making calls from this dynamically loaded content? It *looks* like it should work fine, but if I can't make JavaScript calls from the resultant content, then the concept is no good.
    I realize this has been a somewhat long winded post, but hopefully it describes in enough detail the problem I'm having. I should maybe add that I've looked at what's in the DOM using the AIR HTML/JS Application Inspector and it looks like everything should work perfectly.
    I hope someone out there can help me and might have the patience to explain where I've gone wrong. I might also mention that the only book I've read (or am reading) on AIR with JavaScript and AJAX is "Adobe AIR (Adobe Integrated Runtime) with Ajax: Visual QuickPro Guide"... it really hasn't covered aspects of what makes good design (like what's the best way to reuse the main application window to hold interactive content)... but anyway, there you have it.
    Again, I hope someone can help me out.
    Thanks!
    Chris

    Thanks for responding, Andy. I don't think I'm losing my namespace. That thought had crossed my mind, which is why (and I thought I put this in my original post) I tried putting a simple alert in the onclick event of the button in my "patron.htm" file... but that simple alert doesn't even work.
    :o(
    Do you still think it's an issue with the namespace?
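    For what it's worth, a commonly cited cause of exactly this symptom is AIR's application-sandbox restriction: inline event-handler attributes (onclick="...") in HTML injected after the page's load event are ignored, while handlers attached programmatically still work. A sketch of rewiring the button that way (the addButton id and wirePatronScreen name are our own, hypothetical choices):

    ```javascript
    // Sketch under the sandbox assumption: after the app does
    // $("#content").empty().append(my.data), attach handlers in code
    // instead of relying on inline onclick attributes.
    var ebo = ebo || {};

    // Same arithmetic as the original ebo.add, returning the result
    // so the caller can display it.
    ebo.add = function (a, b) {
        return a + b;
    };

    // Hypothetical rewiring step, called right after injecting
    // patron.htm. Assumes patron.htm gives the button an id instead
    // of an onclick: <input type="button" id="addButton" value="add" />
    ebo.wirePatronScreen = function () {
        var btn = document.getElementById("addButton");
        if (btn) {
            btn.addEventListener("click", function () {
                ebo.add(1, 2);
            }, false);
        }
    };
    ```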

  • Can't access root share sometimes and some strange performance issues

    Hi :)
    I'm sometimes getting error 0x80070043 "The network name cannot be found" when accessing \\dc01 (the root), but can access shares via \\dc01\share.
    When I get that error I also don't get the network drive hosted on that server set via Group Policy; it fails with this error:
    The user 'W:' preference item in the 'GPO Name' Group Policy Object did not apply because it failed with error code '0x80070008 Not enough storage is available to process this command.' This error was suppressed.
    The client is Windows Server 2012 Remote Desktop and file server is 2012 too. On a VMware host.
    Then I log off and back on, and no issues.
    Maybe related, and maybe where the problem is: when I have the issue above, and sometimes when I don't (the network drive is added fine), I have some strange performance issues on the share/network drive: Word, Excel and PDF files open very slowly. Office says
    "Contacting \\dc01\share..." for 20-30 seconds and then the file opens. Text files don't have that problem.
    I have a DC02 server, also 2012, with no issues like this.
    Any tips how to troubleshoot?

    Hi,
    Based on your description, you can access shares on the DC via \\dc01\share, but you cannot access them via \\dc01.
    Please check the Network Path in the Properties of the shared folders first. If the network path is \\dc01\share, you should access the shared folder by using \\dc01\share.
    And when you configure Drive Maps via domain Group Policy, you should also type the Network Path of the shared folders in the Location edit box.
    About Office files opening very slowly, there are some possible reasons:
    File validation can slow down the opening of files.
    The problem is caused by the issue mentioned above.
    Here is a similar thread about slow opening of Office files from a network share:
    http://answers.microsoft.com/en-us/office/forum/office_2010-word/office-2010-slow-opening-files-from-network-share/d69e8942-b773-4aea-a6fc-8577def6b06a
    For File Validation, please refer to the article below:
    Office 2010 File Validation
    http://blogs.technet.com/b/office2010/archive/2009/12/16/office-2010-file-validation.aspx
    Best Regards,
    Tina

  • Yosemite and iCloud Drive Performance Issue

    While on my home network, which is connected to the Internet over 50 Mbps FiOS, I upgraded my MacBook Pro Retina from Mavericks to Yosemite. The upgrade went smoothly.
    I first noticed that when I boot my MacBook, it boots slowly, like Windows...
    When I took it into my office and used a slow Wi-Fi hotspot, the system crawled. Safari, Google, and other apps barely responded.
    During the upgrade, I "blindly" selected to use iCloud Drive. Foolish me! I tried to turn it off, but then it warned me it would delete all documents on my Mac that are also stored in iCloud.
    How do I turn iCloud Drive completely off when I don't want to use it and KEEP all my local files??? Later, I'd like to turn it back on when I'm on my faster network and THEN sync everything.
    Also, I started turning off specific apps and rebooted. Performance has improved.
    Regards,
    Nick

    But is there an option to retain copies of the files that have been stored on iCloud? I don't want to delete them. Later, when I connect back to iCloud Drive, if I have made changes to any of these files on my Mac and they are newer than the version up on iCloud Drive, will it automatically "sync up"?
    Thanks again,
    Nick

  • AI CS5.1 vs. AI CS6 Performance Issue on MacPro

    Hi,
    I have been happily using AI CS 5.1 (from 5.5), and I have a series of files I created. They are 20x24, 300 DPI, with some type and drawing and a placed 300 DPI 16x20 .psd file. They open in about 3 seconds in v5.1; in v6 they take 3-5 minutes just to open. It's torture.
    Working with a series of 9 different but similar files like this drove me back to v5.1!
    I have a 2009 MacPro 3,1 2x2.8 Quad Core Intel Xeon with 8 GB RAM.
    Any help would be appreciated.
    Some minor points:
    Do I need more RAM for CS6? I've read on this forum that 64-bit apps like RAM, but it seems so odd that things click and pop in the older version and nearly kill me in the CS6 version.
    What could be slowing down the performance? Hardware, software, or the combination of the two?
    When did Adobe drop all progress-bar feedback during open? Waiting 5 minutes for a file to open without any indication that it's working is bad. The dock icon long ago had a progress bar over the app icon; I wonder when that changed?
    Thanks in advance, and so long for now, TOM

    You have no idea how many times I've read that very document. So many I've lost count. And this is the first time the importance of what it says there  ("A linked After Effects composition will not support Render Multiple Frames Simultaneously multiprocessing") has sunk in. Thank you.
    But there definitely is something to fix, because this is very clearly broken. Time for yet another feature request. Sigh...

  • Is the Vista and Flash CS3 performance issue solved?

    Performance in Flash CS3 is very poor for me. When I try to scroll up-down/left-right I get a slideshow. When I move the frame cursor to view the frames I also get a slideshow. I searched Google and applied the compatibility settings, and the only thing that got better is smoother scrolling of the code. Nothing more. Is this solved?
    I'm running a C2Duo with 4 GB of 800 MHz RAM on Vista x64.
    Should I go back to Flash 8?

    I have an old AlBook G4 1.67 with 2 GB, and I actually think CS3 works great. There is a huge thread about this very issue going on at the moment, so you might want to take a look there. I never had Flash 8, but MX04 also ran just fine on my laptop.
    As far as player performance, it has sped up a lot with AS3 and Flash Player 9 (even if you publish AS2 for Flash 9!). I don't know if there is true parity between the systems, but it is certainly faster.
    I've always thought it a good idea to develop on a Mac. If you can make your animations look smooth and good playing back in the Mac version of the player, then you know they will look good across the board. I can't tell you how many sites (thankfully fewer these days) I go to where everything just crawls and chugs along. It is usually due to overly high frame rates and excess transparency or animated raster images. (And I'm talking about the kind that even if it did move smoothly it wouldn't be worth it!)

  • LR 2.6 and Wacom Intuos4: Performance issues

    Hi folks,
    recently I added a Wacom Intuos4 graphics tablet to my PC (Windows 7 Ultimate 64-bit; Intel QuadCore 3 GHz and 8 GB RAM) for more precise actions in Lightroom and Photoshop. Before I added the Intuos4, Lightroom performance with the adjustment brush and the gradient tool was fine, without anything to complain about. Now that I am using the graphics tablet, both tools are very slow and LR's CPU consumption rises to about 70%-80%. In Photoshop, however, the Intuos4 causes no performance issue; everything works as well as before when I am using the tablet.
    Anybody else having this problem and/or even knowing why this is so?
    Looking forward to your responses.
    Best regards
    Thomas

    doc tee wrote:
    I asked Wacom which issues have been resolved. They told me it has nothing to do with the LR performance problem. As expected, Wacom does not seem to consider itself responsible in this matter. So the driver update should not change anything here, and it really does not.
    Which graphics adapter are you using (nVidia cards are also suspected to cause LR performance trouble)?
    This machine has a GeForce GTX 285, so either the problem does not affect all nVidia cards, or there is some other problem trigger that is missing on my system. The machine is only about 1.5 months old, and I take care not to have crappy/unneeded processes running in the background. I also do not really use the tablet much with Lightroom, so it might be that I simply do not perform the kind of actions that trigger the problem. I have tried going really overboard with gradient filters and adjustment brushes for testing purposes though, and have not been able to reproduce the problem. CPU usage does go up noticeably while dragging a gradient filter, but this is to be expected due to the computational complexity of the task, and at no point does it ever make the operation feel sluggish.
    System specs for comparison:
    Intel Core i7 920 @ 2.66 GHz
    6 GB DDR3-1600
    GeForce GTX 285 1 GB (using the 196.21 drivers)
    Windows 7 64-bit
    The system disk (where Lightroom itself is stored) is an Intel SSD, but the catalog and the photos are on rotating media.

  • NUMA and Photoshop CS6 Performance on Multiprocessor Machine

    I've been doing a bit of experimenting with the various performance settings in my BIOS in my dual processor Precision T5500 Workstation running Windows 7 x64.
    One of the settings is SMP / NUMA - Symmetric MultiProcessing vs. Non Uniform Memory Access.
    Since I have two processors, the motherboard offers the choice of mapping the blocks of RAM to the processors in two different ways - the first being every other memory block to each processor, the second being each processor gets one big block and the operating system tries to assign threads to processors so that they more often access the RAM that's tied directly to that processor.
    I have been running up to now with the more generic setting - SMP.  But today I decided to try out the NUMA setting.  Since the Windows x64 operating systems since XP x64 have been capable of intelligently allocating memory in the NUMA environment, there is some promise of performance improvement.   But the question, in the chaotic world of multithreading in Windows, is how much?
    Overall, having tested Photoshop today with this new setting, I've seen generally positive performance improvements from this setting, with about a 10% to 20% speed improvement in most Photoshop operations, and notably several operations showed larger jumps in performance.
    For example, my run time for the Photoshop benchmark at http://clubofone.com/speedtest/ went down from a best of 12.8 seconds to 11.2 seconds.
    Opening a large 8000 x 8000 pixel 1.7 GB test file went down from 21.6 to 19.4 seconds.
    A fairly large improvement was seen in responsiveness in dragging very large, complex layers around.  Dragging one of the layer groups in the 8000 x 8000 pixel test file went from 5 screen updates per second to about 8 (hard to count, but it got pretty smooth).
    Painting with a 5000 pixel soft brush across the 8000 x 8000 pixel image went down from 6.4 seconds to 5.0 seconds.
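    The percentage gains quoted here follow directly from the before/after timings; the arithmetic, for reference (the helper name is ours):

    ```javascript
    // Percentage speed-up implied by a before/after pair of run times.
    function speedupPercent(before, after) {
        return ((before - after) / before) * 100;
    }

    // Timings reported above:
    //   benchmark run:  12.8 s -> 11.2 s  (12.5%)
    //   opening file:   21.6 s -> 19.4 s  (~10.2%)
    //   5000 px brush:   6.4 s ->  5.0 s  (~21.9%)
    ```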
    I haven't really found anything that ends up running slower.
    If you've got a modern multi-processor machine and are running Windows x64, and you have the option to set up Non-Uniform Memory Access, I recommend using this setting.
    -Noel

    Here is the email I got from Wacom; this is the person who contacted me after I sent them an email.
    Dick,
    If you downloaded and installed the latest Mac OS X driver, it will not work with your Intuos2 tablet.  If your OS X is 10.6+, here are the directions for a clean install and a link to the last driver that supports it:
    Open your Applications folder and locate any Tablet or Wacom Tablet folders you have
    Use the Utility in each of these folders and click ‘Remove’ under ‘Tablet Software’
    Once the software is removed, restart the computer
    After restarting, download and install the latest 6.2.0 driver from: http://cdn.wacom.com/U/drivers/mac/pro/WacomTablet_6.2.0w4.dmg
    As always, make sure your tablet is connected directly to your computer.
    Avoid using USB hubs, keyboard/monitor ports, or docking stations with the tablets, as they can cause inconsistent behavior.
    Linda Evenson
    From:
    Sent: Monday, December 02, 2013 6:22 PM
    To: Support Group Email
    Subject: Support Email - Installation
    <Email Edited by Host>

  • Red Epic and Premiere CS6 workflow issues

    Hello!
    I'm looking for some advice or insight regarding working with RED footage in CS6.
    In the past, I've worked with native RED footage on CS5.5 with no hiccups whatsoever. I'm not sure if it's CS6 or my computer's processing power, but RED footage has become uneditable for me.
    After I updated to Yosemite, I began experiencing problems with being unable to import RED Epic footage due to "unsupported or corrupt files," and when I tried to fix the problem with the Adobe Labs plugins, it reported a "generic importer failure."
    I then uninstalled and re-downloaded CS6, and now I can get footage into my project, but there is no playback at all! Both the source and program monitors remain frozen on a frame and then show the yellow "Media Pending" screen. When I attempt to render a clip in my timeline (RED HD 4K 23.58 sequence) I get an "error compiling movie" box. I've tried other sequence settings, but still no dice.
    I have my source monitor playback set to 1/2 resolution and my program monitor set to 1/4 resolution. Still nothing.
    Here are my computer specs:
    MacPro 2010
    OS X 10.10.1
    2 x 2.66 GHz 6-Core Intel Xeon Processor
    32GB Memory
    ATI Radeon HD 5870 GPU
    Blackmagic Intensity Pro (Desktop Video 10.2.3)
    I know that I can process the footage in REDCINE-X Pro (which will take a million hours), but I have a major project that is all shot with RED Epic footage and I was counting on Premiere's ability to edit natively, since I'm kind of on a tight schedule.
    Am I looking at some hardware upgrades? If anyone has any advice, I'd really appreciate it!

    Hi Hannah,
    I see you posted on Creative COW too.
    Update to the most recent version of CS6.
    Make sure your GPU has more than 768 MB of VRAM.
    Try a previously installed version of OS X.
    Thanks,
    Kevin

  • Photoshop, smart objects and dynamic filters performance issues

    Hello,
    I am quite new to Photoshop, coming after several years of using Capture NX 2 to process thousands of NEF and RW2 files (RAW from Nikon and Panasonic).
    I use Photoshop to read RAW pictures, convert them to a smart object, then apply several dynamic filters, mainly from the Nik Collection (Dfine, Color Efex Pro, Sharpener Pro), sometimes Topaz Denoise. I do that with actions, so I can batch process many pictures.
    But sometimes I have to manually adjust some settings, and this is where I do not really understand how Photoshop works. If I have to adjust, let's say, the last filter on the stack, Photoshop reprocesses all the filters below it, which can be very tedious as this takes a lot of time.
    Is there a way to tell Photoshop to keep all intermediate data in memory, so if you have to adjust one of the last filters the process starts immediately?
    Any help would be greatly appreciated.
    Frederic.
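    The recomputation Frederic describes can be illustrated with a toy filter stack (a Python sketch with made-up filters; it says nothing about Photoshop's actual internals): if the result after each filter is cached, editing filter i only reruns filters i and above instead of the whole stack.

```python
# Toy smart-filter stack: each filter is a function applied in order.
# Caching the intermediate result after each filter means that editing
# filter i only requires re-running filters i..n, not 0..n.

class FilterStack:
    def __init__(self, filters):
        self.filters = list(filters)
        self.cache = []   # cache[i] = image after filters[0..i]
        self.runs = 0     # counts individual filter executions

    def render(self, image):
        self.cache = []
        out = image
        for f in self.filters:
            out = f(out)
            self.runs += 1
            self.cache.append(out)
        return out

    def edit_filter(self, i, new_filter, image):
        """Replace filter i and re-render, reusing cached intermediates."""
        self.filters[i] = new_filter
        out = self.cache[i - 1] if i > 0 else image
        for j in range(i, len(self.filters)):
            out = self.filters[j](out)
            self.runs += 1
            self.cache[j] = out
        return out

stack = FilterStack([lambda x: x + 1, lambda x: x * 2, lambda x: x - 3])
stack.render(10)                           # 3 filter runs: ((10+1)*2)-3 = 19
stack.edit_filter(2, lambda x: x - 5, 10)  # only 1 extra run: (11*2)-5 = 17
print(stack.runs)                          # 4, not 6
```

    The trade-off is memory: every cached intermediate is a full-size image, which is presumably why Photoshop recomputes the stack instead of keeping it all resident.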

    Thank you Chris.
    I am surprised, as for years there have been a lot of discussions about Capture NX2 supposedly being slow. In fact, when using the same filters (+ Nik Color Efex), NX2 is much, much faster than Photoshop, and when you have to make an adjustment in any of the settings, you can do so immediately.
    Of course, Photoshop is completely open and NX2 totally closed (and now not supported anymore).
    But I really don't know how to adapt my workflow, short of buying the most powerful PC possible (I already have two that are quite powerful), and even that would still be far from comfortable. I am used to manually tuning many, many pictures (adjusting noise reduction, sharpening, light, colors ...), and this was quite fast with NX2.
    I am probably not on the correct forum for this, and I will try to investigate elsewhere.
    Anyhow, thank you for your answer.
    Frédéric

  • Dreamweaver CS6 performs very slowly when using Bootstrap

    I'm working with a Bootstrap template that includes a lot of JavaScript and CSS files, and Dreamweaver CS6 performs EXTREMELY SLOWLY when working with this site.
    It takes a few seconds to be able to scroll, especially when working in "Split" view, and every time I want to preview the page in a browser I must save the file and then wait 2-5 seconds for the save to complete before I hit Preview, or else Dreamweaver will crash.
    I've checked for updates to Dreamweaver CS6 but my copy seems to be up-to-date.
    Any suggestions?

    Dreamweaver performing very slowly depends on a number of factors:
    The operating system - apparently the latest version of Mac OS causes all sorts of problems
    Processor
    RAM
    Disk space
    External links to scripts and style sheets
    In my case, I do not have any problems other than the normal ones:
    Windows 8.1
    i7
    16GB
    2TB
    local files
