Global Performance Cache and GPU

Nvidia's website says: The Global Performance Cache feature makes After Effects faster and more responsive than ever before by taking full advantage of the power of your computer's hardware. NVIDIA GPUs allow you to accelerate previews when drawing images to the screen for a highly interactive experience. 
Does the Global Performance Cache automatically use the GPU, or do you have to enable it? And is this a good reason to invest in a good GPU, or not really?

The Global Performance Cache has nothing whatsoever to do with the GPU or Nvidia. See this for details of the cache features:
https://www.video2brain.com/en/lessons/global-performance-cache-and-persistent-disk-cache
After Effects uses the GPU for almost nothing. Details are here:
http://www.adobe.ly/AE_CUDA_OpenGL_GPU

Similar Messages

  • AE CS6 Global Performance Cache doesn't appear to be caching

    If I play the timeline by hitting the space bar, AE will slowly play through the timeline, rendering as it goes. The rendered portion will have the familiar green bar at the top, but at a certain point my system will run out of memory and I'll begin losing what was first rendered. During this process I'll see small segments of blue that indicate data written to the disk cache, but the segments are small and sporadic, just a frame or two here and there.
    I've set up the cache and processor according to the Adobe tutorial, then reviewed the tutorial and re-checked the settings with our IT manager.
    After watching one of the video tutorials on AE CS6 it occurs to me that this feature may only work with RAM Preview. So I attempted what was shown in the video: Do a RAM Preview, change something on a layer (moved a path point), new RAM Preview, then UNDO.  I did not get a green bar indicating that AE had re-loaded the previously cached version.
    I then closed and reopened the project as shown in the video but did not see a blue bar indicating that the preview had been saved to disk.
    Project: 20 second comp with two video and two solid layers. Video = 1080 Canon DSLR video (same video file; one layer has a stationary mask). Solid layers each have one path with the Stroke effect applied to it.
    System: HP workstation, 6-core Xeon (hyperthreading OFF), 12GB ECC RAM, nVidia GeForce 460 SE (1GB DDR5), Win7 (64), Intel SSHD (system), 10k HDD (swap and temp files), server-class 7200RPM mirrored data drives, CS6 Production Premium. All drives are defragmented. Tried with Norton Corporate AV ON and OFF, no difference.
    As a troubleshooting measure we also tried this on a brand new HP Z420 workstation with Xeon Quad (hyperthreaded), 8GB ECC Ram, Quadro 3800 (or 3850?) graphics, system drive, data drive. Scrubbing through timeline was faster, but disk cache behavior was the same.
    My initial impression of AE CS6 is that it feels slower than CS5 and it seems to me that I must be doing something blatantly wrong. Ideas?
    Guy

    OK, here's what I've found:  On my workstation as well as our test system (now set up with a dedicated RAID just for the cache), the GPC seems to work when the resolution is set to 1/2 rather than Full (before changing it, on my workstation the resolution was set to Auto and it chose Full).
    Is there a way to get GPC working when the resolution is set to Full?

  • Global Performance Cache - Seeking Opinions

    Hi Guys -
    I am still debating whether to buy CS6. None of the new features interest me nearly as much as the GPC; my decision to upgrade depends a lot on what you guys say about it.
    I've used AfterFX long enough (since AE7 Pro) to know that the speed of a RAM preview depends on many factors. Nevertheless I hope that won't preclude you from giving me a general sense of the GPC's effectiveness.
    Is the GPC so good that it's worth upgrading for that feature alone? Is it an obvious improvement compared to previous versions, one you appreciate all the time? Or is the benefit minimal, a small step in the right direction but not something that should weigh heavily in my decision to upgrade?
    Thank you very much for taking the time to share your thoughts, I appreciate it.

    I agree with Todd that CS6 isn't as "wonky" as it was at first. I also agree that I can't stand to go back because everything is faster.
    This paragraph of yours is very revealing:
    For me very often it's just the opposite. I'm asked to make a title sequence, a DVD menu, a wedding video "more interesting please." We talk, I learn what I can, make some sketches, take notes... then I sit down with AfterFX (and Encore, Premiere Pro, etc.). I tend to use the same effects over and over because I don't want to (or don't have time to) wait for unpredictable outcomes, and changing even the simplest expression, wiggle(3,4) to wiggle(3,5), means the whole comp has to re-render in RAM. It's aggravating.
    This is exactly what I mean when I talk about doing motion studies at 1/4 or 1/2 rez. Make your motion experiments on simple layers without effects. I tend to use a lot of pencil sketches shot with my phone, simple shape layers, and text layers, all with nothing but motion, to figure out what the scene is going to look like. I'll often send two, three, or even 10 different motion studies to a client via private Vimeo renders to decide how to proceed with the project. It's the same technique that cel animators and 3D artists use today. Pencil tests traditionally refer to cel animation, but I use the term to describe any motion test from any app that does not have the finishing touches applied. Make a low-rez 'pencil test' version of the animation to make sure things work before you paint, texture, and clean up the rest for the final proof. The GPC in CS6 makes this kind of experimenting extremely fast. It also helps when you are doing the very last stages of your color and effects work on a frame. Both of these comprise less than 30% of my time working on a project, and that's why I spend most of my 'creative time' with the caches turned off, because they still hold on to an occasional frame or background-rendered effect that I don't want to see.
    You have a distinct disadvantage working with folks that are not in the production business because they are not usually comfortable with 'pencil tests', but there's nothing that says you can't use them to speed up your workflow. Just keep the ones that you like as separate comps, pick your favorite, finish it up and send the client a nice completed proof. If they don't like it then you've got 2 or 3 or 10 other pencil sketch versions already done that you can quickly apply effects to and clean up for final approval.
    Wouldn't you rather spend 2 hours on a DVD title checking the timing and motion with 10 versions and 1 hour adjusting the color, effects, and final look than 2 days fiddling around with 3 versions checking every frame at full rez? I'm just trying to give you a few pointers that will let you have some time in your life to do something other than sit in front of your screen and wait for full-rez, full-effect previews of your work.

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time when I try to open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cubes - 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh, it takes 10-odd minutes to open a cube on an end user's system. From the next time onwards, it opens really fast, within 10 seconds. After a cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solutions/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
          <Object>
            <DatabaseID>FINANCE CUBES</DatabaseID>
          </Object>
          <Type>ProcessFull</Type>
          <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
        </Process>
      </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement numbers from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are
    persistent cache across each app server -> cluster table,
    update cache in delta process is checked ->group on infoprovider type
    use cache despite virtual characteristics/key figs checked (one InfoCube has 1 virtual key figure which should have a static result for a day)
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked and no dataloads were in progres on the info-providers and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Query Performance with and without cache

    Hi Experts
    I have a query that takes 50 seconds to execute without any caching or precalculation.
    Once I have run the query in the Portal, any subsequent execution takes about 8 seconds.
    I assumed that this was to do with the cache, so I went into RSRT and deleted the Main Memory Cache and the Blob cache, where my queries seemed to be.
    I ran the query again and it took 8 seconds.
    Does the query cache somewhere else? Maybe on the portal? or on the users local cache? Does anyone have any idea of why the reports are still fast, even though the cache is deleted?
    Forum points always awarded for helpful answers!!
    Many thanks!
    Dave

    Hi,
    Cached data automatically becomes invalid whenever data in the InfoCube is loaded or purged and when a query is changed or regenerated. Once cached data becomes invalid, the system reverts to the fact table or associated aggregate to pull data for the query. You can see the cache settings for all queries in your system by using transaction SE16 to view table RSRREPDIR. The CACHEMODE field shows the settings of the individual queries. The numbers in this field correspond to the cache mode settings above.
    To set the cache mode on the InfoCube, follow the path Business Information Warehouse Implementation Guide (IMG) > Reporting-Relevant Settings > General Reporting Settings > Global Cache Settings, or use transaction SPRO. Setting the cache mode at the InfoCube level establishes a default for each query created from that specific InfoCube.

  • Performance counters from nginx, Varnish Cache and Elastic Search on Linux

    Hi, we would like to monitor our few Linux devices with SCOM 2012 R2. The OS counters are working well, but on Linux we are also running the applications nginx, Varnish Cache and Elasticsearch, from which we need performance counters as well. Are there any Management Packs available to read the data from these applications?
    Thank you very much for any hint.

    I'm not aware of any pre-built management packs for the applications you have listed.
    While you could build your own management packs using OpsMgr's management pack authoring capabilities, if your monitoring needs are fairly straightforward, you could create some shell scripts to retrieve the information you want (for both health and/or performance),
    and then use the Linux/UNIX script monitoring wizard in the OpsMgr console to create the monitors you want.  Creating the monitors/rules is super-easy once you have the shell scripts.
    Michael Kelley, Lead Program Manager, Open Source Technology Center
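    Before a purpose-built management pack exists, the script route described above can be quite small. Below is a minimal sketch of the kind of shell script the OpsMgr script-monitoring wizard could wrap; the nginx stub_status URL is a placeholder assumption, and in a real deployment you would add similar probes for varnishstat and the Elasticsearch stats endpoint:

```shell
#!/bin/sh
# Sketch of a metric-collection script for the OpsMgr UNIX/Linux script
# monitoring wizard. Assumes nginx's stub_status module is enabled at the
# placeholder URL below; varnishstat and Elasticsearch probes would follow
# the same pattern.

NGINX_STATUS_URL="http://127.0.0.1/nginx_status"   # hypothetical endpoint

# stub_status output looks like:
#   Active connections: 3
#   server accepts handled requests
#    119 119 254
#   Reading: 0 Writing: 1 Waiting: 2
parse_active_connections() {
    # Print the number that follows "Active connections:"
    awk '/Active connections/ {print $3}'
}

# Live usage would be:  curl -s "$NGINX_STATUS_URL" | parse_active_connections
# Demo against a captured sample so the parsing logic is verifiable offline:
sample='Active connections: 3
server accepts handled requests
 119 119 254
Reading: 0 Writing: 1 Waiting: 2'

active=$(printf '%s\n' "$sample" | parse_active_connections)
echo "nginx.active_connections=$active"
```

    The wizard then maps the script's output line (or its exit code) onto a performance rule or health monitor.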

  • LOCAL OLAP CACHE AND GLOBAL OLAP CACHE

    what is local OLAP cache and global OLAP cache...
    what is the difference between them... can you explain a scenario plz...
    will reward with points
    thanks in advance

    Hello GURU
    Local cache is specific to a user; before BW 3.0 only local cache was available. If a user runs a query, data comes into the cache from the InfoProvider, and the next time the same query will not go to the database but will instead fetch data from cache memory. This cache is used only for that particular user; if some other user tries the same query it will not pick up data from the cache.
    From BW 3.0 onward we have a global cache, which means several users can access the same cache for the same query or for related data already in the cache.
    Thanks
    Tripple k

  • Global Olap Cache

    Hello BW Gurus,
    We would like to use the global OLAP cache function to optimize the performance of our queries.
    We've tried it using the persistent cache in flat file mode. Unfortunately the size of the global cache remains 0 KB whenever we execute the different queries, and the system does not generate any flat file.
    First of all I would like to ask: what is the relationship between the shared memory size (rdsb/esm/buffersize_kb) and the OLAP cache size? Which one should/must be bigger, or is this question irrelevant to the realization of the OLAP cache?
    And could you please tell me which other parameters we should check?
    Debugging with the transaction RSRT (Execute and Debug with Cache Breakpoints) didn't help us.
    Thanks for your reply.
    Regards,
    Nuran Adal

    Hi
    Have a look at this:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c0ebfba1-5a1a-2d10-82b5-fdddc1e5955c?quicklink=index&overridelayout=true
    Thanks
    Kalyan

  • "Light Room 5.4 Encountered An Error When Reading Its Cache and Needs To Quit" It's happening every time I try to open LR... I'd be grateful for any help.

    Hi! Every time, no matter if I've restarted or powered down, even if it's the only program running, it's the same error. I've checked for any updates for my laptop, made sure it's been defragged weekly, checked system performance... all to no avail. Today is the first issue I've had with LR5.4, and I've been trying to correct it since this morning. I'd be grateful for any and all help. Thank you!

    Did you mean, "Lightroom encountered an error when reading from its preview cache and needs to quit?"
    https://forums.adobe.com/message/6333848
    I've been using LR since 1.0 and never encountered this issue until today. The above procedure for creating a new Preview Cache folder and preview files should fix the issue.

  • HTTP Headers - enabling caching and compression with the portal?

    Has anyone configured their web server (IIS or Apache) or use a commercial product to flawlessly cache and compress all content generated by the portal?
    Compression and caching are critical for making our portal-based applications work for overseas users. It should be doable, just taking advantage of standard HTTP protocols, but implementing this in a complex system like the portal is tricky; we seem to be generating different values in the HTTP headers for the same types of files (such as CSS).
    We are running Apache so can't take advantage of the built-in compression capabilities of the .NET portal. We are running the Java version: 6.1 MP1, SQL Server 2000 (portal, search, collab, publisher, studio, analytics, custom .NET and Java portlets on a remote server).
    Basically our strategy is to compress all outgoing static and dynamic text content (html, CSS, javascript), and to cache all static files (CSS, javascript, images) for 6 months to a year depending on file type.
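    One practical way to chase down inconsistent headers like the ones described above is to compare the relevant response headers per file type. The sketch below is a hedged example: the portal URL is a placeholder, and the parsing assumes conventionally formatted headers (in practice `curl -I` output also carries trailing carriage returns you may need to strip):

```shell
#!/bin/sh
# Check which compression and caching headers a resource comes back with.
# Live usage (placeholder host):
#   curl -sI -H "Accept-Encoding: gzip" http://portal.example.com/style.css | check_headers

check_headers() {
    # Reads raw response headers on stdin; prints the two we care about.
    # Header names are case-insensitive, hence the tolower() comparison.
    awk 'tolower($0) ~ /^content-encoding:/ {print "encoding=" $2}
         tolower($0) ~ /^cache-control:/    {sub(/^[^:]*:[ ]*/, ""); print "cache-control=" $0}'
}

# Demo against captured headers so the parsing is verifiable offline:
sample='HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Cache-Control: max-age=15552000, public'

result=$(printf '%s\n' "$sample" | check_headers)
printf '%s\n' "$result"
```

    Running this against the same CSS file through different paths (direct, via the portal, via any cache in front) makes mismatched Cache-Control values or a missing Content-Encoding easy to spot.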
    Here are some links on the subjects of caching and compression that I have compiled:
    Caching & Compression info and tools
    http://www.webreference.com/internet/software/servers/http/compression/
    http://www.ibm.com/developerworks/web/library/wa-httpcomp/
    http://www.mnot.net/cache_docs/
    http://www.codeproject.com/aspnet/HttpCompressionQnD.asp?df=100&forumid=322472&exp=0&select=1722189#xx1722189xx
    http://en.wikipedia.org/wiki/Http_compression
    http://perl.apache.org/docs/tutorials/client/compression/compression.html
    https://secure.xcache.com/Page.aspx?c=60&p=590
    http://www.codinghorror.com/blog/archives/000807.html
    http://www.howtoforge.com/apache2_mod_deflate
    http://www.ircache.net/cgi-bin/cacheability.py
    http://betterexplained.com/articles/how-to-optimize-your-site-with-http-caching/
    http://betterexplained.com/articles/speed-up-your-javascript-load-time/
    http://www.rubyrobot.org/article/5-tips-for-faster-loading-web-sites
    http://betterexplained.com/articles/how-to-optimize-your-site-with-gzip-compression/
    http://www.gidnetwork.com/tools/gzip-test.php
    http://www.pipeboost.com/
    http://www.schroepl.net/cgi-bin/http_trace.pl
    http://leknor.com/code/gziped.php?url=http%3A%2F%2Fwww.google.com
    http://www.port80software.com/surveys/top1000compression/
    http://www.rexswain.com/httpview.html
    http://www.15seconds.com/issue/020314.htm
    http://www.devwebpro.com/devwebpro-39-20041117DevelopingYourSiteforPerformanceCompressionandOtherServerSideEnhancements.html
    http://www.webpronews.com/topnews/2004/11/17/developing-your-site-for-performance-optimal-cache-control
    http://www.sitepoint.com/print/effective-website-acceleration
    http://nazish.blog.com/1007523/
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/IETechCol/dnwebgen/IE_Fiddler2.asp?frame=true
    http://www.fiddlertool.com/fiddler/version.asp
    http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
    http://www.web-caching.com/cacheability.html
    http://www.edginet.org/techie/website/http.html
    http://www.cmlenz.net/blog/2005/05/on_http_lastmod.html
    http://www.websiteoptimization.com/speed/tweak/cache/
    http://www.webperformance.org/caching//caching_for_performance.html
    http://betterexplained.com/articles/how-to-debug-web-applications-with-firefox/

    Hi Scott,
    Does WebLogic Platform 8.1 support Netscape? We have developed a portal which works perfectly on IE but it dies in Netscape. Are netUI tags not supported in Netscape?
    Pls reply
    manju
    Scott Dunbar <[email protected]> wrote:
    From a pure HTML perspective Portal does its rendering with nested tables.
    Netscape 4.x and below have terrible performance with nested tables. The problem is not the Portal server but rather Netscape on the client machine.
    If IE and/or a recent version of Netscape/Mozilla is not possible then there are really only two options:
    1) Faster client hardware - not likely to be an acceptable solution.
    2) Minimize the number of portlets and the complexity within the portlets.
    Neither of these solutions is a great answer, but the 4.7 series of Netscape is getting pretty old. Having said that, we've got customers who want to continue to use IE 4 :)
    Again, though, this problem is, I'm afraid, out of our hands. It is the client rendering time that is the issue.
    cg wrote:
    Does anyone know of any known reasons why the 7.0 (did it also with 4.0) portal pages can take up to almost 30 seconds to load in Netscape 4.7? I know it is a very generic question but our customer still uses 4.7 and will not use the portal b/c it takes so long to load some of the webapps. What the pages will do when loading is that the headers will come up and when it gets to the body of the page it seems to stall and then comes up all of a sudden. For some of the pages it takes 6 seconds and for others it takes about 24-27 seconds.
    We have suggested using IE only but that is not an option with all of the customers and getting a newer version of Netscape is also out of the question.
    Any suggestions would be greatly appreciated.
    --
    scott dunbar
    bea systems, inc.
    boulder, co, usa

  • Ssd cache and hybrid drive

    I am looking to order the HP 17t-j000 with 8GB of memory. It offers both SSD cache and a hybrid drive as options, and you can choose both. Does it make sense to get both, or is this just redundant?

    No, the SSD cache drive combined with the hybrid drive will outperform just the hybrid drive by quite a bit. You will get boot-time performance close to a main SSD drive, but a main SSD drive would be even faster, particularly on data transfer.

  • I have Photoshop CS6 Extended Students and Teachers Edition.  When I go into the Filter/Oil paint and try to use Oil paint a notice comes up "This feature requires graphics processor acceleration.  Please check Performance Preferences and verify that "Use

    I have Photoshop CS6 Extended Students and Teachers Edition. When I go into Filter > Oil Paint and try to use Oil Paint, a notice comes up: "This feature requires graphics processor acceleration. Please check Performance Preferences and verify that 'Use Graphics Processor' is enabled." When I go into Performance Preferences I get a notice: "No GPU available with Photoshop Standard". Is there any way I can add this feature to my Photoshop, either by purchasing an addition or downloading something?

    Does your display adapter have a supported GPU with at least 512MB of VRAM? And do you have the latest device drivers installed with OpenGL support? Use the CS6 menu Help > System Info..., use its Copy button, and paste the information in here.

  • I want to use a mac mini as a server supporting storage. Can I pair my macair to it for when I need to perform updates and maintenance ?

    I want to use a Mac mini as a server supporting storage. I have other devices such as an iMac and iPad that will access information from the server. I do not want to purchase a monitor and keyboard as the unit will sit in a cupboard out of sight. Can I pair my MacBook Air to it for when I need to perform updates and maintenance?

    I have a 2010 Mac mini running Yosemite and Server which I use as a headless home server.
    I have it set up to allow screen sharing and can connect to it and control it with my iMac, MacBook Pro, iPhone, and a 2011 Mini Server that I use as an HTPC.
    You can check this out for all the Yosemite Server capabilities:
    https://help.apple.com/advancedserveradmin/mac/4.0/
    I have iTunes Home Sharing set up on it and have my entire iTunes library on it. I can then use any of my Macs to play movies or songs from it and only keep locally a select subset of that on my individual devices.
    Rather than an update server, I utilize Server's Caching Service. The caching server will duplicate any update download (system or Mac App Store purchases) any time a device connected to my network downloads one. The update will then be stored locally, and all other devices will download the update from it, which can be faster than from Apple directly. This has the advantage of only having to download once on a limited-bandwidth internet connection. There is also an Update Server service available, but it is somewhat more involved to set up. However, it will download and store all available updates.
    One other thing: if you do not care to sync items like Contacts, Calendar, etc. to iCloud, you can set Server up to sync these items across devices locally.

  • Oracle 11g result cache and TimesTen

    Oracle 11g has introduced the concept of the result cache, whereby the result sets of frequently executed queries are stored in cache and used later when other users request the same query. This is different from caching the data blocks and executing the query over and over again.
    Tom Kyte calls this just-in-time materialized view whereby the results are dynamically evaluated without DBA intervention
    http://www.oracle.com/technology/oramag/oracle/07-sep/o57asktom.html
    My point is that, in view of utilities like result_cache and the possible use of Solid State Disks in Oracle to speed up physical I/O etc., is there any need for a product like TimesTen? It sounds to me that it may just add another layer of complexity.

    The Oracle result cache is a useful tool but it is distinctly different from TimesTen. My understanding of Oracle's result cache is that it caches result sets for seldom-changing data like lookup tables (currency IDs/codes), reference data that does not change often (lists of counterparties), etc. It would be pointless to cache result sets where the underlying data changes frequently.
    There is also another argument for the SQL result cache: if you are hitting high CPU utilization and you have enough memory, then you can cache some of the result sets, thus saving on CPU cycles.
    Considering the arguments about hard-wired RDBMSs and Solid State Disks (SSDs), we can talk about it all day, but having SSDs does not eliminate the optimizer's consideration of physical I/O. A table scan is a table scan whether the data resides on a SCSI or SSD disk. SSD will be faster but we are still performing physical I/Os.
    With regard to TimesTen, the product positioning is different. TimesTen is closer to the middle tier than Oracle. It is designed to work close to the application layer, whereas Oracle has a much wider purpose. For real-time response and moderate volumes there is no way one can substitute any hard-wired RDBMS for TimesTen. The request for a result cache has been around for some time. In areas like program trading and market data, where the underlying data changes rapidly, TimesTen comes in very handy, as the data is real-time/transient and the calculations have to be done almost in real time, with the least complication from the execution engine. I fail to see how one could deploy the result cache in this scenario. Because of the underlying change of data, Oracle would be forced to recalculate the queries almost every time, and the result cache would just be wasted.
    Hope this helps,
    Mich
