BOINC and using the GPU

I installed boinc 7.2.42-1 from the community repo and followed the wiki instructions.
I have a new GTX 750 Ti GPU with nvidia 343.36-2 drivers, so I thought I'd harness it for computations too. This is the info boinc gives me while starting:
Fri 09 Jan 2015 07:46:21 AM EET | | cc_config.xml not found - using defaults
Fri 09 Jan 2015 07:46:21 AM EET | | Starting BOINC client version 7.2.42 for x86_64-pc-linux-gnu
Fri 09 Jan 2015 07:46:21 AM EET | | log flags: file_xfer, sched_ops, task
Fri 09 Jan 2015 07:46:21 AM EET | | Libraries: libcurl/7.39.0 OpenSSL/1.0.1j zlib/1.2.8 libidn/1.29 libssh2/1.4.3
Fri 09 Jan 2015 07:46:21 AM EET | | Data directory: /var/lib/boinc
Fri 09 Jan 2015 07:46:21 AM EET | | CUDA: NVIDIA GPU 0: GeForce GTX 750 Ti (driver version unknown, CUDA version 6.5, compute capability 5.0, 2048MB, 2009MB available, 2082 GFLOPS peak)
Fri 09 Jan 2015 07:46:21 AM EET | | OpenCL: NVIDIA GPU 0: GeForce GTX 750 Ti (driver version 343.36, device version OpenCL 1.1 CUDA, 2048MB, 2009MB available, 2082 GFLOPS peak)
Fri 09 Jan 2015 07:46:21 AM EET | | Host name: vishera
Fri 09 Jan 2015 07:46:21 AM EET | | Processor: 6 AuthenticAMD AMD FX(tm)-6300 Six-Core Processor [Family 21 Model 2 Stepping 0]
Fri 09 Jan 2015 07:46:21 AM EET | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vmmcall bmi1
Fri 09 Jan 2015 07:46:21 AM EET | | OS: Linux: 3.17.6-1-ARCH
Fri 09 Jan 2015 07:46:21 AM EET | | Memory: 7.76 GB physical, 0 bytes virtual
Fri 09 Jan 2015 07:46:21 AM EET | | Disk: 111.29 GB total, 106.17 GB free
Fri 09 Jan 2015 07:46:21 AM EET | | Local time is UTC +2 hours
But but... How do I know that my GPU is actually working and doing some work? And do I have to check whether the projects I choose (currently SETI, Einstein, Rosetta and World Community Grid) support CUDA or OpenCL, or is the NVIDIA badge beside the project a good enough clue?
One more thing that is a little vague in the wiki: which startup script should I put this in?
xhost local:boinc
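For what it's worth, one quick way to check (a sketch, assuming the driver's nvidia-smi utility and BOINC's boinccmd tool are installed) is to compare the client's task list against the GPU load while a task runs:

# GPU load and memory use should rise while a GPU work unit is crunching
# (some fields may read N/A on GeForce cards)
nvidia-smi

# list the tasks the client is currently working on
boinccmd --get_tasks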

I too have a similar setup for my laptop. Did you get any information on this?
I placed the following in my boinc.service systemd unit file, but I have no idea if it's working correctly.
ExecStartPre=xhost local:boinc
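For reference, a minimal sketch of the two places this line could live (assumptions: the client runs as the boinc user and your display is :0; paths and values are illustrative). Since xhost needs a running X server and your X credentials, the wiki presumably intends your X session startup, e.g. ~/.xinitrc:

# ~/.xinitrc - allow local users (including boinc) to connect to the X server
xhost local:boinc

If it stays in the unit instead, note that systemd Exec lines want an absolute path, and that xhost needs to know which display to talk to:

[Service]
Environment=DISPLAY=:0
ExecStartPre=/usr/bin/xhost local:boinc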

Similar Messages

  • When I try to compose an email, I just get a compose window icon stuck in the Windows taskbar and can't actually enlarge it and use it

    Ok, details of installation below. So when I try to compose an email, I just get a compose window icon stuck in the Windows
    taskbar and can't actually enlarge it and use it. If I restart in safe mode with all addons disabled, then it works fine. But
    if I restart normally and manually disable addons/plugins, then close and start normally again (i.e. not safe mode), it
    breaks, so it does not seem to be an addon or plugin but rather something with the configuration.
    Application Basics
    Name Thunderbird
    Version 31.6.0
    User Agent Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Thunderbird/31.6.0
    Profile Folder
    Show Folder
    (Local drive)
    Application Build ID 20150330093429
    Enabled Plugins about:plugins
    Build Configuration about:buildconfig
    Memory Use about:memory
    Mail and News Accounts
    ID Incoming server Outgoing servers
    Name Connection security Authentication method Name Connection security Authentication method Default?
    account2 (none) Local Folders plain passwordCleartext
    account3 (nntp) news.mozilla.org:119 plain passwordCleartext stbeehive.oracle.com:465 SSL passwordCleartext true
    account5 (imap) stbeehive.oracle.com:993 SSL passwordCleartext stbeehive.oracle.com:465 SSL passwordCleartext true
    Crash Reports
    Report ID Submitted
    bp-0a8986d2-ff0c-41c3-9da6-e770e2141224 24/12/2014
    bp-01f44ba7-3143-4452-ac98-981b62140123 23/01/2014
    Extensions
    Name Version Enabled ID
    British English Dictionary 1.19.1 false [email protected]
    Lightning 3.3.3 false {e2fda1a4-762b-4020-b5ad-a41df1933103}
    Oracle Beehive Extensions for Thunderbird (OracleInternal) 1.0.0.5 false [email protected]
    Important Modified Preferences
    Name Value
    accessibility.typeaheadfind.flashBar 0
    browser.cache.disk.capacity 358400
    browser.cache.disk.smart_size_cached_value 358400
    browser.cache.disk.smart_size.first_run false
    browser.cache.disk.smart_size.use_old_max false
    extensions.lastAppVersion 31.6.0
    font.internaluseonly.changed false
    font.name.monospace.el Consolas
    font.name.monospace.tr Consolas
    font.name.monospace.x-baltic Consolas
    font.name.monospace.x-central-euro Consolas
    font.name.monospace.x-cyrillic Consolas
    font.name.monospace.x-unicode Consolas
    font.name.monospace.x-western Consolas
    font.name.sans-serif.el Calibri
    font.name.sans-serif.tr Calibri
    font.name.sans-serif.x-baltic Calibri
    font.name.sans-serif.x-central-euro Calibri
    font.name.sans-serif.x-cyrillic Calibri
    font.name.sans-serif.x-unicode Calibri
    font.name.sans-serif.x-western Calibri
    font.name.serif.el Cambria
    font.name.serif.tr Cambria
    font.name.serif.x-baltic Cambria
    font.name.serif.x-central-euro Cambria
    font.name.serif.x-cyrillic Cambria
    font.name.serif.x-unicode Cambria
    font.name.serif.x-western Cambria
    font.size.fixed.el 14
    font.size.fixed.tr 14
    font.size.fixed.x-baltic 14
    font.size.fixed.x-central-euro 14
    font.size.fixed.x-cyrillic 14
    font.size.fixed.x-unicode 14
    font.size.fixed.x-western 14
    font.size.variable.el 17
    font.size.variable.tr 17
    font.size.variable.x-baltic 17
    font.size.variable.x-central-euro 17
    font.size.variable.x-cyrillic 17
    font.size.variable.x-unicode 17
    font.size.variable.x-western 17
    gfx.blacklist.suggested-driver-version 257.21
    mail.openMessageBehavior.version 1
    mail.winsearch.firstRunDone true
    mailnews.database.global.datastore.id 8d997817-eec1-4f16-aa36-008d5baeb30
    mailnews.database.global.indexer.enabled false
    network.cookie.prefsMigrated true
    network.tcp.sendbuffer 65536
    places.database.lastMaintenance 1429004341
    places.history.expiration.transient_current_max_pages 78789
    plugin.disable_full_page_plugin_for_types application/pdf
    plugin.importedState true
    plugin.state.flash 0
    plugin.state.java 0
    plugin.state.np32dsw 0
    plugin.state.npatgpc 0
    plugin.state.npctrl 0
    plugin.state.npdeployjava 0
    plugin.state.npfoxitreaderplugin 0
    plugin.state.npgeplugin 0
    plugin.state.npgoogleupdate 0
    plugin.state.npitunes 0
    plugin.state.npoff 0
    plugin.state.npqtplugin 0
    plugin.state.nprlsecurepluginlayer 0
    plugin.state.npunity3d 0
    plugin.state.npwatweb 0
    plugin.state.npwlpg 0
    plugins.update.notifyUser true
    Graphics
    Adapter Description NVIDIA Quadro FX 580
    Vendor ID 0x10de
    Device ID 0x0659
    Adapter RAM 512
    Adapter Drivers nvd3dum nvwgf2um,nvwgf2um
    Driver Version 8.15.11.9038
    Driver Date 7-14-2009
    Direct2D Enabled Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    DirectWrite Enabled false (6.2.9200.16571)
    ClearType Parameters Gamma: 2200 Pixel Structure: R
    WebGL Renderer Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    GPU Accelerated Windows 0. Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    AzureCanvasBackend skia
    AzureSkiaAccelerated 0
    AzureFallbackCanvasBackend cairo
    AzureContentBackend cairo
    JavaScript
    Incremental GC 1
    Accessibility
    Activated 0
    Prevent Accessibility 0
    Library Versions
    Expected minimum version Version in use
    NSPR 4.10.6 4.10.6
    NSS 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS Util 3.16.2.3 3.16.2.3
    NSS SSL 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS S/MIME 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC

    Noticed this in the info supplied:
    Graphics Adapter Description NVIDIA Quadro FX 580
    Vendor ID 0x10de
    Device ID 0x0659
    Adapter RAM 512
    Adapter Drivers nvd3dum nvwgf2um,nvwgf2um
    Driver Version 8.15.11.9038
    Driver Date 7-14-2009
    Direct2D Enabled Blocked for your graphics driver version.
    Try updating your graphics driver to version 257.21 or newer.
    Could you update your graphics driver and retest?

  • Media Encoder CC not using GPU acceleration for After Effects CC raytrace comp

    I created a simple scene in After Effects that's using the raytracer engine... I also have GPU enabled in the raytracer settings for After Effects.
    When I render the scene in After Effects using the built-in Render Queue, it only takes 10 minutes to render the scene.
    But when I export the scene to Adobe Media Encoder, it indicates it will take 13 hours to render the same scene.
    So clearly After Effects is using GPU acceleration but for some reason Media Encoder is not.
    I should also point out that my GeForce GTX 660 Ti card isn't officially supported and I had to manually add it into the list of supported cards in:
    C:\Program Files\Adobe\Adobe After Effects CC\Support Files\raytracer_supported_cards.txt
    C:\Program Files\Adobe\Adobe Media Encoder CC\cuda_supported_cards.txt
    While it's not officially supported, it's weird that After Effects has no problem with it yet Adobe Media Encoder does...
    I also updated After Effects to 12.1 and AME to 7.1 as well as set AME settings to use CUDA but it didn't make a difference.
    Any ideas?

    That is normal behavior.
    The "headless" version of After Effects that is called to render frames for Adobe Media Encoder (or for Premiere Pro) using Dynamic Link does not use the GPU for acceleration of the ray-traced 3D renderer.
    If you are rendering heavy compositions that require GPU processing and/or the Render Multiple Frames Simultaneously multiprocessing, then the recommended workflow is to render and export a losslessly encoded master file from After Effects and have Adobe Media Encoder pick that up from a watch folder to encode into your various delivery formats.

  • AME takes a long time to start encoding when using GPU Acceleration (OpenCL)

    Have had this issue for a while now (current update as well as at least the previous one).
    I switched to using GPU Hardware Acceleration over Software Only rendering. The speed for the actual encoding on my Mac is much faster than software-only rendering. However, more often than not, the encoding process seems to stick at 0%, waiting to start for a long time, anywhere from 1 minute to several minutes. Then it will eventually start.
    Can't seem to find what is causing this. It doesn't seem related to any particular project, codec, output format or anything else I can see. I'd like to keep using GPU Acceleration, but it's pointless if it takes 5 times as long to even start as rendering by software only would. It doesn't even pass any elapsed time while it's hanging or waiting; it stays at 0 until it starts. Like I said, once it starts, it's a much faster render than what the software would take. Any suggestions? Thanks.
    using an iMac, 3.4Ghz, 24GB RAM, AMD Radeon HD 6970M 2048 MB

    Actually, I just discovered it doesn't seem to have anything to do with OpenCL. I just put the rendering back to software only and it is doing the same thing: it hangs for a long time before it starts to run. Activity Monitor shows it running 85-95% CPU, 17 threads and about 615 MB of memory. Activity Monitor doesn't say it's not responding; it just seems kind of idle. It took almost 7 minutes for it to start rendering.
    1- Running version 7.2.2.29 of Media Encoder,
    version 7.2.2 (33) of Premiere.
    2- Yes, a premiere project of HD 1080 source shot with Canon 60D, output to H.264 YouTube 720, fps 23.976
    I'm not sure what you are referring to by native rendering. The rendering setting I tried now is Mercury Playback Engine Software Only, if that is what you are referring to. But OpenCL gives me the same issue now.
    3- H.264 YouTube 720, 23.976 - seems to happen on multiple output types
    4- In Premiere, I have my project timeline window selected. Command-M, selected the output location I wanted and what output codec, selected submit to Queue. Media Encoder comes up, the project loads and I select the run icon. It preps and shows the encoding process setup in the "Encoding" window, then just sits there with no "Elapsed" time occurring until almost 5-6 minutes later, then it kicks in.

  • How to use filters on ios mobile devices (iPhone/iPad) using GPU rendering (Solved)

    Many moons ago I asked a question here on the forums about how to use filters (specifically a glow filter) on a mobile device (specifically the iPhone) when using GPU rendering and high resolution.
    At the time, there was no answer... filters were unsupported. Period.
    Well, thanks to a buddy of mine, this problem has been solved and I can report that I have gotten a color matrix filter for desaturation AND a glow filter working on the iPhone and the iPad using GPU rendering and high resolution.
    The solution, in a nutshell, is as follows:
    1. Create your display object, i.e. a sprite.
    2. Apply your filter to the sprite like you normally would.
    3. Create a new BitmapData and then draw that display object into the bitmap data.
    4. Put the new BitmapData into a Bitmap and then put it on the stage or do what you want.
    When you draw the display object into the BitmapData, it will draw it WITH THE FILTER!
    So even if you put your display object onto the stage the filter will not be visible on it, but it will be in the new BitmapData!
    Here is a sample app I created and tested on the iPhone and iPad:
    var bm:Bitmap;
    // temp bitmap object
    var bmData:BitmapData;
    // temp bitmapData object
    var m:Matrix;
    // temp matrix object
    var gl:GlowFilter;
    // the glow filter we are going to use
    var sprGL:Sprite;
    // the source sprite we are going to apply the filter too
    var sprGL2:Sprite;
    // the sprite that will hold our final bitmapdata containing the original sprite with a filter.
    // create the filters we are going to use.
    gl = new GlowFilter(0xFF0000, 0.9, 10, 10, 5, 2, false, false);
    // create the source sprite that will use our glow filter.
    sprGL = new Sprite();
    // create a bitmap with any image from our library to place into our source sprite.
    bm = new Bitmap(new Msgbox_Background(), "auto", true);
    // add the bitmap to our source sprite.
    sprGL.addChild(bm);
    // add the glow filter to the source sprite.
    sprGL.filters = [gl];
    // create the sprite that will hold our final filtered bitmap.
    sprGL2 = new Sprite();
    // create the bitmap data to hold our new image... remember, with glow filters, you need to add the padding for the glow manually. Should be double the blur size.
    bmData = new BitmapData(sprGL.width+20, sprGL.height+20, true, 0);
    // create a matrix to translate our source image when we draw it. Should be the same as our filter blur size.
    m = new Matrix(1,0,0,1, 10, 10);
    // draw the source sprite containing the filter into our bitmap data
    bmData.draw(sprGL, m);
    // put the new bitmap data into a bitmap so we can see it on screen.
    bm = new Bitmap(bmData, "auto", true);
    // put the new bitmap into a sprite - this is just because the rest of my test app needed it, you can probably just put the bitmap right on the screen directly.
    sprGL2.addChild(bm);
    // put the source sprite with the filter on the stage. It should draw, but you will not see the filter.
    sprGL.x = 100;
    sprGL.y = 50;
    this.addChild(sprGL);
    // put the filtered sprite on the stage. it should appear like the source sprite, but a little bigger (because of the glow padding)
    // and unlike the source sprite, the glow filter should actually be visible now!
    sprGL2.x = 300;
    sprGL2.y = 50;
    this.addChild(sprGL2);

    Great stuff dave
    I currently have a slider which changes the hue of an image in a movieclip, and I need it to move through the full range -180 to 180.
    I desperately need to get this working on a tablet but can't get the filters to work in GPU mode. My application works too slowly in CPU mode.
    var Mcolor:AdjustColor = new AdjustColor();   //This object will hold the color properties
    var Mfilter:ColorMatrixFilter;                           //Will store the modified color filter to change the image
    var markerSli:SliderUI = new SliderUI(stage, "x", markerSli.track_mc, markerSli.slider_mc, -180, 180, 0, 1);   //using slider from http://evolve.reintroducing.com
    Mcolor.brightness = 0;  Mcolor.contrast = 0; Mcolor.hue = 0; Mcolor.saturation = 0;            // Set initial value for filter
    markerSli.addEventListener(SliderUIEvent.ON_UPDATE, markerSlider);                          // listen for slider changes
    function markerSlider($evt:SliderUIEvent):void {
        Mcolor.hue = $evt.currentValue;
        updateM();
    }
    function updateM():void {
        Mfilter = new ColorMatrixFilter(Mcolor.CalculateFinalFlatArray());
        all.marker.filters = [Mfilter];
    }
    How would I use your solution in my case?
    Many thanks.
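    Following dave's recipe above, a rough sketch of how updateM could change (untested; markerBitmap is a new name introduced here, and the old BitmapData is disposed each update to avoid leaking memory on a tablet):
    // a Bitmap on the stage that stands in for the filtered marker;
    // position it where all.marker sits and hide the original clip.
    var markerBitmap:Bitmap = new Bitmap();
    addChild(markerBitmap);
    function updateM():void {
        Mfilter = new ColorMatrixFilter(Mcolor.CalculateFinalFlatArray());
        all.marker.filters = [Mfilter];
        // draw the filtered clip into fresh bitmap data; a ColorMatrixFilter adds no
        // padding, so no extra border or offset matrix is needed (unlike the glow example)
        var bd:BitmapData = new BitmapData(Math.ceil(all.marker.width), Math.ceil(all.marker.height), true, 0x0);
        bd.draw(all.marker);
        if (markerBitmap.bitmapData) markerBitmap.bitmapData.dispose();
        markerBitmap.bitmapData = bd;
    }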

  • How did I get better compression when using GPU acceleration?

    Hello,
    I was just making a short, 1:12-long video encoded as H.264 in an MP4 wrapper. I had edited the video in Premiere Pro CC, and was exporting through Adobe Media Encoder at a framerate of 29.97 FPS and a resolution of 1920x1080. I wanted to do a test to see how much GPU acceleration benefits me. So, I ran two different tests, one with and one without. The one with GPU acceleration finished in 2:09. The one without it took 2:40, definitely enough to justify using GPU acceleration (my GPU is an NVIDIA GTX 660 Ti which I have overclocked). I am sure my benefits would be even greater in most videos I make, since this one had relatively few effects.
    However, I checked the file sizes out of curiosity, and found that the one made with GPU acceleration was 175 MB in size, and the other was 176 MB. While the difference is pretty small, I am wondering why it exists at all. I checked the bitrates, and the one made with GPU acceleration is 19,824 KB/s, whereas the other is 19,966 KB/s. I used the exact same presets for both. I even re-encoded the videos again just to test it and verify the results, and the second test confirms what the first showed.
    So, what I am wondering is: What is it that made this possible, and could this be used to my advantage in future productions?
    In case it is in any way relevant, these are my computer's specifications:
    -Hardware-
    Case- MSI Interceptor Series Barricade
    Mainboard- MSI Z77A-G45
    Power Supply Unit- Corsair HX-750
    Processor- Third-Generation Quad-Core Intel Core i7-3770K
    Graphics Card- ASUS NVidia GeForce GTX 660 Ti Overclocked Edition (with two-gigabytes of dedicated GDDR5 video memory and the core overclocked to 1115 MHz)
    Random Access Memory- Eight gigabytes of dual-channel 1600-MHz DDR3 Corsair Dominator Platinum Memory
    Audio Card- ASUS Xonar DSX
    System Data Drive- 128-gigabyte Samsung 840 Pro Solid-State Drive
    Storage Data Drive- One-terabyte, 7200 RPM Western Digital Caviar Black Hard Disk Drive
    Cooling System- Two front-mounted 120mm case fans (intake), one top-mounted 120mm case fan (exhaust), and a Corsair Hydro H60 High-Performance Liquid CPU Cooler (Corsair SP120 rear exhaust fan)
    -Peripherals-
    Mouse- Cyborg R.A.T. 7
    Keyboard- Logitech G110
    Headset- Razer Tiamat Elite 7.1
    Joystick- Saitek Cyborg Evo
    -Software-
    Operating System- Windows 8.1 64-bit
    DirectX Variant- DirectX 11.1
    Graphics Software- ASUS GPU Tweak
    CPU Temperature/Usage Monitoring Software- Core Temp
    Maintenance Software- Piriform CCleaner and IObit Smart Defragmenter
    Compression Software- JZip
    Encryption Software- AxCrypt
    Game Clients- Steam, Origin, Games for Windows LIVE, and Gamestop
    Recording Software- Fraps and Hypercam 2
    Video Editing Software- Adobe Premiere Pro CC
    Video Compositing/Visual Effects Software- Adobe After Effects CC
    Video Encoding Software- Adobe Media Encoder
    Audio Editing/Mixing Software- Adobe Audition CC
    Image Editing/Compositing Software- Adobe Photoshop CC
    Graphic Production/Drawing Software- Adobe Illustrator CC

    As I said in my original post, I did test it again. I have also run other tests, with other sequences, and they all confirm that the final file is smaller when GPU acceleration is used.
    Yes, the difference is small, but I'm wondering how there can even be a variable at all. If you put a video in and have it spit it out, then do that again, the videos should be the same size. It's a computer program, so the outputs are all based on variables. If the variables don't change, the output shouldn't either. For this reason, I cannot believe you when you say "I can, for example, run exactly the same export on the same sequence more than once, and rarely are they identical in export times or file size."

  • HP Pavilion dv7-4083cl (w/ Intel Core i5 and ATI GPU): What is its normal temperature?

    Hello,
    I have a question about HP Pavilion dv7-4083cl (w/ Intel Core i5 and ATI GPU):
    What is its normal temperature?
    Environment: 24 °C (Celsius)
    HWMonitor and Real Temp agree with the numbers found below.
    The two CPU cores start at 37 °C and in about an hour are at 53 °C, with CPU usage under 10%.
    The GPU starts at 34 °C and easily goes to 57 °C, with some peaks at 72 °C.
    Regarding "Hewlett-Packard 144B": (under low/idle CPU usage)
    TZ01: starts at 40 °C and very fast goes to 55 °C or 65 °C (and more, under CPU load)
    PCH starts at 59 °C and very fast goes to 76 °C (and stays there)
    GMCH: starts at 40 °C and very fast goes to 55 °C or 65 °C
    ATI Mobility Radeon HD 5470 (TMPIN0): Starts at 34 °C and easily goes and stays at 57-58 °C
    Note: I'm also using a cooler (two fans, under its base)
    If I run Real Temp (v. 3.6) "Sensor Test", the cores go up to 100 °C, and even a bit more.
    What is the normal / expected behavior / operational range?
    Is a steady PCH at 76 °C normal?
    Are TZ01 and GMCH at 55-65 °C normal?
    Are the i5 cores at 55 °C normal (easily going to 70 °C or more)?
    Is the GPU (ATI) at 57 °C normal?
    Is it normal to climb up to 100-105 °C during a 5-minute high-CPU-usage operation?
    Thanks!

    I rolled back to BIOS F.25, the older version that supports my device, but I'm still suffering from the same D90 system overheating error.
    I can't contact HP support or send it to an official HP center at the moment because I am from Egypt and I live in a remote town.

  • PS CS5 32-bit won't use GPU acceleration

    I have CS5 32- and 64-bit installed on both my desktop and my notebook. My desktop runs Win 7 x64 Biz and has an NVIDIA GTX275 display card. The notebook is a Sony running Win 7 x64 Home and uses the NVIDIA GT300M. Though the notebook's model number is higher, the GTX is supposed to be more advanced.
    This is my problem: The notebook uses the GPU Open GL feature without a problem in either version. My desktop, however, only uses the Open GL for the 64-bit. In 32-bit, the option is grayed out. Can anyone point me to why the desktop 32-bit Photoshop can't/won't use the GPU hardware acceleration? I have the most recent drivers installed, so I know that's not the issue. Thanks.
    Nemo

    Yes, the driver is the issue.
    Photoshop only enables the GPU support if the driver says it is capable, and if the driver doesn't return errors.
    Somehow your 32-bit driver either thinks it is not capable, or is returning errors when it shouldn't be.

  • Publish to iOS using GPU rendering?

    Can the new publish to iOS feature use GPU rendering and if so, how does one set that?
    AIR for iOS allows CPU, GPU, Auto, and Direct options.
    Thanks.

    Is this really the only solution? I've been trying to go back to Java 1.6 following various threads found on the internet, but have been unable to get it to work - does Adobe have any plans to fix this on their end? Here's the error message I'm getting; I have tried switching AIR versions but it didn't help. We've actually had to remove an app from the App Store because we were unable to export to fix a bug, so any help would be appreciated.
    Exception in thread "main" java.lang.Error: Unable to find llvm JNI lib in:
    /Applications/Adobe Flash CS6/AIR3.2/lib/adt.jar/Darwin
    /Applications/Adobe Flash CS6/AIR3.2/lib/aot/lib/x64
    /Applications/Adobe Flash CS6/AIR3.2/lib/adt.jar
    /Applications/Adobe Flash CS6/AIR3.2/lib
         at adobe.abc.LLVMEmitter.loadJNI(LLVMEmitter.java:577)
         at adobe.abc.LLVMEmitter.(LLVMEmitter.java:591)
         at com.adobe.air.ipa.AOTCompiler.generateExtensionsGlue(AOTCompiler.java:407)
         at com.adobe.air.ipa.AOTCompiler.generateMachineBinaries(AOTCompiler.java:1585)
         at com.adobe.air.ipa.IPAOutputStream.createIosBinary(IPAOutputStream.java:300)
         at com.adobe.air.ipa.IPAOutputStream.finalizeSig(IPAOutputStream.java:620)
         at com.adobe.air.ApplicationPackager.createPackage(ApplicationPackager.java:91)
         at com.adobe.air.ipa.IPAPackager.createPackage(IPAPackager.java:224)
         at com.adobe.air.ADT.parseArgsAndGo(ADT.java:557)
         at com.adobe.air.ADT.run(ADT.java:414)
         at com.adobe.air.ADT.main(ADT.java:464)

  • How to use GPU acceleration in final rendering process in AE CS6 updated?

    Hello guys!
    Maybe my question is a frequent one, but in fact I didn't find an answer going through the FAQs here.
    So I have a new NVIDIA GeForce card with CUDA support. I enabled Ray-Traced 3D in the Composition Settings' Advanced tab and also in the fast preview settings. Everything is OK and I can see no errors, but when I try to render one of VideoHive's projects, I see that the CPU load is 95-100% but the GPU's is only 0-2%. What is the secret? How do I use the GPU for project rendering?

    The GPU is used very little by After Effects.
    See this page for details of what the GPU is used for in After Effects:
    http://blogs.adobe.com/aftereffects/2012/05/gpu-cuda-opengl-features-in-after-effects-cs6.html

  • Can SpeedGrade use GPU Acceleration in New Mac Pro?

    Good afternoon, I would like to know if Adobe SpeedGrade is able to take advantage of the GPU acceleration of the AMD FirePro D500 installed in the brand-new Mac Pro. Thanks in advance for your reply.
    Message was edited by: Kevin Monahan
    Reason: More searchable title

    Supported GPU technologies (Mac):
    - AMD/nVidia OpenCL for direct link in MacOS 10.8.5 and higher
    - OpenGL for SG native
    Supported GPU technologies (Win):
    - AMD OpenCL, nVidia CUDA for direct link in Win7, Win8, Win8.1
    - OpenGL for SG native
    Note: Although SG does not utilise two GPUs, it should run fine on your new Mac Pro and use OpenCL for acceleration.

  • Stretched bitmaps blurred when rendered using GPU on iOS, how to avoid?

    Hi,
    in a project of mine (targeting iPhone 4) I'm using pixel art. To show it properly I stretch the bitmap using scaleX/scaleY. However, when rendering using GPU the bitmaps get blurred, ruining the clear pixel art effect that I am after. Rendering using CPU does not blur the bitmaps, but is slower.
    Any ideas on how to tell the renderer not to blur bitmaps?
    There was a similar question raised in another thread ( http://forums.adobe.com/thread/866712 ) but it derailed into a benchmarking discussion.
    Thank you in advance.

    I did not.
    Instead I rewrote my code to scale up all my spritesheets in software upon load. Something like this:
    // scale each frame up once, in software, as the sheet loads
    var bd : BitmapData = new BitmapData( _spriteWidth * magicScale, _spriteHeight * magicScale, true, 0x0 );
    var matrix:Matrix = new Matrix( magicScale, 0, 0, magicScale );
    // draw() defaults to smoothing = false, so the scale-up is nearest-neighbour and the pixels stay crisp
    bd.draw( originalBitmapData, matrix );
    frames.push( bd );
    The variable magicScale is stored behind the scenes in the engine so the rest of the code just acts as if the scene was not scaled.
    Note that this solution does not use any blitting techniques, it places bitmaps on the stage and then swaps the bitmapData in them. No off screen buffers, just relying on the GPU.
    Works pretty well!

  • Does it use GPU to render?

    I have a 1.1 MacPro with a 2.6 quad and an ATI X1800. Will upgrading my GPU to an ATI 4870 or 5770 help with render speeds?
    Does Compressor or FCP use GPU during transcodes or log and transfers for either encode or decode?

    No.
    See Shane's answer to your other post.
    bogiesan

  • I have installed Firefox 4 to find it is not compatible with my version of OS X. How do I get version 3.6 back, or do I just give up and use Safari?

    I downloaded version 4 and installed it. At no point did it say that it was not compatible with the version of OS X that I am running (10.4.11). Now that it is installed it will not open, but it has already upgraded from 3.6, so I have lost the lot. How can I download a version of 3.6 so that I can use it again, or should I just pack up and use Safari, as all I can get from your site is version 4? HELP!!!

    Go to this link and find your language:
    http://www.mozilla.com/en-US/firefox/all-older.html

  • iPad2 and a new MacBook running Lion both use the same Apple ID, which is a Hotmail email ID. Can I create a new iCloud email ID and use iCloud email, and continue to use my Hotmail ID as my Apple ID for iTunes and iCloud?

    I have an iPad2 and a new MacBook running Mountain Lion. Both devices use the same Apple ID, which is a Hotmail email ID. Can I create a new iCloud email ID and use iCloud email, and continue to use my current Hotmail ID as my Apple ID for iTunes and iCloud?
    Note: I will use both Hotmail and iCloud email.

    Welcome to the Apple Community.
    In order to change your Apple ID or password for your iCloud account on your iOS device, you need to delete the account from your iOS device first, then add it back using your updated details. (Settings > iCloud, scroll down and hit "Delete Account")
