GX70 is not using its dedicated GPU

Hello everyone,
A week ago I bought an MSI GX70. It all works perfectly except for one thing: when I start a game, the laptop is not using its dedicated GPU (AMD Radeon HD 8970M) but the GPU integrated in the processor (AMD Radeon HD 8650G).
I've tried a lot of things to fix this, like disabling the onboard GPU. When I disable the onboard GPU, the PC starts using the Microsoft Basic Render Driver. I also tried to uninstall both cards and reinstall them; that didn't work. I tried going into the BIOS, but there is no option there for me to disable the onboard GPU.
So please, I need some help, because this is getting pretty frustrating.
Thanks in advance!
grts

Quote from: Dblkk;108339
OP, please let us know whether those worked or not, so that we can either continue to help or, if one of them worked, you can reply with what you did to fix the matter. That way, any other users experiencing the same issue who read this will know what to do.
I will, but for the moment I can't :p
My laptop suddenly black-screened after I updated the drivers, so I gave it back to the shop to see if they can fix it. I hope they can find drivers that let me just manually disable the integrated GPU.
Also, when I was playing games the power button was orange, if I remember correctly, but the games were still very laggy: about 40 fps in Smite, which is too low for this card, and in Black Ops II the fps was really unstable (immense drops from 120 to 30). I think this also shouldn't be happening with this card, right? In games where you CAN manually pick the GPU, the dedicated one was simply not offered as an option. And in benchmarks, the dedicated card was detected by the benchmark but for some reason not used (20 fps in a benchmark at medium settings with this card).
All of this leads me to conclude that the dedicated GPU was not doing the processing in-game.
This is all the information I've gathered. If you have any other thoughts on what it could be, feel free to share them.
Regards
pj

Similar Messages

  • What settings to make cpu not use its...

    Hi, when I overclock my 4690K it will obviously run at its full speed even when idle or not gaming. MSI boards have a setting to make the CPU not use its full power when idle. Which setting is it? Anyone?

    CPU Ratio Mode: Dynamic
    EIST: Enabled
    And, as Svet said, the Balanced power plan

  • Photoshop CC 2014.2.1 not detecting my dedicated GPU

    When I open Photoshop, I get the following error message:
    "3D features require a minimum of 512MB of vRAM. Photoshop has detected less than that on your system.
    Updating the driver of your graphics card may resolve the issue."
    I am running a laptop PC with an integrated graphics card (Intel HD4600) and a dedicated Nvidia GTX 675M with 2 GB of vRAM.
    I switch between the two adapters with the press of a single button, for power-saving purposes.
    I always run at high-performance with the dedicated GPU active when running Photoshop.
    I run the latest driver from Nvidia, and in the Nvidia control panel I manually set the preferences for Photoshop to utilize the dedicated GPU. But it still does not detect my main graphics adapter.
    I checked the system information in Photoshop and cannot make sense of it. Why would it be unable to detect the GPU when every GPU-demanding application I have run so far (a lot of them) has been able to?
    I have searched Google and these very forums, and found the problem in various forms with no real solution so far. Hopefully someone on these forums can help, considering I am paying a monthly fee for the program and the support that comes with it.
    First part of System Information in case it can be useful:
    Adobe Photoshop Version: 2014.2.1 20141014.r.257 2014/10/14:23:59:59 CL 987299  x64
    Operating System: Windows 7 64-bit
    Version: 6.1 Service Pack 1
    System architecture: Intel CPU Family:6, Model:12, Stepping:3 with MMX, SSE Integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2, AVX, AVX2, HyperThreading
    Physical processor count: 4
    Logical processor count: 8
    Processor speed: 2394 MHz
    Built-in memory: 8112 MB
    Free memory: 4011 MB
    Memory available to Photoshop: 7080 MB
    Memory used by Photoshop: 70 %
    3D Multitone Printing: Disabled.
    Windows 2x UI: Disabled.
    Highbeam: Enabled.
    Image tile size: 1024K
    Image cache levels: 4
    Font Preview: Medium
    TextComposer: Latin
    Display: 1
    Display Bounds: top=0, left=0, bottom=1080, right=1920
    OpenGL Drawing: Disabled.
    OpenGL Allow Old GPUs: Not Detected.
    No GPU information available

    Thanks for the quick response!
    But as mentioned in my initial post, I deactivate the integrated graphics adapter when using Photoshop, or any software that demands decent graphical processing power.
    I only have it activated when in power-saving mode. And of course I update to the newest display drivers via the GeForce Experience or through their website.
    The point is, I have never had any problem with any software whatsoever detecting the proper GPU and utilizing its power. The only software unable to register my dedicated video card is Photoshop.

  • Aperture and iVideo use ME294 dedicated GPU?

    I'm producing multimedia shows for my customers, and time is always a critical factor, since I have to deliver by the end of the trip, which is only an hour after I collect the last material.
    I'm currently using a 2009 13" MacBook Pro, which is already too slow to play Full HD video at full resolution. Even worse, the hours it takes to export the multimedia show to MOV format and then convert it to MP4 are somewhat frustrating. So I'm looking to upgrade, but money is tight and the new machine must cope with technology improvements for the next 3-4 years.
    Is the ME294 worth the **** lot of money, or is it a waste because the dedicated GPU isn't even used for the purpose?

    You are absolutely right. However, I forgot to mention that I already did that step 3 years ago, when I started working seriously with this photo/video stuff.
    I need a new machine; the question is which one is the best value for the money. Some reviews say the 13" is a better value because it is only half the price. But it doesn't even have the Iris Pro graphics, and it has less memory and a smaller SSD. By the time I factor these upgrades in, I pay basically the same price as going to the 15". From that base model to the flagship, the price difference is exactly what you pay for the faster CPU, the 16 GB main memory and the 512 GB SSD. Then you get the dedicated GPU 'for free', and the main question is: how big is the performance difference for my kind of work, and is it worth the extra money?

  • How do I turn off the tiny Shuffle-sized iPod Nano (v6 with a touch screen) when not using its iOS?

    Hello and happy new year.
    I decided to use this used iPod Nano (touch screen) as an 8 GB flash drive (FAT32). I can access the files just fine, and I can always restore it back to iOS quickly and easily if needed.
    However, it never seems to turn off or sleep. I tried holding down the big button on the top right (next to the two volume controls [+ and -]), but it never does anything. Also, it says "OK to disconnect." when disconnected.
    So I restored its iOS v1.2 onto it from an old 15" MacBook Pro (2008; Mac OS X 10.5.8), and now I can turn it off easily. However, I cannot access its drive, since it no longer uses a Microsoft file system and Windows XP Pro SP3 asks me to format it. How can I turn it off and still use it as an 8 GB USB flash drive without installing and using iTunes?
    Thank you in advance.

    Hey,
    Holding down the play/pause and the menu (select) buttons will force the iPod into disk mode. This is the standby mode you are talking about.
    To turn your iPod off properly, hold down the play/pause button until the screen goes blank, and then turn the hold switch on. This will cause the iPod to go to the Apple screen when it turns back on.
    Hope this helps,
    Ms D.

  • Why block an account simply for not using its cred...

    I use Skype every couple of weeks - but I haven't needed to make a paid call for a long time. Now when I try to, I find my account has been 'blocked' - though only for paid calls. So I can't use my credit balance without jumping through Skype's 'account unblocking' hoops and waiting for you to deign to give me access to my own money.
    What possible justification is there for blocking an account like this: not for being abused, but simply for not being used?

    I think you may need to contact customer service to clarify why your account is blocked, and also to find out what your possible options are. Just open the link pasted below to see the instructions on how to get in touch with customer service:
    https://support.skype.com/en/faq/FA1170/how-can-i-contact-skype-customer-service

  • Oracle not using its own explain plan

    When I run a simple select query on an indexed column of a large table (30 million records), Oracle creates a plan using the indexed column at a cost of 4. However, what it actually does is a full table scan (I can see this in the 'Long Operations' tab in OEM).
    The funny thing is that I have the same query in an ADO application, and when the query is run from there, the same plan is created but no table scan is done, and the query returns in less than a second. With the table scan it takes over a minute.
    When run through SQL*Plus, Oracle creates a plan including the table scan at a cost of 19030.
    In another (.NET) application I used "Alter session set optimizer_index_caching=100" and "Alter session set optimizer_index_cost_adj=10" to try to force the optimizer to use the index. It creates the expected plan, but still does the table scan.
    The query is in the form of:
    "Select * from tab where indexedcol = something"
    I'm using Oracle 9i (9.2.0.1.0).
    Any ideas? I'm completely at a loss.

    Hello
    It sounds to me like this has something to do with bind variable peeking, which was introduced in 9i. If the predicate is
    indexedcolumn = :bind_variable
    then the first time the query is parsed, Oracle will "peek" at the value in the bind variable, see what it is, and generate an execution plan based on it. That same plan will then be used for matching SQL.
    If you use a literal, it will generate the plan based on that, and will generate a separate plan for each literal you use (depending on the value of the cursor_sharing initialisation parameter).
    This can cause a difference between the execution plan seen when issuing EXPLAIN PLAN FOR and the actual execution plan used when the query is run.
    Have a look at the following example:
    tylerd@DEV2> CREATE TABLE dt_test_bvpeek(id number, col1 number)
      2  /
    Table created.
    Elapsed: 00:00:00.14
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_bvpeek
      4  SELECT
      5      rownum,
      6      CASE
      7          WHEN MOD(rownum, 5) IN (0,1,2,3) THEN
      8              1
      9          ELSE
    10              MOD(rownum, 5)
    11          END
    12  FROM
    13      dual
    14  CONNECT BY
    15      LEVEL <= 100000
    16  /
    100000 rows created.
    Elapsed: 00:00:00.81
    tylerd@DEV2> select count(*), col1 from dt_test_bvpeek group by col1
      2  /
      COUNT(*)       COL1
         80000          1
         20000          4
    2 rows selected.
    Elapsed: 00:00:00.09
    tylerd@DEV2> CREATE INDEX dt_test_bvpeek_i1 ON dt_test_bvpeek(col1)
      2  /
    Index created.
    Elapsed: 00:00:00.40
    tylerd@DEV2> EXEC dbms_stats.gather_table_stats( ownname=>USER,-
    tabname=>'DT_TEST_BVPEEK',-
    method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254',-
    cascade=>TRUE -
    );
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.73
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = 1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 78728 |   538K|    82  (52)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 78728 |   538K|    82  (52)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=1)
    13 rows selected.
    Elapsed: 00:00:00.06
    The execution plan for col1=1 was chosen because Oracle was able to see, based on the statistics, that col1=1 would result in most of the rows in the table being returned.
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = 4
      8  /
    Explained.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3223879139
    | Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                   | 21027 |   143K|    74  (21)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| DT_TEST_BVPEEK    | 21027 |   143K|    74  (21)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | DT_TEST_BVPEEK_I1 | 21077 |       |    29  (28)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COL1"=4)
    14 rows selected.
    Elapsed: 00:00:00.04
    This time, the optimiser was able to see that col1=4 would return far fewer rows, so it chose to use the index. Look what happens, however, when we use a bind variable with EXPLAIN PLAN FOR - especially the number of rows the optimiser estimates will be returned from the table:
    tylerd@DEV2> var an_col1 NUMBER
    tylerd@DEV2> exec :an_col1:=1;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2>
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = :an_col1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 49882 |   340K|   100  (60)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 |   340K|   100  (60)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=TO_NUMBER(:AN_COL1))
    13 rows selected.
    Elapsed: 00:00:00.04
    tylerd@DEV2>
    tylerd@DEV2> exec :an_col1:=4;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.01
    tylerd@DEV2>
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = :an_col1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 49882 |   340K|   100  (60)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 |   340K|   100  (60)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=TO_NUMBER(:AN_COL1))
    13 rows selected.
    Elapsed: 00:00:00.07
    For both values of the bind variable, the optimiser has no idea what the value will be, so it has to base its estimate on a formula, which leads it to estimate that the query will return roughly half of the rows in the table; it therefore chooses a full scan.
    Now when we actually run the query, the optimiser can take advantage of bind variable peeking and have a look at the value the first time round and base the execution plan on that:
    tylerd@DEV2> exec :an_col1:=1;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek
      5  WHERE
      6      col1 = :an_col1
      7  /
    80000 rows selected.
    Elapsed: 00:00:10.98
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    9t52uyyq67211
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '9t52uyyq67211'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   FULL                           DT_TEST_BVPEEK
    2 rows selected.
    Elapsed: 00:00:00.03
    It saw that the bind variable value was 1 and that this would return most of the rows in the table, so it chose a full scan.
    tylerd@DEV2> exec :an_col1:=4
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek
      5  WHERE
      6      col1 = :an_col1
      7  /
    20000 rows selected.
    Elapsed: 00:00:03.50
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    9t52uyyq67211
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '9t52uyyq67211'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   FULL                           DT_TEST_BVPEEK
    2 rows selected.
    Elapsed: 00:00:00.01
    Even though the value of the bind variable changed, the optimiser saw that it already had a cached version of the SQL statement along with an execution plan, so it used that rather than regenerating the plan. We can check the reverse of this by causing the statement to be invalidated and re-parsed - there are lots of ways, but I'm just going to rename the table:
    Elapsed: 00:00:00.03
    tylerd@DEV2> alter table dt_test_bvpeek rename to dt_test_bvpeek1
      2  /
    Table altered.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek1
      5  WHERE
      6      col1 = :an_col1
      7  /
    20000 rows selected.
    Elapsed: 00:00:04.81
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    6ztnn4fyt6y5h
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '6ztnn4fyt6y5h'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   BY INDEX ROWID                 DT_TEST_BVPEEK1
    INDEX                          RANGE SCAN                     DT_TEST_BVPEEK_I1
    3 rows selected.
    tylerd@DEV2> exec :an_col1:=1
    PL/SQL procedure successfully completed.
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek1
      5  WHERE
      6      col1 = :an_col1
      7  /
    80000 rows selected.
    Elapsed: 00:00:10.61
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    6ztnn4fyt6y5h
    1 row selected.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '6ztnn4fyt6y5h'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   BY INDEX ROWID                 DT_TEST_BVPEEK1
    INDEX                          RANGE SCAN                     DT_TEST_BVPEEK_I1
    3 rows selected.
    This time round, the optimiser peeked at the bind variable the first time the statement was executed, found it to be 4, based the execution plan on that, and chose an index range scan. When the statement was executed again, it reused the plan it had already generated.
    HTH
    David
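    Following on from David's example, here is a minimal sketch of the usual 9i-era workarounds when peeking locks in the wrong plan. The table and index names are the ones from the example above; treat this as illustrative rather than a verified fix for the original poster's system.
    -- Force the index with a hint when you know your values are selective:
    SELECT /*+ INDEX(t dt_test_bvpeek_i1) */ *
    FROM   dt_test_bvpeek1 t
    WHERE  col1 = :an_col1;
    -- Or use a literal so the optimiser costs the real value
    -- (this creates one cursor per literal, subject to CURSOR_SHARING):
    SELECT * FROM dt_test_bvpeek1 WHERE col1 = 4;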

  • How to make a Mac use dedicated GPU in Windows

    So I have this problem: when I launch games through Boot Camp on my late 2013 15" rMBP, I occasionally run into lag and frame drops in various games. The thing is, sometimes the same games run perfectly smoothly with a decent frame rate, and sometimes the lag is so bad that it's almost impossible to play.
    This makes me think that my MacBook is using the integrated Iris Pro GPU instead of the dedicated 750M. So my question is: how can I force Windows to use the dedicated graphics in games? In the NVIDIA control panel, there's no option like dynamic switching.
    P.S. I just reinstalled the Boot Camp drivers using the WININSTALL drive, and checked the NVIDIA website for updates.

    Boot Camp only ever uses the dedicated GPU (because of problems related to GPU switching), so that is not the problem. In any case, you can check which GPU Windows is using here: http://windows.microsoft.com/en-us/windows/video-cards-faq#1TC=windows-7
    Have you checked whether any other apps are open while you are playing the games that don't run as smoothly as you'd like?

  • G770 Dedicated GPU not being recognized

    Hi,
    So I've had my G770 for a little over 2 years. A few months ago, my AMD Radeon graphics stopped working. Because of this, I had the system completely wiped and am now trying to install all the drivers again. Everything is working perfectly, but the AMD Radeon graphics card just will not install. I have downloaded the latest drivers, and when I go to install them, the installation seems to run. However, the process goes by too quickly (funny, I know) and then it says there were some warnings in the log (there weren't).
    I just want to be able to use my dedicated GPU again, so any help would be appreciated.
    Thanks.

    As you can see in the picture, the application wow.exe has been set to high performance, so it should be using the dedicated card; my laptop apparently thought otherwise. http://i.imgur.com/3SGJeO7.jpg

  • Dedicated GPU problem

    Hey guys, for the past few days I have been having problems with my dedicated GPU, a Radeon HD 7670M. About 2 weeks ago it was working properly, but now it has started giving problems: when I am playing games, random black lines appear on screen and the graphics degrade. When I run a benchmark, the benchmark software does not detect the dedicated GPU, and games that were supposed to run on high settings are now lagging. I am using an HP Pavilion g6, product number
    C9L68PA#ACJ
    The solutions I tried:
    uninstalling and reinstalling new drivers, and recovering my notebook to previous dates.
    Also, the graphics card information tools are giving weird details, showing the memory type as HyperMemory when it is actually GDDR5, and the clock speed as 0 MHz. So please, I want some possible solutions.

    Hello @eshanlaptopg6,
    Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums I would like to direct your attention to the HP Forums Guide First Time Here? Learn How to Post and More.
    I understand that your notebook is experiencing a problem with the GPU drivers, and that a re-installation has not resolved the issue. I would be happy to assist you in this matter!
    In order to restore the software configuration for your system's graphics, I recommend following this document on Using Recovery Manager to Restore Software and Drivers (Windows 8). Once the drivers have been re-installed on your system, please update them by Using HP Support Assistant (Windows 8). 
    If your system is still lagging when you are playing your games, you can also take a look at this resource on Resolving slow system performance (Windows 8) to increase the speed and efficiency of your notebook's system.
    Please re-post with the results of your troubleshooting, and I look forward to your reply!
    Have a great weekend!
    MechPilot
    I work on behalf of HP

  • MSI GT60 dedicated GPU question

    Hello everyone! This is my first post
    Basically, I have an MSI GT60 2OC-003US, and I have two questions regarding it.
    First, I just bought a 27" monitor to plug into my notebook via HDMI. I have heard that the dedicated GPU (mine is the GeForce GTX 770M) is not meant to be used at all times; however, when I plug the HDMI into the monitor, the power light goes from white to orange, indicating that the dedicated card is being used. Is there any way to make it use my integrated card instead of the dedicated one? I am going to have my monitor connected at all times, so I don't want it running off of my GTX.
    Also, if I am gaming or have a monitor plugged in, and I then stop gaming or unplug the monitor, the orange light should go back to white once the programs using the dedicated GPU are closed, yet it stays orange. I am wondering how I can fix that.
    Thank you lots!

    The GT60 2OC outputs its video signal through the dedicated GPU circuit. That's the hardware design; you can't change this architecture.
    There are also lots of processes using the dedicated GPU without your noticing. You can go to the NVIDIA Control Panel, find the Desktop menu and check "Display GPU Activity Icon in Notification Area" to enable the activity window, which lets you quickly monitor which processes are using the dedicated GPU.

  • Video out not using full widescreen on a 23.5" monitor

    I have an Asus 23.5" LCD which I use for TV and my Xbox. Both use the widescreen how it should. But when I plug my Macbook into it, I get 2" bars on the left and right. However, if I uncheck "mirror display," my desktop background fills the entire screen. I mean, the second desktop, or the extended desktop uses the full widescreen format.

    Hi and welcome to Discussions,
    in 'Mirror Mode' both the internal and the external display use the same resolution, which on the MacBook is 1280 x 800.
    Only in 'Extended Mode' can the internal and the external display use different resolutions.
    You might want to check whether the Asus LCD has an option to 'remove' the black borders when it is fed a resolution that is not its native (full) resolution.
    If that is possible, you may however get a less sharp image on the Asus, since it is not running at its native resolution.
    Regards
    Stefan

  • Lightroom 6 / CC2015 - Facial Recognition Terribly Slow, not using all of CPU or GPU, still keeps going when paused.

    This is actually a couple of issues, but I'm wondering if others are experiencing them and have worked through them.
    I am using a dual Xeon 3.2 GHz Mac Pro, with the catalog on an SSD, image files on a mirrored pair, a dedicated GPU (the max I can install in my version of the Mac Pro) and hardware acceleration enabled in Lightroom.
    I watched the Lightroom 6 facial recognition tutorial, which leaves out a lot of the bulk editing and basically says to let it loose on your whole catalog. NOT recommended.
    I started out with a couple of small portrait galleries that identified a couple hundred people in total, to seed facial recognition so that it didn't suggest everyone is the first person I confirmed (which it will do otherwise). I have also optimized my catalog multiple times.
    I have encountered the following serious performance issues and bugs with facial recognition:
    Lightroom facial recognition slows to a ridiculous crawl after about 2000 images to be confirmed (i.e. 2000-2300 in a couple of hours, then 800-1200 in the next 12 hours).
    Lightroom becomes largely unresponsive once there are a fair number of images to be confirmed, even after pausing address and facial recognition. Even selecting 4 rows of images can take 5 minutes, with several long pauses.
    Once I select images and click confirm, it takes up to 2 minutes to update the "to be confirmed" list again.
    When I click on an individual at the top of the page with facial recognition and address lookup paused, it still continues to "Look for similar faces" [BUG!!!!!!!!], even though all I want to do is confirm some individuals more quickly in bulk with the images already identified, not continue looking for more, as a workaround for the painfully slow responsiveness of the module.
    The odd part is that with all of these performance issues, Lightroom will not use more than 20-30% of my two Xeon CPUs, barely touches my GPU (<10% usage, 30% memory), and uses no more than 35% of my system memory. The computer's temperatures are also barely above startup levels, 15-25 degrees cooler than when I run other applications, which will consume my entire CPU and memory if I let them. I have explored Lightroom's settings but have seen nothing further I can configure to speed it all up. I have also attempted the operation on images on the SSD, my Drobo (known to be slow), an independent fast disk, and a pair of RAIDed disks, and I have the same issues.
    I will also note that all of my other applications seem to continue to operate just fine; the slowness seems to be contained to Lightroom itself.
    Lightroom version: 6.0 [1014445]
    Operating system: Mac OS 10 Version: 10.10 [3]
    Application architecture: x64
    Logical processor count: 8
    Processor speed: 3.2 GHz
    Built-in memory: 18,432.0 MB
    Real memory available to Lightroom: 18,432.0 MB
    Real memory used by Lightroom: 5,537.5 MB (30.0%)
    Virtual memory used by Lightroom: 32,240.6 MB
    Memory cache size: 4,342.0 MB
    Maximum thread count used by Camera Raw: 8
    Camera Raw SIMD optimization: SSE2
    Displays: 1) 2048x1152
    Graphics Processor Info:
    NVIDIA GeForce GTX 285 OpenGL Engine
    Check OpenGL support: Passed
    Vendor: NVIDIA Corporation
    Version: 3.3 NVIDIA-10.0.31 310.90.10.05b12
    Renderer: NVIDIA GeForce GTX 285 OpenGL Engine
    LanguageVersion: 3.30
    Application folder: /Applications/Adobe Lightroom
    Library Path: /Users/DryClean/Documents/Lightroom_Catalog/MyCat_LR6.lrcat
    Settings Folder: /Users/DryClean/Library/Application Support/Adobe/Lightroom
    Anyone have any suggestions?

    A big problem continues to be that once you wait for it to index all your faces you find it missed over half of them. There are many cases where it missed the subject of the photo but managed to find a tiny waiter off in the shadows. I don't know how this will ever get fixed; it seems it'll require an update that lets you rerun the indexing a few times, maybe with different levels of granularity. I doubt that's coming.
    It stands to reason that a system that hands you thousands of false positives for every face can't recognize if something is or isn't a face in over half the cases. Faces in profile or tilted down, especially with the eyes looking down, are bypassed completely. I have directories with a thousand people shots in them, many with multiple people, and instead of LR6 returning an index of 1.5k or so, it gives me 385. I'm not sure how valuable the search advantages will be in this case; I can see it not returning some favorite shots of people.
    Anyway, for anyone looking to get past the spinning wheels, work on one directory at a time. Then once a directory is done, keep it selected so you have the same thumbnails of identified faces in the confirmed area, and control-select the next directory. You won't have to reseed the confirmed faces area this way and things move much faster when you're not working with the entire library. It also helps to click each person in the confirmed faces and work in that view sometimes. It'll return faces of family members of that person and you can rename those. You can watch it focus its top results as you confirm but, annoyingly, it doesn't narrow the total suggestions but actually expands them the more faces you confirm and the fewer correct positives remain. (This seems opposite to how it should work. It should give you fewer faces the more you confirm as it gets a better idea what the person looks like and has fewer shots left unconfirmed of that person.)  Yet, even with the expanded results, faces will still escape it and pop up in other people's results as false positives.
    Bottom line: It's only a little better than manual tagging, and not as thorough because of the poor hit rate of the initial indexing. But it works better if you stick to isolated directories and occasionally individual people. At least that way you don't have to wait for it to re-sort tens of thousands of results with every click.

  • ATI Radeon HD 4870 not used by Motion for GPU acceleration

    Hi, I have a problem with my setup. Motion does not use my ATI card for GPU acceleration.
    So a simple project on my new Mac with this card is much, much slower than on my MacBook Pro with an NVIDIA GeForce 9400M, or on my old edit machine, a 3 GHz Mac Pro with a Radeon X1900.
    The atMonitor utility shows good use of the GPU on those machines, but on my new machine with the ATI HD 4870 card, the GPU is never used in Motion. So what is wrong with my HD 4870 card?
    /Helge

    Motion uses the GPU to do all its heavy lifting; I'm not sure how accurately the atMonitor software reflects this, nor how taxing your project is. The CPU is used to perform the math, but the GPU does all the image manipulation.
    Patrick

  • Media Encoder CC not using GPU acceleration for After Effects CC raytrace comp

    I created a simple scene in After Effects that's using the raytracer engine... I also have GPU enabled in the raytracer settings for After Effects.
    When I render the scene in After Effects using the built-in Render Queue, it only takes 10 minutes to render the scene.
    But when I export the scene to Adobe Media Encoder, it indicates it will take 13 hours to render the same scene.
    So clearly After Effects is using GPU acceleration, but for some reason Media Encoder is not.
    I should also point out that my GeForce GTX 660 Ti card isn't officially supported and I had to manually add it into the list of supported cards in:
    C:\Program Files\Adobe\Adobe After Effects CC\Support Files\raytracer_supported_cards.txt
    C:\Program Files\Adobe\Adobe Media Encoder CC\cuda_supported_cards.txt
    While it's not officially supported, it's weird that After Effects has no problem with it yet Adobe Media Encoder does...
    I also updated After Effects to 12.1 and AME to 7.1 as well as set AME settings to use CUDA but it didn't make a difference.
    Any ideas?

    That is normal behavior.
    The "headless" version of After Effects that is called to render frames for Adobe Media Encoder (or for Premiere Pro) using Dynamic Link does not use the GPU for acceleration of the ray-traced 3D renderer.
    If you are rendering heavy compositions that require GPU processing and/or the Render Multiple Frames Simultaneously multiprocessing, then the recommended workflow is to render and export a losslessly encoded master file from After Effects and have Adobe Media Encoder pick that up from a watch folder to encode into your various delivery formats.
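    For anyone editing the card-list files mentioned in the question: based on common reports rather than a specific install, each of those .txt files is plain text with one card name per line, and the name you add must exactly match the GPU name the driver reports (the entries below are illustrative examples, not guaranteed contents):
    GeForce GTX 580
    GeForce GTX 660 Ti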
