N660 TF 2GD5 - PerfCap/GPU load/Power consumption problem

Hello,
first of all, sorry for my English; I will try to do my best.
I have a problem with my GPU when I play Battlefield 4. The problem occurs with the newest Nvidia drivers, and also with beta and older drivers.
I get huge FPS drops periodically, maybe every 5 minutes for 15-20 seconds. In BF4 that means FPS drops from 50 to 15, which is really annoying.
Scene complexity doesn't matter; it's the same in huge battles as when I am alone looking up at the sky.
I was searching for why this happens, and GPU-Z helped me identify PerfCap; at the same moment, GPU load and power consumption drop from 100% to 40-50%.
I have a screenshot, but I can't find out how to attach it (external links are not allowed).
Could somebody explain to me what the problem is and why this is happening? I am thinking about selling my MSI graphics card and buying another one because of this; I don't want any throttling problems because of compatibility or whatever.
Is there any way to turn it off or disable it? A registry setting, perhaps?
Thank you

You seem to be right, guys.
I undervolted my CPU by -0.1V, lowered the frequency to 3600MHz, and the problem is gone.
So, time for a new PSU.
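For anyone chasing similar throttling, the PerfCap reason that GPU-Z displays can also be polled from the command line. Below is a minimal monitoring sketch, assuming an Nvidia card with `nvidia-smi` on the PATH; the query field names are real `nvidia-smi` options, but the parsing helper and the one-shot poll are illustrative only, not an official diagnostic tool.

```python
import csv
import io
import subprocess

# Real nvidia-smi query fields: active throttle-reason bitmask,
# GPU utilization, and board power draw.
QUERY = "clocks_throttle_reasons.active,utilization.gpu,power.draw"

def parse_sample(line):
    """Parse one CSV line of `nvidia-smi --query-gpu=... --format=csv,noheader`."""
    reason, util, power = next(csv.reader(io.StringIO(line)))
    return {
        # Hex bitmask of active throttle reasons (e.g. SW power cap, thermal slowdown)
        "throttle_mask": int(reason.strip(), 16),
        "gpu_util_pct": int(util.strip().rstrip(" %")),
        "power_w": float(power.strip().rstrip(" W")),
    }

def poll_once():
    """Take a single sample from the first GPU (requires nvidia-smi installed)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv,noheader"],
        text=True,
    )
    return parse_sample(out.splitlines()[0])
```

Logging `poll_once()` in a loop while the FPS drops occur would show whether the throttle mask goes nonzero at the same moment the load and power draw collapse.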

Similar Messages

  • HOWTO Workaround Power Consumption Problem w/ Garm...

    Preface
After suffering from low standby battery life for quite a while (less than 24h on an E51), I figured out that the problem comes from a background service installed by Garmin Mobile XT. I made a few measurements with Nokia Energy Profiler, and the difference between having Garmin installed or not is threefold: 0.05W average standby power consumption without Garmin installed versus 0.16W with Garmin installed, everything else being the same.
    The Problem
Garmin Mobile XT installs a small service, whose purpose is unknown to me, which draws the extra 0.10W of power even without Garmin started. There is an option to Enable/Disable this service in Garmin Mobile XT's configuration menu, and it is Disabled by default. The problem is that even though the default setting is Disabled, the service still runs once Garmin is installed.
    The Workaround
    Until Garmin fixes this bug you can disable the background service by following these steps: 
    0. Install Garmin Mobile XT on your handset if you haven't done so.  
    1. Start Garmin Mobile XT.
    2. Select "Tools" from the three buttons on the right.
    3. Select "Settings".
    4. Select "System".
    5. Scroll down and set "Launch background service" to "Enabled".
    6. Exit Garmin Mobile XT.
    7. Repeat steps 1 - 5 but this time set "Launch background service" to "Disabled".
    8. Reboot your handset.
    Done
    Result
    Now the background service will be *truly* disabled and your standby battery life should return to normal.

Thanks for your suggestions, but I was finally able to figure it out myself. linux-lts was unfortunately not an option for me, as I want to use bumblebee, but the direction this pointed me in was good. It seems one of the kernel updates broke my bumblebee package, so the power management didn't work and the card was constantly powered on. This was nothing new, but powertop completely messed things up by attributing this additional power consumption to eth0. Funnily enough, this power consumption even disappeared when I unloaded eth0. After deleting powertop's saved results and recalibrating it, it now shows all components' power usage correctly.
Thus I mark this thread as SOLVED.

  • MSI z87 X-POWER 100% GPU load at idle

    guys,
    i have just bought the z87 x-power. System specs are:
    -i7 4770k
    -2 x Asus 6950 1 gb in crossfire
    -16 GB GEIL ram
    -Samsung 840 series ssd
    I have flashed the latest 1.3 bios.
My problem is that even when my system is idling, the GPU in PCIE slot 1 (the uppermost slot) runs at 100% load. If I enable Crossfire, both cards run at 100%. It draws A LOT of power. The temporary fix I have found for this is to disable Crossfire in CCC and enable it again. Once I do this, both cards work fine and the power consumption and GPU temps return to normal levels. But I have to do this every time I start my PC.
    I have tried the following to no avail:
    -swapping the cards around (the second card is in slot 5, as per the board's instructions for running two cards).
    -removing the crossfire bridge
    -swapping the power connectors
    -uninstalling and re-installing the drivers (i'm using the latest 13.11 beta drivers).
    -using driver sweeper to completely remove all GPU driver data.
    -restoring optimized default settings in bios
    -swearing.
    -praying to allah, baby jesus, mick jagger.
The problem persists even with no drivers installed (using GPU-Z to check the load and temps). This leads me to think it's not the AMD drivers causing the issue.
Has anyone else encountered this issue? It's doing my head in!

    Quote from: MojoW on 06-November-13, 19:17:30
    I googled your problem and alot of people seem to be having the same problems because of a bitcoinmining malware.
    Check Processes when your gpu's are @ 100% to see if u can find : iehighutil.exe
    Just a thought.
Thanks Mojo. I have had a pretty thorough look and I don't have that process. Do you know if that malware uses any other names? I noticed that atiedxx.exe was using a fair amount of system resources, and it disappeared when I disabled and re-enabled Crossfire. A search on atiedxx.exe doesn't return much in terms of malware.

  • Mac Pro max GPU power consumption limit?

    Hey guys,
What's the maximum power consumption limit for the Mac Pro's two mini PCIe power connectors? I need to know because I have to decide which GPU to buy. I'd be thankful if someone helped me with this.
    Oh and it's a Mid-2012 12-core Mac Pro.
    Cheers!

You might want to look at www.barefeats.com, and then probably limit yourself to Mac Edition cards. Consider your budget and what you hope to achieve, whether you want it mostly for games in Windows without a boot screen, and whether you are even willing to add a second PSU just for power (2 x 8-pin plus 2 x 6-pin).
    EVGA Mac Edition GTX 680 is two 6-pin and $580 area.
    AMD has Mac Edition 7950
    You will actually find more about unsupported cards and their use on MacRumors. And you do need 10.8.3+ for most anything beyond Apple's ATI 5x70.
    I don't think looking at power usage is where I would have started my search.

  • N660 TF 2GD5/OC - Core clocks keep spiking to maximum since PSU change

    Hi, the core clocks on my GTX 660 Twin Frozr III have been spiking to maximum when browsing or just opening/closing programs since changing to a Corsair RM 750W PSU (their new model).
    Looking at GPU-Z when this is happening (see pic below), the GPU load is under 5% so I don't know why this is happening.
    Here's my system information:
    Gigabyte GA-Z77X-UD4H, BIOS F1, i5-3470, Windows 8 64 bit
    Corsair 2x4GB CMX8GX3M2B1600C9
    Corsair RM 750W PSU
    MSI N660 TF 2GD5/OC
    Corsair Neutron 128GB SSD
    Samsung SH-B123L Blu-Ray Rom Drive
    My CPU & GPU are both at stock speed and I have done a virus scan with Kaspersky which was negative. Also using latest 327.23 WHQL drivers.
    Any help appreciated.

    Many thanks for your reply
I guess I'll accept this is normal behaviour then. I think it's because I've now got a quieter PSU (the fan only spins up above 40% load), so I can hear the GPU fan briefly spinning up and down every time the power state changes. That is only slightly annoying, but with my OCD tendencies it becomes magnified.
The only workaround I've found so far is to use Nvidia Inspector's Multi Display Power Saver mode, so I'll stick with that for now, and maybe wait for the next generation of cards from AMD and Nvidia before upgrading, hopefully finding a card that works better with my PSU.

  • W520 nvidia quadro power consumption

Dear community, I have a short question about power consumption. I did a clean install of Windows 7 64-bit on my W520 with a Quadro 2000. Power consumption when idle with maximum brightness is somewhere around 18 watts, definitely higher than what I had with the Lenovo default installation. Checking Lenovo's energy software indicates that the GPU is running at full speed all the time, so I thought this could be a source of additional power consumption. I installed the latest Lenovo graphics driver from the homepage, but I'm not sure if that's enough. Maybe there is a problem with Nvidia's Optimus technology or the switchable graphics? I'd be glad about any feedback, and yes, I know that I'm a little bit on my own with a clean install. :-) Thanks, Branagh

I get these numbers from the basic view. The battery tab does not indicate power consumption when I have the power cord connected. If I'm on battery, the wattage numbers match.
But the result remains the same whether I'm on battery or plugged in, and whatever power settings I use (with the display at maximum brightness and the machine idling): if I force the laptop onto the Nvidia graphics, power consumption is at around 18W; if I allow it to use the Intel graphics, it goes up to at least 20W. GPU load in the power manager is always shown as 100%.
GPU-Z gives more plausible numbers, I guess. When I force the Nvidia GPU, the load is correctly displayed at 0% when idle and the core clock goes down to 50MHz. When I use the Intel graphics, the load remains at 1% and the core/RAM clocks remain at 650/533 MHz all the time.

  • N660 GAMING 2GD5/OC vBIOS

Hello, I've recently bought an MSI N660 GAMING 2GD5/OC with the Twin Frozr IV cooling system. Until now I really thought that MSI's dust removal technology was working, but it doesn't. The various reviews, and the shop I bought the card from, advertise this feature on this particular card, which is what made me buy it. To my knowledge this feature is implemented in the card's vBIOS, which dictates that the fans spin in reverse during boot. So let me ask my question in the hope of getting the required support:
Is it possible to have a BIOS for this card that has this feature enabled? Thank you in advance.
My card's info, as it appears on the sticker on the rear of the box, is as follows:
    312-V287-055
    N660 GAMING 2GD5/OC
    S/N: 602-V287-35SC1311018505
    PCI-E, N660, 2G GDDR5, TF IV FAN, OC, DL-DVI-I, DL-DVI-D, HDMI, DP, Power Cable, SLI
    and my current vBIOS version as it appears in GPU-Z utility is: 80.06.58.00.55 (P2030-0000)

O.K., thank you for your rapid response.
Also, I get that "it can't be activated" is the company's policy on this and similar matters, and not a technical restriction, all things considered. I really hope this feature will eventually be implemented one way or another, because I believe there is no serious reason to exclude such a nice feature from a mainstream card series.

  • Power consumption under Vista.

    I know this isn't really a photoshop thing, but I happened to discover it while I was playing with photoshop!
    I have a performance monitor for my graphics card, and it told me that the card used to run at 270MHz for 2D applications under XP.
Now that I have loaded Vista, the card is running flat out at 3D performance levels of 666MHz, all the time. This is despite the fact that I have disabled all the Vista mega graphics.
I manually underclocked the card to 335MHz, and there is no difference in Vista performance. In particular, there is no difference with Photoshop. But of course, running at the lower frequency, the GPU will consume less power, have a longer life, etc. The running temperature dropped a fair old bit!
Does anyone know why Vista is running the card in 3D mode all the time? Is it possible to get it to stop?

    > but he is trying to downgrade an upper-end card.
    that was part of my point. I wonder if he's stressing the card because it's not running the way it's supposed to (as in translating this to that and back again in order to display downgraded). I suggest he turn Aero back on and check his performance monitor again to see if there's any change/improvement.
    >I think that he would need a better case if heat is an issue
    agreed. or better cooling system.
    > and power consumption is not going to decrease that much, probably needs a better power supply.
    possibly that too, but not necessarily.

  • 3000 J converted to NAS, Are there fan control settings/ways to reduce power consumption?

So I've had this old 3000 J series (J115 7388) that's been sitting doing nothing for a while, so I decided to convert it into a FreeNAS box. I've stripped out the DVD drive, card readers, and anything unnecessary to reduce power usage. On that theme, I have a few questions...
1. Is there a setting in the BIOS to automate the fans as and when needed, to make the system quieter and reduce power consumption? There will be 2x 2TB WD Green drives in there with an idle timeout of 5 minutes (at the moment I'm using one 320GB HDD from an old laptop). The system won't be used 90% of the time, and I'd like it to react when needed rather than run constantly.
The specs (http://support.lenovo.com/en_US/detail.page?LegacyDocID=MIGR-67129) say the fan is variable, but I can't see anything obvious in the BIOS. It did take me some time to find the boot order, so maybe I'm overlooking something.
So that's my main question, and this is the bonus:
2. I know there's an Nvidia C51G chipset in there somewhere; is it possible to remove or disable it? There's no GUI on the PC anymore. The same goes for anything else, like sound, just to make it as efficient as possible and not have any unnecessary components using power.
I have downloaded the latest BIOS (circa 2007) onto another PC from http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS001014, even if it only matches what I already have (I did note 2007 on the current BIOS version). Of course I can't run it to make a bootable USB, as the downloaded exe is for XP.
I'm having a little trouble with the Lenovo site UI; not sure if it's the coding, but the links to guides/docs/drivers/downloads on http://support.lenovo.com/en_US/guides-and-manuals/default.page?selector=expand just don't seem to be working for me.

Are you sure there are no trojans or viruses on the machine? Also, launch the Task Manager and check for a service or application with abnormal CPU load while the system is idle. What is the power usage in idle mode?
    x220 | i5-2520m | Intel ssd 320 series | Gobi 2000 3G GPS | WiFi
    x220 | i5-2520m | hdd 320 | Intel msata ssd 310 series | 3G GPS | WiFi
    Do it well, worse becomes itself

  • Tecra R840 - Win8 - power consumption in hibernate mode

    Hello,
I did the offered upgrade to W8, and I am very satisfied with the W8 system, but the problem is that the notebook consumes battery even in hibernate mode.
I would understand power consumption in sleep mode, but in hibernate mode?
I found out that when the computer enters sleep or hibernate mode and I open the lid, the computer automatically starts EVEN if the option to "power up computer when lid open" is unchecked.
When I power the computer off, it does not react to opening the lid; I have to press the power button.
The problem could be in Windows: the OS may be telling the computer to wake up when the lid is opened.
The problem is that the computer starts charging every time I wake it from hibernation (my office is 15 minutes from home, and it consumes about 1-2% of battery).
It is quite annoying that my full-charge battery capacity is at 83% of the original (I bought my PC in June 2012).
I had a Lenovo T60 7 years ago, and it had an option to start charging the battery only when the charge was under 95%, which preserves battery capacity.
    Does anybody have some clue?
    Thank you!

The fact is that battery capacity will be reduced even if the notebook is completely OFF; the battery will lose a few percent of capacity. To reduce this, check http://aps2.toshiba-tro.de/kb0/FAQ9C015N0001R01.htm
Another useful document for you may be http://aps2.toshiba-tro.de/kb0/HTD9401AZ0001R01.htm
> I had a Lenovo T60 7 years ago, and it had an option to start charging the battery only when the charge was under 95%, which preserves battery capacity.
As far as I know, such an option is not available on Toshiba notebooks. When connected to an AC power supply, the notebook starts to charge the battery automatically.
There is no option to change this.

  • MacBook Power Consumption

We operate tens of MacBooks at our campus for student loans. We are trying to cut the power consumption of the loan operation. I am interested in knowing how much power the power brick consumes when:
    a) nothing is connected to the MagSafe jack.
    b) the macbook is fully charged and the LED is lit green.
Please don't guess or speculate! Give me a firm answer if you know. I appreciate it!

    I pulled out my 1940s era General Electric Wattmeter. This is a two-coil instrument that measures Vrms * Irms * cos( phase angle) = actual Watts. This type of meter does not measure reactive power that does not deliver energy to the load but circulates in the system. Reactive power is a concern because the out-of-phase current of reactive power does cause energy loss (I squared R loss) by heating the wires carrying the current.
    Power brick energized, but not attached to computer - unmeasurable.
    Power brick energized, attached to computer, green LED on - about 0.7 Watt.
    Power brick energized, attached to computer, OSX booted, but idle - 8 to 10 Watts.
    Power brick energized, attached to computer, screen saver running - 8 - 12.5 Watts.
    Power brick energized, attached to computer, 5th Element DVD playing - 12.5 -15 Watts.
    Power brick energized, attached to computer, Grapher, contours.gcx - 18.5 - 20.5 Watts.
    Once in a while, I could get the MacBook to peak at about 25 Watts (averaged by the mechanical dynamics of my meter) of power consumption. The initial plug-in of the power pack to the socket also consumes several amps for a split second - too hard to measure accurately with my crude equipment. There's also a little audible crack! as the prong touches the outlet, so it is obviously a pretty large momentary in-rush of current.
    When the power brick is unplugged, the Irms current draw is about 22 mA according to my Fluke 8060A. Presumably that's almost all reactive at close to -90 degrees phase angle relative to the voltage. Switch mode power packs like the MacBook's tend to be capacitive loads. That's about 2.64 VARs of reactive power.
    When the MacBook is idling at about 9.5 Watts of real power consumption, the current draw is about 245 mA. The real power corresponds to about 70 mA of in phase current. This gives the surprising result of about 234 mA of lagging reactive current.
In terms of what real-world thing you can do: for 10 to 100 MacBooks, I don't believe it is worthwhile to do anything, although remediating the power supply to reduce the reactive current load would in principle be a good thing. At industrial scales, factories worry about current phase angle a lot. Power providers bill for reactive current they have to generate; otherwise they would lose revenue by heating their transmission lines while the customer gets no energy delivered to the manufacturing process by the out-of-phase reactive current.
    Probably, the energy consumed and eventual environmental waste generated by remediating 10-100 power supplies, not to mention the engineering hours required, would not offset the power that would be saved over the useful life of the MacBook.
    On the other hand, grumbling at Apple to contract with the manufacturer for a greener design for the power pack for the many thousands of users would be a good thing on which to expend a little energy.
    PS: just to be sure my Wattmeter is giving reasonable results, I plugged in a "60 Watt" bulb that's been used for about 200 hours. According to the meter, it draws 58 watts.
    Bill
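The measurements above all follow from the standard power-triangle relations (apparent power = Vrms x Irms; reactive power = sqrt(apparent^2 - real^2)). A short sketch reproducing the post's arithmetic, assuming 120 V mains as implied by the numbers:

```python
import math

def power_triangle(v_rms, i_rms, p_real):
    """Return (apparent VA, reactive VAR, in-phase A, out-of-phase A)
    from RMS voltage, RMS current, and real power."""
    s = v_rms * i_rms                          # apparent power (VA)
    q = math.sqrt(max(s**2 - p_real**2, 0.0))  # reactive power (VAR)
    i_real = p_real / v_rms                    # in-phase current (A)
    i_reactive = q / v_rms                     # out-of-phase current (A)
    return s, q, i_real, i_reactive

# Unplugged brick: 22 mA, essentially all reactive.
s, q, _, _ = power_triangle(120, 0.022, 0.0)
# q comes out to about 2.64 VAR, matching the post.

# Idle MacBook: 245 mA at about 9.5 W real power.
s, q, i_real, i_reac = power_triangle(120, 0.245, 9.5)
# i_real is about 79 mA and i_reac about 232 mA, close to the
# 70 mA / 234 mA figures quoted above.
```

The small discrepancies against the quoted 70/234 mA are within the precision of the meters described.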

  • Power Consumption Revisited

    I was reading an article on tomshardware.com the other day in which they described a process by which they were able to measure the power consumption of various video cards using a simple device that plugs into a standard wall socket and then displays the number of watts the currently plugged in device is using.  In light of the ever-increasing PSU recommendations that tend to show up here (I recall seeing one poster recommend "a PSU with 24 or more amps on the +12V rail for anything other than a barebones system"), I decided that it might be beneficial to these forums if I did a little empirical study of my own.  So anyways, I shelled out ~$30 for the device shown here:
    http://www.supermediastore.com/kilwateldet1.html
    ...and ran some tests of my own.  The results:
    Preliminary Testing -
To verify that my power meter would give reasonably accurate readings, I first hooked it up to a 3-way lamp with a 50/100/150 watt bulb installed. The readings returned for the 50/100/150 watt settings were, respectively, 44/94/142, so my power meter seems at least reasonably accurate. Some other stuff I measured just for the hell of it: my speakers use 3 watts of power in standby mode and 30 watts when active (I haven't yet tested them playing at full volume), and my monitor uses about 70 watts when on and about 2 watts in standby mode.
    Results -
Satisfied that I had not just wasted my money on an inaccurate power meter, I then hooked it up to my PC (the one described in my sig) and measured the power consumption under a variety of circumstances. It is important to note that these readings reflect the total power drawn from the wall socket, not the amount of power actually being demanded by the system. This is because no PSU is 100% efficient (a good one will be maybe 80% efficient, if even that), so the amount of power actually needed by the system is at least about 20% *less* than the recorded values. Anyways:
    During startup, the power usage spikes very briefly at 197 watts, then averages 152 watts over the rest of the boot cycle.
    The system uses 134 watts of power when idling.
    Under full CPU load, the system uses 168 watts.
    Running 3d Mark 2001 the power usage is 169 watts.
    Playing Far Cry (high detail settings, 1024x768x32), the power usage is again 169 watts.
    Conclusions -
    So, let's now assume a worst-case scenario, in which the extra 34 watts recorded during full CPU load came entirely from extra CPU power drain (a reasonable assumption), and in which the extra 35 watts recorded during 3d Mark and Far Cry came entirely from extra video card load (a much less reasonable assumption), and in which we have a PSU that is 90% efficient (greater efficiency means that the system would actually have to demand *more* power in order to get the total power drain up that high).  In this case we see that if an application were developed that fully taxed the video card and CPU continuously, the total power drain would be 134 + 34 + 35 = 203 watts (which actually correlates rather nicely with the 197 watt spike observed during the boot cycle), meaning that the system is demanding about 183 watts from our unrealistically efficient PSU (note that with the PSU efficiency set to a more realistic 75%, the system would only be demanding a mere 153 watts of juice at full CPU and video card load).  
    Admittedly, the video card in my system is relatively weak, so let us again take the worst case scenario and assume that if I were to be using a 6800 Ultra, the total power drain would be 100 watts greater (this is above what the actual difference should be given the results posted on tomshardware.com regarding the power use of the 6800 Ultra), so our video card now consumes an astounding 135 watts of power, and our total power drain (in our unrealistic situation where we have some application which is capable of 100% CPU and video card utilization for a sustained length of time) is now 303 watts.  With our unrealistic 90% efficient PSU, it would mean that the system is demanding about 273 watts from the PSU (about 228 watts with a 75% efficient PSU).
    Note that aside from the weak video card, I have a fairly robust system (which also happens to be slightly overclocked), with 4 HDD's (two of which are WD Raptors), 2 optical drives, several PCI devices, and two large 120mm case fans, and yet the power demands of this system, even in an unrealisticly demanding situation, are *well* within the ability of a quality 380W (or even 300W) PSU to deliver.  In this case even if all the power happened to be being sucked off of the +12V rail (which is not the case), any PSU with 18 amps at +12V could still handle it.  Furthermore, even if I were to add a needlessly power-hungry video card into the mix, the power demands are *still* safely within what any decent 380W PSU should be capable of (and even what a quality 300W PSU should be capable of, although this may be pushing it a little, though it should always be noted that the numbers indicate a hypothetical worst-case power drain that should be beyond the maximum drain possible in any real-world situation).
    So, we can therefore conclude that the power demands of a reasonably robust Athlon64 based system are not astronomical by any means, and that they do not justify a minimum recommendation of a 465W PSU with 24+ amps on the +12V rail for any system which is not "barebones," and that there is no observational evidence to support the idea that a PSU with 18 or fewer amps at +12V is categorically inadequate for use in an Athlon64 based system.
    ...anyways, I guess that's all, I hope you found this interesting, or at least informative.  I'm off to see what else I can do with my power meter thingy...
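The worst-case arithmetic above is easy to reproduce. A quick sketch, using the post's own simplification that DC-side demand equals wall draw times PSU efficiency (the 100 W "6800 Ultra" delta is the post's deliberately pessimistic assumption, not a measured value):

```python
# Wall-socket measurements from the post (watts).
idle, cpu_extra, gpu_extra = 134, 34, 35

def dc_demand(wall_watts, efficiency):
    """Power the system actually draws from the PSU's DC side,
    given the measured wall draw and an assumed PSU efficiency."""
    return wall_watts * efficiency

# Hypothetical app taxing CPU and GPU simultaneously:
worst_case = idle + cpu_extra + gpu_extra   # 203 W at the wall

# Swap in the hypothetical 6800 Ultra (+100 W at the wall):
hungry_gpu = worst_case + 100               # 303 W at the wall

# 90% efficient PSU: dc_demand(203, 0.90) is about 183 W,
# and dc_demand(303, 0.90) about 273 W, matching the post.
# At a more realistic 75%: about 152 W and 227 W respectively.
```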

    Really?  Do you have measured data which clearly supports your claims, or are you just holding up an opinion as a matter of fact?
    My point was, my measured results show that the total power demand of an Athlon64 based system across *all* of the rails is fairly low, even at 100% system load.  So, let's recalculate things assuming a 75% efficient PSU, with 75% of all load being at + 12V (which is still probably higher than the actual value), and let's leave the hypothetical 6800U inside of my system.  We get .75 * 303 = 227 watts in total that the system is demanding.  Of these 227 watts, the system is demanding .75 * 227 = 170 watts over the +12V rail.  170 watts / 12V gives us a total demand of 14.2 amps on the +12V rail.  Note that this is with the hypothetically demanding 6800U card installed and is still likely to be at least a couple amps higher than what a *real* system would ever use, and any *quality* PSU capable of 18 amps at +12V should still be perfectly adequate for use in the system.
    Furthermore, PSU efficiency dropping to 60% in real world situations supports my results, as it means that the actual system was demanding substantially *less* power than the system in my hypothetical example, making things even *easier* for the PSU.  Re-running the above equation with a 60% efficient PSU and 75% of all power demand coming from the +12V rail, we see that the system is only asking for 11.4 amps at +12V at full load with a 6800U installed (and also at full load).
    If you want to disagree with my results, that's fine, but don't expect me to take your argument as credible unless you have some actual, measurable data to back up your claims.  Saying "this is the way things *really* work because I say so" doesn't cut it, so until you want to break out a multimeter and measure the amps your PSU delivers to the MB on the +12V rail at boot, idle, load, and gaming and then report your results and discuss whether or not they are consistent with your "amps are what counts" hypothesis, I hold my results and conclusions up as being valid, and as soon as I see any measured results which contradict mine, I will gladly stfu about PSU recommendations being needlessly high.
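The rail-current figures in this reply check out the same way. A sketch under the reply's stated assumptions (303 W wall draw, a given PSU efficiency, 75% of the DC load carried by the +12V rail):

```python
def rail_amps(wall_watts, efficiency, rail_fraction, rail_volts=12.0):
    """Estimated current on one rail: wall draw scaled by PSU
    efficiency, times the fraction of DC load on that rail."""
    return wall_watts * efficiency * rail_fraction / rail_volts

# 75% efficient PSU, 75% of load on +12V: about 14.2 A, as stated.
a = rail_amps(303, 0.75, 0.75)

# 60% efficient PSU, same split: about 11.4 A, as stated.
b = rail_amps(303, 0.60, 0.75)
```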

  • Power consumption with dual power supply

    Hi,
    I need to know how dual power supply works, I mean, when you have two power supplies in a router if both of them are working at the same time, or if one of them is working while the other one only begin to work when the first one stops.
    And if both of them are working at the same time, what is the power consumption, the same as if only one of them was working?
    Thanks

    marianares0001 wrote:Hi,I need to know how dual power supply works, I mean, when you have two power supplies in a router if both of them are working at the same time, or if one of them is working while the other one only begin to work when the first one stops.And if both of them are working at the same time, what is the power consumption, the same as if only one of them was working?Thanks
    The implementation details can potentially differ per product category but usually the following is applicable:
    The product provides configurable power redundancy modes and choosing a mode dictates whether load will be shared across supplies or whether a particular supply will be dedicated for redundancy.
    If both (or more) supplies are working at the same time then both share the power load.
    The actual power consumption depends on the power required to run the installed components and is the same whether you use power supplies in redundant or power sharing mode.
    Atif
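The point above can be shown with a toy model (the 400 W figure is purely illustrative): total consumption is fixed by the installed components, and the redundancy mode only changes how that load is split across supplies.

```python
def per_supply_watts(component_watts, active_supplies):
    """Load carried by each active supply under simple load sharing."""
    return component_watts / active_supplies

total = 400.0                               # hypothetical router load (W)

one = per_supply_watts(total, 1)            # dedicated/standby mode:
                                            # one supply carries everything
two = per_supply_watts(total, 2)            # sharing mode: each carries half

# Either way, the chassis draws ~400 W overall
# (ignoring per-supply efficiency curves).
```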

  • N660 TF 2GD5 UEFI Compliant VBios

    Hi,
I have an issue with my N660 TF 2GD5 since I installed Windows 8.1 in UEFI mode: once I install the official Nvidia driver (whatever the version), I get a dark grey (~black) screen at boot instead of the login screen; sometimes I can see the cursor, but I'm completely blocked. To be able to log in, I have to uninstall the driver.
    I guess that the root cause is that the VBios is not UEFI compliant, is it correct?
    Moreover I have an issue when I try to dump the firmware:
    NVIDIA Firmware Update Utility (Version 5.164)
    Adapter: GeForce GTX 660      (10DE,11C0,1462,2871) H:--:NRM B:01,PCI,D:00,F:00
    The display may go *BLANK* on and off for up to 10 seconds during access to the
    EEPROM depending on your display adapter and output device.
    Identifying EEPROM...
    EEPROM ID (EF,3012) : WBond W25X20A 2.7-3.6V 2048Kx1S, page
    Reading adapter firmware image...
    My SN: 602-V287-050B1302041719
    Any help is welcome!
    GPR

    Quote
    I guess that the root cause is that the VBios is not UEFI compliant, is it correct?
    Doubtful, as the machine would not even POST if it were set to UEFI mode while the VGA card was not using a GOP-compliant vBIOS.
    Quote
    Moreover I have an issue when I try to dump the firmware:
    And what problem is it? Does it not continue saving? If Windows works without the driver, use GPU-Z to save the vBIOS: http://www.techpowerup.com/gpuz/

  • REQUEST: UEFI GOP - 2 * MSI N660 TF 2GD5/OC

Hello all, I'd like to request the latest UEFI BIOS for the following cards...
    First:
    N660 TF 2GD5/OC
    S/N 602-V287-04SB1208195847
    Second:
    N660 TF 2GD5/OC
    S/N 602-V287-050B1304104486
Currently running an ASRock Z87 Extreme6/ac with a UEFI BIOS, of course.
I noticed a topic a few posts below with the same cards, but I figure it's better to ask again so we do not run into major problems. Also, both cards were made at different times, so different BIOSes might be needed.

    Quote from: Svet on 12-July-13, 20:02:24
    It might also be interesting to mention that there is a clear difference between the two 660s.
    The top one runs at around 34°C, while the bottom one runs at around 29-30°C... Even when I split the multi-monitor load over both, this stayed the same. And there is almost no hot air being drawn from the lower card to the top one, as the case is open (for another reason, with fans blowing cold air directly at them), and the 34°C is about the same as the 33°C it was running at before, without a second 660 below it.
    So... whatever they did, they clearly optimized the core even better on this new one, or so it seems.
