NVidia GTX 780 Ti faster than GTX Titan

Today the GTX 780 Ti GPU became available starting at $699, and it is faster than the $1,000 GTX Titan.  We do not have any test results yet, but all the specifications point to it being faster: it has faster memory and more CUDA cores.  The one remaining advantage of the GTX Titan is its 6 GB of video RAM versus the GTX 780 Ti's 3 GB.  Unless the media you are editing is larger than 4K, 3 GB of video RAM should be enough (see Harm's data).
Will someone please move this to the hardware forum? I must have been dozing off.  Bill

The i7-3770K will be a great match for a single GTX 780 Ti.
Maybe on an absolute scale. But in my previous post in this thread I compared against the PPBM6 results Harm's Monster achieved. (And granted, Harm's Monster was equipped with a GTX 680 when he tested it.) If my i7-3770K was about 2.25x slower than Harm's Monster in the H.264 Blu-ray portion of the PPBM6 test suite (259 seconds versus 115 seconds), then the MPEG-2 DVD result should also be about 2.25x slower (in other words, about 52 seconds compared to the 23-second result of Harm's Monster). And since Harm's Monster used a GTX 680 to achieve those results, it follows that the GPU that would have given my system that same balance is a plain, non-Boost GTX 650 Ti.
That said, I tested a 2GB eVGA GTX 660 SC in both my auxiliary i5-2400 system and my main i7-3770K system. I do agree that the GTX 660 was a bit of overkill for an i5 with only 16GB of RAM: its MPEG-2 DVD result of 40 seconds was 2 seconds slower than the same card achieved in my main rig. At first I could not confirm whether the slower result was CPU-limited or RAM-limited, but on further investigation it turned out to be due to the auxiliary system's PCI-e x16 slot running at only PCI-e 2.0 rather than PCI-e 3.0 bandwidth. The GTX 660 would, however, be a good match for a moderately overclocked i7-3770K with 32GB of RAM and a two-disk RAID 0 of latest-generation 7200 RPM disks (such as the 1TB to 3TB models in the Seagate 7200.14 line). In fact, the GTX 660 scored the same 38 seconds in the MPEG-2 DVD portion of the PPBM6 test suite as my old GTX 560 Ti 448 (which really should have been named a GTX 570 LE).
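As a sanity check on that proportional estimate, here is the arithmetic as a small Python sketch (numbers taken from the posts above):

```python
# Proportional scaling estimate from the PPBM6 numbers above.
h264_mine, h264_monster = 259, 115   # H.264 Blu-ray times, in seconds
factor = h264_mine / h264_monster    # ~2.25x slower overall
mpeg2_monster = 23                   # Harm's Monster MPEG-2 DVD time, in seconds
print(f"predicted MPEG-2 DVD time: {factor * mpeg2_monster:.0f} s")  # ~52 s
```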

Similar Messages

  • Premiere Pro CC Warp Stabilizer with Mercury Playback Engine GPU Acceleration (CUDA) activated - doesn't use the Nvidia GTX Titan GPU

    Windows 7 Pro, fresh installation, including all updates
    Premiere Pro CC - current version
    Gainward GTX Titan Z - current firmware and driver
    SSD hard drive
    Intel i7 960 / 3.2GHz / 32GB RAM
    Mercury Playback Engine - GPU is activated.
    Card is listed by default in cuda_supported_cards.txt
    Selected clip: 1920x1080i, 25fps, AVCHD/mp4
    Selected effect: Warp Stabilizer with Subspace Warp (default parameters)
    Step 1: Analyse the clip
    The Intel i7 is working at 90%, not the GPU
    The Nvidia card's status monitor shows 0% utilization
    Playing the analysed clip (information from the CUDA render system):
    Total render time: 26ms
    Display FPS: 0.82
    Scheduler FPS: 2.28
    Target FPS: 25.00
    Rendered FPS: 3.50
    Total: 239
    Dropped: 234
    => Playback of the analysed clip drops frames heavily.
    Step 2: Rendering the clip
    The Intel i7 is working at 90%, not the GPU
    The Nvidia card's status monitor shows 0% utilization
    The Warp Stabilizer is shown as a GPU/CUDA-supported effect.
    The GPU doesn't help the Intel i7...?!
    Please help me!

    Thank you for your fast response!
    Warp Stabilizer is supposed to be a GPU-accelerated effect.

  • NVidia GTX Titan

    In a few months, I will be building a new computer and I am considering buying a GTX Titan. My main question is: will it work with PP CS6? And if not, does Adobe plan to support it?

    The reasoning behind my guess about performance is based on the following comparison:
    Specification               GTX 680          GTX Titan
    CUDA cores                  1536             2688
    Memory bus (bits)           256              384
    Memory bandwidth (GB/s)     192.2 (203.1)*   288.4
    Base clock (MHz)            1006 (1166)*     837
    OpenGL                      4.2              4.3
    Graphics card power (W)     195              250
    * The figures in parentheses are my own overclocked settings.
    While the number of cores has increased significantly, the most telling difference is the memory bandwidth, which increased by around 50%; and since this is a single-GPU card, in contrast to the GTX 690, this may be the factor with the most impact on performance. There is one aspect to take into consideration, however, and that is the very weak threading use of CUDA cores with ray tracing. It is very disappointing to see how few CUDA cores are used during ray tracing. I don't know whether that is a driver problem or a deficiency in the Adobe code, but until that is resolved, there cannot be a linear performance boost from adding CUDA cores. Hence my guesstimate of 30 - 40%.
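    As a quick check on those figures, here is a small Python sketch computing the relative changes from the table above (stock clocks only):

    ```python
    # Relative spec changes, GTX 680 -> GTX Titan (stock values from the table).
    gtx_680 = {"CUDA cores": 1536, "bandwidth (GB/s)": 192.2, "base clock (MHz)": 1006}
    titan   = {"CUDA cores": 2688, "bandwidth (GB/s)": 288.4, "base clock (MHz)": 837}

    for spec, old in gtx_680.items():
        new = titan[spec]
        print(f"{spec}: {(new - old) / old * 100:+.1f}%")
    # CUDA cores: +75.0%, bandwidth (GB/s): +50.1%, base clock (MHz): -16.8%
    ```

    Note how the 75% core increase far exceeds the 30 - 40% guesstimate precisely because of the threading limitation just described.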

  • GTX Titan - 12% improvement?

    OK, I am one of those stupid early adopters who sometimes has more money than brains - and today was one of those days. I had to have an NVidia GTX Titan. I got one this afternoon and ran a quick test with a recent 3 min. 57 sec. video I made earlier this week. It was nothing fancy: 1080p, 30 fps, 8 graphic layers (2 animated background images and 1 footage with green screen). It was not created to try and take advantage of the GPU. It was just a typical video I produce daily.
    Here is my system:
    CPU: i7-3930K (4.4 GHz)
    GPU: stock Asus GTX 680
    RAM: 64 GB (1866 MHz)
    C: drive: 240 GB Intel 520
    Source drive: 2 x 240 GB Intel 520 (RAID 0)
    Destination drive: 2 x 240 GB Intel 520 (RAID 0)
    TODAY'S upgrade: ASUS GTX Titan
    Here are my results:
    - Rendering to H.264 (HD 1080p, 30 fps) took exactly the same amount of time. There was no improvement. The 237 sec. video took 170 seconds to render with both GPUs.
    - Rendering to MPEG2 (HDTV 1080p, 30 fps) was faster. Not by a lot but around 12%. What would normally take 96 seconds took 84.
    But since this test was timed with my iPhone's stopwatch, you could add or deduct a couple of seconds either way. I will redo the PPBM5 test later tonight or tomorrow during the day. We will see if I score better than before.

    Nicol,
    Thanks for your submission. Rank #3 now. Great results.
    Of course the Disk I/O test and the H.264 test are almost identical, and the differences can be attributed to measurement error. What is interesting, however, is the MPEG2-DVD score, which went down from 56 to 44 seconds, an improvement of around 20%. This test requires a lot of GPU assistance, since there is scaling, blending and blurring going on over the complete timeline. It is still early in the game, but when you see a 20% improvement on such a test, even when using AME, it is possible that the real performance improvement over a GTX 680 may be better than 20%. I'm still hoping for 30 - 40%, but that requires a different test to show. Anyway, 67% of the top scores on the MPEG2-DVD test are now Titans.

  • NVidia Geforce GTX Titan compatibility

    Right now I'm using CS5 and want to upgrade to CS6.  I would much rather upgrade both my card and my software rather than just the video card. Before I buy the new nVidia GeForce GTX Titan for my system, I would like to know that it will be compatible. I haven't seen it on the list of suggested cards, but it is fairly new. It seems to be the card to get, but the Quadro 6000 is what is recommended. The specs for the 6000 seem to be far lower (by over half) than the Titan's, so I am not sure why I would buy it, except that it, like the other nVidia pro video cards, is definitely compatible. I don't use SDI, since I use cards to shoot video and use my computer for post-production. I also use Blender 2.66 for 3D design and animation in conjunction with AE and Premiere to clean up, edit and render out video. I am using a Quadro FX 5800 for my video card, but it is quickly becoming a dinosaur, which is pretty amazing to me, that it lasted this long in the tech world.  I realize this seems like an nVidia question, but it straddles both companies; since nVidia has worked closely with Adobe for so long, I was just wondering if anyone knows. The Titan seems to run circles around the 6000 for half the price. I would end up using the Titan as the render engine and the 5800 as the display card, so the Titan can be fully utilized.

    Just wondering if you read my reply?
    Looks like I can upgrade the memory with 24GB of G. Skill Ripjaw series DDR3 1600 for about $200.
    Looks like a good plan.
    The i7 can be upgraded to the 990X for about $1000 (and overclocked less than my 950)
    and overclocking the 950 to 4.0 - 4.4 GHZ is free, saves you around $ 1000 and is about equally fast.
    and the nVidia card upgrade to the Titan for about $1100.
    which is absolute overkill on this system. You will not notice the difference with a 660 Ti for around $ 250.
    Include the $375 for the upgrade to CS6 and I'm under $2700.
    AFAIK the upgrade from CS5.5 to CS6 is $ 525 in the US and around $ 900 in Europe.
    If you follow my advice, the total cost is $ 200 + $ 0 + $ 250 + $ 525 = $ 975.
    That saving goes a long way for a new system, but you still would have to get that Samsung 840 Pro SSD.

  • NVidia GTX 780 in a Mac Pro?

    I have a 2010 Mac Pro that I just bought.  It currently has an ATI Radeon 5770 graphics card with 1GB of GDDR5 VRAM.  I am looking to buy the best gaming card I can for this machine.  I am running Windows 7 installed with Boot Camp for gaming.  Due to power supply restrictions, and possibly other hardware restrictions of the cards and the Mac Pro, I need input on which cards will work.  I would like to use the NVidia GTX 780, GTX 690, or ATI Radeon 7970.  Or another?
    Thanks in advance.

    It seems the warranty is no longer valid after you do any kind of power modification to Apple hardware, so I am not recommending this, merely sharing some knowledge.  I spent some time researching and testing solutions before I settled on the following to get the GTX 780 running in both OS X 10.8.4 and Windows 7 64-bit on a 2010 Mac Pro.
    Avoid 6-to-8-pin adapters; these are a bad fix. You might get enough power to start the machine, but as soon as you begin any serious use of the GPU, the power supply will shut off because of the demand for more power than it can safely provide.
    WARNING: if you don't understand how this is done, then don't attempt it.  Anyone who understands how the power system operates should have no problem understanding and performing this.
    Many people believe an external supply is the only solution; however, if you are only adding one card, there is a simple solution if you can bear a small compromise.
    I don't really use optical drives, and when I need one I have external USB versions. So I removed the optical drive and made a wiring harness to tap the required rails from the optical drive power cable.  This harness feeds the 6-pin input connector on the GTX 780.  The two 6-pin cables from the Mac Pro backplane then feed the dual 6-pin to 8-pin adapter which ships with the GTX 780, which in turn plugs into the 8-pin connector on the GTX 780.
    Once this is done, the power requirements don't seem to exceed the Mac Pro power supply limits, and all those CUDA cores can get busy.  The fan noise is low even when pushed, the temperature has not exceeded what I would expect, and the performance increase is excellent.
    Of course, anyone wanting an SLI-based setup will probably need to add a power supply; of the options I researched, the internal variants seemed the way to go, offering a much cleaner and safer install.
    Hope this helps; after spending so much time on this, it seemed worth writing down.
    Cheers
    John

  • Does the NVIDIA GeForce GTX Titan X work with the newest AE CC version?

    Hello
    I would like to know if I can use the GeForce GTX Titan X with the newest AE CC version. On the system requirements page (2014), the GTX Titan is mentioned for the ray-traced 3D renderer, but not the Titan X. Does the Titan X work as well? Who can help me?
    I would like to run it on an HP Z640 (Windows) computer.
    Thanks for all the help.
    Tom

    Sorry for my earlier typo. I meant the Porsche 918 Spyder.
    To determine when to use the Titan X, use a rough rule of thumb like this formula:
    MIN(N, 1.5) x (physical cores per CPU) x (clock speed in GHz), where N is the number of CPUs.
    Your example with dual E5-2643 v3 CPUs results in: 1.5 x 6 x 3.4 = 30.6
    With dual E5-2697 v3 CPUs, the result is: 1.5 x 14 x 2.6 = 54.6
    With a single i7-5960X overclocked to 4.5 GHz, the result is: 1 x 8 x 4.5 = 36
    When the result is < 40, it is highly doubtful the Titan X can be used to full potential; when the result is between 40 and 50, it may occasionally be used to full potential; when the result is > 50, it is very likely to be a very good choice.
    This statement assumes that memory is at least 64 GB and the disk setup is very fast on all volumes, with transfer rates in excess of 800 MB/s.
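    Expressed as code, the rule of thumb reads like this (a minimal sketch; titan_x_score is a hypothetical helper name, and the thresholds are the ones quoted above):

    ```python
    def titan_x_score(num_cpus, cores_per_cpu, clock_ghz):
        """MIN(N, 1.5) x (physical cores per CPU) x (clock speed in GHz)."""
        return min(num_cpus, 1.5) * cores_per_cpu * clock_ghz

    def verdict(score):
        if score < 40:
            return "highly doubtful the Titan X can be used to full potential"
        if score <= 50:
            return "may occasionally be used to full potential"
        return "very likely a very good choice"

    # The three examples from the post:
    for cpus, cores, ghz in [(2, 6, 3.4), (2, 14, 2.6), (1, 8, 4.5)]:
        s = titan_x_score(cpus, cores, ghz)
        print(f"{s:.1f}: {verdict(s)}")   # 30.6, 54.6, 36.0
    ```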

  • Hot Debut! MSI 2014 Gaming Laptops Feature Nvidia GTX 800M Series GPUs

    MSI's gaming laptop line now delivers brand-new models with the latest Nvidia GTX 800M family, which are more powerful and boost the performance of the latest gaming laptops. Under the hood, they feature Intel 4th-generation quad-core processors and NVIDIA GeForce GTX 880M, GTX 870M, GTX 860M and GTX 850M discrete graphics, in 15.6-inch to 17-inch form factors. These gaming notebooks meet gamers' need for machines that are both lean and mean. In addition, they sport SteelSeries gaming keyboards with multi-color backlighting found only on MSI gaming notebooks, Killer networking to boost internet efficiency, a new authentic Sound by Dynaudio speaker system, XSplit Gamecaster pre-installed with a 6-month premium license for gameplay streaming, and MSI's own Audio Boost headphone sound-enhancement solution. Light and powerful, these notebooks will give you a furious advantage in Full HD gaming competitions.
    The most wanted exclusive features of MSI's 2014 gaming laptops:
    Exclusive SteelSeries keyboard and SteelSeries Engine
    Exclusive XSplit Gamecaster, the best live-streaming application
    Exclusive Super RAID 2 technology
    Killer DoubleShot networking technology, built in by the world's leading gaming brand
    Exclusive sound system, fine-tuned by Dynaudio
    Exclusive Matrix Display technology for multiple external monitor output
    The new GT series – Dominate the cyber battlefield
    Featuring NVIDIA's latest GeForce GTX 880M (Dominator Pro) and GTX 870M (Dominator) graphics, the new GT70 gives users a superior extreme gaming experience on a top-end performance platform. It provides gaming performance at "the speed of light" and at incredible resolutions, and it boasts more than 15% better performance in 3DMark 11 standard tests over similar-level gaming computers of the previous generation, like the GTX 780M and GTX 770M. The unique gamer-oriented feature design, record-shattering performance, and innovative gaming experience of MSI's gaming laptops have been steadily winning over extreme gamers around the world since 2013.
    New GS series - The Ghost killer, weapon of Stealth
    MSI's engineers refine every material and component in detail to push performance to the limit while staying true to portability. The MSI GS Stealth and Ghost series packs an Intel Core i7-4700HQ and graphics powered by an Nvidia GeForce GTX 870M into a design only 19.9mm thin and weighing 1.9kg (GS Ghost). Performance in the 3DMark 11 test can reach 20% more than the previous-generation GPU.
    The groundbreaking GE Apache Pro series
    They feature Intel's 4th-generation quad-core processors and new-generation NVIDIA GeForce GTX 870M and GTX 860M discrete graphics, with performance 20% to 35% faster than GTX 770M and GT 765M graphics. The GE Apache series are the portable and powerful battle laptops of today's gaming field, weighing in at only 2.55 (GE60 Apache) and 2.85 (GE70 Apache) kilograms respectively, and are specially designed to meet the higher demands of hardcore gamers around the world.
    The fast and furious GP Leopard series for popular online games
    They feature Intel's 4th-generation quad-core processors and new-generation NVIDIA GeForce 840M discrete graphics, specially designed to meet the demands of online gamers around the world. Built on 22nm production technology, the Nvidia GeForce 840M scores over 2500 points in 3DMark 11 (P) and increases overall performance by 25% over the GeForce GT 740M, while also supporting DirectX 11 effects and NVIDIA PhysX technologies. You will be amazed by the smooth gaming experience of your favorite online games at HD to Full HD resolutions; the GP Leopard series is your easy-to-carry best companion anywhere.

    I made a thread, but it has not been approved by a moderator yet, so I will ask the question in here since the topic is relevant.
    The GS Stealth in this topic is the same as the one MSI revealed at CES 2014.
    But what are the GS 70 Stealth and GS 70 Stealth Pro on the website?
    And which one is the GS 70 Stealth that was released just a few days ago?
    I am just confused by the naming of this particular series.
    I am looking for the GS Stealth from this topic above, but there seem to be so many versions of it, and some are called 70.
    Can someone explain what is what?
    EDIT:
    Never mind - I figured it out through extensive searching on various forums.
    The new Stealth is called GS70 StealthPro-024 or GS70 StealthPro-001, in case anyone wonders.

  • GTX Titan in After Effects & Premiere CS6

    GTX Titan Superclocked now running in OS X 10.8.3 using the latest NVIDIA Web Driver (313.01.01f03) and CUDA Driver Version 5.0.59.
    I've modified both the "cuda_supported_cards.txt" file for Premiere Pro CS6 (6.0.2) and the "raytracer_supported_cards.txt" file for After Effects CS 6 (11.0.2) to include the "GeForce GTX TITAN."
    Premiere recognizes the card fine and is able to leverage the Titan for its Mercury Playback Engine GPU Acceleration (CUDA). However, After Effects reports the following error when starting up:
    After Effects error: Ray-traced 3D: Initial shader compile failed (5070 :: 0)
    Mylenium discusses the error in detail here:
    http://myleniumerrors.com/2013/01/06/5070-0-3/
    Removing the "GeForce GTX TITAN" from the "raytracer_supported_cards.txt" file resolves the error message, however, this obviously renders the Titan useless for any ray-traced acceleration using the Titan.
    Any thoughts on why After Effects is reporting this error (even though Premiere is capable of enabling CUDA with the Titan) and what the solution might be?
    One final note: even with the startup error, After Effects successfully reports the GTX Titan under GPU Information (see attached image below).
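    For anyone else attempting the same edit: both whitelist files are just plain text lists of card names, one per line, and the modification amounts to appending the exact device name string, matching capitalization. A sketch of what the end of each file might look like after the edit (the pre-existing entries shown here are illustrative):

    ```
    GeForce GTX 680
    GeForce GTX 690
    GeForce GTX TITAN
    ```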

    I agree with the above. A couple of things I would add:
    - A 1200W PSU should be good for nearly all systems, unless you are running 6+ HDDs and multiple GPUs. The Titan is actually very energy-efficient for its speed.
    - If going the overclocked 3930K route, I'd suggest a closed-loop water CPU cooler such as the Corsair H100, at a minimum.
    - Cooler Master and Thermaltake both make great cases. Airflow is very important.
    - I have also found in previous builds that a 128 GB SSD for the OS is not enough; it gets full very quickly. Personally, I'd recommend at least 256 GB for SSD overhead, especially if you want to install the full Adobe CC suite plus multiple 3D apps and programs.
    - Don't forget that data backup is very important, especially when using RAID for projects, as arrays are not super stable and can have disk failures. You can get an external 2-4 TB USB 3.0 drive pretty cheap. It will save your life one day.
    - Definitely look into the Red Rocket PCIe card if you can afford it. It is a much better use of your money for editing RED footage than another Titan or Xeon chips.

  • Rep tells me my Dual Xeons don't support GTX Titan

    I own/use a workstation computer for editing HD video and 2D animation - I don't game. When I purchased the machine a few years ago, a rep at the company told me that my dual Xeons would only support Quadro-based (workstation) cards. I'm looking to upgrade the overall performance of my machine, be it a GeForce GTX Titan or 600/700-level GPU, or upgrading my processors, but I need to make sure that I indeed CAN switch to GeForce from Quadro. Below are my specs, straight from my invoice:
    Intel Motherboard Dual Socket Xeon S5520SC
    Intel Xeon CPU Fan Heatsink STS100C
    2 x 2.26GHz Intel "Nehalem" Xeon Quad Core [8MB]
    12GB 1333MHz DDR3 Triple Channel SDRAM (6 x 2GB)
    1TB High Speed Hard Drive [64MB Cache, 7200RPM]
    StormDrive Dual Layer CD/DVD Writer
    850W Silent Power Supply
    Windows 7 Pro [64-bit]
    NVIDIA Quadro FX 3800 Workstation Graphics Accelerator [1GB]
    PCI 3 Port FireWire [TI Chipset]
    (additional) 1TB High Performance Drive [64MB Cache SATA 3 6GB]
    (additional) 2 TB High Performance Drive [64MB Cache SATA 3 6GB]
    ALSO, can anyone attest to which is better (on the whole): GeForce or Quadro?

    Your system would handle a Titan card, but most of the available processing power in the card would go unused. The GeForce cards right now have better specs than the Quadros, except for the extremely expensive K6000, which is equal to a Titan. With the hardware MPE engine, the specs of the card determine the performance the GPU adds; nothing specific to the Quadro cards affects performance in Adobe. You can get 10-bit color preview with the Quadros in Adobe, and you can't with the GeForce cards: the GeForce cards only support 10-bit color via DirectX, and Adobe uses OpenGL. Beyond that, a GeForce card would be far better suited. I would suggest a GeForce GTX 760 or GTX 770 card. Both would give you the best performance you will see from that system with GPU acceleration.
    Eric
    ADK

  • AE CC timeline lag & not realtime preview (Win7 GTX Titan)

    As of a few days ago I'm suddenly experiencing lag when navigating around, scrubbing the timeline (as in cursor movement), using keyboard shortcuts, drag-selecting keyframes, etc.
    Also, RAM preview occasionally shows a red warning that it's not playing back in realtime.
    This is a pretty new system that I'm only starting to use in anger, and all seemed fine until a few days ago; I haven't updated anything to my knowledge.
    I've an SSD for the system disk, a dedicated SSD for the AE cache, and 2 x HDD (1 for projects, 1 to render to).
    I've deleted my preferences and purged all the caches several times and still have issues.
    A new empty project seems fine, but if I make a new comp and add a new layer, it starts to lag once I Ctrl-D duplicate the layer (after about 20 copies) and try to Ctrl-A and expand a property (e.g. position, P).
    As I say, I've been working on a few projects with lots of layers over the past month and not experienced the lag (though I have had the red RAM preview warning).
    I'm baffled, as everything seemed fine until very recently and I'm not aware of updating anything prior to noticing this issue.
    Any ideas, anyone?
    Specs:
    AE CC 12.2.0.52
    Hardware:
    Win7 Pro
    CPU - i7-4930K 3.40GHz
    32GB RAM
    GTX Titan 6GB, driver 332.21
    Fast Draft: Available
    Texture Memory: 2422.00 MB
    Ray-tracing: GPU
    OpenGL
    Vendor: NVIDIA Corporation
    Device: GeForce GTX TITAN/PCIe/SSE2
    Version: 2.1.2
    Total Memory: 5.91 GB
    Shader Model: 4.0 or later
    CUDA
    Driver Version: 6.0
    Devices: 1 (GeForce GTX TITAN)
    Current Usable Memory: 5.72 GB (at application launch)
    Maximum Usable Memory: 6.00 GB

    Update: as a test I installed AE CS6 (after I'd finally managed to locate the download link!) and initial tests don't result in the same problems as CC. (This is both with and without the CS6 raytracer file modified to accept the Titan).
    I can easily replicate the problem in CC:
    1) New comp (1920x1080)
    2) New solid layer - scrub timeline back and forth: fine, no lag.
    3) Duplicate layer - scrub timeline back and forth: fine, no lag.
    4) Duplicate layer a few more times - scrub timeline back and forth: start to notice lag.
    5) Keep duplicating layers and I start getting a delay in the duplication process itself.
    6) Get to about 30 layers and the lag increases - scrub timeline back and forth: noticeable lag.
    7) CTRL-A (delay), P (delay), click for a keyframe (delay), scrub timeline back and forth: noticeable lag.
    This has now made CC pretty unusable, but it was fine (bar occasional slow RAM preview playback) a few days ago!
    Please don't tell me I have to roll back to CS6....
    Is this the right place to report this issue? Anyone?

  • GTX Titans in Expansion Chassis with 10.9.4

    I have been searching online for this issue and found a few forums discussing it (here's one: http://forums.macrumors.com/showthread.php?t=1713478).
    It is not commonly discussed, as the configuration is not common, but I have no other way to reach out to Apple about this issue, so here I am posting it. I sincerely hope someone at Apple will read this, fix it, and give some indication that they know of this issue and are working on it.
    I have a 12-core Mac Pro 5,1 hooked up to a Netstor PCIe expansion chassis (similar to a CUBIX one, which also has the same problem). In the expansion chassis I have 3 GTX Titans to be used as GPUs for Resolve and OpenCL/CUDA processing.
    In 10.8.5, System Profiler will show and detect all 3 Titans, and Resolve and all other apps can detect and use the 3 Titans.
    However, with 10.9.2, 10.9.3, and 10.9.4, only 2 of the Titans will be detected when 3 are installed, the 3rd one showing as "Display, no kext loaded". It doesn't matter which configuration or slot location: it will detect 1 less Titan whenever more than 1 is installed. For instance, if I install only 2 cards, only 1 will be detected and the other shows as "Display, no kext loaded". I have tried mixing the cards to see if it is a problem with 1 of the cards, but it doesn't matter. As long as I have 2 or more cards installed, one of them will be "disabled".
    If I install just 1 card, it will be detected fine and is usable in Resolve.
    I was hoping each Mavericks update would solve this problem, but from 10.9.2 through 10.9.4 it persists. Oddly, the reverse problem is seen with Titan Blacks: Titan Blacks are fully detected on Mavericks, but exhibit on 10.8.5 the same problem the Titans do on Mavericks.
    I sincerely hope someone from Apple will acknowledge and fix this issue, as it is clearly an OS problem; no Nvidia web driver can solve it. If anyone else sees this problem, please chime in if you know of any solutions. Even if it involves hacking kext files, I'd gladly do it.
    Thanks!!

    Apple development does not monitor these forums. These are user-to-user forums.
    You should get a free developer account and file a bug report.

  • Best brand of nVidia GTX 285 card?

    Now that I've changed my mind and decided to get ready to someday buy an HD camcorder, I'm looking at the various brands of nVidia GTX 285 cards to use with CS5... with prices ranging from $370 to $450.
    The top brands seem to be EVGA ($390-$430), BFG ($370-$400), XFX ($390-$450) and GIGABYTE ($370) at Newegg.
    Is one of these brands better than the others?
    Note that I do NOT believe in overclocking, so I am ONLY going to consider a brand + model that uses the standard clock speed as specified on the nVidia web site.

    Brand is unimportant, up to a point.
    Look at GHz and memory speeds; some are sold factory-overclocked.
    Also check the warranty: some have lifetime coverage - if I recall, BFG, XFX and EVGA?
    I prefer Zotac, as they tend to be faster for less.
    Scott
    ADK

  • Nvidia GTX 980 Slow Playback and Rendering on 36 Core Dual Xeon

    I have a dual Xeon E5-2699 v3 machine (18 cores per CPU; 36 physical; 72 logical).  I am processing Red Dragon footage at 6K resolution and 60 frames per second.  I have a Red Rocket-X card as well as four (4) GTX 980 Nvidia GPUs.  When I try to use the Mercury Playback Engine, the performance is pitiful; when I select CPU/software only, I get over 120 fps.  The Adobe CC software ONLY uses one of the two CPUs installed in the system at any one time - it is as if it doesn't see the other CPU, even though Windows acknowledges that both are there and working (Cinema 4D and Cinebench use all 72 logical cores and will max them all out).  However, even with the Adobe CC software only seeing 1 of 2 CPUs, CPU playback is still way faster than the Mercury Playback/CUDA version.  This makes no sense to me, since the GTX 980s are the fastest/newest cards from Nvidia.  I have read that Premiere CC will only use one GPU for playback but will use multiple cards for renders.  HOWEVER, when rendering, GPU-Z says that each of my cards is working at less than 25% of capacity.  The rest of the system is NOT the bottleneck: I have 256GB of DDR4 RAM, along with 12 x 1TB SSDs attached to a 12-channel 12Gb/s e-SATA card with 8GB of RAM on the card.  I am getting sustained transfers on the RAID of over 2 gigabytes per second.
    I have invested a lot in this machine in an effort to save rendering time.... Adobe's recommendations for multiple CPUs and multiple graphics cards don't seem to matter, as my render times are no faster than they were using FCP X on a 2010 12-core Mac Pro tower.
    I hope it is just a driver or programming issue related to the 2699s and the GTX 980 being so new???
    Any Help would be appreciated,
    Michael

    Bill,
    Thank you for making the time.  I followed your instructions - great little piece of software.  Here is a screenshot from when I ran my test...
    As you can see, even in your test only 1 CPU is being used by Premiere.  The results from the benchmark were uploaded under the computer name RedShredder (I use it to edit and render Red Dragon 6K footage).
    The results from your output file are as follows: "32","44","8","206", Premiere version 8.2.0.65
    I couldn't register at the site but will try again later.
    My concern is that no matter what I do, Premiere and the entire Creative Cloud ONLY see ONE CPU on my dual Xeon system.  Sometimes it will run on the other one, and you can force it to by changing the processor affinity selection, but no matter what - even if you select all cores on both CPUs - it will still pick one and just run on that processor.
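    In case it helps anyone experiment with affinity from a script rather than Task Manager, here is a minimal sketch using the third-party psutil package (the process name and the choice of the first 36 logical processors are assumptions; adjust for your system):

    ```python
    import psutil

    # Hypothetical process name - check the exact name in Task Manager.
    TARGET = "Adobe Premiere Pro.exe"

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == TARGET:
            print("current affinity:", proc.cpu_affinity())
            # Pin to the first 36 logical processors (one CPU's worth
            # on this dual 18-core, hyper-threaded system).
            proc.cpu_affinity(list(range(36)))
            print("new affinity:", proc.cpu_affinity())
    ```

    For what it's worth, Windows schedules a process within a single processor group of at most 64 logical processors unless the application is explicitly group-aware, which may be why a 72-logical-core machine shows only one CPU busy.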
    Any help would be appreciated,
    Michael

  • Nvidia GTX 650 Ti Boost vs Nvidia GTX 580

    Hi everybody,
    I have a hard question about Nvidia GTX video cards.
    Is the Nvidia GTX 650 Ti Boost adequate for editing full HD video with Premiere Pro CS6?
    I have chosen this video card because the price is very attractive: "only" 150 € (more or less).
    Another question: is it adequate for sporadic editing with After Effects CS6?
    I have compared the GTX 650 Ti Boost with the GTX 580. The second is much more powerful, but the price is 450 € (more or less). I have found the GTX 580 used at 200 €.
    Question: which is the better choice, a GTX 650 Ti Boost new or a GTX 580 used (and maybe overclocked)?
    Thanks.
    Reference:
    gtx 650 ti boost http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650ti-boost/specifications
    gtx 580 http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-580/specifications

    Software components? I use only Premiere and After Effects.
    I'm sorry, I expressed myself badly.
    It is clear that the 580 is faster than the 650, but my question is: is the 650 powerful enough? I don't want to pay 200 euros for an "insufficient" video card.
    The second question is whether the used graphics card is a good choice.
    Thank you so much.
    PS: the configuration of the system is:
    Intel i7 920 (8M cache, 2.66 -> 2.93 GHz)
    RAM: 12 GB triple channel @ 1066 MHz
    Motherboard: Asus P6T
