Mercury: GTX 265 limitations vs Quadro?

On this NVidia Blog here:
http://blogs.nvidia.com/ntersect/2010/02/nvidia-quadro-driving-adobes-new-mercury-playback-engine.html
The author, in replying to a comment, states:
"•Specifically with the Mercury Playback Engine, when working with more than three layers, only Quadro will accelerate the entire project on the GPU. GeForce products will only accelerate layers 1 – 3, with any additional computation falling back to the CPU."
Can Dennis or anyone else from Adobe confirm that this is true? If so, is this due to the amount of memory on the GPU, or what?
Cheers
Mark

From a discussion I had at the Adobe Road Show about a year ago, the real limitation is not the amount of memory, but the number of cores or streams.
The 265 has 216 cores, the 275 and 285 have 240, and the 295 has 480 (dual GPU). But it can't simply be that, because the Quadro FX 4800 ($1500) only has 192 and the FX 5800 ($3000) has 240. Interesting question.
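If you want to check what your own card reports, the CUDA runtime exposes the multiprocessor count directly; here is a minimal sketch (the 8-cores-per-multiprocessor factor is specific to the GT200-generation cards discussed here, and is my assumption for this example):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // GT200-class parts (GTX 260/275/285, Quadro FX 4800/5800) have
            // 8 CUDA cores per multiprocessor; other architectures differ.
            printf("Device %d: %s, %d SMs (~%d CUDA cores), %zu MB VRAM\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.multiProcessorCount * 8,
                   prop.totalGlobalMem / (1024 * 1024));
        }
        return 0;
    }

On a GTX 285 this would report 30 multiprocessors, i.e. the 240 cores mentioned above.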

Similar Messages

  • Quadro 2000 v. gtx 570 v. Quadro 4000

    Can anyone comment on the MPE performance difference between the Quadro 2000, the gtx 570 and the Quadro 4000?  I'll be using hacked GH2 footage with CBR intra-frame coding, which (I'm told!) can and should be put on AVC Intra time-lines (not AVCHD). 
    I'm concerned most with time-line responsiveness and playback performance.  DVD encoding, exporting footage, etc., won't be happening very often, so that's of less concern. 
    I'm aware that the GTX 570 is probably the best buy of the three, but that's not the question.  Many thanks.

    In any case, performance-wise the Quadro 2000 is a waste of money: it costs almost $400, yet it performs as slowly as a $100 card. And in Premiere Pro CS5.5, encoding performance becomes significantly slower with lesser GPUs. Look up posts by Bill Gehrke and you may find a list of GPUs along with their performance charts in the PPBM5 benchmarks. Bill tested a wide range of GPUs, from a GTX 580 all the way down to an old 9500 GT. Pay particular attention to the MPEG-2 DVD scores. You will find that even on an overclocked i7-2600K system, the system with a GTX 550 Ti took more than twice as long (146 seconds) as the GTX 580 (60 seconds) or even a GTX 560 Ti 448-core (68 seconds) in that test. The Quadro 2000 would have performed even slower than the GTX 550 Ti in that same test (heck, the GTX 550 Ti itself is slightly slower than a first-generation GTX 260 in this test, despite having an equal number of CUDA cores, due to the 550 Ti's slightly lower total memory bandwidth). The Quadro 4000 would have performed roughly on a par with Bill's tested GTX 285 (117 seconds) in that same test.
    On the other hand, if you're encoding to H.264, then the Quadro 2000 would have been only slightly slower than the GTX 570; you would have had to downgrade further to Quadro 600 (GeForce GT 430) level to see a significant degradation of H.264 encoding performance.
    Secondly, the Quadro 2000 has only 1GB of RAM total. With your footage, it is possible that any effects you apply will eat up more than the amount of memory on the card. If a scene needs 1.5GB of VRAM to render in MPE GPU mode, the 1GB card will run out. And when a rendering job runs out of VRAM, that entire frame or scene falls back entirely to MPE software-only mode, which results in slower performance and may also degrade image quality.
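    If you want to see how close your card is to that ceiling, the CUDA runtime can report free VRAM; a minimal sketch (this only reads the headroom; Premiere's actual fallback logic is internal and not reproduced here):

        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void) {
            size_t free_b = 0, total_b = 0;
            // Free vs. total memory on the current CUDA device.
            cudaMemGetInfo(&free_b, &total_b);
            printf("VRAM: %zu MB free of %zu MB total\n",
                   free_b / (1024 * 1024), total_b / (1024 * 1024));
            // Per the post above: if a frame's working set exceeds the free
            // figure, MPE drops that frame back to software-only rendering.
            return 0;
        }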
    And I strongly recommend avoiding the purchase of off-the-shelf PCs or workstations to begin with: Those systems are way too expensive for such bottom-of-the-barrel performance, and upgrading such a system via the manufacturer would have cost you three to four times more than if you bought those same parts elsewhere. If you can't build an editing workstation yourself (or find it too much of a bother), consider contacting a vendor who specializes in custom-configured editing systems such as ADK.

  • Dilemma?? GTX 570, 580 or Quadro 4000

    Hey guys, apologies if I'm in the wrong place for this; it's my first ever post online! I have a slight dilemma: I have just recently ordered a new setup, but I'm confused as to which graphics card to go for. I will mainly use the system for HD video editing (Sony HVR-Z7) using Premiere Pro CS5 and After Effects. From what I have read, the GTX range is more than capable of accelerating certain effects in Premiere Pro, but will the Quadro be better?
    My other main use of the PC is that I would like to hook it up to my Sim2 Lumis Host projector, via HDMI or DVI. Since the Quadro has 10-bit video, would this in any way reproduce a much better image quality than the GTX range? Or is this limited to the DisplayPort output? Is the 10-bit video sent through all ports, even HDMI/DVI? I know that my projector has 10-bit video processing. I would really appreciate some guidance on this, as I'm wanting to place an order for the card asap.
    (Money is not an issue with regards to those cards)
    Many Thanks guys.

    BTW, in case you need a reference, this is from Nvidia.com:
    http://www.nvidia.com/docs/IO/102043/GTX-570-Web-Datasheet-Final.pdf
    Page 3
    Advanced Display Functionality
    • Two pipelines for dual independent display
    • Two dual-link DVI outputs for digital flat panel display resolutions up to 2560×1600
    • Dual integrated 400 MHz RAMDACs for analog display resolutions up to and including 2048×1536 at 85 Hz
    • HDMI 1.4a support including GPU accelerated Blu-ray 3D support, x.v.Color, HDMI Deep Color, and 7.1 digital surround sound. See www.nvidia.com/3dtv for more details.
    • DisplayPort 1.1a support
    • HDCP support up to 2560×1600 resolution on all digital outputs
    • 10-bit internal display processing, including support for 10-bit scanout
    • Underscan/overscan compensation and hardware scaling
    In case you need a reference for what Deep Color is in the HDMI standard, BTW:
    http://www.hdmi.org/learningcenter/faq.aspx
    HDMI 1.3:
    Higher speed: HDMI 1.3 increases its single-link bandwidth to 340 MHz (10.2 Gbps) to support the demands of future HD display devices, such as higher resolutions, Deep Color and high frame rates. In addition, built into the HDMI 1.3 specification is the technical foundation that will let future versions of HDMI reach significantly higher speeds.
    Deep Color: HDMI 1.3 supports 10-bit, 12-bit and 16-bit (RGB or YCbCr) color depths, up from the 8-bit depths in previous versions of the HDMI specification, for stunning rendering of over one billion colors in unprecedented detail.
    Broader color space: HDMI 1.3 adds support for “x.v.Color™” (which is the consumer name describing the IEC 61966-2-4 xvYCC color standard), which removes current color space limitations and enables the display of any color viewable by the human eye.
    New mini connector: With small portable devices such as HD camcorders and still cameras demanding seamless connectivity to HDTVs, HDMI 1.3 offers a new, smaller form factor connector option.
    Lip Sync: Because consumer electronics devices are using increasingly complex digital signal processing to enhance the clarity and detail of the content, synchronization of video and audio in user devices has become a greater challenge and could potentially require complex end-user adjustments. HDMI 1.3 incorporates automatic audio synching capabilities that allows devices to perform this synchronization automatically with total accuracy.
    New HD lossless audio formats: In addition to HDMI’s current ability to support high-bandwidth uncompressed digital audio and all currently-available compressed formats (such as Dolby® Digital and DTS®), HDMI 1.3 adds additional support for new lossless compressed digital audio formats Dolby TrueHD and DTS-HD Master Audio™.
    Eric
    ADK

  • Mercury CUDA not enabling when using NVIDIA GeForce GTX 285 on Apple Mac Pro after Mavericks install

    I've been using the same setup since CS5, with Mercury CUDA running perfectly. After the recent upgrade of OS X to 10.9, Mercury CUDA is no longer available and Premiere only lets me run with the OpenGL or software options. I'm using an NVIDIA GeForce GTX 285 for Apple computers. Here are the results from the GPUSniffer program in the latest Premiere Pro 7.1.0 files. The LAST line makes me chuckle, because this card is the first on the list of supported cards in the "cuda_supported_cards.txt" file. Anybody else seen this?
    --- OpenGL Info ---
    Vendor: NVIDIA Corporation
    Renderer: NVIDIA GeForce GTX 285 OpenGL Engine
    OpenGL Version: 2.1 NVIDIA-8.18.27 310.40.05f01
    GLSL Version: 1.20
    Monitors: 1
    Monitor 0 properties -
       Size: (0, 0, 1920, 1080)
       Max texture size: 8192
       Supports non-power of two: 1
       Shaders 444: 1
       Shaders 422: 1
       Shaders 420: 1
    --- GPU Computation Info ---
    Found 1 devices supporting GPU computation.
    OpenCL Device 0 -
       Name: GeForce GTX 285
       Vendor: NVIDIA (Apple platform)
       Capability: 1.2
       Driver: 1
       Total Video Memory: 1024MB
       * Not enabled by default because it did not match the named list of cards.
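    The "named list" that message refers to is essentially a string match of the reported CUDA device name against cuda_supported_cards.txt. A hypothetical reconstruction of what such a check amounts to (the matching logic here is my assumption, not Adobe's actual code):

        #include <stdio.h>
        #include <string.h>
        #include <cuda_runtime.h>

        // Illustration only: compare the reported CUDA device name against
        // each line of the named-cards file, as GPUSniffer's message implies.
        int main(void) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);

            FILE *f = fopen("cuda_supported_cards.txt", "r");
            if (!f) return 1;

            char line[256];
            int supported = 0;
            while (fgets(line, sizeof line, f)) {
                line[strcspn(line, "\r\n")] = '\0';  // strip the line ending
                if (line[0] != '\0' && strstr(prop.name, line) != NULL) {
                    supported = 1;
                    break;
                }
            }
            fclose(f);
            printf("%s: %s\n", prop.name,
                   supported ? "on the named list" : "not on the named list");
            return 0;
        }

    Note the irony in the post above: the name match can still fail even when the card appears on the list, if the platform reports the device differently.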

    I found a link from a couple of days ago to a Creative COW forum post, where one of the posters, saying they are from Adobe, states this:
    Re: Mercury Playback Engine MacPro
    by Peter Garaway on Nov 12, 2013 at 9:49:16 am
    Hi Wendell,
    Sorry for the inconvenience. NVIDIA is currently working on drivers that support CUDA on Mavericks 10.9 with some of the older NVIDIA cards such as the GTX 285 and the Quadro 4800.
    For others interested, the Quadro 4000, K5000 and GTX 680 etc. work with CUDA in 10.9.
    Best,
    Peter Garaway
    Adobe
    Premiere Pro
    I have the latest CUDA drivers, so I guess I am just waiting for a proper update that has the fixes to support my card.

  • NVIDIA GeForce GTX 670MX and Quadro K1000M

    Can the GeForce GTX 670MX or the Quadro K1000M support GPU acceleration in Premiere Pro?

    Check the Tech Specs on the product page. What's not listed is not supported for Mercury; it's as trivial as that. Both cards will still support the more conventional OpenGL acceleration, though.
    Mylenium

  • GT70: changing GTX to quadro

    So I wish to buy this $1000 GT70 off Kijiji. I am a CAD designer and machinist and use GibbsCAM a lot.
    The WT70 seems expensive for the same thing as a GT70.
    Would it be possible to change the graphics card from the default MXM GTX to an MXM Quadro K2100M?
    Now I know it's not recommended and the warranty will be void. I don't wanna hear theories or whatever; I know electrical procedures, I've got my static arm band, and I've built desktops before.
    I wanna know: can it be done, and will the BIOS change work?
    I was gonna buy a whitebook like this but they are not available. This comes out cheaper than a $1500 whitebook with no warranty, and I don't care about warranty. I want a solid base that I will repair myself.
    And MSI always has the best quality.

    In theory you should be able to upgrade from GTX to Quadro, but you might need to confirm whether the GTX thermal module will fit the Quadro card.

  • MPE layer bug(s) - GTX 480 and CS5 trial

    Hi All,
    Due to some strange MPE-PPro behaviour, I am now having some last minute doubts about whether to jump up to a full version of the production suite - or perhaps wait for a .1 patch.
    Please advise if any of you can spot where the cause of these problems might lie...
    These are fairly easy-to-repeat MPE-related bugs in the trial version of PPro CS5, so far I have found issues both with scaling and when duplicating clips to new layers.
    Current hardware is as follows (will be looking for a new i7-920/X58 mobo combo if CS5 is purchased)
    Mobo- Asus Striker Extreme
    CPU - QX6700 C2Q @ 2.66GHz
    RAM - 4GB
    Win7 Pro x64
    So for testing purposes, I have "enabled" MPE using a GTX 480 and all seems fine for basic cuts-only or opacity fades on multiple layers. The bugs I have found seem to be related to adding more than 2 effects (even on just a few short clips...)
    Scenario 1:
    1) DV clip on layer 1, second DV clip on layer 2 scaled to 25% and positioned top left corner. Yellow line - playback behaviour is OK so far.
    2) With MPE enabled, if I add some fast CC to layer 2 clip then the footage plays back as though scaling is reset to 100% (although the slider still shows 25%)
    3) Switch from MPE to SW preview and the clip correctly returns to 25% scale (still with CC applied). Switch back and the scaling is again lost.
    Scenario 2:
    1) DVCPRO 1080 25p clip on layer 1, duplicated to layer 2 with a gaussian blur and 15% opacity (creates a basic glow effect).  Yellow line - playback behaviour is OK so far. Repeat for a second clip on to layers 3&4 with a simple opacity fade-up at start on each, overlaying the last few secs of 1&2 for the length of the fade.
    2) With MPE enabled, layers 3&4 only show when the layer 1&2 clip has reached the out point, ie the fade up on layer3&4 is "ignored" and the second clip only cuts in when the first clip has reached the out point. All the time with a Yellow line.
    3) Switch from MPE to SW preview and everything works as it should (after preview render is generated of course).
    So my questions are, can anyone with a GTX 285 or (certified) Quadro repeat either of these bugs? Am I better to wait until the GTX 480 is officially certified in a few months (possibly including a patch) or is there another cause elsewhere in my current hardware?
    MPE is really nice (in theory at least) but it worries me that it fails to calculate opacity occlusion and scaling correctly for such relatively simple sequences.
    Any suggestions/opinions/comments welcome!
    Cheers,
    Dave.
    Broader Pictures

    Hi Harm,
    I've created a fresh test project and recreated a timeline similar to scenario 1 (sticking to DV to rule out demanding codec issues)...
    Here's the OK initial version of the timeline:
    LAYER4: PIP clip scaled to 25% pos. top left, no other effects just an opacity fade-in-out
    LAYER3: PIP clip scaled to 25% pos. top right, no other effects just an opacity fade-in-out
    LAYER2: PIP clip scaled to 25% pos. bottom right, no other effects just an opacity fade-in-out
    LAYER1: Simple clip no scaling full screen (backdrop), fast cc effect w/ default values
    Preview bar is yellow and play back is fine with MPE on.
    THEN, if I add any other MPE-enabled effect to any layer, the preview goes red and the top-most layer disappears from the preview monitor!
    The rendered preview also omits layer 4, as soon as I turn off MPE, the timeline returns to red and layer 4 pops right back on the monitor!
    Is this just a trial limitation of the MPE module, where rather than restricting the 'extra' effect/layer to render-only you lose it altogether? That just seems a strange way to promote such an important feature; it certainly hasn't helped me decide on if/when to purchase, or perhaps it has?
    Any more thoughts?
    Thanks,
    Dave.

  • NVidia Quadro 4000 First Impressions

    I've seen some questions and some discussion regarding the nVidia Quadro 4000 card. Mine arrived yesterday, so I figured I would share my initial experiences with it to help give others guidance so they can make the best decisions for their own needs.
    First off, if your primary interest (or a significant interest) is playing games, then a Quadro card is not for you. nVidia's high-end cards are fine-tuned on both a hardware and a software (driver) level for the needs of pro apps and for manipulating very large data sets. That happens at the expense of some performance stats that matter to gamers. Quadro cards aren't designed to get you a higher fps in your favorite shooter; they're designed to get you better performance with ray tracing, real-time 3D environments, and scientific use. I'll leave you to surf to nVidia's web site for more marketing speak on that. In my initial tests, I found that to be completely true. I don't do much gaming, but the couple of games I tested performed no faster than the GTX-285 I had in the machine before.
    Attempting to run some more tests, I found that RealTech VR's OpenGL extensions viewer (which has some decidedly gamer-centric benchmarks) showed little to no improvement over the GTX-285 (as expected).
    Running a few test renders in Cinema 4D, I found only about a 5-7% increase in performance. That might be due to immature drivers, but it may also be due to C4D renders being more about CPU muscle (Maxon doesn't have any specific CUDA support or acceleration). What I did notice was that moving/camming around in the app was much improved. I couldn't say whether that had to do with the extra gigabyte of VRAM, or whether it was some kind of 'Fermi' magic (Fermi is the name of this generation of chip technology from nVidia).
    I have not yet gotten the chance to give Adobe CS5 (and specifically Premiere Pro and After Effects) a serious workout, though just playing around I noticed that the Quadro card had much more capacity for handling multiple layers of video in real time (I threw a dozen videos onto a main track in varying sizes of 'picture-in-picture' display, and arbitrarily adjusted the speed of some and color corrected others). It handled everything I could throw at it without appearing to break into a sweat, and I haven't yet had time to give it a proper performance test.
    Being an early adopter, I have the expectation that on initial release there will be kinks and hiccups, and that as the drivers mature the performance will improve dramatically. Based on discussions with colleagues and what I've seen in reviews, this has been the case with both the GTX-285 and the Quadro FX 4800 card, and probably was also the case with older nVidia cards. The Quadro 4000 met those expectations - it feels like this is still a work in progress. The drivers (version 256.01.00f03) are stable (no crashes, no kernel panics, no horrible situations to speak of), but based on my early results I'd guess that they're not optimized for speed, either. On the Windows side, nVidia has driver version 259 available as a 'certified' release and a higher-performance version 260 available, and performance under Windows 7 Professional (64-bit) seems better. To be fair, the card's been on the market for PCs since late July, so those drivers are more mature.
    I still need to give the card a serious workout with Adobe CS5, but so far things look promising. Anecdotally, I've also noticed that system performance is greatly improved when I'm doing lots of multi-tasking. I often have several different apps running at once, and between the new technology and the additional video memory (my old card had 1GB, this has 2GB), I find I can juggle 20+ apps and dozens of Safari windows/tabs running without the Mac Pro batting an eyelash. That's hard to quantify in a specific benchmark, but it's very welcome considering the way I tend to work.
    As the drivers improve, and as my own workflow evolves to make more use of larger datasets and more complex 3D scenes, I see the Quadro 4000 really starting to shine. Heavy-duty CUDA users may be happy to know that this card only requires a single additional power connection, which means that you can install two of these cards into a single Mac Pro (for a total of 512 CUDA cores). If you're doing big scientific work or working with CUDA-supported ray tracing (or other plugins), or doing extremely elaborate things with RED camera footage, that may likely be a game-changer for you. For me, it'll likely be quite some time before I outgrow what this card can do.

    As I'd mentioned in another thread, the card began shipping last week. I expect that it will take a couple of months for places like Apple and Amazon to dig through the large number of backorders they have (I don't think this card is produced in mass quantity, even on the PC side).
    My system setup is a Mac Pro 8x2.26GHz, 32GB RAM, 8TB HD storage, nVidia Quadro 4000 2GB driving the primary display, and nVidia GT120 driving a secondary display. I'm considering getting third party power supply unit that sits in the second optical drive bay, and would plug into the Mac Pro's power supply, and then provide additional power supply connectors that would allow me to plug in my GTX-285 as a secondary graphics card (since it uses 2 connectors, and the Mac Pro only has 2 total).
    Even when my machine was using a GTX-285 and the GT-120, I could see a difference in performance when dragging an application window (particularly if it's a 3D app) from main display to the secondary (the GT-120 is a significantly lower power card, with only 32 CUDA cores and 512MB video memory). With the Quadro driving my primary display the difference is much more noticeable now.
    From what I understand, there are some technical issues with using ATI and nVidia GPU's in the same machine, so attempting to use with a 5770 may not work. But if you were able to use them together, it would make more sense to have the Quadro card driving your primary display, since it's likely going to perform as well or better than any other card you might be able to pair it with.
    I've already given some thought to a second Quadro card down the road. As the drivers mature, and the apps I use evolve to make better use of CUDA and OpenCL, and my own workflow and skills improve to the point where I'm doing more 3D modeling/rendering (and stuff like ray-tracing), then having 2 of these cards in a single machine could really come in handy. Today it appears that all those CUDA cores and VRAM are serving to help make the apps faster and more responsive at design time, but rendering is still very CPU-centric. But tomorrow those apps will hopefully be able to tap into the GPU to help improve render times.

  • New Quadro FX cards 2010

    Hi everyone,
    I just want to confirm whether there will be new Quadro cards released some time in the following weeks/months.
    I saw in the other thread: "The FX4800 is end-of-life, the successor is expected to be announced this month... - Harm Millaard"
    I searched all over the internet and I couldn't find any news about the "new" Quadro cards.
    I was keeping an eye on the Quadro FX 3800/4800. Should I hold off for now until Nvidia announces the release of their "new" Quadro cards?
    I am using AutoCAD, 3ds Max, Adobe After Effects CS5, and Adobe Premiere Pro CS5.
    Thank you very much.
    P.S. Thank you, Harm, for posting a lot of incredible information about video editing =)
    -Mark

    You are far better off with the 470 than a 3800/4800.
    The 3800/4800 are based on the GTX 260.
    The newer Quadros will be based on the 470/480, with only the most expensive one being the 480.
    E.g.: the 5800 is a 285.
    The 470 will be supported with the next update, coming soon.
    So don't waste your money.
    Scott
    ADK

  • Can the Quadro 2000 do HD editing?

    According to the NVidia website, the best it can do is SD.
    Do I really need to spend a fortune on the Quadro 4000?
    Unfortunately my pockets are not as deep as I would like but this is my set up
    Intel Sandy Bridge Core I7 3.4GHz 2600K LGA 1155 processor
    Gigabyte P67A-UD3R B3 LGA 1155 Intel P67 (Revised B3 stepping) DDR3 2133+ Ultra Durable 3
    3X USB power support ATI CrossFireX DualBIOS SATA Raid Ready
    Kingston DDR3 1600MHz Gaming performance Hyper X Memory 8GB X 2= 16GB
    Western Digital 1TB x3 set to RAID 0
    64M SATA2 HDD
    COOLER MASTER GX 750W PSU SLI Ready Single+ 12V V2.31 Ultra silent Intelligent Fan
    STlab F171 1394b 3-port Firewire
    Blu Ray writer
    So the above is gonna cost me a bit, but I will be doing freelance videography, and I'm hoping I can cut some cost somewhere. Can I use a Quadro 2000 or a GTX 580, or do I need a Quadro 4000 for HD editing, as stated on the NVidia website?
    Thank you

    To be honest, the Quadro 4000 is actually below the GeForce GTX 460 or GTX 560 (hardware-spec-wise): The Quadro 4000 is based on the original GF100 GPU that is relatively inefficient in terms of performance per watt compared to newer Fermi GPUs. In this case, the GF100 Fermi was supposed to have 512 CUDA cores (but only a maximum of 480 CUDA cores are ultimately enabled, as they were in the GeForce GTX 480) - but the Quadro 4000 has half of the GF100's CUDA cores disabled, resulting in only 256 CUDA cores being enabled. (For comparison, the most cutdown version of the GF100 GPU used in GeForce cards, the GeForce GTX 465, still has 352 CUDA cores enabled.) The GeForce GTX 460 series and the GTX 560, on the other hand, are based on newer and slightly more efficient versions of the Fermi architecture. The GeForce GTX 460 SE is the slowest of the 460/560 series GPUs, but still has 288 CUDA cores enabled. The full-blown GTX 460 and the GTX 560 (a cut-down version of the 384-core GTX 560 Ti) both have 336 CUDA cores.
    Thus, the Quadro 4000 is still underspecced (GPU-wise) compared to even a GTX 460 SE, let alone a GTX 560 Ti that we've been recommending as a BFTB (Bang-For-The-Buck) choice. And even the top-of-the-line Quadro 6000 is only slightly faster than a GTX 470 but is still slower than a GTX 480, 570 or 580.

  • GTX285 vs Quadro 4000

    Hi guys! I have a question about a comparison between the GTX285 and the new Quadro 4000. I'm working with Final Cut Studio, Shake and Cinema 4D. What do you think: will replacing the GTX285 with the new Quadro 4000 give a performance boost? How big will it be in these particular applications?

    I tend to discount most of what I've seen on the blogger/review sites; they tend either to be game-centric or not to have a really great understanding of how a big machine (like the Mac Pro) gets used for pro video apps.
    Hatter, has there been any confirmation yet on a consumer-level GTX570 card for the Mac, or is that speculation (or via flashed/hacked drivers)?
    As Hatter said though, answers would have to be 'in theory' at this time, since the Quadro 4000 for Mac hasn't yet shipped (or at least arrived yet, I've got mine on order). Guessing/speculating can be extremely difficult, too, since it's a case of comparing Apples to Oranges (sorry, couldn't resist). The GTX285 is a 'consumer-grade' card which is hardware and driver-optimized for performance where it matters in gaming, and the Quadro series is 'pro-grade' and is hardware/driver-optimized for performance in pro apps (in this case, 3D rendering, data modeling, etc).
    According to representatives I've spoken with at nVidia, the Quadro 4000 should easily deliver at least twice the performance of the Quadro FX 4800 (and as much as 5-8x faster in certain operations). The Quadro FX 4800 card was considered to be 2-3 times faster than the GTX-285 (depending on the application you're using). I've never owned a Quadro card myself so I haven't done extensive testing with one, but from the demos and time spent playing around with other peoples' machines that had Quadro FX 4800's installed, I don't think it's a false claim.
    I can tell you that you won't see a huge improvement when it comes to Final Cut Studio... for now. There's an upgrade due in the spring that will hopefully be able to take better advantage of newer technology (not just in GPU's, but with your whole machine), hopefully then you'll see a massive improvement.
    Adobe's Premiere Pro and After Effects CS5 have support for nVidia's CUDA technology, which lets them perform certain functions up to 10 times faster than without hardware acceleration. And the part I like most, a lot of stuff that would previously have required rendering in order to preview can easily be done in real-time. That's with just the GTX-285, having the Quadro 4000 will increase both the complexity and the number of layers that I can have in my projects and still get real-time performance.
    The boost you see in C4D performance may depend on a number of factors, including what version you're using, what kind of work you're doing, and what plugins you're using. Octane Render is a ray-tracing plugin that looks incredibly promising, tapping into CUDA to boost the speed of ray-traced renders. I don't use C4D but some folks I consult with do (and have interest in ray tracing), for them the Quadro 4000 (or even better, putting two Quadro 4000's in a single Mac Pro) could be a real game-changer. But even if you don't have CUDA-optimized ray tracing plugins, the performance increase should be in the neighborhood of 4-6x (twice the speed of the older Quadro card, which itself is 2-3x faster than the GTX-285).
    Hopefully the card will start shipping soon, and we'll start to see the results first-hand.

  • Quadro V Geforce and Desktop V Mobile GPU

    I decided to list key specs in a table to plot the 'de-tuning' done to mobile graphics cards compared to their desktop twins, and more specifically the differing amounts applied to GTX as compared to Quadro cards. I hope the table works OK; I'm a first-time user of them.
    Specification    | GTX 660    | GTX 660M | De-tune | Quadro K5000 | Quadro K5000M | De-tune
    Memory Size      | 2GB        | 2GB      | 0%      | 4GB          | 4GB           | 0%
    Memory Type      | GDDR5      | GDDR5    | -       | GDDR5        | GDDR5         | -
    Memory Interface | 192-bit    | 128-bit  | 33%     | 256-bit      | 256-bit       | 0%
    Memory Bandwidth | 144.2 GB/s | 64 GB/s  | 55.7%   | 173 GB/s     | 96 GB/s       | 44.5%
    CUDA Cores       | 960        | 384      | 60%     | 1536         | 1344          | 12.5%
    Max Power        | 140W       | ~50-75W* | 64-46%  | 122W         | 100W          | 18%
    *nVidia do not list the max power consumption where I was looking. If anyone can enlighten me, please do! I have seen it guessed at 50W and quoted as 75W at another site.
    The cards I chose are arbitrary. The GTX660 because I am buying a laptop with this variant in and the K5000 because it was the only model I found showing in both categories. The values that are percentages show the amount of de-tuning.
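    The percentages are simple to reproduce: de-tuning is just the mobile part's shortfall relative to its desktop twin, (1 - mobile/desktop) x 100. A quick sketch using the bandwidth and core figures from the table above:

        #include <stdio.h>

        // De-tuning: how far the mobile part falls short of its desktop twin.
        static double detune(double desktop, double mobile) {
            return (1.0 - mobile / desktop) * 100.0;
        }

        int main(void) {
            printf("GTX 660M bandwidth: %.1f%%\n", detune(144.2, 64.0));    // ~55.6%
            printf("GTX 660M cores:     %.1f%%\n", detune(960.0, 384.0));   // 60.0%
            printf("K5000M bandwidth:   %.1f%%\n", detune(173.0, 96.0));    // ~44.5%
            printf("K5000M cores:       %.1f%%\n", detune(1536.0, 1344.0)); // 12.5%
            return 0;
        }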
    My conclusion from this is that Quadro cards are allowed to retain much of their performance in a laptop environment because they will be purchased by users less interested in battery operation. The GTX is hit with the all-in-one watering-down brush, yet nVidia still uses the same product number, implying similar high performance. Buyer, do your research!
    Hope this is useful/interesting to someone.
    Peter.
    Refs:
    GTX 660, GTX 660M, Quadro K5000, Quadro K5000M

    I couldn't edit the original table anymore so here it is modified.
    Spec             | GTX 660    | GTX 660M | De-tune | GTX 680    | GTX 680M   | De-tune | Quadro K5000 | Quadro K5000M | De-tune
    Memory Size      | 2GB        | 2GB      | 0%      | 2-4GB      | 4GB        | 0%      | 4GB          | 4GB           | 0%
    Memory Type      | GDDR5      | GDDR5    | -       | GDDR5      | GDDR5      | -       | GDDR5        | GDDR5         | -
    Memory Interface | 192-bit    | 128-bit  | 33%     | 256-bit    | 256-bit    | 0%      | 256-bit      | 256-bit       | 0%
    Memory Bandwidth | 144.2 GB/s | 64 GB/s  | 55.7%   | 192.2 GB/s | 115.2 GB/s | 40%     | 173 GB/s     | 96 GB/s       | 44.5%
    CUDA Cores       | 960        | 384      | 60%     | 1536       | 1344       | 12.5%   | 1536         | 1344          | 12.5%
    Max Power        | 140W       | 50-75W*  | 64-46%  | 195W       | 100W       | 49.8%   | 122W         | 100W          | 18%
    Cost             | $300       | $225-275 | -       | ~$500      | $750-840   | -       | $1800-2000   | ~$2500        | -
    Comparing the GTX680 with the Quadro K5000 tells a slightly different story. The GTX680M is the cheaper and the less 'hobbled' of the two, except where power is concerned, which is no doubt a good thing.
    The question in my mind now is, does PPro take advantage of the GTX680M as well as it does the Quadro K5000M?

  • List of supported CUDA Cards (CS5)

    Adobe is working on a playback and rendering engine for Adobe Premiere Pro called the Mercury Playback Engine. This new engine is NVIDIA® GPU-accelerated, 64-bit native, and architected for the future. Native 64-bit support enables you to work more fluidly on HD and higher resolution projects, and GPU acceleration speeds effects processing and rendering.
    The Mercury Playback Engine offers these benefits:
    Open projects faster, refine effects-rich HD and higher resolution sequences in real time, enjoy smooth scrubbing, and play back complex projects without rendering.
    See results instantly when applying multiple color corrections and effects across many video layers.
    Work in real time on complex timelines and long-form projects with thousands of clips — whether your project is SD, HD, 2K, 4K, or beyond.
    Ensure your system is ready to take advantage of the Mercury Playback Engine in a future version of Adobe Premiere Pro. The Mercury Playback Engine works hand-in-hand with NVIDIA® CUDA™ technology to give you amazingly fluid, real-time performance. See it in action
    * PR CS5 supports the following list of CUDA cards:
    GeForce GTX 285 - Windows and Mac
    Quadro FX 3800  - Windows
    Quadro FX 4800  - Windows and Mac
    Quadro FX 5800  - Windows
    Quadro CX       - Windows
    More hardware details:
    http://www.adobe.com/products/premiere/systemreqs/
    [Moderator's Note: the discussion about Adobe's choices for supported cards was moved to the Premiere Pro Main Forum]
    Link

    Paste of Post by Will Renczes (Adobe)
    http://forums.adobe.com/message/2739509#2739509
    >>>
    Now that the launch is done and this information is all public, I'm going to summarize all the bits of information that have been floating around into one distilled post:
    The Mercury playback engine comprises 3 areas (our chief weapons are surprise, surprise and fear... nevermind...):
    - 64 bit support, and better memory management / frame cache management / sharing between the Adobe apps (ie Premiere and After Effects & the Media Encoder have a notion of shared memory now, and are aware of how much is being consumed by their peers);
    - optimizations to multithreaded rendering, to the playback's pipeline, speed improvements with various media types, and all around general fine tuning
    - CUDA acceleration of effects / transforms / pixel conversion routines.
    Don't have a supported CUDA board?  You still get two out of three.  Might not seem as sexy on the cover, but CS5 is still a massive improvement over CS4 even without the hardware acceleration.
    (Conversely:  let me dispel the myth that you can drop in a CUDA supported board into any box and you magically get umpteen layers of RED 4K in realtime.  All that CUDA does is free the CPU from the tasks of doing image processing - video footage however still needs to be decoded by the CPU.  If you're looking to do high end 4K, do yourself a favor and don't shortchange yourself on a cruddy box.  Get an i7, for cryin' out loud...  but I digress)
    Now, why the limited card selection?
    One of the biggest themes was to improve stability and making Premiere truly earn the Pro moniker.  To quote another engineer, "This was a decision about being Pro."  By limiting the selection of cards, you have a guarantee that the product will do what it's supposed to, that your rendering accuracy will be as good as in software, and that these cards will play nice with 3rd party I/O vendors.
    What's the difference between the level of functionality I get with the GTX 285 vs the Quadro boards?
    The GTX is limited to 3 streams of realtime. Also, the Quadros come with more memory, so this helps if you're looking to do hi-res (eg RED) editing. Lastly, as gaming cards, the GTX cards will downclock themselves if they're overheating, so your performance might drop if your cooling isn't the best. The Quadros, OTOH, have a fixed clock rate; presumably they have better heat tolerance levels.
    When will that selection expand?
    TBD.  All I will say is that we are looking at some of the next-gen Fermi cards, but they're still undergoing evaluation.  Let's put it this way - the beta users group is still running so that they can help test the new card support going forward.   Keep your ear to the ground, I'm sure there will be plenty of noise made when they're announced.
    Can you add me to the beta list?
    Nope.  Not my domain, I'm afraid.
    What's the scoop with ATI cards, and openCL?  Why nVidia / CUDA only?
    When the acceleration work began over a year & a half ago, openCL wasn't even a finalized specification.  CUDA was a more mature technology, so that's what we went with.  For the future? It'll be evaluated for CS 6.

  • Program Monitor is Black, no video playback after installing new graphics card

    I bought a PNY Nvidia Quadro 2000 graphics card a few days ago, and since it's been installed, I've been unable to watch video playback in my program, source or even reference monitor.
    Similarly, if I designate one of my monitors solely for video playback, the screen is just black, and as the timecode advances on my timeline I only hear sound.
    My project is using "Mercury Playback Engine GPU Acceleration" for Premiere Pro CS6.0.1
    I went from using one graphics card to using two graphics cards, and installed a third monitor. I've spent the last few days trying everything I could think of; I even uninstalled and re-installed Premiere.
    Hardware Specs:
    Windows 7 Ultimate (64-bit)
    HP Pavilion h8-1230z
    AMD FX-8150 3.59GHz 8-core processor
    16GB RAM
    600W PS
    EVGA Nvidia GeForce GTX 550Ti
    PNY Nvidia Quadro 2000
    I am using a PCIe x16 slot for the Quadro 2000.
    I am using a PCIe x1 slot with an x1-to-x16 adapter for the 550Ti.
    Could this be the problem?
    Additional Information
    Vectorscope, All scopes, YC Waveform - Display modes all work.
    I downloaded the most recent Nvidia drivers for both of my cards.
    I haven't received any error messages or had any problems in After Effects CS6, and video playback is working properly in my other Adobe applications.

    I am still building a new system, so currently I'm still using a 480. You can read all about my choices and progress here: Adobe Forums: Planning / building a new system. Part 1
    There are 11 entries in the Benchmark Results that successfully use the GTX 680, even though it is not 'supported'. They all use the 'hack'. However, the 680 has been added to the list of supported cards for AE, so it is only a matter of time before the 680 will be added to the list of supported cards for PR. Todd has made that clear. Now, when the 680 is added, it makes sense that the 670 will also be added. My guess is that both will be added in a short while.

  • Maximus / Tesla - does anything for Pr performance?

    (sorry if this has been discussed already - couldn't find a thread dedicated to that)
    NVidia and Adobe have published a few performance comparisons in which it seems that Tesla (via Maximus) adds a large performance boost to Premiere Pro systems.
    Whereas everything else I have read (including on this forum) indicates that the gains are primarily in smoother playback (fewer dropped frames in complex timelines), and mainly when the first card is relatively weak, such as a Quadro 2000 or 4000.
    Question 1: have I missed anything? Does Maximus do anything else besides helping with realtime playback? (Please note that I am not talking about any other apps, CS or not. Premiere Pro performance only.)
    Question 2: has anyone tested the Maximus configuration against a decent GeForce card (GTX-670) or, say, a Quadro 5000?
    E.g. what are the performance benefits of Maximus with Q5K against Q5K+Tesla, for Premiere Pro CS6?
    Thanks!
    (Here is the stuff I checked before asking these questions:
    - Studio 1 Production benchmarking
    - http://forums.adobe.com/message/4027056#4027056
    - http://forums.adobe.com/message/4207177#4207177
    - NVIDIA Maximus and Premiere Pro CS5.52 by Dennis Radeke
    - PPBM5)

    Alex,
    You have to keep in mind that with a single Q5/6K card, all MPE-accelerated work is handled by the Q5/6K card, using all the available CUDA cores on that card. When you move to a Maximus configuration by adding a C2075 Tesla card, the CUDA cores on the Q5/6K card are bypassed and only the CUDA cores on the Tesla card are used for MPE acceleration. It is as simple as that.
    PR was never able to use multiple GPUs, so effectively, adding a Tesla card to create a Maximus environment only disables the Quadro card for hardware MPE and leaves it up to the Tesla card. The remaining function of the Quadro card is to drive the monitors. Hence my reservations about this Maximus solution, unless you need 10-bit output to your monitors.
    Admittedly, I do not know how this works with AE; since AE can use multiple GPUs, it may be advantageous there.
    The nine-times performance improvement claimed in the article you linked to is about the same as or less than what we find in the Benchmark Results.
    It is my conviction that memory bandwidth is the overriding factor that determines MPE performance and up to now our benchmark results support this.
    If you compare the memory bandwidth of the various cards, you see this pattern emerge:
    Video card  | Memory bandwidth | Memory bandwidth in Maximus configuration with a Tesla C2075
    Quadro 4000 | 89.6 GB/s        | 144 GB/s
    Quadro 5000 | 120 GB/s         | 144 GB/s
    Quadro 6000 | 144 GB/s         | 144 GB/s
    Tesla C2075 | 144 GB/s         | 144 GB/s
    GTX 680     | 192.2 GB/s       | Not possible, so it stays at 192.2 GB/s
    This makes it obvious that the fastest solution is the GTX 680. If opting for a Maximus solution, it makes no sense to choose a Quadro 6000 over a 4000, unless you need the memory size of the 6000.
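    If you want to verify the bandwidth figures above on your own card, the CUDA runtime exposes the memory clock and bus width they are derived from; a minimal sketch using the formula from NVIDIA's deviceQuery sample (the factor 2 accounts for DDR signaling):

        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);
            // Theoretical peak bandwidth, as computed in NVIDIA's deviceQuery
            // sample: memoryClockRate is in kHz and memoryBusWidth in bits.
            double gbps = 2.0 * prop.memoryClockRate *
                          (prop.memoryBusWidth / 8.0) / 1.0e6;
            printf("%s: %.1f GB/s theoretical memory bandwidth\n",
                   prop.name, gbps);
            return 0;
        }

    On a GTX 680 (3004 MHz memory clock, 256-bit bus) this works out to the 192.2 GB/s shown in the table.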
