Battling Hardware MPE, Episode 2: Chunky Blurs

Round Two of my hardware acceleration MPE tests...
When using direct export with hardware MPE, any effect that renders a blurred alpha channel (Fast Blur, Gaussian Blur, etc.) creates an extremely ugly/chunky/unusable result. The source footage and sequence do not matter, nor does the destination format. The following examples are of an animated Gaussian Blur (from 0 to something) on a title clip, wiped off with a Gradient Wipe transition (toggling the GradWipe makes no difference). As with my previous thread (Battling Hardware MPE, Episode 1: Cropping on Export), I tested four variations of exporting: with hardware acceleration on and off, and with direct export and sending to the AME queue:
GPU Acceleration off, sent to AME queue:
GPU Acceleration off, direct export:
GPU Acceleration on, sent to AME queue:
GPU Acceleration on, direct export:
So we've got good, good, good, bad. As before, the direct export method using hardware MPE seems to throw a wrench in the works. Any effect that blurs like this (including soft shadows or glows) suffers this ugly banding and harsh falloff. What's curious is that the last example is how the Program Monitor appears while GPU acceleration is enabled; if I disable it, it looks like it does in the first three. What I can't figure out, then, is where hardware MPE is actually at work! Is it in the direct export (bad) or in the queue (same as non-GPU accelerated)? It's not making much sense to me.
Now, I had a chance to have a brief email exchange with one of the engineers regarding a similar issue a few months ago. In response to similar observations and questions, here is his reply:
You are correct that compositing with alpha can give different results. This is caused by processing in linear color so that blending is more like natural light. With MPE GPU acceleration, all compositing is always done in linear color. This has nothing to do with the hack, but is an intentional design decision to never compromise on quality. In software, compositing is only done in linear color when rendering at maximum render quality because on the CPU it takes a lot longer. This probably also explains why you occasionally saw this with software. In the monitors we never show anything at maximum quality with unrendered footage. With software you thus need to export with max render quality specified, or set max render quality in the sequence settings and render previews. For consistent results when switching between software and GPU acceleration I suggest enabling both max render quality and max bit depth.
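To make the engineer's explanation concrete, here is a toy sketch of the two compositing paths he describes (plain Python/NumPy, purely an illustration of the idea, not Adobe's actual code): the same semi-transparent pixel blended over a background directly on gamma-encoded sRGB values versus after converting to linear light.

import numpy as np

def srgb_to_linear(c):
    # Piecewise sRGB decode (values in 0..1)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def over(fg, bg, alpha):
    # The standard "over" operator, applied to whatever encoding the inputs use
    return fg * alpha + bg * (1.0 - alpha)

fg, bg = 1.0, 0.2                      # white title over a mid-gray background
alphas = np.linspace(0.0, 1.0, 6)      # the soft alpha falloff of a blur or glow

gamma_space = over(fg, bg, alphas)     # blend directly on sRGB values
linear_light = linear_to_srgb(over(srgb_to_linear(fg), srgb_to_linear(bg), alphas))

for a, g, l in zip(alphas, gamma_space, linear_light):
    print(f"alpha={a:.1f}  gamma-space={g:.3f}  linear-light={l:.3f}")

At partial alpha the linear-light result comes out noticeably brighter, which is the "fatter" look those blurred falloffs and glows take on with hardware MPE.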
Either I'm not understanding this, or it's confirming the bug. I get that hardware acceleration is supposed to enable "linear color processing;" that's fine, if that's better than how it's usually done (whatever that is--I'm not an engineer), but based on what I'm seeing with the hardware direct export, it's WORSE than any software render or encode. Ultimately, I don't care what is technically superior if it looks aesthetically inferior. With the GPU on, a direct export is not usable, and when rendering through the queue, it looks visually no different than when not using the GPU.
So based on the response above, I just did some more tests, this time with the Maximum Render Quality and Maximum Bit Depth options. I did not change the MRQ and MBD settings for the sequence itself--only in the export window--as it is my understanding that those check boxes will enable or disable those features. Using the same example above, I found some interesting results:
So, this would appear to largely bear out what the engineer explained. My observations now:
Hardware acceleration, at least as it pertains to this linear color processing issue, is fundamentally equivalent to Maximum Render Quality in software rendering mode.
Maximum Render Quality does nothing to soften the chunky blurs, shadows or glows. Instead, Maximum Bit Depth must be enabled.
In my initial tests, GPU On + Queue resulted in the same visual effect as GPU Off; in this test, GPU On + Queue resulted in the same effect as GPU On + Export (???)
Setting the Maximum Bit Depth option for your sequence in hardware mode will display smooth blurs with soft falloff in the Program Monitor.
Setting the Maximum Bit Depth and/or the Maximum Render Quality option in software mode has no effect on the Program Monitor display.
Regardless of sequence settings, failure to set either the MRQ or MBD option in the export window will result in those settings not being applied.
Setting either the MRQ or MBD option in the export window will always result in those settings being applied, regardless of sequence settings.
After going through all this, I may be willing to concede that everything is working correctly, more or less. However, my complaint is now, "WHY does this have to be as complicated as this?" There are simply too many combinations that have to be set properly to get the desired quality of output, and I firmly believe that this needs to be simplified. When exporting or rendering in hardware/GPU mode, I believe that MRQ and MBD should be on by default; as it is, even with the promise of "linear color processing" with hardware acceleration, I still have to remember to tick another box to make sure that blurs, shadows, and glows don't look like stair steps. The jury is still out on how "good" linear color processing is; maybe I just got used to software rendering of these soft alpha channels, but I'm having difficulty seeing the benefit of more "realistic" light processing at the moment. With hardware acceleration on, you're basically stuck with how those soft elements look; with hardware acceleration off, I can opt for the more subtle look if I like, even if it means I give up the other presumed benefits of linear color processing. When I design graphics in Photoshop, I expect them to look at least reasonably similar in Premiere; with hardware acceleration on, all bets are off.
I realize this is a new technology and will, hopefully, continue to mature and improve, but I'm hoping that this at least sparks some conversation about this issue. Casual users may not care too much, but anyone using this product for broadcast or other commercial work should be aware of the complications associated with the technology, and should demand that it work consistently and at an expected level of quality. Maybe I'm expecting too much of this, but I certainly hope not.
Your comments are requested and appreciated.

According to your last picture, the one thing consistent with all the problem footage is MRQ.  It seems that feature is actually causing the problem.
Correct; that was basically a test to see if what was described by the engineer (namely, enabling MRQ to produce better alpha channel compositing) held true. Well, it's sort of six of one, a half dozen of the other. In essence:
Hardware MPE = Software MPE + Maximum Render Quality = Linear Color Processing
Enabling MRQ in hardware mode does nothing; enabling MRQ in software mode makes software mode act like hardware mode. In either of those scenarios, the alpha channel is composited using linear color, which causes soft shadows, glows, etc. to balloon. Additionally, the soft shadows show banding, due to 8-bit processing. Only by enabling Maximum Bit Depth can the banding be eliminated, but the bulbous glow remains. Setting Maximum Bit Depth alone in software mode has no apparent effect; the alpha channel may not be composited "more realistically" thanks to linear color processing, but it doesn't grow grotesquely or reveal banding, either.
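The banding component is plain quantization on top of that: once the falloff has been lifted by the linear-light math, an 8-bit pipeline has only a handful of code values left to describe the bright part of the ramp, while 32-bit float keeps it continuous. A rough sketch of the idea (NumPy again, not Premiere's renderer):

import numpy as np

ramp = np.linspace(0.0, 1.0, 1920)        # a smooth, soft falloff across the frame
lifted = ramp ** (1 / 2.4)                # a brightening curve standing in for the linear-light lift

as_8bit = np.round(lifted * 255) / 255    # 8-bit path: snap every value to 1/255 steps
as_float = lifted                         # 32-bit float path: no quantization

bright = slice(len(ramp) * 3 // 4, None)  # look at the brightest quarter of the falloff
print("distinct 8-bit levels :", len(np.unique(as_8bit[bright])))   # roughly 30
print("distinct float levels :", len(np.unique(as_float[bright])))  # one per pixel

A few dozen levels spread across hundreds of pixels is exactly what visible banding looks like; keeping the intermediate math in float (what Maximum Bit Depth buys) is what removes it.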
The tests above were conducted with the Gaussian Blur effect within Premiere. I decided to try a test using a Photoshop document; the PSD is a simple text layer with an Outer Glow layer style applied. I tried importing the layer with the layer style as it is, I tried flattening the layer, and I tried merging the layered document on import into Premiere; all resulted in the same effect. The results are more troubling than the examples above, as I use Photoshop for the vast majority of my graphics work for Premiere. Here's the test:
This is text composited against the background video in Photoshop, so that I can see how the glow should be rendered:
Nothing fancy, but the glow has a nice, subtle fall-off. When I import the PSD into Premiere, edit the alpha text layer over the background clip, and render out in software mode, this is the result:
Visually identical to what I created in Photoshop; so far, so good. Now, if I enable MRQ and MBD on export, still in software mode, this is the result:
Um, yuck. Enabling MRQ has had the same ballooning effect as in the tests above, but since the graphic and background video are 8-bit, and I'm not using a 32-bit effect (like the Premiere Gaussian Blur effect), enabling Maximum Bit Depth has no effect. The glow shows banding, and is just downright ugly.
So what happens in hardware accelerated mode? No prizes for guessing correctly:
No combination of Maximum Render Quality or Maximum Bit Depth has any effect in this case; the glow always looks like this in hardware accelerated mode.
This is a HUGE problem, as far as I am concerned. Not only can I not use hardware MPE to export if I want a decent-looking glow, but I can't even use it in the design and build phase because the composited result looks NOTHING like what I'm building in Photoshop. I have a keyboard shortcut set up now to launch the Project Settings panel so I can toggle back and forth between software and hardware mode, so I can at least enjoy some of the acceleration of hardware for general editing, but for detail work and for export, hardware MPE is proving to be an Achilles heel.
This is proving to be really depressing. Until I did these tests, I didn't quite realize the depth of the problem. Granted, no one is forcing me to use hardware acceleration, but am I wrong in calling this a serious flaw?

Similar Messages

  • Battling Hardware MPE, Episode 1: Cropping on Export

    This is likely going to be a series of posts about some of the--shall we say--quirks I've encountered when trying to work with hardware acceleration/MPE with CS5. I should preface this by saying that I am, at this time, using a GTX 480 which is obviously unsupported, so I'm willing to accept that any of what I've come across is due to this fact; however, I'm pretty sure that these issues are endemic to hardware accelerated MPE, in general.
    The first quirk I've found is that when exporting from Premiere with GPU acceleration activated, using the Export button, while cropping the source using the export crop function, the resulting encoded file is not properly rendered. Specifically, this set of variables results in black bars in the encoded video, seemingly the result of the "Scale to Fit" parameter being ignored. The source sequence type and material as well as the destination format do not seem to matter; originally, I thought this was an issue with the H.264 encoder, which I use for exporting small emailable spot reviews, but I've confirmed it with other formats like Flash and AVI as well.
    Since a picture is worth 1000 words, what follows are four screenshots illustrating the two variables I'm testing--namely, GPU acceleration and direct export vs. AME queue. For reference, the source sequence is a standard DV NTSC sequence, with a mish-mash of graphics and footage, and I'm exporting to a Lagarith AVI, 320x240, progressive, cropping 12L,4T,12R,4B, Scale to Fit. No, I wouldn't ordinarily export to such a file, but it's the same regardless of format, scaling, or cropping values.
    GPU Acceleration off, sent to AME queue:
    GPU Acceleration off, direct export:
    GPU Acceleration on, sent to AME queue:
    GPU Acceleration on, direct export:
    In the last one, you can clearly see the issue. Since the aspect ratio of the rendered file isn't changing, it would appear that the Scale to Fit option is not necessarily being overridden, but it's almost as if the crop values are being reapplied and creating the black bars.
    So, if this is due to hardware MPE, why isn't it appearing when sending the export to the AME queue? Well, here's my theory: hardware acceleration/rendering isn't being used when the export is sent to the queue! Check out the first three images: they are exactly the same. I would expect this of the two "GPU off" screenshots, but not of the "GPU on" screenshot.
    Now compare the two "GPU on" images; the one showing the cropping bug is noticeably sharper, and it shows the telltale signs of the linear light processing that hardware MPE is supposed to enable (which is another issue I'll cover later). You can see the blue light rays coming off the sparks much more clearly in the last image (again, not that I like this, or that it should be that way, but it's an indicator that MPE is working).
    So what the heck is going on here? Is this a bug, a quirk due to me using an unsupported card, or is this a "feature" in much the same way that linear light processing of GPU accelerated MPE is a feature? This would be a pretty simple thing for anyone to test: just create a DV sequence, throw anything in it (color bars would work), crop 8 pixels off the sides in the export window, and use the Export button to render to any particular format at 320x240 (I've confirmed this with other frame sizes, by the way).
    Hoping to get to the bottom of this... thanks for any discussion this may spark.

    Yep, Houston, we have a problem.  I can reproduce your example. I have a GTX 285.  Everything works great on my setup.  I have MPE enabled.
    I also tried a variation of your example.  I exported a 1280x720 image to a DV 720x480 Wide format.  I cropped 8 pixels from the top and bottom, so no black bars should be seen.
    If exported via Queue, fine.  If exported directly the resultant image is cropped to 4x3.
    Exported via Queue
    Exported directly

  • Alpha channel weirdness with hardware MPE

    To begin, I'm using a GTX 480 with the hack, so I'm not going to complain too loudly if this is a result of using as-yet unsupported hardware. However, I just want to verify with other hardware MPE users (both legit and unsupported) whether this issue is happening on other systems.
    I've noticed some oddities with imported PSD files and the way that their alpha channels are rendered with hardware MPE, versus software MPE. I'm putting up a couple frame grabs with GPU acceleration turned on and off at a couple different points where I'm using the PSDs. Forgive the PNG compression; you should be able to see the difference, nevertheless.
    First up, 100% opaque text over a background. The edges of the text are... fuzzy, maybe?
    Software:
    Hardware:
    Second, a couple layers Fast Blurring and fading in. The logo is 100% opaque, and there is a separate "glow" layer (rasterized in PS) behind it. Pretty obvious, here...
    Software:
    Hardware:
    Finally, a mostly-transparent logo bug. The hardware version is not as transparent.
    Software:
    Hardware:
    The only difference between each of these examples is that I turned on or off GPU acceleration in the Project Settings; it's the same PSD for each grab. I've also noticed that standard cross dissolves are a little chunky when dissolving to or from a graphic (even a flattened version); the opacity change is not as linear as it usually is. In software mode, this goes away.
    Anyone witnessed similar results? Again, I want to believe that this is just a result of using the GTX 480 and the hack, without official support. It could very well be the nVidia driver, too, I suppose, but I haven't tried rolling back to check that (I'm running the latest versions).
    Thoughts?

    I can confirm this.
    I do not think it's the PSD but the alpha channel in general.
    The colours are off when in MPE (Nvidia GTX 285)
    I filed a bug report.

  • ATI Primary and Nvidia Secondary for Hardware MPE Acceleration

    Hi everyone,
    I'm not sure if this has been discovered yet. I think it is very exciting, and very important for anyone with an AMD (ATI) GPU who wants hardware MPE acceleration.
    It is possible to use Hardware MPE acceleration while using an ATI video card as your primary adapter, and a lesser CUDA Nvidia GPU as a secondary adapter not connected to any monitor.
    My system:
    CPU: 1090T
    Mobo: 890GX
    RAM: 8 1333
    RAID: No
    GPU1: 5870
    GPU2: GTS 450
    As you can see, I have a Nvidia and AMD GPU in the same system. The 5870 is obviously by far the most powerful of the two, and it is what I use to record rendered footage using FRAPS.
    Recently, I became aware of the powers of hardware MPE. I concluded that the best way to obtain HMPE and maintain my FRAPS recording was to purchase a GTX 480. However, this was out of my wallet's league as I could not sell the 5870.
    I was already aware that PhysX (A CUDA physics calculation library) could only be run on Nvidia CUDA GPUs (Like HMPE). Many Nvidia card users used secondary CUDA cards to accelerate physics calculation in games. ATI card users could not use a secondary Nvidia card for physics calculation as the Nvidia driver locked down PhysX in the presence of an active ATI GPU. Luckily a clever fellow called GenL managed to hack the Nvidia drivers to force PhysX to work in the presence of an ATI GPU.
    I hypothesised that if I performed that hack, HMPE would gain access to CUDA in a similar fashion to PhysX, thus allowing me to buy a far cheaper GTS 450 and pair it as an HMPE renderer with my 5870. After buying a GTS 450, I failed at implementing the hack and was about to give up.
    HMPE worked when my monitor was connected to the GTS 450, but if I tried to start PPro with the 5870 connected to any monitor, HMPE was unavailable.
    I had two monitors connected to my GTS 450, and was playing around with adding stupid amounts of HMPE accelerated effects to an AVCHD clip. Realising that it was impractical to constantly switch the DVI cable from 5870 to GTS 450 I decided to leave my primary monitor connected to the 5870 and give up on HMPE. So, I reached around behind my computer and did it, but crucially did not quit PPro before I did so.
    When the screen flickered back to life, the yellow HMPE preview bar was still yellow. The timeline still scrubbed perfectly smoothly. HMPE was still working with a 5870 as the primary monitor: The PPro window was on the 5870 monitor, and the 5870 was rendering the window!
    I found that provided I did not close PPro, I could switch between HMPE and SMPE at will, all while using the 5870 as the primary adapter.
    I tested this using a 10 second composition of 3 AVCHD 1920x1080 clips with CC, drop shadow, Gaussian blur, edge feather, Basic 3D, transform, Ultra Key, and drop shadow applied, rotating amongst each other. I could still switch even if the 5870 was the only card connected to a monitor.
    Rendering this test clip via PPro direct export takes 30 seconds in HMPE mode with the 5870 and 1.43 minutes in SMPE mode with the 5870.
    However: Rendering performance in AME stays the same whether I selected HMPE or SMPE. I believe this is because AME is a separate application that 're-detects' the ATI card and disables HMPE before beginning the encode, in the same manner that restarting PPro while using the 5870 removes the HMPE option. Rendering the clip in SMPE and HMPE modes using the GTS 450 gave the same 30 second vs 1.43 minute result.
    Therefore, as long as you are happy to encode via direct PPro export you will still see the benefit of HMPE while using an AMD card as the primary adapter.
    I hope this is as terribly exciting to other users of ATI cards as it was for me. This has saved me several hundred dollars.
    Cheers,
    NS2HD

    Interesting results. I own a system manufactured by BOXX, a system developer out of Texas who really knows their stuff. I had asked them if it would be possible to purchase a CUDA enabled card and put it in my secondary slot and use it for MPE while maintaining my current (nvidia) card to run my monitors (also giving me the ability to run four screens). They said that no, according to the Adobe developers they were working with, Premiere could only use MPE off the CUDA card if the monitor previewing your work was plugged into that card. I guess they were wrong!
    Also, from my understanding, you don't see lesser results with AME because it's a separate program that starts separately, you see the lesser results because it has not yet been coded to take advantage of CUDA.

  • Firewire output of Hardware MPE - I give up

    Ok... I'll forget about IEEE1394 output of hardware MPE.
    I suspect it isn't likely to be resolved in CS.anything.
    The technical challenge of routing a DV signal from the CPU/GPU
    back through a firewire channel may be an insurmountable hurdle.
    But it's really tough to let go of your pet peeve.
    If nVidiAdobe were to develop a companion card to the primary GPU
    that allowed realtime SDI and Component output of hardware MPE
    to a broadcast monitor and/or deck, I would gladly buy one in a second...
    regardless of cost.
    Using third-party hardware and sequence presets, or employing the
    suggested hinky workaround using a consumer grade monitor that
    requires monthly attention to maintain its color calibration using
    third-party hardware and software, or any other suggested approach
    I have read here are, IMHO, all flimsy second-tier dead ends.
    Multiple monitors and/or Reference monitor in Premiere Pro
    http://forums.adobe.com/thread/744683
    With CS6 perched on the horizon, it seems Adobe is in the perfect position
    to continue its inexorable march toward taking charge of the NLE market.
    Most serious FCP editors are Ae users already.  Give them a good taste
    of expanded native file support along with Dynamic Linking  and throw in
    some new capabilities and reliability to Pr, and they will wonder why they
    were doing things the hard way for so long.
    Fellow pros used to snicker and roll their eyes at me when I said I was
    using PPro2.  But, when I mention CS5 they will always have a few
    questions about how well things work... and the snicker is long gone.
    However, I feel the lack of realtime uncompromised external monitoring
    is an issue that must be natively resolved before Pr can take the lead.
    Adobe and nVidia should find a way to work this out.
    I think a rhetorical question can be found here somewhere.

    SCAPsinger wrote:
    I've posted this before, and at the risk of sounding like a clanging gong, I'll post it again JUST IN CASE it's as useful a solution for anyone else.
    You can easily output the program monitor of PPro through a 2nd (or 3rd?) video port. In the Playback options, you can select an additional monitor as the output for previews (same place where you would select DV output for previews). There is no extending of the PPro interface or other modification required other than to select the monitor in the Playback options.
    When you hit play in PPro, it activates full screen playback on the monitor. Pause playback and it pauses on the monitor. If/when you ALT+TAB out of PPro to another application, the fullscreen goes away (and - for me - reveals my beautiful company logo on the desktop) but it comes right back as soon as you hit play.
    It works on all timelines, no codecs or special setups needed.
    I previously connected via a DVI > HDMI adapter cable, but now I use the HDMI out of the video card straight to the HDMI port of the external HDTV.
    I'm sure this solution won't be satisfactory for everybody, but for me, it gives a very good representation of what my projects look like on the end user's HDTV set. AND it's very easy to set up. AND it works with all sequence settings. AND it works with hardware MPE active. AND....well...guess that's about it.
    Thanks for your input...
    That is pretty much what I am doing (except I am using two video cards) but it is not accurate when you are using DV.
    I need to output to an NTSC monitor not a progressive computer monitor.  There is a big difference when viewing.

  • CS5: Bug when exporting directly from PPro with hardware MPE?

    For the record, I'm using:
    A GTX-480 (unsupported)
    The hardware MPE-enabling hack (unsupported)
    The latest version of the nVidia drivers (257.whatever)
    I discovered an issue, of sorts, when exporting directly from Premiere CS5, versus using the "Queue" option to send to AME. I was exporting from an NTSC DV Standard sequence to a 320x240 H.264, in which I had enabled cropping and trimmed a bit off the sides and top and bottom of the source; the Output tab was summarily set to "Scale to Fit". With hardware MPE enabled in the Project Settings, the export would be the proper final dimensions (320x240), but any area that was cropped off was instead padded with black. The actual video content was then squeezed and malformed into the area not occupied by the black bars.
    I found that, if I disabled hardware MPE or sent the export through AME by use of the Queue button (instead of Export), then the export was performed properly--the video was cropped correctly and then the video stretched to the borders of the exported video, maintaining the desired aspect ratio.
    As I mentioned at the outset, I'm breaking a couple rules and perhaps tempting fate with others, but before I go about rolling back my drivers and the like, has anyone noticed a similar issue?
    ADDENDUM: By the way, the export doesn't need to come from a sequence. If I load up a source clip into the Source Monitor, and export from there using the process outlined above, I get the same issue. This would seem to indicate, to me, that this is due to some incompatibility with my card, its driver and hardware MPE...

    Did he demo renders or exports?
    About 11 minutes in, he [Dave] demos accelerated H.264 renders.
    Harm Millaard wrote:
    AFAIK MPE only works with renders, not with encoding.
    Well, he did both, actually--but the quote above is a typo on my part. I meant to say "exports" or "encodes." The last couple minutes of the aforementioned demo video are dedicated to exporting a sequence directly from Premiere, and having the benefit of hardware MPE. That's where I'm getting the issue I've outlined above; with hardware MPE enabled, and using the "Export" versus AME "Queue" button/function, I get the strange black outline. With software MPE or through AME, all is well.

  • Finding the cause of total hardware lock-ups

    The new build has been running reasonably well, and I've done no major optimizing so far, and no overclocking (yet). However, I've been having infrequent and seemingly random system lock-ups that would appear to be hardware related.
    Last night was about the fourth or fifth of these. I had been working on and off throughout the day on a 4-camera multicamera project, and having no issues at all. This was a TOTALLY different multicam experience than I've had with CS4, where MC was hardly usable. Anyway, I was going to make my last few edits of the night when the system locked up--I had just started playing back a sequence when it happened. No hard drive activity, no keyboard/mouse interaction, no bluescreen--the audio that had started playback, however, got caught in a short loop and made an annoying racket. The only way to return to normal was to hard reboot with the power button depressed. No error or fault is logged in the Windows event logs, beyond the reboot, which is recorded as something like, "Your system restarted, probably due to you turning it off... don't do that." The actual fault that precipitates such a drastic action is not recorded. My initial thought was that the RAM had a bad chip somewhere, so I've run a memory test for over 8 hours now, and had 0 errors reported. There were no lock-ups, either--the system has been just humming along that whole time. This obviously does not rule out the RAM, but doesn't make it easy to say it's the problem, either. The only "tweak" I've made to the system is the RAM, and that's by setting the DRAM frequency to 1600MHz; nothing else has been altered in the BIOS.
    The last two lock-ups occurred in one day, when I was attempting to export to an H.264 file, through the AME queue, with hardware MPE enabled. At different points in the export, the system locked, with similar results as above. After the second lock-up, I disabled GPU acceleration in PPro, and the export completed just fine. Interesting, I thought. However, I'd done other hardware MPE-assisted exports earlier that day and in previous days, with no lock-ups.
    As mentioned, I'd encountered at least one other hard freeze, and if I remember correctly, I was in Premiere but not doing anything. I'd walked away from the system for a little while, came back, and it was Living Dead on my return. I hadn't been rendering, importing, playing back, or anything else.
    I've been checking my temps with HWmonitor, and there isn't anything out of the ordinary as far as I'm aware. I'm really not stressing the system much, pretty basic and rudimentary edits. The memory test has now done about 600% coverage, and no errors. The bizarre coincidence is that these lock-ups occur when I'm in one of the Adobe programs--but I've never seen software cause a lock-up such as this before.
    So... where do I go from here? The lock-ups seem to happen completely at random, with no particular instigation. Is there some sort of other monitoring software I can run in the background that might indicate hardware failures that Windows is not able to catch?
    I'm hoping you hardware gurus can shed some light on this bizarre predicament... thanks in advance for your insight.
    ADDENDUM:
    It would seem to make sense to post my hardware... duh:
    ASUS P6X58D
    i7-930
    OCZ  Gold 12GB (6 x 2GB) DDR3 1600 (PC3 12800)
    GIGABYTE GeForce GTX  480
    COOLER MASTER HAF 932
    COOLER MASTER Silent Pro 1000W
    Western  Digital VelociRaptor 150GB (system drive)
    HITACHI 500GB 7200  RPM SATA 3.0Gb/s
    4x SAMSUNG Spinpoint F3 1TB  7200 RPM SATA 3.0Gb/s (running RAID 5 with onboard controller)
    Noctua NH-D14 CPU Cooler

    Well, I really wanted Jeff's suggestion to be the solution, but alas, troubles remain. And it would appear that something far more sinister is happening than flaky codecs or software.
    The good news is that I can now reliably repeat the lock-up, each and every time. The bad news is that the lock-up occurs whenever I attempt an export. I'm trying to get a 75-minute multicamera edit onto a DVD, and the moment the actual encode begins, the system goes into a hard hang. I have a number of audio clips that are being used, and I see "Adobe Premiere Pro is preparing audio for export" or something like that, and then once the video processing begins, it's curtains.
    At first, I thought it was something with the hardware MPE--I'm using a GTX 480 with the hack. Editing works well--remarkably well. Encoding is a different issue altogether, and it doesn't matter whether I start the export with the Queue button or the Export button. I tried disabling GPU acceleration, and at first, I thought I had solved the mystery--the encode actually began and progressed for about 30 seconds. However, inevitably it would seem, I got the same hard hang.
    The first couple "hangs" were actually system restarts; the Windows desktop would disappear, and moments later, I was back at POST. I then disabled the "Automatically restart" option in Windows, and after doing that, I simply ended up with a frozen system and desktop. I'm not sure those are coincidental, actually; I'm going to test that again this morning to see if I get the restart or the hang.
    I've been checking out my temperatures with HWMonitor, and as far as I can tell, I'm not going off the charts with temperatures. Temperatures do climb (I have HWMonitor open as I start the encode) for both the CPU cores and GPU, but nothing drastic, and there is no way that it's getting too hot in the 5 seconds that the encode is running before the hang. What about the power supply? I can see wattages for the CPU fluctuating drastically as the encode tries to begin, which I would gather is SOP for the operation of a computer, but is it possible that I am getting too much/too little juice? I've got a 1000W Cooler Master PSU, the components you see listed above, and 6 hard drives--this would seem to be more than sufficient to me (for my current use), but undoubtedly, the system is pulling more watts as it starts to work harder. Unfortunately, there is nothing I can really disconnect to test this theory, since I need all of the components and hard drives connected to do anything.
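    For what it's worth, a back-of-the-envelope power budget suggests the 1000W PSU has plenty of headroom. These are rough, commonly quoted TDP/typical-draw ballparks for the parts listed above (a quick sketch, not measured numbers):

    parts = {
        "i7-930 CPU (TDP)":           130,   # watts, rough manufacturer figure
        "GTX 480 (TDP)":              250,
        "motherboard + 12 GB RAM":     60,   # ballpark
        "6 hard drives (~8 W each)":   48,
        "fans, cooler, peripherals":   30,   # ballpark
    }
    total = sum(parts.values())
    for name, watts in parts.items():
        print(f"{name:<30} ~{watts} W")
    print(f"{'estimated peak draw':<30} ~{total} W of a 1000 W PSU")

    Even with generous margins that lands around half the PSU's rating, which lines up with my feeling that a simple wattage shortfall is unlikely (a flaky or failing PSU would be a different matter, of course).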
    I'm at a total loss, guys, and I'm more than a little frustrated because I've got a pretty expensive paperweight sitting in my office. Unfortunately, I live in a pretty rural area, so finding a trustworthy computer tech locally is a challenge, and I'm hoping that you tech-savvy folks can throw out some places to look.
    Help me, Obi-Wan Kenobi... you're my only hope...
    Thanks, all...

  • Any news on Firewire/DV output with MPE on?

    I understand this issue - not being able to preview over a firewire stream while MPE is on - has been reported, perhaps going back over a year.  Is there any fix, other than turning off the MPE?

    Nope.
    Firewire output of Hardware MPE - I give up
    http://forums.adobe.com/thread/840033?tstart=0

  • MPE and CS5.5

    So I have an i7 quad 3.4 and a GTX 570 - which is one of the cards that Adobe lists as certified to run hardware MPE. The card is installed and its driver is set up properly.
    My question is: before I actually upgrade, do I have to do anything else (install a new something, buy another piece of software, set up CUDA cores... whatever) to run MPE in hardware mode? Or do I just set the CS5.5 project setting to MPE hardware acceleration and be good to go?
    Please help 'cause I can't seem to find this anywhere.
    If there is indeed something I need to work on, please guide me through would you guys? I'm rather novice.
    Thanks!

    No, you don't have to do anything special. If you have one of the appropriate cards and set the Renderer setting in the project settings to Mercury Playback Engine Hardware Acceleration, then some things will be processed by CUDA on the GPU.
    See this:
    FAQ: What are CUDA and the Mercury Playback Engine, and how do I use them?
    This might help, too:
    FAQ: How do I learn Premiere Pro?

  • CS5.02 w/MPE turns off Aero Glass

    I have a new Dell Precision T5500 workstation (X5560 CPU, 12GB RAM, PNY Quadro 4000, Win7 Ultimate 64-bit) running CS5 MC.  AE looks and works great. PPro CS5.02 loads and looks correct, but once a project has loaded with the Mercury Playback Engine (MPE) enabled, all Aero Glass effects for Window Color and Appearance get switched off, resulting in what looks like the default Windows 7 Basic properties and color scheme (baby blue with no transparency).
    PPro captures, edits and outputs fine.  And once PPro is exited, the Win7 desktop, visually, returns to my "normal" saved Theme.  While PPro is running, the Win7 Personalization page shows that my saved Theme is loaded.  Clicking its icon to reload the settings doesn't cause any visual changes.  Selecting the Window Color icon brings up the Window Color and Appearance tool, which has a message stating that Windows Aero is off, etc.
    If I remove the Quadro 4000 entry from the cuda_supported_cards.txt file, essentially disabling the hardware MPE, Aero Glass stays on while PPro runs.  Except now I have to render effects since Premiere is using the software MPE.
    The NVidia driver is from 10/16/2010 (8.17.12.5993).  Everything else on this machine (HW/SW) is up to date and is working great.  I haven't been able to find any mention of this anomaly here or on other CS5 or NVidia related forums.  Has anyone seen/heard of this?  Is there somebody else using the Quadro 4000 with CS5?

    Thanks, I checked out the search results of the link posted.
    I grabbed and ran Microsoft's Mats_Run.aero.exe tool.  Funny thing: even with Premiere running, that tool still says there are no issues with my system.  Plus I don't get a systray message saying something to the effect of "Aero is turned off" like other folks report in the MS Answers forums when things like this happen.  Anyway, like I said previously, I can live with this "feature" while I'm using PPro, and so far with my hardware, my AE usage hasn't been affected.
    Thanks All,

  • 11.3 Firefox - Strong Blur on FB's flash game

    FlashPlayer 11.3 Plugin since at least build 257 when used on Firefox (12+) produces some very bad drag'n'drop scrolling on the FB game Battle Pirates. Sluggish, high-CPU, blurred.
    I've made a video showing the problem:
    https://vimeo.com/46471851
    Video password: flash113
    Download the video to see it clearly.
    Notes: The problem can't be seen when running the game in Fullscreen, or when using the IE ActiveX FlashPlayer. In Opera, it is slow to scroll, but not blurred.
    Up to 11.2 it looks ok in Firefox.
    PS.: OS is Win7 x64, 32bit browsers

    Thanks for the tip on how to disable the protected mode.
    It fixed the blur, but the performance of dragging the game map is still very poor.
    Toggling the HW accel doesn't have any effect (maybe because the game doesn't use it?). Same when disabling FF's hw accel.
    What exact info do you need? Here's some:
    Processor: AMD Phenom(tm) II X4 960T Processor (4 CPUs), ~3.0GHz
    Card name: ATI Radeon HD 5700 Series  (5770)
    Memory: 10240MB RAM
    Drivers are updated:

  • Red In Timeline Even With MPE

    Just curious. Now, that I have optimized my system for CS5, I thought I would give it a try. I opened a project and all works great. However, I thought that there would be no red in the time line if the GPU was working. I added a lot of different effects to multiple layers of video and it stayed yellow and played back smoothly. But, when I added an effect like 3d Cube to experiment, it turned red over the timeline. Now, I know red means it needs to be rendered, but I thought with the GPU working that it would at least be yellow. Is this normal? I am using the Quadro 4000 with all updated drivers.
    Thanks

    Only some effects use hardware MPE, not all, and 3D Cube is one of the effects that is not hardware accelerated.

  • Premiere Pro CS6: GTX 660 ti ($300)  vs. GTS 450 ($100) and other thoughts on upgrading HW

    I've been wanting to make some major upgrades to my hardware, but it just doesn't seem worth it yet...even after almost 4 years. I ultimately decided to "rent" a new video card and run some tests. Here is some background info on my upgrade thought process and some results comparing the video card performance.
    Disclaimer: I'm not a hardware expert, but I'm not completely clueless (I think). Your input/insight is welcome.
    My system (purchased 2/2009)
    i7 920
    GTS 450 (1GB RAM)
    12 GB 1333 RAM
    Samsung SATA II 128 GB SSD (OS/apps)
    5x 1TB 7200 RPM drives in RAID 0 (with accompanying slower/cheaper 2TB backup drives)
    Some upgrade options I am considering
    Sandy Bridge 3930 - but it's $560 w/o cooling and would require a new, more expensive motherboard, new ram, cooling, etc.
    Ivy Bridge 3770, but I keep reading that an overclocked 920 isn't that much different in perf (in fairness, mine isn't OC'd). I did find a MB that would work for only $90. So I could make this upgrade for just under $400 (RAM would stay the same).
    Wait for Haswell, but it could be another 9 months and it's supposed to only give maybe a 10% perf gain over IB. It's more focused on mobile - less power, integrated graphics, etc.
    High-end Xeons are totally off the table. $/buck is waaaay too low.
    Video card and benchmark reviews/problems
    So I thought I'd first try getting a new video card. I see conflicting benchmarks. This site (the one that provides the CUDA.exe hack) notices very little difference between most GTX cards in perf for their benchmarks. The PPBM5 site shows significant differences between, say, the GTX 680 and lower end cards. But are these really accurate?
    The GTX 680 is almost $500, so I opted for the 660 Ti at $300 to see if I could get a noticeable perf gain. It seemed like the best $/buck card and wouldn't require me to get a new power supply.
    Another reason I wanted to do my own tests: None of the benchmarks I've seen actually mention the type of footage used. I care about footage from the Canon MKII-III, or similar footage. I definitely do not care about things like exporting to MPEG 2.
    I did some very unscientific benchmarks, but they were real world for me. First my "problem" areas.
    Performance problem areas
    #1 - Time-lapses consisting of 1080 (height) JPEGs and 2160 (height) JPEGs don't always play smoothly (larger 2160s almost never do). I read adding more VRAM might help. The 660ti has 2x the RAM as my current video card.
    #2 - Split screen sequences (up to 9 clips simultaneously) don't play smoothly.
    #3 - Scenes where I speed up a clip to 1000x don't always play smoothly. (Although upgrading from CS5 to 6 actually seems to have solved this issue, I couldn't get it to repro any longer).
    #4 - Export to h.264 could be faster. I do this a lot, but mostly because it's how I sometimes make proxies because of problems around #1-2 (works fine - used to use CineForm but it always crashed Premiere and these work for my needs). This is typically my final export as well for posting on sites like Vimeo.
    #5 - Timeline rendering could be faster, although I don't do this a lot and if I do it's simple, not a bunch of crazy effects. E.g. use unsharp mask. This is pretty low pri for me though because I think timeline rendering is a bad idea. Once you do it, if you even move the clip you have to render again.
    Some simple bottleneck analysis first:
    Disk queue length sometimes is just over 1 on 1 disk in my RAID array during TL playback. Might slow things down slightly. Not an issue during export.
    Processor never seems to get pegged in any case.
    RAM is never maxed out, but it starts to approach the Premiere limit I've set (10 GB) after playing through several time-lapses (I'm just now noticing this). Choppiness starts well before RAM is even near that on some clips.
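    For reference, here is one way those spot checks could be logged over time instead of eyeballed (a minimal sketch assuming the third-party psutil package; not what produced the numbers in this post):

    import psutil

    prev = psutil.disk_io_counters()
    for _ in range(60):                          # sample once a second for a minute
        cpu = psutil.cpu_percent(interval=1)     # % across all cores
        ram = psutil.virtual_memory().percent
        now = psutil.disk_io_counters()
        read_mb = (now.read_bytes - prev.read_bytes) / 1e6
        write_mb = (now.write_bytes - prev.write_bytes) / 1e6
        prev = now
        print(f"cpu {cpu:5.1f}%  ram {ram:5.1f}%  "
              f"disk {read_mb:7.1f} MB/s read  {write_mb:7.1f} MB/s write")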
    Tests/results:
    NOTE: I do run the 660ti in a PCIe 2 x16 slot. Let me know if you think it would even matter to run in a PCIe 3.0 slot. My MB doesn't have one.
    #1 Time-lapse smoothness - didn't improve with the 660. Moving the 1080 size JPEG TLs to my SSD did help some problem TLs play smoothly however.
    #2 Split screen. Did a test with a 9-clip-at-the-same-time sequence. No improvement with the 660ti.
    #3 Clips speed up 1000x - could not repro the problem now that I run CS 6 vs. 5 on either card.
    #4 - Export to H.264 1080p @23.9x fps.
    Export 5:30 clip of 5D MKIII footage + H.264 proxies:
    GTS 450 - 9:14
    660 Ti - 8:30
    Export 1.5 minute clip of large time-lapses (JPEGs that are 2160 high):
    GTS 450 - 9:35
    660 Ti - 7:00
    Export a 2 minute clip of just MKIII footage
    GTS 450 - 2:45
    660 Ti - 2:45
    #5 Timeline render with simple image correction effect
    Timeline render short 5D MKIII clip with unsharp mask applied:
    GTS 450 - 1:10
    660 Ti - 1:19
    Conclusion:
    The 660ti ($300) showed marginal improvements in exporting h.264 against my GTS 450 ($100) and did not address my other issues. Definitely not worth it for the type of work I do.
    Moving my time-lapse JPEGs to an SSD helps play the 1080p versions back smoothly. The 2160p larger versions still lag. Maybe more RAM would help? They still start off choppy and then acquire more and more RAM, so not sure here. Maybe faster 1600 RAM? I don't know, I doubt it. I may have to just use 1080 versions or make proxies.
    I don't see a pegged CPU much if at all, so upgrading to an Ivy Bridge 3770 doesn't seem like it'll help much if at all.
    I did end up buying 2x256 GB SATA III SSDs (only $169 each) that I'll run current projects off of, or at least time-lapse sequences (RAID 0). My motherboard doesn't have any SATA III ports, however, so I won't see the full power of these, but not sure I'll need it. Again, I'm not seeing a clear disk issue either from the perf monitoring.
    I suspect many of these problems are still with the software and how it takes advantage of my hardware, but I'd love more insight.
    Generally I make things work and I don't have any really painful bottlenecks, but I'm always up for perf improvements/doing things faster. It does look like I won't see any major breakthroughs, however, by spending $400-$1000 bucks on HW upgrades.
    Thoughts?
    Luke
    Blog  |  Photography  |  Vimeo

    Thanks for the response, Harm. Replies inline.
    Harm Millaard wrote: SYSTEM: It is an older system, about the same I had in the form of 'Harm's Beast', although I have a much beefier disk setup, more memory and OC'ed to 3.7 GHz, in combination with a GTX 480. Not much you can do about this system, apart from upgrading memory to 24 GB but the major drawback is that those investments will not carry over to a new system, at least not easily. [Luke] From your description of your system it sounds like 4 things could indeed be upgraded and carried over to a new system. 1) OC the processor (e.g. purchase a generic water cooler for ~$100), 2) Improve the Disk setup, 3) Upgrade the video card, 4) Add more/faster RAM.
    I've seen in some benchmarks that an OC'd 920 is not so dissimilar to an OC'd 3770K. The latter is faster, but it isn't a huge difference. The larger question still remains - will any/all of these upgrades yield large performance gains and solve all/a higher percentage of my problems? Or do I have a decent sweet spot of a system and should wait for the software (e.g. MPE evolution in CS7-8) to catch up and take better advantage of what I have? Like I said from doing some rudimentary performance monitoring, I'm not seeing a pegged CPU (just a brief spike here/there), I'm not seeing disk transfer at capacity (although 1 disk has a slightly > 1 queue length at times), I'm not seeing in all cases over-utilization of memory, etc. (except higher RAM usage is seen albeit staggered for large JPEG time-lapse sequences, but I see choppiness well before RAM usage gets to 10 GB).
    VIDEO: You correctly point out that the GTX 680 shows in the MPE graph on the PPBM5 website much better results than other cards. But keep in mind that most 680's are used in new systems, often with the latest CPU's and fast memory. I'm convinced that a 680 is not noticeably faster than a 580, because they have the same memory bandwidth, but it looks that way because they are often accompanied by hexa core i7-39xx CPU's with large amounts of memory.  [Luke] Good point - potentially further evidence that the video card doesn't make a big difference? At least not enough to justify 5x the cost (e.g. $500 680 vs. $100 450).  This would be consistent with what Studio 1 Productions has seen. The GTS 450 has a memory bandwidth of 86 GB/s, the 660 Ti has 144.2 GB/s, so the latter is significantly faster as you have shown in some of your tests. [Luke] The only test I would characterize as closer to having a significant increase using the 660 would be exporting large JPEG time-lapses to H.264, where it was a good 27% faster. The rest seemed more marginal or did not change. TESTING: You don't mention to what format you exported and with what resolution and framerate. Hardware MPE will come into play when you have rescaling, frame blending, blurring and stuff like that occurring. If you export to the same frame size and frame rate as your source and no blurring occurs, then exporting is purely a CPU matter and the video card has no impact at all. [Luke] See above - H.264, 1080p @23.9x fps. General remarks: I personally consider your 5 disk raid0 setup as pretty risky. You have multiplied the risk of losing all data by a factor of 5! You have no redundancy at all. Even though it is fast, I expect your sustained transfer rates are less than 450 MB/s and when using a 9 clip split screen, it may be too slow with the limited memory and the old CPU you have. You have effectively one single volume for video related editing (apart from the OS disk) and while that makes for easy administration, it still entails the drawbacks of the half-duplex connection of SATA. It might be better to add a couple of HDD's in raid0 for media cache, previews and exports to avoid that limitation. You can always carry those to a new system. [Luke] Yes, there is a higher level of risk, but with backups every 30 minutes during project work I yield cheap/easy perf gains for the cost of--at most--30 minutes of work. I've lost no work in the last 4 years; 1 drive failed once while on vacation and I replaced it easily. Anyway, backup/data integrity is a different issue separate from performance, which I'd like to focus on in this context.
    I get ~420MB/s read with this array (mostly older, blue WD drives and Deskstars). I'm running out of space, so I just ordered 3 x 2 TB WD Black drives to replace this with, expecting probably a similar transfer rate. Again, though, I'm not necessarily seeing disk being a bottleneck in perf mon, aside from one disk whose queue length sometimes goes over 1, so we'll see if the newer Black drives help.
    I have ordered 2x256 SATA III SSDs to put my time-lapses on, as having them on my current primary SSD seemed to help in some cases.
    Sorry to be so harsh. [Luke] No worries, harsh is OK, but I'm still not seeing a clear solution to some of my issues and I'm still not convinced a new system - short of a top-of-the-line SB or Xeon system (both of which are very $$) - will be worth the upgrade. In the 5 years between my 2004 system and my current system, the upgrades felt much more significant, especially for bang/$.
    Luke Humphrey
    Blog | Photography | Cinematography

  • Maximum Render Quality CS5.5

    My project is about 1hr 15 mins long, covering 8 sequences.
    3 days ago, I encoded using the MPG2-DVD preset and burnt a trial disk with an Encore project.
    After reviewing the disk, I made a few trivial changes to the PP project and encoded again.
    Exactly the same encode settings except I checked Maximum Render Quality.
    This time, it took FIVE TIMES longer to encode, and I cannot see any difference in disk quality.
    What does checking Max Render Quality do? 
    Would you expect it to take 5x as long?
    And would you expect a better quality disk?
    Thanks

    Jim,
    Bill and I are testing this extensively for the new PPBM6 test. Our current timeline is an AVCHD 1080i-29.97 source with numerous effects (Fast Color Corrector, Brightness & Contrast, Gamma Correction, Gaussian Blur, and Three-Way Color Corrector) and speed slowed down by 50%, for a total duration of 2;39;04, exported to MPEG2-DVD with the NTSC 23.976 Widescreen High Quality preset. I have just tested the export times again and got the following results:
    System              | Hardware MPE On, MRQ Off | Hardware MPE On, MRQ On | Software MPE, MRQ On
    i7-3930K, GTX 680   | 24 s                     | 24 s                    | 436 s (94 s without MRQ)
    i7-2600K, GTX 680   | NA                       | 33 s                    | 870 s
    i7-980X, GTX 680    | NA                       | 30 s                    | 556 s
    Now this test is taxing on the GPU, because there is frame blending, scaling and blurring going on during export, but I cannot see any difference with or without MRQ with hardware MPE turned on. I ran several runs and they are consistently between 23 and 24 seconds on this test.  Several things are of interest here: the advantage the 6-cores have over a 4-core when using software mode only, and the difference the MRQ setting makes in software MPE.
    PS. You may be testing this with AME instead of Direct Export. AME is seriously handicapped in CS6 and that may be the cause of your strange results. How does it look with your same test when you use Direct Export? Exporting a 1 minute clip without effects in 1 minute contrasts seriously with my export of 24 seconds for a clip 2.5 times longer and filled with effects. You know that both Bill and I have rather tuned systems, and when our software MPE exports take this long (436 - 870 seconds) there is something going on here I would like to know more about.

  • Bit Depth and Render Quality

    When you finally export media to some sort of media format via the encoder, do the project's preview Bit Depth and Render Quality settings affect the output file?
    I know there is a "Use Preview files" setting in the media exporter dialogue, but I just want to be sure of what I am doing.

    Jeff's response is my perspective, as well, which is both backed up by my own tests and the official Adobe word.
    Exhibit A: My Tests
    That is DV footage with a title superimposed over it in a DV sequence, with a Gaussian blur effect (the Premiere accelerated one) applied to the title; all samples are from that sequence exported back to DV. This was to show the relative differences of processing between software and hardware MPE, Premiere export and AME queueing, and the effect of the Maximum Bit Depth and Maximum Render Quality options on export (not the sequence settings; those have no bearing on export).
    The "blooming" evident in the GPU exports is due to hardware MPE's linear color processing. I think it's ugly, but that's not the point here. Further down the line, you can see the effect of Maximum Bit Depth (and MRQ) on both software MPE and hardware MPE. I assume you can see the difference between the Maximum Bit Depth-enabled export and the one without. Bear in mind that this is 8-bit DV footage composited and "effected" and exported back to 8-bit DV. I don't understand what your "padding with zeroes" and larger file size argument is motivated by--my source files and destination files are the same size due to the DV codec--but it's plainly clear that Maximum Bit Depth has a significant impact on output quality. Similar results would likely be evident if I used any of the other 32-bit enabled effects; many of the color correction filters are 32-bit, and should exhibit less banding, even on something 8-bit like DV.
    Exhibit B: The Adobe Word
    This is extracted from Karl Soule's blog post, Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more. This section comes from Adobe engineer Steve Hoeg:
    1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.
    2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than the previous example.
    3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.
    4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.
    5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.
    6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.
    7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
    Bullet #2 is pretty much what my tests reveal.
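    To see why bullet #2 beats bullet #1 even though both start and end at 8 bits, here is a toy version of that pipeline (a NumPy sketch with a box blur and a simple gain standing in for the real effects; nothing like Premiere's actual renderer):

    import numpy as np

    def box_blur(x, radius=4):
        # Simple 1-D box blur standing in for the Gaussian Blur effect
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        return np.convolve(x, kernel, mode="same")

    def color_correct(x, gain=4.0):
        # A crude brightening gain standing in for the color corrector
        return np.clip(x * gain, 0.0, 1.0)

    def to_8bit(x):
        return np.round(x * 255) / 255

    src = to_8bit(np.linspace(0.0, 0.25, 1920))   # an 8-bit source: a dark, soft falloff

    # Bullet 1: every intermediate result is rounded back to 8 bits
    pipeline_8bit = to_8bit(color_correct(to_8bit(box_blur(src))))

    # Bullet 2 (max bit depth): intermediates stay 32-bit float, round only at the final write
    pipeline_float = to_8bit(color_correct(box_blur(src)))

    print("distinct levels, 8-bit intermediates :", len(np.unique(pipeline_8bit)))   # ~65: visible bands
    print("distinct levels, float intermediates :", len(np.unique(pipeline_float)))  # ~250: smooth ramp

    The gain spreads the 8-bit steps apart, so the all-8-bit path ends up using roughly a quarter of the available output codes and shows bands, while the float path fills in the ramp; that matches what the Maximum Bit Depth flag did in the exports above.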
    I think the Premiere Pro Help Docs get this wrong, however:
    High-bit-depth effects
    Premiere Pro includes some video effects and transitions that support high-bit-depth processing. When applied to high-bit-depth assets, such as v210-format video and 16-bit-per-channel (bpc) Photoshop files, these effects can be rendered with 32-bpc pixels. The result is better color resolution and smoother color gradients with these assets than would be possible with the earlier standard 8-bit-per-channel pixels. A 32-bpc badge appears to the right of the effect name in the Effects panel for each high-bit-depth effect.
    I added the emphasis; it should be obvious after my tests and the quote from Steve Hoeg that this is clearly not the case. These 32-bit effects can be added to 8-bit assets, and if the Maximum Bit Depth flag is checked on export, those 32-bit effects are processed as 32-bit, regardless of the destination format of the export. Rendering and export/compression are two different processes altogether, and that's why using the Maximum Bit Depth option has far more impact than "padding with zeroes." You've made this claim repeatedly, and I believe it to be false.
    Your witness...
