We can all use more intelligence in the edit bay, and Blackmagic Design is giving it to us.
Blackmagic Design is using its Neural Engine technology to boost the AI component of DaVinci Resolve to make the system, and the editors who use it, perform some amazingly clever tasks.
When Blackmagic Design released its DaVinci Resolve 16 post-production software during last April’s NAB Show in Las Vegas, we knew there was a lot of “smarts” going on under the hood.
But how much of those “smarts” were Artificial Intelligence, or AI, may still come as a surprise.
Jason Druss, product specialist at Blackmagic Design, took some time to explain the implementation of AI in Blackmagic Design’s context.
“We see AI’s best purpose is to minimize repetitive, time-consuming tasks that everyone involved in post has to face,” he started out, “and for us the key is the employment of our Neural Engine.”
A cool tool, that Neural Engine.
As the Blackmagic Design website defines it, “The DaVinci Neural Engine is entirely cross-platform and uses the latest GPU innovations for AI and deep learning to provide unparalleled performance and quality.”
One of Druss’s favorite tools to come out of this AI technology is the new facial recognition capability.
Facial recognition not only automatically detects faces but also lets you quickly brighten eyes, change lip color and adjust skin tones.
“We had previously incorporated facial refinement tools on the Color page in DaVinci Resolve,” he told me, “but what we did in the Resolve 16 release is pair that facial analysis capability with an ‘analyze clips for people’ tool that can highlight a bin of clips and gather characters found in it into their own groups.”
As Druss puts it, this is a matter of using the Neural Engine to incorporate today’s AI into existing tools in DaVinci Resolve.
Now you can select a whole bin of clips, right-click on them and choose “analyze clips for people,” and the system will analyze the faces it finds and give you the opportunity to assign names to them.
“That way you can see how many clips have those characters in them,” Druss explained, “and the best part is we will have automatically populated Smart Bins with the result.”
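Under the hood, grouping clips by the people in them comes down to clustering face descriptors. The sketch below is purely illustrative and not Resolve’s implementation: it greedily groups hypothetical per-clip face embeddings by cosine distance, the way an “analyze clips for people” pass might bucket clips before populating Smart Bins.

```python
import numpy as np

def group_clips_by_face(embeddings, threshold=0.6):
    """Greedy clustering: assign each clip's face embedding to the first
    group whose representative is within `threshold` cosine distance."""
    groups = []  # list of (representative embedding, [clip indices])
    for i, emb in enumerate(embeddings):
        emb = emb / np.linalg.norm(emb)
        for rep, members in groups:
            if 1.0 - float(rep @ emb) < threshold:
                members.append(i)
                break
        else:
            groups.append((emb, [i]))
    return [members for _, members in groups]

# Toy embeddings: clips 0 and 2 contain the same "face", clip 1 a different one.
embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.9, 0.1])]
print(group_clips_by_face(embs))  # [[0, 2], [1]]
```

Each resulting group maps naturally onto one Smart Bin per person.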
AI also plays a major role in DaVinci Resolve 16’s Color Correction capabilities.
From the keyframe panel on the Color page, the curve editor lets you accurately determine black levels and highlights.
“The Auto Color tool was good before, but now with the implementation of the Neural Engine, you can set your shadows right at the bottom of your scopes and your highlights just cresting over the top, and set a neutral color balance,” he said.
“But the most important part is that when you select Auto Color in Resolve 16, you will find the proper amount of density in the shadows, which is the signature of a properly color-timed shot.”
This is a case where the artificial intelligence and deep-learning neural network technology in DaVinci Resolve 16 makes you look more brilliant.
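The behavior Druss describes — shadows pinned to the bottom of the scopes, highlights cresting the top, and a neutral balance — can be approximated in a few lines. The sketch below is a stand-in built on stated assumptions, not Resolve’s algorithm: it stretches robust black and white points and applies a simple gray-world color balance.

```python
import numpy as np

def auto_color(img):
    """Rough sketch of an auto-color pass: stretch the frame so shadows sit
    at 0 and highlights at 1, then neutralize the balance by scaling each
    channel toward a common gray (the gray-world assumption)."""
    img = img.astype(np.float64)
    # Black/white points from the 1st/99th percentiles, robust to outliers.
    lo, hi = np.percentile(img, 1), np.percentile(img, 99)
    img = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    # Gray-world balance: scale each channel so its mean matches the overall mean.
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / np.maximum(means, 1e-6)
    return np.clip(img, 0.0, 1.0)

# A low-contrast test frame squeezed into the 0.2-0.7 range.
frame = np.random.default_rng(0).uniform(0.2, 0.7, (4, 4, 3))
graded = auto_color(frame)
```

After the pass, the darkest values sit at the bottom of the range, the brightest at the top, and the channel means land close together — a crude version of “neutral balance with proper density.”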
“My personal favorite, though, is probably Speed Warp, available in DaVinci Resolve Studio edition,” Druss went on. “A couple of years ago we introduced a feature called Optical Flow, which was a time re-mapping system for creating slo-mo out of 24p shots. It did a pretty good job of placing new frames within the shot, but sometimes the movement of people would look kind of rubbery or wavy.”
Now the AI tools give the Studio version of Resolve 16 Speed Warp.
“What Speed Warp does is have the Neural Engine look at the original clip, compare it to the optical flow re-timing, and identify the areas that risk looking rubbery,” he said. “Once it senses those pixels, it can correct and eliminate the ones out of place, and only those pixels, to give you very smooth looking slow motion.”
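The difference Druss describes can be shown with a toy example. Neither function below is Resolve’s implementation: they simply contrast naive cross-blending, which produces the ghosted, “rubbery” look, with moving pixels along a known motion vector before sampling — the core idea behind flow-based retiming.

```python
import numpy as np

def retime_blend(frame_a, frame_b, t):
    """Naive interpolation: the new frame at time t in (0, 1) is a weighted
    average of its neighbors. Moving objects leave two faint 'ghosts'."""
    return (1.0 - t) * frame_a + t * frame_b

def retime_flow(frame_a, flow, t):
    """Toy flow-based interpolation for purely horizontal, integer-pixel
    motion: shift frame_a along the flow scaled by t. Real engines estimate
    dense sub-pixel flow and blend warps from both neighboring frames."""
    shift = int(round(flow * t))
    return np.roll(frame_a, shift, axis=1)

# A bright dot moving 4 px to the right between two frames.
a = np.zeros((1, 8)); a[0, 1] = 1.0
b = np.roll(a, 4, axis=1)
mid_blend = retime_blend(a, b, 0.5)  # two half-bright ghosts at columns 1 and 5
mid_flow = retime_flow(a, 4, 0.5)    # one full-bright dot at the midpoint, column 3
```

Speed Warp’s contribution, per Druss, is letting the Neural Engine find and correct only the pixels where the flow-based result would break down.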
But one feature I had been eagerly anticipating was the almost-magical Object Removal tool, which has also had its AI boosted.
“For a long time we have had two forms of noise reduction in Resolve: Temporal, which looks at the same pixel up to 5 frames forward and 5 frames back to detect unwanted noise, and Spatial, which examines each frame to identify noise and replace the bad pixels,” he described. “These have existed for some time.”
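The temporal variety Druss describes can be sketched as a median over a sliding window of frames; this is a conceptual stand-in, not Resolve’s algorithm, and it omits the motion compensation a real implementation needs so moving areas don’t smear.

```python
import numpy as np

def temporal_denoise(frames, radius=5):
    """Sketch of temporal noise reduction: replace every pixel with the
    median of that same pixel across up to `radius` frames before and
    after. Static detail survives; one-frame random noise is suppressed."""
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
        out[i] = np.median(frames[lo:hi], axis=0)
    return out

# A static gray clip with a single-frame noise spike at one pixel.
clip = np.full((11, 2, 2), 0.5)
clip[3, 0, 0] = 1.0
clean = temporal_denoise(clip)  # the spike vanishes; the clip is uniform again
```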
Now, however, these noise reduction algorithms are being re-strategized for the Object Removal tool.
As Druss described it, “We place a power window around a shape within a shot, call on the noise reduction algorithms combined with AI to identify the pixels that would be occluded if the object were not there, and then fill in the background imagery once the foreground object has been removed.”
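Conceptually, that fill step can be imitated for a locked-off shot by borrowing each occluded pixel from the nearest frame where it isn’t covered. This toy sketch is an assumption-heavy illustration, not Resolve’s tool: it skips the motion estimation and AI guidance that make the real feature work on moving shots.

```python
import numpy as np

def remove_object(frames, masks):
    """Toy object removal over a static background: for each pixel inside
    the object mask, take its value from the nearest-in-time frame where
    the mask says that pixel is NOT covered by the foreground object."""
    frames = np.asarray(frames, dtype=np.float64)
    out = frames.copy()
    for i, mask in enumerate(masks):
        for (y, x) in zip(*np.nonzero(mask)):
            # Search frames in order of temporal distance from frame i.
            for j in sorted(range(len(frames)), key=lambda j: abs(j - i)):
                if not masks[j][y, x]:
                    out[i, y, x] = frames[j, y, x]
                    break
    return out

# Static 0.2-gray background; an "object" (1.0) covers pixel (0, 0) in frame 1.
frames = np.full((3, 2, 2), 0.2)
frames[1, 0, 0] = 1.0
masks = np.zeros((3, 2, 2), dtype=bool)
masks[1, 0, 0] = True
cleaned = remove_object(frames, masks)  # the object pixel is filled with background
```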
“We are really invested in using AI to make the jobs easier in post-production,” Druss finished up. “Now that we have the Neural Engine, we are only at the beginning of adding the power of AI to our software.”