We call it "Artificial Intelligence" but the real intelligence is in the design of the system behind it.
Autodesk is using Artificial Intelligence to teach its Flame 2020 software how to help creative artists do their job more efficiently when working on integrated VFX, color correction, look development and final mastering.
Artificial Intelligence (AI) was a leitmotif flowing through many of the introductions at April’s NAB Show 2019 and as usually happens, the most intriguing systems impacted by it were in post-production.
But what AI actually is sometimes lies in the eye of the beholder, since AI can mean different things to different people.
So this series of articles, of which this installment is the opening salvo, will recognize that when delving into the implementation of AI in post, it’s useful to try to lasso this untamed topic with a definition.
Since we are beginning with some of the software released by Autodesk, I was very grateful when Will Harris, the Flame family product manager at Autodesk, avowed that to him, AI means, “Utilizing established methods for machine learning to produce an algorithm that can predictably produce results.”
In other words, you create a system that can learn, then teach it stuff, and let it autonomously do tasks for you based on what it has learned.
Facial recognition can determine not only identity, but also mood.
Harris started with the example of facial recognition, in which a variant of AI can identify different people by tagging their face masks, but said that Autodesk has progressed the art far beyond that.
“In the Media and Entertainment space at Autodesk, we are taking the intelligence of VFX into the world of post-production,” he began. “For example, if you render something in a c.g. rendering engine you can produce a beauty image, but you can also produce render passes, or something called AOVs, or Arbitrary Output Variables.
“They are the intelligence of the renderer spat out in different forms so you can post-process and adjust those renderings,” he continued. “A great example would be if in addition to that beauty image you rendered out a diffuse, non-lit version, and also a specular highlight and shadow pass. If you combine those all together, you could create a unique look to that face which did not necessarily exist before.”
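The idea of recombining graded passes into a new look can be sketched in a few lines of NumPy. This is not Flame’s actual pipeline, just an illustrative toy using one common additive convention (diffuse attenuated by shadow, plus specular), with made-up pixel values:

```python
import numpy as np

# Toy 2x2 single-channel "render passes", values in [0, 1].
diffuse = np.array([[0.8, 0.6], [0.4, 0.2]])   # unlit surface color
shadow  = np.array([[1.0, 0.5], [1.0, 1.0]])   # 1 = fully lit, 0 = in shadow
spec    = np.array([[0.1, 0.0], [0.3, 0.0]])   # specular highlights

# One common recombination: attenuate the diffuse pass by the
# shadow pass, then add the specular highlights back on top.
beauty = np.clip(diffuse * shadow + spec, 0.0, 1.0)

# Grading a pass independently before the merge is where a "unique
# look" comes from -- e.g. doubling the specular intensity:
stylized = np.clip(diffuse * shadow + 2.0 * spec, 0.0, 1.0)
```

Because each pass is adjusted before the merge, the final face can carry lighting that never existed in the original beauty render.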
Thinking in these terms led Harris’s Flame 2020 development team at Autodesk toward the first way they decided to implement AI. Manually rotoscoping human images can be a very time-consuming operation, but maybe AI could help alleviate this.
So now, with the release of Flame 2020 at the NAB Show last April, this result of AI learning is baked into the software.
Now, if you feed the system a source image and a reference Z-depth, it can produce a new image with the intelligence to identify the desired separation.
Flame lets you see the Z-depth created, and manipulate it.
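To see why a Z-depth map is useful for separation, consider a minimal sketch (hypothetical values, not Flame’s algorithm): once every pixel carries a camera distance, a foreground matte is just a thresholded, softened remap of that depth.

```python
import numpy as np

# Hypothetical Z-depth map (distance from camera, arbitrary units).
z_depth = np.array([[ 2.0,  2.5, 40.0],
                    [ 3.0, 35.0, 50.0]])

# Separate foreground from background around a depth range;
# a linear falloff between `near` and `far` avoids a hard edge.
near, far = 10.0, 20.0
matte = np.clip((far - z_depth) / (far - near), 0.0, 1.0)

# matte is 1.0 for pixels nearer than `near`, 0.0 beyond `far`,
# and ramps linearly in between -- a usable foreground key.
```

Adjusting `near` and `far` is the kind of manipulation a depth-aware tool can expose to the artist.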
“The system also knows that sky is usually at the top of the image, and ground is conventionally at the bottom,” Harris said. “So we gave the software what is sometimes known as a ‘suck it and see’ tool to help it figure that out.”
You might like to know that the Cambridge English Dictionary defines “suck it and see” as UK informal for “to try something to find out if it will be successful”.
“What you end up with is sort of an automatic tool in Flame 2020 that you can try and see if you like the results,” Harris said. “If the outcome is acceptable, you keep it. If not, you try something else.”
A second implementation of AI in Flame 2020 has to do with what c.g. renderers call a “normals pass” in which you are trying to re-light a face or any object to create a different effect.
Suppose the original shot was too flat, and you need a more dramatic look. Now you can create a “normal map” in 3D using another AI feature of Flame 2020.
“Again, through training that scanned hundreds of thousands of face images, we’ve taught the software how to identify various ethnicities, facial variations, skin tones and expressions,” Harris said. “We actually used many 3D renders and then taught the software how to apply what it had learned to real human faces.”
This is all you need to dramatically enhance the lighting on a character’s face.
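The relighting that a normals pass enables can be illustrated with basic Lambertian shading (a textbook model, not Autodesk’s implementation): once you have a per-pixel normal map, brightness is just the clamped dot product of each normal with a new light direction, so moving the light re-lights the shot.

```python
import numpy as np

# Hypothetical per-pixel normals (H x W x 3), unit length. Here a
# 1x2 image: one pixel facing the camera, one angled 45 degrees.
normals = np.array([[[0.0,    0.0, 1.0],
                     [0.7071, 0.0, 0.7071]]])

# New key light direction (pointing out of the screen), normalized.
light = np.array([0.0, 0.0, 1.0])
light = light / np.linalg.norm(light)

# Lambertian shading: clamped dot product of normal and light.
shade = np.clip(np.einsum('hwc,c->hw', normals, light), 0.0, 1.0)

albedo = np.array([[0.8, 0.8]])   # flat base color for both pixels
relit = albedo * shade            # the face, lit from the new angle
```

Swapping in a more dramatic `light` vector is the flat-to-moody adjustment described above, without ever re-shooting the scene.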
“Or, you can invoke many different kinds of keyers by using AI as a reference,” Harris said, “using any shape or shot that you choose. It’s all a matter of training the system which shape you want to pattern the key after.”
“Even if it doesn’t work perfectly, it gets you 80% of the way there,” Harris finished up. “Then you can take it from there. This is only the beginning of the ways that Autodesk is planning on incorporating AI in our Flame 2020 and other softwares.”