We call it "Artificial Intelligence" but the real intelligence is in the design of the system behind it.
Autodesk is using Artificial Intelligence to teach its Flame 2020 software how to help creative artists do their job more efficiently when working on integrated VFX, color correction, look development and final mastering.
Artificial Intelligence (AI) was a leitmotif flowing through many of the introductions at April’s NAB Show 2019 and as usually happens, the most intriguing systems impacted by it were in post-production.
But what AI actually is sometimes lies in the eye of the beholder, since AI can mean different things to different people.
So this series of articles, of which this installment is the opening salvo, will recognize that when delving into the implementation of AI in post, it’s useful to try to lasso this untamed topic with a definition.
Since we are beginning with some of the software released by Autodesk, I was very grateful when Will Harris, the Flame family product manager at Autodesk, avowed that to him, AI means, “Utilizing established methods for machine learning to produce an algorithm that can predictably produce results.”
In other words, you create a system that can learn, then teach it stuff, and let it autonomously do tasks for you based on what it has learned.
Facial recognition can determine not only identity, but also mood.
Harris started with the example of facial recognition, in which a variant of AI can identify different people by tagging their facial features, but said that Autodesk has progressed the art far beyond that.
“In the Media and Entertainment space at Autodesk, we are taking the intelligence of VFX into the world of post-production,” he began. “For example, if you render something in a c.g. rendering engine you can produce a beauty image, but you can also produce render passes, or something called AOVs, or Arbitrary Output Variables.
“They are the intelligence of the renderer spat out in different forms so you can post-process and adjust those renderings,” he continued. “A great example would be if in addition to that beauty image you rendered out a diffuse, non-lit version, and also a specular highlight and shadow pass. If you combine those all together, you could create a unique look to that face which did not necessarily exist before.”
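The recombination Harris describes can be sketched in a few lines. This is a minimal illustration, not Flame's actual pipeline: the pass names and the flat test values are hypothetical, and real AOVs would be loaded from EXR files rather than filled arrays.

```python
import numpy as np

# Hypothetical render passes as float RGB arrays in [0, 1].
h, w = 4, 4
diffuse  = np.full((h, w, 3), 0.5)   # unlit diffuse color
specular = np.full((h, w, 3), 0.2)   # specular highlights
shadow   = np.full((h, w, 3), 0.8)   # shadow pass (1.0 = fully lit)

# One common recombination: attenuate diffuse by shadow, add specular.
beauty = diffuse * shadow + specular

# Adjusting individual passes changes the look without re-rendering,
# e.g. deepening shadows and pulling back the highlights.
moody = diffuse * (shadow ** 2) + specular * 0.5
```

Because each pass is kept separate, the "unique look" Harris mentions comes from reweighting the passes in the composite rather than going back to the renderer.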
Thinking in these terms led Harris’s Flame 2020 development team at Autodesk toward the first way they decided to implement AI. Manually rotoscoping human images can be a very time-consuming operation, but maybe AI could help alleviate this.
So now, with the release of Flame 2020 at the NAB Show last April, this result of AI learning is baked into the software.
Now, if you feed the system a source image and a reference Z-depth, it can produce a new image with the intelligence to identify the desired separation.
Flame lets you see the Z-depth created, and manipulate it.
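To see why a Z-depth map yields a separation, consider a toy example. This is a sketch only: the depth values are invented, and Flame's ML-estimated depth is continuous and manipulable rather than a hard threshold.

```python
import numpy as np

# Hypothetical Z-depth map: smaller values are closer to camera.
depth = np.array([[1.0, 1.1, 8.0, 9.0],
                  [1.2, 1.0, 8.5, 9.0]])

# Thresholding the depth produces a foreground matte:
# 1.0 where the near subject is, 0.0 for the distant background.
threshold = 5.0
matte = (depth < threshold).astype(float)

# The matte can then isolate a grade, e.g. darken only the background.
plate = np.full(depth.shape + (3,), 0.7)
graded = plate * (matte[..., None] + 0.5 * (1.0 - matte[..., None]))
```

In practice an artist would soften the matte edge and hand-correct the depth where the model guesses wrong, which is exactly why Flame exposes the Z-depth for manipulation.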
“The system also knows that sky is usually at the top of the image, and ground is conventionally at the bottom,” Harris said. “So we gave the software what is sometimes known as a ‘suck it and see’ tool to help it figure that out.”
You might like to know that the Cambridge English Dictionary defines “suck it and see” as UK informal for “to try something to find out if it will be successful”.
“What you end up with is sort of an automatic tool in Flame 2020 that you can try and see if you like the results,” Harris said. “If the outcome is acceptable, you keep it. If not, you try something else.”
A second implementation of AI in Flame 2020 has to do with what c.g. renderers call a “normals pass” in which you are trying to re-light a face or any object to create a different effect.
Suppose the original shot was too flat, and you need a more dramatic look. Now you can create a “normal map” in 3D using another AI feature of Flame 2020.
“Again, through training on hundreds of thousands of face images, we’ve taught the software how to identify various ethnicities, facial variations, skin tones and expressions,” Harris said. “We actually used many 3D renders and then taught the software how to apply what it had learned to real human faces.”
This is all you need to dramatically enhance the lighting on a character’s face.
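The relighting step itself is classic c.g. math once the normal map exists. Here is a minimal Lambertian sketch, with a made-up plate and a camera-facing normal map standing in for the one Flame's AI would generate.

```python
import numpy as np

# Hypothetical per-pixel normal map of unit vectors (here, all facing
# the camera). Flame's AI feature would estimate this from the face.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0

# A new key light direction, normalized to unit length.
light = np.array([0.0, 0.5, 1.0])
light = light / np.linalg.norm(light)

# Lambertian shading: per-pixel brightness is the clamped dot product
# of the surface normal with the light direction.
shade = np.clip(np.einsum('hwc,c->hw', normals, light), 0.0, 1.0)

# Multiply the original plate by the shade to "relight" the shot.
plate = np.full((2, 2, 3), 0.6)
relit = plate * shade[..., None]
```

Moving the `light` vector around changes the apparent lighting on the face, which is how a flat original can be pushed toward a more dramatic look without a reshoot.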
“Or, you can invoke many different kinds of keyers by using AI as a reference,” Harris said, “using any shape or shot that you choose. It’s all a matter of training the system which shape you want to pattern the key after.”
“Even if it doesn’t work perfectly, it gets you 80% of the way there,” Harris finished up. “Then you can take it from there. This is only the beginning of the ways that Autodesk is planning on incorporating AI in Flame 2020 and our other software.”