We call it "Artificial Intelligence" but the real intelligence is in the design of the system behind it.
Autodesk is using Artificial Intelligence to teach its Flame 2020 software how to help creative artists do their job more efficiently when working on integrated VFX, color correction, look development and final mastering.
Artificial Intelligence (AI) was a leitmotif flowing through many of the introductions at April’s NAB Show 2019 and as usually happens, the most intriguing systems impacted by it were in post-production.
But what AI actually is sometimes lies in the eye of the beholder, since AI can mean different things to different people.
So this series of articles, of which this installment is the opening salvo, will recognize that when delving into the implementation of AI in post, it’s useful to try to lasso this untamed topic with a definition.
Since we are beginning with some of the software released by Autodesk, I was very grateful when Will Harris, the Flame family product manager at Autodesk, avowed that to him, AI means, “Utilizing established methods for machine learning to produce an algorithm that can predictably produce results.”
In other words, you create a system that can learn, then teach it stuff, and let it autonomously do tasks for you based on what it has learned.
Facial recognition can determine not only identity, but also mood.
Harris started with the example of facial recognition, in which a variant of AI can identify different people by tagging their face masks, but said that Autodesk has progressed the art far beyond that.
“In the Media and Entertainment space at Autodesk, we are taking the intelligence of VFX into the world of post-production,” he began. “For example, if you render something in a c.g. rendering engine you can produce a beauty image, but you can also produce render passes, or something that is called AOVs, or Arbitrary Output Variables.
“They are the intelligence of the renderer spat out in different forms so you can post-process and adjust those renderings,” he continued. “A great example would be if in addition to that beauty image you rendered out a diffuse, non-lit version, and also a specular highlight and shadow pass. If you combine those all together, you could create a unique look to that face which did not necessarily exist before.”
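Conceptually, recombining render passes like those Harris describes is weighted image arithmetic. Here is a minimal NumPy sketch of the idea; the pass names, shapes and weights are illustrative assumptions, not Flame’s actual pipeline:

```python
import numpy as np

# Hypothetical AOV passes as float images in [0, 1], shape (H, W, 3).
h, w = 4, 4
diffuse = np.full((h, w, 3), 0.6)    # unlit diffuse color pass
specular = np.full((h, w, 3), 0.2)   # specular highlight pass
shadow = np.full((h, w, 3), 0.3)     # shadow pass (1.0 = fully shadowed)

# Recombine the passes with adjustable weights to build a new look.
# Boosting the specular gain, for example, yields a glossier result
# than the original beauty render ever showed.
spec_gain = 1.5
recombined = diffuse * (1.0 - 0.8 * shadow) + spec_gain * specular
recombined = np.clip(recombined, 0.0, 1.0)
```

Because each pass stays separate until this final sum, the look of a face can be dialed in after the render, which is the flexibility Harris is pointing at.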
Thinking in these terms led Harris’s Flame 2020 development team at Autodesk toward the first way they decided to implement AI. Manually rotoscoping human images can be a very time-consuming operation, but maybe AI could help alleviate this.
So with the release of Flame 2020 at the NAB Show last April, this result of AI learning is baked into the software.
Feed the system a source image and a reference Z-depth, and it can produce a new image with the intelligence to identify the desired separation.
Flame lets you see the Z-depth created, and manipulate it.
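Once you have a Z-depth map, separating foreground from background reduces to depth thresholding. The sketch below shows the principle with a hypothetical depth array and made-up near/far values; Flame’s AI-generated Z-depth would stand in for the toy data:

```python
import numpy as np

# Hypothetical per-pixel Z-depth map (smaller value = closer to camera).
z_depth = np.array([[1.0, 4.0],
                    [2.0, 8.0]])

# Build a soft foreground matte from the depth map: pixels nearer than
# `near` become fully foreground (1.0), pixels beyond `far` fall to 0.0,
# with a linear ramp in between for a soft edge.
near, far = 2.0, 6.0
matte = np.clip((far - z_depth) / (far - near), 0.0, 1.0)
```

The matte can then drive a composite or a grade, which is exactly the rotoscoping labor the feature aims to shortcut.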
“The system also knows that sky is usually at the top of the image, and ground is conventionally at the bottom,” Harris said. “So we gave the software what is sometimes known as a ‘suck it and see’ tool to help it figure that out.”
You might like to know that the Cambridge English Dictionary defines “suck it and see” as UK informal for “to try something to find out if it will be successful”.
“What you end up with is sort of an automatic tool in Flame 2020 that you can try and see if you like the results,” Harris said. “If the outcome is acceptable, you keep it. If not, you try something else.”
A second implementation of AI in Flame 2020 has to do with what c.g. renderers call a “normals pass” in which you are trying to re-light a face or any object to create a different effect.
Suppose the original shot was too flat, and you need a more dramatic look. Now you can create a “normal map” in 3D using another AI feature of Flame 2020.
“Again, through the training of scanning 100,000s of face images, we’ve taught the software how to identify various ethnicities, facial variations, skin tones and expressions,” Harris said. “We actually used many 3D renders and then taught the software how to apply what it had learned to real human faces.”
This is all you need to dramatically enhance the lighting on a character’s face.
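Re-lighting from a normals pass is, at its simplest, a dot product between each surface normal and a new light direction (Lambertian shading). This NumPy sketch shows that core step on toy data; the normals, light direction and albedo are illustrative, not Flame’s internals:

```python
import numpy as np

# Hypothetical normals pass: unit surface normals per pixel, shape (H, W, 3).
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                    # flat surface facing the camera
normals[0, 0] = [0.7071, 0.0, 0.7071]    # one pixel angled toward +x

# Re-light with a simple Lambertian model: brightness is the dot product
# of each normal with a new, normalized light direction.
light = np.array([1.0, 0.0, 1.0])
light = light / np.linalg.norm(light)
shading = np.clip(np.einsum('hwc,c->hw', normals, light), 0.0, 1.0)

albedo = np.full((2, 2, 3), 0.5)         # original flat-lit color
relit = albedo * shading[..., None]      # re-lit image with new contrast
```

Moving the light vector around changes which pixels catch the light, which is how a flat shot can be pushed toward a more dramatic look after the fact.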
“Or, you can invoke many different kinds of keyers by using AI as a reference,” Harris said, “using any shape or shot that you choose. It’s all a matter of training the system which shape you want to pattern the key after.”
“Even if it doesn’t work perfectly, it gets you 80% of the way there,” Harris finished up. “Then you can take it from there. This is only the beginning of the ways that Autodesk is planning on incorporating AI into Flame 2020 and our other software.”