Machine learning, computer vision and the real-time meshing of the virtual and the real, both for consumer experiences and at production level, are key themes running through the conference sessions of the BVE show. Here’s our pick of the bunch.
Over the last few years Mark Harrison, a former BBC executive who now runs the industry-wide group DPP, has become something of a sage when it comes to future media. He attends shows like CES and keeps his finger on the pulse of what’s coming down the track, so his opening session Blindsided: the game-changers we might not see coming (27-Feb; 10:30 - 11:00) should present some interesting takes, not least by asking us to consider trends from a non-West-European or West Coast point of view.
“We all know consumers like OTT video. But that’s the least of it,” Harrison says. “There are trends in the digital economy which if looked at globally, could have sudden, and profound, implications for the professional content creation industry.”
Of all tech trends, perhaps the most pervasive and persuasive over the last year, and looking to the next twelve months, is the ability to automate, powered by machine learning. The metadata generated by ML has the potential to transform the content supply chain: whether it’s using speech-to-text to automate transcription and reduce costs, using the technology to create differentiated consumer experiences, or using visual analysis to provide greater insights that enable new revenue streams. But what lessons have been learned in experimenting with and integrating ML/AI? What value does introducing it really have today, and where are we headed with this?
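To make the metadata idea concrete, here is a minimal sketch of the kind of pipeline described above: it assumes a transcript has already been produced by a speech-to-text service (the transcript text and the `keyword_metadata` helper are invented for illustration) and derives simple searchable metadata from it.

```python
# Sketch: turning a speech-to-text transcript into searchable metadata.
# The transcript below is invented; a real pipeline would receive it
# from a speech-to-text service.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "we", "for", "on", "with", "this", "are"}

def keyword_metadata(transcript: str, top_n: int = 5) -> dict:
    """Derive a small metadata record (word count, top keywords) from text."""
    words = re.findall(r"[a-z']+", transcript.lower())
    keywords = Counter(w for w in words if w not in STOPWORDS)
    return {
        "word_count": len(words),
        "keywords": [w for w, _ in keywords.most_common(top_n)],
    }

transcript = ("Machine learning lets broadcasters automate transcription. "
              "Automated transcription cuts costs and makes archive video "
              "searchable for broadcasters.")
print(keyword_metadata(transcript))
```

In production the keyword step would be a trained model rather than word counts, but the shape is the same: unstructured media in, structured and queryable metadata out.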
A stellar panel, chaired by Matt Eaton, GM at GrayMeta, examines Machine Learning: An Enabler for Broadcast & Production Business Transformation (26-Feb; 11:45 - 12:30). Content owners sharing their thoughts include representatives from Channel 4, VICE Media, OSN and Sky.
When it comes to AI, look no further than Finnish pioneer Valossa, whose CEO Mika Rautiainen has real, practical examples of how producers and broadcasters are using the latest video recognition solutions for applications in automated content compliance monitoring and content profiling for analytics. He presents a demo of how video recognition and content intelligence can be used to provide next-generation insights into video content in Exploring New Frontiers with Cognitive Media AI for Media & Broadcasting (27-Feb; 14:00 - 14:30).
The drive toward digitally replicating humans continues although no-one has quite explained why a human-sim, unrecognisable from the real thing, would be a better actor than a Tom Cruise or Emily Blunt. A system called Medusa devised at ILM and used to create Andy Serkis’ Snoke in the recent Star Wars movies has just been rewarded with an Oscar at the Academy’s annual Scientific and Technical Awards.
It was used to create the 8ft-tall supervillain Thanos in Disney/Marvel’s Avengers: Infinity War, with the character originating from facial scans of actor Josh Brolin. Medusa also played a part in Steven Spielberg’s virtual reality ride Ready Player One, where it was used to design Parzival, the avatar of hero Wade Watts.
At BVE, digital humans and 21st century puppeteers go under the microscope in a session presented by Andy Wood, president of facial animation technology developer Cubic Motion (27-Feb; 13:00 - 14:00). Wood will contend that digital humans can be presented on any device with a screen via whatever media becomes available, tele-present and omnipresent. He reveals how computer vision and performance-driven animation combine to create photorealistic characters for video games like Spider-Man, Call of Duty and Hellblade, and promises to show surprising use cases of digital humans in TV and film, such as within the mixed reality world of Netflix’s Kiss Me First.
Related to this development, and a process already widely used for performance capture, is volumetric capture. This is typically the use of multiple cameras arrayed around a stage to capture performance and movement for reconstruction of the scene (from any angle) in post. It’s commonly used for the full motion video scenes in video games and increasingly in high-end motion pictures like Alita: Battle Angel. Still the most famous example is from the 1999 film The Matrix.
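The core geometry behind that multi-camera reconstruction is triangulation: a point seen by two or more calibrated cameras can be solved back into 3D. Below is a minimal two-camera sketch using the standard linear (DLT) method; the camera matrices and test point are toy values invented for illustration, and real rigs use dozens of cameras plus dense surface reconstruction on top.

```python
# Sketch: triangulating one 3D point from two calibrated camera views.
# P1 and P2 are 3x4 projection matrices (toy values here).
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A X = 0 for homogeneous X via SVD; last row of Vt is the solution.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, 0.2, 4.0])  # ground-truth 3D point
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(recovered, 3))  # recovers [0.5, 0.2, 4.0]
```

With enough cameras, solving this for every visible surface point each frame is what turns a performance on a capture stage into a replayable, free-viewpoint 3D asset.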
Dimension Studios runs a London-based volumetric capture space using Microsoft technology (Microsoft has licensed its tech out to several studios in the US, Europe and Australia). Co-founder Simon Windsor is billing his explanatory talk as ‘The next dimension for storytelling’ (28-Feb; 11:45 - 12:15). “With the growth of Extended Reality platforms we are now entering the age of ‘free-viewpoint media’ and 4-dimensional experiences,” he says. “As these experiences become more real, it requires a rethink in how we create virtual humans so they are more believable and elicit emotion and empathy.”
His talk explores how volumetric capture is creating the next generation of virtual humans, and how it is being used by storytellers.
Virtual production is the broader concept of bringing post production into the capture process, which may or may not include volumetric capture. Pioneered by James Cameron, among others, during the making of the first Avatar, virtual production gives a director and other key creatives, like the cinematographer and editor, the tools to visualise CG characters and environments live and in 360 degrees on set.
VFX powerhouse ILM takes us From Concept to Final Pixels (27-Feb; 15:00 - 16:00). Matt Rank, Senior Virtual Production Supervisor, explores how virtual production plays a part in all aspects of film production, from virtual scouting and previs, through on-set production, to post and final in-camera pixels, allowing directors and DPs to digitally lock in their shots and to capture their actors at the highest fidelity possible.
Virtual production techniques require realtime interaction of CG animation with live action in high fidelity. The tech filmmakers are relying on to draw the two worlds together is the game engine. Epic Games’ Unreal Engine is the leading technology here, not just for massive-budget movies but increasingly for new TV formats, and it is also driving virtual sets for more bread-and-butter TV genres from sports (including esports) to news. The firm’s Business Development Manager, Ben Lumsden, tells us all about the latest developments in Real-time Production in Unreal Engine (28-Feb; 14:15 - 15:00).
There are those who think that the way we interface with the internet, and therefore with information and entertainment, will in future be via fully immersive virtual reality or semi-immersive AR. Most emphasis is on the pictures, but the best navigation will only be achieved when audio merges the real and the virtual together. Spatial sound is essential to AR: to guide us, to prevent nausea and, in fact, to augment the information we receive visually. Oliver Kadel, head of audio at 1.618 Digital, explores what he describes as the latest ways of creating compelling AR content with new-generation audio (27-Feb; 16:00 - 17:00).
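One simple building block of that directional audio can be sketched in a few lines: constant-power panning, which places a mono source in the stereo field by angle. This is an illustration of how direction maps to per-ear gain, not the technique Kadel will present; production AR audio uses full HRTF binaural rendering.

```python
# Sketch: constant-power stereo panning, a basic spatial-audio primitive.
import math

def constant_power_pan(angle_deg: float) -> tuple:
    """Map an angle (-90 = hard left, +90 = hard right) to (L, R) gains."""
    # Normalise the angle onto [0, pi/2] so the two gains trace a quarter
    # circle, keeping total power (l**2 + r**2) constant at 1 and avoiding
    # the loudness dip a naive linear crossfade produces at centre.
    theta = (angle_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

for angle in (-90, 0, 90):
    l, r = constant_power_pan(angle)
    print(f"{angle:+4d} deg  L={l:.3f}  R={r:.3f}  power={l*l + r*r:.3f}")
```

A head-tracked AR renderer recomputes this mapping (and much more, such as per-ear delays and filtering) every frame, which is what keeps a virtual sound pinned to a real-world location as the listener turns.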