Production & Post Global Viewpoint – February 2018

  EBU Production Seminar Cuts Through Hype of AI

Delegates at the recent 2018 EBU (European Broadcasting Union) Production Technology Seminar were keen to highlight the potential of artificial intelligence (AI) in its various guises to help meet the challenges of cloud-based media services, but also to avoid falling victim to excessive hype.

AI has been oversold recently in many industry sectors, but the underlying idea of finding intelligent solutions to problems and employing automation to reduce costs, as well as to extend human competence, makes absolute sense. The same advances in semiconductor technology that have been ushering in the era of commodity hardware and virtualized cloud-based infrastructures are also providing the foundation for so-called AI and machine learning. The key lies in harnessing computational power to execute more complex algorithms involving feedback and adaptation, with scope for automating increasingly advanced realms of expertise. This prompted some delegates at the EBU seminar to highlight the role of AI in managing the growing complexity of the IT-based infrastructures, revolving around IP-based distribution, that broadcasters now have to embrace.

Generation of metadata in its various forms is one fertile area for AI in production, because it yields great potential benefits across the whole video lifecycle while being labor intensive and as a result often constrained by lack of resources or time. For example, AI techniques are being applied in cognitive video analysis for indexing content from blockbuster movies to documentaries, enabling more sophisticated and subtle application of search, navigation and recommendation. A number of major technology vendors including Microsoft and IBM have been demonstrating intelligent video analysis in real time so that automatically generated metadata can potentially even be applied to live content.
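The metadata side of this can be pictured with a short sketch. Assuming a vision model has already produced per-frame labels (the model itself is out of scope here, and the labels below are invented for illustration), collapsing those labels into time-coded segments is what makes the content searchable and navigable:

```python
from itertools import groupby

def segments_from_frame_labels(labels, fps=25):
    """Collapse per-frame labels (e.g. from a video-analysis model) into
    time-coded metadata segments suitable for search and navigation."""
    segments = []
    frame = 0
    for label, run in groupby(labels):
        n = len(list(run))
        segments.append({
            "label": label,
            "start_s": round(frame / fps, 2),
            "end_s": round((frame + n) / fps, 2),
        })
        frame += n
    return segments

# Toy per-frame tags for a 6-frame clip sampled at 2 fps.
tags = ["crowd", "crowd", "goal", "goal", "goal", "crowd"]
print(segments_from_frame_labels(tags, fps=2))
```

Each segment records a label with its start and end time in seconds, which is the shape of metadata a search or recommendation layer can index directly.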

Delegates also heard from Michelle Munson, co-founder and CEO of Eluvio, a start-up developing content-centric software, how machine learning can be applied to exploit context in routing within broadcast networks. This builds on work done for wireless networks, where great efficiency gains can be made by allowing for varying demand and location of users. Such routing has to be done automatically, in as close to real time as possible, and machine learning comes in by training and tuning the system through intelligent trial and error.
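"Intelligent trial and error" of this kind is often sketched as a bandit-style loop: mostly take the route that has performed best so far, occasionally try an alternative, and keep refining the estimates as conditions shift. The routes and latency figures below are invented for illustration and do not reflect Eluvio's actual system:

```python
import random

def choose_route(avg_latency, epsilon=0.1):
    """Epsilon-greedy selection: usually pick the route with the best
    observed latency, occasionally explore another so the system keeps
    adapting to shifting demand."""
    if random.random() < epsilon:
        return random.choice(list(avg_latency))
    return min(avg_latency, key=avg_latency.get)

def record_latency(avg_latency, counts, route, observed_ms):
    """Fold a new measurement into the running average for that route."""
    counts[route] = counts.get(route, 0) + 1
    avg_latency[route] += (observed_ms - avg_latency[route]) / counts[route]

# Toy example: two candidate routes with current latency estimates (ms).
estimates = {"core": 40.0, "edge": 25.0}
counts = {"core": 1, "edge": 1}
route = choose_route(estimates, epsilon=0.0)    # greedy pick: "edge"
record_latency(estimates, counts, route, 35.0)  # estimate moves toward 35 ms
```

The exploration rate (epsilon) is the tuning knob: higher values adapt faster to changing network conditions at the cost of more suboptimal choices in the meantime.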

A theme of the EBU seminar, then, was how valuable real-world applications of AI and machine learning were emerging from the fog of hype. But besides AI, microservices and methodologies based on objects or components are playing major roles in streamlining production as it moves to the cloud. The BBC’s Chris Northwood indicated that the aim here was to create systems comprising many coherent components, each optimized for a particular task, interacting constructively within a whole service.

France Télévisions’ Matthieu Parmentier expanded on this theme. “As opposed to traditional linear content, we're moving into a world where you have multiple individual components all put together – assembled – on a common timeline,” he said. Microsoft’s Martin Wahl then brought in metadata with his whimsical comment, “if content is king, then metadata is queen!”
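Parmentier's picture of components assembled on a common timeline can be sketched minimally. The component names and tracks below are illustrative, not drawn from any broadcaster's actual model; the point is that each piece carries its own placement, and assembly is just resolving them against a shared clock:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One media component (video, audio, graphics, captions...)
    placed on a shared timeline."""
    name: str
    track: str
    start_s: float
    duration_s: float

def assemble(components):
    """Group components per track, ordered by start time, so a player
    or renderer can resolve what is active at any timeline position."""
    timeline = {}
    for c in sorted(components, key=lambda c: (c.track, c.start_s)):
        timeline.setdefault(c.track, []).append(c)
    return timeline

clips = [
    Component("lower-third", "graphics", 5.0, 4.0),
    Component("interview", "video", 0.0, 30.0),
    Component("ambience", "audio", 0.0, 30.0),
]
timeline = assemble(clips)
print(sorted(timeline))  # tracks present on the common timeline
```

Because each component is self-describing, the same pieces can be reassembled differently per platform or per viewer, which is where the personalization benefits come from.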

There was naturally also some time at the seminar devoted to the consumer benefits of advanced video technology, especially around Virtual Reality (VR) and its close sibling Augmented Reality (AR). The BBC’s head of immersive and interactive content Graham Thomas highlighted how recent advances in displays and motion sensors for headsets, coupled with increased rendering power, were making it possible to deliver such experiences on consumer devices. 360-degree video is fairly easy to produce and can deepen immersion, but lacks interactivity or the ability for the viewer to move freely within the picture. VR, on the other hand, can provide very rich interactivity and movement, but tends to be more expensive to produce, and it is technically challenging to support photorealistic imagery at this level. However, as the BBC has pointed out, commercial solutions are becoming available, such as volumetric capture and early approaches to light field acquisition and rendering. For this reason, major broadcasters are now investing in both VR and AR, the latter also employing CGI and graphics, but in that case to enhance real-world scenes.

