BEIT at NAB: Artificial Intelligence in Media

AI is being talked about a lot, but many of the applications being touted are still potential solutions looking for a problem, and the data returned is not yet reliable enough to be implemented across the industry. A key BEIT session explores the practical implications of AI in media, and as a taster we've gathered the insights of David Cole, CEO, IPV, and Steven Soenens, VP Product Marketing, Skyline Communications.

Q: To what extent is AI being introduced to the media supply chain as a real-world implementation?

David Cole: At the moment at IPV we're seeing the most traction in speech-to-text and specific object recognition technologies, such as brand logo detection. These are being introduced into the industry both for captioning and for metadata generation for content enrichment purposes.

Steven Soenens: There are many distinct areas in the media supply chain in which AI assists businesses today. The most visible are asset management applications, such as systems that generate metadata automatically, voice recognition and subtitling. The performance of those AI-assisted systems relies significantly on machine learning (ML) techniques. This means that such systems need to be trained (supervised learning) with a lot of examples before they become useful: a lot of pre-defined situations need to be created upfront in order for the system to learn and be able to interpret new inputs.
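
As a rough illustration of the supervised training step Steven describes, the minimal sketch below trains a text classifier on a handful of labelled transcript snippets and only then interprets a new input. The labels, snippets and the use of scikit-learn are illustrative assumptions, not any vendor's actual pipeline.

```python
# A minimal sketch of supervised training: the model only becomes useful
# after seeing labelled examples. Labels, snippets and the use of
# scikit-learn are illustrative assumptions, not any vendor's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_transcripts = [
    "and the goal is scored in the final minute",
    "the prime minister addressed parliament today",
    "tonight's weather brings heavy rain and strong winds",
]
labels = ["sport", "news", "weather"]  # the pre-defined situations created upfront

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_transcripts, labels)  # supervised learning on the examples

# Only after training can the system interpret a new, unseen input
print(model.predict(["rain is expected across the region tomorrow"]))
```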

Skyline Communications has introduced AI in a totally different domain in the broadcasting space, namely broadcast operations. The DataMiner platform is an AI-powered multi-vendor NMS/OSS and orchestration platform for the broadcast and service provider industries. As the name of the platform suggests, it has been built since its inception to ingest and mine vast amounts of data and therefore has a big data foundation. We've continuously added new AI capabilities to the platform, including fast pattern recognition and machine learning. This environment is much more challenging compared to other AI domains, in particular because:

Steven Soenens, VP Product Marketing, Skyline Communications

The data ingest is not controlled in the same way as in other environments. In an operational environment, the data ingest is not continuous (data series contain gaps), often contains wrong data points (wrongly reported counters), and includes many different types of data originating from a wide variety of data sources. As such, it is very difficult to train the engines upfront. We therefore focus first on unsupervised learning, which is a tougher nut to crack for obvious reasons.

In an operational environment, AI is used to assist operators so that they can not only react faster when an incident happens, but also act pro-actively. This implies that the solution needs to run AI algorithms in quasi real-time, which means it is impossible to spin up additional compute capacity on the fly, since that would take too long. We have patent-pending algorithms in place to achieve just that.
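
Skyline's patent-pending algorithms are not public, so purely as a hedged illustration, the sketch below shows one simple way to flag behavioral anomalies in a gappy, noisy stream of operational counters without any upfront training, using a rolling median absolute deviation. The data shape, window and threshold are assumptions made for the example, not the DataMiner implementation.

```python
from collections import deque
from statistics import median

def detect_anomalies(samples, window=60, threshold=5.0):
    """Yield (timestamp, value) pairs that deviate strongly from the rolling
    median. Unsupervised: no upfront training, and gaps (value is None) or
    missing counters simply shrink the reference window."""
    history = deque(maxlen=window)
    for ts, value in samples:
        if value is None:                 # gap in the data series
            continue
        if len(history) >= window // 2:   # enough recent context to judge
            med = median(history)
            mad = median(abs(v - med) for v in history) or 1e-9
            if abs(value - med) / mad > threshold:
                yield ts, value           # early indicator of an incident
        history.append(value)

# Example: a steady bitrate counter with a missing sample and a sudden drop
readings = [(t, 50.0 + (t % 3)) for t in range(100)]
readings += [(100, None), (101, 5.0)]
for ts, v in detect_anomalies(readings, window=30):
    print(f"anomaly at t={ts}: value={v}")
```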

How should companies best evaluate the value of competing AIs (in asset management, for example)?

SS: AI is a continuously moving target. What was perceived to be 'AI' two years ago is now the new norm and not necessarily called 'AI' anymore. The only way to remain ahead of the curve and in a leading position is continuous investment in R&D. The key question customers should ask of vendors is how much investment they are putting into an AI roadmap.

DC: It’s usually better to create more value than to make savings, so think about how much value the AI will add to your assets. For example, if AI can help you to auto-tag your content with metadata so that it’s easier to find later, you will be able to reuse it more easily and get more value out of it. First, ask yourself the question: what do I need to know that can help me get value out of the process?

Is AI scalable (i.e. if a company invests in one company's AI today, will the algorithm grow with their business)?

DC: Scalability is all about getting the best results from AI. One current solution is to use an AI engine aggregator, which decides which AI software is the most appropriate for each type of data. At the same time, the bigger investors in AI engines will of course keep improving their engines and eventually reap the benefits. Both of these things should be kept in mind when considering scalability, because together they allow users to benefit from both refinement and new enhancements in the accuracy and richness of the data.
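
As a hedged sketch of the aggregator idea David Cole mentions (the engine names and the analyse interface below are hypothetical, not any particular product's API), a thin routing layer can pick the most appropriate engine per data type while letting individual engines be swapped or upgraded as they improve:

```python
from typing import Callable, Dict, List

# Hypothetical engine interface: takes raw media bytes, returns metadata tags.
# In practice each engine would be a remote AI service.
Engine = Callable[[bytes], List[str]]

class EngineAggregator:
    """Route each asset to the engine registered for its media type, so
    engines can be replaced or upgraded without changing the workflow."""

    def __init__(self) -> None:
        self._engines: Dict[str, Engine] = {}

    def register(self, media_type: str, engine: Engine) -> None:
        self._engines[media_type] = engine

    def analyse(self, media_type: str, payload: bytes) -> List[str]:
        engine = self._engines.get(media_type)
        if engine is None:
            raise ValueError(f"no engine registered for {media_type!r}")
        return engine(payload)

# Usage: register placeholder engines per data type; swapping one out later
# does not affect anything else in the workflow.
aggregator = EngineAggregator()
aggregator.register("audio", lambda payload: ["speech-to-text transcript"])
aggregator.register("video", lambda payload: ["detected logo"])
print(aggregator.analyse("audio", b"\x00\x01"))
```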

SS: As media infrastructure becomes ever more complex, it is more difficult to manage systems end to end. And even though the new generation of media infrastructures is more resilient, we do notice that when things go wrong, the outages are very big. No media company can afford huge outages any more. So the only way to catch behavioral anomalies (early indicators of a possible upcoming incident) is to have an AI engine smart enough to detect incidents early and quickly. Without doubt, sharing the learnings of the neural networks across multiple media companies is a next step we definitely believe in.

David Cole, CEO, IPV

What do you see as the main business benefits of AI in media?

DC: Metadata: If accurate metadata can be generated automatically, media producers will save time and resources tagging their media. The ROI from this kind of accurate and informative metadata is always difficult to quantify, but it extends well beyond those savings. Additional benefits can include creating more compelling content and personalising it, making decisions about advertising placement in context, intelligent archiving, finding relationships across a library of media for a given story and much more.

The ability to reuse media more easily will enable the repeatable value of the media to be exploited, helping to record and find relevant media easily and efficiently and enhancing business value. There's no point in having an archive if you can't easily exploit it!

SS: In media operations, we see a clear benefit in service uptime and quality (reduced churn), a reduction of operational expense per channel, and better utilization of infrastructure and capacity, both on premises and in the public cloud.

And what are the key areas of AI where media companies need to watch out (or the challenges facing AI before it gains maturity)?

DC: If you're using AI on something that will drive an automated decision or go straight to the customer, make sure it's verified, and be prepared for interpretations that could lead to a poor user experience or a bad business decision. AI-based analysis engines are improving, but there can still be huge inaccuracies. You may still need a human to verify the output, but you'll still save a lot of time, because this is now verification rather than primary generation, and you won't expose the business to unnecessary risks. Being selective reduces noisy metadata in the system and is therefore more likely to increase adoption.
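
One simple way to apply that selectivity, sketched here as an assumption rather than IPV's actual workflow, is to auto-accept only high-confidence AI tags, queue mid-confidence tags for human verification, and drop the rest as noise; the thresholds and example tags are illustrative.

```python
from typing import Iterable, List, Tuple

def triage_tags(tags: Iterable[Tuple[str, float]],
                auto_accept: float = 0.9,
                discard_below: float = 0.5) -> Tuple[List[str], List[str]]:
    """Split AI-generated (tag, confidence) pairs into tags written to the
    asset automatically and tags queued for human review; anything below
    discard_below is dropped as noisy metadata."""
    accepted, review = [], []
    for tag, confidence in tags:
        if confidence >= auto_accept:
            accepted.append(tag)
        elif confidence >= discard_below:
            review.append(tag)
    return accepted, review

# Example: humans only verify the middle band instead of generating tags
accepted, review = triage_tags([("goal celebration", 0.97),
                                ("brand logo: sports drink", 0.72),
                                ("dog", 0.31)])
print(accepted)  # ['goal celebration']
print(review)    # ['brand logo: sports drink']
```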

SS: There is a golden rule in the media supply chain: never select technology that is not open and interoperable with technology from other vendors. The broadcast industry is what it is today as a result of using interoperable products and systems. As AI becomes a technology that will reside in any operation, exactly the same thing will apply. Therefore, my advice to any media company is: select an AI engine that is capable of learning and making decisions independently of the data sources (and the specific suppliers of those).

Any other key take-aways from this session?

DC: Descriptive metadata will drive key business decisions, from the creative process to financials - such as what to keep and what to treat as transitory. More metadata is not necessarily the route to more value: precise and deliberate metadata choices are. Less is more. Our take on this is to focus on quality, not quantity.

Want to know more about this year's BEIT Conference? Click here to see the official schedule, along with a snippet of information about each presentation.

Would a free exhibit pass help? Click this link or the image below and enter the code MP01 at the prompt.
