Why AI Won’t Roll Out In Broadcasting As Quickly As You’d Think

We’ve all witnessed AI’s phenomenal growth recently. The question is: how do we manage the process of adopting and adjusting to it in the broadcasting industry? This article is more about our approach than specific examples of AI integration; there’s plenty of time for that later.

The term "exponential" is often misunderstood and misapplied. It's familiar through phenomena like Moore's Law and its effect on semiconductors and related products, including camera sensors (an 8K frame has arguably around 85 times Standard Definition's spatial resolution). It describes a compounding, multiplicative effect that can lead to huge numbers very quickly, and it applies to a broad spectrum of natural and artificial phenomena, from viruses to technological progress. Each advance in technology multiplies future possibilities.
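The "around 85 times" figure is easy to sanity-check. The sketch below assumes 8K UHD at 7680×4320 and the two common SD frame sizes (720×576 PAL, 720×480 NTSC); the quoted figure sits between the two ratios.

```python
# Rough check of the "8K is ~85x SD" claim. Frame sizes are the usual
# assumptions: 8K UHD at 7680x4320, SD at 720x576 (PAL) or 720x480 (NTSC).
uhd_8k = 7680 * 4320  # 33,177,600 pixels per frame

for name, w, h in [("PAL SD", 720, 576), ("NTSC SD", 720, 480)]:
    ratio = uhd_8k / (w * h)
    print(f"8K / {name}: {ratio:.0f}x")
# 80x PAL and 96x NTSC, so "around 85 times" is a fair middle figure.
```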

The rate of change with AI is so great that it is hard to comprehend. Plotted on a graph, the line representing change would be practically vertical. It will likely keep accelerating because, uniquely among technologies so far, AI can improve itself.

One rule of thumb in predicting the future is that if experts in a field are surprised by its rate of change, then that rate, from our viewpoint, is effectively vertical. If you had to find a word for this, it might be "hyper-exponential". But such rates of change won't necessarily apply to broadcast yet.

Broadcasting Isn't Yet A Completely Software Industry

There is absolutely no doubt that AI will eventually affect everything. It will flow into our lives like water from a tap or electricity from a socket. But until then, no matter how fast software AI models expand their capabilities, there will be significant delays, because you can't create or replicate physical things just by visiting a menu. So there is always likely to be a "hybrid" approach that is part physical (microphones, monitors, etc.), part software, and part based on standards.

Physical devices - mainly transducers - will always be essential to production. You have to be able to see what you're making. Until generative AI replaces all conventional means of production (including actors and sets), performers and performances will require microphones, lenses, cameras, monitors, headphones, and loudspeakers.

There will be an understandable reluctance to abandon traditional physical means of production, but don't assume that AI will never play a significant role. In the meantime, tradition, caution and sheer common sense will act as damping factors on the growth of AI in broadcasting.

But It Can Write Software

In the last 18 months, LLMs (Large Language Models) have learned to write computer code. Nobody explicitly taught them to do it - the capability cropped up as an "emergent property", a by-product of rapidly increasing scale and the models' intrinsic complexity and sophistication. That raises all sorts of exciting possibilities.

You'll be able to prototype new broadcasting applications quickly and at a much lower cost. You will find that you can make an application do almost anything you can imagine (within hardware limits and the sheer laws of physics).

But is it supportable? Traditionally, and for good reasons, a large proportion of software development is testing and quality assurance. With AI-generated software, that will probably still need to be the case for several reasons.

First, AI doesn't "think" like we do. Even though it will still have to abide by the rules of its chosen programming language, it will likely invent new shortcuts that we simply won't understand. It will be like having a software savant rewrite your 12th-grade programming homework. And if there's no specific rule against something, then you can be confident that the AI will use that possibility to achieve its goals, likely with unintended consequences. At least if the AI is writing in a known programming language, you have a chance of understanding it and, therefore, testing it. You could also ask the AI to document its work, making it even more testable.
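One practical way to test code you didn't write and may not fully understand is differential testing: run the AI-generated version against a trusted, human-written reference on many inputs and require that they agree. The sketch below is illustrative - both functions are stand-ins written for this example, not real LLM output.

```python
import math
import random

# Differential-testing sketch: treat AI-generated code as a black box
# and compare it against a trusted reference on many random inputs.

def reference_peak_db(samples):
    """Human-written reference: peak sample level in dBFS."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

def ai_generated_peak_db(samples):
    """Imagine this terser version came from an LLM. Does it agree?"""
    p = max(map(abs, samples))
    return 20 * math.log10(p) if p else -math.inf

for _ in range(1000):
    samples = [random.uniform(-1.0, 1.0) for _ in range(64)]
    assert abs(reference_peak_db(samples) - ai_generated_peak_db(samples)) < 1e-9

print("AI-generated version matched the reference on 1000 random inputs")
```

The point is that you never need to read the AI's shortcuts to gain confidence in them: agreement with a reference over a large, randomized input space is evidence you can measure.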

But what about when you use AI directly, like Tesla's so-called "Full Self-Driving"? This is perhaps the most extreme example of trusting outcomes - actually, people's lives! - to a completely inscrutable neural net. It's a wildly ambitious project and impressive in its own way, but it does appear to be some distance away from being able to drive without supervision. Neural nets are not algorithms in the conventional sense. They're more akin to a sometimes-fallible human response. And we can't see their workings.

If you don't understand how something works, how can you test it? Moreover, would you trust it to run a playout system? Or maybe optimize your signal routing for low latency?

The answer is that you probably can trust it, but you have to approach it in the right way.

AI Vs Determinism

Eventually, AI will become more reliable and more deterministic, but until then, it's best to be cautious and limit the possible downsides. One way to do this is to build modular systems. To take an automotive example, imagine Ford made an AI-based gearbox. The AI decides when to change gear based on its situational awareness. It's built as a drop-in replacement for a conventional gearbox and does essentially the same thing, but better. There's still the possibility of errors, but by making it modular, any errors would be limited to the transmission and not, say, the brakes.
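The gearbox idea can be sketched as a deterministic guard around an AI suggestion: the AI component only ever proposes a gear, and anything outside its allotted scope is rejected in favor of a simple rule-based fallback. All names and thresholds here are hypothetical, not anything Ford actually builds.

```python
# Modular-AI sketch: the AI suggests, a deterministic guard decides.
# Gears and speed bands are illustrative assumptions.

VALID_GEARS = range(1, 7)  # a six-speed box

def rule_based_gear(speed_kmh):
    """Deterministic fallback: crude speed bands, one gear per 20 km/h."""
    return min(6, max(1, speed_kmh // 20 + 1))

def guarded_gear(ai_suggestion, speed_kmh):
    """Accept the AI's choice only when it is inside the module's scope."""
    if ai_suggestion in VALID_GEARS:
        return ai_suggestion
    # Out-of-scope suggestion: errors stay inside the transmission module.
    return rule_based_gear(speed_kmh)

print(guarded_gear(3, 55))  # plausible suggestion: accepted, prints 3
print(guarded_gear(9, 55))  # out-of-range suggestion: fallback, prints 3
```

The guard is what keeps the failure domain modular: however wrong the AI's suggestion is, the worst case is the behavior of the boring rule-based gearbox, and the brakes are never involved.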

This is a good, pragmatic approach that leaves humans in charge, even if several systems within a studio get replaced with AI modules. If you give AI systems limited scope, they can optimize themselves to do their allotted tasks to perfection.

Of course, the best we can do in a rapidly changing situation is to use our current knowledge of benefits versus risks and do all we can to navigate through the disruption ahead. In the following article, we will look at how to deal with the wider (and wilder) scope of how AI will affect the broadcast industry.
