Why AI Won’t Roll Out In Broadcasting As Quickly As You’d Think

We’ve all witnessed AI's phenomenal growth recently. The question is: how do we manage the process of adopting and adjusting to AI in the broadcasting industry? This article is more about our approach than specific examples of AI integration; there’s plenty of time for that later.

The term "exponential" is often misunderstood and frequently misapplied. It is familiar from phenomena like Moore's Law and its effect on semiconductors and related products, including camera sensors (an 8K frame contains roughly 80 times the pixels of a Standard Definition one). It describes a multiplicative effect that can produce huge numbers very quickly, and it appears across a broad spectrum of natural and artificial phenomena, from the spread of viruses to technological progress. Each advance in technology multiplies future possibilities.
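As a back-of-the-envelope check, the multiplicative effect is easy to demonstrate in a few lines. The figures below assume a two-year Moore's Law doubling period and 625-line PAL Standard Definition; both are common but not universal choices, and other assumptions (e.g. NTSC SD) shift the numbers somewhat.

```python
# Rough illustration of multiplicative growth and the 8K-vs-SD comparison.

def doublings(years: float, period_years: float = 2.0) -> float:
    """Growth factor after `years` of repeated doubling."""
    return 2 ** (years / period_years)

# A decade of doubling every two years is already a 32x multiplier.
print(doublings(10))  # -> 32.0

# 8K UHD versus 625-line (PAL) SD, in pixels per frame:
sd_pixels = 720 * 576        # 414,720
uhd8k_pixels = 7680 * 4320   # 33,177,600
print(uhd8k_pixels / sd_pixels)  # -> 80.0
```

Using 525-line (NTSC) SD at 720 x 480 instead, the same ratio comes out near 96, which is why quoted figures for "8K versus SD" vary.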

The rate of change in AI is so great that it is hard to comprehend. Plotted on a graph, the line representing change would be practically vertical. It will likely keep accelerating because, uniquely among technologies so far, AI can improve itself.

One rule of thumb in predicting the future is that if experts in their own field are surprised by the rate of change, then that rate, from our viewpoint, is effectively vertical. If you had to find a word for this, it might be "hyper-exponential". But such rates of change won't necessarily apply to broadcasting yet.

Broadcasting Isn't Yet A Purely Software Industry

There is no doubt that AI will eventually affect everything. It will flow into our lives like water from a tap or electricity from a socket. But until then, no matter how quickly AI models gain new capabilities, there will be significant delays, because you can't create or replicate physical things just by visiting a menu. So broadcasting is always likely to take a "hybrid" approach: part physical (microphones, monitors, etc.), part software, and part based on standards.

Physical devices - mainly transducers - will always be essential to production. You have to be able to see and hear what you're making. Until generative AI replaces all conventional means of production (including actors and sets), performers and performances will require microphones, lenses, cameras, monitors, headphones, and loudspeakers.

There will be an understandable reluctance to abandon traditional physical means of production, but don't assume that AI will never play a significant role. In the meantime, tradition, caution and sheer common sense will act as damping factors on the growth of AI in broadcasting.

But It Can Write Software

In the last 18 months, LLMs (Large Language Models) have learned to write computer code. Nobody taught them to do it - the capability simply appeared as an "emergent property" of rapidly increasing scale and the models' intrinsic complexity and sophistication. This raises all sorts of exciting possibilities.

You'll be able to prototype new broadcasting applications quickly and at much lower cost, and you'll find you can make an application do almost anything you can imagine (within the limits of hardware and the laws of physics).

But is it supportable? Traditionally, and for good reason, a large proportion of software development effort goes into testing and quality assurance. With AI-generated software, that will probably still need to be the case, for several reasons.

First, AI doesn't "think" like we do. Even though it still has to abide by the rules of its chosen programming language, it will likely invent shortcuts that we simply won't understand - like having a software savant rewrite your 12th-grade programming homework. And if there is no specific rule against something, you can be confident the AI will exploit that possibility to achieve its goals, likely with unintended consequences. At least if the AI writes in a known programming language, you have a chance of understanding its output and, therefore, of testing it. You can also ask the AI to document its work, making it more testable still.
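One practical way to test code you didn't write and may not fully follow is differential testing: compare the generated routine against a trusted, deliberately simple reference over many random inputs. The sketch below assumes a made-up task (peak normalization of audio samples); `ai_generated_norm` merely stands in for a function produced by an LLM.

```python
# Hedged sketch of differential testing for AI-generated code.
# Both function names and the normalization spec are illustrative only.
import random

def reference_norm(samples, target_peak=1.0):
    """Trusted, obvious implementation: scale so the peak equals target_peak."""
    peak = max(abs(s) for s in samples)
    return [s * (target_peak / peak) for s in samples]

def ai_generated_norm(samples, target_peak=1.0):
    """Pretend this came from an LLM; we test it rather than read it."""
    peak = max(abs(s) for s in samples)
    scale = target_peak / peak
    return [s * scale for s in samples]

# Hammer both implementations with random inputs and compare results.
random.seed(0)
for _ in range(1000):
    samples = [random.uniform(-2.0, 2.0) for _ in range(64)]
    expected = reference_norm(samples)
    actual = ai_generated_norm(samples)
    assert all(abs(a - e) < 1e-9 for a, e in zip(actual, expected))
print("all differential tests passed")
```

The reference can be slow and naive; its only job is to be obviously correct, so any disagreement flags the generated code for human inspection.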

But what about when you use AI directly, as in Tesla's "Full Self-Driving"? This is perhaps the most extreme example of trusting outcomes - people's lives, in fact - to a completely inscrutable neural net. It's a wildly ambitious project and impressive in its own way, but it still appears to be some distance from driving without supervision. Neural nets are not algorithms in the conventional sense; they're more akin to a sometimes-fallible human response, and we can't see their workings.

If you don't understand how something works, how can you test it? Moreover, would you trust it to run a playout system? Or maybe optimize your signal routing for low latency?

The answer is that you probably can, but you have to approach it in the right way.

AI Vs Determinism

Eventually, AI will become more reliable and more deterministic, but until then it's best to be cautious and limit the possible downsides. One way to do this is to build modular systems. To take an automotive example, imagine Ford made an AI-based gearbox: the AI decides when to change gear based on its situational awareness. It's built as a drop-in replacement for a conventional gearbox and does essentially the same thing, only better. There's still the possibility of errors, but because the module is self-contained, any errors would be confined to the transmission and not, say, the brakes.
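The same containment idea can go one level deeper: wrap the AI's output in a deterministic guardrail that enforces hard limits, so even a wild suggestion produces a bounded action. This is a minimal sketch of that pattern using the gearbox example; the "AI policy" here is a simulated stand-in, and the six-speed box and one-gear-per-shift rule are invented for illustration.

```python
# Sketch of a deterministic guardrail around an opaque AI module.
# All specifics (gear range, shift rule, policy) are hypothetical.

VALID_GEARS = range(1, 7)  # a hypothetical six-speed gearbox

def ai_suggest_gear(speed_kmh: float) -> int:
    """Stand-in for an opaque neural-net policy (could return anything)."""
    return int(speed_kmh // 30) + 1

def safe_gear(speed_kmh: float, current_gear: int) -> int:
    """Deterministic wrapper: the AI proposes, the guardrail disposes."""
    suggestion = ai_suggest_gear(speed_kmh)
    # Clamp to the physically valid range.
    suggestion = max(min(suggestion, max(VALID_GEARS)), min(VALID_GEARS))
    # Never jump more than one gear at a time.
    if abs(suggestion - current_gear) > 1:
        suggestion = current_gear + (1 if suggestion > current_gear else -1)
    return suggestion

# At 200 km/h from 3rd gear, the raw policy suggests 7th (which doesn't
# exist); the guardrail clamps to 6th, then limits the shift to 4th.
print(safe_gear(200.0, 3))  # -> 4
```

The guardrail is ordinary, fully testable code, so the system's worst-case behaviour is known even though the AI inside it is not.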

This is a good, pragmatic approach that leaves humans in charge, even if several systems within a studio are replaced with AI modules. Given limited scope, an AI system can optimize itself to do its allotted task extremely well.

Of course, the best we can do in a rapidly changing situation is to weigh benefits against risks using our current knowledge and do all we can to navigate the disruption ahead. In the following article, we will look at the wider (and wilder) scope of how AI will affect the broadcast industry.
