Why AI Won’t Roll Out In Broadcasting As Quickly As You’d Think

We’ve all witnessed AI’s phenomenal growth recently. The question is: how do we manage the process of adopting and adjusting to AI in the broadcasting industry? This article is more about our approach than specific examples of AI integration; there’s plenty of time for that later.

The term "exponential" is often misunderstood and frequently misapplied. It is familiar through phenomena like Moore's Law and its effect on semiconductors and related products, including camera sensors (8K video offers roughly 85 times the spatial resolution of Standard Definition). It is a multiplicative effect that can produce huge numbers very quickly, and it describes a broad spectrum of natural and artificial phenomena, from the spread of viruses to technological progress, where each advance multiplies future possibilities.

The rate of change with AI is so great that it is hard to comprehend. Plotted on a graph, the line representing change would be practically vertical. It will likely keep accelerating because, uniquely among technologies so far, AI can improve itself.

One rule of thumb in predicting the future is that if experts in their own field are surprised by the rate of change, then that rate, from our viewpoint, is effectively vertical. If you had to find a word for this, it might be "hyper-exponential". But such rates of change won't necessarily apply to broadcast yet.

Broadcasting Isn't Yet A Completely Software Industry

There is absolutely no doubt that AI will eventually affect everything. It will flow into our lives like water from a tap or electricity from a socket. But until then, no matter how fast AI models grow their capabilities, there will be significant delays, because you can't create or replicate physical things just by visiting a menu. So there is always likely to be a "hybrid" approach that is part physical (microphones, monitors, etc.), part software, and part based on standards.

Physical devices - mainly transducers - will always be essential to production. You have to be able to see and hear what you're making. Until generative AI replaces all conventional means of production (including actors and sets), performers and performances will require microphones, lenses, cameras, monitors, headphones, and loudspeakers.

There will be an understandable reluctance to abandon traditional physical means of production, but don't assume that AI will never play a significant role. In the meantime, tradition, caution and sheer common sense will act as damping factors on the growth of AI in broadcasting.

But It Can Write Software

In the last 18 months, LLMs (Large Language Models) have learned to write computer code. Nobody explicitly taught them to do it - the capability cropped up as an "emergent property" of rapidly increasing scale and the models' intrinsic complexity and sophistication. That raises all sorts of exciting possibilities.

You'll be able to prototype new broadcasting applications quickly and at much lower cost. You will find that you can make an application do almost anything you can imagine (within hardware limits and the laws of physics).

But is it supportable? Traditionally, and for good reasons, a large proportion of software development effort goes into testing and quality assurance. With AI-generated software, that will probably still need to be the case, for several reasons.

First, AI doesn't "think" like we do. Even though it still has to abide by the rules of its chosen programming language, it will likely invent shortcuts that we simply won't understand. It will be like having a software savant rewrite your 12th-grade programming homework. And if there's no specific rule against something, you can be confident the AI will exploit that possibility to achieve its goals, likely with unintended consequences. At least if the AI is writing in a known programming language, you have a chance of understanding it and, therefore, testing it. You can also ask the AI to document its work, making it even more testable.
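The point about testability can be made concrete with a minimal sketch. Suppose an LLM generated the following helper for a broadcast tool (the function and its name are hypothetical, purely for illustration). Because it is written in a known language, conventional unit tests still apply, whoever - or whatever - wrote the code:

```python
def timecode_to_frames(tc: str, fps: int = 25) -> int:
    """Convert a non-drop-frame timecode 'HH:MM:SS:FF' to a frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    if frames >= fps:
        raise ValueError(f"frame field {frames} exceeds frame rate {fps}")
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

# Ordinary assertions verify behavior without needing to know
# how (or by whom) the implementation was produced.
assert timecode_to_frames("00:00:01:00") == 25
assert timecode_to_frames("01:00:00:00", fps=30) == 108000
```

The tests treat the code as a black box with a specified contract - exactly the stance you need when the author is a machine whose reasoning you can't inspect.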

But what about when you use AI directly, as in Tesla's "Full Self-Driving"? This is perhaps the most extreme example of trusting outcomes - actually, people's lives - to a completely inscrutable neural net. It's a wildly ambitious project and impressive in its own way, but it still appears some distance away from driving without supervision. Neural nets are not algorithms in the conventional sense; they're more akin to a sometimes-fallible human response, and we can't see their workings.

If you don't understand how something works, how can you test it? Moreover, would you trust it to run a playout system? Or maybe optimize your signal routing for low latency?

The answer is that you probably can, but you have to approach it in the right way.

AI Vs Determinism

Eventually, AI will become more reliable and more deterministic, but until then, it's best to be cautious and limit the possible downsides. One way to do this is to build modular systems. To take an automotive example, imagine Ford made an AI-based gearbox. The AI decides when to change gear based on its situational awareness. It's built as a drop-in replacement for a conventional gearbox and does essentially the same thing, but better. There's still the possibility of errors, but by making it modular, any errors would be limited to the transmission and not, say, the brakes.

This is a good, pragmatic approach that leaves humans in charge, even if several systems within a studio get replaced with AI modules. If you give AI systems limited scope, they can optimize themselves to do their allotted tasks extremely well.
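The modular idea can be sketched in a few lines. Here an opaque AI policy is wrapped in a module that enforces hard, deterministic limits on its output before anything downstream acts on it - so a bad suggestion stays contained, just as the gearbox example contains errors to the transmission. All names here (AudioLevelController, ai_suggest_gain) are illustrative assumptions, not any real product's API:

```python
def ai_suggest_gain(loudness_lufs: float) -> float:
    """Stand-in for an inscrutable neural net suggesting a gain change (dB)."""
    return (-23.0 - loudness_lufs) * 1.5  # deliberately over-corrects


class AudioLevelController:
    """Drop-in module: AI on the inside, deterministic limits on the outside."""

    MAX_STEP_DB = 3.0  # never move more than 3 dB in one adjustment

    def next_gain(self, loudness_lufs: float) -> float:
        suggestion = ai_suggest_gain(loudness_lufs)
        # Clamp the opaque suggestion so a bad output stays contained.
        return max(-self.MAX_STEP_DB, min(self.MAX_STEP_DB, suggestion))


controller = AudioLevelController()
print(controller.next_gain(-18.0))  # AI suggests -7.5 dB; module applies only -3.0
```

The neural net's output is still unexplainable, but the envelope around it is fully testable - which is what keeps humans, and deterministic engineering, in charge.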

Of course, the best we can do in a rapidly changing situation is to use our current knowledge of benefits versus risks and do all we can to navigate through the disruption ahead. In the following article, we will look at how to deal with the wider (and wilder) scope of how AI will affect the broadcast industry.
