AI In The Content Lifecycle: Part 1 - Pre-Production

This is the first of a new series taking a practical, no-hype look at the role of Generative AI technology at the various steps of the broadcast content lifecycle – from pre-production through production, post and delivery, to regulation and ethics. Here we examine the Gen-AI tools already in use within pre-production and discuss the philosophical change required within the creative community to embrace them. The big studios are already nurturing generative AI – it is here to stay.

The shakeup of video production, including the creation of scripts, has intensified since the late-2023 settlement of the dispute between Hollywood studios and the Writers Guild over the use of AI in content creation. Generative AI has raised the creative bar further, putting the spotlight on originality more than ever before.

Generative AI is intruding much further into the pre-production process of audio-visual content than earlier iterations of neural network-based machine learning (ML) technology. Previously, the impact of AI/ML had been greater in other aspects of the production and consumption lifecycle, which will be addressed specifically in future articles of this series. In pre-production, though, the impact of more recent developments under the banner of Gen AI has been dramatic, because of the intrusion into the creative realm, or at any rate what we define as creative.

This has come about with rapidly improving abilities to generate video from text inputs, as well as to parse and break down scripts into constituent parts and help with aspects of casting and prop selection. There is growing scope for feedback into the scripting with various objectives, including tailoring the resulting content to its intended audience.

Matters have already come to a head over one important aspect: regulation of Gen AI in TV and film projects, especially around script creation at the pre-production stage. This led to the Writers Guild of America ending a strike in September 2023 and securing about half of its initial objectives, including agreement that AI-generated content cannot be considered literary or source material. Crucially, the deal also allows the Guild to insist that using writers' output to train AI without prior agreement constitutes exploitation.

It is important at this stage to provide context, because some commentators and analysts have been swept away either by hysteria or hyperbole, sometimes brandishing the absurd idea that Gen AI will banish human creativity to the sidelines. The reality, as expressed by wiser and more reflective members of the industry, is that it represents another tipping point for the industry resulting from the onward march of technology and particularly sustained rapid growth in computational power. To some extent AI has merely been catching up with hardware capabilities, and Gen AI with its intrusion more into the creative domain is another step along that path.

This is the line taken by Netflix co-CEO Ted Sarandos, expressed in a May 2024 interview in the New York Times. He likened the disruption caused by AI to the migration of video distribution from DVD rentals to streaming almost two decades ago. Netflix was notable for bridging that gap and emerging that much stronger and more dominant on the other side, in the streaming era. "In periods of radical change in any industry, the legacy players generally have a challenge, which is they're trying to protect their legacy businesses," said Sarandos. "We entered into a business in transition when we started mailing DVDs 25 years ago. We knew that physical media was not going to be the future."

Now, Sarandos believes, Netflix and its competitors face a comparable challenge with AI, and especially Gen AI. "I think that AI is a natural kind of advancement of things that are happening in the creative space today, anyway," he said. "Volume stages did not displace on-location shooting. Writers, directors, editors will use AI as a tool to do their jobs better and to do things more efficiently and more effectively."

Sarandos went on to argue that the idea of AI replacing human creatives was absurd. At the same time, though, human creatives who failed to exploit the power of Gen AI would find themselves redundant. This can be likened to the impact that Adobe's Photoshop had on the art of photography, and even on painting, as artists use it to shorten the process of creating studies or templates for their finished works, as well as for selecting colors. AI is now adding polish to that as well.

The Sarandos line has also been taken by other industry authorities, such as Simon Johnson, head of the Global Economics and Management group at the MIT Sloan School of Management in the USA, and co-author with Daron Acemoglu of the book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Johnson insisted that "the people who are writing and editing in 10 years or so will be using AI to extend their creativity."

He argued that in the case of movie scriptwriting, Gen AI would both automate more routine tasks and augment the capabilities of various workers in the field. His key optimistic point was that so long as writers remain in charge, Gen AI will expand the range of ideas that make it into scripts.

Such arguments about AI being a creatively liberating force, by reducing barriers to production, have also been made in other fields. In content pre-production, such liberation is already occurring with a growing range of tools, such as Filmustage and ScriptBook, designed to shoulder some of the drudgery involved and free humans for more creative aspects.

In essence, such tools apply machine learning algorithms to break down scripts quickly into relevant elements, which they then identify and categorize, breaking out cast, locations and props. Some can also identify potential risks and challenges, such as copyright infringement, or safety aspects associated with requirements for stunt performers in scenes involving greater physical risk. They may also generate reminders or checklists of key points.
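To make the idea concrete, the sketch below shows, in very simplified form, how such a breakdown pass might work. It is not a description of Filmustage, ScriptBook or any other product: it assumes only the open-source spaCy library with its generic English model, plus hypothetical keyword lists standing in for the trained prop and risk classifiers a commercial tool would use.

```python
# Illustrative sketch only: a simplified script breakdown pass using generic
# named-entity recognition. Commercial breakdown tools use screenplay-aware
# parsing and trained models; this just demonstrates the principle.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical keyword lists standing in for trained prop/risk classifiers.
PROP_KEYWORDS = {"revolver", "suitcase", "umbrella", "motorcycle"}
RISK_KEYWORDS = {"explosion", "fall", "fight", "fire", "stunt"}

def break_down_scene(scene_text: str) -> dict:
    """Extract cast, locations, props and risk flags from one scene."""
    elements = {"cast": set(), "locations": set(), "props": set(), "risks": set()}
    doc = nlp(scene_text)
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            elements["cast"].add(ent.text)
        elif ent.label_ in {"GPE", "LOC", "FAC"}:
            elements["locations"].add(ent.text)
    lowered = scene_text.lower()
    elements["props"] = {word for word in PROP_KEYWORDS if word in lowered}
    elements["risks"] = {word for word in RISK_KEYWORDS if word in lowered}
    return {key: sorted(values) for key, values in elements.items()}

if __name__ == "__main__":
    scene = ("EXT. BROOKLYN BRIDGE - NIGHT. Marla sprints past a stalled "
             "motorcycle as an explosion lights up the river behind Danny.")
    print(break_down_scene(scene))
```

A production tool would run a pass like this across every scene of a screenplay and roll the results up into breakdown sheets, shooting schedules and budget estimates.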

The increased scope brought by Gen AI has stimulated development of open-source products, and also spawned a number of startups, such as Runway, Metaphysic, Resemble AI, DeepBrain AI, and Fable Studio. Some of the big content producers have been helping to stimulate or nurture such startups, for example Comcast NBCUniversal through its LIFT Labs Accelerator program. Taking place over four weeks each year, the program involves assessment, testing, marketing and media assistance, as well as the prospect of deployment within NBCUniversal, which includes Sky in Europe.

At least two of the 2023 crop are involved in pre-production, one being UK-based Charisma.ai, whose technology is designed to personalize narratives to individuals within a script, according to pre-specified goals.

Another is Rephrase.ai, a text-to-video platform aimed more at the business sector, creating lifelike digital avatars and personalizing videos to individuals. This is at the threshold of a new application domain in video personalization, which also holds potential for entertainment, and especially the gaming sector.

It is worth noting there is some overlap between pre-production and subsequent generation of the actual AV content, through avatars and visual effects for example, which can be more generic and reused across multiple productions. Gen AI itself has increased that overlap through its ability to integrate script and content production.

This is evident in the latest Gen AI models, such as OpenAI's Sora, which has taken content generation further by producing convincing film clips up to a minute long from a short text summary. Sora has acquired the ability to place objects properly within complex scenes, as witnessed in a video of a Tokyo street it created for a demonstration during May 2024. The scene follows a couple as they walk past a row of shops.

This exemplifies the rapid progress being made in content creation during the Gen AI era, which can be said to have begun late in 2021, for although its origins date back to the 1950s, it is only recently that there has been the computational power to realize those initial ambitions. Early examples of content creation emerged in 2022 from Meta, Google, and OpenAI itself, as well as one or two startups, but their video output tended to be grainy and riddled with glitches. By contrast, Sora's creations are high definition and far more accurate in their emulation of the scenes, suggesting that a new milestone has been reached for Gen AI in content creation. The other big vendors are quickly catching up.

Inevitably, there has been some pushback against adoption of Gen AI in pre-production. This has come not just from those fearful of the impact on human creatives, but also from concerns over the accuracy of the Gen AI tools and loss of control over the process. However, these fears are quite rapidly being overcome, and major studios are already embracing the methods.

Gen AI is also being adopted in this capacity for advertising, taking targeting to a new level by matching ads to the content within which they are played. The big streamers are taking the lead, with Disney in February 2024 unveiling its Magic Words, which allows its advertising clients to personalize messages to the mood or emotions of the show or movie in which they are embedded, potentially down to the scene level. Google's Performance Max and Meta's Advantage+ are along the same lines, while others such as Apple, and also NBCUniversal, are at least experimenting with this idea.
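As a purely hypothetical illustration of the underlying mechanism, the sketch below matches ad creatives to mood tags attached to individual scenes. None of the names, tags or functions correspond to Disney's Magic Words, Performance Max or Advantage+; they simply show how scene-level mood metadata could drive creative selection.

```python
# Illustrative sketch only: picking an ad creative whose target moods best
# overlap the mood tags a Gen AI tagger might attach to a scene.
# All identifiers, tags and data here are hypothetical.

from dataclasses import dataclass

@dataclass
class AdCreative:
    name: str
    target_moods: set[str]

# Mood tags that a Gen AI content analyzer might produce per scene.
scene_moods = {
    "s01e03_scene_12": {"uplifting", "romantic"},
    "s01e03_scene_13": {"tense", "suspenseful"},
}

creatives = [
    AdCreative("travel_getaway_30s", {"uplifting", "romantic"}),
    AdCreative("home_security_15s", {"tense", "suspenseful"}),
]

def pick_creative(scene_id: str) -> str | None:
    """Return the creative with the largest mood overlap for the scene."""
    moods = scene_moods.get(scene_id, set())
    best = max(creatives, key=lambda c: len(c.target_moods & moods), default=None)
    if best and best.target_moods & moods:
        return best.name
    return None

print(pick_creative("s01e03_scene_13"))  # -> home_security_15s
```

In a real deployment the mood tags would come from a multimodal model analyzing each scene, and this kind of selection would feed into the ad server's existing targeting and frequency logic rather than replace it.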

Clearly, activity around Gen AI in content creation and conception is accelerating this year, and expanding across genres, including advertising.
