Production Trends For 2022

A new year means a new look at how your operations are handling (producing and delivering) content and which new technologies might make things better.

While technology budgets continue to be tight, the past year has seen a huge migration into cloud-hosted captioning workflows, simply because it makes economic sense when hundreds of new titles must be processed at a time. Facing looming financial and content-demand pressures, large broadcasters have begun to understand that they need automation to manage the ever-increasing amount of material they must deliver.

Here are a few tech trends that are certain to grow this year as media companies and broadcasters fight to stay in the game.

Cloud Computing: Future Foundation

The scalability of the cloud ensures that broadcasters only pay for cloud connection and processing costs when they actually need it and can turn off the services when they don’t.

The challenges brought on by the pandemic have led to a rapid acceleration of direct-to-consumer (DTC) and free ad-supported streaming TV (FAST) services. This in turn has driven demand for video processed and distributed in the cloud. In 2022, expect to see even more migration to the cloud, with the goal of reducing cost and improving productivity and flexibility in distributing content to multiple platforms simultaneously. While many technology companies have virtualized hardware products to expand their usage, media organizations (their clients) have been asking for an origination system that can move from one cloud service to another in order to reach consumers worldwide.

Many media companies are migrating to the cloud with the goal of reducing cost and improving productivity and flexibility in content distribution to multiple platforms simultaneously.


This new type of video delivery, which is sure to increase, is supported by virtualized playout systems hosted on public cloud services such as AWS, Azure and Google Cloud, and displayed on any device that supports HLS or MPEG-DASH.
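To illustrate what that delivery looks like on the wire, here is a minimal, hypothetical HLS master playlist of the kind a virtualized playout channel might publish; the variant URIs, bitrates and resolutions are invented for the example:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```

An HLS- or DASH-capable player fetches a manifest like this over HTTP and switches between the listed variants as its measured throughput changes, which is how a single cloud origin can serve many different device types at once.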

Compression Smooths The Way

Reducing file sizes so content can be sent over an IP infrastructure will continue to prove its value in delivering content efficiently and reliably, wherever and whenever a consumer asks for it. Quality of service (QoS) is critical to the success of this model, and compression sits at its core: delivering high-quality video using the least amount of bandwidth over long distances, all of which adds up to a positive consumer experience.

Some of the codecs currently used for video streaming are AV1, H.264/AVC, H.265/HEVC (High-Efficiency Video Coding), VC-1, and Apple ProRes. They all use a mathematical tool called the Discrete Cosine Transform (DCT), which identifies redundant information in an image that can be discarded during compression without significantly affecting image quality.
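The idea can be sketched in a few lines of Python. The block below is an illustrative toy, not a real codec: it applies a 1-D DCT to eight sample values, zeroes the small high-frequency coefficients (the "redundant" information), and reconstructs a close approximation of the original.

```python
import math

def dct(block):
    """Forward DCT-II of a 1-D block (orthonormal scaling)."""
    n = len(block)
    return [
        (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
        * sum(x * math.cos(math.pi * (i + 0.5) * k / n) for i, x in enumerate(block))
        for k in range(n)
    ]

def idct(coeffs):
    """Inverse DCT (DCT-III), recovering the original samples."""
    n = len(coeffs)
    return [
        sum(
            (math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
            * c * math.cos(math.pi * (i + 0.5) * k / n)
            for k, c in enumerate(coeffs)
        )
        for i in range(n)
    ]

# A smooth 8-sample run of luma values, typical of natural images.
pixels = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct(pixels)

# "Compress" by discarding the small high-frequency coefficients.
kept = [c if abs(c) > 5 else 0.0 for c in coeffs]
approx = idct(kept)

print([round(v) for v in approx])  # a close approximation of the original pixels
```

Real codecs do this on 2-D blocks and follow it with quantization and entropy coding, but the principle is the same: most of a natural image's energy concentrates in a few low-frequency coefficients, so the rest can be thrown away cheaply.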

In 2022, expect to see increased use of the new Versatile Video Coding standard (VVC, or H.266), which was developed by the Joint Video Experts Team (JVET) of ITU-T and ISO/IEC MPEG and standardized in July 2020. Early tests suggest that H.266 achieves 40%-50% better compression than HEVC, which is currently used by Apple iPhones and a host of other display devices.
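To put that gain in perspective, here is a quick back-of-the-envelope calculation; the 15 Mbps HEVC baseline is purely illustrative, not a measured figure:

```python
hevc_bitrate_mbps = 15.0   # illustrative bitrate for a 4K HEVC stream
savings = 0.45             # midpoint of the reported 40%-50% improvement

vvc_bitrate_mbps = hevc_bitrate_mbps * (1 - savings)
print(f"VVC at comparable quality: ~{vvc_bitrate_mbps:.2f} Mbps")  # ~8.25 Mbps
```

For a streaming service paying egress by the gigabyte, nearly halving the bitrate at the same picture quality translates directly into delivery-cost savings.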

Also showing promise is the JPEG XS codec, which is designed for applications, like live production, that need to keep latency low across local- and wide-area networks (LANs and WANs). It is a visually lossless, low-latency and lightweight image and video coding system that targets mezzanine compression. Typical compression ratios run up to 10:1 for 4:4:4, 4:2:2 and 4:2:0 images, but can go higher depending on the nature of the image or the requirements of the specific application.
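A rough bandwidth calculation shows why mezzanine compression at these ratios matters for live contribution; the UHD 4:2:2 10-bit 60p feed and the 10:1 ratio below are illustrative assumptions:

```python
# Rough bandwidth math for a UHD 4:2:2 10-bit 60p mezzanine feed.
width, height = 3840, 2160
samples_per_pixel = 2   # 4:2:2: one luma + one chroma sample per pixel on average
bit_depth = 10
fps = 60

uncompressed_bps = width * height * samples_per_pixel * bit_depth * fps
compressed_bps = uncompressed_bps / 10   # a typical 10:1 JPEG XS-style ratio

print(f"Uncompressed: {uncompressed_bps / 1e9:.2f} Gb/s")  # ~9.95 Gb/s
print(f"At 10:1:      {compressed_bps / 1e9:.2f} Gb/s")    # ~1.00 Gb/s
```

A feed that would saturate a 10 GbE link uncompressed fits comfortably alongside other traffic once compressed, which is exactly the niche lightweight mezzanine codecs occupy.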

Faster Wireless Data Delivery Improves Everything

With the latest generation of wireless data delivery services from AT&T and Verizon moving ahead despite calls from federal transportation officials to delay for further testing of their effect on aviation, 5G is now expanding worldwide. This is good news for video productions that rely on wireless delivery of IP packets, like local newsgathering and OTA broadcasting with the ATSC 3.0 system.

5G wireless networks also offer very low latency and fast file delivery.


A camera on location can output camera-original footage to the cloud in real time. As soon as it is shot, video will be available to producers, editors, and journalists anywhere in the world. Editing can start as video is shot in the field.

5G networks also offer very low latency, as little as 1ms versus about 50ms on a 4G network. These new networks will be able to support resolutions up to Ultra High Definition (UHD) and even 8K without artifacts or display interruptions. In addition, live streaming of high-bitrate 4K (and soon 8K) video requires bandwidth of 25Mbps and higher over the user's internet connection. Since 5G is capable of supporting high-bandwidth connections, it can serve as a valuable pipeline for high-bitrate real-time video.
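The practical effect on field workflows is easy to see with a simple transfer-time calculation; the 10 GB clip size and the 50 Mb/s and 1 Gb/s link rates are illustrative assumptions, not guaranteed carrier figures:

```python
# How long does a 10 GB camera-original clip take to reach the cloud?
file_size_gb = 10
file_size_bits = file_size_gb * 8e9  # 8 bits per byte, decimal gigabytes

for name, mbps in [("4G at 50 Mb/s", 50), ("5G at 1 Gb/s", 1000)]:
    seconds = file_size_bits / (mbps * 1e6)
    print(f"{name}: {seconds / 60:.1f} minutes")  # ~26.7 min vs ~1.3 min
```

An upload that once meant waiting for the crew to return to the station can instead finish while the shoot is still underway, which is what makes edit-while-shooting workflows viable.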

Also promising is a new wireless access technology in 5G, in both the sub-6 GHz and mmWave bands, called Massive MIMO (multiple-input multiple-output). Since its inception about a decade ago, it has evolved from an academic idea to become the core technology that likely will be utilized in all future wireless technologies. It can provide uniformly good service to wireless terminals in high-mobility environments. The key concept is to equip base stations with arrays of many antennas, which are used to serve many terminals simultaneously, in the same time-frequency resource. The word “massive” refers to the number of antennas used and not the physical size.

AI Has Far-Reaching Applications

There was a time when the mere mention of bringing artificial intelligence (AI) and machine learning into the media industry brought visions of robots replacing humans. However, far from taking people’s jobs, AI has far-reaching implications that could affect literally every part of the content production/delivery lifecycle. It’s increasing productivity and enabling staff to do more with the same resources.

Indeed, AI is helping production professionals do their job better and streamline workflows, saving hours of time and effort in a myriad of ways. It’s now part of many broadcasters’ toolbox of production options. 

AI is now positively affecting literally every part of the content production/delivery lifecycle.


Many newsrooms now see these fast-computing algorithms as the best production assistant they have ever worked with. The results have been extremely positive: single images stored on massive petabyte storage systems are found in seconds, and programs are played off a server at precise times. It's led to more content, for TV and the web, when broadcasters need it most.

At the operations level, AI solutions are simplifying and enhancing virtually everything in a TV newsroom's workflow, while cutting production costs, speeding up content syndication and significantly reducing the staff hours required for some of the most labor-intensive tasks. Here are just a few examples of how AI is helping the video production community specifically.

For example, virtually every cloud service provider now offers a suite of AI tools that can be selected on a per-usage basis, for everything from remotely switched live production to on-demand storage and processing power. For large-scale, multi-venue productions like the Olympics or a World Cup, the cloud has become invaluable as a scalable platform with effectively unlimited resources. Latency remains a challenge with live telecasts, but it is improving with every new project completed.

AI is also improving the delivery of content via better compression. A group of international technology vendors and broadcasters is developing standards to improve video coding. Calling itself MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence), the group believes that machine learning can improve the efficiency of the existing MPEG-5 Essential Video Coding (EVC) standard by about 25 percent.

The use of AI and machine learning to automate live production tasks also offers an opportunity to bring viewers new content that would otherwise be prohibitively expensive to produce. This will only continue in 2022.

The Year Ahead

Looking forward into 2022, these and other technologies will continue to evolve and improve, making video professionals' jobs easier, adding production values not possible before and helping deliver content in the most effective way. Many of the trends that began in 2020 as a necessity for staying on the air will certainly continue. Serving consumers wherever they are remains the key to success. The audience has fragmented, so flexibility is also critical: there's no one-size-fits-all approach to moving production off-site or to the cloud.

If they haven’t already, broadcasters have to start looking at their operations with a long-term view of supporting both linear TV and a digital distribution model. If they don’t make the transition, their costs will increase, and in today’s highly competitive landscape, no one wants that.
