Future Technologies: Asynchronous Transport
In this first in a series of articles considering technologies of the near future and how they might transform the way we think about broadcast, we begin with the potential of asynchronous transport streams.
The need to maintain backwards compatibility in broadcast television has been a core requirement since the 1930s. This has resulted in a system that is incredibly stable but is, at the same time, television’s Achilles heel.
Although the early days of television witnessed huge technological changes, such as moving from 405-line to 525- and 625-line systems, and then from black and white to color, the changes were relatively slow and well understood. Viewers with existing black and white televisions could still watch their programs even when the images being transmitted were in color, thanks to the massive effort that went into building backwards compatible color broadcasts.
Video and audio compression was first introduced in the analog domain, but even this provided backwards compatible signals through composite outputs, SCART and RGB connectivity for the home viewer. The fundamental requirements of 525-line broadcasts at 29.97fps and 625-line broadcasts at 25fps stayed with us right up to HD. This was an exciting time as not only did we increase the number of video lines, but we also changed the aspect ratio from 4:3 to 16:9. To make the 16:9 broadcasts compatible with 4:3 televisions we had to jump through all kinds of hoops with aspect ratio signaling, giving rise to some interesting faults such as egg-shaped heads and images in both letterbox and pillarbox simultaneously. All to maintain backwards compatibility.
Thinking Ahead
Even today, we see the same attitudes towards backwards compatibility prevailing, and so we still find ourselves compromising on the immersive experience. But everything is now changing with new transport methods such as 5G Broadcast and the internet, and we now have the opportunity to break away from the idea of backwards compatibility.
One major change the internet and 5G Broadcast transmissions have brought to the table is that their native viewing method is not just the television on the home viewer’s wall. The internet and 5G Broadcast are all about being on the move and watching television on mobile devices. And this now provides us with the option of changing one very important aspect of broadcast television: frame rates.
To be pedantic, when suggesting changing frame rates, what I’m really thinking about here is not just increasing the number of frames per second but making the frame rate variable.
Variable Frame Rates
Increasing the number of frames per second is relatively old news, as this is already happening in 4K, UHD, and 8K, where 120fps is established and higher frame rates have been discussed. But there is a debate raging over whether it’s even worth increasing the frame rate at all; instead, we could use the available bandwidth to increase the number of lines and widen the color gamut, as both will deliver an improved immersive experience for the viewer, especially when we consider motion artifacts. And for the purposes of this article, let’s just assume interlace has been dropped for any future video format.
Asynchronous Networks
Variable frame rates are quite an interesting idea as they combine the motion-compensated compression already in use with the illusion of motion achieved through frame sampling. One important aspect of our IP journey is that broadcasters are transitioning from a synchronous system to an asynchronous infrastructure. Fundamentally, broadcast television is a time-invariant sampled system in both the video and audio domains. The playback device must play the video and audio samples at exactly the same rate at which they were sampled by the camera and by the ADC on the microphone. Failure to do this will result in buffer underflow or overflow for both the video and audio streams, resulting in video stuttering and audio squeaks and pops.
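The buffer effect described above is easy to demonstrate. The following sketch (not a real broadcast API; the function name, rates, and buffer sizes are illustrative assumptions) tracks a playout buffer fed at a capture rate and drained at a playback rate, and shows how even a tiny clock mismatch eventually starves or floods any finite buffer:

```python
def simulate_buffer(capture_hz: float, playback_hz: float,
                    seconds: int, start_fill: float = 1000.0,
                    capacity: float = 2000.0) -> str:
    """Track a playout buffer fed at capture_hz and drained at playback_hz."""
    fill = start_fill
    for _ in range(seconds):
        fill += capture_hz - playback_hz  # net samples gained this second
        if fill < 0:
            return "underflow (stutter: player starved of samples)"
        if fill > capacity:
            return "overflow (samples dropped: pops and clicks)"
    return "stable"

# A drift of only 2.4 samples/sec on 48 kHz audio (50 ppm clock offset)
# still empties the buffer within minutes.
print(simulate_buffer(48000.0, 48000.0, 3600))  # matched clocks: stable
print(simulate_buffer(48000.0, 48002.4, 600))   # player slightly fast: underflow
```

This is why a synchronous chain locks every device to a common reference: the only way to keep the buffer level constant forever is to make the two rates identical.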
IP is asynchronous by design, and both the internet and 5G Broadcast transport streams take advantage of this. However, as broadcast engineers have had synchronous working practices built into their psyche from the very first time they walked into a studio, it’s not surprising that many of the broadcast-IP standards we’re working with today are focused on imposing synchronous methodologies onto IP. Unfortunately, IP networks do not lend themselves well to this kind of restriction. Instead, if broadcasters play the asynchronous game, that is, if we work with asynchronous networks instead of imposing strict timing constraints onto them, we will find ourselves in a much better position.
Advancing Stat-Multiplexing
This thought process has an interesting side-effect for variable rate video frames: we can combine it with intelligent statistical multiplexing to improve network bandwidth utilization. This is a development of the stat-mux method used in video compression, where the stat-mux dynamically allocates bandwidth to individual video compression devices so that higher quality video can be delivered during periods of high motion transients, which is typically where greater bandwidth is required. The stat-mux works on the principle that not all the video compression devices contributing to the transport stream will need maximum bandwidth all the time, and that the video content will vary, so the stat-mux can allocate the bandwidth to each device as required.
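The core stat-mux principle can be sketched in a few lines. In this illustrative example (the function name, the complexity metric, and the 500 kbps floor are all assumptions, not any real multiplexer's interface), a shared bitrate pool is divided among encoders in proportion to the complexity of their current content, with a minimum floor so no service collapses:

```python
def stat_mux_allocate(total_kbps: float, complexities: list[float],
                      floor_kbps: float = 500.0) -> list[float]:
    """Split the pool proportionally to content complexity, with a floor."""
    spare = total_kbps - len(complexities) * floor_kbps  # pool above the floors
    weight = sum(complexities)
    return [floor_kbps + spare * c / weight for c in complexities]

# Three services sharing a 20 Mbps multiplex: a high-motion sports channel
# momentarily borrows bandwidth from two near-static channels.
alloc = stat_mux_allocate(20000.0, [8.0, 1.0, 1.0])
print([round(a) for a in alloc])  # → [15300, 2350, 2350]
```

Note that the allocations always sum to the pool total; the stat-mux never creates bandwidth, it only redistributes it moment to moment.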
If we take the concept of stat-muxing and apply it to variable frame rates, then we find we have an analogous solution. But this time, the stat-mux will analyze the IP network directly and determine how much bandwidth is available and how much can be allocated to each variable rate video compression device. Now, instead of varying the bandwidth while keeping the frame rate constant, the bandwidth can be varied as a function of the frame rate. In other words, as the motion content of the video stream increases, so does the frame rate. After all, why would we bother sending 120fps for a relatively static shot?
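The motion-to-frame-rate idea can be made concrete with a simple mapping. This is a minimal sketch under stated assumptions: the linear mapping, the normalized motion metric, and the 24–120fps range are illustrative choices, not part of any standard:

```python
def frame_rate_for_motion(motion: float, min_fps: int = 24,
                          max_fps: int = 120) -> int:
    """Map a normalized motion metric (0.0 = static, 1.0 = frantic) to fps."""
    motion = max(0.0, min(1.0, motion))  # clamp out-of-range metrics
    return round(min_fps + (max_fps - min_fps) * motion)

print(frame_rate_for_motion(0.05))  # near-static interview shot → 29
print(frame_rate_for_motion(0.9))   # fast sports action → 110
```

In practice the mapping would likely be nonlinear and hysteretic to avoid the rate flapping on noisy motion estimates, but the principle stands: frame rate, not just bitrate, becomes the variable the multiplexer trades.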
Increasing Compression Dimension
Another possibility is to map the video from pixels into objects with variable time so that the motion vectors are not limited by the sampling of the video frames. This way, the playback device can decide how the vector representations of the objects are rendered on the display. This also adds another dimension to compression, as the vector representations could have thresholds built into them that determine the data rate, and hence the quality, of the images.
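To see why object-with-time representations decouple the stream from any frame rate, consider this sketch. Everything here is a hypothetical illustration (the class name, the one-dimensional linear-motion model): motion is stored as a continuous trajectory per object, so the playback device can sample it at whatever display rate it chooses:

```python
from dataclasses import dataclass

@dataclass
class MotionObject:
    x0: float  # position at t = 0, in arbitrary screen units
    vx: float  # velocity, units per second

    def position_at(self, t: float) -> float:
        """Continuous trajectory: not tied to any camera frame rate."""
        return self.x0 + self.vx * t

ball = MotionObject(x0=0.0, vx=10.0)

# The same stream renders cleanly at 30fps or 120fps on the display side;
# the decoder simply evaluates the trajectory at its own sample times.
for fps in (30, 120):
    positions = [ball.position_at(n / fps) for n in range(3)]
    print(fps, [round(p, 3) for p in positions])
```

A real system would carry far richer trajectories than a straight line, but the key property is visible even here: the frame rate is a decision made at playback, not a constraint baked into the transmission.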
If we combine variable frame rates with pixel-to-vector object mapping and then increase the sampling rate to the point where its limit tends to infinity, we have a light-field camera! But the really interesting aspect of this is that the data rate becomes a function of the image content of the video and is asynchronous. Yes, MPEG and AVC codecs already do this, but they are still constrained to fixed video frame rates and are synchronous, all to maintain backwards compatibility with the decisions that were made in the 1930s, which imposes massive restrictions on IP and 5G-NR networks.
If we think this is all sci-fi, then just ask the question: why do we have frame rates in video reproduction? There are two reasons: first, to convince the human visual system that motion exists; and second, sampled video frames were the only way we could make CRTs display moving pictures back in the 1930s.
Learn From Gaming
If we look at high-end gaming, then we see that GPU technology is already providing object-to-pixel mapping and it’s not a massive leap of faith to then move to variable frame rates. It’s entirely possible that we could even go directly from the archaic video sampled system we currently have, directly to object-to-pixel mapping with timestamps that provide more of a light-field representation.
IP and 5G Broadcast are allowing us to think differently about how we make television. As a new generation of mobile viewing devices comes into use, broadcasters can think differently about how they deliver the complete immersive experience. IP and 5G Broadcast are asynchronous, and this asynchronous thinking is something we should hold onto and embrace as we design the formats of the future.