Audio Global Viewpoint – June 2022

Context Of Latency

The discussions around latency are well documented, but is there another way of addressing this thorny subject?

Latency means different things to different people. If we're working in a studio then latency is on the order of a few lines of video, but if we're working in playout or transmission then we could be looking at frames' or even seconds' worth of delay.

Several lines of latency within a studio should generally not be too much of a concern, as modern production switchers have buffers on their inputs or ME banks that can retime multiple lines of video. However, whether multiple frames of video latency in a studio is really an issue is a question I often ask myself, especially when we start talking about the nanosecond timing attributes of PTP.
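To put those orders of magnitude side by side, a rough back-of-the-envelope calculation helps. The sketch below assumes a 1080p50 raster and a typical sub-microsecond PTP lock purely for illustration; the exact figures will differ by format and installation.

```python
# Rough orders of magnitude for studio timing, assuming a 1080p50 raster
# (1125 total lines per frame) and an assumed sub-microsecond PTP lock.
# Figures are illustrative only.

FRAME_RATE_HZ = 50          # 1080p50
TOTAL_LINES = 1125          # total lines per frame, including blanking

frame_time_s = 1 / FRAME_RATE_HZ            # 20 ms
line_time_s = frame_time_s / TOTAL_LINES    # ~17.8 us
ptp_accuracy_s = 1e-6                       # assumed lock accuracy

print(f"One frame:            {frame_time_s * 1e3:8.3f} ms")
print(f"One line:             {line_time_s * 1e6:8.3f} us")
print(f"A few lines (say 5):  {5 * line_time_s * 1e6:8.3f} us")
print(f"Assumed PTP accuracy: {ptp_accuracy_s * 1e6:8.3f} us")
```

Even a handful of lines of buffering amounts to tens of microseconds, while a single frame is tens of milliseconds, which is why nanosecond-level PTP accuracy and frame-level studio latency sit in entirely different conversations.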

Maybe it's not so much the latency of individual timing planes that is the challenge, but their combined and concatenated effects. Even so, we're probably bordering on what is acceptable, in part due to the lip sync issues created when we reach multiple frames of delay. Whenever video has to be retimed from an outside source, the audio must also be retimed and rate converted to match the station's video and sampling rates.
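As a sketch of how individual delays concatenate, the snippet below sums some assumed per-stage video latencies and compares the resulting audio-to-video offset against the commonly quoted lip-sync detectability window of roughly 45 ms audio lead and 125 ms audio lag. All the stage values are invented for illustration, not measurements of any particular plant.

```python
# Sketch: concatenating assumed per-stage video delays and comparing the
# resulting audio-to-video offset against a lip-sync tolerance window.
# Stage values and thresholds are illustrative assumptions.

video_stages_ms = {
    "frame synchronizer": 40.0,   # one frame at 25 fps, assumed
    "production switcher": 20.0,
    "graphics insertion": 40.0,
}
audio_path_delay_ms = 10.0        # assumed fixed audio processing delay

video_delay_ms = sum(video_stages_ms.values())
av_offset_ms = video_delay_ms - audio_path_delay_ms  # positive = audio leads

# Commonly quoted detectability limits, used here as assumptions
AUDIO_LEAD_LIMIT_MS = 45.0
AUDIO_LAG_LIMIT_MS = 125.0

print(f"Concatenated video delay: {video_delay_ms:.1f} ms")
print(f"Audio leads video by:     {av_offset_ms:.1f} ms")
if av_offset_ms > AUDIO_LEAD_LIMIT_MS or -av_offset_ms > AUDIO_LAG_LIMIT_MS:
    print("Offset exceeds the assumed lip-sync window: delay the audio to match")
```

No single stage is alarming on its own; it's the sum that pushes the audio-to-video offset outside the tolerable window and forces compensating audio delay.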

This further points to the ability of individual equipment to be timed and the capabilities of its frame-lock synchronization. If a piece of graphics equipment can be frame synced, then all is good. However, if we must rely on a frame synchronizer to retime it to the production switcher, then we invariably add at least a frame's worth of latency, possibly even more.
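The cost of that frame synchronizer is easy to quantify: one frame of added delay translates into very different amounts of time depending on the frame rate, as the short calculation below illustrates.

```python
# One frame of added delay expressed in milliseconds for common frame rates.
frame_rates_hz = [23.976, 25, 29.97, 50, 59.94]

for rate in frame_rates_hz:
    print(f"1 frame at {rate:>6} fps = {1000 / rate:6.2f} ms")
```

At 25 fps that single frame costs 40 ms, and a synchronizer that buffers closer to two frames doubles it, which is how studio latency quietly creeps towards the lip-sync limits discussed above.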

But what about distribution? In the analog days this was limited to the propagation delay of the transmission paths. But as we move to internet delivery and everything that goes with it, we suddenly find ourselves looking at latency on the order of tens of seconds, or even minutes. Not only have we introduced many types of transcoding, but we must now deal with the effects of sending synchronous video and audio signals through an asynchronous IP network. As the internet was never designed to distribute synchronous video and audio signals, engineers have had to develop creative and innovative ideas to make OTT a reality, one consequence being massive latency (relatively speaking).
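A simplified model of segment-based OTT delivery shows where those tens of seconds come from. The segment length, player buffer depth and encode/CDN figures below are assumptions chosen only to illustrate the shape of the budget, not the behavior of any specific service.

```python
# Simplified OTT latency budget for segment-based delivery (e.g. HLS/DASH).
# All figures are assumptions for illustration, not measurements.

segment_duration_s = 6.0     # assumed segment length
player_buffer_segments = 3   # players typically buffer several segments ahead
encode_package_s = 4.0       # assumed encode, transcode and packaging delay
cdn_propagation_s = 1.0      # assumed CDN and network delay

glass_to_glass_s = (
    encode_package_s
    + segment_duration_s * player_buffer_segments  # buffered before playback
    + cdn_propagation_s
)
print(f"Approximate glass-to-glass latency: {glass_to_glass_s:.0f} s")
# Roughly 23 s here, against a few frames (tens of ms) in the studio.
```

The dominant term is simply the player buffering whole segments before it dares to start playback, which is why shortening segments or moving to chunked transfer is the usual route to lower OTT latency.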

The point is that there are many different sources of latency in the broadcast chain, and many of them vary depending on the context and use case. Even in the studio we can find different latencies becoming relevant depending on where we're looking in the signal path. And outside broadcasts not only have to deal with the propagation delays of the contribution medium, but also with the delays of the codecs required to deliver the appropriate data rates for the studio.

For me, the phrase “low latency” has only limited meaning unless the context in which it is being used is correctly specified. As we know from experience, the context of the latency is now as important as the latency itself.