Hardware Infrastructure Global Viewpoint – October 2020
Predictably Low Latency

We often hear vendors and pundits speak of “low latency” and “improving latency”. But what if we were to think in terms of predictable latency? Is this possible and would it help broadcasters?
Latency and cost seem to be inextricably connected. That is, the lower the latency of a system, the higher its cost. And by cost I don’t just mean money, but also complexity and the precision of the resources required.
My observation is that we’ve inadvertently carried the philosophy of sub-microsecond SCH and line timing into the world of IP. This has resulted in incredibly complex infrastructures and an almost obsessive drive to push latency down to ever more arbitrary units of time. My question is this: do we really need to keep pushing latency to such low levels, or is it time to think of a new solution? Maybe predictably low latency is of more use to us than the endless pursuit of unqualified low latency?
I’m not even sure what “low latency” means, as there seems to be little in the way of definition in the broadcast industry. ST 2110 certainly specifies packet distribution rates and fixes them to tight tolerances, but at the application level there is no standard specifying overall latency. There seems to be a school of thought that says, “keep latency low everywhere and everything will be fine”, but I think this is more blind optimism than good engineering practice.
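To give a sense of scale, here is a rough back-of-envelope sketch of the packet pacing involved in an uncompressed HD stream. The frame rate, sampling format, and payload size are illustrative assumptions of mine, not figures taken from the standard:

```python
# Back-of-envelope packet pacing for an uncompressed HD video stream.
# All figures are illustrative assumptions, not values taken from ST 2110.

FRAME_RATE = 50            # frames per second (assumed)
WIDTH, HEIGHT = 1920, 1080
BITS_PER_PIXEL = 20        # 4:2:2 10-bit sampling (assumed)
PAYLOAD_BYTES = 1200       # assumed RTP payload per packet

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL
packets_per_frame = bits_per_frame / (PAYLOAD_BYTES * 8)
frame_period_us = 1_000_000 / FRAME_RATE
mean_packet_spacing_us = frame_period_us / packets_per_frame

print(f"Packets per frame:   {packets_per_frame:,.0f}")        # ~4,320
print(f"Frame period:        {frame_period_us:,.0f} us")       # 20,000 us
print(f"Mean packet spacing: {mean_packet_spacing_us:.2f} us")  # ~4.63 us
```

Even this crude estimate lands at a few microseconds between packets, which is why sender pacing is specified so tightly, and why it is a quite separate question from the end-to-end latency an application actually needs.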
Furthermore, we have been living with relatively large amounts of latency in our standard studio designs for many years without too much complication. For example, the ubiquitous frame synchronizer regularly introduces two frames of delay into the video path. As long as we delay the associated audio by a similar amount, nobody notices.
Another example of inherent but unrecognized latency is the production switcher. When the director calls a shot and the switcher is cut, there is a finite amount of time for the electronics in the rack (possibly some distance away) to receive and acknowledge the command and switch the signal. To maintain synchronous switching and reduce the effect of frame tearing, the video switch takes place in field blanking, and this alone can easily introduce one or two frames of delay between the switch command and the pictures changing. And that’s before we start considering the effects of compression.
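To put these frame-based delays into real numbers, here is a quick sketch converting frames of delay into milliseconds at common frame rates. The frame counts are only the illustrative figures mentioned above, not measurements of any particular product:

```python
# Convert frames of delay into milliseconds at common frame rates.
# The frame counts are the illustrative figures from the text above:
# roughly two frames for a frame synchronizer and up to two frames
# for a switch executed in field blanking.

def frames_to_ms(frames: float, frame_rate: float) -> float:
    """Delay in milliseconds for a given number of frames."""
    return frames * 1000.0 / frame_rate

for rate in (25.0, 29.97, 50.0, 59.94):
    sync_delay = frames_to_ms(2, rate)      # frame synchronizer
    switch_delay = frames_to_ms(2, rate)    # switch in blanking (worst case)
    print(f"{rate:5.2f} fps: frame sync ~{sync_delay:5.1f} ms, "
          f"switch ~{switch_delay:5.1f} ms, "
          f"combined ~{sync_delay + switch_delay:5.1f} ms")
```

At 25 fps that worst case is already around 160 ms before compression enters the picture, yet nobody would describe a conventional studio as “high latency”, which rather suggests the raw number matters less than knowing what it is.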
Humans have an amazing ability to adapt to new and changing environments. Lockdown has generated outstanding innovation and introduced working practices many of us would have considered unworkable in any other situation. There are numerous stories of crews switching video and mixing audio remotely from their homes because they couldn’t physically enter their studio control rooms. The latencies they’ve experienced must have been massive, certainly hundreds of milliseconds if not whole seconds. But they’ve adapted and made exceptional productions. Maybe cloud studios are closer than we think?
We aim for ever lower latency (whatever that may be) because of a conditioned, and in my view self-limiting, belief that is holding us back from truly exploiting the benefits IP can deliver. IP is an asynchronous and transactional system; let’s work with it, not against it. We certainly need accurate timing, and PTP delivers it. But this isn’t just about accuracy; it’s also about making systems predictable, and infrastructures that are predictable are by definition simpler to design, operate, and maintain.
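One way to make latency predictable rather than merely low is to schedule playout at a fixed, agreed offset from the capture timestamp instead of presenting packets the moment they arrive. The sketch below illustrates the idea only; the 100 ms budget and the function names are my own assumptions, not part of any standard:

```python
# Sketch of scheduled playout against a shared (e.g. PTP-disciplined) clock.
# The 100 ms budget and all names here are illustrative assumptions.

import time

AGREED_LATENCY_S = 0.100   # the latency the facility has agreed and can measure

def playout_deadline(capture_time_s: float) -> float:
    """Absolute time at which a frame captured at capture_time_s should be presented."""
    return capture_time_s + AGREED_LATENCY_S

def present_frame(frame, capture_time_s: float) -> None:
    """Hold the frame until its deadline, then present it."""
    deadline = playout_deadline(capture_time_s)
    now = time.time()               # stand-in for a PTP-disciplined clock
    if now > deadline:
        # The budget was exceeded; the failure is visible and measurable,
        # which is the whole point of working to an agreed figure.
        print(f"late by {(now - deadline) * 1000:.1f} ms")
    else:
        time.sleep(deadline - now)  # wait out the remaining budget
    # ... hand the frame to the display or downstream device here ...
```

The figure everyone then works to is the agreed budget, not the best case the network happened to deliver on a quiet day, and that predictability is what makes the infrastructure simpler to reason about.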
My point is this: we know we can adapt. If we can free ourselves from the confined thinking instilled in us by decisions made in the 1930s, and accept as an industry that we should aim for a latency that is more tangible, then I believe we will increase the flexibility and reliability of a broadcast facility by several orders of magnitude. And by “tangible” I mean a latency that can be measured and agreed upon, not a meaningless and unqualified term such as “low”.