Test, QC & Monitoring Global Viewpoint – October 2020
Empowering PTP

Maintaining tight timing constraints has always been at the core of broadcast television. But as we continue to migrate broadcast facilities to IP infrastructures, can we make better use of PTP to improve system management and signal processing while keeping latency low?
It seems to be a fact of life that the complexity of a system increases exponentially as we tighten timing tolerances. The smaller the tolerance, the more expensive the system. Imagine if there were a specification that constrained the response to a mouse click on a web page to 100 milliseconds. It would cause havoc on the internet, and the cost of connectivity would increase disproportionately.
I joined television just at the time when analogue PAL and NTSC were still prevalent. Each morning my colleagues and I would time the camera outputs through the production switcher against the station sync pulse generator. We’d adjust the line timing and SCH phase to make sure all the cameras, VTRs and graphics kit synced correctly. Then SDI came along and the production switcher had digital line buffers built into each program and M/E bank. The station video sources synced automatically, so there was no longer any need for line tweaks.
Fast forward twenty-odd years and we’re now deep in the IP revolution. One of the interesting challenges of our migration is that we need to make synchronous video and audio work over asynchronous IP networks. I accept that frame-based video formats and time-invariant audio sampling require synchronous methodologies to operate reliably, and that this historical practice is based on firm engineering principles and good practice. But my question is this: should we still be thinking in these linear terms? How can we better use PTP to our advantage?
Latency is our enemy, but it is also a fact of life. In my opinion, SMPTE’s ST2110 designers have done an outstanding job of making real-time video and audio work reliably over asynchronous IP networks with the minimum of latency. They’ve paved the way for innovators to provide solutions to problems that many of us haven’t thought of yet. It’s fair to say that some interesting “quirks” of the specification have attracted vocal criticism, but the proof of the pudding is in the eating, and we’re now seeing major broadcasters all over the world adopting the standard.
For me, the dominant ST2110 advance has been the introduction of PTP-referenced timestamps. This predictable temporal element not only allows us to synchronize video and audio accurately, but also empowers us to adopt intelligent signal management and processing strategies.
The addition of timestamps provides an extra dimension of temporal information, facilitating more creative and efficient methods of buffer management and better latency prediction. Instead of treating video and audio as anonymous packets of data, we can now understand and predict where packets exist temporally within the stream (or multiple streams, in the case of network load balancing).
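To make that concrete, here is a minimal sketch in Python of the ST2110 timestamp model (the helper names are mine, not from any vendor API): the RTP timestamp is the PTP time sampled at the media clock rate (90 kHz for video), modulo 2^32, so a receiver locked to the same PTP reference can place any packet on the common timeline and size its buffers accordingly.

```python
# Minimal sketch of the ST2110 timestamp model. Helper names are
# illustrative; this is not a real ST2110 receiver.

VIDEO_CLOCK_HZ = 90_000      # ST2110-20 video media clock rate
RTP_WRAP = 2 ** 32           # RTP timestamps are 32-bit and wrap

def rtp_timestamp(ptp_seconds: float, clock_hz: int = VIDEO_CLOCK_HZ) -> int:
    """Map a PTP time (seconds since the PTP epoch) to an RTP timestamp."""
    return int(ptp_seconds * clock_hz) % RTP_WRAP

def time_offset(ptp_now: float, packet_ts: int, clock_hz: int = VIDEO_CLOCK_HZ) -> float:
    """How far (in seconds) a packet's sample time sits from 'now'.

    Sender and receiver share the same PTP timeline, so the receiver can
    judge whether a packet is early or late and manage buffers accordingly.
    """
    now_ts = rtp_timestamp(ptp_now, clock_hz)
    # Wrap-safe signed difference within the 32-bit timestamp space.
    delta = (packet_ts - now_ts + RTP_WRAP // 2) % RTP_WRAP - RTP_WRAP // 2
    return delta / clock_hz
```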
We no longer have to treat the signal as a continuous stream of data as we did in the analogue PAL, NTSC, and even SDI days. Yes, this does imply increases in buffer capacity, and hence possibly some added latency, but if we think in parallel terms instead of carrying the linear, serial baggage we inherited from decisions made in the 1930s, we will be amazed at the solutions we can find. For example, why does an ST2110 video stream have to be transferred sequentially over one IP link? It doesn’t! So spine-and-leaf topologies are not the only solution…
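As a toy illustration of that parallel idea, and assuming only standard Python (the Packet class and link names below are hypothetical), a receiver could merge one stream’s packets arriving over two separate links by RTP sequence number, so the downstream buffer sees a single coherent stream regardless of which path each packet took.

```python
import heapq
from dataclasses import dataclass, field

# Toy sketch only: a real receiver would also use the PTP-referenced
# timestamps for buffer sizing, and would handle loss and jitter.

@dataclass(order=True)
class Packet:
    seq: int                                      # extended RTP sequence number
    payload: bytes = field(compare=False, default=b"")

# Two hypothetical links, each carrying its share of the stream in order.
link_a = [Packet(s) for s in range(0, 10, 2)]     # even packets via path A
link_b = [Packet(s) for s in range(1, 10, 2)]     # odd packets via path B

# heapq.merge re-interleaves the per-link streams into sequence order.
ordered = heapq.merge(link_a, link_b)
print([p.seq for p in ordered])                   # [0, 1, 2, ..., 9]
```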
PTP and ST2110 have a wealth of information embedded in them, and I’m looking forward to seeing vendors and innovators use it to improve efficiency, predict latency, and simplify systems.