In the last article we looked at security detection and prevention systems and their place in a broadcast network. In this article we continue the theme of looking at a network from a broadcast engineer’s point of view, so they can better communicate with the IT department, and examine how timing systems work in the IP domain.
Broadcast has timing intrinsically built into its signal paths. For example, analog PAL and NTSC had field and line sync pulses to synchronize the scanning process in cathode ray tubes, and color subcarrier bursts synchronized the flywheel oscillator to lock the color demodulation frequency. Even SDI and AES use self-clocking line codes so that the receiver can recover the sample clock directly from the data stream.
SPGs Still Needed
Clock synchronization is extremely important in both synchronous and asynchronous digital television systems. The problem we are trying to solve is keeping the encoder and decoder sample clocks at the same frequency and in phase. If we do not, one clock will run faster than the other, resulting in either too many or too few samples reaching the decoder.
Lost samples in uncompressed signals cause an instantaneous audio splat or the loss of a video pixel. In compressed systems the effect can be much worse, as forward and backward prediction between frames can propagate the error over a prolonged period.
Broadcasters have gone to great lengths to provide master clock referencing for both audio and video in the form of master sync pulse generators.
Although Ethernet line coding lets each receiver recover a bit clock from the incoming data, these clocks are not synchronized between network interface cards (NICs), so we cannot use them as a form of global synchronization.
GPS has been used in the past to lock encoders and decoders; however, it proved impractical wherever the equipment has no line of sight to the satellites.
The Precision Time Protocol (PTP), standardized as IEEE 1588, was developed by the IEEE to address the issue of network timing. PTP was designed as a standard for many different industries, and as it can provide sub-microsecond accuracy it lends itself well to broadcast television.
PTP works in a master-slave topology. One server or dedicated device is nominated as the master clock, and all other devices within the subnet synchronize to it, forming a network of synchronized devices.
Although the protocol can run over standard routers without modification, some configuration work has to be done to give the timing datagrams the fastest, shortest-delay path through the network. Network engineers achieve this by setting quality of service (QoS) markings for specific types of datagram, a form of traffic prioritization that lets routers switch the timing signals ahead of bulk traffic.
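As a concrete illustration, an application can mark its own datagrams for priority treatment by setting the DSCP field on the socket. This is a minimal sketch, assuming DSCP 46 (Expedited Forwarding) is the value the network design reserves for timing traffic; the routers must separately be configured to honor the marking.

```python
# Sketch: marking outgoing UDP datagrams with a high-priority DSCP value.
# DSCP 46 (Expedited Forwarding) is a common choice for timing traffic,
# but the correct value is a network-design decision, not fixed by PTP.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DSCP_EF = 46
# The DSCP code point occupies the upper six bits of the IP TOS byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Real PTP implementations apply this kind of marking (or VLAN priority tagging) automatically; the point is that prioritization is an end-to-end agreement between the endpoints and every switch in the path.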
The time difference between the master and slave clocks consists of two components: the clock offset and the message transmission delay. To synchronize the slave to the master, correction is therefore applied in two parts: offset correction and delay correction.
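Both corrections can be computed from four timestamps exchanged between master and slave. A minimal sketch of the arithmetic, assuming a symmetrical network path and illustrative nanosecond values (this is the standard two-step calculation, not a real PTP stack):

```python
# Hedged sketch of the PTP offset/delay calculation.
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.
def ptp_correction(t1, t2, t3, t4):
    # Mean path delay, assuming forward and reverse delays are equal.
    delay = ((t2 - t1) + (t4 - t3)) / 2
    # Offset of the slave clock relative to the master.
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 500 ns ahead, network delay 1000 ns each way.
offset, delay = ptp_correction(t1=0, t2=1500, t3=10_000, t4=10_500)
print(offset, delay)  # 500.0 1000.0
```

The slave subtracts the offset from its clock; the delay term is what makes the correction valid even though the messages take time to cross the network.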
The master clock should be a very accurate generator capable of providing nanosecond-resolution timestamps (equivalent to a 1GHz clock), either locked to GPS or deriving its clock from an oven-controlled oscillator, in a similar way to the broadcast sync pulse generator. Established manufacturers of SPGs are now including PTP clock outputs on their products.
In a similar way to Unix time, PTP uses the concept of an epoch. This is the absolute moment at which the clock was set to zero; the number of nanoseconds that have elapsed since then gives the current time. Software converts this count into human-readable year, month, day, hours, minutes and seconds. The epoch (or zero time) for PTP was set at midnight on 1st January 1970.
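A sketch of that conversion in Python, using an illustrative nanosecond count and ignoring the leap-second offset between PTP's TAI timescale and UTC:

```python
# Sketch: converting a nanosecond count since the 1970 epoch into a
# human-readable date. Real PTP time runs on the TAI timescale, which is
# offset from UTC by leap seconds; that correction is ignored here.
from datetime import datetime, timezone

nanoseconds_since_epoch = 1_700_000_000_000_000_000  # illustrative value
seconds, remainder_ns = divmod(nanoseconds_since_epoch, 1_000_000_000)
t = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(t.strftime("%Y-%m-%d %H:%M:%S"), f"+{remainder_ns} ns")
```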
Software Stacks Cause Clock Jitter
As PTP uses a 1GHz master clock and the granularity of the slave clock can be accurate to 1 ns, the clock should be thought of as an event clock or presentation-time clock rather than an absolute pixel count.
Software timing is notoriously unpredictable, which is why manufacturers have kept to hardware solutions for time-critical processing such as video playout. When PTP masters create timing datagrams, and slaves receive them, the timestamp should be inserted by specially designed network interface cards at the Ethernet layer. If it were inserted by the software stack, jitter would occur due to the unpredictable interactions of the operating system and software stacks.
When sending a video frame, some pixels will arrive ahead of their display time and some behind. Buffers smooth this out, and the internal presentation software makes sure the frame is constructed before the next field pulse comes along. In effect, the frame pulses are synchronized by PTP, so the frame rate of the receiver is locked to that of the encoder.
The benefits of this method of synchronization go beyond video and audio playout. PTP now provides us with a predictable event clock, so we can trigger events in the future instead of relying on centralized cues. If a regional opt-out for ads was scheduled at 19:26:00hrs, the remote playout servers would all switch the program out at 19:26:00hrs to play the regional ads, to within a nanosecond of each other. As long as the schedule database is correctly replicated to all of the regional playout servers, we no longer have to rely on cue tones to provide opt-outs.
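The opt-out logic above reduces to a simple comparison against the shared clock. A hedged sketch, in which the function name and schedule format are illustrative rather than taken from any particular playout product:

```python
# Sketch: clock-driven opt-out switching. Every regional playout server
# runs the same comparison against its PTP-locked clock and switches
# locally, with no cue tone. Names and schedule format are illustrative.
def should_switch(now_ns, switch_at_ns):
    # With all servers locked to PTP, this fires within the clock's
    # granularity on every server at effectively the same instant.
    return now_ns >= switch_at_ns

# 19:26:00 expressed as nanoseconds since midnight.
switch_at = (19 * 3600 + 26 * 60) * 1_000_000_000

print(should_switch(switch_at - 1, switch_at))  # False: 1 ns early
print(should_switch(switch_at, switch_at))      # True: on the tick
```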
The PTP protocol allows master and slave devices to be daisy-chained, so a slave device can become the master for another subnet. In this way we can have entire LANs and WANs synchronized together, allowing broadcast devices to accurately switch and mix between sources.
In traditional analog and SDI studios there tended to be just one timing plane for the video: the vision switcher. If multiple vision switchers were used, then video synchronizers had to be employed to provide another timing reference. PTP removes this need, as the timing plane is essentially the same throughout the entire network once all slaves and masters become synchronous.
A new timing dimension has been brought to broadcast television.