Timing - Tails of Truth

Recognising the extremities of the bell curve distribution and the impact large jitter variances have on picture and sound stability is critical. Only when this is understood, do we hope to be able to build reliable, flexible, IP infrastructures that will meet the demands of real-time broadcast facilities.

A bell curve is the most natural representation of the distribution of variables, and here depicts a typical IP packet flow.

A broadcaster’s primary choice of IP transport is Ethernet. Affordable network speeds have been creeping up year on year from 10Gb/s to 25Gb/s, 40Gb/s, 50Gb/s, and now 100Gb/s. Native 4K is already achievable over IP, and 8K will soon follow as adoption of higher interface speeds continues to increase.

SMPTE’s recently released ST2110 specification abstracts the essence away from the underlying bit clocks of synchronous SDI. Using RTP (Real-time Transport Protocol) packet transport, ST2110 uniquely timestamps each video, audio, and metadata packet to facilitate independent essence stream processing.

PTP Subsystem

For the RTP packet transport to work reliably in ST2110, each connected device must be synchronised to the same absolute time source. Borrowing from the IT industry, SMPTE adopted the IEEE 1588-2008 standard, better known as PTP (Precision Time Protocol), through its ST2059-2 profile. This ensures all connected devices, such as cameras, monitor-walls, and sound consoles, share the same clock counter to within a microsecond, allowing events to be synchronised with microsecond accuracy.
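The exchange behind that synchronisation can be sketched with the standard PTP offset and delay equations. The timestamps below are illustrative values only, and the calculation assumes a symmetric path delay between master and slave, as IEEE 1588 does.

```python
# Sketch of the IEEE 1588 (PTP) offset/delay calculation.
# t1: Sync sent by master, t2: Sync received by slave,
# t3: Delay_Req sent by slave, t4: Delay_Req received by master.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs master
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay (assumed symmetric)
    return offset, delay

# Illustrative numbers: slave clock runs 10 us ahead, true path delay 5 us.
t1 = 0.0
t2 = t1 + 5e-6 + 10e-6    # arrival per slave clock: delay + offset
t3 = t2 + 100e-6          # slave sends Delay_Req some time later
t4 = t3 - 10e-6 + 5e-6    # arrival per master clock: -offset + delay
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)      # recovers the 10 us offset and 5 us delay
```

The slave then steers its local clock by the computed offset, which is how every device on the network converges on the same counter.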

Consequently, engineers now have two timing planes to consider: the underlying PTP synchronisation sub-system and the time-stamping of the essence packet streams.

Jitter is inherent in an IP network and cannot be avoided, as a single Ethernet connection may carry packets from many different sources and destinations. The act of forwarding packets, buffering them, and multiplexing and demultiplexing them asynchronously causes jitter.
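A first-pass way to quantify that jitter is to look at the inter-arrival gaps of captured packets. The arrival times below are hypothetical, standing in for a capture of a flow that should be paced at roughly one packet every 3.7 microseconds.

```python
import statistics

# Hypothetical packet arrival timestamps in seconds; a perfectly paced
# source would arrive every ~3.7 us, but switch queuing spreads the gaps.
arrivals = [0.0, 3.6e-6, 7.5e-6, 11.0e-6, 15.2e-6, 18.4e-6]

# Inter-arrival gaps between consecutive packets.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]

mean_gap = statistics.mean(gaps)    # average packet spacing
jitter = statistics.stdev(gaps)     # spread of the spacing = jitter
print(f"mean gap {mean_gap * 1e6:.2f} us, jitter {jitter * 1e6:.2f} us")
```

The mean gap alone looks healthy; it is the spread around it that tells you how hard the receive buffers are working.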

Packet Distribution Varies

Devices sending IP packets cannot be assumed to distribute them evenly in time. Generally, the transmit software will output data as fast as possible to free the Ethernet driver for other services within the device.

An IP monitor-wall fed via a switch will receive many sources of video, audio, VANC data and the PTP data simultaneously on the same link. The switch must prioritise PTP packets to keep video and audio in sync.

An Ethernet switch will prioritise some packets over others using its Quality of Service mechanism. An IP monitor-wall connected to a switch receives many sources of video, audio, VANC data and the PTP data simultaneously on the same link. To keep the monitor-wall’s local PTP clock accurate, and hence keep the video and audio in sync, the network administrator will have configured the switch to prioritise PTP packets. The video streams are therefore held back in favour of the PTP traffic, again potentially introducing jitter.

Small amounts of jitter are expected, and the IP monitor-wall would use a FIFO (First In, First Out) buffer to overcome them, storing enough data for the decoders in the monitor-wall to reconstruct the video packets and reliably display them in sync with the audio.
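A de-jitter buffer of this kind can be sketched as a bounded FIFO. The class below is a simplified illustration, not any particular product’s implementation; the 40-packet burst and 32-packet depth are example figures chosen to match the discussion that follows.

```python
from collections import deque

class JitterBuffer:
    """Minimal sketch of a bounded FIFO de-jitter buffer."""

    def __init__(self, depth):
        self.fifo = deque()
        self.depth = depth        # maximum packets the buffer can hold
        self.overflows = 0        # packets dropped because the buffer was full

    def write(self, packet):
        if len(self.fifo) >= self.depth:
            self.overflows += 1   # burst exceeded the buffer: packet lost
        else:
            self.fifo.append(packet)

    def read(self):
        # The decoder drains packets in arrival order.
        return self.fifo.popleft() if self.fifo else None

buf = JitterBuffer(depth=32)
for seq in range(40):             # a 40-packet burst with no reads in between
    buf.write(seq)
print(buf.overflows)              # 8 packets lost to overflow
```

As long as reads keep pace with writes the overflow counter stays at zero; it is sustained bursts that push it up.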

Data Bursts

A well-behaved 1080p/25 HD source sends around 270,000 IP packets per second, or 270 IP packets every millisecond, to achieve an average data rate of around 2.5Gbit/s in real-world ST2110 terms, compared to the static SDI specification of 3Gbit/s. However, if the source developed a fault, or was incorrectly configured, and suddenly started bursting data, peak bursts of 4Gbit/s or more could occur as microbursts lasting just a few hundred milliseconds. This is still within the link specification for a typical 10GbE link, and the average bandwidth measured over one second remains 2.5Gbit/s.
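The arithmetic behind those figures is easy to check: a 2.5Gbit/s flow split into 270,000 packets per second works out to roughly 1,160 bytes per packet, and 270 packets every millisecond.

```python
bit_rate = 2.5e9               # average flow rate, bits per second
packets_per_sec = 270_000      # the article's well-behaved 1080p source

bytes_per_packet = bit_rate / packets_per_sec / 8
packets_per_ms = packets_per_sec / 1000

print(f"{bytes_per_packet:.0f} bytes/packet, {packets_per_ms:.0f} packets/ms")
```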

If the monitor-wall’s packet buffer is configured to store up to 32 packets of HD video for each of the six multicast subscriptions, its buffer size per flow would be approximately 110 microseconds.

For normal operation this is fine. As the packets are evenly gapped, the buffer’s write-rate equals its read-rate and packets are delivered to the internal video decoder in a timely manner.

Buffer Overflow

When the source starts bursting, the short-term write-rate is significantly higher than the read-rate and the buffer overflows in fractions of a second, potentially blocking PTP traffic and holding back the other video streams, causing visual break-up and stuttering.

But how do we see this problem? The source is sending the correct average data rate and the switch is not dropping packets, as the data rates are within the link limits. Is it the PTP grandmaster? Or has the monitor-wall developed a fault?

It’s entirely possible that another monitoring device with a large buffer would be able to deal with the short-term bursts, and display stable pictures from the bursting camera. So why would you suspect the camera?

Tails Tell the Answer

Whilst looking at the average we see only half of the information. If we were to plot the temporal distribution of the data, we would see an average of 2.5Gbit/s. But tellingly, we would also see packets well beyond the second and third standard deviations from the mean, deep in the tails of the bell curve distribution.
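The same idea in code: given a population of inter-packet gaps, flag everything beyond three standard deviations from the mean. The data below is synthetic, a well-paced population plus a handful of burst packets arriving almost back-to-back, but the detection logic is the general technique.

```python
import random
import statistics

random.seed(1)

# Synthetic inter-packet gaps in microseconds: a well-paced population
# around 3.7 us, plus 20 burst packets arriving almost back-to-back.
gaps = [random.gauss(3.7, 0.1) for _ in range(10_000)] + [0.5] * 20

mean = statistics.mean(gaps)
sigma = statistics.pstdev(gaps)

# Anything beyond three standard deviations lives in the tails.
outliers = [g for g in gaps if abs(g - mean) > 3 * sigma]
print(f"mean {mean:.2f} us, sigma {sigma:.3f} us, "
      f"{len(outliers)} packets in the tails")
```

The mean barely moves, yet the outlier count exposes the burst immediately; that is the information an averages-only view throws away.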

Investigating timing-related conditions is not as easy as it may first seem. Network Interface Cards (NICs) found in servers use buffers to receive data from the Ethernet wire and avoid packet loss. As soon as a packet is received it is written into a FIFO, destroying any temporal information. Even looking at the packets in Wireshark (a free and open-source packet analyser) will not divulge the truth: the rate at which the CPU reads data from the receive FIFO depends heavily on operating-system load, so any timing information will relate to CPU usage, not to the network.

Accurate Timestamps Essential

Advanced NICs tag packets with accurate timestamps as they are received off the wire, preserving the timing accuracy needed to identify packets outside the normal deviations. This is the only true method of measuring packet timing in a network.

Bridge Technologies’ VB440 is at the cutting edge of stream analysis. With two 40GbE inputs, it provides full redundancy for real-time studio operations. As the VB440 measures the absolute time of received packets, it can determine the maximum, minimum, and average bit rates of selected streams. Furthermore, it provides a wealth of information relating to the status of the PTP sub-system, helping engineers understand what is going on beyond the averages in a network.

To succeed with IP migration, we must precisely understand timing. Averages only tell half the story and the answers to stability will be found lurking in the tails of the distribution curve. Recording accurate timing data and applying statistical analysis is essential for real-time media streaming.

Author: Simen K. Frostad, Chairman at Bridge Technologies.
