Streaming War Heats Up Around Low Latency

The streaming wars are associated in the public mind with the intensifying battle for supremacy between Netflix, Amazon, Disney, Apple and a few others in the SVoD (Subscription VoD) field, but the term also applies to the contention heating up between underlying protocols for low-latency transmission of video over the Internet.

However, if this is a war, many of the leading protagonists are hedging their bets by lining up on both sides, especially in the case of the two leading contenders, the SRT (Secure Reliable Transport) and RIST (Reliable Internet Stream Transport) protocols. Most tellingly, Haivision, which developed SRT before making it open source and remains an evangelist for the technology, became one of the founding members of the RIST Forum earlier in 2019. Synamedia and Net Insight are two other vendors with feet in both camps.

SRT currently has the greater force behind it, with around 200 members of the SRT Alliance, including major industry technology players such as Harmonic and Microsoft alongside cofounders Wowza and Haivision. But RIST arguably now has more momentum, if smaller volume, because its supporters have been galvanized by a different development model built around a common open specification. By contrast SRT, being open source, is built around a common code base originating from Haivision, which means that innovations arise from software extensions written by participants rather than from interoperable components adhering to a common specification.

In that sense SRT can be compared to Linux, or to the open source, publicly available version of Google’s Android, allowing development of semi-proprietary customized versions that may be somewhat incompatible. RIST, on the other hand, is more like MPEG’s H.264 codec, where only the bitstream syntax and decoder behavior were standardized, allowing innovation at the encoder level to proceed without compromising core interoperability. Some, such as Wes Simpson, President of Telecom Product Consulting and RIST Forum co-chair, argue that this model will allow RIST to progress more rapidly than SRT, but that probably depends more on the management and motivation of the respective groups than on virtues inherent in the development model.

The key driver in either case is the need to cut latency as close as possible to the minimum enforced by the laws of physics if live streaming services are to flourish, especially where interactivity is involved, as in gaming. Protocols enabling low latency over managed fixed networks, or at small scale, have been available for some time, but the goal of RIST and SRT is to achieve similar or even greater performance at scale, for large numbers of streams over unmanaged networks.

The current state of play is that various low latency protocols fall into three categories. First are the purely proprietary protocols, the best known and most widely deployed of which is Zixi, used primarily for fixed OTT services. The advantage of the proprietary approach is that it can focus relentlessly on performance, and Zixi has achieved streaming latency below two seconds for some of its 500 customers. The disadvantage is the lack of support for standards and therefore of interoperability, although a proprietary protocol can gain de facto standard status given widespread support, and Zixi does boast an impressive array of partners. Nevertheless, the tide is flowing towards industry standard protocols.

The second, somewhat intermediate, category is open source, where SRT is the main contender, originating in this case from a single vendor, Haivision, but then opened to the community for extension and enhancement. The great advantage is the innovation that can spring from exposing the software to the wider developer community, which can accelerate development considerably, but this also brings risk because the source code is equally exposed to hackers. The approach also remains somewhat proprietary through its dependence on the underlying code base.

Indeed, concerns over that dependence have stimulated development of RIST in the third category: protocols based entirely on open standards. In fact RIST is built on foundations laid by predecessors, especially the Real Time Transport Protocol (RTP), which provides media transport on top of the widely used UDP layer of the IP stack, adding the sequence numbers and timestamps receivers need to detect packet loss or reordering in transit.
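
To make that layering concrete, the fixed 12-byte RTP header defined in RFC 3550 can be unpacked in a few lines. The sketch below is illustrative only: it covers just the fixed fields (version, payload type, sequence number, timestamp, SSRC), not header extensions or CSRC lists.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # should be 2 for standard RTP
        "payload_type": b1 & 0x7F,   # identifies the media format
        "sequence": seq,             # increments per packet: loss detection
        "timestamp": timestamp,      # media clock: jitter/reorder handling
        "ssrc": ssrc,                # identifies the stream source
    }
```

The sequence number is what makes loss visible to the receiver, and it is the hook that retransmission schemes such as RIST's build on.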

Some protection against corruption during transit is provided by Forward Error Correction (FEC), which incorporates some redundancy to enable packet recovery at the receiving end in the event of limited losses. But compressed video is highly susceptible to packet loss because of interdependencies between frames and so some means of retransmitting dropped packets is necessary.
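
The principle behind FEC can be illustrated with the simplest possible scheme: one XOR parity packet per block, which can rebuild any single lost packet in that block. Real FEC schemes used for video transport (such as SMPTE 2022-1 row/column FEC) are more elaborate; this is only a sketch of the idea, assuming all packets in a block have equal length.

```python
def make_parity(packets: list[bytes]) -> bytes:
    """XOR equal-length packets together into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_missing(survivors: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one lost packet: XOR the parity with all survivors."""
    return make_parity(survivors + [parity])
```

The trade-off the article describes is visible here: the parity packet is pure redundancy (extra bandwidth), and the scheme fails if more than one packet per block is lost, which is why retransmission is still needed.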

In RIST this is achieved through an adaptation of the Real Time Transport Control Protocol (RTCP), the control protocol associated with RTP. This allows packets to be retransmitted, with a mechanism enabling the receiver to distinguish resent packets from the originals. Resent packets can then simply be ignored when, say, communicating with systems that do not support packet retransmission, which preserves a degree of interoperability with legacy protocols.
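
As a toy model of that mechanism (not the actual RIST wire format; all names here are hypothetical), a sender can keep a short history of transmitted packets and answer negative acknowledgements with copies flagged as retransmissions, so that a receiver can tell them apart from originals and a legacy receiver could discard them:

```python
class RetransmitSender:
    """Hypothetical sender keeping recent packets so it can answer NACKs."""

    def __init__(self, max_history: int = 256):
        self.history = {}        # seq -> payload, bounded retransmit buffer
        self.max_history = max_history
        self.seq = 0

    def send(self, payload: bytes) -> dict:
        pkt = {"seq": self.seq, "retransmit": False, "payload": payload}
        self.history[self.seq] = payload
        if len(self.history) > self.max_history:
            self.history.pop(min(self.history))   # drop the oldest entry
        self.seq += 1
        return pkt

    def on_nack(self, missing_seqs: list[int]) -> list[dict]:
        # Resend from history, flagged so receivers can tell copies apart.
        return [{"seq": s, "retransmit": True, "payload": self.history[s]}
                for s in missing_seqs if s in self.history]
```

The bounded history also illustrates why retransmission-based schemes need a receive buffer sized to the round-trip time: a NACK arriving after the packet has aged out of the buffer can no longer be answered.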

RIST also has one key advantage over SRT in supporting bonding, which allows a high-bandwidth stream to be sent over multiple lower-bandwidth connections and reassembled at the destination. As noted by the weekly analysis service Faultline, published by Rethink Technology Research, this supports two distinct use cases. First, high bit rate services can be carried over networks that lack any single high-bandwidth connection, which can reduce not just cost but also latency, since communication can proceed immediately over whatever links are available. Second, the same stream can be duplicated over multiple links for redundancy, achieving error resilience combined with low latency at the cost of extra bandwidth.
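
The redundancy use case can be sketched as a receiver-side deduplication step: identically numbered packets arrive over several links, and the receiver keeps whichever copy of each sequence number lands first, so a packet lost on one link is covered by its twin on another. This is a hypothetical illustration of the principle, not RIST's actual bonding logic.

```python
def merge_bonded(links: list[list[dict]]) -> list[dict]:
    """Deduplicate packets arriving over several bonded links.

    Each packet is a dict with "seq" (sequence number) and "arrival"
    (arrival time); the first copy of each sequence number wins.
    """
    seen = set()
    merged = []
    # Process all packets in overall arrival order across the links.
    for pkt in sorted((p for link in links for p in link),
                      key=lambda p: p["arrival"]):
        if pkt["seq"] not in seen:
            seen.add(pkt["seq"])
            merged.append(pkt)
    return sorted(merged, key=lambda p: p["seq"])
```

The latency benefit follows directly: the receiver never waits for a retransmission round trip, because whichever link delivers a given packet first defines its delay.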
