The term "streaming wars" is associated in the public mind with the intensifying battle for supremacy between Netflix, Amazon, Disney, Apple and a few others in the SVoD (Subscription VoD) field, but it also applies to the contest heating up between underlying protocols for low-latency transmission of video over the Internet.
However, if this is a war, many of the leading protagonists are hedging their bets by lining up on both sides, especially in the case of the two leading contenders, the SRT (Secure Reliable Transport) and RIST (Reliable Internet Stream Transport) protocols. Most tellingly, Haivision, which developed SRT before making it open source and remains an evangelist for the technology, became one of the founding members of the RIST Forum earlier in 2019. Synamedia and Net Insight are two other vendors with feet in both camps.
SRT currently has the greater force behind it, with around 200 members of the SRT Alliance, including major industry technology players such as Harmonic and Microsoft alongside cofounders Wowza and Haivision. But RIST arguably now has greater momentum, if smaller volume, as its supporters have been galvanized by a different development model built around a common open specification. SRT, by contrast, being open source, is built around a common code base originating from Haivision, which means that innovations arise from software extensions written by participants rather than from clearly interoperable components adhering to a common specification.
In that sense SRT can be compared to Linux or the open source, publicly available version of Google's Android, allowing development of semi-proprietary customized versions that may be somewhat incompatible. RIST, on the other hand, is more like MPEG's H.264 codec, where only the underlying transport stream syntax was standardized, allowing innovation at the encoder/decoder level to proceed without compromising core interoperability. Some, such as Wes Simpson, President of Telecom Product Consulting and RIST Forum co-chair, argue that this model will allow RIST to progress more rapidly than SRT, but that probably depends more on the management and motivation of the respective groups than on virtues inherent in the development model.
The key driver in either case is the need to cut latency as close as possible to the minimum levels enforced by the laws of physics, so that live streaming services can flourish, especially where interactivity is involved, as in gaming. Protocols enabling low latency over managed fixed networks, or at small scale, have been available for some time, but the goal of RIST and SRT is to enable similar or even greater performance at scale, for large numbers of streams over unmanaged networks.
The current state of play is that the various low latency protocols fall under three categories. First are purely proprietary protocols, the best known and most widely deployed of which is Zixi, primarily for fixed OTT services. The advantage of the proprietary approach is that it can focus relentlessly on performance, and Zixi has achieved streaming latency below 2 seconds for some of its 500 customers. The disadvantage is lack of support for standards and therefore lack of interoperability, although proprietary protocols can gain de facto standard status given widespread support, and Zixi does boast an impressive array of partners. Nevertheless, the tide is flowing towards industry standard protocols.
The second, somewhat intermediate, category is the open source option, where SRT is the main contender, originating in this case from a single vendor, Haivision, but then opened to the community for extension and enhancement. The great advantage is the innovation that can spring from opening the software up to the wider developer community, which can accelerate development considerably but also brings risk, because the source code is equally exposed to hackers. It also remains somewhat proprietary through its dependence on the underlying code base.
Indeed, concerns over that dependence have stimulated development of RIST in the third category: protocols based entirely on open standards. In fact RIST builds on the foundation laid by predecessors, especially the Real Time Transport Protocol (RTP), which provides media transport on top of the widely used but unreliable UDP protocol of the IP stack. UDP itself offers no protection against loss, but RTP's sequence numbers and timestamps allow the receiver to detect packets lost or reordered in transit.
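The loss-detection role RTP plays can be illustrated with a minimal sketch: parsing the fixed 12-byte RTP header (RFC 3550) to extract the sequence number, then scanning a sequence-number list for gaps. The function names are illustrative, and real receivers must also handle reordering windows and jitter, which are omitted here.

```python
import struct

def parse_rtp_header(packet: bytes):
    """Parse the fixed 12-byte RTP header (RFC 3550).

    Returns (sequence_number, timestamp, ssrc).
    """
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    first, _payload_type, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    if first >> 6 != 2:
        raise ValueError("not RTP version 2")
    return seq, ts, ssrc

def detect_gaps(seq_numbers):
    """Report missing sequence numbers, allowing for 16-bit wraparound."""
    missing = []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        expected = (prev + 1) & 0xFFFF
        while expected != cur:
            missing.append(expected)
            expected = (expected + 1) & 0xFFFF
    return missing
```

The gap list produced by something like `detect_gaps` is the raw material for both recovery mechanisms discussed below: FEC repair and retransmission requests.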
Some protection against loss in transit is provided by Forward Error Correction (FEC), which adds redundancy to the stream so that the receiving end can recover packets in the event of limited losses. But compressed video is highly susceptible to packet loss because of interdependencies between frames, so some means of retransmitting dropped packets is also necessary.
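The redundancy idea behind FEC can be shown with a much-simplified sketch: one XOR parity packet per group of data packets allows any single lost packet in that group to be rebuilt, at the cost of the extra parity bandwidth. Production FEC schemes (such as the row/column arrangement of SMPTE 2022-1) are considerably more elaborate; the names and the fixed-size-packet assumption here are purely illustrative.

```python
def xor_parity(packets):
    """Compute an XOR parity packet over a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(received, parity):
    """Rebuild the single missing packet (the None entry) from survivors.

    XORing the parity with every surviving packet leaves exactly the
    missing packet, since each byte cancels out pairwise.
    """
    missing_idx = received.index(None)
    rebuilt = bytearray(parity)
    for idx, p in enumerate(received):
        if idx != missing_idx:
            for i, b in enumerate(p):
                rebuilt[i] ^= b
    return missing_idx, bytes(rebuilt)
```

This also makes the FEC trade-off concrete: one parity packet per group repairs one loss, but a burst that takes out two packets from the same group defeats it, which is why retransmission remains necessary.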
In RIST this is achieved through an adaptation of the Real Time Transport Control Protocol (RTCP) associated with RTP. This allows packets to be retransmitted, but with a mechanism enabling the receiver to distinguish resent packets from the originals. Resent packets can therefore be ignored when communicating, say, with systems that do not support packet retransmission, which enables a degree of interoperability with legacy protocols.
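The receiver-side logic can be sketched as follows: track which sequence numbers have arrived, request the gaps (in RIST this would travel back to the sender in an RTCP message), and silently drop any copy of a packet already held, whether it is a duplicate or a retransmission that arrived after the original. This is a simplified illustration, not the RIST specification's actual message formats, and the class and method names are invented for the example.

```python
class RetransmitBuffer:
    """Simplified receiver buffer with retransmission-request support."""

    def __init__(self):
        self.received = {}          # seq -> payload

    def on_packet(self, seq, payload, is_retransmit=False):
        """Accept a packet, ignoring any copy of a sequence number we hold."""
        if seq in self.received:
            return "duplicate-ignored"
        self.received[seq] = payload
        return "resent-accepted" if is_retransmit else "accepted"

    def request_missing(self, expected_upto):
        """List sequence numbers to ask the sender to resend.

        In RIST these would be carried in an RTCP feedback message.
        """
        return [s for s in range(expected_upto) if s not in self.received]
```

Marking resent packets (the `is_retransmit` flag here stands in for the protocol-level distinction) is what lets a legacy receiver that never requests retransmission simply treat them as noise.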
RIST also has one key advantage over SRT in supporting bonding, allowing a high-bandwidth stream to be sent over multiple low-bandwidth connections and reassembled at the destination. As noted by Faultline, the weekly analysis service published by Rethink Technology Research, this supports two distinct use cases. First, high bit rate services can be carried over networks that lack individual high-bandwidth connections, which can reduce not just cost but also latency, since communication can proceed immediately over whatever links are available. Second, the same stream can be duplicated over multiple links for redundancy, combining error resilience with low latency at the cost of extra bandwidth.
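Both bonding modes can be sketched in a few lines, assuming packets are (sequence number, payload) tuples: load-sharing splits one stream round-robin across links and re-sorts on arrival, while redundancy sends the full stream down every link and keeps the first copy of each packet to arrive. Real bonding must weight links by capacity and cope with differing link delays; this is only the core idea, with invented function names.

```python
def split_round_robin(packets, num_links):
    """Load-sharing bonding: distribute one stream across several links."""
    links = [[] for _ in range(num_links)]
    for i, pkt in enumerate(packets):
        links[i % num_links].append(pkt)
    return links

def reassemble(links):
    """Merge per-link packets back into order by sequence number."""
    merged = [pkt for link in links for pkt in link]
    return sorted(merged, key=lambda p: p[0])   # p = (seq, payload)

def dedupe(streams):
    """Redundancy bonding: same stream on every link; keep first copy seen."""
    seen, out = set(), []
    for stream in streams:
        for seq, payload in stream:
            if seq not in seen:
                seen.add(seq)
                out.append((seq, payload))
    return sorted(out, key=lambda p: p[0])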