Amazon Buys Net Insight Sye To Boost Live Streaming Quality

Amazon has been confirmed as the buyer of Net Insight’s Sye low latency streaming business, sending analysts under the bonnet to work out what aspects of the underlying technology inspired the sale.

It turns out that Amazon has been using Sye within its cloud streaming platform for a while and wants to expand its usage of the technology rather than take extra time developing its own alternative based on emerging standards.

Amazon has already improved the quality of its live sports streaming significantly using other technologies developed in-house, after having experienced a lot of negative consumer feedback during its early forays into premium events. Its UK coverage of the 2019 US Open tennis tournament drew particular flak around buffering and service interruptions, but the feedback from more recent events such as two rounds of mid-week English Premier League (EPL) matches in December 2019 was much better. The main complaints now concern the delay behind live, which leads to the all too familiar problem of events such as goals being seen earlier by nearby viewers fortunate enough to be experiencing lower latency.

Interestingly there is some evidence that consumers are less bothered about low latency than many content owners and broadcasters had assumed, with buffering and video quality being much higher priorities. Nevertheless, latency and synchronization between different services delivering the same live content is widely seen as the last big hurdle for streaming, and that is what Net Insight’s Sye was designed to address.

Sye was ahead of its time when launched in November 2015 to synchronize online streams with linear broadcast content. Since then various standardized approaches have been developed, notably SRT (Secure Reliable Transport), the open source protocol developed by Haivision, and the RIST (Reliable Internet Stream Transport) protocol proposed by the Video Services Forum. These both address latency by reducing the delay associated with the TCP protocol retransmitting lost IP packets to maintain video quality, but only provide the bare bones of what is needed for a complete implementation. Amazon selected Sye because it had been shown to work efficiently, having been built around Microsoft’s Azure platform but already being migrated onto Amazon’s own AWS (Amazon Web Services). For a company with Amazon’s deep pockets it made more sense to fork out the $37 million paid to Net Insight for its Sye division than to waste valuable time perfecting its own low latency alternative and give rival streamers an opportunity to score on quality of experience. That could turn out to represent great value for money, especially as the deal brings in 30 engineers, many of whom have worked on Sye from the outset and are already well versed in tuning the system for Amazon’s requirements.

To identify what attracted Amazon to Sye in the first place, it is necessary to drill down a little further into the technology. When first launched, the development looked to have been motivated more by the need to synchronize second screens with the primary broadcast of live services, enabling delivery of complementary content such as alternative camera views. Such synchronization could also enhance social media interaction via companion apps, but this use case has not really gained the traction required, partly because of logistical issues.

In any case synchronization alone does not resolve the underlying delay, and the real attraction of Sye was the optimization techniques that made it the first significant product to cut latency to the level of broadcast, avoiding that annoying time lag. The first point to note here is that DASH has evolved alongside Apple’s HLS as a primary streaming mechanism, breaking IP video streams into segments or fragments each some multiple of one second in duration. DASH adopted a wider range of segment sizes (1, 2, 4, 6, 10 and 15 seconds) than predecessors such as HLS and Microsoft Smooth Streaming, yielding greater flexibility. Video is encoded into multiple streams at different bit rates so that the resolution can be varied to suit both the client’s playback capability and prevailing network conditions. A given session can be switched between streams at the end of a segment, the aim being to avoid buffering at the expense of a temporary drop in quality.
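
To see how segment-based adaptation works in practice, the sketch below shows the kind of decision a DASH or HLS player makes at each segment boundary; the bitrate ladder and the 80% safety margin are invented illustrative values, not figures from the article or from any real service.

```python
# Minimal sketch of segment-boundary ABR selection (illustrative values only).

RENDITIONS_KBPS = [800, 1800, 3500, 6000]   # hypothetical bitrate ladder, low to high
SAFETY_MARGIN = 0.8                          # only use 80% of measured bandwidth

def pick_rendition(measured_kbps: float) -> int:
    """Return the highest bitrate that fits under the measured bandwidth."""
    usable = measured_kbps * SAFETY_MARGIN
    candidates = [r for r in RENDITIONS_KBPS if r <= usable]
    return max(candidates) if candidates else RENDITIONS_KBPS[0]

# The switch only takes effect at the next segment boundary, so with 4-second
# segments a bandwidth drop may not be acted on for several seconds.
for bandwidth in (7000, 4200, 2400, 900):
    print(f"{bandwidth} kbps measured -> request {pick_rendition(bandwidth)} kbps segments")
```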

The choice of segment length is one of the key decisions in preparing content for adaptive streaming. Short segments enable the bit rate to be adjusted more frequently and so cater for networks where bandwidth fluctuates wildly, as in mobile services subject to varying radio signal propagation. The downside is that short segments impose greater processing overhead and therefore delay. Larger segments reduce the delay but at the cost of being less adaptive for mobile networks in particular, because the stream bit rate cannot keep up with the rapid changes in available network bandwidth.
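
A rough illustration of that trade-off, using assumed figures rather than anything from Net Insight, is sketched below: shorter segments let a player react to bandwidth changes sooner, but multiply the number of segment requests and packaging operations the system has to handle.

```python
# Illustrative arithmetic on segment length: adaptation responsiveness versus
# per-segment overhead. All numbers are assumptions for the example.

for segment_s in (1, 2, 4, 6, 10, 15):       # the DASH segment sizes cited above
    worst_case_reaction_s = segment_s        # a bitrate switch waits for the segment to end
    requests_per_hour = 3600 // segment_s    # every segment is a separate fetch to package and serve
    print(f"{segment_s:>2}s segments: up to {worst_case_reaction_s}s before a switch "
          f"takes effect, ~{requests_per_hour} segment requests per hour")
```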

Net Insight has appointed Crister Fritzson as CEO to shepherd the company after returning to its B2B media transport roots.

Another contribution to latency is the segmentation process itself at the network edge, which can delay the signal by up to three whole segments. This means it could add three seconds to the delay budget even when each segment is only one second long. Delays are also caused by the network transit and by the retransmission of lost IP packets. There is little that Net Insight or any technology vendor can do about network transit, which is largely a function of the laws of physics, but it can bear down on the edge segmentation and IP packet retransmission.
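
The arithmetic of that delay budget can be sketched as follows. Only the "up to three segments" packaging figure comes from the description above; the transit and retransmission values are illustrative assumptions.

```python
# Hedged sketch of a segmented stream's delay budget (assumed transit and
# retransmission figures; the three-segment packaging delay is from the text).

def delay_budget(segment_s: float, buffered_segments: int = 3,
                 transit_s: float = 0.1, retransmission_s: float = 0.5) -> float:
    """Sum the main latency contributors for a segmented stream."""
    packaging_s = segment_s * buffered_segments   # edge segmentation / player buffering
    return packaging_s + transit_s + retransmission_s

for seg in (1, 2, 4):
    print(f"{seg}s segments -> roughly {delay_budget(seg):.1f}s behind live")
```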

The greatest achievement of Sye lies in taking the segmentation delay at the network ingress out of the equation altogether, by avoiding use of segments completely. Net Insight still creates multiple ABR streams, each at a different bit rate as before, but enables clients to switch between them on the fly without having to wait for a segment to end. The ABR levels are based not on segments but on continuous streams, so that clients can change between them transparently without any segmentation at all.
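
Conceptually the difference can be sketched as below. Sye’s actual mechanism is proprietary, so the function and figures here are purely illustrative of the idea of switching renditions on the fly, rather than waiting for a segment boundary.

```python
# Conceptual sketch only: renditions kept as continuous, time-aligned streams,
# with the client free to move between them for the very next frame.

RENDITIONS_KBPS = [800, 1800, 3500, 6000]   # hypothetical ABR ladder

def next_frame_rendition(measured_kbps: float) -> int:
    """Pick the rendition for the next frame; there is no segment boundary to wait for."""
    fitting = [r for r in RENDITIONS_KBPS if r <= measured_kbps * 0.8]
    return max(fitting) if fitting else RENDITIONS_KBPS[0]

# Because all renditions share a common timeline, the switch is transparent: the
# client simply starts pulling frames for the same timestamps from another stream.
print(next_frame_rendition(measured_kbps=2400))   # drops to 1800 kbps immediately
```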

This still leaves a delay resulting from transcoding to create the ABR streams, but the large segmentation delay is avoided. According to Net Insight, this typically results in a 3 to 5 second delay over the Internet, with an additional transcoding delay of 1-4 seconds, adding up to about 8 seconds in total, similar to most satellite, cable and IPTV services. This method also allows rapid adjustment to variations in network bandwidth, since streams can be switched almost instantaneously without needing to wait for a segment to end.
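
Summing the quoted ranges gives a quick sanity check on that total; the ranges below are the ones cited above, and only the addition is added here.

```python
# Quick arithmetic on the figures quoted above (ranges as cited by Net Insight).
internet_delay_s = (3, 5)      # delivery over the Internet
transcode_delay_s = (1, 4)     # creating the ABR renditions

low = internet_delay_s[0] + transcode_delay_s[0]
high = internet_delay_s[1] + transcode_delay_s[1]
print(f"End-to-end delay roughly {low} to {high} seconds behind the action")
```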

The other innovation is a configurable cache buffer installed in the network, which serves both for playout and retransmission of lost IP packets. This buffer is what allows synchronization of OTT streams almost exactly with live broadcast, not quite frame accurate but within 100 milliseconds, which is undetectable to the eye.
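
One plausible shape for such a buffer, sketched here with invented class and method names rather than anything from Net Insight’s implementation, is a cache of recent packets keyed by sequence number that can serve both playout timing and retransmission requests.

```python
# Illustrative sketch only: a network-side cache of recent packets serving both
# playout and retransmission. Names and sizes are assumptions for the example.

from collections import OrderedDict

class PacketCache:
    def __init__(self, capacity: int = 2048):
        self.capacity = capacity
        self.packets: OrderedDict[int, bytes] = OrderedDict()

    def store(self, seq: int, payload: bytes) -> None:
        """Keep the packet for playout timing and possible retransmission."""
        self.packets[seq] = payload
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)      # drop the oldest packet

    def retransmit(self, seq: int) -> bytes | None:
        """Serve a client's retransmission request if the packet is still cached."""
        return self.packets.get(seq)

cache = PacketCache()
cache.store(101, b"frame-data")
print(cache.retransmit(101) is not None)   # True while packet 101 is still within the window
```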

There is a trade-off here because the maximum time allowed for IP retransmission can be increased to make the service more resilient against bandwidth variation, but at the cost of increasing delay. The balance varies according to how great the network transit latency is, given that transcoding delay is relatively fixed. For a stadium environment with a very short round trip delay, the overall delay can be brought down below a second, with greater scope then for lots of packet retransmission to deliver a very high-quality stream.

But over the Internet at greater distances, with a round trip of a few hundred milliseconds or even a second, a 3 to 5 second buffer might be needed to allow 3 to 10 resends in the event of multiple packet loss. Indeed, geographical distance is a key factor determining the balance between the number of packet retransmissions and overall delay.
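
A back-of-envelope calculation, with assumed round trip times, shows why distance dictates that balance: the longer the round trip, the fewer resend attempts fit inside a given buffer, so distant viewers need a bigger buffer for the same resilience.

```python
# Rough back-of-envelope with assumed round trip times (not measured figures).

def max_retransmissions(buffer_s: float, rtt_s: float) -> int:
    """Roughly how many request/resend round trips fit inside the buffer window."""
    return round(buffer_s / rtt_s)

for label, rtt in (("in-stadium", 0.02), ("same region", 0.1), ("intercontinental", 0.5)):
    for buffer_s in (1.0, 3.0, 5.0):
        print(f"{label:>16}, {buffer_s}s buffer: up to "
              f"{max_retransmissions(buffer_s, rtt)} resends")
```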

The main benefits of Sye that caught Amazon’s eye are therefore threefold. The first is elimination of the latency resulting from stream packaging. The second is improved quality, maintaining higher profile viewing for longer than legacy HTTP streaming protocols, or indeed most alternatives; in legacy HTTP streaming the TCP error recovery mechanism forces streams to back down to lower resolutions in the event of packet loss, with relatively slow recovery to higher profiles. The third is the synchronization with live enabled by the cache buffer.
