Recent Demonstration Shows RIST Protocol Ready for Broadcast Contribution Applications

The RIST standard, supported by leading companies, enables the creation of low-latency contribution links over the internet while providing high resilience to packet loss.

Even so, internet-delivered video has different characteristics from private networks. Packet delivery through the internet is not guaranteed. Packets can be occasionally dropped, primarily due to instantaneous congestion in routers. Such packets need to be recovered for glitch-free video operation. Given enough time, any losses can be recovered, but contribution applications are typically latency-sensitive.

Experience has shown that the best technique to deal with such losses is one of the variants of the well-known selective retransmission method called Automatic Repeat reQuest (ARQ). A technique devised in the 1960s, it represents a good tradeoff between latency and reliability. A number of commercial products use ARQ to provide this functionality, but they rely on proprietary implementations that lack interoperability.

What is RIST?

To address this issue, the Video Services Forum (VSF) started the Reliable Internet Stream Transport (RIST) Activity Group in 2017 to create a common protocol specification, promote interoperability between products from different vendors, and give broadcasters more choices when setting up an internet link for contribution. The first public RIST demonstration by the participating companies occurred during IBC 2018, and the RIST Simple Profile Specification was published as VSF TR-06-1 in October 2018.

Packet recovery in RIST works as follows (a minimal receiver-side sketch in Python appears after the list):

  • The sender transmits packets without waiting for any kind of feedback from the receiver.
  • Packets have sequence numbers so the receiver can identify packet losses.
  • No acknowledgement is sent for packets that are correctly received.
  • The receiver will request a retransmission for lost packets.
  • A lost packet may be requested multiple times. 
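
As an illustration only (the names and structure below are not from the RIST specification), a receiver might track the 16-bit RTP sequence numbers roughly as follows to detect gaps and decide which packets to request:

SEQ_MOD = 1 << 16  # RTP sequence numbers wrap at 65536

class LossDetector:
    def __init__(self):
        self.expected = None   # next sequence number we expect
        self.missing = set()   # sequence numbers awaiting retransmission

    def on_packet(self, seq):
        # Call for every received media packet (original or retransmitted).
        if self.expected is None:
            self.expected = (seq + 1) % SEQ_MOD
            return []
        if seq in self.missing:            # a retransmission filled a gap
            self.missing.discard(seq)
            return []
        gap = (seq - self.expected) % SEQ_MOD
        if gap >= SEQ_MOD // 2:
            return []                      # old or duplicate packet; ignored in this sketch
        newly_lost = [(self.expected + i) % SEQ_MOD for i in range(gap)]
        self.missing.update(newly_lost)
        self.expected = (seq + 1) % SEQ_MOD
        return newly_lost                  # these should be requested via RTCP

detector = LossDetector()
for seq in [100, 101, 104, 102]:           # 102 and 103 lost, then 102 retransmitted
    lost = detector.on_packet(seq)
    if lost:
        print("request retransmission of", lost)   # prints [102, 103]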

At IBC 2018, multiple vendors conducted a public interoperability test to show that RIST video transmission worked.

As soon as the receiver detects a packet loss, it requests a retransmission. At that point, it must wait one network round-trip delay until that retransmission can possibly arrive. If it does not arrive, the packet may be requested again. If we denote the maximum number of retransmission requests by R and the network round-trip delay in seconds by T, it follows that both the receiver and the sender must have a buffer large enough to hold RT seconds of content, and that the added latency of the protocol is RT. By controlling R, it is possible to control the latency-reliability tradeoff of the protocol.
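
For example, with an assumed round-trip time of 100 ms and a maximum of three retransmission requests, the buffers must hold at least 300 ms of content and the protocol adds roughly 300 ms of latency.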

The buffer at the sender side must hold at least RT seconds of content to be able to satisfy the retransmission requests from the receiver. However, since the buffer at the sender side does not affect latency (packets are added to it after transmission), it can be made very large with no penalty other than memory consumption. The sequence numbers used for RIST are only 16 bits, so it would take about 80 MB of storage in the sender to cache packets with every possible sequence number. This amount of memory is well within what is available in current systems, even in small embedded devices. Having a very large buffer at the sender simplifies overall system configuration.
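
As a rough illustration of this sizing (the payload size is an assumption, not a requirement of the specification), a sender-side cache can simply keep one slot per possible 16-bit sequence number:

MAX_RTP_PAYLOAD = 1316   # bytes; seven 188-byte MPEG-TS packets, a common choice

class RetransmissionCache:
    def __init__(self):
        self.slots = [None] * (1 << 16)   # one slot per possible sequence number

    def store(self, seq, packet_bytes):
        # Cache each packet as it is transmitted.
        self.slots[seq & 0xFFFF] = packet_bytes

    def lookup(self, seq):
        # Return the cached packet for a requested sequence number, if still held.
        return self.slots[seq & 0xFFFF]

# Worst case: every slot holds a full payload, roughly in line with the figure quoted above.
print((1 << 16) * MAX_RTP_PAYLOAD / 1e6, "MB")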

Both transmitter and receiver must have correctly sized buffers to accommodate lost and retransmitted packets.

RIST Specification

RIST selected the Real Time Transport Protocol (RTP) for the media transport. RTP is a very simple layer on top of UDP and provides sequence numbers (to detect packet loss) and timestamps (to remove network jitter, if required). The use of RTP ensures that RIST-compliant systems can interoperate with non-RIST systems at a base level without packet loss recovery.
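
For reference, the fixed RTP header is 12 bytes (RFC 3550); this minimal Python parse shows where the two fields mentioned above, the sequence number and the timestamp, live:

import struct

def parse_rtp_header(data: bytes):
    # Unpack the 12-byte fixed RTP header defined in RFC 3550.
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,
        "payload_type": b1 & 0x7F,
        "sequence": seq,         # used by the receiver to detect packet loss
        "timestamp": timestamp,  # used to remove network jitter, if required
        "ssrc": ssrc,            # identifies the flow (see the SSRC rules below)
    }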

For retransmission requests, RIST selected the Real Time Transport Control Protocol (RTCP) associated with RTP. Two types of retransmission request messages have been defined. The first type, called the Bitmask-Based Retransmission Request, is a standard RTCP message that includes one or more ranges of 17 consecutive packets and can request any loss pattern within each range. The second type, called the Range-Based Retransmission Request, is implemented as an Application-Defined RTCP message and can request one or more continuous ranges of packets.

A RIST receiver may use either type of message. An advanced RIST receiver may dynamically decide which message to use based on the loss pattern, thus optimizing the bandwidth utilization. The “SSRC of Media Source” field helps the sender identify from which stream retransmission is being requested, allowing multiple streams to share the same UDP port at the sender. 
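
As an illustration of that decision (this sketch only estimates message counts and does not reproduce the actual wire formats), a receiver could compare how many entries each message type would need for a given loss pattern; 16-bit sequence wraparound is ignored for clarity:

def bitmask_entries(lost):
    # Each bitmask entry covers up to 17 consecutive sequence numbers.
    lost = sorted(lost)
    entries, i = 0, 0
    while i < len(lost):
        base = lost[i]
        while i < len(lost) and lost[i] - base <= 16:
            i += 1
        entries += 1
    return entries

def range_entries(lost):
    # Each range entry covers one continuous run of lost packets.
    entries, prev = 0, None
    for seq in sorted(lost):
        if prev is None or seq != prev + 1:
            entries += 1
        prev = seq
    return entries

print(bitmask_entries([10, 12, 15]), range_entries([10, 12, 15]))          # 1 vs 3: bitmask wins
print(bitmask_entries(range(200, 260)), range_entries(range(200, 260)))    # 4 vs 1: range wins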

In order to ensure protocol stability, it is necessary for the receiver to differentiate between original packets and retransmissions. RIST uses the SSRC field in the RTP header to make this differentiation, as recommended by RFC 4588. However, unlike RFC 4588, the retransmitted packet is an exact copy of the original RTP packet, except for the SSRC field. To simplify the association of an original packet flow with its retransmissions, RIST uses the following rules for SSRC:

  • For original packets, the least significant bit of the SSRC is always set to 0 (zero).
  • For retransmitted packets, the least significant bit of the SSRC is always set to 1 (one).

These choices allow maximum compatibility with non-RIST receivers. A receiver that filters by SSRC will simply ignore any retransmitted packets, while a receiver that ignores the SSRC field may actually use the retransmitted packets based on their sequence numbers.
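
The SSRC rules above reduce to a couple of bit operations; the following lines are an illustrative sketch, not taken from the specification:

def retransmission_ssrc(original_ssrc):
    # SSRC to place on a retransmitted copy of an original packet (LSB set to 1).
    return original_ssrc | 1

def is_retransmission(ssrc):
    return (ssrc & 1) == 1

def original_flow_of(ssrc):
    # Map any SSRC, original or retransmission, back to the original flow (LSB cleared).
    return ssrc & ~1

print(hex(retransmission_ssrc(0x12345678)))   # 0x12345679
print(is_retransmission(0x12345679))          # True
print(hex(original_flow_of(0x12345679)))      # 0x12345678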

The RTP specification requires senders and receivers to periodically transmit RTCP packets, which RIST uses to facilitate firewall configuration. Generally, the sender and receiver are behind firewalls for security reasons. RIST Simple Profile only requires that two consecutive user-selected UDP ports be opened at the firewall located at the receive site. The sender is configured to transmit to the public IP address at the receiving site. Multicast support is also included.
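
As a minimal sketch of the receive side (the port numbers are arbitrary examples, and pairing media and RTCP on adjacent ports follows the usual RTP convention), opening the two consecutive UDP ports might look like this:

import socket

MEDIA_PORT = 5004             # RTP media arrives here
RTCP_PORT = MEDIA_PORT + 1    # RTCP, including retransmission requests

media_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
media_sock.bind(("0.0.0.0", MEDIA_PORT))

rtcp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtcp_sock.bind(("0.0.0.0", RTCP_PORT))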

RIST also supports multiple paths between the sender and the receiver. The media stream may be split over multiple lower-bandwidth paths (bonding), such as in the case of multiple cellular connections for media transmission, or replicated over two or more paths for reliability. RIST stream replication is compatible with SMPTE ST 2022-7.

RIST Performance Measurements

A number of companies participating in the RIST Activity Group prepared a demonstration for IBC 2018. Each company implemented the protocol directly from the specification (no shared libraries or code). A bank of RIST decoders was made available in Champaign, IL. The participating companies transmitted streams over the internet from different parts of the world to the bank of decoders. The signals from the decoders were combined in a multi-viewer, and the output of this multi-viewer was published live to YouTube and made available to attendees. The demonstration commenced two weeks prior to IBC, and proved that multi-vendor, interoperable, reliable delivery over the internet can be achieved today.

Figure 1. Single packet loss measurements.

A set of measurements was performed using a real-time encoder transmitting actual audio/video content, a network emulator, and a real-time decoder. The use of a network emulator allowed precise control of the network conditions.

The results for single-packet losses are presented in Figure 1. The purple trace in the center represents the average across the 10 runs, while the blue and green lines above and below are the maximum and minimum values over the runs. In practical terms, if the network packet loss is known, the way to use Figure 1 is to read the number of retries required for that loss. For 1 percent packet loss, for example, the number of retries falls between one and two – so a minimum of two retries is needed. This implies the retransmission reassembly section of the receiver buffer must be at least twice the round-trip time.
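
As a worked example, with an assumed round-trip time of 50 ms, two retries imply a retransmission reassembly section of at least 100 ms.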

Results for five-packet burst losses in Figure 2 are very similar to those presented in Figure 1 for single-packet losses, especially with regard to the average behavior. One artifact of this type of testing is that burst losses have to be less frequent to keep the same packet loss rate, thus making recovery somewhat easier at the lower packet loss values.

Packet Reordering

In practical terms, there is only one reason why packets may arrive out of order at the destination: when there are multiple network paths between the source and the destination. Routers may classify and transmit packets based on priority, but packets belonging to the same flow should receive the same classification. 

Figure 2. Five packet burst loss measurements.

The presence of multiple paths may be intentional (for example, bonding of multiple end-to-end paths), or it may arise in the network backbone, outside the control of the user. Service providers will likely route most packets through the same path; this path may change over time, and out-of-order delivery will happen at the changeover.

In order to support reordering of packets, the RIST receiver needs to expand its buffer to include a reordering section. In a bonding application, the user is aware of the multiple paths and has control over them. In this case, the round trip delays for each of the paths can be measured (using a simple utility such as “ping”), and the required size of the reorder section is simply the difference between the highest and the smallest round trip time divided by two (since what matters is the one-way latency).
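
For example, if one bonded path measures 80 ms round trip and another 40 ms, the reorder section should be at least (80 - 40) / 2 = 20 ms.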

The more interesting question is what the size of the reorder section should be when there is a single internet connection between the sender and the receiver. While an individual link can be measured, the great majority of users will not have the capability to do so, and most providers do not have data on out-of-order packets. An earlier study published by the IEEE found that, for the links characterized by its authors, on average only 0.365% of packets arrived out of order. In the absence of any additional data, it may be reasonable to simply not have a reorder section; the penalty is a small increase in retransmission data.

This YouTube video shows RIST members from around the world streaming content via the internet.

How to Set Up a RIST Link

When setting up a RIST Simple Profile link, the user needs to manually select a few parameters to optimize the link. We recommend determining the round-trip time between the sender and the receiver using the “ping” utility. If using bonding, do this for all links. If the network loss is known, read the minimum number of retries from Figure 1. A safety margin is also recommended.

If the network loss is not known, start with four retries, which will give good results on most links. Alternatively, if the application has a maximum latency requirement, divide that by the round-trip time to find the number of retries and use that value. If R is the number of retries selected and T is the round-trip time, the receiver buffer should be set to at least RT. If the application can tolerate it, add a 10 percent margin, as network delays tend to vary.

In a bonding situation, use the highest round-trip time for T. If the transmit buffer is configurable, set it as high as possible. At the very least, it must not be less than the receiver buffer. If using bonding, the reorder section must be set to at least the difference between the highest and the lowest round-trip delay, divided by two. A safety margin is also recommended. If not using bonding, this can be left at zero.
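
The sizing rules above can be condensed into a short calculation. The following sketch uses example values that a user would measure or choose, and implements only the formulas given in this article:

def receiver_buffer(rtt_seconds, retries=4, margin=0.10):
    # Receiver buffer of at least R*T, plus an optional safety margin.
    return retries * rtt_seconds * (1 + margin)

def reorder_section(path_rtts_seconds):
    # Bonding: half the spread between the slowest and fastest path.
    return (max(path_rtts_seconds) - min(path_rtts_seconds)) / 2

# Single path with a measured 50 ms round trip and the default four retries:
print(round(receiver_buffer(0.050) * 1000), "ms receiver buffer")            # 220 ms

# Two bonded paths measured at 40 ms and 80 ms:
print(round(reorder_section([0.040, 0.080]) * 1000), "ms reorder section")   # 20 ms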

Finally, when commissioning a link, it is always recommended that it be monitored for an initial period to validate the settings. If the receiver reports late packets, its buffers should be increased; the link latency is probably higher than expected. A certain number of duplicate packets is expected. However, if this number is significant, either increase the time between retries or increase the size of the reorder section. If there are too many unrecovered packets, the number of retries should be increased if possible, with a corresponding increase in the retransmission reassembly section of the receiver.

Ciro A. Noronha, Ph.D., Director of Technology, Cobalt Digital
