Services Global Viewpoint – August 2020

Taming The Internet Beast

As broadcasters, we have enjoyed the luxury for many years of well-defined contribution networks that are predictable in both latency and bandwidth. But as we make more use of the internet, especially for contribution circuits, should we even be trying to tame it for broadcast applications?

A simple rule of thumb seems to permeate our lives: the more control we have over a system, and the more predictable it becomes, the more expensive the result. SDI contribution circuits are a good example of this; a 3G-SDI link between London and New York will have the SDI parameters specified in its SLA, so we can predict latency and bandwidth for video and audio (within well-defined tolerances). Equally, we could procure a dedicated IP link with tight latency tolerances, but these are not as well defined for video and audio as they are for the SDI link. And then there's the internet.

Unless you start using CDNs, latency and bandwidth allocation become a real issue. Furthermore, CDNs solve a very specific use-case and are not really applicable to ad-hoc contribution circuits. There are vendors and organizations reporting very low latency, but it's difficult to provide a specification for this as the underlying transport network (that is, the internet) is almost impossible to specify to any meaningful tolerance.

I’m in the process of traversing an interesting rabbit hole on TCP congestion algorithms and came across RFC 7567 – Recommendations Regarding Active Queue Management. This RFC highlights the concept of congestion collapse in the context of aggressive TCP retransmissions compensating for packet loss caused by congestion. In effect, if TCP retransmissions are not backed off correctly in the host device, they could contribute significantly to congestion collapse, resulting in very low data throughput and massive latency, probably making the link unusable.
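The backoff idea is simple to sketch. The following is my own toy illustration, not any real TCP stack's implementation: each consecutive timeout doubles the retransmission timeout (RTO), so a sender facing a congested path injects exponentially less retransmission traffic instead of flooding the link. The one-second base RTO and 60-second cap are illustrative values only.

```python
# Toy sketch of exponential backoff on a TCP-style retransmission
# timer (not a real stack). Each consecutive timeout doubles the
# retransmission timeout (RTO), up to a cap, so a congested path
# sees exponentially less retransmission traffic.

def backed_off_rto(base_rto: float, retries: int, max_rto: float = 60.0) -> float:
    """Return the RTO after `retries` consecutive timeouts."""
    return min(base_rto * (2 ** retries), max_rto)

# With a 1-second base RTO, successive timeouts wait 1, 2, 4, 8 ... seconds.
schedule = [backed_off_rto(1.0, n) for n in range(7)]
print(schedule)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Without this doubling, every lost packet would be retransmitted at the same aggressive rate, which is exactly the behavior the RFC warns can drive a congested link into collapse.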

Digging deeper, I came across the concept of fairness within TCP. Fairness stops a host from bombarding the internet with data to gain an unfair advantage and consume bandwidth at the expense of other users. This is a well-researched area, and all TCP software stacks should comply with fairness policies. One of the positive side effects of TCP is that its congestion control also acts to reduce congestion in the internet (or any IP network).
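The mechanism behind this fairness is TCP's additive-increase/multiplicative-decrease (AIMD) rule, and a toy simulation shows why it works. The sketch below is my own simplification (the link capacity, increments, and round counts are arbitrary, and a real stack reacts per-flow to loss, not in lockstep): two flows that start with wildly unequal bandwidth converge towards an even split, because every congestion event halves both rates and so halves the gap between them.

```python
# Toy simulation of TCP's additive-increase/multiplicative-decrease
# (AIMD) rule, the mechanism that drives competing flows towards a
# fair share of a link. Numbers are arbitrary illustrative units.

LINK_CAPACITY = 100.0

def aimd_round(rates):
    """One round: every flow probes for more bandwidth (additive
    increase); if the link is then congested, all flows halve their
    rate (multiplicative decrease)."""
    rates = [r + 1.0 for r in rates]
    if sum(rates) > LINK_CAPACITY:
        rates = [r / 2.0 for r in rates]
    return rates

# Start two flows very unfairly: 90 vs 10 units.
rates = [90.0, 10.0]
for _ in range(200):
    rates = aimd_round(rates)
print(rates)  # the two rates end up close to each other
```

The key observation is that additive increase grows both flows equally while multiplicative decrease shrinks the larger flow more in absolute terms, so the gap between flows halves at every congestion event.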

These are just a few examples of where the flexibility IP networks offer, including the internet, has consequences, and the price we pay for that flexibility is a lack of control. We can’t arbitrarily tune TCP’s retransmission algorithms, as doing so may break the fairness policies of the internet.

There is one other interesting question I’ve yet to answer. The obvious way to avoid TCP’s congestion control, flow control, and the associated latency and bandwidth issues is not to use TCP at all, but instead to adopt UDP distribution. Although UDP is used for streaming video and audio in many real-time applications, its usage on the internet is relatively low compared to TCP. So, my question is this: if more broadcasters and users adopt UDP systems for contribution circuits, how will this affect the internet’s fairness policies, who will police them, and what will the effects be on UDP-based contribution circuits?
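The question is visible even in a minimal UDP sender. In the sketch below (the destination address, port, and packet rate are placeholders of my own, not from any real system), the socket offers no acknowledgments, no retransmission, and no backoff; any pacing the sender does is entirely voluntary at the application layer, which is exactly why nothing in UDP itself enforces fairness:

```python
# Minimal sketch of why UDP raises the fairness question: unlike TCP,
# a UDP sender has no built-in congestion or flow control, so any
# "fair" behaviour must be added by the application. The destination
# and rate below are placeholders for illustration only.
import socket
import time

DEST = ("127.0.0.1", 5004)   # hypothetical contribution endpoint
PACKET = b"\x00" * 1316      # common MPEG-TS payload size (7 x 188 bytes)
SEND_RATE_PPS = 100          # application-chosen pacing; nothing enforces it

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(10):
    sock.sendto(PACKET, DEST)        # fire-and-forget: no ACKs, no backoff
    time.sleep(1.0 / SEND_RATE_PPS)  # pacing is entirely voluntary
sock.close()
```

A sender that simply removed the `sleep` would consume whatever bandwidth it could reach, and no fairness policy in the transport would stop it, which is the crux of the question above.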

As I understand it, the internet is accustomed to working with TCP, and its efficiency and fairness policies are well established and understood. But it would be interesting to see how UDP systems fare as their usage increases and they compete with TCP for bandwidth deep inside internet service providers’ networks.
