Software Infrastructure Global Viewpoint – April 2021

Statistical Networks

We often speak of flexibility when transitioning to IP networks. But are we truly exploiting IP flexibility to its full potential with leaf-spine architectures?

A 100G fiber connecting two layer-2 switches can transport 30 HD progressive video signals, or 7 progressive 4K video signals. We don’t need to think about signal multiplexers or modulators as the switch provides this multiplexing for us automatically through packet switching.
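As a back-of-envelope check of those figures, the sketch below divides the link capacity by typical uncompressed stream rates. The per-stream rates (roughly 3G-SDI and 12G-SDI equivalents) and the 90% utilization headroom are illustrative assumptions, not figures from any standard.

```python
# Rough capacity check for a 100G link carrying uncompressed video flows.
# Assumed rates: ~2.97 Gb/s per HD signal, ~11.88 Gb/s per 4K signal.
# The 10% headroom (for headers, PTP, ancillary data) is an assumption.

LINK_GBPS = 100.0
HEADROOM = 0.90

rates = {"HD 1080p": 2.97, "4K 2160p": 11.88}

for name, gbps in rates.items():
    flows = int(LINK_GBPS * HEADROOM / gbps)
    print(f"{name}: {flows} flows")  # HD: 30, 4K: 7
```

With those assumptions the arithmetic lands on the same 30 HD and 7 4K flows quoted above.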

The leaf-spine topology we hear so much about works well and provides the core for many broadcast facilities. It suits ST2110, but I can’t help thinking this may not be the most efficient topology to use. Would IT use such a topology? They certainly have top-of-rack switches connecting to two core switches. But those core switches then connect to other core switches, and when enough of them join through a routing mechanism such as BGP, we have an internet.

There’s much more to routing than this, but it does reveal the thought processes of IT. They have the luxury of not having to think about non-blocking architectures, and this may be an area where we can relax our own thinking. When planning networks for video and audio distribution, should we really be trying to route every source to every destination without bottlenecks or blocking?

The main argument for this topology is to keep latency low. But if we take a more pragmatic attitude to latency and set a practical, measurable upper bound on it, then we have much more flexibility in the network design.

SDNs (Software Defined Networks) successfully split the data and control planes. In traditional networks, routing decisions take place within the switch or router, but with SDN solutions another level of control is added, and the control plane takes some of the switching and routing autonomy away from the individual devices.

The SDN can “see” the whole network within the domain we’re working with. Using suitable monitoring probes, it knows how much bandwidth is being consumed on a link, whether it’s reaching saturation, and where spare bandwidth still exists. Much of the analysis the system performs will be based on statistics, due to the massive number of data packets in flight at any time.
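The kind of decision such a controller makes can be sketched in a few lines: given measured utilization on each link, place a new flow on the least-loaded link that can still carry it. The link names, capacities, and utilization figures here are entirely hypothetical.

```python
# Sketch of an SDN-style control-plane decision. The control plane holds
# per-link statistics (measured utilization vs capacity, in Gb/s) and
# chooses where to place a new flow. All values are invented for illustration.

links = {
    "leaf1-spine1": (82.0, 100.0),
    "leaf1-spine2": (55.0, 100.0),
}

def place_flow(links, flow_gbps):
    """Return the link with the most spare capacity that fits the flow,
    or None if no link has room (the 'blocking' case)."""
    candidates = {name: cap - used for name, (used, cap) in links.items()
                  if cap - used >= flow_gbps}
    return max(candidates, key=candidates.get) if candidates else None

print(place_flow(links, 12.0))  # → leaf1-spine2
```

The point of the sketch is the division of labor: the switches keep forwarding packets, while the controller, armed with network-wide statistics, decides the paths.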

One of the great advantages of statistical analysis is that it lets us reduce the provisioned capacity of the network, so we no longer need to design the whole system for peak bandwidths – as happens with SDI, AES and analog networks. Adopting such a solution requires us to step back from point-to-point connectivity and the static system thinking it encourages. Relaxing the latency requirement, probably not by much, will allow us to take advantage of the peaks and troughs within the network, even with an evenly gapped system such as ST2110.
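The statistical-multiplexing gain is easy to demonstrate with a toy simulation: the peak of an aggregate of variable-rate streams sits well below the sum of the individual peaks, so a statistically provisioned link needs less capacity than a peak-provisioned one. The stream counts and rates below are invented for illustration.

```python
import random

# Toy statistical multiplexing demo: 20 variable-bitrate streams, each
# fluctuating between 2 and 8 Mb/s (hypothetical figures). Compare the
# worst-case (peak-provisioned) capacity with the aggregate peak actually
# observed over many samples.

random.seed(42)
N_STREAMS, SAMPLES = 20, 1000
MIN_RATE, PEAK_RATE = 2.0, 8.0  # Mb/s per stream

aggregate = [sum(random.uniform(MIN_RATE, PEAK_RATE) for _ in range(N_STREAMS))
             for _ in range(SAMPLES)]

worst_case = N_STREAMS * PEAK_RATE  # capacity if every stream peaks at once
observed_peak = max(aggregate)      # what the link actually carried

print(f"peak-provisioned capacity: {worst_case:.0f} Mb/s")
print(f"observed aggregate peak:   {observed_peak:.0f} Mb/s")
```

Because the streams rarely peak simultaneously, the observed aggregate peak comes in well under the worst case – which is exactly the headroom statistical thinking lets us reclaim.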

Furthermore, we can borrow some ingenious methods from machine learning to predict traffic patterns and peak usage within the network, making it more efficient still.
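As a far simpler stand-in for the machine-learning forecasters alluded to here, even an exponentially weighted moving average gives a one-step-ahead estimate of link load that a controller could act on. The sample loads are hypothetical.

```python
# Minimal load forecaster: an exponentially weighted moving average (EWMA),
# a much simpler cousin of the ML predictors mentioned in the text.
# The link-load samples below are invented for illustration.

def ewma_forecast(samples, alpha=0.3):
    """Return a one-step-ahead load forecast from past link-load samples.
    Higher alpha weights recent samples more heavily."""
    estimate = samples[0]
    for x in samples[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

loads = [40, 42, 45, 60, 58, 44, 41]  # Gb/s, hypothetical measurements
print(f"forecast next-interval load: {ewma_forecast(loads):.1f} Gb/s")
```

A real deployment would use richer models, but the principle is the same: predict the load, then provision or reroute before the peak arrives.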

Thinking statistically will empower us to build more efficient, flexible, and dynamic networks. After all, this is nothing new: we’ve been using stat-muxing in video compression for many years.