Professional Live IP Video - Designing Networks

There’s a lot to consider when planning to incorporate uncompressed media into your live production, particularly when using SMPTE ST 2110, ST 2022-6 or AES67 streams. In this article, we look at the network hardware, its architecture and the future-proofing you should be considering.

Network Topologies

The choice of network design has ramifications beyond simply cost and efficiency. When a person starts out with their first network, they'll probably have a switch with a few ports and plug everything into it. Once that becomes too small, it's natural to swap that switch for a bigger one. All the major manufacturers of networking equipment sell very large switches with, say, 576 or more ports, each of which may run at 100Gbps or even 400Gbps. Whichever size you choose, using a single, monolithic switch can be a great way to start as it simplifies the physical network and control.

Any serious uncompressed media over IP network will actually comprise two identical and independent networks. Most IP broadcast equipment has two network ports for this purpose and will send identical streams over both. At the other end, the receiver will use the SMPTE ST 2022-7 standard to seamlessly take traffic from both networks to produce a resilient output. While this dual-network approach means that if a monolithic switch fails the system continues uninterrupted, it does leave your facility with no remaining resilience until the failed switch is restored. By choosing a different architecture, we can reduce the ‘blast radius’ of a failure and retain resilience in most of the system even in the event of a failure.
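
To illustrate the principle behind ST 2022-7 seamless protection, here is a minimal Python sketch of a receiver that accepts whichever copy of each packet arrives first, from either network, and drops the duplicate. The class and field names are illustrative only, not taken from any real implementation.

```python
# Minimal sketch of the ST 2022-7 'seamless switching' principle: the receiver
# delivers whichever copy of each RTP packet arrives first (from network A or B)
# and discards the duplicate. Names and the window size are illustrative.

class SeamlessReceiver:
    def __init__(self, window=256):
        self.window = window      # how many recent sequence numbers to remember
        self.seen = set()         # sequence numbers already delivered
        self.order = []           # delivery order, used to age out old entries

    def on_packet(self, network, seq, payload):
        """Called for every RTP packet arriving from network 'A' or 'B'."""
        if seq in self.seen:
            return None           # duplicate copy from the other network: drop it
        self.seen.add(seq)
        self.order.append(seq)
        if len(self.order) > self.window:
            self.seen.discard(self.order.pop(0))
        return payload            # first copy to arrive wins, whichever network carried it


rx = SeamlessReceiver()
rx.on_packet("A", 100, b"video")  # delivered
rx.on_packet("B", 100, b"video")  # duplicate, dropped
rx.on_packet("B", 101, b"video")  # network A was late or lossy: B's copy is used
```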

As a stepping stone, let’s look at some other downsides of monolithic networks. For large installations, cabling individual connections between two technical rooms becomes difficult, and more so if a separate building is involved. Moreover, large switches tend to have only high-bandwidth ports, which is a waste when cabling equipment that only has 1GbE connectivity. To mitigate that, if, say, you have 100GbE ports on the main switch, you could add some smaller switches off the main switch to split a 100GbE connection down into multiple 10GbE connections or even many 1GbE ports. Having another switch connected to the main switch also simplifies longer cable runs between rooms.

Monolithic network.

For a larger infrastructure, and for the most flexibility, a spine-leaf architecture can be created where the monolithic switch is replaced with two identical, high-capacity switches. A series of smaller switches (the leaves) are then connected to both of the spine switches, creating two routes to every destination. If one of the spine switches is taken out of service for maintenance, the other is able to take over seamlessly. The spine-leaf architecture has notable benefits in flexibility, future-proofing and minimizing the blast radius of any problems. However, the extra network hardware required means the cost can approach double that of a monolithic network.

Spine-leaf network architecture.

Choosing the right network architecture is key to ensuring you get the right balance of cost and risk to your operations. If you can, it’s best to work out what will be connected to your network before you build it. Write down the bandwidth required and connectivity needed for each piece of equipment. Make a list by location and when that’s done, you will be able to see which architectures are possible for your facility.
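
A simple way to capture that inventory is a per-location list of devices with their port counts and bandwidths, which can then be totalled to see what each leaf switch, or a monolithic switch, would have to carry. The sketch below shows the idea; the devices and figures are placeholders, not recommendations.

```python
# Hypothetical equipment inventory: (location, device, port count, Gbps per port).
from collections import defaultdict

inventory = [
    ("MCR",      "camera gateway",  2, 25),
    ("MCR",      "multiviewer",     2, 100),
    ("Studio 1", "audio processor", 2, 1),
    ("Studio 1", "vision mixer",    2, 25),
]

# Total the worst-case bandwidth and port count per location.
totals = defaultdict(lambda: {"ports": 0, "gbps": 0})
for location, device, ports, gbps in inventory:
    totals[location]["ports"] += ports
    totals[location]["gbps"] += ports * gbps

for location, t in totals.items():
    print(f"{location}: {t['ports']} ports, {t['gbps']} Gbps worst case")
```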

Breakout Cables

The network industry has a history of building its latest interfaces on tried and tested technology. For instance, 100Gbps interfaces are made up of four 25Gbps lanes. Many of the latest standards have a 'Q' for 'Quad' in their names, such as QSFP28 and QSFP-DD, the latter ('DD' for Double Density) being a 400Gbps standard built on eight lanes running at 50Gbps. This is advantageous as we can ask the switch to run the lanes separately and treat them as separate ports. This ability to split a high-bandwidth port into lower-bandwidth links helps create flexibility and maximize the value of expensive routers.

Breakout cables are an easy way of splitting ports in lieu of using a whole extra switch. DACs (Direct Attach Copper) and AOCs (Active Optical Cables) are two common ways of splitting a port, though DACs are more popular due to their lower cost and lower power consumption. DACs have the downside of supporting shorter lengths than AOCs because they are based on copper, but they are very useful for local rack cabling.
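
As a rough model of what breakout does to a port inventory, the sketch below expands each physical port into its constituent lanes. The splits shown are common ones, but the options actually supported depend on the switch, its firmware and the cable or optic used.

```python
# Illustrative breakout arithmetic: a 100GbE QSFP28 port is built from four
# 25Gbps lanes, so one physical port can be presented as four logical ports.
# Supported splits vary by switch and optic; these are typical examples only.

BREAKOUTS = {
    400: [50] * 8,           # QSFP-DD: 8 x 50GbE
    100: [25, 25, 25, 25],   # QSFP28:  4 x 25GbE
    40:  [10, 10, 10, 10],   # QSFP+:   4 x 10GbE
}

def breakout(physical_ports, speed_gbps):
    """Return the list of logical port speeds after breaking out every port."""
    lanes = BREAKOUTS.get(speed_gbps, [speed_gbps])
    return [lane for _ in range(physical_ports) for lane in lanes]

logical = breakout(physical_ports=8, speed_gbps=100)
print(len(logical), "logical ports:", logical[:4], "...")  # 32 logical ports of 25GbE
```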

Trunking

When linking switches together, the link will usually need to carry a lot of data. At full capacity, a non-blocking 48-port 10GbE switch will have 480Gbps of traffic flowing out of, and into, its ports – noting that network traffic is bi-directional. Each of those 48 ports will be in 'access' mode, which means that the port sits in a designated Virtual LAN (VLAN). Trunk ports are special ports which typically have a higher bandwidth; for instance, the trunk ports on a 10GbE switch might be 40GbE or 100GbE. Trunk ports aren't locked to any one VLAN. Rather, they tag all traffic that flows over them with its VLAN number. On the other switch, this tag is read and the traffic is placed in the appropriate VLAN. IEEE 802.1Q is the method typically used for this tagging.
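
As a rough illustration of what an 802.1Q tag looks like on the wire, the sketch below builds a tagged frame using the Scapy library (assuming it is installed; the addresses, ports and VLAN number are placeholders).

```python
# Sketch of 802.1Q VLAN tagging using Scapy (pip install scapy). The VLAN ID is
# carried in a Dot1Q header inserted between the Ethernet and IP headers; the
# receiving switch reads this tag and places the traffic in that VLAN.
from scapy.all import Ether, Dot1Q, IP, UDP, Raw

frame = (
    Ether(src="00:11:22:33:44:55", dst="01:00:5e:01:01:01")  # placeholder MACs
    / Dot1Q(vlan=10)                                         # traffic belongs to VLAN 10
    / IP(src="192.168.10.5", dst="239.1.1.1")                # example multicast destination
    / UDP(sport=5004, dport=5004)
    / Raw(b"RTP payload would go here")
)

frame.show()  # print the layered frame, including the 802.1Q tag
```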

Trunking can also be used to tie together several ports to give a higher bandwidth throughput; for instance, five 10GbE ports can be aggregated to provide a nominal 50Gbps link. As an example, this link aggregation could be used to receive the output of a bank of encoders from a third party at a special event. The third party would be operating the encoders on their own network as an outsourced service, and the network handoff could be over an aggregated set of ports to carry the multicast traffic.
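
Link aggregation typically distributes traffic by hashing each flow onto one member link so that the packets of any given flow stay in order. The sketch below shows the idea only; the hash inputs and member count are illustrative and real switches use their own hashing schemes.

```python
# Illustrative flow-hash distribution over an aggregated link of five 10GbE
# members: each flow (identified here by its addresses and ports) is pinned to
# one member link so its packets arrive in order.
import hashlib

MEMBERS = 5  # five 10GbE ports aggregated into a nominal 50Gbps link

def member_for_flow(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % MEMBERS

print(member_for_flow("10.0.0.1", "239.1.1.1", 5004, 5004))  # always the same member
print(member_for_flow("10.0.0.2", "239.1.1.2", 5004, 5004))  # may land on a different one
```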

Choosing the Right Switch - Bandwidth

In the SDI world, we wouldn’t dream of buying a router that could only route 70% of our IO at a time. If we have 128 SDI inputs, we’d expect to be able to send to 128 SDI outputs. The same assumption cannot be taken for granted in the IT world, where 40 computers may be on the same switch but an office wouldn’t expect to have to deal with a full 1Gbps from all of them at once. For broadcasters, however, when we use a port there will always be media flowing, often close to the maximum capacity. This is why we need to think carefully about whether we need equipment to be ‘blocking’ or ‘non-blocking’.

A switch which is non-blocking means that when every port is running at maximum capacity, the ‘backplane’ which connects all the ports together internally and manages traffic between them won’t run out of steam. A switch which is ‘blocking’ has a backplane capacity less than the sum of its ports. For a 48-port 10GbE switch with two 40GbE uplink ports, the backplane would have to provide 560Gbps (480 + 80) of bi-directional capacity to be considered non-blocking.
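
The arithmetic is simple enough to check for any switch you are evaluating, as in the sketch below. The port counts and quoted capacity are illustrative, not figures from a real data sheet.

```python
# Is a switch non-blocking? Sum the port capacities and compare with the quoted
# backplane (switching) capacity. Figures below are illustrative only.

def required_backplane_gbps(ports):
    """ports: list of (count, speed_gbps) tuples."""
    return sum(count * speed for count, speed in ports)

ports = [(48, 10), (2, 40)]   # 48 x 10GbE access ports + 2 x 40GbE uplinks
needed = required_backplane_gbps(ports)
quoted = 560                  # taken from the manufacturer's data sheet

print(f"Needed: {needed} Gbps, quoted: {quoted} Gbps")
print("Non-blocking" if quoted >= needed else "Blocking")
```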

The idea of ‘blocking’ isn’t just about the switches themselves, because the network can be blocking too. Just as using SDI tie-lines between two routers was a ‘blocking’ topology for baseband video, your uplink between two switches may be a bottleneck. Once the tenth, and final, tie-line is in use, nothing more can be routed; the same can be true between switches. And whilst it’s usually the right choice to demand a non-blocking switch, your overall network architecture may be unnecessarily expensive if it were entirely non-blocking.

If you do choose to have parts of your network architecture ‘blocking’, they need to be managed just like SDI tie-lines, either manually or by a broadcast control system. SDN (software-defined networking) understands the whole network and can manage bandwidth constraints, whereas IGMP has no understanding of bandwidth and so is not able to protect an inter-switch link from being over-subscribed.
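
The difference can be illustrated with a toy version of the bandwidth-aware admission check an SDN controller performs before making a route; IGMP has no equivalent step. The link names, capacities and flow sizes below are made up for the example.

```python
# Toy version of the admission check an SDN controller can make before routing
# a new flow over an inter-switch link; IGMP has no such step and will happily
# over-subscribe the link. Names and figures are made up.

class Link:
    def __init__(self, name, capacity_gbps):
        self.name = name
        self.capacity = capacity_gbps
        self.reserved = 0.0

    def admit(self, flow_gbps):
        if self.reserved + flow_gbps > self.capacity:
            return False              # would over-subscribe the link: refuse the route
        self.reserved += flow_gbps
        return True


trunk = Link("leaf1-to-spine", capacity_gbps=100)

for n in range(10):
    ok = trunk.admit(12.0)            # e.g. uncompressed UHD flows of roughly 12Gbps each
    print(f"flow {n}: {'routed' if ok else 'refused'} "
          f"({trunk.reserved:.0f}/{trunk.capacity} Gbps reserved)")
```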

In this article, we’ve covered important points to consider when approaching a new network design, both in purchasing hardware and in designing the topology. Designing the right network for you requires a good understanding of the way your operation works, to ensure you deliver full networking capability where it matters most. Implement that network using the principles in this article, and it will be ready to expand to the size and shape your organization needs in the future.
