Professional Live IP Video - Designing Networks
There’s a lot to consider when planning to incorporate uncompressed media into your live production, particularly when using SMPTE ST 2110, ST 2022-6 or AES67 streams. In this article, we look at the network hardware, architectures and future-proofing you should be considering.
The choice of network design has ramifications beyond simply cost and efficiency. When a person starts out with their first network, they'll probably have a switch with a few ports and plug everything into it. Once that becomes too small, it's natural to swap that switch for a bigger one. All the major manufacturers of networking equipment sell very large switches with, say, 576 or more ports, each of which may run at 100Gb/s or even 400Gb/s. Whichever size you choose, using a single, monolithic switch can be a great way to start as it simplifies the physical network and control.
Any serious uncompressed media over IP network will actually comprise two identical and independent networks. Most IP broadcast equipment has two network ports for this purpose and will send identical streams over both. At the other end, the receiver will use the SMPTE ST 2022-7 standard to seamlessly take traffic from both networks to produce a resilient output. While this dual-network approach means that the system continues uninterrupted if a monolithic switch fails, it does leave your facility running with no remaining resilience until that switch is repaired. By choosing a different architecture, we can reduce the ‘blast radius’ of a failure and retain resilience in most of the system even when something does fail.
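The receiver-side behaviour described above can be sketched in a few lines. This is a simplified illustration of ST 2022-7 style seamless protection, not an implementation of the standard: the receiver accepts the same RTP stream from both networks and forwards each sequence number only once, so a packet lost on one path is covered by its duplicate on the other. All names here are illustrative.

```python
class SeamlessReceiver:
    """Merge duplicate RTP packets arriving over two independent paths."""

    def __init__(self):
        self.seen = set()    # sequence numbers already forwarded
        self.output = []     # stands in for the reconstructed stream

    def on_packet(self, network, seq, payload):
        # Whichever network delivers a sequence number first "wins";
        # the duplicate arriving on the other path is silently dropped.
        if seq in self.seen:
            return
        self.seen.add(seq)
        self.output.append((seq, payload))

rx = SeamlessReceiver()
rx.on_packet("red", 1, b"A")
rx.on_packet("blue", 1, b"A")   # duplicate from the second network: dropped
rx.on_packet("blue", 2, b"B")   # lost on the red network: still delivered
assert [seq for seq, _ in rx.output] == [1, 2]
```

A real receiver also has to bound how long it waits for a late duplicate, but the principle is the same: two networks, one clean output.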
As a stepping stone, let’s look at some other downsides of monolithic networks. For large installations, cabling individual connections between two technical rooms becomes impractical, and more so if a separate building is involved. Moreover, large switches tend to only have high-bandwidth ports, which is a waste when cabling equipment that only has 1GbE connectivity. To mitigate that if, say, you have 100GbE ports on the main switch, you could add some smaller switches off the main switch which split out the 100GbE connection into multiple 10GbE connections or even many 1GbE ports. Having another switch connected to the main switch also simplifies longer cable runs between rooms.
For a larger infrastructure, and for the most flexibility, a spine-leaf architecture can be created where the monolithic switch is replaced with two identical, high-capacity switches. A series of smaller switches (the leaves) are then connected to both of the spine switches creating two routes to every destination. If one of the spine switches is taken out of service for maintenance, there is another able to take over seamlessly. The spine-leaf architecture has notable benefits in flexibility, future-proofing and minimizing the blast radius of any problems. However, the extra network hardware required does make the cost of the network potentially double the cost of a monolithic network.
Choosing the right network architecture is key to ensuring you get the right balance of cost and risk to your operations. If you can, it’s best to work out what will be connected to your network before you build it. Write down the bandwidth required and connectivity needed for each piece of equipment. Make a list by location and when that’s done, you will be able to see which architectures are possible for your facility.
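The planning exercise above lends itself to a simple spreadsheet-style calculation. The sketch below, with an entirely hypothetical equipment list, totals the expected bandwidth per location so you can see what each room's uplink and switch ports need to carry:

```python
from collections import defaultdict

# Hypothetical inventory: (device, room, port speed in Gbps, expected Gbps)
devices = [
    ("camera-1",    "studio-a", 10, 9.5),
    ("camera-2",    "studio-a", 10, 9.5),
    ("audio-node",  "studio-a",  1, 0.6),
    ("multiviewer", "cto-room", 25, 18.0),
]

per_room = defaultdict(float)
for name, room, port_speed, expected in devices:
    # A device can never legitimately exceed its own port speed.
    assert expected <= port_speed, f"{name} oversubscribes its own port"
    per_room[room] += expected

for room, gbps in sorted(per_room.items()):
    print(f"{room}: {gbps:.1f} Gbps aggregate - size the uplink above this")
```

Even a rough version of this list makes it obvious whether a monolithic switch, a main switch with satellites, or a full spine-leaf design fits your facility.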
The network industry has a history of building its latest interfaces on tried and tested technology. For instance, 100Gbps interfaces are made up of four 25Gbps links. Many of the latest standards have a 'Q' for 'Quad' in their names, such as QSFP28, and QSFP-DD, the latter being a 400Gbps standard whose 'DD' stands for double density: eight lanes running at 50Gbps. This is advantageous as we can ask the switches to run the lanes separately and treat them as multiple ports. This ability to split a high-bandwidth port into lower bandwidth links helps create flexibility and maximize the value of expensive routers.
Breakout cables are an easy way of splitting ports in lieu of using a whole extra switch. DACs (Direct Attach Copper) and AOCs (Active Optical Cables) are two common ways of splitting a port, though DACs are more popular due to their lower cost and lower power consumption. Because they are based on copper, DACs support shorter lengths than AOCs, but they are very useful for local rack cabling.
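The lane arithmetic behind these breakouts is worth making explicit. The table below follows the common quad- and octal-lane module generations; module names are standard, but the presentation is just an illustration:

```python
# Each pluggable module is an aggregate of identical lanes, which is
# why a port can be run whole or broken out into its individual lanes.
BREAKOUTS = {
    "QSFP+ (40G)":    (4, 10),   # 4 lanes x 10 Gbps
    "QSFP28 (100G)":  (4, 25),   # 4 lanes x 25 Gbps
    "QSFP-DD (400G)": (8, 50),   # 8 lanes x 50 Gbps
}

for name, (lanes, lane_gbps) in BREAKOUTS.items():
    print(f"{name}: {lanes} x {lane_gbps}G lanes = {lanes * lane_gbps}G aggregate")
```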
When linking switches together, the link will usually need to carry a lot of data. At full capacity, a non-blocking 10GbE 48-port switch will have 480Gbps of traffic flowing out of, and into, its ports – noting that network traffic is bi-directional. Each of those 48 ports will be in 'access' mode, which means that the port will be in a designated Virtual LAN (VLAN). Trunk ports are special ports which typically have a higher bandwidth; for instance, the trunk ports on a 10GbE switch would be 40GbE or 100GbE. Trunk ports aren't locked to any one VLAN. Rather, they tag all traffic that flows over them with the VLAN number. On the other switch, this tag is read and the traffic routed to the appropriate VLAN. 802.1Q is typically the method used for this tagging.
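To make the tagging concrete, here is a small sketch of what an 802.1Q tag actually looks like on the wire: four bytes inserted after the destination and source MAC addresses, consisting of the TPID 0x8100 and a TCI word carrying a 3-bit priority, a 1-bit drop-eligible flag, and the 12-bit VLAN ID. The function name is ours; the frame layout follows the standard:

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 6-byte dst and src MAC addresses."""
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | vlan_id      # PCP (3b) | DEI=0 (1b) | VID (12b)
    tag = struct.pack("!HH", 0x8100, tci) # TPID then TCI, network byte order
    return frame[:12] + tag + frame[12:]

# 12 zero bytes stand in for the MAC addresses, then EtherType and payload.
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(untagged, vlan_id=110)
assert tagged[12:14] == b"\x81\x00"                        # TPID present
assert int.from_bytes(tagged[14:16], "big") & 0x0FFF == 110  # VLAN ID
```

The receiving switch simply reads those four bytes, strips them, and forwards the frame within the matching VLAN.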
Link aggregation (often also called trunking) can also be used to tie together several ports to give a higher bandwidth throughput; for instance, five 10GbE ports can be aggregated to provide a 50Gbps link. As an example, this link aggregation could be used to receive the output of a bank of encoders from a third party at a special event. The third party would be operating the encoders on their own network as an outsourced service, and the network handoff could be over an aggregated set of ports to carry the multicast traffic.
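One practical detail of link aggregation is that traffic is shared across the member links per flow, not per packet, so packets within a flow stay in order. A common approach is to hash the flow's addresses onto a member link, which this illustrative sketch shows (member names and the hash choice are ours):

```python
import hashlib

MEMBERS = ["10g-1", "10g-2", "10g-3", "10g-4", "10g-5"]  # 5 x 10GbE = 50Gbps

def pick_member(src_ip: str, dst_ip: str, dst_port: int) -> str:
    """Hash the flow identifiers onto one member link of the aggregate."""
    key = f"{src_ip}|{dst_ip}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return MEMBERS[digest[0] % len(MEMBERS)]

# The same multicast flow always lands on the same member link:
a = pick_member("10.0.0.5", "239.1.1.1", 5004)
b = pick_member("10.0.0.5", "239.1.1.1", 5004)
assert a == b
```

A consequence worth noting: a single flow can never exceed one member's speed, so a 12Gbps stream would not fit over a 5 x 10GbE aggregate even though the group totals 50Gbps.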
Choosing the Right Switch - Bandwidth
In the SDI world, we wouldn’t dream of buying a router that could only route 70% of our IO at a time. If we have 128 SDI inputs, we’d expect to be able to send to 128 SDI outputs. The same assumption can't be taken for granted in the IT world, where 40 computers may be on the same switch but an office wouldn’t expect to deal with a full 1Gbps from all of them at once. For broadcasters, however, when we use a port there will always be media flowing, often close to the maximum capacity. This is why we need to think carefully about whether we need equipment to be ‘blocking’ or ‘non-blocking’.
A switch which is non-blocking means that when every port is running at maximum capacity, the ‘backplane’ which connects all ports together internally and actively manages traffic between them won’t run out of steam. A switch which is ‘blocking’ would have a backplane capacity less than the sum of its ports. For a 48-port 10GbE switch with two 40GbE uplink ports, the backplane would have to have 560Gbps (480 + 80) of bi-directional capacity to be considered non-blocking.
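The non-blocking check is simple arithmetic, which the following sketch captures using the figures from the example above:

```python
def required_backplane_gbps(ports: dict) -> int:
    """Backplane capacity needed to be non-blocking.

    `ports` maps port speed in Gbps -> number of ports at that speed.
    The backplane must carry the sum of all port speeds, full duplex.
    """
    return sum(speed * count for speed, count in ports.items())

switch = {10: 48, 40: 2}                  # 48 x 10GbE plus 2 x 40GbE uplinks
need = required_backplane_gbps(switch)    # 480 + 80
assert need == 560
print(f"Non-blocking requires {need} Gbps in each direction")
```

Datasheets often quote switching capacity as the duplex total (here, 1.12Tbps), so take care to compare like with like when evaluating hardware.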
The idea of ‘blocking’ isn’t just about the switches themselves, because the network can be blocking too. Just as using SDI tie-lines between two routers was a ‘blocking’ topology for baseband video, your uplink between two switches may be a bottleneck. After the tenth, and final, tie-line is used, nothing more can be routed; the same can be true between switches. And whilst it’s usually the right choice to demand a non-blocking switch, your overall network architecture may be unnecessarily expensive if it were fully non-blocking.
If you do choose to have parts of your network architecture ‘blocking’, they need to be managed just like SDI tie-lines, either manually or by a broadcast control system. SDN (software-defined networking) understands the whole network and can manage bandwidth constraints, unlike IGMP, which has no understanding of bandwidth and so cannot protect an inter-switch link from being over-subscribed.
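The difference in behaviour can be sketched as a toy admission-control model. An SDN-style controller tracks the bandwidth already reserved on an inter-switch link and refuses a route that would oversubscribe it, a check IGMP simply cannot make. This is an illustration of the principle, not any particular controller's API:

```python
class InterSwitchLink:
    """Track bandwidth reservations on a blocking inter-switch link."""

    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.reserved = 0.0

    def admit(self, flow_gbps: float) -> bool:
        # Refuse any route that would push the link past its capacity.
        if self.reserved + flow_gbps > self.capacity:
            return False
        self.reserved += flow_gbps
        return True

link = InterSwitchLink(capacity_gbps=100.0)
assert link.admit(60.0)        # first large flow fits
assert not link.admit(60.0)    # second is refused: 120 > 100
assert link.admit(30.0)        # a smaller flow is still admitted
```

With plain IGMP, that second join would have been honoured and the link silently oversubscribed, corrupting every stream sharing it.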
In this article, we’ve covered important points to consider when approaching a new network design, both when purchasing hardware and when designing the topology. Designing the right network for you requires a good understanding of the way your operation works, to ensure you deliver full networking capabilities where they matter the most. But when you implement that network using the principles in this article, it will be ready to expand to the size and shape your future organization needs.