
Hardware Infrastructure Global Viewpoint – February 2017

Building A Video IP Infrastructure: Design Considerations

As production and broadcast facilities move to IP-centric solutions, it is crucial that engineering managers understand how to build an optimized architecture to leverage the benefits IP technology offers. Without proper design considerations, the cost of an IP infrastructure may even surpass that of an equivalent SDI studio. Measure twice, cut once.

IP infrastructures naturally provide expandability, which is one of the key motivations for migrating to this new technology. IP systems have been deployed at small and very large scales and are used in many different markets. The technology is proven, cost-effective and scalable. However, IP infrastructures present several challenges.

The first main issue is affordability and scalability. Engineers face this challenge when designing additional control rooms or completely new television facilities: the infrastructure must meet short-term requirements while anticipating potential future expansion. The design must accommodate that expansion without major rebuilds, which would generate additional costs, and it should deliver a quick return on the initial investment before additional equipment is needed.

The second issue is that system robustness and redundancy cannot be compromised. TV stations usually operate on a 24/7 basis, so it is mandatory to design a system that is resistant to failures that might compromise the operation of the station. Interruptions of that sort can lead to significant loss of revenue, so the system must support continuous operation while still allowing equipment to be maintained and upgraded as needed. Additionally, IP signals are somewhat more fragile than SDI, so a mechanism is needed to protect the signals flowing through the network and maintain their integrity.

The third issue concerns the cost of bandwidth in an IP infrastructure. Broadcast television uses significantly more bandwidth than other types of data traffic, such as file transfer. Whilst audio and metadata are not particularly data intensive, each uncompressed video signal consumes gigabits per second of network capacity. The cost of the IP switches transporting these signals rises with bandwidth, so any waste of expensive ports on the network will drive up system cost.
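As a rough sanity check of that claim, the sketch below estimates the active-picture bit rate of a few common formats (width x height x bits per pixel x frame rate). These are illustrative figures only; real SDI line rates (1.485 Gbps for HD, 2.97 Gbps for 3G) are somewhat higher because blanking is carried as well.

```python
# Rough uncompressed video bit-rate estimate (illustrative, not vendor data).
def active_video_gbps(width, height, bits_per_pixel, fps):
    """Active-picture bit rate in Gbps; SDI line rates also include blanking."""
    return width * height * bits_per_pixel * fps / 1e9

# 10-bit 4:2:2 sampling averages 20 bits per pixel.
print(f"1080i29.97: {active_video_gbps(1920, 1080, 20, 29.97):.2f} Gbps")  # ~1.24
print(f"1080p59.94: {active_video_gbps(1920, 1080, 20, 59.94):.2f} Gbps")  # ~2.49
print(f"2160p59.94: {active_video_gbps(3840, 2160, 20, 59.94):.2f} Gbps")  # ~9.95
```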

One solution to the issues outlined above is the Spine/Leaf architecture. Introduced to create a fast, predictable, scalable, and efficient communication architecture in the data centre, Spine/Leaf is configured with Equal-Cost Multipathing (ECMP), allowing all connections to be used at the same time while remaining stable and avoiding loops within the network. The system scales gracefully by adding Leaf and Spine switches to the network.

Figure 1. Spine/Leaf architecture. Spine/Leaf is a two-layer data center network topology composed of leaf switches (to which servers and storage connect) and spine switches (to which leaf switches connect). Leaf switches mesh into the spine, forming the access layer that delivers network connection points for servers.
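To illustrate how ECMP keeps all links busy without reordering packets, here is a minimal sketch: each flow's identifying fields are hashed, and the hash deterministically selects one of the equal-cost spine uplinks. The uplink names and hash function are illustrative assumptions, not any vendor's actual algorithm.

```python
# Minimal ECMP illustration: hash a flow's 5-tuple to pick a spine uplink.
# Packets of the same flow always hash to the same link, preserving order.
import hashlib

SPINE_UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]  # hypothetical names

def pick_uplink(src_ip, dst_ip, src_port, dst_port, protocol):
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    index = int.from_bytes(hashlib.sha256(flow).digest()[:4], "big") % len(SPINE_UPLINKS)
    return SPINE_UPLINKS[index]

print(pick_uplink("10.0.0.5", "239.1.1.1", 10000, 20000, "UDP"))
```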

In this topology, the Leaf switches connect the devices that provide and consume data. Typically equipped with lower-bandwidth ports (1GE, 10GE, 25GE) and some higher-bandwidth ports (40GE, 100GE), these switches aggregate traffic toward the Spine. The Spine switch is equipped with more of the high-bandwidth ports.

The cost of a port is highly correlated with the internal capacity of the switch to route all signals at line rate. On a line-rate switch, the internal bandwidth must equal the number of ports multiplied by their bandwidth. For example, a Spine switch with 32 x 40GE ports must handle 1.28 Tbps of switching capacity. A Leaf switch does not offer that same line-rate capacity from the I/O ports to the aggregation; it is typically configured with a ratio of about 3:1 to save cost on the hardware.

Leaf switches are primarily used for lower-rate channels. Getting the required bandwidth for video may require more expensive Spine switching.
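A quick back-of-the-envelope check of the line-rate figure above, assuming internal bandwidth simply equals the sum of the port rates:

```python
# A non-blocking (line-rate) switch must internally handle the sum of its port rates.
def switching_capacity_tbps(num_ports, port_speed_gbps):
    return num_ports * port_speed_gbps / 1000

# Spine example from the text: 32 x 40GE ports.
print(f"{switching_capacity_tbps(32, 40):.2f} Tbps")  # -> 1.28 Tbps
```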

In standard data center applications, most traffic stays inside islands, minimizing the amount of data that must be aggregated to other islands. Data rates differ from client to client depending on daily usage, so it is reasonable to design a system that does not accommodate the worst-case scenario in which all clients use their full allocated bandwidth at the same time. For this reason, the available aggregation bandwidth is typically not equal to the sum of the bandwidth of all the switch's I/O ports, but much lower. The ratio is determined by how likely the I/O ports are to be filled and by how much data must be uplinked to or downlinked from the Spine switch. At peak demand, the impact of oversubscription is reasonably managed with retries, and the user impact is acceptable in most cases. Careful optimization allows a significant reduction in switch cost without compromising operation.

For example, in Figure 3, a standard 48 x 10GE port switch can typically provide four 40GE QSFP aggregation links, for a total of 160Gbps of bandwidth, corresponding to only 33% of the 480Gbps of potential bandwidth from the I/O ports. A minority of manufacturers offer better ratios at a much higher price, but that defeats the purpose of using economical COTS (Commercial-Off-The-Shelf) equipment.

Figure 3. A standard 48 x 10GE port switch typically provides four 40GE QSFP aggregation links with a total of 160Gbps of bandwidth. Unfortunately, that amounts to only one-third of the 480Gbps of potential bandwidth from the I/O ports.
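The Figure 3 arithmetic can be reproduced in a few lines; the 3:1 ratio falls straight out of the port counts:

```python
# 48 x 10GE of I/O feeding four 40GE QSFP uplinks.
io_gbps = 48 * 10        # 480 Gbps of I/O capacity
uplink_gbps = 4 * 40     # 160 Gbps of aggregation capacity

print(f"{io_gbps}G into {uplink_gbps}G of uplinks: "
      f"{io_gbps / uplink_gbps:.0f}:1 oversubscription, "
      f"{uplink_gbps / io_gbps:.0%} of I/O bandwidth usable at peak")
# -> 480G into 160G of uplinks: 3:1 oversubscription, 33% of I/O bandwidth usable at peak
```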

A broadcast IP infrastructure is used similarly; however, some specific requirements expose the limitations of COTS switches.

Additional network requirements must be considered to support uncompressed flows carrying large amounts of data. Uncompressed audio and video must be delivered in a predictable and deterministic manner; otherwise, quality is greatly compromised. The system must be non-blocking and always provide enough bandwidth to bring the signals to any destination, even at peak demand. Dropped packets are not acceptable, as the impact is visually and audibly disruptive.

Figure 4. A key difference between switching data and switching uncompressed video is that the large video flows must be predictably delivered over a deterministic network. Using a TOR (Top of Rack) switch makes it easier to aggregate the data with Spine switches while reducing fiber cable requirements.

Following the Spine/Leaf approach, the equipment providing the data connects to a Top of Rack (TOR) switch. Located in the same rack, the TOR switch aggregates the data and then interconnects to the Spine switches. This strategy can greatly reduce the length of the fiber optic cables connecting to the IP network.

In many TV stations, the gear is installed in racks grouped for maintenance reasons. Servers may be installed in a set of racks in the equipment room, and multi-viewers at a different location; the same approach applies to other types of equipment. Any required SDI devices may also be grouped together so they can be easily connected to the gateway devices.

In large facilities with centralised equipment rooms, the distance between devices can be significant. Implementing a Spine/Leaf architecture can help reduce the length and quantity of cables interconnecting to the IP network. Each device is connected locally with a short fiber to a TOR switch, and the TOR switch aggregates the many signals and interconnects to the Spine switch using fewer fiber strands. This provides a clear advantage over centralised switching methodologies.

Figure 5: Typical IP Production Infrastructure (Drawing provided by Michel Proulx).

In a typical broadcast production plant, each piece of equipment provides a few inputs and outputs, usually with an uneven balance between them. The I/O configuration varies from one device to another, but you will rarely find a device using the full 10G bandwidth of its physical connection.

In Figure 6, typical sources such as cameras and servers provide an input and receive an output through the 10GE port. With 1.5G video, this represents only 15% usage of the port; 85% of the capacity goes unused.

Figure 6: Typical IP equipment using only 15% of a 10GE I/O port.

Figure 7 shows a different set of equipment receiving more signals but sending fewer signals through the 10GE port. In this case, the port is better utilized, although it still does not use the full 10G bandwidth in both directions.

Figure 7: Other equipment using most of only one direction of the 10GE I/O ports.
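The per-direction utilization in Figures 6 and 7 is easy to compute, remembering that a 10GE port is full duplex (10G in plus 10G out). The Figure 7 mix of six flows in and one out is an assumed example, not a figure from the article.

```python
# Per-direction utilization of a full-duplex 10GE port carrying 1.5G (HD) flows.
def port_utilization_pct(flows_in, flows_out, flow_gbps=1.5, port_gbps=10):
    return (flows_in * flow_gbps / port_gbps * 100,
            flows_out * flow_gbps / port_gbps * 100)

print(port_utilization_pct(1, 1))  # Figure 6 camera/server -> (15.0, 15.0)
print(port_utilization_pct(6, 1))  # assumed Figure 7 mix   -> (90.0, 15.0)
```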

Table 1 presents a cost-comparison estimate based on the bandwidth usage of a typical television station, including a large studio, the playout area, and the incoming-feed area. A realistic set of equipment is included to give an accurate sense of how to properly deploy such a system. Very few broadcast devices, about 10%, use more than 50% of the bandwidth of a 10GE port. On the other hand, each port of the Spine switch is well utilized in at least one direction. With this efficient aggregation, the system requires only 40 Spine switch ports, with nine aggregation switches providing interconnection for 372 devices.

Table 1: Typical TV station case study showing bandwidth utilization.

Based on average industry pricing, Spine switches (with 40GE ports) are seven times more expensive per port than TOR (10GE Leaf) switches. Judicious use of aggregation switches for some devices and direct connection to the Spine for others can lead to significant cost savings.

Table 2: Estimated cost of an IP switch (cost varies between vendors).
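As a hedged illustration of those savings, the sketch below applies the 7:1 per-port price ratio to the case-study numbers (372 devices aggregated into 40 Spine ports, versus every device taking its own Spine port). The cost unit is arbitrary and the comparison ignores chassis and optics, so treat it as an order-of-magnitude estimate only.

```python
# Relative cost units per port; only the 7:1 ratio comes from the article.
SPINE_PORT = 7.0   # 40GE Spine port
LEAF_PORT = 1.0    # 10GE TOR/Leaf port

devices, spine_ports_used = 372, 40  # case-study counts from Table 1

with_aggregation = devices * LEAF_PORT + spine_ports_used * SPINE_PORT
direct_to_spine = devices * SPINE_PORT  # one Spine port per device, no TOR

print(f"aggregated: {with_aggregation:.0f} units vs direct: {direct_to_spine:.0f} units")
# -> aggregated: 652 units vs direct: 2604 units, roughly a 4x difference
```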

Small Form-factor Pluggable (SFP) modules can be a highly effective tool for integrating SDI into an existing IP infrastructure. The proper SFP will adapt SDI or native IP signals into the network. SFP gateway modules can be installed inside the TOR switches to convert signals from SDI to IP and vice versa, using the same TOR switch that is already present to accommodate native IP equipment.

Figure 8. This example shows how SFP gateway modules can be used in TOR switches. This allows the engineer to change between SDI and IP configurations as needs arise.

Careful design and use of SFPs can cut the amount of fiber optic cable needed in half. Another benefit of SFPs is the ease of migration from SDI to IP: as your devices gradually become IP-ready, you simply replace the SFP in the switch port connecting to the device. There is no need for a major rebuild of the rack, as some SFPs provide modularity of one or two channels at a time.

Figure 9: SFPs naturally integrate SDI inside an IP network.

As explained previously, an aggregation switch does not provide a 1:1 ratio from the I/O ports to the aggregation ports, so there is no reason to try to fill every 10GE port with media in the hope of optimizing the system. The capacity to aggregate the data to the Spine is the bottleneck; therefore, the total bandwidth at the I/O ports cannot be fully utilized. Some SFPs can convert two signals per 10GE port. Even so, the density cannot be pushed to a smaller footprint, as switches are limited by the physical space required for the two mini-BNC connectors that connect to the external device.

Figure 10: I/O versus aggregation bandwidth capacity.

Table 3 lists different scenarios describing I/O port usage and the capacity to aggregate all signals to the Spine switch. The example is based on a standard COTS Leaf switch with 48 x 10GE ports and 4 x 40GE ports. The first scenario is theoretical and shows the full use of every 10GE port; in this condition, only 16 ports can be used before reaching the 160G limit of the aggregation ports. When configured with two 3G signals per port, 26 ports can be used, for a total of 52 channels. Two 1.5G channels per port allow the use of all 48 ports, for a capacity of 96 channels. This is the maximum number of signals that can conceivably be handled within a single RU of rack space.

Table 3: This table compares four design options.
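The Table 3 numbers follow directly from the 160G aggregation ceiling; a short sketch reproduces them:

```python
# How many I/O ports (and channels) a 48 x 10GE / 4 x 40GE leaf can use
# before its 160G of uplinks saturate.
AGGREGATION_GBPS = 4 * 40
TOTAL_PORTS = 48

def usable(channels_per_port, channel_gbps):
    per_port_gbps = channels_per_port * channel_gbps
    ports = min(TOTAL_PORTS, int(AGGREGATION_GBPS // per_port_gbps))
    return ports, ports * channels_per_port

print(usable(1, 10))    # full 10G per port  -> (16, 16)
print(usable(2, 3))     # two 3G per port    -> (26, 52)
print(usable(2, 1.5))   # two 1.5G per port  -> (48, 96)
```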

It is clear that a Spine/Leaf architecture, along with judicious use of aggregation, is needed to efficiently interconnect devices into an IP network, and making optimal use of the bandwidth inside each switch is key to making the solution cost effective. Using TOR switches also simplifies cable management, and SFP connectivity properly integrated within existing TOR switches contributes to even more savings. Putting 96 SDI-to-IP gateway conversions inside a single rack unit results in a highly efficient design. Most importantly, the Spine/Leaf architecture makes future expansion easy and cost effective.

Louis Caron, Director Product Management at Embrionix