Using Configurable FPGA’s For Integrated Flexibility, Scalability, And Resilience - Part 2

Software continues to demonstrate flexibility and scalability, but the new breed of software-defined hardware architectures builds on the success of software to keep latency very low while keeping flexibility and scalability high.


This article was first published as part of Essential Guide: Using Configurable FPGA’s To Deliver Integrated Flexibility, Scalability, And Resilience.

Software Hardware

These hardware resources deliver the benefits of ASICs while maintaining the flexibility of software programming. FPGAs are available in all shapes and sizes and are also defined by the amount of hardware resource they make available to the design engineer.

Programming an FPGA is typically a three-stage process: modelling; simulation and verification; and synthesis, place and route. All these stages are performed in software, and the output is a binary file that is loaded into the FPGA at boot, or at any other time under the control of the system.

Modelling is the process of describing the function the engineer is building and is usually carried out in languages such as Verilog and VHDL. Both are Hardware Description Languages (HDLs) and, in appearance, are not dissimilar to procedural programming languages. Simulation and verification provide offline testing of the design, where data samples are presented and the outputs are checked against the expected values. For example, an FIR filter is highly deterministic, and we know what the output values should be for a given known input. The final stage is synthesis, place and route, where the binary file is built and then programmed into the FPGA.
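To make the verification step concrete, here is a minimal sketch, in Python rather than an HDL, of the kind of golden-reference FIR model used to generate expected outputs. The coefficients and stimulus are hypothetical; in a real flow the same vectors would be applied to the Verilog or VHDL design in simulation and its outputs compared against the model’s.

```python
# Minimal sketch of a golden-reference FIR model used to generate
# expected outputs for verifying an HDL design in simulation.
# Coefficients, stimulus, and values are hypothetical examples only.

def fir_reference(samples, coeffs):
    """Direct-form FIR: y[n] = sum(coeffs[k] * x[n-k])."""
    outputs = []
    history = [0] * len(coeffs)          # delay line, initially zero
    for x in samples:
        history = [x] + history[:-1]     # shift in the newest sample
        acc = sum(c * h for c, h in zip(coeffs, history))
        outputs.append(acc)
    return outputs

if __name__ == "__main__":
    coeffs = [1, 2, 2, 1]                # hypothetical low-pass taps
    stimulus = [0, 0, 4, 0, 0, 0]        # impulse-like test vector
    expected = fir_reference(stimulus, coeffs)
    # In a real flow these expected values would be compared, sample by
    # sample, against the simulated output of the Verilog/VHDL FIR.
    print(expected)                      # [0, 0, 4, 8, 8, 4]
```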

These processes can take a considerable length of time, especially as designs become complex, which is why designs are often divided into smaller blocks, each with its own test bench. But once the design of the function is complete, the final software file is downloaded into the FPGAs within a matter of milliseconds.

What is really compelling about FPGAs is that once the circuit board hardware is designed and built, the rest of the implementation is based on software. This provides untold flexibility for vendors as they can literally make the hardware do anything they want (within the limits of the resource). And this is a real opportunity for broadcasters.

Proven FPGA Technology

Although FPGAs have been used for many years inside broadcast hardware designs, and are therefore proven technology, it is only recently that arrays of FPGA ICs have become available as stand-alone ecosystems providing dynamic and scalable resource for broadcasters. For example, a single card could be programmed to be a proc-amp one day and, by downloading a new binary file, be reprogrammed the next day as a standards converter. This flexibility is something we’ve never seen before in broadcasting when considering the very low latencies involved. It’s possible for COTS software to deliver this, but the latencies are variable and unpredictable, and the systems are incredibly complex, which is often an issue for live productions. FPGA arrays offer low, deterministic latency and are relatively easy to operate.
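As an illustration of how such on-the-fly re-purposing might be driven, the sketch below shows a hypothetical management client loading different binary files onto the same card. The controller endpoint, card identifier, and bitstream names are assumptions and do not represent any particular vendor’s API.

```python
# Hypothetical sketch of a management client that re-purposes an FPGA
# card by loading a different binary file. The endpoint, card IDs, and
# file names are illustrative assumptions only.
import json
from urllib import request

CONTROLLER = "http://fpga-controller.local/api/v1"   # hypothetical endpoint

def load_function(card_id: str, bitstream: str) -> None:
    """Ask the (hypothetical) controller to program a bitstream onto a card."""
    payload = json.dumps({"card": card_id, "bitstream": bitstream}).encode()
    req = request.Request(
        f"{CONTROLLER}/cards/{card_id}/program",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())

if __name__ == "__main__":
    # Today the card runs as a proc-amp...
    load_function("rack1-card07", "proc_amp_v2.bin")
    # ...and tomorrow the same card is reprogrammed as a standards converter.
    load_function("rack1-card07", "standards_converter_v1.bin")
```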

FPGA IC arrays provided on a single circuit board can then be replicated many times in a rack frame. With the appropriate management software, the functionality is effectively abstracted away from the underlying hardware, including the transport stream. This delivers incredible opportunities for flexible and scalable operation for broadcasters, especially when considering the multitude of potential licensing models. For example, using a pay-as-you-go model, centralized licensing repositories could be linked into the vendor’s management software to make modular functionality available, such as proc-amps, embedders, and frame-synchronizers, to name but a few.
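The sketch below illustrates, in simplified form, how a pay-as-you-go licensing check might sit in front of function assignment. The repository, function names, and data structures are hypothetical stand-ins for a vendor’s centralized licensing service.

```python
# Hypothetical pay-as-you-go licensing check: before a function is
# loaded onto a card, the management layer confirms a licence with a
# central repository and records usage for billing. All names are
# illustrative assumptions.
from datetime import datetime, timezone

LICENSED_FUNCTIONS = {"proc_amp", "embedder", "frame_sync"}  # stand-in repository
usage_log = []

def checkout_function(function_name: str, card_id: str) -> bool:
    """Return True and log usage if the function is licensed."""
    if function_name not in LICENSED_FUNCTIONS:
        print(f"{function_name}: no licence available")
        return False
    usage_log.append({
        "function": function_name,
        "card": card_id,
        "start": datetime.now(timezone.utc).isoformat(),
    })
    print(f"{function_name} checked out to {card_id}")
    return True

if __name__ == "__main__":
    checkout_function("proc_amp", "rack1-card07")              # licensed
    checkout_function("standards_converter", "rack1-card07")   # not licensed
```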

An FPGA consists of tens of thousands of hardware gates and functions that can be programmed, allowing many different operations to be provided, such as proc-amps, standards converters, and frame-synchronizers. Also, many FPGAs can be connected with high-speed differential-pair buses to facilitate low-latency signal processing across multiple FPGAs.

The really exciting aspect of this initiative is that when the rack of FPGA cards has been procured, all the operational functionality is provided by the vendor using software files. These can be updated and managed by the vendor so the broadcaster can focus on building their specific studio solution without having to worry about software versioning or configuration. Furthermore, by increasing the number of racks, the available resource scales appropriately. Therefore, when an engineer is designing or expanding their facility, they can spread their estimated functionality requirements over many racks knowing the detail of operation can be loaded into the FPGAs as required, thus making the system highly flexible and scalable. Futureproofing is provided without having to plan ten years ahead as more FPGA racks can be added as required.
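As a simplified illustration of spreading estimated functionality requirements over several racks, the sketch below performs a back-of-the-envelope capacity calculation. The cards-per-rack figure and the per-function card counts are purely hypothetical.

```python
# Simplified, hypothetical capacity-planning sketch: estimate how many
# FPGA racks are needed for a set of functions, given an assumed number
# of assignable cards per rack. All figures are illustrative only.
import math

CARDS_PER_RACK = 16      # assumption: assignable FPGA cards per rack

# Assumed number of cards each function is expected to occupy.
required_functions = {
    "proc_amp": 12,
    "standards_converter": 6,
    "frame_sync": 8,
    "embedder": 4,
}

total_cards = sum(required_functions.values())
racks_needed = math.ceil(total_cards / CARDS_PER_RACK)

print(f"{total_cards} cards required -> {racks_needed} racks")
# Because functions are loaded as software files, the same racks can be
# re-assigned later, and more racks can be added as requirements grow.
```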

Transport Stream Independent

This design philosophy also has some very interesting implications for transport stream interfacing, as it is taken care of by the FPGA circuit boards. The SDI, AES, ST2110 IP, and ST2022 IP protocols, and many others, are available as VHDL code libraries and so manifest themselves as physical interfaces on the FPGA. Consequently, transferring video and audio data to and from them is a relatively straightforward process as it’s all taken care of in the FPGA itself. We don’t need to be concerned with interface equipment to convert between the various transport streams; it all takes place inside the FPGA.
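A minimal sketch of what this transport independence can look like from the management layer, assuming a hypothetical mapping of transport formats to interface cores inside the FPGA; the core names and the attach function are illustrative only.

```python
# Hypothetical sketch of transport abstraction at the management layer:
# the caller asks for an input by format, and the matching interface
# core (implemented inside the FPGA) is selected. Names are illustrative.

# Assumed mapping of transport formats to FPGA interface cores.
INTERFACE_CORES = {
    "SDI":    "sdi_rx_core",
    "AES":    "aes_rx_core",
    "ST2110": "st2110_rx_core",
    "ST2022": "st2022_rx_core",
}

def attach_input(port: int, transport: str) -> str:
    """Bind a physical port to the interface core for the given transport."""
    core = INTERFACE_CORES.get(transport)
    if core is None:
        raise ValueError(f"Unsupported transport: {transport}")
    # Downstream processing (proc-amp, frame-sync, ...) sees the same
    # internal video/audio representation regardless of the transport.
    return f"port {port} -> {core}"

if __name__ == "__main__":
    print(attach_input(1, "SDI"))
    print(attach_input(2, "ST2110"))
```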

It’s fair to say that the hardware still needs physical interfaces and connectors, but again, these can be provided as an array of assignable, flexible resources instead of being statically dedicated to specific tasks, thus further improving flexibility and scalability.

Another interesting aspect of this design philosophy is that much of the video, audio, and metadata signal routing takes place within the confines of the rack of FPGA circuit boards, through high-speed backplanes. Not only does this keep latency low, but it also has the positive side effect of significantly reducing cabling.

Although cabling forms the core of any infrastructure, it has two undesirable attributes: it’s heavy, and it’s susceptible to damage. Weight is particularly important for OB trucks, and anything we can do to reduce it is a major bonus. Even where fiber is used to distribute IP, there are clear advantages to keeping the amount of fiber to a minimum; the less we have, the less there is to go wrong.

Keeping the signal processing within the close proximity of a rack helps maintain resilience and, equally importantly, low and predictable latency. It also helps reduce the number of inputs and outputs on the central routing matrix, further keeping weight and power consumption down.

Multiple racks with dual power supplies and diverse power routings deliver high resilience, especially when combined with a software configuration and management system that can assign the FPGA functionality on-the-fly, providing outstanding flexibility, resilience, and scalability.

Conclusion

Broadcasters looking to upgrade, improve, or expand their facilities are currently presented with some very difficult decisions. In part, this is due to the influence of IP and cloud computing. However, much of the functionality broadcasters currently need is difficult, and sometimes impossible, to implement in IP and cloud, and this is a natural consequence of the state of IP and cloud development at this moment in time.

The good news is that the new breed of assignable FPGA arrays not only makes the delivery of flexible and scalable functionality a reality, giving an outstanding compromise between hardware and software, but also abstracts the SDI, AES, and IP transport streams away from the user operation, allowing broadcasters to mix and match technologies with ease and build the most flexible, resilient, and scalable broadcast infrastructure possible.
