Using Configurable FPGAs For Integrated Flexibility, Scalability, And Resilience - Part 1
Although IP and cloud computing may be the new buzzwords of the industry, hardware solutions are still very much in demand, especially when software defined architectures are taken into consideration. In addition, a whole new range of FPGA based configurable systems is providing flexibility, resilience, and scalability.
Broadcast facilities have traditionally been designed using standalone devices that perform a particular task such as a standards converter, proc-amp, or production switcher. However, the new breed of software defined hardware systems allows generic hardware to be assigned to different tasks depending on the software loaded into it.
This leads to transport stream agnostic workflows that can work with SDI, AES, ST 2110, and many other protocols with ease.
This integration of transport stream agnostic workflows is empowering broadcasters to decide which transport technology they want to use, progress to, or stick with. For example, a broadcaster may want to keep SDI/AES but have the option of working with IP or Cloud services later.
With FPGA systems, broadcasters get the best of all worlds, that is, flexible and simplified workflows with very low latency. Although FPGAs may be programmed with software files, they are intrinsically hardware devices that work to very tight timing constraints and keep latency to sub-nanosecond levels across the whole processing fabric. In essence, they deliver the ultimate parallel processing solution.
Advancing Technology
Broadcast engineers and technologists have had a lot to deal with over the past twenty-plus years. Many have had to transition from analog NTSC and PAL to SDI, widescreen, HD, then HDR, and even UHD and 8K. And that’s before we start talking about compression and IP. Transitioning directly to IP may well be the end game in terms of the technological evolution of broadcasting, but for many this change must be managed carefully.
Television has traditionally been risk averse, and in part this is due to the eye-watering amounts of money involved in live sports productions and premium movie rights. But it’s also due to the ongoing technological development of video and audio processing. Broadcasting is like an innovation sponge: no matter how fast scientists and researchers deliver new high-speed circuits, broadcasting always demands more. It was not so long ago that HD was state of the art with its 1.5Gbits/s circuits; now we’re talking in terms of 12Gbits/s SDI and beyond.
IP delivers some outstanding benefits, but not all broadcasters need them. And even those that do may find that not every workflow lends itself well to a full IP migration. There are many processes that still benefit greatly from a hardware approach, especially when we start to consider latency. One of the challenges with IP is that software processing is inherently indeterminate in terms of latency, although this may or may not be an issue for the broadcaster. The good news is that hybrid hardware systems can deliver the best of both worlds.
Latency Challenge
Hardware has always been dominant in the broadcast industry as it was generally the only method we had of processing real-time video and audio with very low and predictable latency. But as computer processing, storage, and network distribution have all increased in speed and reliability, it is now possible to process video and audio in software in real time. The challenges are that software-based systems are complex and their latencies are indeterminate.
Looking at this, it would appear we must either opt for inflexible and static hardware systems that have determinate latency, or highly flexible and scalable software systems that suffer from highly variable latency. However, ongoing technology innovation is reducing the latency in software and improving the flexibility and scalability of hardware. A combination of the two could prove the perfect hybrid solution.
Another major trait of broadcast engineers and technologists is that they like to keep their options open and not back themselves into a technology corner. This is achievable with the new breed of FPGA hardware solutions. Not only do they provide all the benefits of hardware processing, but they also abstract away the transport stream so that broadcasters are not tied to IP, SDI, or AES. As their needs change and the technology develops, broadcasters can move more to IP or stay with SDI and AES. Fundamentally, they significantly increase their options.
FPGA Solutions
With modern FPGA approaches to system design, broadcasters now have the best of both worlds: the flexibility and scalability of software with the low and predictable latency of hardware.
FPGAs have advanced massively in recent years. Originally they were seen as a flexible method of reducing hardware real-estate on circuit boards, but as their features and capacity have increased, so has their usefulness.
If we look at a simple digital video filter, such as a Finite Impulse Response (FIR) filter, the main components are delay lines, adders, and multipliers. Using discrete components would take an enormous amount of real-estate on a circuit board, even using surface mount components, and impedance matching the tracks between the components would be a real challenge at HD and UHD SDI speeds. Maintaining high manufacturing yield rates at these frequencies is heavily impacted by the number of components needed on the circuit board. However, high-capacity, high-speed FPGAs can implement FIRs relatively easily, even at 4K and UHD video rates.
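To make that structure concrete, the minimal sketch below models an FIR filter in software (Python), with illustrative placeholder coefficients. In an FPGA, each tap would map to a register in the delay line, a multiplier, and part of an adder tree, all running in parallel on every clock cycle.

```python
# Minimal software model of an FIR filter: a delay line of previous samples,
# one multiplier per tap, and an adder summing the products.
# The coefficients are illustrative placeholders, not a broadcast-grade design.

def fir_filter(samples, coefficients):
    """Compute y[n] = sum(h[k] * x[n-k]) for each input sample."""
    delay_line = [0] * len(coefficients)   # models the chain of registers
    output = []
    for sample in samples:
        # Shift the new sample into the delay line (one register per tap)
        delay_line = [sample] + delay_line[:-1]
        # One multiply per tap, then sum the products (the adder tree)
        output.append(sum(h * x for h, x in zip(coefficients, delay_line)))
    return output

if __name__ == "__main__":
    taps = [0.2, 0.2, 0.2, 0.2, 0.2]   # simple 5-tap moving average
    print(fir_filter([0, 0, 10, 10, 10, 10, 0, 0], taps))
```

The software loop evaluates one tap at a time; the point of the FPGA implementation is that every tap is evaluated simultaneously in hardware, which is what keeps the latency low and deterministic.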
We could use DSPs instead of FPGAs as they have the flexibility of software, but they suffer from limited resources, especially when we consider video rates. Furthermore, many more devices, each with external memory, would be needed to replicate the capability of an equivalent high-end FPGA.
Flexible Hardware Resource
FPGAs are programmable devices that use binary files to interconnect resources within the device. Fundamentally, these resources consist of gates such as AND and XOR, and from these two gates any Boolean function can be replicated, which forms the basis of any digital design.
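As a simple illustration of that claim, the sketch below (Python, and assuming a constant logic 1 is also available, as it is in FPGA fabric) shows how NOT, NAND, and OR can all be composed from just AND and XOR.

```python
# Building blocks available in the fabric: AND, XOR, and a constant logic 1.
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

# Composite functions built only from AND, XOR, and the constant 1.
def NOT(a):     return XOR(a, 1)                    # XOR with 1 inverts
def NAND(a, b): return NOT(AND(a, b))               # one AND followed by one XOR
def OR(a, b):   return NOT(AND(NOT(a), NOT(b)))     # De Morgan's law

# Truth-table check for all input combinations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NOT a:", NOT(a), "NAND:", NAND(a, b), "OR:", OR(a, b))
```

Because NAND alone is functionally complete, being able to build it from AND and XOR is enough to replicate any combinational logic.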
Another alternative for processing high-speed video is the ASIC (Application Specific Integrated Circuit). ASICs match the speeds of FPGAs (and are often faster) but are fixed in their functionality. That is, if an ASIC is designed to provide MPEG-2 compression then that’s all it can ever do. Forward-thinking ASIC designers will build registers into the silicon that can be updated from an external CPU so that some of the compression parameters can be changed, but other than this, the design is locked in during manufacture. The big advantage of ASICs is that long manufacturing runs soon reduce the unit cost, making consumer equipment affordable.
FPGAs can provide high-speed inputs and outputs for SDI, AES, IP Ethernet, and fiber, and their functionality can be programmed using software to deliver all the advantages of high-speed, low-latency hardware, along with the scalability and flexibility found in software.
FPGAs are significantly more expensive than ASICs or equivalent DSP solutions, but they are orders of magnitude more flexible than any other solution on the market. Not only are FPGAs programmable, but their configuration can also be changed on-the-fly. This allows an array of FPGAs on a circuit board, together with some external memory and interface ICs, to act as a massive programmable hardware resource.
Not only do FPGAs provide Boolean logic resources, they also have features such as block memory, DSPs, CPUs, and multipliers built into their fabric. In effect, an entire hardware ecosystem exists on a single chip. High-speed interfaces are also provided so that multiple devices can be interlinked to further increase capacity.
Although many features such as memory, DSPs, CPUs, and multipliers can all be replicated in Boolean XOR-AND logic, doing so is inefficient. For example, a simple D-type single-bit memory element can be built from five NAND gates, and each NAND gate in turn requires an AND gate followed by an XOR gate acting as an inverter. A simple 10-bit luminance single-clock delay line therefore already consumes around fifty AND gates and fifty XOR gates, and that is just one tap of a video FIR filter, which could consist of twenty or thirty taps. And this is before we start looking into the complexity of adder circuits, multiplexers, and multipliers.
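As a rough illustration of how quickly the gate count grows, the sketch below (Python, purely illustrative) models the classic gated D latch built from five NAND gates, with each NAND itself composed from one AND and one XOR acting as an inverter.

```python
# Gate-level model of a single-bit gated D latch built from five NAND gates,
# where each NAND is itself composed from one AND and one XOR (used as an inverter).

def AND(a, b):  return a & b
def XOR(a, b):  return a ^ b
def NAND(a, b): return XOR(AND(a, b), 1)

def d_latch(d, enable, q, q_bar):
    """One evaluation of the five-NAND gated D latch, including its feedback pair."""
    d_inv = NAND(d, d)            # NAND wired as an inverter
    s = NAND(d, enable)
    r = NAND(d_inv, enable)
    for _ in range(3):            # let the cross-coupled NAND pair settle
        q, q_bar = NAND(s, q_bar), NAND(r, q)
    return q, q_bar

q, q_bar = 0, 1
for d in (1, 0, 1, 1, 0):
    q, q_bar = d_latch(d, enable=1, q=q, q_bar=q_bar)
    print("D =", d, "-> Q =", q)   # Q follows D while enable is high
```

Ten of these latches are needed just to delay one 10-bit luminance sample by a single clock, and that is only one tap of the FIR filter described above, which is exactly why the dedicated registers, block memory, and DSP blocks in the FPGA fabric matter so much.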