Advances In 12G-SDI: Part 2 - 12G-SDI Technology

In the last article in this series we looked at how SDI has developed over the years to reach an incredible 47.52Gbits/sec for quad-link 12G. In this article, we dig deeper and uncover the technology enabling SDI and its advantages.

Broadcast video and audio, distributed within a television station, must be low latency and very stable. To keep latency low there can be no retransmission of lost video or audio, and therefore we cannot afford corruption within the transport stream. Video and audio must be delivered reliably with virtually no data loss.

Well-defined, stable specifications are enormously important for the broadcast industry, especially as we move to the data rates ST-2082 delivers. This is mainly because a distribution system such as SDI must be hardware based. SDI is synchronous and embeds the bit clock into the transport stream, providing a highly synchronized system that is stable and delivers very low latency.
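This self-clocking behavior comes from SDI's scrambled NRZI line coding: the data is scrambled so that long runs without transitions become unlikely, and NRZI turns every '1' into a level change the receiver can lock onto. Below is a minimal Python sketch of the two generator polynomials SDI uses, G1(x) = x^9 + x^4 + 1 for the self-synchronizing scrambler and G2(x) = x + 1 for the NRZI stage; it is an illustrative bit-level model, not a production implementation.

```python
def sdi_scramble_nrzi(bits):
    """Illustrative model of SDI's scrambled NRZI line coding.

    G1(x) = x^9 + x^4 + 1  (self-synchronizing scrambler)
    G2(x) = x + 1          (NRZI encoding)
    Scrambling randomizes the data so long runs of identical bits
    become unlikely; NRZI then turns every '1' into a transition,
    which is what the receiver recovers the bit clock from.
    """
    delay = [0] * 9   # scrambler delay line (previous scrambled bits)
    level = 0         # current NRZI line level
    out = []
    for b in bits:
        s = b ^ delay[8] ^ delay[3]   # feedback taps at x^9 and x^4
        delay = [s] + delay[:8]       # shift the new scrambled bit in
        level ^= s                    # NRZI: toggle the line on a '1'
        out.append(level)
    return out

# A single '1' followed by a long run of zeros still produces
# plenty of transitions for the receiver to lock onto:
print(sdi_scramble_nrzi([1] + [0] * 24))
```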

Hardware SDI

Although much processing of video and audio occurs outside of the SDI domain, it is virtually impossible to build an SDI interface in software. The synchronous and continuous data streams do not lend themselves well to asynchronous event-driven software architectures and solutions must be found in hardware.

This leads to a further challenge: hardware takes a long time to develop, especially when dealing with the speeds 12G offers. To put these speeds into context, the radio frequency band for satellites stretches from approximately 3GHz to 30GHz, and the Ku-band occupies 12GHz to 18GHz. The fundamental frequency of a 12G-SDI signal's bit rate falls just short of the 12GHz Ku-band used for satellite communications. Working at these frequencies makes hardware design incredibly challenging, resulting in long development cycles to deliver reliable products.
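The numbers line up neatly: the 12G-SDI line rate is nominally 11.88Gbits/sec (divided by 1.001 for fractional frame rates), which is what places its spectrum just below the Ku-band, and four such links give the 47.52Gbits/sec quad-link figure quoted earlier. A quick sketch of the nominal rates:

```python
# Nominal SDI line rates in Gbit/s - each generation doubles the last
SDI_RATES = {"HD (1.5G)": 1.485, "3G": 2.970, "6G": 5.940, "12G": 11.880}

for name, rate in SDI_RATES.items():
    print(f"{name:>9}: {rate:6.3f} Gbit/s  (quad-link: {4 * rate:6.2f})")

# 12G quad-link: 4 x 11.88 = 47.52 Gbit/s, the figure from Part 1.
# Fractional (1000/1001) frame rates run at rate / 1.001.
```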

For a vendor, investing the time needed to design a product and then manufacture it at the high yield rates required is very risky. The smallest design anomaly on a critical part of the circuit board could render a prototype useless, requiring a costly redesign. Consequently, a manufacturer will not start a new hardware design until they are sure the specification is nailed down.

This is why SMPTE have been so successful. They spend a great deal of time designing specifications that they know are robust and reliable, as well as well-documented and readily available. We can be sure that if two ST-2082 compliant devices are connected together, they will readily exchange video and audio. This is even more important now that we have descriptive data embedded within the ancillary data streams, such as the VPID (Video Payload Identifier) specified in SMPTE ST-352.
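The VPID itself is a compact four-byte packet carried in the ancillary data. At a high level, byte 1 identifies the payload and interface standard, byte 2 carries picture rate and scan information, byte 3 the sampling structure, and byte 4 bit depth and channel assignment. The sketch below simply splits out those four fields; the detailed bit assignments and code values live in the ST-352 tables, and the sample bytes used here are placeholders rather than real codes.

```python
def split_vpid(vpid: bytes) -> dict:
    """Split a four-byte ST-352 Video Payload Identifier into its
    top-level fields. Decoding the actual code values requires the
    tables published in the standard."""
    if len(vpid) != 4:
        raise ValueError("a VPID is always exactly four bytes")
    return {
        "payload_standard":  vpid[0],  # byte 1: payload/interface standard
        "rate_and_scan":     vpid[1],  # byte 2: picture rate, scan structure
        "sampling":          vpid[2],  # byte 3: sampling structure, e.g. 4:2:2
        "depth_and_channel": vpid[3],  # byte 4: bit depth, channel assignment
    }

# Placeholder bytes only - real values come from the ST-352 tables:
print(split_vpid(bytes([0x00, 0x01, 0x02, 0x03])))
```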

Frame Accuracy

SDI allows us to make use of the ancillary data available in the stream to embed audio or send frame-accurate data. As we progress into areas such as HDR (High Dynamic Range), the need to provide frame-accurate metadata and control data is becoming apparent.

Key to the longevity of SDI is its inherent backwards compatibility. If you already have a 3G installation and then upgrade part of it to 12G, any 3G signals routed through the 12G infrastructure should work reliably. There is a caveat: the vendor supplying the kit needs to have provided multi-standard operation, but if they are using high-quality components then they have probably done this by default.

4K signals that may have been distributed as multi-link 3G or 6G can be more easily distributed over a single 12G link. For example, if a broadcast facility was using UHD 4:2:2 at 59.94fps over a 6G dual-link, then as the facility upgrades to 12G, only one cable will be needed, as the single 12G link replaces the 6G dual-link.

Figure 1 – To send a 4Kp signal over quad-link 3G-SDI, the image is divided into four HD (1920 x 1080) sub-images. Each sub-image requires its own 3G-SDI link, allowing the signal to be sent over four coaxial cables. The receiver circuit reconstructs the four images to form a single 4Kp stream.

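The square-division mapping in Figure 1 is straightforward to express in code. The sketch below uses NumPy to cut a 3840 x 2160 frame into the four HD quadrants, one per 3G-SDI link, and to reassemble them at the receiver. (Two-sample interleave is the other common multi-link mapping; only square division is shown here.)

```python
import numpy as np

def square_division_split(frame_4k):
    """Divide a 3840x2160 frame into four 1920x1080 sub-images,
    one per 3G-SDI link, as in square-division quad-link transport."""
    h, w = frame_4k.shape[:2]
    assert (h, w) == (2160, 3840), "expects a UHD frame"
    return [frame_4k[:1080, :1920],   # link 1: top-left
            frame_4k[:1080, 1920:],   # link 2: top-right
            frame_4k[1080:, :1920],   # link 3: bottom-left
            frame_4k[1080:, 1920:]]   # link 4: bottom-right

def square_division_join(links):
    """Receiver side: stitch the four HD quadrants back into one frame."""
    top = np.hstack([links[0], links[1]])
    bottom = np.hstack([links[2], links[3]])
    return np.vstack([top, bottom])

# Round-trip check with a random 10-bit frame:
frame = np.random.randint(0, 1024, (2160, 3840), dtype=np.uint16)
assert np.array_equal(square_division_join(square_division_split(frame)), frame)
```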

SDR and HDR Conversion

As we move to 4K while still embracing HD as the de facto mainstream format, new formats such as WCG (Wide Color Gamut) and HDR (High Dynamic Range) are starting to appear. ST-2082 connectivity, whether through coax or fiber LC, implies that we should be thinking more about backwards compatibility. Converting between HDR/WCG and SDR is not always as simple as it may first appear. The different approaches to HDR, such as PQ and HLG, add complexity, and the conversions require deep insight into the process.
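To see why, consider the PQ transfer function defined in SMPTE ST-2084. It is strongly non-linear, allocating most code values to the darker regions of the image, so a simple level shift cannot map it onto an SDR gamma curve. A minimal sketch of the PQ EOTF (normalized signal in, display light out):

```python
# SMPTE ST-2084 (PQ) EOTF constants
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_eotf(e: float) -> float:
    """Map a normalized PQ signal value (0..1) to display
    luminance in cd/m^2 (nits)."""
    ep = e ** (1 / M2)
    y = (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)
    return 10000.0 * y   # PQ is defined up to 10,000 nits

# Half signal is nowhere near half brightness:
for e in (0.25, 0.5, 0.75, 1.0):
    print(f"signal {e:.2f} -> {pq_eotf(e):8.1f} nits")
```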

Simply changing levels or using software to convert between color spaces is fraught with potential complications. Broadcast television is incredibly sensitive to gamut errors and even the smallest anomaly could have serious consequences for downstream compression systems.

Any HDR infrastructure must have an adequate number of HDR/WCG to SDR (and SDR to HDR/WCG) converters to truly integrate 4K into an existing broadcast facility. Even connecting to the outside world will require some sort of conversion. Changing between 4K and HD is a lot more involved than just scaling and filtering an image.

LUTs (Look Up Tables) help provide the correct color conversions, and standard reference models are available. However, these are complex to implement and require matrix programming and high-precision mathematical solutions to get the best possible conversion.
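As an example of the matrix arithmetic involved, the sketch below applies the linear-light BT.709 to BT.2020 primary conversion matrix published in ITU-R BT.2087 (coefficients rounded to four places). Note this is only the gamut re-mapping step: a full HDR/WCG to SDR conversion also has to handle the transfer curves and tone mapping around it.

```python
import numpy as np

# Linear-light RGB primary conversion, BT.709 -> BT.2020
# (coefficients from ITU-R BT.2087, rounded to four places)
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def rgb709_to_rgb2020(rgb_linear):
    """Re-map linear-light BT.709 RGB into BT.2020 primaries.
    The input must already be linearized (gamma removed)."""
    return M_709_TO_2020 @ np.asarray(rgb_linear, dtype=float)

# Pure BT.709 red sits well inside the wider BT.2020 gamut:
print(rgb709_to_rgb2020([1.0, 0.0, 0.0]))  # -> [0.6274 0.0691 0.0164]
```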

Figure 2 – A 3D-LUT can be visualized as a cube. There are potentially billions of possible points within the cube where the input pixel can be mapped to the output pixel. To optimize the number of calculations needed, the number of input and output points is reduced, and any input or output falling between these nodes is averaged from its nearest neighbors. This can be a source of video banding. Therefore, the more complex and dense the 3D-LUT, the better the resulting image.


Quality LUTs

A LUT is essentially a mathematical operation that takes the current RGB pixel value and applies a transform to provide a new value. These transforms are often non-linear and use resource-hungry processing. The 3D-LUT can be thought of as a cube where the original pixel value and the new transformed value both reside.

3D-LUTs can be massive and complex. For a 10-bit system, there are over a billion possible values each displayed pixel can be mapped to. To reduce the number of calculations, 3D-LUT cubes are condensed into fewer nodes so that not every pixel value is represented explicitly. Values falling between nodes are averaged from the adjacent nodes, which is why 3D-LUT files vary so much in size.

The 3D-LUT file size has a direct impact on the quality of the converted image. If it is too small, there are not enough nodes, and the software or hardware must make a lot of compromises, resulting in the potential for visible banding and other artefacts on the screen.
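A minimal sketch of how a node-based 3D-LUT is applied may help. The input pixel selects the surrounding eight nodes of the cube and the output is trilinearly interpolated between them; the coarser the node grid, the further apart those nodes sit and the more the interpolation smooths away detail, which is where the banding risk comes from. Node counts such as 17 or 33 per axis are common, with 65 giving a denser, higher-quality cube.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Apply a node-based 3D-LUT to one normalized RGB pixel (0..1).

    lut has shape (N, N, N, 3): N nodes per axis, each storing an
    output RGB triple. Inputs falling between nodes are trilinearly
    interpolated from the surrounding eight nodes - the averaging
    step that can cause banding when N is small.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (n - 1)
    lo = np.minimum(pos.astype(int), n - 2)   # lower node index per axis
    f = pos - lo                              # fractional position in the cell
    out = np.zeros(3)
    for dr in (0, 1):                         # blend the eight corner nodes
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[lo[0] + dr, lo[1] + dg, lo[2] + db]
    return out

# Identity LUT with 17 nodes per axis: the output equals the input
axis = np.linspace(0, 1, 17)
identity = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
print(apply_3d_lut([0.3, 0.6, 0.9], identity))   # ~[0.3 0.6 0.9]
```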

Pixel Neighborhoods

This is one reason great care must be taken when choosing any framestore or HDR/WCG to SDR converter. If the 3D-LUT tables are not accurate enough, the converted image will deteriorate quickly. Much greater processing power is needed to provide the conversion using high-quality 3D-LUTs containing many more nodes.

The quality of the conversion is not just about transforming static pixels, as they cannot be treated in isolation. Images containing specular highlights, for example, greatly affect the neighboring pixels, and so the image must be considered as a whole to obtain the highest possible quality of conversion.

High Frame Rates

Greater fluidity of motion delivered through higher frame rates further improves the viewer experience. Greater creative freedom is also offered to program makers, and slow-motion can be much more effective.

The high frame rates that HDR and 4K have to offer must also be addressed. Standard HD frame synchronizers cannot cope with 4K's high frame rates, and careful planning is needed to make sure any connectivity to existing HD systems is adequately thought through.

SDI is providing broadcasters with many opportunities. With a thirty-year heritage and ease of use, it has held its position as a strong contender for signal distribution in broadcast facilities. The introduction of 12G-SDI makes the format even more appealing, especially as it has been integrated into many interface and conversion products. 12G-SDI is empowering broadcasters to integrate 4K much more easily than ever imagined.
