Broadcast For IT - Part 10 - SDI

In this series of articles, we will explain broadcasting for IT engineers. Television is an illusion: there are no moving pictures, and today's broadcast formats are heavily dependent on decisions engineers made in the 1930's and 1940's. In this article, we look at SDI, its development, and its applications in broadcasting.

During the 1980's, video production was becoming more complex due to the advent of computer video processing. Hardware-centric video generators such as the Quantel Paintbox, costing many tens of thousands of dollars, were the forerunners of today's Adobe Photoshop.

As well as generating a lot of heat, they quickly demonstrated the limitations of using NTSC and PAL signal distribution in a post-production facility. Luminance decoded as chrominance caused color patterning, and chroma-luma crosstalk manifested itself as fine dots at the edges of color transitions.

Multiple Cables

An early solution was to distribute RGB signals, with each color requiring its own coaxial cable. Video sync pulses were encoded onto the green signal to help keep the number of cables required low. But any difference in signal propagation, caused by mismatched cable lengths or hardware processing delays, resulted in RGB color shift.

ITU Rec. 601, issued in 1982, was the first standard to represent NTSC and PAL video as digital signals. Using YCbCr color subsampling, analog signals could be represented digitally without the problems of PAL and NTSC to YUV decoding, reducing many of the artifacts.
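
For IT engineers who prefer code to prose, here is a minimal Python sketch of the Rec. 601 component conversion. It uses the well-known luma coefficients on normalized full-range values and omits the 8- or 10-bit studio-swing quantization used in real equipment, so treat it as an illustration only.

# Rec. 601 luma coefficients (illustrative, normalized full-range values).
KR, KG, KB = 0.299, 0.587, 0.114

def rgb_to_ycbcr(r, g, b):
    """Convert normalized R'G'B' (0.0-1.0) into Y'CbCr components.

    Y' carries the luminance detail the eye is most sensitive to;
    Cb and Cr carry the color-difference signals that are later subsampled.
    """
    y = KR * r + KG * g + KB * b        # luma
    cb = 0.5 * (b - y) / (1.0 - KB)     # scaled blue-difference
    cr = 0.5 * (r - y) / (1.0 - KR)     # scaled red-difference
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.0, 0.0))      # pure red: low luma, strong positive Cr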

Color Information Is Halved

Color subsampling takes advantage of the fact that the Human Visual System (HVS) only needs half as much color information as luminance, effectively providing video compression. A signal referred to as 4:4:4 has no subsampling, and the color is represented at full bandwidth. A signal referred to as 4:2:2 carries the color at half the horizontal resolution and full vertical resolution.
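
To make 4:2:2 concrete, the short sketch below halves the chroma of one video line by averaging each pair of Cb and Cr samples while keeping every luma sample. Real equipment uses longer anti-alias filters, so this is only an illustration of the principle.

def subsample_422(y_line, cb_line, cr_line):
    """Reduce one line of 4:4:4 video to 4:2:2.

    All luma samples are kept; chroma is halved horizontally by
    averaging sample pairs (a crude filter compared with broadcast gear).
    """
    cb_422 = [(cb_line[i] + cb_line[i + 1]) / 2 for i in range(0, len(cb_line) - 1, 2)]
    cr_422 = [(cr_line[i] + cr_line[i + 1]) / 2 for i in range(0, len(cr_line) - 1, 2)]
    return y_line, cb_422, cr_422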

Digital graphics creation equipment works best with full-bandwidth luminance and chrominance, that is, 4:4:4. Prior to the 1990's, the extra processing required for 4:4:4 usually demanded a hardware solution, resulting in extremely expensive installations. And to do these systems justice, digital signal distribution was developed.

Diagram 1 – Table showing horizontal and vertical bandwidths of different color subsampling.

ITU BT.656 provided parallel digital video distribution using 25-pin D-sub connectors for the physical interface, with balanced emitter-coupled logic (ECL) providing the signal. To reduce the risk of signal skew across the 25-way ribbon, cable runs were kept very short, usually restricted to inter-rack connections.

Also, Rec. 601 used 4:2:2 color subsampling to restrict the color bandwidth, as 4:4:4 made too much of a demand on the technology of the time. The resulting signal had slightly compressed color but gave much better performance for the overall video.

Serial Digital Interface

To extend distribution lengths beyond a few meters, the Serial Digital Interface (SDI) was introduced in 1989 by the Society of Motion Picture and Television Engineers (SMPTE). The standard was called SMPTE 259M and described the serialization of the parallel BT.656 system. SDI provides the same video bandwidth and chroma subsampling as BT.656 but doesn't suffer from signal skew.

SMPTE 259M-A was provided for NTSC and 259M-B for PAL systems, albeit with slightly different bit rates, making compatibility between PAL and NTSC transport networks difficult. SMPTE 259M-C soon followed with a common data rate of 270 Mbit/s for both NTSC and PAL.
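
The 270 Mbit/s figure falls straight out of the Rec. 601 sampling structure: 13.5 MHz luma sampling, two chroma channels at half that rate, and 10-bit words. The short Python sketch below shows the arithmetic for both the 525- and 625-line rasters (the function name is ours; the sample and line counts are the standard totals including blanking).

def sdi_bit_rate(samples_per_line, lines_per_frame, frame_rate):
    """Work out the serial bit rate of a Rec. 601 4:2:2, 10-bit signal."""
    luma_rate = samples_per_line * lines_per_frame * frame_rate  # 13.5 MHz in both systems
    chroma_rate = luma_rate                                      # Cb + Cr, each at half the luma rate
    words_per_second = luma_rate + chroma_rate                   # 27 million words per second
    return words_per_second * 10                                 # 10 bits per word

print(round(sdi_bit_rate(858, 525, 30000 / 1001)))  # NTSC-derived raster -> 270000000
print(round(sdi_bit_rate(864, 625, 25)))            # PAL-derived raster  -> 270000000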

SDI replaces the analog line and frame syncs with digital codes. Temporally, the syncs occur in the same place as their analog counterparts, but 259M-C allows the nominally redundant area of the line and field syncs to be used for other services.

Audio Is Encoded Into SDI

Ancillary data describes user data carried in the field and line blanking intervals and is used to embed audio as well as to provide metadata and frame-synchronous information.

Timing is the most important aspect of video systems. Without it, the video output of a production switcher will roll, flash, or jump, resulting in an unacceptable viewer experience.

Audio is usually embedded in the SDI ancillary data to reduce the number of audio cables required and to guarantee lip-sync between the video and audio. If distributed along different paths, video and audio can lose their timing relationship, resulting in the audio being slightly ahead of or behind the video. The viewer will hear the spoken word, but it won't correlate with the moving lips.
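
For readers who want to see what ancillary data looks like on the wire, here is a minimal Python sketch that picks out the header of an ST 291-style ancillary data packet from a stream of 10-bit words. It is a simplified illustration: parity and checksum handling are skipped, and real embedded-audio packets (ST 272 for SD, ST 299 for HD) add their own structure inside the user data words.

ADF = (0x000, 0x3FF, 0x3FF)  # Ancillary Data Flag that opens each ANC packet on component interfaces

def parse_anc_packet(words, start):
    """Parse one ancillary data packet header from a list of 10-bit words (sketch only)."""
    if tuple(words[start:start + 3]) != ADF:
        return None
    did = words[start + 3] & 0xFF    # Data ID: identifies the service, e.g. embedded audio
    sdid = words[start + 4] & 0xFF   # Secondary Data ID or data block number, depending on packet type
    dc = words[start + 5] & 0xFF     # number of user data words that follow
    udw = words[start + 6:start + 6 + dc]
    checksum = words[start + 6 + dc]
    return {"did": did, "sdid": sdid, "payload": udw, "checksum": checksum}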

Diagram 2 – Block diagram of an SDI encoder.

Frame-accurate metadata is often required to make sure external events occur at a preset time relative to a video frame. Subtitles must appear while the actors are speaking for the program to make any sense for the hard of hearing. And remote affiliates use a system of regional opt-outs during ad breaks, allowing the local broadcaster to insert their own adverts.

SDI uses unique code sequences, referred to as the TRS (timing reference signal), to define where the horizontal and vertical active picture begins and ends in the continuous stream of data words sent from a camera, graphics generator, or production switcher.
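
As a rough illustration of how equipment locates the TRS, the Python sketch below scans a list of 10-bit words for the 0x3FF, 0x000, 0x000 preamble and decodes the flag bits of the 'XYZ' word that follows. The protection bits of the XYZ word are ignored here, so this is a teaching sketch rather than a robust decoder.

def find_trs(words):
    """Yield the position and meaning of each Timing Reference Signal in a word stream."""
    for i in range(len(words) - 3):
        if words[i] == 0x3FF and words[i + 1] == 0x000 and words[i + 2] == 0x000:
            xyz = words[i + 3]
            f = (xyz >> 8) & 1  # field flag (interlaced systems)
            v = (xyz >> 7) & 1  # 1 = within vertical blanking
            h = (xyz >> 6) & 1  # 1 = EAV (end of active video), 0 = SAV (start of active video)
            yield i, "EAV" if h else "SAV", f, v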

Beware Of BNC Connectors

Coaxial cable is terminated with 75 ohm BNC (Bayonet Neill-Concelman) connectors. They differ from the BNC connectors found in IT networks, as coaxial Ethernet used 50 ohm termination. As well as there being an impedance mismatch, the center pin of the 50 ohm connector is larger than that of the 75 ohm version and will damage the socket if a 50 ohm connector is plugged into a 75 ohm socket.

As HD emerged, SMPTE 292M was released with bit rates of 1.485/1.001 Gbps and 1.485 Gbps. HD-SDI used the same BNC connectors but higher-performance coaxial cable to provide the significantly higher bandwidths.

SMPTE 424M provided 2.970/1.001 Gbps and 2.970 Gbps for video working in progressive mode instead of interlaced, such as 1080p60. This later became known as 3G-SDI. As video rates increase, new SDI formats have become available: 6G-SDI (ST 2081) for 6 Gbps and, in 2015, 12G-SDI (ST 2082) for 12 Gbps. ST 2083 is currently under development for 24 Gbps, but it may not be widely used due to the rapid adoption of IP in broadcasting.

Migrating To IP

As the frequency and bandwidth requirements increase, the distance over which a signal can be reliably distributed decreases. This can be less than 100m for 12G-SDI.

SDI has served the broadcast industry well for many years, and there will always be a requirement for it, especially in stations that do not need flexibility or economies of scale. However, many vendors are now looking more closely at IP, especially as Ethernet switch speeds reach 100 Gbps, with 400 Gbps around the corner.
