Moving video around a broadcast or media facility is a fundamental requirement of the operation. When media is transported, the signal is open to induced errors, typically through noise, distortion and interference. The time frame of an error may vary from a few picoseconds to many seconds, and it may be predictable in its frequency or completely random.
SDI assumes the underlying network is error free: it detects corruption through CRC computations but does not correct it. In SDI there is no provision for resending faulty packets or frames of video; the system is completely synchronous and relies on the bit clock at the receiver being phase and frequency locked to the clock in the sender. The major advantage of SDI is that the delay of a synchronous SDI system is deterministic and limited to the propagation delay of the underlying medium, whether coax or fibre.
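The distinction between detecting an error and correcting one can be seen in a minimal Python sketch. The generic CRC-32 from `zlib` stands in here for SDI's own per-line CRC (the real check is defined by the SDI standards); the principle is the same: a mismatch reveals the corruption, but gives the receiver no way to repair it or request a resend.

```python
# Illustration only: zlib's CRC-32 stands in for SDI's per-line CRC.
import zlib

payload = bytes(range(64))           # stand-in for one line of active video
crc_sent = zlib.crc32(payload)       # checksum computed at the sender

corrupted = bytearray(payload)
corrupted[10] ^= 0x01                # single-bit error induced in transit

crc_received = zlib.crc32(bytes(corrupted))
print(crc_received != crc_sent)      # True: the error is detected...
# ...but the CRC carries no information about how to repair the payload,
# and a synchronous SDI link has no mechanism to ask for a resend.
```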
Error-free transport streams are often a reality within the confines of a well-designed studio, outside broadcast vehicle or post production facility. However, we cannot always assume or afford this luxury, especially when the media leaves the broadcast facility. Asynchronous IP systems provide a degree of freedom over synchronous SDI designs, as protocols from specifications such as SMPTE2022 can be put in place to mitigate short-term network errors.
UHD and 4K bring their own challenges as bit rates multiply by a factor of eight: horizontal resolution doubles, vertical resolution doubles, and frame rates double as interlaced formats give way to progressive. Multiple coax cables are generally used to distribute the signal. A 2160p120/119.88 10bit 4:2:2 UHD/4K signal has a bit rate of approximately 24Gbit/s and will require dual-link 12G-SDI or a 40GbE Ethernet link.
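The ~24Gbit/s figure can be sanity-checked with back-of-envelope arithmetic. The 4400 × 2250 transport raster below (total samples per line and total lines per frame, including blanking) is the nominal UHD raster, assumed here for illustration:

```python
# Back-of-envelope bit rate for a 2160p120 10-bit 4:2:2 signal, using an
# assumed full transport raster of 4400 x 2250 (active video + blanking).
samples_per_line = 4400      # total luma samples incl. horizontal blanking
lines_per_frame = 2250       # total lines incl. vertical blanking
bits_per_sample = 10
frames_per_second = 120

# 4:2:2 carries one luma sample plus one alternating chroma sample per
# pixel position, i.e. 2 x 10 bits.
bits_per_frame = samples_per_line * lines_per_frame * bits_per_sample * 2
gbps = bits_per_frame * frames_per_second / 1e9
print(f"{gbps:.1f} Gbit/s")  # ~23.8 Gbit/s, matching the ~24Gbit/s figure
```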
Professional IP-TV networks tend to be based on UDP/IP protocols; that is, there is no packet resend and instead they use a "fire and forget" policy. SMPTE2022-5 has provision for forward error correction (FEC) to rectify some short-term packet loss caused by network problems. FEC uses a system of matrices, referred to as 1D and 2D, to create checksum packets used by downstream equipment to detect and fix some data corruption. Use of 1D and 2D matrices comes at a cost of increased latency and bandwidth, as extra FEC packets have to be calculated, inserted into the network stream, and decoded at the receiving end.
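The core mechanism is XOR parity: one FEC packet protects a row (1D) or a row and a column (2D) of media packets, and can rebuild a single lost packet in that group. A minimal sketch of the 1D case, with dummy packets standing in for RTP media:

```python
# Sketch of 1D row-parity FEC in the style used by SMPTE 2022: one XOR
# parity packet per row of media packets recovers a single loss per row.
def xor_parity(packets):
    """XOR all packets together to form a parity (FEC) packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

row = [bytes([n] * 8) for n in (1, 2, 3, 4)]   # four dummy media packets
fec = xor_parity(row)                          # extra packet on the wire

# Packet index 2 is lost in the network; XOR of the three survivors with
# the parity packet reconstructs it exactly.
recovered = xor_parity([row[0], row[1], row[3], fec])
print(recovered == row[2])   # True
```

The extra parity packets are the bandwidth cost mentioned above, and the receiver must buffer a whole matrix before it can repair anything, which is where the added latency comes from.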
Uncompressed UHD/4K 60p requires a data rate of almost 12Gbps. The even more demanding UHD/4K 120p needs networks capable of moving data at almost 24Gbps.
Within SMPTE2022-5 there is provision to switch the FEC off, which removes the added latency but also removes the error correction entirely. If FEC is switched off, we have to be sure the underlying network is robust. In a studio environment this is quite reasonable, assuming the network has been properly designed for broadcast use.
SMPTE has further developed 2022 with SMPTE2022-7, which effectively provides dual redundancy between two streams. The video and audio packets are sent simultaneously over two diverse network connections. Working on the assumption that the two networks pass through different routers and take differing paths, so that any lost packets are unique to each stream, the receiving equipment compares the two and replaces any lost packets with their copies from the other stream, providing a single error-free output.
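The receiver-side merge can be sketched in a few lines of Python. Real SMPTE2022-7 equipment works on RTP sequence numbers with alignment buffers; the dictionaries of sequence number to payload below are a simplified stand-in:

```python
# Sketch of SMPTE2022-7 style dual-stream protection: two copies of the
# same stream arrive over diverse paths, each possibly missing different
# packets; the receiver merges them by sequence number.
def merge_streams(path_a, path_b):
    """Merge two {seq: payload} dicts. A packet survives if it arrived
    on at least one path; None marks a packet lost on both."""
    seqs = sorted(set(path_a) | set(path_b))
    return [path_a.get(s, path_b.get(s)) for s in range(seqs[0], seqs[-1] + 1)]

# Path A drops packet 2, path B drops packet 4 -- losses are uncorrelated,
# which is exactly the assumption of diverse routing.
path_a = {1: b"P1", 3: b"P3", 4: b"P4"}
path_b = {1: b"P1", 2: b"P2", 3: b"P3"}
print(merge_streams(path_a, path_b))  # [b'P1', b'P2', b'P3', b'P4']
```

If the same packet is lost on both paths the merge still fails, which is why path diversity (different routers, different ducts) matters as much as the protocol itself.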
However, this assumes an HD infrastructure, as uncompressed HD services can be sent over a single 10Gbit Ethernet link. When we move up to UHD the story is different, as we now require bandwidths of 24Gbps and more. One strategy to overcome this is to divide the UHD service into four HD sub-images by performing a division similar to a quad split on the original UHD signal. In the hybrid solution these four signals are mapped to four 3G-SDI cables using SMPTE425-3/5, and each 3G-SDI signal is in turn mapped to a 10Gbps Ethernet link using SMPTE2022-5/6/7. To provide SMPTE2022-7 redundancy, the number of 3G-SDI signals is doubled in either the SDI or Ethernet domain, providing resilience for each of the sub-images of the original UHD/4K service but doubling its bandwidth requirements. The system architects have to decide where to draw the line between signal resilience and service integrity: the more resilient the network, the more reliable the sound and vision.
One solution to moving UHD/4K over today's infrastructures is therefore this quad split. A downside is that it requires four times the number of cables and production switcher inputs, greatly increasing a facility's complexity.
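The quadrant-style division described above can be sketched with a NumPy array standing in for one video frame (luma only, for brevity). Note that the SMPTE425 multi-link mappings also define a two-sample-interleave division; the simple quadrant split shown here is an illustrative assumption:

```python
# Sketch of a quadrant-style "quad split": a UHD frame is carved into
# four 1080x1920 HD sub-images, each small enough for a 3G-SDI link.
import numpy as np

uhd = np.arange(2160 * 3840).reshape(2160, 3840)   # dummy UHD frame

# Top-left, top-right, bottom-left, bottom-right quadrants
quads = [uhd[y:y + 1080, x:x + 1920]
         for y in (0, 1080) for x in (0, 1920)]

# Reassembly at the far end restores the original frame exactly.
top = np.hstack(quads[:2])
bottom = np.hstack(quads[2:])
print(np.array_equal(np.vstack([top, bottom]), uhd))   # True
```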
Another major advantage of IP networks is that they are bi-directional, unlike SDI, which is unidirectional. Using the camera as an example, IP networks allow for reverse vision feeds, automatic discovery and registration of the camera, remote control of iris, zoom and focus, and, with future development, will even supply power. In a traditional broadcast studio facility we would need to provide extra coax for reverse vision and data cabling for the camera controls, further increasing complexity as specialist data patch bays and routing are required.
UHD/IP infrastructures become much simpler if we use light compression. Visually lossless results can be achieved using the TICO algorithms at 4:1 compression: a 60Hz frame rate UHD service can be mapped into a single 3G-SDI signal, and three simultaneous streams can be mapped to a single 10Gbps Ethernet link. In a recent technology demonstration Sony showed two OB units, one using traditional SDI and the second using IP, and was able to reduce the cable weight from 278kg to 40kg using standard IT copper and fibre LAN cables.
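The arithmetic behind those mappings is straightforward; the 12Gbps figure for a UHD 60p service is nominal, taken from the rates discussed earlier:

```python
# Why 4:1 light compression makes the link budgets work (nominal figures).
uhd_60p_gbps = 12.0                # ~12 Gbit/s uncompressed UHD 60p
compressed = uhd_60p_gbps / 4      # TICO-style 4:1 compression
print(compressed)                  # 3.0 -> fits a single 3G-SDI link
print(3 * compressed <= 10)        # True: three streams per 10GbE link
```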
IP and Ethernet networks are used in virtually every business throughout the world. This has resulted in IP routers and Ethernet switches becoming commodity items, with vendors constantly competing to provide technological advances that improve data speeds and throughput; Ethernet just keeps getting faster. Commercial off-the-shelf (COTS) products, like routers and switches, are more cost effective than traditional video and audio SDI/AES/analogue switchers, and a larger pool of network engineers is available compared with specialist broadcast engineers. However, caution should be exercised if we assume IT network engineers can easily replace broadcast engineers. The two disciplines work at opposite ends of the timing spectrum, with network engineers thinking of response times to clicks on a web page, and broadcast engineers thinking about syncing 3GHz clock signals to avoid frame rolls and picture shifts.