The best preventative maintenance tool is detailed 24/7 monitoring.
The few remaining moving parts in television stations, other than automated studio cameras, are fans, pumps and disk drives. Electro-mechanics has been replaced by chips and buffers with settings. Maintenance engineers need a new toolbox.
One of the myriad benefits of technological progress is that properly spike-protected hardware has become amazingly reliable. Higher-current power supplies, power amplifiers and modules still fail occasionally, but less frequently. Low-current digital electronics are particularly vulnerable to heat and to spikes from power lines or lightning. Other than that, most problems occur one bit at a time.
Troubleshooting engineers know that intermittent problems are among the most difficult to locate and fix. In an all-digital TV station or facility, old-school test gear isn’t much help. No operator or engineer can be expected to note every glitch on every channel, particularly in MVPD facilities. Typically, MVPD refers to cable or satellite multichannel video programming distributors, although many broadcasters have become multichannel video programming distributors too, without the subscription fees.
DTV, Gen 2
The last dozen years of terrestrial TV broadcasting have been filled with changes. Analog TV is gone. SD changed to HD. The Gen 1 technology of ATSC 1.0 and DVB-T, with HD and MPEG-2 or H.264 encoding, is about to give way to the Gen 2 technology of ATSC 3.0 or DVB-T2, with 4K (or 8K) and H.265 HEVC encoding. Spectrum repack is in various stages of progress in many countries across the globe.
There is much to praise about digital video. It’s reliable and low on artifacts. But it’s not exactly infallible and digital video infrastructures can be very complicated to troubleshoot. Obviously, a single DTV channel with no sub-channels or streaming video is the easiest to fix. The problem is either in Master Control, a few local digital devices, the LAN, STL, transmitter or the antenna.
As in the days of baseband video before cable, and even now, when the broadcaster is in complete control of every device in the broadcast delivery chain, all problems are in-house and theoretically preventable. Preventative maintenance aims to ensure issues or failures don’t occur during a world championship sports broadcast, or any other time. When hardware fails, the results can be catastrophic. In broadcasting, failure is also most likely to occur at the worst possible time.
Not all failures are catastrophic. Many problems are “soft” problems, typically glitches caused by software or firmware. In a digital plant, many small soft problems can add up to cause a more obvious problem. The key to finding soft problems is detailed monitoring for errors. Preventative maintenance isn’t a guarantee, but it is some of the best station-to-viewer brand insurance a broadcaster can invest in.
The Qligent Channel View screen automatically provides a host of valuable data.
Strangers in control
Some MVPDs still use antennas to pick up local signals. More often, a station’s master control output is connected to fibers that feed cable head ends and satellite uplinks. Off-air and fiber signals are converted, processed and passed through all kinds of digital devices on the way to MVPD subscribers. If there’s a problem with the signal, viewers usually first call the station to complain. Telephone calls that begin with “What’s the matter with you people?” instead of “Hello” are calls for action.
If the problem is not at the station, complaining to the MVPD is about the only action broadcasters can take without a lawyer. Many stations have learned the best way to persuade an MVPD to fix a problem is to do the work. Study the problem. Pinpoint what happens and when. When you call to complain, clearly describe the exact problem and when it occurs. Identify where you think the cause might be found in the MVPD’s system. The easier you make it for the MVPD to understand and fix the problem, the better.
Not all digital video devices work in perfect harmony with all other digital video devices. They may seem to, but sometimes difficult-to-identify random problems can build up over time. Built-in diagnostics in most digital gear don’t include testing interaction with sources or destinations. In some cases two-way communication happens or needs to happen, which is part of what gives digital video and DTV troubleshooting its intrigue.
If a catastrophic failure occurs between Point A and Point B, it’s fairly easy to identify the culprit in a preliminary line-up. Is the power on? Are the lights green? Is everything plugged in? Suspects that aren’t functioning properly are either broken, off-line, or have fallen over the digital cliff and need a reboot.
Monitoring Multichannel QoE can be easy.
Common digital video problems such as blockiness, hiccups, missing frames and jitter can usually be traced to buffering and bandwidth. Blockiness is often a compression or bandwidth issue. When the internal chain consists of a LAN connecting Master Control, the STL and the transmitter, you control the bandwidth.
Digital video operations put most systems through a stress test. Many typical digital video issues suggest buffering problems. Buffering problems aren’t limited to the DTV broadcast chain. They can occur anywhere in a network and can be quite elusive. They could be the result of a combination of little things that don’t seem to matter that much individually. They can occur every so often, weekly, monthly or seemingly randomly, but when a buffer overflows and resets itself, video can hiccup, freeze or worse. It may sound simple to troubleshoot, but it’s not and it gets even more complicated after the content leaves the broadcaster’s property.
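The overflow-and-reset behavior described above can be sketched in a few lines. This is a purely illustrative simulation, not any real decoder's logic; the buffer capacity, drain rate and arrival pattern are all invented for the example.

```python
from collections import deque

# Hypothetical sketch of a fixed-size decode buffer: frames arrive in
# bursts of varying size while playout drains at a steady rate. The
# numbers are illustrative assumptions, not from any real device.

BUFFER_CAPACITY = 8  # frames the buffer can hold before it must reset

def play_out(arrivals_per_tick, drain_per_tick=1):
    """Return how many times the buffer overflowed and reset itself."""
    buffer = deque()
    resets = 0
    for arriving in arrivals_per_tick:
        for _ in range(arriving):
            buffer.append("frame")
        if len(buffer) > BUFFER_CAPACITY:
            buffer.clear()        # overflow: buffer resets, video hiccups
            resets += 1
            continue
        for _ in range(min(drain_per_tick, len(buffer))):
            buffer.popleft()      # steady playout drains one frame per tick
    return resets

# A mostly steady feed with one network burst: that single burst is
# enough to overflow the buffer and cause a visible glitch.
print(play_out([1, 1, 1, 12, 1, 1]))  # -> 1
```

The point of the sketch is that every individual tick can look healthy; only the occasional burst, which may arrive weekly, monthly or seemingly at random, trips the reset.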
Content distribution beyond terrestrial broadcasting is a diverse landscape. Programming and delivering a single live channel is only part of what TV broadcasters must do to survive in today’s digital ocean. Viewers want more TV content choices, anytime, anywhere, on screens of all sizes. Broadcasters must meet that demand with consistent reliability for everyone who clicks in. “Reliability” anchors a station’s brand.
To stay ahead and maintain reliability, myriad compression schemes, picture formats, wrappers and conversions and streams are served to a world of variable bandwidths, different operating systems and dissimilar video players. A streaming video system is judged by its quality and reliability (QoE and QoS), including its ability to seamlessly adjust streams to the unique digital properties of each viewer.
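Adjusting a stream to each viewer's bandwidth usually means picking a rung from a ladder of pre-encoded bitrates. The sketch below shows the basic selection logic; the ladder values and safety margin are hypothetical assumptions, not any particular player's algorithm.

```python
# Minimal sketch of adaptive-bitrate rendition selection. LADDER_KBPS
# and the 0.8 safety margin are invented for illustration.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]  # hypothetical ABR ladder

def pick_rendition(measured_kbps, margin=0.8):
    """Choose the highest bitrate that fits within a safety margin of
    the viewer's measured bandwidth; fall back to the lowest rung."""
    budget = measured_kbps * margin
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else LADDER_KBPS[0]

print(pick_rendition(6000))  # 6000 * 0.8 = 4800 kbps budget -> 2500
print(pick_rendition(300))   # below the whole ladder -> lowest rung, 400
```

The safety margin is the design choice that matters: selecting the rendition closest to the raw measurement invites exactly the buffer overflows and hiccups discussed earlier.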
Different monitoring systems and techniques operate in a variety of ways and at different depths. The evolution of monitoring methods in many facilities is as individual as call letters. Most began content monitoring by filling shelves with consecutive 8-hour T-180 VHS tape recordings, kept for weeks or maybe a few months.
The ability to record video on a computer freed shelf space, gave everyone instant access to air checks and opened the digital doors to all kinds of possibilities. Since then many monitoring solutions have been invented, used and replaced. Monitoring is evolving not only toward making all information available on a smart phone, but also towards automated troubleshooting.
Some facilities prefer to monitor systems and channels on-site. Some group owners want to track and monitor all their stations at corporate headquarters. Some station departments may want to review and verify content or ads. Engineering departments want to monitor today’s complicated technical compliance. Engineers would also like a system that logs and tracks technical problems, and runs action-based, real-time, cause analysis.
Most vendor equipment can’t tell you how it is interacting with other gear, and SNMP management won’t tell you either. For that, tracking trouble is the key. Several systems on the market track trouble in a variety of ways. One company included in research for this article is Qligent. Its products collect soft problems and help determine cause and effect by comparing source data with destination data.
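The source-versus-destination idea can be illustrated with a toy comparison of per-layer error counters sampled at two points in the chain. The layer names follow the article; the counter values, function name and threshold are made-up assumptions, not Qligent's actual method.

```python
# Hedged sketch of soft-failure localization: compare error counters
# taken at a source point (e.g. STL output) with the same counters at a
# destination (e.g. an off-air receiver). All values are invented.

def localize(source_counts, dest_counts, threshold=0):
    """Return layers whose error count grew between source and
    destination, pointing at where soft problems are introduced."""
    return {
        layer: dest_counts.get(layer, 0) - source_counts[layer]
        for layer in source_counts
        if dest_counts.get(layer, 0) - source_counts[layer] > threshold
    }

at_stl_output = {"transport": 0, "video": 1, "audio": 0}
at_receiver   = {"transport": 7, "video": 1, "audio": 0}

# Only the transport layer degraded after the STL, so suspicion falls
# on the RF path rather than on Master Control.
print(localize(at_stl_output, at_receiver))  # -> {'transport': 7}
```

Errors present at both points were introduced upstream of the first probe; errors that appear only at the destination were introduced between the two, which is what narrows the search.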
Engineer and Qligent COO Ted Korte explains how his system works. “We look at five layers: the physical layer (ASI, RF or IP), the transport layer, the video layer, the audio layer and the ancillary data layer. There is interaction between all these layers.” The system compares received data with something upstream, such as the transmitter or STL input or output, tracking soft failures to profile the network. Profiles can help tune a delivery system for highest efficiency and quality, such as finding the best power level vs. modulation error ratio (MER).
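MER itself is a straightforward ratio: the power of the ideal constellation symbols over the power of the error vectors between ideal and received symbols, expressed in dB. The sketch below applies that standard definition to a few invented QPSK-style symbol values.

```python
import math

# Illustrative MER calculation: MER(dB) = 10*log10(P_signal / P_error),
# where P_error is the power of the difference between received and
# ideal symbols. The symbol values below are invented for the example.

def mer_db(ideal, received):
    signal_power = sum(abs(s) ** 2 for s in ideal)
    error_power = sum(abs(r - s) ** 2 for s, r in zip(ideal, received))
    return 10 * math.log10(signal_power / error_power)

ideal    = [1+1j, -1+1j, -1-1j, 1-1j]  # perfect constellation points
received = [1.05+0.98j, -0.97+1.02j, -1.01-1.0j, 1.0-1.04j]

print(round(mer_db(ideal, received), 1))  # -> 31.3
```

Raising transmitter power does not help if the added power also adds distortion, which is why profiling power level against measured MER, as described above, finds the sweet spot.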
Some monitoring providers offer their systems as cloud-based software as a service, with 24/7 monitoring included. The value is that stations, groups and networks can monitor on-air operations from anywhere and compare file-based data at the point of origination with the data as transmitted, or as it arrives at other destinations.
Nearly all monitoring systems continuously record content. Some can log and verify virtually every technical parameter of a feed. The data such systems provide not only helps engineers find and fix problems; it can also resolve sales, billing, programming, advertiser and viewer issues with playback and full documentation.