Our third and final part of this series looks at Quality Control & Compliance, which bring efficiency, confidence, and legal conformance to daily TV station operations.
In Part 1 of this series, we explored the four missions of TV Master Control Rooms (MCRs). All four missions are equally important for successful Master Control operations.
The first mission is to continuously feed the TV transmitter with seamless commercial and program content from various sources, to a single or multiple channels, following the directions of each station’s daily program log.
Second is media management, which is knowing where media is stored, how it is organized, and how it is accessed and aired.
The third mission is switching between station sources to feed the transmitter system as called for on the log, and inserting EAS tests and alerts, along with other important graphics and information, when necessary.
The fourth mission is monitoring, logging and compliance of the broadcast signal, to assure it meets the standards defined by the FCC, SMPTE, ATSC and other industry standards organizations.
QoS, QoE and UX
Verifying content quality before it airs and monitoring the quality of content as it airs has always been a critical responsibility for MCR operators. Master Control is typically the last place in the station where a human verifies that the correct content is playing, and that it looks and sounds as good as possible as it is broadcast. Two modern forms of QC are Quality of Service (QoS) and Quality of Experience (QoE).
QoS was defined for telephony by the ITU in 1994, and it makes equal sense for evaluating DTV service today. QoS provides objective technical measurements of quality-affecting parameters such as bit rate, bit depth, bandwidth, packet loss, errors, latency, jitter, and availability. QoS testing usually also includes signal-to-noise ratios, audio and video frequency response, and loudness-level monitoring to assure compliance with national technical regulations.
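Two of the QoS parameters named above, packet loss and jitter, can be derived directly from a probe's packet records. The sketch below is illustrative only: the probe data is hypothetical, and the jitter calculation is a simplified stand-in for the RFC 3550-style estimator a real monitoring system would use.

```python
# Deriving packet loss and inter-arrival jitter from hypothetical
# (sequence_number, arrival_time) records captured by a stream probe.

def packet_loss_percent(sequence_numbers):
    """Percentage of packets missing from a monotonic sequence."""
    expected = sequence_numbers[-1] - sequence_numbers[0] + 1
    received = len(sequence_numbers)
    return 100.0 * (expected - received) / expected

def mean_jitter_ms(arrival_times_ms, send_interval_ms):
    """Mean absolute deviation of inter-arrival time from the nominal
    send interval -- a simplified stand-in for RFC 3550 jitter."""
    deviations = [
        abs((b - a) - send_interval_ms)
        for a, b in zip(arrival_times_ms, arrival_times_ms[1:])
    ]
    return sum(deviations) / len(deviations)

# Hypothetical probe data: packet 3 was lost, arrivals wobble slightly.
seqs = [1, 2, 4, 5, 6]
arrivals = [0.0, 20.1, 40.3, 59.8, 80.2]
loss = packet_loss_percent(seqs)                      # ~16.7% loss
jitter = mean_jitter_ms(arrivals, send_interval_ms=20.0)
```

A monitoring system would compute these continuously over a sliding window and raise an alarm when either exceeds a configured threshold.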
QoE was defined by ITU Recommendation ITU-T P.10 in 2016. QoE is a measurement of the degree of “delight or annoyance of the user” with a station, whether over-the-air or streaming over the internet. QoE can be roughly translated to “how does it look at home?” QoE is purely subjective, although it is often related to QoS factors. For example, remote TV receiver probes can report signal quality measured in the field back to an MCR or a maintenance bench to verify that each aspect of the signal from the transmitter and transmitting antenna is being received as it should be. QoE is similar to the more recent concept of User Experience (UX); however, QoE is designed for technology-centered telecommunications, while UX is human-centered and measures human-computer interaction.
Throughout the history of TV, viewers have complained about loud commercials. In response, the Commercial Advertisement Loudness Mitigation Act (CALM Act) became federal law in 2010. It directs the FCC to prescribe a regulation limiting the volume of TV ads consistent with ATSC Recommended Practice A/85. A/85, “Techniques for Establishing and Maintaining Audio Loudness for Digital Television,” was published in 2009.
Nearly all loudness measurements are anchored to dialog normalization, known as “dialnorm,” which is based on the average level of spoken dialog. Dialnorm is a metadata parameter in the Dolby AC-3 bitstream of ATSC DTV signals that controls playback gain by indicating the average spoken-dialog level of the encoded program. The metadata value range for dialnorm is 1 to 31. Higher dialnorm values provide more headroom for a wider dynamic range.
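The playback-gain rule behind dialnorm can be shown in a few lines. A minimal sketch, based on the AC-3 behavior defined in ATSC A/52: the decoder attenuates the program by (31 − dialnorm) dB so that average dialog lands at a consistent −31 dBFS no matter how the program was encoded.

```python
# Sketch of the AC-3 decoder gain rule implied by dialnorm:
# attenuation = 31 - dialnorm, so dialog always plays back near -31 dBFS.

def dialnorm_attenuation_db(dialnorm: int) -> int:
    """Playback attenuation in dB for a dialnorm metadata value (1-31)."""
    if not 1 <= dialnorm <= 31:
        raise ValueError("dialnorm must be in the range 1-31")
    return 31 - dialnorm

# A program with dialog averaging -24 dBFS carries dialnorm = 24,
# so the decoder turns it down by 7 dB; dialnorm = 31 passes unchanged.
print(dialnorm_attenuation_db(24))  # 7
print(dialnorm_attenuation_db(31))  # 0
```

This is why two programs encoded at very different levels can sound consistent at home, provided their dialnorm metadata is set correctly.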
There are several T&M solutions on the market that measure and monitor dialnorm and loudness. Dialnorm can be measured on live or archived content. Because it is based on metadata, it can’t be accurately measured with a typical VU meter.
While monitoring the quality of live content as it is broadcast is important, file-based, digital QC can be accomplished in the background on a server or servers. File-based QC lets stations and groups get ahead of the curve by verifying and correcting quality issues before content is aired. Several companies offer file-based solutions and SaaS solutions that perform loudness analysis and correction during transcoding, detect captioning, and notify operators when any media asset doesn’t meet station or group specifications. Such automated file-based solutions streamline the legal compliance process with web-based review, approval tools and automated proxy generation. Some solutions also provide multi-format media conversion.
Other automated file-based solutions were developed natively for the Cloud environment. Some can be pay-per-use and many support the latest technologies such as HDR, WCG and 4K UHD.
Automated file-based QC can provide analysis up to 6x faster than real-time analysis for HD content and near real-time for JPEG-2000 4K content. Checks often include color gamut, active video region, aspect ratio, cadence, flash frames, photosensitive epilepsy issues, language, loudness, audio peaks, audio clipping, EAS and audio drops. Specific solutions are also available for verification and correction of captioning and subtitles. Some also monitor for “mosquito tone” high frequency audio above 17kHz. The presence of mosquito tones can degrade the user experience for younger viewers who can hear such high frequencies.
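Two of the checks listed above can be sketched simply. The example below is illustrative, not any vendor's implementation: the clipping threshold, run length, and test signal are all assumptions, and the mosquito-tone check uses the Goertzel algorithm to measure energy at a single high frequency rather than a full spectral analysis.

```python
import math

# Illustrative file-based QC sketch: audio clipping detection (runs of
# consecutive full-scale samples) and "mosquito tone" detection via the
# Goertzel algorithm. Thresholds are assumptions for the example.

def detect_clipping(samples, threshold=0.999, run_length=3):
    """True if the signal holds at/above full scale for several samples in a row."""
    run = 0
    for s in samples:
        run = run + 1 if abs(s) >= threshold else 0
        if run >= run_length:
            return True
    return False

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power at target_hz, computed with the Goertzel algorithm."""
    k = round(len(samples) * target_hz / sample_rate)
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Hypothetical test signal: a quiet 17.5 kHz tone sampled at 48 kHz.
rate, tone_hz, n = 48000, 17500, 4800
mosquito = [0.05 * math.sin(2 * math.pi * tone_hz * t / rate) for t in range(n)]
flagged = goertzel_power(mosquito, rate, tone_hz) > goertzel_power(mosquito, rate, 1000)
clipped = detect_clipping([0.2, 1.0, 1.0, 1.0, 0.1])
```

A production QC engine would run dozens of such checks in parallel across whole media files and report failures against station or group specifications.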
Most newer DTV transmitters come with the ability to be remotely monitored and controlled. Several manufacturers offer add-on solutions for popular transmitters and exciters that can send email alerts or messages to the operator or chief operator when parameter anomalies outside the desired range are detected. Some can monitor multiple transmitters simultaneously for hub operations. Some systems also provide API control of transmitter audio processors. Anything that can save a road trip to check or adjust a remote transmitter is a useful advantage for stations.
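The out-of-range alerting described above reduces to comparing polled readings against configured limits. In the sketch below, the parameter names, limits, and readings are all hypothetical; a real system would poll each transmitter via the vendor's SNMP interface or API and deliver alerts by email or SMS.

```python
# Minimal sketch of transmitter parameter alerting. All parameter names,
# limits, and readings here are hypothetical examples.

LIMITS = {  # acceptable (low, high) range per monitored parameter
    "forward_power_kw": (9.0, 10.5),
    "reflected_power_w": (0.0, 50.0),
    "pa_temp_c": (10.0, 45.0),
}

def find_anomalies(readings):
    """Return (parameter, value, low, high) for each out-of-range reading."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not low <= value <= high:
            alerts.append((name, value, low, high))
    return alerts

# One polling cycle from a hypothetical remote transmitter:
readings = {"forward_power_kw": 9.8, "reflected_power_w": 120.0, "pa_temp_c": 41.0}
alerts = find_anomalies(readings)
for name, value, low, high in alerts:
    print(f"ALERT: {name}={value} outside {low}-{high}")  # would be emailed/texted
```

For hub operations, the same loop would simply iterate over readings from multiple transmitters.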
Multi-monitors, also known as multiviewers, are a huge benefit for monitoring, both at the station and at the hub. Most multiviewers include alarms to alert operators when video and/or audio isn’t present, or if either contains other unwanted anomalies.
Server to Transmitter Connections
Most TV station groups use a custom hub architecture. What they all have in common is that they transfer tremendous amounts of data over long distances daily. What varies is how each group moves files to and from hubs and local transmitters. Most use private fiber with satellite or public internet backup for station/hub connections. Some use FTP for file transfers. Others use Signiant’s high-speed transfer protocol, which is said to be about 100x faster than FTP.
In typical hub operations it isn’t unusual for stations to use a local edge server for local content and last-minute playlist additions. Edge servers eliminate the time it takes for a hub to ingest outside content, particularly when it is unique to a specific market, such as local car dealer spots for example. Some hub operations feed video from a hub server, others use automation to control local servers. Many are a hybrid combination of the two. Streamed from a hub or remotely controlled, the media content is ultimately fed to the local transmitter for broadcast distribution.