With multiple file formats becoming common, engineers need to adopt best practices for signal quality measurements.
Media streams and files have special needs when it comes to monitoring and maintenance. Contribution has one set of formats, production another, and distribution its own set. Even so, we still need to monitor all of them.
The need to monitor for Quality Control has not changed. At each process step the media needs to be tested and monitored to maintain quality and keep it within “broadcast standard” parameters. And it’s different across the media ecosystem. How can we do this?
A dear friend and senior engineer at one of the major networks kept asking when I would continue this series. He laments the difficulty for engineers to find discussions, guidance and knowledge on how to maintain the IP facility. Sure we are starting to see cool new technology with very clever names, but sometimes without direction of what actually needs to be done and why.
The buzzwords may be familiar, but that alone is insufficient. Before getting into the whole discussion of streams, which flavors to monitor and how, let’s talk basics.
In the IT industry there are mature tools for monitoring network traffic across the LANs & WANs and tools to optimize network performance and manage that network traffic. These tools do not care what the data stream is or what type of file is on the network. While these tools are valuable and necessary, there is the additional requirement to support media transport across the network. In addition, it is necessary to optimize the broadcast LAN, which has its own performance requirements. Because media passes through many different processes and potential changes in compression formats, the measuring and monitoring of the IP stream is essential to quality assurance.
Monitoring a live stream is different from just checking the integrity of a file. Compression techniques are applied differently as well. One of the more interesting challenges in the great format wars is being able to test, measure and correct signals in a dynamic fashion, and to do so across multiple stream formats.
At the moment we have MPEG-TS (aka ISO/IEC 13818-1 or ITU-T Rec. H.222.0, carried per SMPTE 2022-1/2), ASPEN, AIMS (aka VSF TR-03 & TR-04), NDI & NMI. But we are not yet done. For more info see The Care and Feeding Of An IP Facility Part 1. Other formats to be maintained may include: SMPTE 2022-5/6, SMPTE 2059-1/2, NMOS and let’s not forget SMPTE 2110.
My audio colleagues would not forgive me if we didn’t include SMPTE ST 302, AES67, AES10 (MADI), Dante, TICO and there are more. Each has different characteristics that will require proper tools to analyze the signals and then recommend corrective measures based on the individual parameters of each format.
Will media production organizations normalize all streams to a “house” standard? If so, are tests performed when the inbound stream is transcoded? Or, because all the signals are moved over the common IP network, does the stream analyzer monitor the network and then differentiate between each stream? Signals still need testing before and after transcode for quality assurance.
With PTP (Precision Time Protocol) as the new “black” reference, will the test and measurement technology be able to measure the synchronization or latency characteristics in an encapsulated stream, or will it separate the audio, video and timing? Let’s not leave out good old SMPTE ST 2038 metadata.
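For reference, PTP derives two numbers from a single timestamp exchange between master and slave: the clock offset and the mean path delay. A minimal sketch of that arithmetic, assuming a symmetric network path (the function name is illustrative, not taken from any PTP library):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    All timestamps in seconds; assumes equal delay in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0  # one-way network delay
    return offset, mean_path_delay

# Hypothetical exchange: slave clock 1.5 ms ahead, 0.5 ms delay each way
offset, delay = ptp_offset_and_delay(100.0000, 100.0020, 100.0100, 100.0090)
print(offset, delay)  # 0.0015 0.0005 (approximately, in seconds)
```

Asymmetric paths break the symmetry assumption, which is one reason measuring timing inside an encapsulated stream is harder than it looks.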
What to measure?
Going back to my friend’s quest, even as I am researching for this article, it’s difficult to find technical references explaining what parameters should be monitored and measured and the metrics of acceptability. There are all kinds of Interop conferences to see if any of this actually works together, but not a lot explaining what constitutes broadcast quality or acceptable parameters for testing and measuring.
Latency is not a video or audio quality parameter. Neither is bitrate error, and neither is packet loss. However, they can definitely have a serious impact on quality. And there’s the rub:
What exactly is the technical specification or acceptable range for bitrate and packet loss? A long time ago in a world far away, video was one volt, measured peak-to-peak and there was a standard that defined acceptable signal levels. There were tools within the system that allowed adjustments and corrections if the signals were outside acceptable parameters. Audio has its own specifications for frequency range, phasing and levels before distortion.
You could actually look at the specifications of the standards and understand permissible ranges. As the world went digital so did the test and measurement equipment.
Waveform and vector representation didn’t really apply. It’s nice to have old friends as a reference but they really don’t tell what’s wrong with the audio or video and where to correct the signal if it’s out.
SDI introduced new test measurement parameters and mostly new technology with the same names for adjusting—even though they didn’t really do the same thing.
- How many packets can be lost before the signal is unacceptable?
- What is the representation of packet loss in a testing device and visually/aurally?
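The second question at least has a quantitative starting point. Streams that carry 16-bit RTP-style sequence numbers (SMPTE 2022 and SMPTE 2110 both do) let a test device count gaps directly. A minimal sketch, assuming in-order arrival and ignoring reordering:

```python
def count_lost_packets(seq_numbers):
    """Count missing packets from a stream of 16-bit RTP sequence numbers.

    Assumes packets arrive in order: a jump of n in the sequence number
    means n-1 packets were lost. Handles 16-bit wraparound (65535 -> 0).
    """
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        gap = (cur - prev) % 65536   # distance, accounting for wraparound
        lost += gap - 1              # a gap of exactly 1 means no loss
    return lost

# Packets 7 and 8 missing, then a clean wrap from 65535 to 0:
print(count_lost_packets([5, 6, 9, 10]))          # 2
print(count_lost_packets([65534, 65535, 0, 1]))   # 0
```

Counting lost packets is the easy part; mapping a given loss count to visible or audible impairment still depends on the format and any forward error correction in use.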
Bitrate is the IP version of 1 Volt P-P. So if the stream is supposed to be uncompressed, does the bitrate error parameter indicate whether there are insufficient bits or too many? Is the bitrate error parameter quantitative or qualitative? In other words, is the test device seeing insufficient bits, too many bits, or is the quality of each bit in error?
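The quantitative side of that question is simple to sketch: total the bits captured in a measurement window and compare against the nominal rate. The packet sizes, window and tolerance below are hypothetical, not from any standard:

```python
def observed_bitrate_mbps(packet_sizes_bytes, window_seconds):
    """Average bitrate over a capture window, in megabits per second."""
    total_bits = sum(packet_sizes_bytes) * 8
    return total_bits / window_seconds / 1e6

def bitrate_in_range(observed_mbps, nominal_mbps, tolerance_pct):
    """Flag whether the observed rate is within a percentage of nominal."""
    return abs(observed_mbps - nominal_mbps) <= nominal_mbps * tolerance_pct / 100.0

# Hypothetical: 100,000 packets of 1,400 bytes captured over one second
rate = observed_bitrate_mbps([1400] * 100_000, 1.0)
print(f"{rate:.0f} Mb/s")                    # 1120 Mb/s
print(bitrate_in_range(rate, 1100, 5.0))     # True: within 5% of nominal
```

Note this is purely quantitative; it says nothing about whether the bits themselves are in error, which is the qualitative half of the question.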
In this new world order, in addition to plain old QC (Quality Control), there is now QoE, Quality of Experience. How do you measure it, and what are the criteria?
When I first started in broadcast, every time I visited my parents I would adjust my dad’s TV set because it was always green. And I would always get yelled at for messing it up. Seems dad liked his TV picture slightly green. Isn’t that a QoE example? Mine versus His.
If a signal is compressed, the question of how many bits is the same. Now what about the compression itself? First, each codec has its own specifications and acceptable parameters. Second, once the signal is compressed, each compression algorithm performs different processing, so the qualitative aspect of measurement needs to account for each codec. How then to analyze the quality of compression?
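One common objective approach is to compare the decoded output against the original reference, for example with PSNR (peak signal-to-noise ratio). A toy sketch over a handful of 8-bit luma samples; real tools operate on full frames and the sample values here are invented for illustration:

```python
import math

def psnr(original, decoded, max_value=255):
    """Peak signal-to-noise ratio between two equal-length 8-bit sample lists."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")   # identical signals: no distortion at all
    return 10 * math.log10(max_value ** 2 / mse)

# A small coding error on one sample out of four:
print(round(psnr([16, 128, 235, 64], [16, 130, 235, 64]), 1))  # 48.1
```

PSNR is only one metric, and it correlates imperfectly with what viewers actually perceive, which is exactly why QoE measurement per codec remains an open question.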
In the IT community, bitrate error and packet loss are network testing parameters, agnostic to the data being transported. In media, bitrate and packet loss have more tightly defined parameters for transporting media over the network and are not agnostic.
With all the formats on the table for discussion and a high probability there will be more than one, the test technology needs to be dynamic. It needs to be able to identify the format and then validate that it meets the proper technical specifications for that format.
Going forward, one of my goals will be to research and report on what each signal parameter means and what are the acceptable ranges. I’ll look at how to read a test and measurement tool and suggest ways to efficiently monitor signals. Finally, I will suggest any available corrective technology that might be applied when a signal falls outside acceptable parameters.
Editor’s Note: Gary Olson has a book on IP technology, “Planning and Designing the IP Broadcast Facility – A New Puzzle to Solve”, which is available at bookstores and online.