Monitoring & Compliance In Broadcast: Monitoring Video & Audio In Capture & Production

The ability to monitor video and audio during capture and production is becoming increasingly important, driven by the need to output to many widely different services, and to do so very quickly.
In this article, we focus on live events, where the event itself is pre-scheduled, and planning for production is possible in advance, as opposed to an unpredictable news event.
The “traditional” live event, whether sports or something like a music festival, was in the past generally output to conventional broadcast channels. The need to consider streaming outlets was nowhere near as important as it is today. At a typical large event of global interest, such as the Olympics, there would traditionally be a large Broadcast Center, built specially for the event, with an equally large herd of OB trucks covering the ground outside. The host broadcaster, normally the official public broadcaster for the host country, would undertake to provide clean feeds to the various broadcasters covering the event for their local populations.
With increasing globalization of potential audiences, and many different ways for audiences to consume the footage, from HDR, full 4K or even 8K, down to YouTube Shorts on a cellphone, the rights to live events are often purchased by global media organizations, who manage the on-site production and then sell on to services that wish to carry them. The previous direct link between production and geographical audience is no longer exact. This, together with the advances in remote production over (at the time of writing) the last five years, has cut down the number of large installations on site, and the herds of trucks are somewhat depleted.
Having said that, all of the on-site elements, whether people, cameras, trucks, or buildings full of cables, now have increasingly complex and sophisticated roles, and without automation and monitoring at all stages, the potential for disaster is greater. Doing much more for less is a truism in today’s media world, and that very much applies in production and capture. Monitoring is essential both at the initial capture stage and subsequently at any encoding stage, to ensure potentially fatal errors will not show up further down the line.
As part of this increasing sophistication, the increasing use of IP, carrying signals over networks using standards such as ST 2110, ST 2022-6, and NDI, offers productions new potential for flexibility, scalability, and integration with management, storage, encoding and IT systems. The possibility of using a remote production hub with quick adaptation and turnaround between live events has started to allow media organizations to cover more “niche” sports and cultural events, with a corresponding increase in the use of the assets. Since many remote production hubs make use of cloud compute infrastructure, this also potentially allows resources to be spun up when needed and scaled back when not in operation.
Having said that, many organizations still maintain the reliable workhorse of SDI, particularly for cameras, as part of a hybrid operation, so both SDI and IP monitoring are needed. IP based operations require different technical skills, and where monitoring is concerned, a high degree of knowledge is still required to interpret and manage any issues. Where output is known to be to many different standards and may have to meet specific organizational delivery standards downstream, this must be considered in advance. Fundamentally, is it enough to capture the event at the highest possible resolution and then “down-rez” from there through myriad transcoding operations?
Monitoring Across Networks
Key to successful live events using IP is network management and monitoring. In an ideal world, IP signals would be perfect, with a lovely squared off display on the monitoring scope! Many network monitoring devices aimed at on-site live event use offer user interfaces that are similar in view to those used traditionally for SDI, which have been proven over time to be effective for human interpretation. Unfortunately, basic real-world physics means that entropy will inevitably get into the system, and even with specialist and robust networking, some losses are inevitable. Using IP for live events requires bi-directional protocols, and a good knowledge of both the source and destination(s). It is important to always remember that for high bit rate video, when sending to, for example, a data center, a resend of missing data is likely to be impossible.
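As a simple illustration of what this means at the packet level, the sketch below (in Python, with a hypothetical multicast address and port) watches the RTP sequence numbers of an incoming flow and counts any gaps. Real ST 2110 and ST 2022-6 monitoring equipment goes far beyond this, but the underlying principle of watching for missing packets is the same.

```python
# Minimal sketch: count RTP sequence-number gaps on an incoming UDP multicast flow.
# The multicast group and port below are hypothetical placeholders; adjust to the
# flow being monitored. ST 2110 and ST 2022-6 both carry essence as RTP over UDP.
import socket
import struct

MCAST_GROUP = "239.1.1.10"   # hypothetical multicast address
PORT = 5004                  # conventional RTP port, adjust as required

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

expected = None
received = 0
lost = 0

while True:
    packet, _ = sock.recvfrom(65535)
    if len(packet) < 12:                           # RTP header is at least 12 bytes
        continue
    seq = struct.unpack("!H", packet[2:4])[0]      # 16-bit RTP sequence number
    received += 1
    if expected is not None and seq != expected:
        gap = (seq - expected) & 0xFFFF            # sequence numbers wrap at 65536
        lost += gap
        print(f"Gap detected: expected {expected}, got {seq} ({gap} packets lost)")
    expected = (seq + 1) & 0xFFFF
    if received % 100000 == 0:
        print(f"received={received} lost={lost}")
```

In practice this kind of measurement is done in hardware or highly optimized software, since a single high bit rate flow can run to hundreds of thousands of packets per second.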
Apart from specific devices aimed at analyzing local conditions and ensuring correct timing and connections, there are also more centralized network monitoring systems available that will monitor the whole process, from recording to output. These come very much into play in ensuring that any errors anywhere in the process are quickly found, analyzed to determine their criticality, and used to direct engineers to the location of any faults. Some of the more sophisticated network monitoring systems are beginning to use AI effectively to manage error detection and, in some cases, are able to “fix” the problem, depending on where it may be.
The ability to monitor, log and record any faults also allows engineers to identify the culprit wherever it may lie in the chain of processes that make up a system which is complex and yet must be accurate at every stage in order for content to reach the end-user.
A Proliferation Of Requirements
The granularity required of monitoring systems is increasing, and fully managed IP networks are often needed, as public networks in many cases cannot offer the reliability required. While the advantages of going IP are many, if UHD capture is being considered, the network capability back to the production center must be given serious thought, not just in terms of the budget required, but also the physical ability of the network to carry the high-resolution streams accurately.
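To give a sense of the numbers involved, the back-of-envelope calculation below assumes a single uncompressed 10-bit 4:2:2 UHD flow at 50 frames per second and ignores packet overhead and blanking; the figures are illustrative only.

```python
# Rough illustration of the payload bandwidth of one uncompressed UHD flow
# (3840x2160, 10-bit 4:2:2, 50 fps), ignoring RTP/UDP/IP overhead and blanking.
width, height = 3840, 2160
bits_per_pixel = 20    # 4:2:2 at 10 bits: (2 Y + 1 Cb + 1 Cr) x 10 bits per 2 pixels
frame_rate = 50

bits_per_second = width * height * bits_per_pixel * frame_rate
print(f"{bits_per_second / 1e9:.2f} Gbit/s per flow")   # roughly 8.3 Gbit/s
```

A handful of such flows will fill a 25 GbE link, which is one reason compressed contribution formats are so often used on the links back to a remote production hub.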
Whether to go full IP based, with or without remote production operations, or maintain some SDI, requires consideration of a number of factors, including budget constraints, latency requirements, and the aforementioned technical expertise.
Whether using remote or onsite production, one critical factor to be considered is the codecs to be used. At the time of writing, many live events out there in the wild are still using H.264, but for HDR and 4K, H.265 offers better efficiency, and for 8K, H.264 is unsuitable. A look at the many analysis tables that compare bitrate versus file size and quality soon shows the relative advantages of each. Budget does come into play here, just as it does when considering the additional storage costs when capturing and sending multiple high-resolution streams back to base. If multiple media organizations all require direct feeds, the encoding and associated required incoming metadata can be different for each organization. Most media organizations publish their specific delivery requirements, for live and, separately, for non-live productions.
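The storage point can be illustrated with some simple arithmetic. The bitrates, feed count and duration below are purely illustrative assumptions rather than recommendations, but they show how quickly the totals mount up when multiple high-resolution streams are recorded for several hours.

```python
# Back-of-envelope storage calculation for recorded contribution streams.
# All figures are illustrative assumptions; real bitrates depend on codec,
# profile, resolution and content.
HOURS = 6                      # assumed length of event coverage
CAMERA_FEEDS = 12              # assumed number of isolated feeds being recorded

example_bitrates_mbps = {
    "H.264 1080p contribution": 50,
    "H.265 UHD HDR contribution": 80,
    "High bitrate mezzanine": 500,
}

seconds = HOURS * 3600
for label, mbps in example_bitrates_mbps.items():
    per_feed_tb = mbps * 1e6 * seconds / 8 / 1e12      # bits -> bytes -> terabytes
    total_tb = per_feed_tb * CAMERA_FEEDS
    print(f"{label}: {per_feed_tb:.2f} TB per feed, {total_tb:.1f} TB for {CAMERA_FEEDS} feeds")
```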
Many will use internal mezzanine formats, with transcoding workflows in place to allow for the different output services that are required. Very sophisticated tools are available for verifying files as they move through these largely automated media factories, and when planning live events, consideration must be given to the potential workflows for generating the spin-off non-live content, such as edited highlights and subsequent later retransmissions. These will in all likelihood be output in many different formats, so monitoring and reporting of any errors is required at different stages of the processing. It is generally not helpful to discover fatal errors at the very end of the process, at output, when they could have been picked up much earlier and fixed.
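As an illustration of the kind of automated check such a media factory might run, the sketch below uses the open source ffprobe tool (part of FFmpeg) to read stream parameters from a delivered file and compare them against a deliberately simplified delivery specification. The spec values and file name are hypothetical; real delivery specifications, and the dedicated QC tools that enforce them, go far beyond this.

```python
# Sketch of a simple automated file check using ffprobe (part of FFmpeg) to pull
# stream parameters out of a delivered file and compare them with a delivery spec.
# The SPEC values and the file name are illustrative placeholders only.
import json
import subprocess

SPEC = {"video_codec": "hevc", "width": 3840, "height": 2160, "audio_channels": 16}

def probe(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def verify(path):
    info = probe(path)
    video = [s for s in info["streams"] if s["codec_type"] == "video"]
    audio = [s for s in info["streams"] if s["codec_type"] == "audio"]
    problems = []
    if not video or video[0].get("codec_name") != SPEC["video_codec"]:
        problems.append("video codec does not match the delivery spec")
    if video and (video[0].get("width"), video[0].get("height")) != (SPEC["width"], SPEC["height"]):
        problems.append("resolution does not match the delivery spec")
    channels = sum(s.get("channels", 0) for s in audio)
    if channels != SPEC["audio_channels"]:
        problems.append(f"expected {SPEC['audio_channels']} audio channels, found {channels}")
    return problems

if __name__ == "__main__":
    for issue in verify("delivered_highlights.mxf"):   # hypothetical file name
        print("FAIL:", issue)
```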
So when considering monitoring for live events, it is important to have tools on-site that can not only check the networking status in detail, the timing, and the bitrates, but also verify the quality of the video and audio both before and after encoding. For many live events, an additional factor in the chain may be the application of digital rights management if the event is to be output on a paid subscription or pay-per-event basis.
In remote production, once the streams reach a production hub, monitoring the incoming feed at ingest is essential, not only for network errors such as packet loss, but also for video, audio and associated metadata quality errors. Monitoring during switching and editing processes, before and after transcoding if required, and at final output, can pay dividends in ensuring that, even with the potential for downstream errors caused by less efficient public networks, the end-user experience will be of better quality.
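Audio loudness is one concrete example of a quality check that can be run on captured segments at ingest. The sketch below uses FFmpeg’s EBU R128 (ebur128) filter to measure integrated loudness and flag anything that strays too far from a target; the file name, target and tolerance are illustrative, and the actual figures should come from the relevant delivery specification.

```python
# Sketch: measure the integrated loudness of a captured segment with FFmpeg's
# ebur128 filter and flag it if it drifts too far from the target.
# File name, target and tolerance are illustrative placeholders.
import re
import subprocess

TARGET_LUFS = -23.0    # EBU R128 target; delivery specs may differ
TOLERANCE_LU = 1.0     # illustrative tolerance

def integrated_loudness(path):
    result = subprocess.run(
        ["ffmpeg", "-nostats", "-i", path, "-af", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True)
    # The ebur128 report is written to stderr; the last "I: ... LUFS" figure
    # is the integrated loudness from the end-of-run summary.
    matches = re.findall(r"I:\s*(-?\d+(?:\.\d+)?)\s*LUFS", result.stderr)
    return float(matches[-1]) if matches else None

loudness = integrated_loudness("ingest_segment.mxf")   # hypothetical segment
if loudness is None:
    print("Could not measure loudness")
elif abs(loudness - TARGET_LUFS) > TOLERANCE_LU:
    print(f"Loudness {loudness} LUFS is outside tolerance of target {TARGET_LUFS} LUFS")
else:
    print(f"Loudness {loudness} LUFS is within tolerance")
```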
Conclusion
To sum up, for live events an overall approach to monitoring needs to be embedded as part of the pre-planning for the event. While offline tools can be used during the on-site set-up to verify that everything is working as it should, online real-time tools that constantly monitor the output should also be in place at critical points during the production. Post event, if content is to be re-edited and/or transcoded for output to other services, file-based monitoring should also be in place to ensure the quality of these often very different services. Finally, after the event has concluded, these tools can be used for analysis to see whether any improvements can be made for future events.