Orchestrating Resources For Large-Scale Events: Part 4 - Monitoring Data For Efficiency & Forensics
Large-scale remote production systems can be complex and challenging to monitor, but IP presents many opportunities to capture and make use of rich data streams.
The goal of every live TV sports broadcast is to provide the best possible content with the highest technical quality. How that goal is achieved usually depends on people, technology, and budgets. How do large-scale productions with strong technical talent monitor a production for compliance and efficiency to provide the highest quality of service (QoS) and quality of experience (QoE)?
Throughout the history of live TV, technical crews have used waveform monitors, professional video monitors, and VU meters to visually confirm video and audio quality and metrics. Similar live signal monitoring continues to this day, but modern monitoring methods also include automatic logging, electronic alerting for technical errors and deviations, and capturing and logging everything for later evaluation and auditing.
Because the trend in major TV sports production is IP and REMI IP, digital performance metrics make it easy to capture, log and monitor data, to alert operators to exceptions, and to reliably reproduce exceptions for troubleshooting.
Many exception settings are programmable, subject to an acceptable minimum level of service. The level of acceptable exceptions often depends on the size of the audience and what technical issues viewers and sponsors will or won’t tolerate.
Objective monitoring is inherently more scientific and repeatable than the subjective observation relied upon in the analog days.
The scale of production for a major international sporting event is epic compared to, for example, the scale of budgets and revenues for broadcasting a local fishing tournament. However, the technical needs and pitfalls at both ends of the scale are much the same. Large-scale productions have more moving parts, higher financial risks and are typically more sophisticated and complicated.
IP technology has changed the logistics and economics of TV production workflows and enabled distributed production, by allowing sharing and control between production technicians and equipment in multiple locations. When we have multiple stadiums spread across different cities, with multiple matches being played simultaneously, it is a massive undertaking for the production crews and infrastructure involved and necessitates an array of data collection points which all need to be devised, established, monitored and archived.
Thankfully, this large-scale challenge has attracted considerable innovation in recent years and there is a growing portfolio of systems available to help.
There is more to using IP and the cloud in live sports productions than image processing, compression, and transport. Many major broadcast manufacturers have introduced integrated platforms under a variety of brand names and monikers that provide a wide range of IP infrastructure management solutions and controls specifically for broadcasters. Whilst the primary purpose of most of the tools in the orchestration technology stack is designing systems, and then managing how they are spun up and down, most of them also present opportunities for data gathering and monitoring.
What’s In The Stack
Some integrated platforms qualify as software-defined networking (SDN) systems, in which the management and control plane is separated from the data plane. The management and control plane typically provides a control screen that can be customized to prioritize and meet the needs of a particular facility. Because of the wide variety of network switches and LAN topologies, an SDN controller must support the local network architecture. One of the most common local network designs in media facilities is monolithic, where all media edge devices are connected to the same switch. The problem with monolithic switches is that they don't scale. A scalable networking choice popular in media facilities and data centers is spine and leaf.
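The scalability of spine and leaf comes from every leaf switch connecting to every spine, so any two edge devices are exactly two hops apart, with one redundant path per spine. The sketch below models a hypothetical two-spine, four-leaf fabric; the switch names and sizes are illustrative assumptions, not a real facility design:

```python
# Minimal sketch of spine-and-leaf reachability, assuming a full mesh
# between every leaf and every spine (hypothetical 2-spine, 4-leaf fabric).

from itertools import product

def build_fabric(num_spines: int, num_leaves: int) -> dict:
    """Return an adjacency map: every leaf links to every spine."""
    links = {}
    for s, l in product(range(num_spines), range(num_leaves)):
        links.setdefault(f"leaf{l}", set()).add(f"spine{s}")
        links.setdefault(f"spine{s}", set()).add(f"leaf{l}")
    return links

def paths_between_leaves(links: dict, a: str, b: str) -> list:
    """All two-hop leaf -> spine -> leaf paths between two edge switches."""
    return [[a, spine, b] for spine in sorted(links[a]) if b in links[spine]]

fabric = build_fabric(num_spines=2, num_leaves=4)
print(paths_between_leaves(fabric, "leaf0", "leaf3"))
# Every leaf pair is two hops apart, via either spine - adding a spine
# adds capacity and redundancy without re-cabling the edge.
```

Adding leaves grows port count and adding spines grows capacity, which is why the topology scales where a single monolithic switch cannot.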
The SDN controller establishes a connection between a source and a destination. It communicates with source devices, destination devices, the network, or a combination of the three. A service orchestrator allows scheduling of resources and enables prediction and allocation of required future network capacity. It also adds in-depth monitoring and a time dimension to an SDN controller.
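One way to picture that time dimension: the orchestrator treats each link as a pool of bandwidth booked against time, rejecting any reservation that would exceed capacity at an overlapping instant. This is a minimal sketch under assumed link names and numbers, not any vendor's API:

```python
# Hypothetical sketch of an orchestrator reserving future link capacity.
# Link capacities, times, and bookings are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Link:
    capacity_gbps: float
    bookings: list = field(default_factory=list)  # (start, end, gbps)

    def load_at(self, t: float) -> float:
        """Total bandwidth booked on this link at time t."""
        return sum(g for s, e, g in self.bookings if s <= t < e)

    def reserve(self, start: float, end: float, gbps: float) -> bool:
        """Accept the booking only if no overlapping instant exceeds capacity."""
        instants = [start] + [s for s, e, g in self.bookings if start <= s < end]
        if any(self.load_at(t) + gbps > self.capacity_gbps for t in instants):
            return False
        self.bookings.append((start, end, gbps))
        return True

trunk = Link(capacity_gbps=100.0)        # e.g. one redundant 100 Gb trunk
print(trunk.reserve(13.0, 15.0, 60.0))   # first match feed fits -> True
print(trunk.reserve(14.0, 16.0, 60.0))   # overlapping feed would exceed -> False
```

A real orchestrator layers failover paths, priorities and monitoring on top, but the core scheduling question is the same capacity-over-time check.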
Many integrated platforms are customizable to match the unique needs of stations, production companies and OB scenarios. Many allow live monitoring and review of virtually all data points from each source and the inputs and outputs of all processing units, servers and connections in the workflow, including distribution.
Some integrated platforms are cloud-enabled to facilitate migration to the cloud, others are partially cloud-based to take advantage of specific cloud capabilities, and some are cloud-native applications conceived, designed and built to run entirely in a public cloud. Some are available as SaaS, others run as local software. Not all integrated platforms are equally sophisticated.
Many platforms allow users to leverage existing SDI gear in a hybrid SDI/IP environment including UHD and HDR delivery and can also include centralized production processing, facility interconnect, and multi-resolution, multiviewer monitoring. Some systems also use AI and augmented reality (AR) technologies to enhance their power.
More versatile integrated platforms can replace individual spreadsheets for functions such as crew and resource scheduling and management, asset management, automated media operations, and transmission management. Some platforms are equipment agnostic, while others are nearly all proprietary. Some integrated platforms require a great deal of bandwidth; in UHD systems, redundant 100 Gb data connections are not unusual.
Analytical Quality Control
TV QC includes checking media files before they air and internally auditing content and related data after they air. Sometimes internal QC auditing is referred to as a forensic audit. Most TV stations capture the content they broadcast 24/7/365 to document that every frame and word of every spot aired, should a sponsor question it.
In terms of pre-air file analysis, analytical quality control (AQC) refers to processes and procedures designed to ensure that the results of file analysis are consistent, comparable, accurate and within specified limits of precision. While file-based QC has been in use for some years, live and post-air QC relies on logging and compliance data.
Analyzing network and system performance after an event can identify the root cause of what went wrong and be used to improve future efficiency. Some systems capture every frame of video, others capture one frame per second for visual verification, and still others capture every video transition. When combined with detailed data logging for deep dives into what did or didn't happen, all three methods work well.
In a fully managed system, all workflow signal metrics are monitored for exceptions and key performance indicators (KPIs). Some fully managed systems also provide security features such as dynamic port management, password checks, automatic software updates and AI anomaly and pattern detection to reveal unexpected signal flows, logins, and other unusual behaviors on the network.
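At its core, exception-based KPI monitoring reduces to comparing each metric sample against its programmed acceptable range and alerting only on violations. A minimal sketch, assuming hypothetical metric names and limits:

```python
# Illustrative sketch of exception-based KPI monitoring. Metric names and
# thresholds are hypothetical examples of programmable acceptable ranges.

THRESHOLDS = {
    "packet_loss_pct": (0.0, 0.01),      # near-zero loss tolerated
    "ptp_offset_ns":   (-1000, 1000),    # timing lock window
    "audio_lkfs":      (-25.0, -23.0),   # e.g. around a -24 LKFS target
}

def check_sample(sample: dict) -> list:
    """Return (metric, value) pairs that violate their configured range."""
    alerts = []
    for metric, value in sample.items():
        lo, hi = THRESHOLDS[metric]
        if not (lo <= value <= hi):
            alerts.append((metric, value))
    return alerts

# Normal sample: no alerts. Lossy sample: one exception raised.
print(check_sample({"packet_loss_pct": 0.0, "ptp_offset_ns": 250, "audio_lkfs": -24.1}))
print(check_sample({"packet_loss_pct": 0.5, "ptp_offset_ns": 250, "audio_lkfs": -24.1}))
```

In a real system the thresholds would be operator-programmable per event, reflecting the acceptable minimum level of service discussed earlier.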
Many monitoring systems rely on Web Real-Time Communication (WebRTC) to provide web browsers and mobile applications with real-time communication (RTC) via application programming interfaces (APIs). WebRTC allows audio and video communication to work inside web pages by enabling direct peer-to-peer communication, without plugins or downloaded native apps. WebRTC is supported by Apple, Google, Microsoft, Mozilla, and Opera.
Traditional stream monitoring begins with methods and toolsets to monitor content during the ingest/encoding/transcoding process of content prep. Incorrect levels during ingest will cause problems later. Stream monitoring usually also includes monitoring the core network during the delivery process, such as the origin server and CDNs, and by monitoring the player or app used to decode content in a typical end-user viewing scenario such as OTT or live ABR internet streaming.
Monitoring live IP transport streams typically includes detecting anomalies such as blockiness, frozen frames, black frames, and audio/video syntax errors. Loudness detection and correction are also monitored for compliance. Transport streams should also be monitored for QoS metrics. There are many solutions available for transport stream monitoring. Some solutions are designed to monitor ST2110 and ST2022-6 networks. Others are designed to monitor ABR MPEG-2 transport streams. Some can work with both.
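Two of those anomaly checks can be sketched as simple tests on decoded frames: mean luma below a threshold flags a black frame, and a run of identical frames flags a freeze. The thresholds and toy "frames" below are illustrative assumptions; production probes are far more sophisticated and also analyze the compressed stream:

```python
# Simple sketch of black- and frozen-frame detection on decoded luma
# samples. Frames are toy lists of pixel values; thresholds are assumptions.

def mean_luma(frame):
    return sum(frame) / len(frame)

def classify(frames, black_thresh=16, freeze_frames=3):
    """Return (index, label) events for black frames and freeze runs."""
    events, run = [], 1
    for i, frame in enumerate(frames):
        if mean_luma(frame) <= black_thresh:
            events.append((i, "black"))
        run = run + 1 if i and frame == frames[i - 1] else 1
        if run == freeze_frames:          # report once per freeze run
            events.append((i, "frozen"))
    return events

live = [[120, 130], [120, 130], [120, 130], [5, 4], [90, 80]]
print(classify(live))
# -> [(2, 'frozen'), (3, 'black')]
```

Blockiness and syntax errors need codec-level analysis, but the alerting pattern is the same: classify, timestamp, and log every exception.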
In an adaptive streaming network, a single video program is transcoded into many bit rate variants the player can use to adjust for network performance. Bit rate “shifting” can sometimes occur while the content is being played. If the shifts are not smooth, playback anomalies can occur. All the bit rate versions must be properly aligned with each other for smooth bit rate shifts. Network conditions can change, which is what ABR is designed to compensate for.
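Rung selection in an ABR player can be pictured as choosing the highest ladder variant that fits within a safety margin of the measured throughput; the ladder and margin below are illustrative assumptions rather than any player's actual heuristic:

```python
# Sketch of adaptive bit rate rung selection. The ladder and the 0.8
# safety margin are hypothetical; real players also weigh buffer depth.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]   # illustrative ABR variants

def pick_rung(measured_kbps: float, margin: float = 0.8) -> int:
    """Choose the highest variant that fits within margin * throughput."""
    budget = measured_kbps * margin
    candidates = [r for r in LADDER_KBPS if r <= budget]
    return max(candidates) if candidates else min(LADDER_KBPS)

print(pick_rung(7000))   # 0.8 * 7000 = 5600 -> 5000 kbps rung
print(pick_rung(900))    # 0.8 * 900 = 720 -> 400 kbps rung
print(pick_rung(300))    # below the ladder -> lowest rung anyway
```

Smooth shifting also depends on the variants being segment-aligned, which is why misaligned ladders produce the playback anomalies described above.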
There are a wide variety of choices for content and IP monitoring solutions. Facilities and production scenarios are unique. All solutions deserve due diligence to determine which is the best fit for your organization and situation.