Orchestrating Resources For Large-Scale Events: Part 3 - Contribution & Remote Control

A discussion of camera sources, contribution network and remote control infrastructure required at the venue.


This article was originally published as part of a Themed Content Collection: Orchestrating Resources For Large-Scale Events.

In many ways not much has changed with high-profile sports production; we have a collection of cameras inside a venue, connected to a gallery where cameras are remotely controlled, sources are switched, graphics are added and so on. With remote IP based production, what has changed is that parts of that infrastructure are no longer parked outside the venue; they are hundreds or thousands of miles away. Fundamentally, most of the contribution challenges of remote production revolve around the latency introduced by the distances involved. Latency is the key issue that determines which parts of the infrastructure can sensibly be kept ‘at home’ in the hub and what needs to be on site.

By necessity, therefore, some of our infrastructure (beyond just cameras and microphones) and associated crew needs to be on site. Vital systems for networking, compression, monitoring, remote control and comms all need to be somewhere near the action. How this is done varies. When the pandemic first accelerated adoption, a full OB vehicle would often be sent to the venue with only a subset of its systems actually used; this approach is still favored by some broadcasters and production companies as it offers an added layer of redundancy. A few years on we have a range of approaches in play. Preparation of flypacks is becoming very popular because it gives an opportunity for individual rack and wider system configuration and testing back at base, and the flypacks are highly mobile, making it practical to deploy them nationally or internationally. Some have built much smaller OB vehicles designed specifically for remote production. Others are installing permanent pitch side racks at larger venues where remote production is now a weekly occurrence on game day.

What Are We Connecting?

A typical high profile soccer game might have 40 cameras in and around the stadium. It’s a combination of different camera types, each of which has its own requirements in terms of connectivity and control. Some are manned on site (system cameras, flycams, drones etc.), some are fixed position (stationary cameras, mini-cams etc.) and others are robotic (PTZ, rail cams etc.). Different cameras have varying degrees of remote-control capability. Most will have iris control for basic shading and many offer remote color control.

The rigging of cameras, audio etc. is mostly traditional and well understood. Connectivity to the contribution network is quite traditional too: wireless or wired, serial or IP. Using IP broadcast cameras designed for this type of application will streamline connectivity and control. Serial broadcast cameras can be used with the aid of camera mounted converter boxes. Many specialty cameras will not be IP native and they too will need converters. It is also possible to combine contribution via cellular 4G or 5G, especially for roving cameras and positions away from the main venue.

Contribution Network

We are discussing remote IP based production, so the backbone within the venue is an IP based contribution network: essentially a high-capacity network switch that takes input sources from all the cameras and feeds them to a transmission interface connecting to/from backhaul. Most prefer to use an uncompressed ST 2110 based contribution network, although some might opt for a compressed format, primarily to reduce cost by reducing required switch capacity.

Calculating required network bandwidth is a matter of mathematics: transmitting full uncompressed video requires a bit rate ranging from around 2 Gbps for 8-bit color depth with 4:2:2 chroma subsampling, up to as much as 8 Gbps for RGBA color at 16 bits per channel. Forty cameras at a high-profile soccer game might therefore require between 80 Gbps and 320 Gbps of bandwidth. This might work for an on-site contribution network, given sufficient switch capacity, but not for backhaul, so some form of compression before feeding into backhaul is almost certainly required. There are a number of codec options, but JPEG XS is gaining in popularity because it delivers compression of 10:1 or more without noticeable degradation in quality. Compression is typically achieved using hardware encoders located with the network switch(es) in the flypack, rack, OB etc. This compression inevitably adds some degree of computational latency.
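
As a worked example, the arithmetic above can be sketched in a few lines of Python. The 1920x1080 at 60 fps frame geometry is an assumption here (it is the geometry that reproduces the 2-8 Gbps per-camera range quoted above), as is the exact 10:1 JPEG XS ratio:

```python
def uncompressed_bps(width, height, fps, bits_per_sample, samples_per_pixel):
    # Raw video bit rate: pixels/second x samples/pixel x bits/sample.
    return width * height * fps * samples_per_pixel * bits_per_sample

# 8-bit 4:2:2: 2 samples per pixel (Y on every pixel, Cb/Cr on alternate ones)
low = uncompressed_bps(1920, 1080, 60, 8, 2)    # ~2.0 Gbps
# 16-bit RGBA: 4 samples per pixel
high = uncompressed_bps(1920, 1080, 60, 16, 4)  # ~8.0 Gbps

cameras = 40
print(f"Per camera: {low / 1e9:.1f} to {high / 1e9:.1f} Gbps")
print(f"On-site contribution: {cameras * low / 1e9:.0f} to {cameras * high / 1e9:.0f} Gbps")

# JPEG XS at ~10:1 brings the backhaul requirement down accordingly
print(f"Backhaul after 10:1 JPEG XS: "
      f"{cameras * low / 10 / 1e9:.0f} to {cameras * high / 10 / 1e9:.0f} Gbps")
```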

Managing IP addresses is a perennial challenge in all large-scale IP systems. Every single device on the network needs a unique IP address, and in addition to cameras we must consider confidence monitoring, audio systems, comms etc. What may seem like a relatively manageable challenge within the confines of a single venue contribution network becomes far more complex within a multi-site IP network for a major tournament or national league coverage. It is a challenge that can be solved in several ways, and one addressed by a number of commercial integrated platforms and IT systems. Regardless of the approach, a significant exercise in planning, device configuration and system testing is required ahead of the event.
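
As an illustration of the planning exercise, the sketch below uses Python’s standard ipaddress module to carve a private address block into per-venue and per-device-class subnets. The 10.0.0.0/8 block, the /16-per-venue split and the device classes are illustrative assumptions, not a standard scheme:

```python
import ipaddress

event_block = ipaddress.ip_network("10.0.0.0/8")

# One /16 per venue leaves room for 256 venues in this (illustrative) scheme.
venues = list(event_block.subnets(new_prefix=16))

def venue_plan(venue_net):
    # Carve each venue into /24s per device class.
    classes = ["cameras", "audio", "comms", "monitoring", "control"]
    subnets = venue_net.subnets(new_prefix=24)
    return dict(zip(classes, subnets))

plan = venue_plan(venues[0])
for device_class, subnet in plan.items():
    print(f"{device_class:>10}: {subnet}")
```

The point is less the specific scheme than that the allocation is deterministic and documented, so device configuration and testing can be scripted ahead of the event rather than improvised on site.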

Remote broadcast infrastructure requires a return path from the production gallery to the venue for confidence monitoring (video and audio for various members of the on-site team), GPIO, tally and remote control. Comms requires two-way communication. The venue contribution network needs to accommodate this in terms of design, configuration and capacity.

Remote Control

Most large-scale productions will be based on proprietary systems supplied by the major vendors. In this scenario, system cameras, CCUs, routers and switchers are all engineered to work seamlessly together, and with IP native systems that means remote control is part of the infrastructure. With an IP native system camera, a single IP connection carries video, monitoring and remote-control data. Serial cameras are less straightforward; using a camera mounted converter box to bridge the gap between SDI and IP for video is simple enough, but for remote control, serial cameras present a latency-related challenge: most will time out and revert to on-board control (and settings) if there is too much latency between the camera and the CCU. Some converter box vendors solve this by running software within the converter box which sends data to the camera, fooling it into seeing the converter box as the CCU.
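
The principle can be sketched as follows. Real camera control protocols are proprietary and vendor-specific, so the heartbeat frame, baud rate, interval, port and helper below are purely hypothetical placeholders (the serial link uses the pyserial library):

```python
import time
import serial  # pyserial

HEARTBEAT_FRAME = b"\x02CCU-ALIVE\x03"  # hypothetical keep-alive message
HEARTBEAT_INTERVAL = 0.1                # seconds, well inside the camera timeout

def keep_camera_alive(port="/dev/ttyUSB0", baud=38400):
    """Runs in the converter box: answer the camera locally so it never sees
    the hub<>venue latency and never reverts to on-board control."""
    link = serial.Serial(port, baud, timeout=0)
    while True:
        link.write(HEARTBEAT_FRAME)   # camera believes the CCU is local
        pending = link.read(256)      # drain any camera responses
        if pending:
            forward_to_hub(pending)   # relay real control traffic (stub)
        time.sleep(HEARTBEAT_INTERVAL)

def forward_to_hub(data: bytes):
    pass  # placeholder: queue data onto the IP control stream to the hub
```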

Alongside the system cameras a significant array of specialty cameras is likely to be used and these are likely to require a third-party remote-control solution. Such solutions require a control data stream to be delivered to the camera via the network alongside video connectivity, and that may mean a dedicated IP connection.
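
In the simplest case such a control stream is just low-rate datagrams addressed to the device; the sketch below sends hypothetical pan/tilt values over UDP. The device address, port and packet format are illustrative assumptions, not any real product’s protocol:

```python
import socket
import struct

CAMERA_ADDR = ("10.0.0.42", 5005)  # hypothetical robotic head on the venue LAN

def send_pan_tilt(sock, pan_deg: float, tilt_deg: float, seq: int):
    # A sequence number plus two floats: a few dozen bytes per command,
    # tiny next to the video flows sharing the network.
    payload = struct.pack("!Iff", seq, pan_deg, tilt_deg)
    sock.sendto(payload, CAMERA_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i, (pan, tilt) in enumerate([(10.0, -2.5), (12.0, -2.0), (14.0, -1.5)]):
    send_pan_tilt(sock, pan, tilt, i)
```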

There are broadly two different types of remote camera control to consider; shading/coloring, and motion control for lenses and/or robotics. What will be possible depends entirely upon the specific cameras and controllers used.

With shading and coloring, most cameras support iris control or, if necessary, gain control. Many cameras support control of blacks and basic color. IP native system cameras may support more comprehensive remote color control. Many large-scale productions utilize automated color control systems which are part of the hub gallery infrastructure, significantly reducing the need for remote color control. With some remote-control systems, where cameras have an OSD (On Screen Display), it is possible to access this remotely, giving a fairly deep level of remote configuration capability.

Remote motion control presents a potential latency pain point. Once we are over the hurdle of delivering control data from the controller back in our hub to a device in the venue, there is a simple operational question: how quickly does the operator need the device to react to commands to remain viable? The answer will be driven by production requirements and must be considered on a device-by-device basis. Control data requires significantly less bandwidth than video, so GPIO, tally and control data will potentially reach the device ahead of, for example, a video or audio confidence monitoring stream, or indeed comms. During the planning phase, as the potential hub<>venue system latency becomes a known quantity, decisions can be made about where the operator for each type of device needs to be.
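
That decision can be framed as a simple budget comparison, sketched below. All the numbers (network latency, return video latency and per-device reaction budgets) are illustrative assumptions; in practice they come from measurement and from production requirements:

```python
ONE_WAY_MS = 35          # hypothetical hub -> venue control-path latency
RETURN_VIDEO_MS = 60     # hypothetical encoded confidence-monitor return path

# How quickly must each device visibly react for the operator? (illustrative)
reaction_budget_ms = {
    "ptz_beauty_shot": 150,     # slow framing moves tolerate more delay
    "rail_cam": 80,
    "robotic_fast_follow": 40,  # tracking live play is far less forgiving
}

for device, budget in reaction_budget_ms.items():
    # The operator judges the move via the return video feed, so the
    # perceived loop is roughly command latency + return video latency.
    perceived = ONE_WAY_MS + RETURN_VIDEO_MS
    verdict = "operate from hub" if perceived <= budget else "operate on site"
    print(f"{device:>20}: ~{perceived} ms loop vs {budget} ms budget -> {verdict}")
```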

Audio

Most sports production relies on microphones in fixed positions. Using soccer as an example, there are typically 12 pitch side mics, with stadium ambiance captured by as many as 8 more. In American football, pitch side parabolic mics are used to capture the action at the line of scrimmage, and most matches will have wireless mics on the roving pitch side Steadicams. In all cases the connection of mics to stageboxes is traditional stuff. Any camera audio feeds are packaged with the video feed.

There is a requirement for an on-site audio engineer to rig, line check and do confidence monitoring, so it is highly likely that a local touch-screen based console in a rack, with its own processor, will be used. Such an approach would then typically provide remote control over the entire on site mix engine (including mic gain) from back at the hub using a console control surface. Having the audio processing engine on site reduces latency for monitoring.

With soccer, automated mix systems are common: video analysis is used to follow the ball around the pitch, opening and closing channels and setting levels for the nearest pitch side mics. This would be operated back at the hub from the console control surface, sometimes using a second audio processing engine. Operators tend to be able to learn to live with managed latency.
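
A minimal sketch of that logic, assuming mic coordinates and a simple distance-based gain law (both purely illustrative, standing in for whatever a real automated mix product does), might look like this:

```python
import math

# 12 pitch side mics around a 105 x 68 m pitch (positions are illustrative)
MICS = [(x, y) for y in (-2, 70) for x in (10, 30, 50, 60, 80, 95)]

def mic_gains(ball_xy, floor_db=-40.0, rolloff_db_per_m=1.0):
    """Return a per-mic gain in dB that favours mics nearest the
    (video-tracked) ball position."""
    gains = []
    for mx, my in MICS:
        d = math.hypot(ball_xy[0] - mx, ball_xy[1] - my)
        gains.append(max(floor_db, -rolloff_db_per_m * d))
    return gains

# Ball tracked at (52, 34): nearby mics open up, distant ones fall to the floor
for mic, gain in zip(MICS, mic_gains((52, 34))):
    print(f"mic at {mic}: {gain:6.1f} dB")
```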

Some Things Never Change

One thing has not changed; the success of any large-scale production depends on careful consideration, advanced planning, meticulous preparation and a lot of testing. What works for you will be determined by a careful examination of your own very specific requirements.
