Orchestrating Resources For Large-Scale Events: Part 1 - Planning Is Everything
An examination of how to plan & schedule resources to create resilient temporary multi-site broadcast production systems.
Major sporting events have long been an important feature of many broadcasters’ schedules. Traditionally, outside broadcast vehicles would produce a single programme on-site at each venue, which would then be sent back to base via dedicated telecoms or satellite infrastructure for broadcast distribution. Increasingly though, remote production is becoming the norm: the video and audio feeds from multiple sources on-site are all delivered to a production hub which can be hundreds or thousands of miles away. This approach reduces the headcount needed at each site, with all the associated travel and accommodation costs that implies.
Detail Your Vision
The diverse range of technical options open to us now means that we’re no longer constrained simply by what’s possible. It’s essential to consider the available technologies not as an end in themselves, but as an array of tools to help you deliver a great viewer experience and, along the way, a great working experience for your teams. Costs will of course be a factor, but the process should begin with documenting a clear and detailed understanding of the production vision for each event and venue, who will need to be involved in delivering that to the defined standard, and what good and bad look like from their perspective in terms of the tools they’re given. You’ll also need to consider “non-functional” requirements such as security and resilience. As the project proceeds you’ll need to evolve the vision and make robust compromises and priority calls; the more understanding and engagement you have across the board, the better the outcomes you’ll get. It can be difficult, but try to first define the vision in non-technical terms - it should be focused on people and business outcomes rather than on technology and solutions. The why rather than the how.
Can We Use The Cloud?
It might be tempting to think that, in 2022, we can do all of this with the cloud. The reality is that, although the cloud has various roles to play, we’re not quite there in terms of facilitating remote live production for sporting events. The most critical issue is the consistency of latency between sources, which is currently difficult to guarantee in a cloud production environment. Audiences might tolerate various technical compromises on resolution and compression, but they’re unlikely to tolerate seeing key moments, such as goals, multiple times in quick succession because of a variance in source latency. This is an area under widespread active development though, so it’s a problem we can expect to be solved before too long.
Connectivity Basics
Newer technologies are starting to shorten implementation timescales, but planning for a major event normally needs to begin at least a year in advance. Where many broadcasters from different markets will be covering the same event, a lot of the on-site infrastructure is typically managed by a host broadcaster; in other cases you’ll need to engage directly with the venues. Either way, early dialogue here is essential to understand what’s possible and what the costs are likely to be. Don’t automatically assume you’ll pass your main video and audio feeds back to your hub in the same way for every venue. Usually an IP stream will be appropriate, but you might need to consider a more traditional method like satellite, or there may be a role for the internet or cloud.
Even if you choose native IP cameras, you’re probably going to need some encoding kit to compress the signals for distribution, and you may need to install a GNSS antenna to discipline a Precision Time Protocol (PTP) grandmaster clock. Then you’ll likely need a rack in a co-location facility somewhere to link the connections from the venues to some trunk connectivity back to your hub. The host broadcaster or venue will have recommendations here; they may well have existing connectivity available, and if they don’t they’ll be able to advise which local telcos have access to each site. Telco connectivity can take months to provision, so early planning is critical. Also vital is doing the sums to understand exactly how much data you’ll need to be sending concurrently. Give yourself some overhead here for flexibility - 20-30% if you can - as you really won’t want to be hitting a ceiling when that crucial sporting moment happens.
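To make those sums concrete, here’s a minimal sketch of the calculation in Python; the feed counts and bitrates are purely illustrative assumptions, not recommendations.

```python
import math

# Illustrative only: the feed counts and bitrates below are assumptions, not recommendations.
FEEDS = {
    "uhd_contribution_cameras": {"count": 8, "mbps_each": 250},
    "hd_iso_cameras":           {"count": 4, "mbps_each": 50},
    "talkback_and_control":     {"count": 1, "mbps_each": 10},
}

HEADROOM = 0.25  # the 20-30% overhead discussed above

def required_capacity_mbps(feeds: dict, headroom: float) -> int:
    """Sum the concurrent outbound bitrate and add headroom for flexibility."""
    total = sum(f["count"] * f["mbps_each"] for f in feeds.values())
    return math.ceil(total * (1 + headroom))

print(f"Order at least {required_capacity_mbps(FEEDS, HEADROOM)} Mb/s of trunk capacity per path")
```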
Back at your hub you’re going to need corresponding installations to receive your feeds, decode them as required, and present them to your production gallery. Whilst cloud and browser-based production software exists in 2022, it’s not yet up to the standards you’ll be working to for live coverage of a major event, so we’re talking about a traditional suite - though it could be IP-based, using something like SMPTE ST 2110, and it could use control surfaces linked back to remote hardware. This kind of approach can facilitate flexibility and resilience, but it does add technical complexity. As ever, these decisions should be made in line with your defined vision and desired business outcomes, not for the sake of doing something appealing with technology.
Redundancy
For all but the most budget-conscious productions, one set of kit and one connectivity path won’t be enough. You’ll want to consider what happens in the event of failures. This probably means duplicating the connectivity all the way back to your hub via a second telco and a second co-location site. Telcos should be able to advise on the actual routes of their fibres, which need to be as diverse as possible. The most common cause of physical connectivity failure is construction work along the fibre route; the value of diverse provision is somewhat diminished if the two telcos have actually run their fibres along the same or parallel ducts, particularly in urban areas where construction activity is likely!
If you’re sending the feeds around the world, public cloud providers and the internet are worth investigating as cost-effective backup connectivity routes. Often some kind of pay-as-you-go option is available: there’ll be some basic set-up costs, such as cross-connects with your co-location centres at each end, but you’ll only pay the bulk of the transit cost if it’s actually needed. As technology progresses in the coming years, it’s reasonable to expect we’ll see more feeds distributed in this kind of way.
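As a rough illustration of why this can be attractive, the sketch below compares a hypothetical committed backup circuit with pay-as-you-go transit that is only paid for during failure hours; all figures are placeholder assumptions, not quotes.

```python
# Placeholder assumptions throughout - these are not real prices or quotes.
COMMITTED_CIRCUIT_PER_MONTH = 12_000.0  # hypothetical fixed cost of a second dedicated circuit
CROSS_CONNECT_SETUP = 2 * 500.0         # one-off cross-connects at the co-location centres
TRANSIT_PER_GB = 0.05                   # hypothetical pay-as-you-go price per gigabyte

def payg_cost(hours_used: float, mbps: float) -> float:
    """Pay-as-you-go backup cost: fixed set-up plus transit only for the hours it carries traffic."""
    gigabytes = mbps / 8 / 1000 * 3600 * hours_used  # Mb/s sustained for the period, in GB
    return CROSS_CONNECT_SETUP + gigabytes * TRANSIT_PER_GB

# If the backup only carries traffic for a few failure hours across the whole event,
# the pay-as-you-go route is far cheaper than a second committed circuit.
print(f"Pay-as-you-go for 6 failure hours at 2 Gb/s: {payg_cost(6, 2000):.0f}")
print(f"Committed second circuit, per month:         {COMMITTED_CIRCUIT_PER_MONTH:.0f}")
```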
More Than Just The Action
Whether a host broadcaster is filming the action, or you’re putting in the kit to do so yourself, you’ll need to consider the other elements in your overall vision. Do you need to plan for a studio or a stand-up position at the venue? Will you have an on-site commentator? Will you be doing interviews with participants in the “mixed zone”? Are you expecting to provide off-site coverage at local landmarks or peripheral venues? How are you expecting to populate your on-screen graphics?
If you’ve defined these production aspirations in detail, you should be able to understand where latency and reliability may be less important, and thus where newer technologies such as 5G or cloud connectivity could be appropriate and cost-effective. Backpack-type solutions are available for mobile reporting, for example, and could connect over public 5G or 4G to your co-location centre or a public cloud deployment if needed, though of course it’s important to remember that public cellular bandwidth is likely to be variable. By understanding these compromises in terms of business outcomes, you can make confident decisions about risks and ensure everyone knows what the limitations are.
Production Logistics
If you have presenter talent at a venue you’ll need logistics like confidence feeds, talkback, and presenter earpieces. And of course you’ll need make-up, green room, welfare facilities, basic IT connectivity and technical support. Modern talkback and confidence feeds can easily be carried over IP, and as most of your data is going to be travelling outbound from the venue to your hub, bandwidth in the return direction shouldn’t be a problem. You’ll need to consider what equipment you’re going to need and how and where it’ll be connected. Latency will likely be important for the audio communications but less so for video confidence feeds, so it’s worth considering them independently. Again, your early engagement with the teams involved and understanding of their needs and priorities will ensure the best outcomes.
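One way to keep those conversations concrete is to agree a simple latency budget per return service; the figures in the sketch below are illustrative assumptions only.

```python
# Illustrative latency budgets for services returning to the venue; all figures are assumptions.
RETURN_SERVICE_BUDGETS_MS = {
    "talkback_audio":        100,   # natural conversation breaks down much beyond this
    "presenter_ifb":         150,   # programme cues need to feel immediate
    "video_confidence_feed": 2000,  # a second or two of delay is usually tolerable
}

def within_budget(service: str, measured_ms: float) -> bool:
    """Check a measured round-trip delay against the agreed budget for that service."""
    return measured_ms <= RETURN_SERVICE_BUDGETS_MS[service]

print(within_budget("talkback_audio", 85))          # True
print(within_budget("video_confidence_feed", 1200)) # True
```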
Network Design And Security
Modern networking equipment allows us to operate a single physical network but segment it into multiple virtual networks using Virtual Routing and Forwarding (VRF) at the IP layer. This provides the means to segregate the traffic from a security and quality-of-service (QoS) perspective. For our event we probably want to create at least three VRFs - one for the main video feeds, another for control and logistics traffic including talkback and data for on-screen graphics, and a third for potentially “dirty” IT services such as email, telecoms and web browsing. This protects our video content from the risk of malware arriving via email, for example. We might also consider duplicating some of these so that we have software-layer network resilience to complement our physical network redundancy.
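A segmentation plan like this can be captured and sanity-checked long before any switches are configured; the sketch below uses illustrative VRF names, VLAN IDs and QoS classes that are assumptions rather than a recommended design.

```python
# Illustrative segmentation plan - VRF names, VLANs and DSCP classes are assumptions.
VRF_PLAN = [
    {"vrf": "MEDIA",   "purpose": "main video and audio contribution feeds",
     "vlans": [110, 111], "dscp": "EF",   "internet_access": False},
    {"vrf": "CONTROL", "purpose": "talkback, tally, graphics data, device control",
     "vlans": [120],      "dscp": "AF41", "internet_access": False},
    {"vrf": "OFFICE",  "purpose": "email, telephony and web browsing ('dirty' IT)",
     "vlans": [130],      "dscp": "BE",   "internet_access": True},
]

def validate(plan: list) -> None:
    """Basic sanity checks: no VLAN appears in two VRFs, and the media VRF stays isolated."""
    seen_vlans = set()
    for vrf in plan:
        if vrf["vrf"] == "MEDIA":
            assert not vrf["internet_access"], "media VRF must not reach the internet"
        for vlan in vrf["vlans"]:
            assert vlan not in seen_vlans, f"VLAN {vlan} assigned to more than one VRF"
            seen_vlans.add(vlan)

validate(VRF_PLAN)
```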
You should expect to perform some kind of security threat modelling or risk assessment against your overall architecture; for this, you’ll need a detailed understanding of exactly what data is flowing where using what protocols, and how access is governed and secured.
Highlights And Archive
Alongside the live production, there’s usually a requirement to store the material and produce highlights packages. This is where public or private cloud options can now come to the fore: no longer do we need expensive on-site edit suites - we can choose either fully-featured remote-controlled desktop editing stations or various browser-based Software-as-a-Service (SaaS) editing solutions.
There are some challenges though. Public cloud is great for scalability - you can have capacity on tap - but you’ll need to figure out how your video data is going to get there, how finished assets will be delivered to where you need them, what the data transit costs might be, and whether additional data transfers might cause delays. You’ll also need some cloud operations and security expertise to ensure everything works well and provides the best user experience for your teams. Private cloud deployments can be easier to get data in and out of, but you have to provision for your peak capacity requirement and commit to expensive upfront purchasing, installation and ongoing maintenance. Use your defined vision and business outcomes to decide what’s best for your organisation, taking into account your existing infrastructure and expertise.
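A quick back-of-the-envelope check can flag transfer delays and egress costs early; the bitrates, durations and prices below are illustrative assumptions.

```python
# Back-of-the-envelope sums for moving material to and from the public cloud.
# Bitrates, durations and prices are illustrative assumptions, not quotes.
MEZZANINE_MBPS = 100      # assumed contribution bitrate of the material being stored
EVENT_HOURS_PER_DAY = 10  # hours of material captured each day
UPLINK_MBPS = 500         # assumed spare uplink capacity towards the cloud
EGRESS_PER_GB = 0.08      # hypothetical price for getting finished assets back out
FINISHED_FRACTION = 0.05  # assume ~5% of the material comes back out as finished edits

content_gb = MEZZANINE_MBPS / 8 / 1000 * 3600 * EVENT_HOURS_PER_DAY
upload_hours = content_gb * 8 * 1000 / 3600 / UPLINK_MBPS
egress_cost = content_gb * FINISHED_FRACTION * EGRESS_PER_GB

print(f"{content_gb:.0f} GB of material per day")
print(f"~{upload_hours:.1f} hours to upload each day's material")
print(f"~{egress_cost:.2f} per day in egress charges for finished assets")
```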
When it comes to archive storage of your material, public cloud is fast becoming the best option. Whichever provider you choose, long-term storage of data can be very cheap, flexible and site-resilient. It’s best to ensure that video is processed into a standard technical format for archive, considering frame rates, resolutions and audio track layouts as well as codecs and containers. Metadata should be stored alongside the video in flat files; JSON is currently the format of choice for this, and allows for basic database-type operations and other integrations using tools within the cloud provider’s platform.
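For example, a simple sidecar file per asset might look like the sketch below; the field names and values are illustrative assumptions rather than a defined schema.

```python
import json
from pathlib import Path

# Illustrative metadata sidecar written alongside each archived asset.
# Field names and values are assumptions, not a defined schema.
metadata = {
    "asset_id": "venue01-20220705-cam03-0001",
    "event": "Example Championships 2022",
    "venue": "Venue 01",
    "video": {"codec": "XAVC-I", "container": "MXF",
              "resolution": "1920x1080", "frame_rate": "50p"},
    "audio_layout": ["stereo_mix", "commentary_en", "effects_5_1"],
    "rights_expiry": "2032-07-05",
    "keywords": ["final", "medal ceremony"],
}

essence_path = Path("venue01-20220705-cam03-0001.mxf")
sidecar_path = essence_path.with_suffix(".json")
sidecar_path.write_text(json.dumps(metadata, indent=2))
# Flat JSON files like this can later be indexed by the cloud provider's tools
# for search and basic database-style queries.
```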
Going Live
Anything that can go wrong will go wrong, so an extensive testing phase is going to be required. Access to venues early enough to do all of this in-situ is unlikely, so anticipate comprehensive configuration and testing of your venue kit at another location well in advance - potentially months before you ship it from your base. At the other end, time will be short, so your on-site technicians will need detailed checklists to ensure that the equipment has arrived undamaged and is installed correctly. If you can link these checklist items right back to your detailed vision and business outcomes, you’ll know exactly what’s impacted if anything does go wrong, and be able to make quick decisions about what to do or where to compromise.
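One lightweight way to make that link explicit is to record each check alongside the outcome it protects, as in the sketch below; the check names and outcomes are illustrative.

```python
# A minimal sketch of rigging checks tied back to defined business outcomes,
# so the impact of any on-site failure is immediately clear. All entries are illustrative.
CHECKLIST = [
    {"check": "PTP grandmaster has GNSS lock",
     "outcome": "All venue feeds stay frame-accurately aligned at the hub"},
    {"check": "Primary and backup encoders are on separate power feeds",
     "outcome": "Main coverage survives a single equipment failure"},
    {"check": "Talkback works from the commentary position to the gallery",
     "outcome": "Commentators can follow director cues throughout the broadcast"},
]

def report_failure(check_name: str) -> None:
    """Show which business outcome is at risk when a check fails on site."""
    for entry in CHECKLIST:
        if entry["check"] == check_name:
            print(f"FAILED: {check_name}")
            print(f"  Outcome at risk: {entry['outcome']}")

report_failure("Talkback works from the commentary position to the gallery")
```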
Don’t forget to schedule tests of your redundant paths and technical resilience as well - a useful approach can be to involve production and operations teams in “gaming” various resilience scenarios, so they know what to expect when the worst happens and can react accordingly.
Focus On Value
From the initial planning phase right through to decommissioning, put your audience and your people at the centre of your decision making. As the project proceeds the teams will gain deeper technical understanding, which will necessarily inform an evolution of priorities. Equally, external factors will spawn new requirements that need robust assessment. Try to map every single piece of work back to a defined business outcome, and ensure that each outcome is properly understood by the team working to deliver it. The aim should be that teams are motivated by business objectives rather than technical aspirations or organisational tropes. This isn’t as straightforward as it might sound - there’ll be intricate dependency chains involved - but a clear understanding helps minimise wasted effort and deliver the most value.