Live Sports Production: Broadcast Controllers & Orchestration In Live Sports Systems

As production infrastructure, processing resources and the underlying networks become ever more complex, powerful tools are required to plan, deploy and monitor them.

Here we move on to discuss the need for, and relationships between, Broadcast Controllers, Network Orchestration and Monitoring in IP infrastructures. We begin with an overview of why such systems are needed and why it seems that all of our contributors were motivated to build their own bespoke systems.

Rainer Kampe, CTO at Broadcast Solutions: “We need to define what we mean by orchestration as an industry. Some people define orchestration as orchestrating the network, taking care of the bandwidth and taking care of the streams in the network. That’s one part. That is relevant where you have blocking networks or you want to do scheduling, etc. The Broadcast Controller is the human interface for the operator for routing, getting information, labels, tally, I’m on air and so on. But the boundaries between Network Orchestration and Broadcast Controllers are getting a bit blurry. On top of this, orchestration doesn’t help you if you do not have the monitoring of what you’re doing and what is going on in your network and where to find faults. This is the part of orchestration which has nothing to do with orchestrating the network but with orchestrating your workflow.”

“The driver for building our own system was that seven years ago there were three or four Broadcast Controllers on the market that were monolithic software things with a legacy user interface. Our idea was that we needed to change, firstly, the user interface, because the user should have the same experience he has with his personal device, with his phone, with his tablet. We completely implemented NMOS and on top we built our own NMOS registry, which is available on a redundant cluster service and can be used by whoever wants it, with his own broadcast controller or with ours. In our development, if you want to add most devices you hit the plus button, like on your iPhone, and it will show up. This is why we called our broadcast controller ‘Hi’, which stands for Human Interface. Another driver was scalability and wanting underlying software built with modern IT methodologies, not monolithic exe files. The biggest driver though was usability, simply making life easier for us when commissioning and life easier for the customer.”
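
The registration and discovery behaviour described here is standardized in AMWA NMOS IS-04. As a purely illustrative sketch, assuming a hypothetical registry address, this is roughly what “hit the plus button and it will show up” implies under the hood: the controller asks the registry which nodes and devices it currently knows about.

```python
# Illustrative only: queries an AMWA NMOS IS-04 Query API for registered resources.
# The registry URL and API version are assumptions for this example.
import requests

QUERY_API = "http://registry.example.net/x-nmos/query/v1.3"

def list_registered_devices():
    """Return (device label, node label) pairs for everything the registry knows about."""
    nodes = requests.get(f"{QUERY_API}/nodes", timeout=5).json()
    devices = requests.get(f"{QUERY_API}/devices", timeout=5).json()
    node_labels = {n["id"]: n["label"] for n in nodes}
    return [(d["label"], node_labels.get(d["node_id"], "unknown node")) for d in devices]

if __name__ == "__main__":
    for device_label, node_label in list_registered_devices():
        print(f"{device_label:30s} on {node_label}")
```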

Dan Turk, Chief Technology Officer, NEP Americas: “We built our own bespoke, overarching SDN network control, broadcast controller and monitoring system just for our facilities. It’s called TFC and it really was built by engineers for engineers. As the IP transition was happening, we couldn’t really find a system that we loved that checked all the boxes. There were some that were great network controllers, and some that were great broadcast controllers, but you had to cobble together pieces and ended up doing some firmware in the middle to control things. Because of the size and scale of NEP as a global company, we were able to build something for us, but it is now becoming available as a standalone licensed service for broadcasters across the industry.”

“TFC manages all of our networks, all of the routing, all of the communicating to transmitters and receivers. Plus, it communicates to the switches; if I need to get a flow from one leaf to another leaf, it controls the route. It is a managed SDN. TFC also handles the monitoring and control which then goes into operator panels. The goal was to automate as much as possible. If it takes five steps to build a camera for the show, from building the source and naming it, getting it to the switcher, getting it to the replay, and the multiviewer, the tally and all the things to make all of that work, how can we redefine that to make that one step? When production teams want to change a camera from X to Y, it’s simple to do with TFC, and everything follows through from one system instead of four different systems.”
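
TFC’s internals are proprietary, but the “five steps become one” idea can be sketched generically. In the hypothetical outline below, every function name is a placeholder standing in for whatever interfaces a real broadcast controller exposes; the point is that one operator action fans out to the source naming, switcher, replay, multiviewer and tally steps listed above.

```python
# Hypothetical sketch only: none of these calls are TFC APIs. They stand in for
# whatever controller interfaces a real system would expose, to show one operator
# action fanning out to every subsystem a camera touches.
from dataclasses import dataclass

@dataclass
class CameraBuild:
    source_id: str        # sender/flow identifier on the network
    label: str            # operator-facing name, e.g. "CAM 4"
    switcher_input: int
    replay_channel: int
    multiviewer_tile: int

def build_camera(cam: CameraBuild, controller) -> None:
    """Collapse the five manual steps into a single operation."""
    controller.name_source(cam.source_id, cam.label)                     # 1. build and name the source
    controller.route(cam.source_id, f"SWR-IN-{cam.switcher_input}")      # 2. to the vision switcher
    controller.route(cam.source_id, f"EVS-IN-{cam.replay_channel}")      # 3. to replay
    controller.assign_multiviewer(cam.source_id, cam.multiviewer_tile)   # 4. to the multiviewer
    controller.map_tally(cam.label, cam.switcher_input)                  # 5. tally follows the label

class PrintingController:
    """Stand-in that just logs each step; a real controller would talk to devices."""
    def __getattr__(self, name):
        return lambda *args: print(f"{name}{args}")

if __name__ == "__main__":
    build_camera(CameraBuild("flow-0042", "CAM 4", 4, 2, 7), PrintingController())
```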

John Guntenaar, Chief Technology Officer, NEP Europe: “You need to treat IP infrastructure from a control perspective completely differently compared to baseband infrastructure, because you are controlling all of the individual devices instead of just controlling the SDI router, for instance. In developing TFC, we were looking for a way to implement IP to let it behave in a way that people are familiar with. With IP, there’s a lot of provisioning. Every device has its own IP address. Every device has its own configuration. You need to monitor them all individually and you need to control all of them individually. This is where TFC started. The reason for building our own control platform is that we’re able to configure, provision, monitor and control all of those devices. By using TFC we have been able to dramatically speed up the configuration and reconfiguration of our installations around all of our locations. We have standardized a lot of the configuration, so from TFC we can reconfigure a complete OB facility in minutes, where in the old days people would be working for days configuring all of the individual devices.”
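
The speed-up Guntenaar describes comes from standardized, template-driven configuration rather than box-by-box setup. A minimal sketch of that pattern, assuming for illustration that each device accepts its configuration over a simple HTTP endpoint (real devices use a mix of protocols and vendor APIs), might look like this:

```python
# Hypothetical sketch of template-driven provisioning: one standardized profile pushed
# to many devices, each with its own IP address and per-device parameters.
# The /config endpoint is an assumption; real devices expose many different interfaces.
import concurrent.futures
import requests

PROFILE = {"ptp_domain": 127, "payload": "ST 2110-20", "redundancy": "ST 2022-7"}

DEVICES = [
    {"ip": "10.10.1.21", "label": "CAM 1 gateway", "mcast": "239.10.1.21"},
    {"ip": "10.10.1.22", "label": "CAM 2 gateway", "mcast": "239.10.1.22"},
]

def provision(device: dict) -> str:
    """Merge the standard profile with this device's specifics and push it."""
    config = {**PROFILE, "label": device["label"], "multicast_address": device["mcast"]}
    r = requests.put(f"http://{device['ip']}/config", json=config, timeout=5)
    return f"{device['label']}: {r.status_code}"

if __name__ == "__main__":
    # Push the standard profile to every device in parallel rather than box by box.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for result in pool.map(provision, DEVICES):
            print(result)
```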

“One of the cornerstones of TFC is to make sure that we can manage IP installations at scale. If you have a big IP installation but you don’t have a monitoring system, then all bets are off, because the full domain gets so large that if something doesn’t work, you don’t easily know where to start. There are so many things that can happen. You need a lot of telemetry data, a lot of monitoring data in order to control a location and be able to support a location that is as big as one of our data centers, or a large OB.”

“On the network level, we monitor things like RTP flows. We of course monitor bandwidth up and down, for errors, etc., but we also monitor the end devices where we can get the information for dropped packets on either of the red & blue lanes. We can see if sessions lost their video, for instance, and we can see if cameras are still online. We can see if cameras are still transmitting, and all of those elements are built into that monitoring layer. It’s not that we’re just monitoring the network; it’s a combination of everything that draws the complete picture. It’s challenging, and there’s never an end in monitoring. You’re always optimizing and always finding things that happen that you didn’t see immediately in the monitoring, and then the next time you are prepared for it.”
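
The “red & blue lanes” refer to the two redundant network paths used for ST 2022-7 style protection. Below is a hypothetical sketch of the kind of end-device check described above, with an assumed telemetry structure since every vendor exposes these counters differently: loss on one leg is survivable but still worth an alarm, because a second fault would take the signal down.

```python
# Hypothetical sketch: compare packet loss on the red and blue (ST 2022-7) legs of a
# receiver and flag devices that lost video. The telemetry fields are assumptions.
from dataclasses import dataclass

@dataclass
class ReceiverTelemetry:
    name: str
    red_dropped: int      # packets lost on the primary (red) network leg
    blue_dropped: int     # packets lost on the secondary (blue) network leg
    video_locked: bool    # whether the receiver is currently decoding video

def assess(rx: ReceiverTelemetry) -> list[str]:
    """Return human-readable alarms for one receiver's telemetry snapshot."""
    alarms = []
    if not rx.video_locked:
        alarms.append(f"{rx.name}: no video")
    if rx.red_dropped and not rx.blue_dropped:
        alarms.append(f"{rx.name}: errors on red leg only ({rx.red_dropped} pkts)")
    if rx.blue_dropped and not rx.red_dropped:
        alarms.append(f"{rx.name}: errors on blue leg only ({rx.blue_dropped} pkts)")
    if rx.red_dropped and rx.blue_dropped:
        alarms.append(f"{rx.name}: errors on BOTH legs - output at risk")
    return alarms

if __name__ == "__main__":
    sample = ReceiverTelemetry("CAM 7 RX", red_dropped=12, blue_dropped=0, video_locked=True)
    for alarm in assess(sample):
        print(alarm)
```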

Broadcast Bridge: Raising the subject of orchestration opened up a very different conversation with Patrick Daly at Diversified, whose role involves complex IT orchestration for data center-model, software-defined infrastructure consisting of stacks of software, various types of COTS resource, and dedicated hardware.

Patrick Daly, VP Media Innovations at Diversified: “At Diversified we still, by and large, work with our vendor partners for enabling solutions for Broadcast Controllers and network orchestration. We’ll do a fair amount of custom configuration, and some rather advanced ways of using these software packages, but we’re not locked into any one partner solution. We’re very much looking at a client’s affiliation with the partner community. Do they have preferences? If not, then we do use case analysis, figure out what the requirements are and do some vendor scoring.”

“The Broadcast Controllers are becoming more advanced, but they are still, in my mind, low-level components. The higher-level orchestration layer is an abstraction that sits on top of the Broadcast Controller and starts to bring more operationally relevant context to that infrastructure. Conceptually the network orchestration layer sits below the Broadcast Controller. Where the Broadcast Controller declares sources and destinations, virtual re-entries, categories, label sets, the orchestration layer can begin to address groups of those things and states of those things as some operationally relevant concept. The orchestration layer becomes a tool to do the pre-show flight check, or punch a salvo of actions to configure and reconfigure the studio, the PCR and its associated infrastructure. A higher, more capable orchestration layer becomes a place to plan out your work.”
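
A hypothetical sketch of that idea, with placeholder route and check functions rather than any real product API: the orchestration layer names a group of controller resources as one operational concept, runs the flight check, then punches the salvo in a single action.

```python
# Hypothetical sketch of an orchestration layer addressing *groups* of broadcast
# controller resources as one operational concept: a named salvo of routes plus a
# pre-show flight check. All names and callables are placeholders.
SALVOS = {
    "studio_a_pregame": [
        ("CAM 1", "SWR IN 1"), ("CAM 2", "SWR IN 2"),
        ("GFX A", "SWR IN 9"), ("PGM OUT", "TX PATH 1"),
    ],
}

PREFLIGHT = ["ptp_locked", "multiviewer_online", "tally_responding"]

def fire_salvo(name: str, take_route, run_check) -> bool:
    """Run the flight check, then punch every route in the salvo as one action."""
    failures = [check for check in PREFLIGHT if not run_check(check)]
    if failures:
        print(f"Flight check failed: {failures}")
        return False
    for source, destination in SALVOS[name]:
        take_route(source, destination)
    return True

if __name__ == "__main__":
    fire_salvo("studio_a_pregame",
               take_route=lambda s, d: print(f"route {s} -> {d}"),
               run_check=lambda c: True)
```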

“Oftentimes when I’m deploying the most modern software architectures, I’m deploying those onto Kubernetes clusters. In those instances, I’m putting Linux directly onto the COTS server, so there’s really not a hypervisor as such. Linux has its own virtualization capabilities that Kubernetes is taking advantage of, but I still need an orchestrator layer on top of that to deploy software. Because I still live in a world where I have solutions providers building monolithic executables for Windows, my orchestrator also needs to accommodate that. It needs to handle everything from spinning up a Windows server or a Windows workstation and installing software on it, to deploying a Kubernetes cluster and deploying containers in a scalable way. It’s a pretty broad range of need from my client side. At Diversified we’ve built an orchestrator solution called Atlas for that purpose, where we can bundle up software, whether it’s containerized services or legacy monolithic software builds, and we can present those as operator friendly names. So rather than spin up a long list that’s a set of vendor applications, I can just spin up the PCR 1 assembly or spin up the 10 a.m. newscast, spin up tonight’s pregame show, and I can get the exact software I’m looking for, deployed exactly where I intend to deploy it, with all of the great scalability and fault tolerance and self-healing capabilities that you would expect from a SaaS environment.”
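
Atlas itself is Diversified’s own product, but the pattern described, mapping an operator-friendly name to a bundle of containerized services, can be sketched with the standard Kubernetes Python client. The assembly names, images and namespace below are illustrative assumptions only.

```python
# Hypothetical sketch: a name such as "pcr-1-assembly" maps to a bundle of containerized
# services deployed together via the Kubernetes Python client. Not Atlas; illustrative only.
from kubernetes import client, config

ASSEMBLIES = {
    "pcr-1-assembly": {
        "vision-mixer-ui": "registry.example.net/mixer-ui:stable",
        "multiviewer": "registry.example.net/multiviewer:stable",
        "graphics-engine": "registry.example.net/gfx:stable",
    },
}

def deploy_assembly(assembly: str, namespace: str = "production") -> None:
    """Spin up every containerized service behind one operator-facing assembly name."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    for name, image in ASSEMBLIES[assembly].items():
        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name=name, labels={"assembly": assembly}),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": name}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": name}),
                    spec=client.V1PodSpec(
                        containers=[client.V1Container(name=name, image=image)]),
                ),
            ),
        )
        apps.create_namespaced_deployment(namespace=namespace, body=deployment)

if __name__ == "__main__":
    deploy_assembly("pcr-1-assembly")
```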

“Looking at the relationship between the network orchestration layer and other elements of the ecosystem, the rise of some of the more advanced capabilities in the Cisco architecture, their non-blocking multicast algorithms and their interfaces into broadcast controllers, or Arista’s MCS which is their equivalent, lets the broadcast controller communicate with the network, declare that it’s going to establish a route, and then the network controller protects that path. I think that’s certainly a step forward from where we started in the early days of 2110. I think there’s probably a little way to go with some of the NMOS enablement to make for a more seamless and operator friendly type of a play with 2110, but I think where the most exciting part of all of this evolution comes in is when I can start applying machine intelligence and machine assist into the production cycle.”
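
One concrete piece of that NMOS enablement is AMWA IS-05 connection management, in which the broadcast controller stages a sender’s transport file on a receiver and activates it. Below is a minimal sketch with assumed device addresses and IDs; in practice the controller would first fetch the SDP from the sender’s IS-05 transportfile endpoint.

```python
# Minimal sketch of an AMWA NMOS IS-05 connection: the broadcast controller stages a
# sender's SDP on a receiver and activates it immediately. The receiver address, IDs
# and SDP text are assumptions; the endpoint structure follows the IS-05 specification.
import requests

RECEIVER_API = "http://10.10.2.31/x-nmos/connection/v1.1"

def connect(receiver_id: str, sender_id: str, sdp: str) -> int:
    """PATCH the receiver's staged endpoint with the sender's transport file."""
    body = {
        "sender_id": sender_id,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"data": sdp, "type": "application/sdp"},
    }
    response = requests.patch(
        f"{RECEIVER_API}/single/receivers/{receiver_id}/staged",
        json=body, timeout=5)
    return response.status_code
```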

