IP requires new skills, new workflows.
Playout automation has been enabling fewer people to control more channels for decades, but we’re not yet at the point where human interaction can be eliminated altogether. Since most linear broadcasters will either move their channels to a software-based deployment themselves or hand them to a service provider that carries out that transformation for them, The Broadcast Bridge assesses the benefits and the challenges of doing so. Part II examines the crucial role of IP and the workflows and skillsets needed to operate such infrastructure.
The major premise of software-defined operations is to consign proprietary, hard-to-interoperate equipment and siloed workflows to history.
IP, and especially uncompressed IP, is the stepping stone. That said, the vast majority of playout infrastructures are still SDI, and a baseband solution is inherently incapable of being software-only.
“Key to the transition will be the widespread adoption of open standards which enable interoperability between different vendors’ solutions in the IP environment,” says Daniel Robinson, Head of R&D, Pebble Beach.
Initiatives from AMWA (the Advanced Media Workflow Association), which is developing NMOS (Networked Media Open Specifications), are helping to drive this forward.
James Gilbert, Pixel Power, agrees: “We need standards, and ST 2110 only touches the surface of what is really needed to have software-defined, best-of-breed systems in which you can connect video between different vendors’ products. What is not standardised is the control layer, and that needs a significant amount of work, although AMWA and NMOS are working in the right direction.
“Inevitably there will be certain pieces of the puzzle which will stay with the vendor since it will be impractical and inefficient to open it up completely.”
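The NMOS specifications AMWA is developing expose discovery and registration as plain REST/JSON resources, which is part of why they suit multi-vendor control. As a minimal sketch (the receiver IDs and labels below are invented; the resource shape follows the style of the IS-04 Query API, which in a live system would be fetched from a registry endpoint such as `GET /x-nmos/query/v1.3/receivers`):

```python
import json

# Sample IS-04-style receiver resources, of the kind a Query API might
# return. IDs and labels here are invented purely for illustration.
receivers_json = """
[
  {"id": "recv-video-1", "label": "Playout server 1 video in",
   "format": "urn:x-nmos:format:video", "transport": "urn:x-nmos:transport:rtp"},
  {"id": "recv-audio-1", "label": "Playout server 1 audio in",
   "format": "urn:x-nmos:format:audio", "transport": "urn:x-nmos:transport:rtp"}
]
"""

def video_receivers(resources):
    """Filter receiver resources down to those advertising a video format."""
    return [r for r in resources if r["format"] == "urn:x-nmos:format:video"]

receivers = json.loads(receivers_json)
for r in video_receivers(receivers):
    print(r["id"], r["label"])
```

The point of the exercise: a control layer built on open, self-describing resources like these can be inspected and scripted with commodity tooling, with no vendor SDK in the loop.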
There are many benefits to be gained from deploying a channel in the cloud. For service providers, MCOs, sports broadcasters, and corporates, virtualised playout can deliver an affordable option to deploy or contract IP-based channels instantly without the burden of racks of complicated hardware, and weeks or months of setup and provisioning.
But questions remain about the economic, logistical and technical benefits to the end user, and judging by the high volume of on-premises playout solutions that Pebble Beach install and commission – whether IP or baseband – it’s clear that this path is not one that every broadcaster or media company is ready to follow.
“Adding a virtualised infrastructure adds an extra layer of complexity and specific new requirements into the mix,” explains Robinson. “Don’t underestimate the level of in-house expertise you will need access to in order to implement a full-scale virtualised platform. Make no mistake, you will need to understand every nut and bolt of your virtual environment. In the more traditional set-up you will own the playout device and the vendor will take full responsibility for how that device performs, what benchmarks it complies with etc.
“However, with a virtualised solution, the vendor is simply the software provider, meaning that you, or your nominated representative, have responsibility for the overall performance of the virtualised platform and networks.”
Robinson adds that buying a bare metal box, a certain amount of RAM and a number of CPU cores will give you a reasonably predictable performance under given circumstances, but when you put your application on to a hypervisor, you are adding a whole new layer of software between you and the hardware which has a potentially huge number of ‘tweakables’.
“Don’t forget to check that your chosen hypervisor supports the disk drives and storage you want to use with your COTS hardware. If you need to change your hypervisor, will your hardware still be supported?”
Failure scenarios and failover contingencies need to be considered. Who or what will be switching IP streams? If your VM fails, you may lose the transport stream altogether. Can your downstream distribution deal with no stream at all? Where are your IP streams going? Can you test them?
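Answering “can your downstream deal with no stream at all?” presupposes something notices the loss in the first place. As a hedged, minimal sketch (addresses, port and timeout are placeholders, not a recommended design), a watchdog can declare a UDP transport stream lost when no packets arrive within a window:

```python
import socket

def stream_alive(bind_addr="127.0.0.1", port=5004, timeout=2.0):
    """Return True if at least one UDP packet arrives within `timeout` seconds.

    A VM failure upstream typically shows up here as silence rather than
    an error, so absence of packets is the signal to act on.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((bind_addr, port))
        sock.settimeout(timeout)
        try:
            sock.recv(2048)  # any packet at all counts as "alive"
            return True
        except socket.timeout:
            return False
```

A production monitor would go much further – RTP sequence-number continuity, PID-level health, redundant-path comparison – but even this crude check is the kind of tooling an IP playout team has to own, where an SDI chain would have given a hard, visible loss of signal.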
Shift in skillsets and workflows
Moving to any new workflow, whether it is an all-IP or an all-software architecture, does require new training and/or skill sets. Gilbert finds that operators are usually on board with any change in working practice.
“We don’t encounter resistance to that, and part of the secret is involving them in the procurement process so they can look at alternative solutions and give feedback.
“That’s not to say that the lines between traditional broadcast and IT aren’t blurring. In future there will be no distinction. In the past you needed technicians who understood how to hook up video signals and monitor Tektronix scopes. Now it’s DevOps and agile scrum development to orchestrate solutions from different manufacturers. They need to understand the language of REST APIs as well as how to operate a scope.”
In the MCR, broadcast engineering competence will shift towards Python and C-family languages, high-bit-rate media transport technologies (SMPTE ST 2110 / ST 2022), TCP/IP networking, and containerisation.
Such skills may simply be part and parcel of the incoming workforce. “Software defined technology will help attract young generations into the industry,” Gilbert says.
Aside from client training, Evertz have also tried to eliminate the learning curve by creating UIs and feature sets that feel the same as those operators are used to.
“For example, when it comes to routing in an MCR environment, Evertz MAGNUM and VUE provide the same source, destination, take familiarity, even if in the background they are routing feeds up and down from the public cloud,” says Martin Whittaker, Technical Product Director, Evertz.
It is hard to be specific about what workflows might emerge, but we can say that workflows are no longer set in stone; they evolve constantly and iteratively. We are now in a world where we are continuously tweaking and refining operational practice and the customer experience.
“Leveraging faster release cycles from the vendors we partner with as well as carrying out a significant amount of development in-house allows us to solve small issues with quick feature releases or small applications,” says Richard Cranefield, Head of Product for Playout Services, Red Bee Media.
It’s important to note that you won’t just need to measure the behaviour of the playout software application; you also need to monitor the behaviour of the entire infrastructure. Simply verifying that video and audio are playing does not give you the full picture. The range of available monitoring options in an SDI environment usually far exceeds those available in the IP domain. Diagnostics can be harder for IP too, so you’ll need to investigate what tools are at your disposal, as well as staff who are able to interpret the results.
“Operational monitoring is also critically important, especially in public cloud scenarios,” Robinson warns. “As well as monitoring latencies and considering how and where your operators will monitor the playout, you need to consider any control latencies that will need to be added. Playout automation may need to send out control commands taking into account the monitoring latency for the user.”
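The compensation Robinson describes boils down to issuing commands early by the sum of the latencies in the path. A toy sketch, with entirely illustrative numbers rather than real measurements:

```python
def command_send_time(on_air_time_s, control_latency_s, monitoring_latency_s):
    """When automation should issue a command so the event lands on time.

    If the operator's confidence monitor is already `monitoring_latency_s`
    behind the true output (common with cloud return feeds), and the control
    path itself takes `control_latency_s` to act, both must be subtracted
    from the intended on-air time.
    """
    return on_air_time_s - control_latency_s - monitoring_latency_s

# A take intended for t = 100 s, with 0.2 s control latency and a
# 1.5 s cloud monitoring delay, must be issued at roughly t = 98.3 s.
print(command_send_time(100.0, 0.2, 1.5))
```

The arithmetic is trivial; the operational point is that these offsets exist at all in cloud playout, and someone has to measure and maintain them.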
The transport streams a playout infrastructure generates will pass through the enterprise network switches and so can overload the network bandwidth, potentially impacting on-air performance. That’s despite the fact that the playout software application may be running on a completely separate network.
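The scale of that load is easy to underestimate. As a back-of-envelope check (active-video payload only, ignoring the few percent of RTP/UDP/IP header overhead a real ST 2110-20 stream adds), a single uncompressed 1080p50 10-bit 4:2:2 feed works out to roughly 2 Gb/s:

```python
def st2110_20_payload_gbps(width, height, fps,
                           bits_per_sample=10, samples_per_pixel=2):
    """Approximate uncompressed active-video payload rate in Gb/s.

    4:2:2 sampling averages two samples per pixel (luma plus alternating
    Cb/Cr), so 10-bit 4:2:2 costs ~20 bits per pixel before any packet
    overhead is counted.
    """
    bits_per_pixel = bits_per_sample * samples_per_pixel
    return width * height * fps * bits_per_pixel / 1e9

print(st2110_20_payload_gbps(1920, 1080, 50))  # ≈ 2.07 Gb/s per stream
```

A handful of such streams, plus monitoring returns, will saturate a 10 GbE link that an enterprise network team may have sized for office traffic, which is exactly why the playout and corporate networks need to be planned together even when they are logically separate.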
Robinson’s advice is to check that the playout software vendor will give you access to the raw data that shows how the application is really performing on the virtualised platform. Among the multitude of parameters that can be measured, the sleep/wake time of processors of certain hypervisors may not be good enough for real time playout, he says. Latencies and behaviour will vary depending on the hypervisor you test.
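Robinson’s point about processor sleep/wake behaviour can be checked empirically rather than taken from a datasheet. A minimal sketch (the iteration count and 1 ms request are illustrative, not broadcast-grade acceptance criteria) that measures how far the platform overshoots a requested sleep:

```python
import time

def sleep_overshoot_ms(requested_ms=1.0, iterations=200):
    """Measure the worst-case overshoot of time.sleep() in milliseconds.

    On lightly loaded bare metal the overshoot is typically small; under a
    busy hypervisor the scheduler may wake the process late. For context,
    one frame at 50 fps is 20 ms, so consistent multi-millisecond jitter
    matters for frame-accurate playout.
    """
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        worst = max(worst, elapsed_ms - requested_ms)
    return worst

print(f"worst overshoot: {sleep_overshoot_ms():.3f} ms")
```

Running a probe like this on each candidate hypervisor, under realistic load, is one concrete way to gather the raw performance data Robinson says you should demand access to.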
The principal barriers to achieving a transition to truly virtualised playout and MCR operation are time and expertise, according to Red Bee Media, which has gone through the process. Its platform is entirely software-based, including multi-viewers, alarms and monitoring.
“We believe that you have to emulate all of your legacy appliances in software and not just some of them,” reports Cranefield. “To do so has meant integrating emerging technologies from multiple vendors and taking on responsibility for the performance of the hardware that those technologies now run on. In taking software from broadcast vendors and then running it on our own cloud, we’ve taken on responsibility for the fabric that everything runs on. The cloud infrastructure is now the part of the system that would have been the PCB in an old-world appliance. Who is accountable for a ‘device’ not working is now much more blurred, so we have had to upskill our engineering and network teams to deal with a greater level of responsibility in keeping the platform working, or understanding why parts of it are not.”
The R&D project that got Red Bee to this position lasted two years and was deliberately not pegged to any customer projects until they’d nailed it.
“Many broadcasters who want to undertake software transformation for themselves will still have to do this work, but they may struggle to dedicate the time and cost of developing the intellectual property required to get it right and can only leverage that investment over their own channels,” he says.