Engineers and technologists are known for their problem-solving abilities. In fact, it’s a fundamental requirement. But of all the challenges our industry has faced this year and the solutions we’ve had to find, which technology dominates the broadcast landscape?
For me, remote operation is one of the most fascinating aspects of any system. Whether it’s an aircraft’s fly-by-wire system, a production switcher’s control panel, or cloud computing, they all demonstrate incredible innovation and design.
Not being able to touch a device brings on a whole new way of thinking. Instead of jumping in with a multimeter or oscilloscope, we suddenly find that hands-on diagnosis is impossible when the kit is geographically dispersed, and even more so with cloud computing.
IP has not only revolutionized broadcasting in terms of the transport stream, but has opened the doors to virtualized computing, both on-prem and in the public cloud. The bottleneck previously lay in the number of CPU cores and the amount of memory available, and in the speed with which we could transfer data from the Ethernet card to user memory. But with the new generation of GPUs, video and audio processing can be executed in real time. Furthermore, the availability of virtualized GPUs has made real-time processing in the public cloud a reality.
I must confess, I still worry about latency. But I’m of the opinion that we shouldn’t necessarily aim for the lowest latency possible, but instead for a latency that is realistic and predictable. And predictability is the strength of any system. If a solution has predictable outcomes, then it is reliable. And reliability is probably one of the most important traits of any broadcast system.
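To make that idea concrete, here is a minimal sketch of what "predictable rather than minimal" might look like in practice. The function, thresholds, and sample figures are all illustrative assumptions, not taken from any real system: it judges a latency path by how close its mean sits to a target and how small its jitter is, rather than by its best-case number.

```python
import statistics

def within_budget(samples_ms, target_ms, jitter_ms):
    """Judge predictability, not raw speed: the mean round-trip time
    must sit near the target, and the spread (jitter) must stay
    inside the allowed band. All parameters are illustrative."""
    mean = statistics.mean(samples_ms)
    spread = statistics.pstdev(samples_ms)
    return abs(mean - target_ms) <= jitter_ms and spread <= jitter_ms

# Hypothetical round-trip times in milliseconds: a steady 80 ms path
# is preferable to one that averages lower but occasionally spikes.
steady = [79.0, 80.5, 80.0, 81.0, 79.5]
spiky = [30.0, 35.0, 200.0, 32.0, 31.0]

print(within_budget(steady, target_ms=80, jitter_ms=5))  # True
print(within_budget(spiky, target_ms=40, jitter_ms=5))   # False
```

By this measure the slower but steady path passes and the occasionally fast but spiky one fails, which is exactly the trade the argument above favors.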
Cloud computing is the ultimate in remote operations. Not only can we not touch the devices processing our video and audio, but we also often have no idea where the processing is actually taking place. We may know to within the limits of a city, but we do not know where the servers reside or the storage is located. And this is where monitoring provides confidence and assurance.
Public cloud workflows seriously constrain our ability to monitor in the traditional sense. There are no multimeters or oscilloscopes in the cloud, so instead, we must provide our monitoring in software.
At the hardware level, we assume somebody else is looking after this for us. Even if something breaks, the nature of public cloud operations means we can spin up a service on another server very quickly. And this delete-and-create philosophy is key to reliability in the cloud. Combined with adequate monitoring, it gives us broadcast workflows that are reliable and predictable.
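The delete-and-create philosophy can be sketched as a simple reconciliation loop. Everything here is a placeholder assumption: the probe and the recreate step stand in for whatever health endpoint and cloud API a real deployment would use. The point is the shape of the logic, namely that a failed instance is replaced rather than repaired.

```python
def probe(service):
    """Hypothetical health probe; in practice this might hit an HTTP
    health endpoint or inspect a stream's packet statistics."""
    return service.get("healthy", False)

def recreate(service):
    """Delete-and-create: rather than repairing a failed instance,
    spin up a fresh one. A real version would call the cloud
    provider's API here; this stub just returns a healthy replacement."""
    return {"name": service["name"], "healthy": True}

def reconcile(services):
    """One monitoring pass: keep healthy services, replace the rest."""
    return [s if probe(s) else recreate(s) for s in services]

fleet = [{"name": "encoder-1", "healthy": True},
         {"name": "encoder-2", "healthy": False}]
fleet = reconcile(fleet)
print([s["healthy"] for s in fleet])  # [True, True]
```

Run on a schedule, a loop like this turns monitoring data directly into the recovery action, which is what makes the workflow predictable.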
To conclude, I believe the best technological progression of 2021 is the ability to process real-time video and audio in the public cloud, with remote monitoring a close second. Any engineer who likes to take the hood off to find out what’s going on inside suddenly finds themselves with the ultimate in screwdrivers and soldering irons: flexible software scripting languages. They are our future!