Huge data, Huge Challenge, Small Pool Of Engineers

The fundamental premise of technology is that it gives us leverage. A tool gives us more power, more effectiveness, more precision, more time. But few tools or technologies do this for us without some effort on our part. Just buying a lathe doesn’t make you a good woodworker. Buying an aeroplane doesn’t make you a pilot.

And as military procurement occasionally proves, even if there are pilots aplenty, buying the wrong planes for them can be as ineffective as leaving them on the ground in jumpsuits and deckchairs.

Worse in fact, because an inconceivable amount of money has been wasted into the bargain.

Half the battle is in getting the technology right. Simply getting kitted up is not a solution in itself; unless the real issues have been identified, unless the nature of the problem is fully understood, it’s going to be impossible to find the right solution. And with digital media services, the nature of the problem is pretty difficult to grasp in its entirety. It takes a special set of skills and experience to evaluate, plan and implement infrastructure that spans RF, broadcast standards, IP and all the delivery formats. A similarly broad, diverse – and rare – set of skills is needed to plan how to monitor that infrastructure.

It’s human nature when faced with a complicated task to do what appears to be the right job, based on familiar patterns. ‘Nobody ever got fired for buying IBM’, it used to be said. Safe, conventional thinking is always popular because there are lots of safe, conventional thinkers around. You can tick the right boxes and be seen to have done your job, but those boxes represent the standard fallback template, not an up-to-the-minute, fully informed and comprehensive view of the situation as it is now, and as it is developing. Like the generals who invest in defensive fortifications along the wrong border, or set the gun emplacements facing the enemy from the last war, a conventional thinker is likely to get it wrong.

One of the critical factors in planning digital media services and a monitoring strategy for them, is that the size of the beast is always changing. More channels, more services, more resolution, more devices. It’s as if the engineer is an ancient Greek hero: every time he hacks off the monster’s head, two more grow in its place.

So the nature of the challenge has to be understood clearly and profoundly in order to understand how to find a solution to it. Only then can a good decision be made about the technical installation; the choice of tools or systems. But the system architecture has to be right for the job, and it has to be installed optimally to avoid diminishing its performance. Then people have to know how to use it, and this might be the most difficult part of the puzzle. Along with the ever-expanding channel count, there’s increasing competition for engineers who are sufficiently knowledgeable – and a finite pool of them. The fact is that the more rarefied and sophisticated a technology becomes, the higher the demands on the people who work with it. Paradoxically, to deliver a satisfactory experience to consumers, their end of the technology has to demand as little as possible of them.

And there’s the key to the conundrum. In order to make complex consumer-targeted technology work, a great deal of effort goes into analysing the human experience of technology and shielding the user from the complexity. Without this, consumers wouldn’t be able to operate the products they buy, and they’d stop buying. Did any manufacturer of video tape recorders ever design an easy-to-use interface? It didn’t really matter, because although it was annoying to wrangle with a recalcitrant VTR’s controls, the machine was basically pretty simple. But that’s not the case for the technology of today.

It’s not the case for the consumer, and it’s not the case for the service provider either. You simply can’t afford to have technology that is capable of getting the job done, but which is too difficult to operate – even if the operators are engineers. This point is critical now to the quality of experience the consumer receives: if the engineers aren’t getting the best out of the quality assurance systems, then it doesn’t matter a jot if the technology is theoretically capable of delivering the best.

Manufacturers in our industry have to consider the engineers who will operate digital media services, and think of them more as consumers than as nerdy experts heads down in a darkened room, ploughing through dense technical data that only the rare few can make head or tail of. What would a typical consumer do in that situation? Well, the digital media service engineer is a kind of consumer too, and he or she deserves better. In fact ‘better’ is the only way to give engineers the ability to guarantee good service quality to the consumer.

What does this mean in practice? What leverage can monitoring technology deliver if it’s designed with ‘consumer-grade’ levels of usability? One of the benefits is simply a better ratio between the number of engineers employed and the number of services monitored. Another is faster resolution of problems when they occur, because the technology sifts and presents the critical information in a form that’s very easy to understand – in some cases making intelligent, knowledge-based choices in that presentation to help the engineer reach the right decision quickly. Still another is the ability to assist the engineer with in-depth strategic analysis, for proactive identification and preemption of problems before they become a service-threatening reality.

These are the types of benefits Bridge Technologies designs into its monitoring systems, because it’s the only way to help service providers stay on top of the many-headed monster. After all, we make a sophisticated miniature analysis probe that is so simple to set up and get working, even the subscriber can do it. Why should the tools made for engineers be any different?
