Listening to some pundits, we might conclude that AI is the answer to all our dreams; listening to others, we might think it is going to take over our jobs and render us all redundant. So, is AI friend or foe?
Just to make my position clear, I like machine learning (ML); in fact, I like it a lot. For me, the fundamental benefit of this branch of AI is its data-led learning approach. In traditional programming we have to consider and account for every possible if-else type decision, but in ML we use training data to teach neural networks that effectively replace the if-else statements.
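As a minimal sketch of that contrast, consider detecting a black video frame from its mean luma level. The example below is illustrative only: the threshold, the luma values, and the labels are all invented, and the "learner" is a brute-force threshold search standing in for a real neural network.

```python
# Hand-coded rule: an engineer picks the threshold explicitly.
def is_black_frame_rule(mean_luma):
    if mean_luma < 16:        # magic number chosen by a human
        return True
    else:
        return False

# Data-led alternative: learn the threshold from labelled examples.
def learn_threshold(samples):
    """samples: list of (mean_luma, is_black) pairs."""
    best_t, best_correct = 0, -1
    for t in range(256):      # try every possible 8-bit threshold
        correct = sum((luma < t) == label for luma, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Invented training data: mean luma levels with "black frame?" labels.
training = [(4, True), (10, True), (15, True),
            (40, False), (120, False), (235, False)]
t = learn_threshold(training)  # the learned rule replaces the if-else
```

The point is not the algorithm but the shift of responsibility: the decision boundary now comes from the training data, so the quality of that data determines the quality of the decisions.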
The data-led learning used to train ML engines is analogous to how the human brain operates. During the learning process, correct answers are rewarded, so that data the model has never seen before can be evaluated and predictions made based on the "knowledge" the model gained through learning.
The challenge for ML is that we need a lot of training data. The more data the better, with the caveat that the data must be accurate and well validated. Wrong data can easily produce misleading, contradictory, or simply wrong answers. Consequently, the new breed of data scientists spend a large amount of their time validating and pre-processing their data sets. The actual learning element is a fairly automated process, with the computer churning through the data sets to find the optimum neural network model.
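That pre-processing step can be sketched very simply. The record fields and rejection rules below are invented for illustration; real pipelines are far more elaborate, but the principle is the same: suspect samples never reach the learner.

```python
# Minimal pre-processing sketch: validate measurement records before
# they are used as ML training data. Field names are invented.
def clean_dataset(records):
    """Keep only records with a valid 8-bit luma level and a boolean
    label; drop exact duplicates so one bad capture can't dominate."""
    seen, cleaned = set(), []
    for rec in records:
        luma, label = rec.get("luma"), rec.get("is_black")
        if not isinstance(luma, (int, float)) or not 0 <= luma <= 255:
            continue                      # out of range or missing
        if not isinstance(label, bool):
            continue                      # unlabelled or mislabelled
        key = (luma, label)
        if key in seen:
            continue                      # duplicate sample
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"luma": 12, "is_black": True},
    {"luma": 12, "is_black": True},    # duplicate
    {"luma": 300, "is_black": False},  # impossible 8-bit level
    {"luma": 80, "is_black": "no"},    # bad label
    {"luma": 80, "is_black": False},
]
print(len(clean_dataset(raw)))  # only the 2 valid, unique records survive
```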
Likewise, broadcast engineers spend a lot of time evaluating and validating data, and in the more general sense we call this test and measurement. But every engineer knows that they should “assume nothing and question all”. A flashing video signal on the multiviewer or distorted audio on the MCR loudspeakers might be a real fault, or it might just be a problem with the monitoring feed, or in the extreme, the monitoring equipment itself.
With the advent of AI and ML, my real concern is that we risk forgetting our engineering principles and blindly accepting data as it is provided, especially as fewer engineers are being employed and those who remain find themselves under increasing stress and pressure. Furthermore, in the extreme, it's entirely possible that an ML system could inherit the confirmation bias of the person (or persons) who pre-processed the data for learning in the first place.
For me, it's not actually the concept of AI we should be concerned about, but the validity of the data used to train the ML networks in the first place. If it isn't checked at the outset, this has the potential to lead to all kinds of problems.
In the old days of VT, I remember some of the experienced engineers guarding their test tape like it was the Holy Grail of television, because to them it was their source of truth: a constant they could rely on when fixing often very challenging problems. Digital compression specialists do the same; they have a set of test files they know can stress any system to its limits.
As we move to AI, and specifically ML, I believe engineers should be building up their own stash of test files for audio, video, subtitles, and metadata to help them better evaluate these new systems in real-life applications. I'm sure vendors will provide their own test sets, but to independently evaluate and test systems there is nothing better than your own test material.
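A test stash is only a source of truth while it stays unaltered, so it is worth fingerprinting it. The sketch below is one possible approach, assuming nothing beyond the standard library; the file names are invented and the "files" are stand-in byte strings, where in practice you would hash real media files on disk.

```python
import hashlib

# Fingerprint a stash of reference test material so it can be
# re-verified before every evaluation session.
def fingerprint(stash):
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in stash.items()}

def verify(stash, manifest):
    """Return the names of files whose content no longer matches."""
    return [name for name, data in stash.items()
            if hashlib.sha256(data).hexdigest() != manifest.get(name)]

stash = {"bars_75.yuv": b"\x10" * 64, "sweep_1k.wav": b"\x00\x7f" * 32}
manifest = fingerprint(stash)             # record once, keep with the stash
stash["sweep_1k.wav"] = b"\x00\x00" * 32  # simulate silent corruption
print(verify(stash, manifest))            # → ['sweep_1k.wav']
```

In the spirit of "assume nothing and question all", this turns the stash itself into something that can be independently validated rather than simply trusted.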
It’s fair to say that it’s better to identify the strengths and weaknesses of critical infrastructure components before they are installed in live transmission chains, especially when we can’t use a scope probe to determine any meaningful test data.
So, is AI friend or foe? That's up to you. I believe those who understand the importance of unbiased data sets in ML, and who can independently validate and test them, will excel in their profession.