Production & Post Global Viewpoint – August 2019

How AI Is Your AI?

I recently attended the Artificial Intelligence Summer School at the University of Surrey in the UK to hear some of the world’s leading experts in this field speak. As well as providing deeply technical insight into AI and its processes, the sessions raised some fascinating philosophical questions, such as: who is responsible when a driverless car has an accident? The manufacturer? The software programmer? Or maybe the car itself?

That leads us to another interesting question: who, or what, is responsible for an AI’s decisions? As I understand it, the whole point of AI is that it is not built around the structured, logical decisions found in conventional software; instead, the machine derives its own decisions from extremely large data sets and its ability to learn from them.
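To make that distinction concrete, here is a minimal sketch in Python using scikit-learn, contrasting a hand-written rule with a model that derives its own decision boundary from data. The braking scenario, feature names and numbers are all hypothetical, invented purely for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Conventional software: the programmer encodes the decision explicitly.
def rule_based_brake(speed_mph, obstacle_distance_m):
    # Hypothetical rule: brake if the obstacle is inside a two-second gap
    # (speed_mph * 0.9 approximates two seconds of travel in metres).
    return obstacle_distance_m < speed_mph * 0.9

# Machine learning: the decision is derived from example data instead.
# Each row is (speed_mph, obstacle_distance_m); each label is 1 = brake.
training_features = [[30, 10], [30, 60], [70, 40], [70, 150], [50, 20], [50, 120]]
training_labels   = [1,        0,        1,        0,         1,        0]

model = DecisionTreeClassifier().fit(training_features, training_labels)
print(model.predict([[60, 35]]))  # the model, not the programmer, decides
```

In the first function, responsibility for the decision clearly sits with whoever wrote the rule; in the second, the decision emerges from the training data, which is exactly why the question of accountability becomes so slippery.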

I believe the massive, diverse data sets required are, in human terms, analogous to the entirety of human experience. In this context, it is possible for computers to hold more knowledge than a human, as advances in high-performance computing have demonstrated an incredible capacity to process raw data at breathtaking speed.

But there needs to be a way of teaching the machine the correct outcomes in the first place. Somebody, somewhere, needs to teach it, just as a parent teaches a child. And this is where some people believe AI will overtake humanity, as they hypothesize that an AI can teach its descendants to be more capable than itself.
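In machine-learning terms, that teaching is supervised learning: a human supplies labelled examples, and the machine adjusts itself until its answers match them. A minimal sketch of the parent-and-child loop, using a classic perceptron update on hypothetical data:

```python
# Supervised learning in miniature: the labels are the "parent" telling the
# "child" the correct outcome for each example.
# Hypothetical task: learn whether a number is positive (label 1) or not (0).
examples = [(-3, 0), (-1, 0), (2, 1), (5, 1), (-4, 0), (7, 1)]

weight, bias, learning_rate = 0.0, 0.0, 0.1
for _ in range(20):                      # repeat the lesson several times
    for x, correct in examples:
        guess = 1 if weight * x + bias > 0 else 0
        error = correct - guess          # the teacher corrects the pupil
        weight += learning_rate * error * x
        bias   += learning_rate * error

print(weight, bias)  # the learned decision, never written out as a rule
```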

I see many broadcast vendors claiming to have AI solutions, but I often wonder how many have actually drawn together vast, independent datasets to find meaningful correlations between them. I have witnessed some companies achieving this, so I know it is possible and is being done.
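As an illustration of what that correlation work looks like in practice, here is a minimal sketch using pandas; the datasets, column names and figures are all invented for the example:

```python
import pandas as pd

# Two independent datasets joined on a shared key, so we can look for a
# correlation between fields that never lived in the same system.
playout_logs = pd.DataFrame({
    "programme_id": [1, 2, 3, 4, 5],
    "encoder_bitrate_mbps": [8.0, 6.5, 9.2, 5.8, 7.4],
})
audience_data = pd.DataFrame({
    "programme_id": [1, 2, 3, 4, 5],
    "complaint_calls": [2, 11, 1, 15, 6],
})

merged = playout_logs.merge(audience_data, on="programme_id")
print(merged["encoder_bitrate_mbps"].corr(merged["complaint_calls"]))
```

Real deployments obviously work at vastly greater scale, but the principle is the same: the value comes from joining datasets that were never designed to meet.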

During the summer school, there was the suggestion that driverless cars have the potential to be much safer than human-operated cars. They communicate and agree amongst themselves which car should perform which maneuver and when, on a purely logical basis, without emotion getting in the way. No road rage with driverless AI cars! Or so I would like to believe.

When discussing AI solutions, it might be prudent to ask some searching questions about large-dataset correlation and whether the decision-making has moved away from traditional logical programming in favor of neural networks.

So, is broadcasting going in the true AI direction? Will decisions, from software-defined networks to predictions of viewing habits, all be provided by AI machines? Even a basic understanding of the history of science demonstrates that many innovations are the product of human error. So how do we program a computer to make a mistake? And is this the answer? From my experience of programming, it’s often hard enough to get the software working in the first place, never mind programming it to make a mistake in such a way that AI will create the next Love Island.
