Test, QC & Monitoring Global Viewpoint – October 2019

Differentiating AI Vendors

As each week goes by, more vendors are telling the broadcast industry that they have an AI solution. Apart from trying to work through the marketing hype and understanding what AI and machine learning (ML) really are, how do AI/ML vendors truly differentiate themselves from each other?

ML does not use the traditional sequential, logical programming found in conventional computer software, where the developer must know every possible outcome of every possible input and then code for it explicitly. Instead, in ML, training algorithms use diverse datasets to teach the neural network (the brain at the heart of an ML system) to form its own automated decision-making pathways so it can respond to specific input stimuli.
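To make that contrast concrete, here is a minimal Python sketch; the loudness threshold, file-free toy data and the use of scikit-learn's DecisionTreeClassifier as a stand-in for any trainable model are illustrative assumptions, not a real broadcast QC implementation:

    from sklearn.tree import DecisionTreeClassifier  # stand-in for any trainable model

    # Traditional programming: the developer codes the decision explicitly.
    def is_too_loud(level_db):
        return level_db > -23  # hand-written threshold, one rule per known case

    # Machine learning: the decision is learned from example data instead.
    examples = [[-40.0], [-30.0], [-24.0], [-10.0], [-5.0]]  # toy loudness readings
    labels = [0, 0, 0, 1, 1]                                 # 0 = acceptable, 1 = too loud

    model = DecisionTreeClassifier().fit(examples, labels)
    print(is_too_loud(-8))          # rule-based answer
    print(model.predict([[-8.0]]))  # data-driven answer learned from the examples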

During the training process, a feedback system based on outcome rewards adjusts the various weights and biases within the neural network until the desired outcome is achieved. This is analogous to a parent teaching a child as they grow: various rewards are offered to the child as they progress through life and learn new skills, in the hope that they will make the right decisions.
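A minimal sketch of that feedback loop is shown below, assuming a single artificial neuron with one weight and one bias learning a toy relationship; the target function, learning rate and epoch count are arbitrary choices for illustration:

    import random

    # One weight and one bias, started at random, as in an untrained network.
    weight, bias = random.random(), random.random()
    learning_rate = 0.01

    # Toy training dataset: the desired outcome is simply "double the input, add one".
    training_data = [(x, 2 * x + 1) for x in range(-10, 11)]

    for epoch in range(1000):
        for x, target in training_data:
            prediction = weight * x + bias
            error = target - prediction          # feedback: distance from the desired outcome
            weight += learning_rate * error * x  # nudge the weight toward a better result
            bias += learning_rate * error        # nudge the bias the same way

    print(f"learned weight = {weight:.2f}, bias = {bias:.2f}")  # converges towards 2 and 1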

One measure of an AI system's performance is the quality of the datasets used to teach it. The more diverse the datasets, the better the system will learn, and the more data available to the system, the better its accuracy of prediction.

I believe one of the key criteria for judging the performance of an ML system is the uniqueness and diversity of the datasets used to train it. Again, this is analogous to teaching a child: the more experiences the parent exposes the child to, the more informed the child becomes.

This is where life becomes interesting.

Companies such as Google and IBM, to mention but a few, have publicly available datasets that can be downloaded and used to train machine learning systems. If, hypothetically speaking, two vendors were to use the same datasets, is it correct to assume that the outcomes of their machine learning systems would be the same, or at least very similar? It’s fair to say that the implementations of their neural-network brains would probably differ, but essentially both systems would have the same a priori knowledge.

This line of enquiry soon disappears down a philosophical rabbit hole, and having experienced this myself I can see why many fear AI may take over the world. But it also gives us a clue as to how we might differentiate between vendors going forward. If “vendor A” has access to greater uniqueness and diversity in its datasets than “vendor B”, then this could lead to a more accurate machine learning system from “vendor A”.

As I study AI systems more, and specifically ML, I can see many opportunities for this new technology in the broadcast domain. As we continue to digitize and archive historic tape footage, ML systems will improve in their analysis of the video and audio to aid tagging and logging. Voice-to-text systems will become more accurate, not only at deciphering spoken words but also at differentiating speech from background sounds, providing better subtitles.
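As an illustration of the tagging idea, the sketch below runs a pretrained image classifier over a single frame extracted from digitised footage; the file name, the choice of model and the ImageNet label set are placeholders for illustration rather than a real archive workflow:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Pretrained ImageNet classifier used as a stand-in for an archive-tagging model.
    model = models.resnet50(pretrained=True)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    frame = Image.open("archive_frame.jpg").convert("RGB")  # hypothetical still from digitised tape
    with torch.no_grad():
        scores = model(preprocess(frame).unsqueeze(0)).softmax(dim=1)

    top = torch.topk(scores, k=5)                      # five most likely ImageNet classes
    print(top.indices.tolist(), top.values.tolist())   # candidate tags for logging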

I’m sure vendors and innovators the world over are having similar thoughts and are busily developing machine learning systems to improve the viewer experience and broadcast facility efficiencies. However, I believe the accuracy of their offerings will have more to do with the diversity and uniqueness of their datasets than anything else. When evaluating the ML systems of the future, maybe you could enquire about the diversity and uniqueness of their training data?