More AI Acronyms & Buzzwords Explained

So many buzzwords & acronyms and so little time! I thought this was a better title than Understanding the Differences Part 2.

In my first article, Understanding the Differences, I took a broad-brush look at the high-level basics of AI, Machine Learning, Deep Learning and Cognitive Computing.

Now that we will be hearing a lot about Qubits (Quantum Computing), I thought it worth digging a little deeper into what is considered or discussed as Artificial Intelligence AND what is not. To be clear, Quantum Computing has NOTHING to do with artificial intelligence other than providing faster processing power that ultimately helps the algorithms handle heavy data loads.

There is Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), Artificial Neural Network (ANN) and Convolutional Neural Network (CNN) - not to be confused with CNN/Turner.

I will leave Artificial General Intelligence for last, as it's currently the unattainable or "real" AI.

Artificial Narrow Intelligence (ANI) - is the most commonly referenced AI and is trained to perform a specific task or understand a specific area of knowledge (e.g., oncology, or a game such as Jeopardy). Each instance of ANI is dedicated and is unable to learn a different task or area of knowledge on its own. ANI is designed to constantly seek patterns in data, learn from experience and then select an appropriate response for a query or operation.

Machine Learning (ML) - is based on algorithms that parse data, learn from that data and can make informed decisions based on what they have learned. Basic machine learning functions improve progressively, but still rely on human guidance. When an ML algorithm returns an inaccurate prediction, an engineer needs to step in and make adjustments. Machine learning involves a lot of programming and training data to support operational functions performed by camera operators, archivists, editors, master control technicians and broadcast engineers.
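The "learn from data, make a prediction, adjust" loop described above can be sketched in a few lines of plain Python. The data points and learning rate here are invented purely for illustration; a real system would use far more data and a library such as scikit-learn.

```python
# Minimal machine-learning sketch: fit y ≈ w * x to example data by
# gradient descent, nudging the parameter after every wrong prediction.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output)

w = 0.0  # model parameter, starts as a blind guess
for _ in range(500):              # each pass "learns from experience"
    for x, y in data:
        error = w * x - y         # how wrong is the current prediction?
        w -= 0.01 * error * x     # adjust the parameter to shrink the error

print(round(w, 1))                # learned weight, close to 2.0
```

The key point the paragraph makes still holds: the loop only improves at this one task, and a human chose the model, the data and the learning rate.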

Deep Learning (DL) - the machine's algorithms can determine independently whether a prediction is accurate. DL structures its algorithms in layers to create an Artificial Neural Network (ANN) that can learn and make intelligent decisions on its own. ANN design attempts to model the biological neural network of the human brain. Unlike human-coded ML, DL machines continually learn to master complex and abstract aspects of a technical or creative process.
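The "algorithms in layers" idea can be sketched as a toy forward pass: inputs feed a hidden layer of neurons, whose outputs feed the next layer. The weights below are arbitrary illustrative numbers, not trained values.

```python
import math

def sigmoid(z):
    # Squashes a neuron's weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of all its inputs plus a bias,
    # passed through a nonlinearity.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                        # input signals
hidden = layer(x, [[0.8, -0.2], [0.4, 0.9]], [0.0, 0.1])  # hidden layer
output = layer(hidden, [[1.2, -0.7]], [0.05])             # output layer
print(output)  # a single activation between 0 and 1
```

Training a deep network means automatically adjusting all of those weights from data (backpropagation), rather than a human hand-coding the rules.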

Deep learning algorithms continually analyze data, applying a logic structure similar to how a human would draw conclusions. Based on a constant flow of human queries, DL agents can generate accurate predictions nearly instantaneously. Machine learning algorithms, by contrast, perform a function with the data given to them and improve progressively.

Cognitive Computing - is taking machine & deep learning to a higher plane. While machine learning is based on "trial and error", it does not apply "reasoning" to the data it creates. Cognitive computing applies a level of reasoning to the data and errors processed from machine learning. It looks for patterns in data and then compares those patterns to other patterns in a form of reasoning. Cognitive processes use existing knowledge and generate new knowledge.

Artificial General Intelligence (AGI) - is the “holy grail” of AI. It is the ability of a computer system to mimic human intuition, deductive reasoning and contextual understanding. We are still quite a way from that capability.

There are a few other terms like neural network or Artificial Neural Network (ANN), which is not the same as Convolutional Neural Network (CNN). ANN is a computer environment mimicking the way our brain's synapses interact with each other as we process information and multi-task. CNN applies neural networking to how visual information/data is processed.
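The core operation that makes a CNN suited to visual data is the convolution: sliding a small filter (kernel) over a grid of pixel values to detect a local pattern. A minimal sketch, using a made-up 4×4 "image" and a hand-picked edge-detecting kernel (a real CNN learns its kernels from data):

```python
# A tiny vertical-edge detector: the kernel responds strongly
# wherever dark pixels (0) meet bright pixels (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):        # slide the kernel down...
        row = []
        for j in range(len(img[0]) - kw + 1): # ...and across the image
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))
# [[0, 2, 0], [0, 2, 0], [0, 2, 0]] — peaks where the 0→1 edge sits
```

Stacking many such filters in layers lets a CNN progress from detecting edges to shapes to whole objects.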

An algorithm is a step-by-step computational procedure that may be used to write AI code; however, not all algorithms are AI.

When discussing AI, you will hear the word Bayesian used quite a bit in referring to which AI technique is being used. Bayes' theorem is named after Reverend Thomas Bayes (1701–1761), who first used conditional probability to create an algorithm to calculate limits on an unknown parameter. The work was published posthumously in 1763. Pretty cool.

Another AI technique is the inference engine. Typically, the inference engine is part of an expert system coupled with a knowledge base; it applies logical rules to the knowledge base to deduce new information.
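A minimal forward-chaining sketch of that loop, with invented facts and rules: the engine keeps applying rules to the knowledge base until no new facts can be deduced.

```python
# Knowledge base: facts we start with, plus if/then rules.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "lays_eggs"}, "builds_nest"),
]

changed = True
while changed:                    # repeat until nothing new is deduced
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # deduce new information
            changed = True

print(sorted(facts))
# ['builds_nest', 'has_feathers', 'is_bird', 'lays_eggs']
```

Note the chaining: the second rule only fires after the first rule has added "is_bird" to the knowledge base.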

Then there is Knowledge Representation and Reasoning. Knowledge representation incorporates concepts about how humans solve problems and encodes that knowledge in algorithms, so that complex systems become easier to design and build. Reasoning applies logic to automate various kinds of inference, such as the application of rules and relationships to data sets.

These are the most common terms used when discussing artificial intelligence as the science it is. Asking Alexa or Google to play a song is not artificial intelligence. It’s clever speech to text and then a search engine to find it. So many of the products and services claiming to use artificial intelligence are not. Automating processes is not AI. Having your refrigerator order groceries is not AI!
