Practical Broadcast Storage - Part 3

Artificial Intelligence (AI) has made its mark on IT and is rapidly advancing into mainstream broadcasting. By employing AI methodologies, specifically machine learning, broadcasters can benefit greatly from the advances in IT infrastructure innovation and advanced storage designs.

In the previous articles, advanced IT storage systems were introduced and the benefits of each storage medium were investigated. Strategies for transferring files using AI were considered, along with a brief introduction to the differences between simple statistical analysis and predictive AI.

Context-aware data-sets are those that have a high degree of correlation between them for the AI system in question. Greater correlation leads to more accurate predictions of future system behavior and greater efficiency in dynamic systems.

Accurate Correlation is Paramount

For example, a data-set that records disc drive parity errors, combined with environmental measures such as temperature and humidity, is a far better predictor of disc drive failure than detecting parity errors alone.

It might be that disc drives suffer parity errors when the humidity or temperature increases, and once the environmental conditions are restored the parity errors subside. Although this may cause only a temporary delay to data retrieval, it could shorten the drive's life expectancy. A traditional monitoring system would detect the initial errors and log an alarm, but after a passage of time the log would be deleted or simply forgotten.
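The link between environmental readings and parity errors can be made concrete with a simple correlation measure. The sketch below is purely illustrative: the daily figures are invented, and a real system would work over far larger data-sets.

```python
# Minimal sketch: correlating daily parity-error counts with machine-room
# temperature. All readings below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical daily readings: parity errors rise as the room warms up.
temps_c = [21, 21, 22, 27, 29, 28, 22, 21]
parity_errors = [0, 0, 1, 4, 6, 5, 1, 0]

r = pearson(temps_c, parity_errors)
print(f"temperature/parity-error correlation: {r:.2f}")
```

A coefficient close to 1 suggests the two signals move together, which is exactly the kind of relationship a monitoring-only system, looking at each log in isolation, would never surface.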

Find the Patterns

In this scenario the data-sets are treated as two separate sources of information, and they become disconnected from the long-term event, that is, the failure of the drive. It's possible that other drives in the same rack didn't suffer any parity errors due to manufacturing tolerances. Therefore, any correlation between the failure of one drive and a brief rise in temperature or humidity many months before could easily be overlooked.

It is in scenarios like this that AI excels. AI algorithms constantly analyze data-sets and logs, looking for correlated patterns between them. The data might come from sources other than the disc drive, such as the environmental monitoring system or power logs, but combined they provide a powerful source of truth.
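Combining sources starts with aligning them on a common timeline. A minimal sketch of that join, with invented record layouts standing in for real monitoring feeds:

```python
# Minimal sketch of combining two unrelated log sources onto one timeline.
# The record layouts and values are invented; in practice each source
# would be a real monitoring feed with its own format.

drive_log = {9: 0, 10: 0, 11: 3, 12: 5, 13: 1}          # hour -> parity errors
env_log = {9: 21.0, 10: 21.5, 11: 27.0, 12: 28.5, 13: 22.0}  # hour -> temp (°C)

# Join on the shared timestamp so each observation carries both signals.
combined = [
    {"hour": h, "errors": drive_log[h], "temp_c": env_log[h]}
    for h in sorted(drive_log.keys() & env_log.keys())
]

# Flag hours where both signals deviate together: candidate patterns for
# a learning system to weigh against eventual drive failures.
suspect = [r for r in combined if r["errors"] > 2 and r["temp_c"] > 25.0]
print([r["hour"] for r in suspect])
```

Once the feeds share a timeline, any correlation measure, or a full learning model, can be applied across them.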

Optimize the Result

For AI to be truly successful it needs a known result to aim for, so it can automatically tune its algorithms to find increasingly accurate patterns and correlations in the data. In the case of a disc drive failure, the outcome is clear. Once a disc has failed, the algorithm can home in on the related data-sets and find the required patterns.

Diagram 1 – Reinforcement AI systems use a feedback loop to measure the outcome generated by the environment and provide a reward that enables the intelligent agent to adapt and learn.

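The feedback loop in Diagram 1 can be sketched in a few lines. This is a toy update rule, not a production reinforcement-learning algorithm, and all scores and outcomes are invented: an agent predicts "at risk" when an anomaly score exceeds a threshold, then nudges that threshold according to the reward from each observed outcome.

```python
# Toy sketch of a reward-driven feedback loop: the agent's "policy" is a
# single alert threshold, adjusted as real outcomes arrive. All data is
# invented for illustration.

def reward(predicted_risk, actually_failed):
    """+1 for a correct call, -1 for a wrong one."""
    return 1 if predicted_risk == actually_failed else -1

def run(observations, threshold=0.5, step=0.05):
    for score, failed in observations:
        predicted = score > threshold
        if reward(predicted, failed) < 0:
            # Missed failure -> lower the threshold; false alarm -> raise it.
            threshold += -step if failed else step
    return threshold

# (anomaly score, did the drive later fail?) pairs
history = [(0.6, False), (0.7, False), (0.8, True), (0.55, True), (0.9, True)]
print(f"tuned threshold: {run(history):.2f}")
```

The point of the loop is that the known outcome, the drive failing or not, is what tunes the detector, exactly as the diagram describes.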

Over time, more disc anomalies occur and the AI algorithms expand their database of information. Furthermore, a vendor may make its anonymized analytical data available to all users of similar products, further increasing the number of potentially correlated data-sets and giving them global scope. This helps prediction and pre-empts failure.

Share Analytical Data

In our disc drive example, another client may have witnessed a similar scenario which resulted in a disc drive from a specific batch failing prematurely. This information could be recorded in a data-set and made available to other users of that batch of drives. The anomaly wouldn't be enough to trigger the replacement of all drives in the batch, only those that had suffered similar environmental changes in temperature and humidity.

A fine balance must be achieved when enforcing preventative maintenance. The very act of replacing a component such as a disc drive or power supply increases risk and can have unintended consequences. In this example, it's not that the disc drive is necessarily faulty, just that in this situation its life expectancy may tend towards the lower end of its Mean Time Between Failure (MTBF) specification. The AI algorithms building data-sets throughout the world have shared information and can determine that drives with a similar environmental deviation will fail in three years instead of five. That is still within the warranty specification, so not all drives should be replaced.
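That replacement decision can be expressed as a simple rule. The sketch below is a deliberately crude model with invented figures (the three-versus-five-year prediction from the example): drives from the affected batch are flagged only when their recorded exposure history suggests a shortened life.

```python
# Toy model of the selective-replacement decision: thresholds and life
# figures are invented placeholders, not real reliability data.

def predicted_life_years(batch_flagged, exposure_events):
    """Exposure to the correlated environmental deviation shortens a
    flagged batch's expected life from five years to three."""
    if batch_flagged and exposure_events > 0:
        return 3
    return 5

def should_replace(batch_flagged, exposure_events, service_years):
    # Replace only drives whose predicted life has run out; swapping
    # healthy drives adds its own risk.
    return service_years >= predicted_life_years(batch_flagged, exposure_events)

print(should_replace(True, 2, 3.5))   # exposed drive past its 3-year prediction
print(should_replace(True, 0, 3.5))   # same batch, never exposed: leave in place
```

The asymmetry is the point: membership of the suspect batch alone does not trigger a swap, only membership combined with the correlated exposure history.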

Pre-Empt Failure

Predictive automation is used effectively to warn users when serviceable components need to be replaced or if an unexpected fault is developing in the hardware.

A further benefit is the simplification of complex storage systems. Traditionally, IT departments have relied on a small group of engineers who gain very specific, detailed knowledge of how complex systems work in order to fine-tune and optimize them. This is especially evident in storage due to the real-time interaction between the different storage mediums and the specialist long-form files used in broadcasting. Predictive AI can help a business reduce its reliance on such specialist knowledge.

Diagram 2 – AI systems receive information from a diverse set of sensors and data-sets, potentially from around the world, and run complex algorithms to find correlated patterns in real time. Often these patterns are not easily accessible to human analysis.


As well as predicting component life expectancy, the correlated data is a rich source of information for providing both automated and suggested configurations for optimization. Again, this occurs in real time, and the AI-driven optimization removes the dependency on local system knowledge. As the analytical information is anonymously shared around the globe, the accumulated specialized data leads to storage optimization far greater than any single expert or group of experts could ever hope to achieve.

More and More Data

AI is not restricted to obviously related data-sets; in some instances, the more varied the available data-sets, the more accurate the predictions become. This is of great benefit to hybrid cloud solutions. Analytical data gained from cloud systems can be used by the AI algorithms to optimize and simplify their interaction. Automated analysis can suggest when to use cloud systems and provide accurate costing information to help determine whether on-prem systems should be used instead.

In the extreme, any part of the infrastructure that creates monitoring logs or analytical data can be integrated into the AI algorithms to improve their predictions, optimization, and overall efficiency.

AI Wins

The key win with AI is that the algorithms are constantly learning and adapting to new operational scenarios in real time, many of which cannot be simulated in the development lab or tested during manufacture. It's unreasonable to expect vendors of storage systems to test for every possible environmental condition and event that could occur in the workplace. This is even more evident with complex storage systems due to the exceptionally high levels of interaction between the storage mediums.

Innovation in IT is continuing to benefit broadcasters looking to leverage complex systems and infrastructure. AI, using context-aware algorithms and data-sets, is leading the way in helping to simplify and optimize highly specialized IT storage, constantly balancing user experience, cost, and reliability.
