Improving Compression Efficiency With AI

A group of international technology vendors and broadcasters is developing and implementing Artificial Intelligence (AI) standards to improve video coding. Calling itself MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence), the group believes that machine learning can improve the efficiency of the existing Enhanced Video Coding standard by about 25 percent.

The collective efforts of MPAI’s experts are focused on a horizontal hybrid approach that combines AI-based algorithms with traditional video codecs by replacing one or more blocks of the traditional coding loop with machine learning-based blocks.
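
To make the hybrid idea concrete, here is a minimal sketch (in Python, with invented block sizes and logic, not MPAI or EVC reference code) of a coding loop in which the intra-prediction stage can be swapped between a traditional rule-based implementation and a machine-learning stand-in while the rest of the loop stays unchanged.

```python
# Illustrative hybrid coding loop (not MPAI/EVC reference code): the
# intra-prediction stage is a pluggable function, so a machine-learning
# replacement can be dropped in without touching the rest of the loop.
from typing import Callable
import numpy as np

# A prediction stage maps the reconstructed neighbouring samples to a
# predicted 8x8 block.
PredictionStage = Callable[[np.ndarray], np.ndarray]

def dc_intra_prediction(neighbours: np.ndarray) -> np.ndarray:
    """Traditional block: predict a flat block from the mean of the neighbours."""
    return np.full((8, 8), neighbours.mean())

def learned_intra_prediction(neighbours: np.ndarray) -> np.ndarray:
    """ML-based block: stand-in for a trained network's inference call."""
    return np.full((8, 8), np.median(neighbours))  # a real module would run a model here

def encode_block(block: np.ndarray, neighbours: np.ndarray,
                 predictor: PredictionStage) -> np.ndarray:
    """Predict the block and return the residual the codec would go on to code."""
    residual = block - predictor(neighbours)
    return residual  # in a real loop: transform, quantise, entropy-code

# The same loop runs with either stage plugged in.
block = np.random.randint(0, 256, (8, 8)).astype(float)
neighbours = np.random.randint(0, 256, 16).astype(float)
for stage in (dc_intra_prediction, learned_intra_prediction):
    print(stage.__name__, "mean residual:",
          np.abs(encode_block(block, neighbours, stage)).mean())
```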

The latest activities of the non-profit global group, which was officially launched in September 2020 and counts members from 15 countries, including broadcasters such as Italy’s RAI, were discussed at length during the recent HPA Tech Retreat online conference. Its stated goals include leveraging AI as a core technology for its standards and developing patent-friendly framework licenses to help its members monetize their intellectual property.

MPAI defines data coding as the transformation of data from a given representation to an equivalent one more suited to a specific application. Examples they cite are compression and semantics extraction. To date it has identified an AI module (AIM) and its interfaces as the AI building block. They say the syntax and semantics of the interfaces determine what an AIM should perform, not how. AIMs can be implemented in hardware or software, with AI or machine learning, or with legacy data processing.
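
One way to picture an AIM is as an interface that fixes the inputs and outputs, the “what”, while leaving the implementation, the “how”, open. The sketch below is an assumption for illustration; the class names and data format are not MPAI-defined syntax.

```python
# Hypothetical AIM interface: the interface pins down what goes in and out;
# whether the implementation uses AI, machine learning or legacy data
# processing stays hidden behind it.
from abc import ABC, abstractmethod
import numpy as np

class AIModule(ABC):
    """An AI Module (AIM): declared inputs and outputs, opaque implementation."""

    @abstractmethod
    def process(self, frame: np.ndarray) -> np.ndarray:
        """Consume data in a standard format and emit data in a standard format."""

class LegacyDenoiser(AIModule):
    def process(self, frame: np.ndarray) -> np.ndarray:
        # Classic data processing: a 3x3 box blur implemented with slicing.
        padded = np.pad(frame, 1, mode="edge")
        rows, cols = frame.shape
        return sum(padded[i:i + rows, j:j + cols]
                   for i in range(3) for j in range(3)) / 9.0

class NeuralDenoiser(AIModule):
    def process(self, frame: np.ndarray) -> np.ndarray:
        # Stand-in for a trained network; same interface, different internals.
        return frame

# A workflow only needs the interface, not the implementation.
frame = np.random.rand(16, 16)
for module in (LegacyDenoiser(), NeuralDenoiser()):
    print(type(module).__name__, module.process(frame).shape)
```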

The group is looking at many facets of utilizing AI and will establish a licensing model for its application.

“MPAI’s AI framework, which enables the creation, execution, composition and update of AIM-based workflows, is the cornerstone of MPAI standardization because it enables building high-complexity AI solutions by interconnecting multi-vendor AIMs trained to specific tasks, operating in the standard AI framework and exchanging data in standard formats,” said Mikhail Tsinberg, president and CEO of Key Digital Systems and a founding member of MPAI.
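
Building on the same illustrative assumptions, a workflow in the AI framework can be pictured as modules from different vendors chained together through a shared data format. Everything in the sketch below, including the vendor modules, is hypothetical.

```python
# Hypothetical AIM workflow: multi-vendor modules chained in sequence,
# exchanging data in an agreed standard format (here, plain numpy arrays).
from typing import Callable, Sequence
import numpy as np

AIM = Callable[[np.ndarray], np.ndarray]  # an AIM seen purely through its interface

def run_workflow(modules: Sequence[AIM], data: np.ndarray) -> np.ndarray:
    """Execute a workflow: each AIM consumes and emits the standard format."""
    for module in modules:
        data = module(data)
    return data

# Two stand-in AIMs that could come from different vendors.
def vendor_a_upscaler(frame: np.ndarray) -> np.ndarray:
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)  # 2x nearest-neighbour upscale

def vendor_b_contrast(frame: np.ndarray) -> np.ndarray:
    return np.clip((frame - frame.mean()) * 1.2 + frame.mean(), 0.0, 1.0)  # crude contrast boost

result = run_workflow([vendor_a_upscaler, vendor_b_contrast], np.random.rand(8, 8))
print(result.shape)  # -> (16, 16)
```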

The group’s current projects include: MPAI-AIF, for the creation and execution of AI/ML/DP workflows; MPAI-EVC, extending video codec capability with AI; MPAI-MMC, conversing with machines as naturally as with humans; MPAI-CAE, audio at home, in the office, on the go and in the studio; and MPAI-OSD, visual object and scene analysis using AI.

Focused on coding efficiency, the goal is to provide up to 25 percent additional bitrate reduction at the same resolution and quality over the existing EVC (MPEG-5) codec. Testing is now ongoing with different content types to build a dataset that can be used to improve performance in a highly automated way. Content the group is currently working with includes natural video (camera-captured content), moving film (film-captured content), computer-generated graphics and video gaming.
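
Gains of this kind are conventionally quantified with the Bjøntegaard delta rate (BD-rate), which compares two codecs’ rate-distortion curves at equal quality. The sketch below shows that standard calculation with made-up numbers; it is not MPAI’s evaluation procedure.

```python
# Sketch of the Bjøntegaard delta-rate (BD-rate) calculation commonly used
# to report "X% bitrate reduction at the same quality". The rate/PSNR points
# are invented for illustration; this is not MPAI's test methodology.
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test) -> float:
    """Average bitrate difference (%) of the test codec vs. the reference."""
    log_ref, log_test = np.log10(rate_ref), np.log10(rate_test)
    # Fit log-rate as a cubic polynomial of quality for each codec.
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    # Integrate both fits over the overlapping quality range.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Made-up rate (kbps) / PSNR (dB) points: the "test" codec needs ~25% less rate.
rate_ref  = [1000, 2000, 4000, 8000]
psnr_ref  = [34.0, 36.5, 39.0, 41.5]
rate_test = [750, 1500, 3000, 6000]
psnr_test = [34.0, 36.5, 39.0, 41.5]
print(f"BD-rate: {bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):+.1f}%")  # approx. -25%
```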

According to Tsinberg, MPAI’s initial work has involved lossy and visually lossless compression (mathematically lossless coding is not currently being considered). Specific parameters include resolutions up to 8K; rectangular video with a wide range of aspect ratios, including video banners and vertical video; SDR and HDR; standard and wide color gamut; 4:2:0, 4:2:2 and 4:4:4 color formats (the initial focus is on YUV-based coding); and frame rates of up to 120 Hz.
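
As a rough illustration only, those parameter ranges could be captured in an encoder or test configuration along the following lines; the field names, defaults and checks are assumptions, not MPAI syntax.

```python
# Hypothetical encoder/test configuration reflecting the parameter ranges
# mentioned above; everything here is illustrative.
from dataclasses import dataclass

@dataclass
class EvcAiConfig:
    width: int = 3840
    height: int = 2160
    frame_rate: float = 60.0
    chroma_format: str = "4:2:0"   # initial focus is YUV-based coding
    dynamic_range: str = "SDR"     # SDR or HDR
    wide_color_gamut: bool = False

    def validate(self) -> None:
        assert max(self.width, self.height) <= 7680, "resolutions up to 8K"
        assert self.frame_rate <= 120.0, "frame rates up to 120 Hz"
        assert self.chroma_format in {"4:2:0", "4:2:2", "4:4:4"}
        assert self.dynamic_range in {"SDR", "HDR"}

cfg = EvcAiConfig(width=1080, height=1920, frame_rate=50.0)  # vertical video is allowed
cfg.validate()
print(cfg)
```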

They are also looking at IP-based protocols (including HTTP Live Streaming), MPEG-2 Transport Streams, MPEG Media Transport and others. Using CBR, VBR and capped VBR, they hope to deploy systems at three end-to-end delay levels: high (greater than 100 milliseconds, up to offline encoding), low (between 30 and 100 milliseconds) and very low (less than 30 milliseconds, i.e. less than one picture frame).
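
The delay thresholds above translate directly into code; the small helper below is just an illustration using the figures quoted in the article.

```python
# Classify an end-to-end delay into the three levels described above.
def delay_class(end_to_end_delay_ms: float) -> str:
    """Map an end-to-end delay in milliseconds to the MPAI-EVC delay levels."""
    if end_to_end_delay_ms < 30:
        return "very low"   # less than roughly one picture frame
    if end_to_end_delay_ms < 100:
        return "low"
    return "high"           # greater than 100 ms, up to offline encoding

for ms in (10, 50, 500):
    print(ms, "ms ->", delay_class(ms))
```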

These AI-accelerated systems could be located on-premises or in the cloud. Backward compatibility with existing systems and scalability are also being taken into account.

“The MPAI Enhanced Video Coding mission is to exploit advances in AI to develop video coding standards that improve coding efficiency,” Tsinberg said. “The rationale is to use AI tools that are able to distill aspects of the data semantics relevant to video compression. These will be highly adaptive systems. Our focus currently is on Intra Prediction and we have already built a dataset for training.”
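
MPAI has not published its model, but as a purely hypothetical picture of what an AI-based intra-prediction block might look like, a small network could take the reconstructed pixels above and to the left of a block and predict the block’s samples, with training data of the kind Tsinberg describes used to minimize the residual the codec must transmit. The architecture and sizes below are assumptions for illustration.

```python
# Hypothetical neural intra-prediction module: predicts an 8x8 block from a
# strip of reconstructed neighbouring pixels. Architecture and sizes are
# illustrative assumptions, not MPAI's actual design.
import torch
import torch.nn as nn

class IntraPredictor(nn.Module):
    def __init__(self, context_pixels: int = 33, block_size: int = 8):
        super().__init__()
        self.block_size = block_size
        self.net = nn.Sequential(
            nn.Linear(context_pixels, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, block_size * block_size),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, context_pixels) reconstructed neighbours
        # (top row, left column and corner), normalised to [0, 1].
        pred = self.net(context)
        return pred.view(-1, self.block_size, self.block_size)

# Training on (neighbour context, original block) pairs would minimise the
# prediction residual the codec has to transmit.
model = IntraPredictor()
context = torch.rand(4, 33)   # a batch of 4 contexts
blocks = model(context)
print(blocks.shape)           # torch.Size([4, 8, 8])
```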

To manage the bi-directional communication between EVC codecs and the AI tools, he said, MPAI has devised a WebSocket-based method that builds an abstraction layer agnostic to the AI framework, the operating system and the physical location.
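
The details of that interface are not public, but the general shape of such a WebSocket bridge is easy to sketch: the codec posts a JSON request carrying the data an AI tool needs, and an inference service, running locally or in the cloud on any AI framework, answers over the same socket. The endpoint, message fields and the choice of the Python websockets library below are all assumptions.

```python
# Hypothetical WebSocket bridge between an EVC encoder and an AI tool.
# The inference service could live on-premises or in the cloud and use any
# AI framework; the codec only sees JSON messages over the socket.
import asyncio
import json
import websockets  # pip install websockets

async def inference_service(websocket, path=None):
    """Server side: answer prediction requests from the codec."""
    async for message in websocket:
        request = json.loads(message)
        context = request["context"]  # neighbouring samples
        # A real service would run a trained model here; we echo a flat prediction.
        prediction = [sum(context) / len(context)] * request["block_size"] ** 2
        await websocket.send(json.dumps({"prediction": prediction}))

async def codec_side():
    """Client side: the encoder asks the AI tool for an intra prediction."""
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(json.dumps({"context": [100, 102, 98, 101], "block_size": 8}))
        reply = json.loads(await ws.recv())
        print(len(reply["prediction"]), "predicted samples")

async def main():
    async with websockets.serve(inference_service, "localhost", 8765):
        await codec_side()

asyncio.run(main())
```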

The future plan is to port the code developed in the group’s Evidence Project to FPGA boards, which offer better performance than general-purpose processors along with low latency and high throughput.

“By accelerating the maturity of AI-enabled data compression, MPAI will help accelerate large-scale adoption of AI Technologies in devices leading to a future where AI is the dominating technology in devices,” Tsinberg said.

Recognizing the moral responsibilities linked to AI, the group’s website states that: “Although it is a technical body, MPAI is aware of the revolutionary impact AI will have on the future of human society. MPAI pledges to address ethical questions raised by its technical work with the involvement of high-profile external thinkers. The initial significant step is to enable the understanding of the inner working of complex AI systems.”

Editor’s Note: As part of its fourth standard project “Compression and understanding of industrial data (MPAI-CUI),” the MPAI has issued a Call for Technologies. The standard aims to enable AI-based filtering and extraction of information from a company’s “governance, financial and risk data enabling prediction of company performance.” It is also calling for technologies to develop two standards related to audio (MPAI-CAE) and multimodal conversation (MPAI-MMC).
