Digital Nirvana Updates MetadataIQ Metadata-Automation Tool For Avid Ecosystem

Digital Nirvana has announced an upgrade to MetadataIQ, its SaaS-based tool that automatically generates speech-to-text and video intelligence metadata, increasing the efficiency of production, preproduction, and live content creation services for Avid PAM/MAM users.

The new version, which will be previewed at the 2022 NAB Show, makes beta-tested video intelligence capabilities commercially available and integrates directly with Avid MediaCentral.

MetadataIQ 4.0 relies on advanced machine learning and high-performance AI capabilities in the cloud (speech-to-text, facial recognition, object identification, content classification, etc.) to create highly accurate metadata more quickly and less expensively than traditional methods. Crucially, MetadataIQ is the only such tool that integrates with Avid today: it not only generates speech-to-text transcripts automatically on incoming feeds (or on stored content) in real time, but also parses each transcript by time and indexes it back to the media in the Avid environment.

Since Digital Nirvana introduced MetadataIQ about a year ago, the primary use case has been generating speech-to-text (STT) in real time as large volumes of live streams are ingested, then sending the time-referenced STT transcript into the Avid Interplay PAM system. The application’s unique ability to marry real-time transcript generation with real-time indexing in Avid means producers and editors can quickly find relevant media assets for their news stories, accelerating the entire production process.
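To make the idea concrete, here is a minimal Python sketch of that kind of workflow: time-aligned transcript segments are indexed against a media asset so a text search resolves to exact offsets in the media. All names here (TranscriptSegment, MediaIndex, the sample asset ID) are illustrative assumptions, not part of the MetadataIQ or Avid APIs.

```python
# Hypothetical illustration only; not the MetadataIQ or Avid API.
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    start_sec: float   # offset from the start of the media asset
    end_sec: float
    text: str

@dataclass
class MediaIndex:
    asset_id: str
    segments: list[TranscriptSegment] = field(default_factory=list)

    def add_segment(self, segment: TranscriptSegment) -> None:
        self.segments.append(segment)

    def search(self, term: str) -> list[TranscriptSegment]:
        """Return every time-aligned segment whose text mentions the term."""
        term = term.lower()
        return [s for s in self.segments if term in s.text.lower()]

# Index two time-aligned STT segments against one (fictional) asset.
index = MediaIndex(asset_id="news-feed-2022-04-24")
index.add_segment(TranscriptSegment(12.0, 17.5, "The mayor opened the press conference"))
index.add_segment(TranscriptSegment(17.5, 24.0, "questions focused on the transit budget"))

# An editor's search lands on an exact timecode, not just a whole clip.
for hit in index.search("press conference"):
    print(f"{index.asset_id} @ {hit.start_sec:.1f}s: {hit.text}")
```

Because each segment carries its own time span, a match takes an editor straight to the relevant moment in the clip rather than to the clip as a whole, which is the time saving the real-time indexing is meant to deliver.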

In the new version, MetadataIQ’s transcription and other video intelligence capabilities move out of the proof-of-concept stage and become commercially available, following overwhelmingly successful beta testing.

Also, instead of sending metadata only to on-prem Avid Interplay implementations, MetadataIQ 4.0 will integrate with Avid’s cloud-based MediaCentral hub, where editors access multiple Avid applications to do their work. Thanks to that cloud integration, editors will no longer be limited to searching one type of metadata at a time, as they are in Avid Interplay; they will be able to combine searches in MediaCentral across multiple forms of metadata. For example, if MetadataIQ generates metadata using OCR, facial recognition, and speech-to-text, MediaCentral will search all three types of metadata simultaneously when an editor enters search terms, returning more precise results even faster.
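As a rough sketch of how such a combined query could behave, the snippet below intersects per-track matches so that an asset qualifies only if every queried metadata type contains the corresponding term. The data structures and names are assumptions made for illustration; they do not reflect the MediaCentral API.

```python
# Hypothetical illustration only; not the MediaCentral API.
# Each metadata track maps an asset ID to the terms detected in that asset.
metadata_tracks: dict[str, dict[str, set[str]]] = {
    "ocr":    {"clip-001": {"election results", "breaking news"},
               "clip-002": {"weather map"}},
    "facial": {"clip-001": {"jane doe"},
               "clip-002": {"john smith"}},
    "stt":    {"clip-001": {"election results", "turnout"},
               "clip-002": {"cold front", "rainfall"}},
}

def combined_search(queries: dict[str, str]) -> set[str]:
    """Intersect per-track matches: an asset qualifies only if every
    queried metadata track contains the corresponding term."""
    matches_per_track = []
    for track, term in queries.items():
        track_index = metadata_tracks[track]
        matches_per_track.append(
            {asset for asset, terms in track_index.items() if term in terms}
        )
    return set.intersection(*matches_per_track) if matches_per_track else set()

# One query spanning three metadata types at once:
print(combined_search({"ocr": "election results",
                       "facial": "jane doe",
                       "stt": "turnout"}))   # -> {'clip-001'}
```

Intersecting the tracks rather than merging them is what narrows the result set: each additional metadata type acts as a further filter, which is why combined searches can return more precise results faster than searching one metadata type at a time.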
