Digital Nirvana Releases Trance 3.2

Digital Nirvana’s Trance 3.2 includes new features to enable faster, more efficient production of publishable closed captions and translations.

New features include upgrades to the tool’s natural language processing (NLP) capabilities that, in an industry first, make it possible to define grammar and style rules and configure Trance to follow them. Advances in machine learning also improve the machine translation model, meeting captioners’ growing need for localization into multiple languages by translating content quickly, frame by frame, while retaining the full context.
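
To give a sense of how configurable grammar and style rules can be applied to machine-generated captions, here is a minimal sketch. The rule names, thresholds, and replacements are hypothetical assumptions for illustration only and do not reflect Trance’s actual configuration or API.

```python
# Minimal sketch (not Trance's actual API): applying a configurable set of
# grammar/style rules to a machine-generated caption line before review.
import re

# Hypothetical rule set a captioner might configure once and reuse.
STYLE_RULES = {
    "max_line_length": 42,          # characters per caption line
    "spell_out_numbers_under": 10,  # "3" -> "three"
    "banned_terms": {"gonna": "going to", "wanna": "want to"},
}

NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]

def apply_style_rules(line: str, rules: dict) -> str:
    """Return a caption line rewritten to follow the configured rules."""
    # Replace banned colloquialisms with their house-style equivalents.
    for term, replacement in rules["banned_terms"].items():
        line = re.sub(rf"\b{term}\b", replacement, line, flags=re.IGNORECASE)

    # Spell out small standalone digits, a common broadcast style rule.
    def spell_out(match: re.Match) -> str:
        value = int(match.group())
        if value < rules["spell_out_numbers_under"]:
            return NUMBER_WORDS[value]
        return match.group()
    line = re.sub(r"\b\d\b", spell_out, line)

    # Flag (rather than silently truncate) lines that exceed the length limit.
    if len(line) > rules["max_line_length"]:
        line = f"[REVIEW: too long] {line}"
    return line

print(apply_style_rules("She's gonna need 2 more takes", STYLE_RULES))
# -> "She's going to need two more takes"
```

In a workflow like the one described above, a rule set of this kind would be defined once as a standard instruction and applied automatically to every caption, rather than being checked manually across the whole piece of content.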

“Our latest changes to Trance add a new level of completeness to the product in terms of covering the entire caption generation and localization workflow. Trance not only brings large-scale efficiency to each part of the process, but it also provides a platform where users can do everything related to transcription, captioning, and text localization in just about every imaginable use case,” said Russell Vijayan, director of business development at Digital Nirvana. “There’s no need for users to review the entire set of content in order to adhere to strict grammar and style guidelines. Instead, now grammar rules can be set as a standard instruction to the system. A 90-minute piece of content can now be captioned in a couple of hours versus a week or so.”

Trance is designed to use machine learning and AI capabilities to enhance the process of generating transcripts, closed captions, and translations for media content. Production houses, OTT platforms, broadcast networks, closed captioning companies, and any content producer that publishes content over broadcast outlets or the internet with closed captions and translations enabled will benefit from the major improvements.

Trance can be a critical tool in a variety of use cases, such as generating transcripts for audio content, importing an existing transcript and syncing it with video, or importing existing captions and opting for a caption QC service that compares the captions against streaming-platform guidelines and flags any nonconformance. Users can generate machine translations once captioning is complete. They can also import an existing caption sidecar file, retain its timecodes, and generate a high-quality machine translation to support localization. Trance also makes it possible to change the frame rate, apply timecode references, and export sidecar outputs in various formats.
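
To illustrate the kind of sidecar handling described above, the sketch below retimes the cue timestamps in an SRT file when the underlying media is conformed to a new frame rate, while leaving the (translated) cue text untouched. The file format, frame rates, and function names are assumptions for illustration; Trance’s own implementation and supported formats may differ.

```python
# Illustrative sketch only (not Trance's API): rescaling SRT cue times when
# media is conformed from one frame rate to another, keeping cue text intact.
import re

SRT_TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def rescale_timestamp(match: re.Match, ratio: float) -> str:
    """Scale one HH:MM:SS,mmm timestamp by the frame-rate ratio."""
    h, m, s, ms = (int(g) for g in match.groups())
    total_ms = ((h * 60 + m) * 60 + s) * 1000 + ms
    total_ms = round(total_ms * ratio)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def retime_srt(srt_text: str, source_fps: float, target_fps: float) -> str:
    """Rescale every cue time, e.g. 25 -> 23.976 when the media is slowed down."""
    ratio = source_fps / target_fps
    return SRT_TIME.sub(lambda m: rescale_timestamp(m, ratio), srt_text)

sample = """1
00:00:01,000 --> 00:00:03,500
Bonjour et bienvenue."""
print(retime_srt(sample, source_fps=25.0, target_fps=23.976))
```

In this toy example, only the timing track is recalculated; the translated text carried over from the sidecar stays as-is, which is the general idea behind retaining timecodes during localization.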

Because Trance is a cloud-based application, the upgrades are available to users immediately.
