Digital Nirvana’s Trance 3.2 includes new features to enable faster, more efficient production of publishable closed captions and translations.
New features include upgrades to the tool’s natural language processing (NLP) capabilities that, in an industry first, make it possible to identify grammar and style regulations and configure Trance to follow them. Thanks to advancements in machine learning, the machine translation model has also been improved to meet captioners’ growing need for localization in multiple languages, quickly translating content frame by frame while retaining the full context.
“Our latest changes to Trance add a new level of completeness to the product in terms of covering the entire caption generation and localization workflow. Trance not only brings large-scale efficiency to each part of the process, but it also provides a platform where users can do everything related to transcription, captioning, and text localization in just about every imaginable use case,” said Russell Vijayan, director of business development at Digital Nirvana. “There’s no need for users to review the entire set of content in order to adhere to strict grammar and style guidelines. Instead, now grammar rules can be set as a standard instruction to the system. A 90-minute piece of content can now be captioned in a couple of hours versus a week or so.”
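The idea of setting grammar rules "as a standard instruction to the system" can be pictured as a declarative rule set applied automatically to every caption line. The sketch below is purely illustrative (Trance's rule engine is proprietary and its API is not public); the rule names and thresholds are hypothetical examples resembling common broadcast style guides.

```python
import re
from dataclasses import dataclass

@dataclass
class StyleRule:
    name: str
    pattern: str   # regex that flags a violation when it matches
    message: str

# Hypothetical rules in the spirit of common caption style guides.
RULES = [
    StyleRule("double-space", r"  +", "Use a single space between words."),
    StyleRule("numeral-start", r"^\d", "Spell out numbers that begin a caption."),
    StyleRule("line-length", r"^.{43,}$", "Keep caption lines to 42 characters or fewer."),
]

def check_caption(line: str) -> list[str]:
    """Return the style-guide violations found in one caption line."""
    return [f"{r.name}: {r.message}" for r in RULES if re.search(r.pattern, line)]
```

With rules expressed this way, a full transcript can be validated in one pass rather than reviewed line by line, which is the efficiency gain the quote describes.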
Trance is designed to use machine learning and AI capabilities to enhance the process of generating transcripts, closed captions, and translations for media content. Production houses, OTT platforms, broadcast networks, closed captioning companies, and any content producer that publishes content over broadcast outlets or the internet with closed captions and translations enabled will benefit from the major improvements.
Trance can be a critical tool in a variety of use cases, such as generating transcripts for audio content, importing an existing transcript and syncing it with video, or importing existing captions and opting for a caption QC service that compares the captions against streaming-platform guidelines and flags any nonconformance. Users can generate machine translations after the captioning process is done. They can also import an existing caption sidecar file, retain the timecodes, and generate a high-quality machine translation to support the localization process. Trance also makes it possible to change the frame rate, apply timecode references, and export sidecar outputs in various formats.
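Changing the frame rate while retaining timecodes involves re-expressing each cue's "HH:MM:SS:FF" stamp at the new rate so the real-time position stays the same. The sketch below shows that arithmetic for non-drop-frame timecode; the function names are hypothetical and do not represent Trance's actual API.

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop-frame "HH:MM:SS:FF" timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int) -> str:
    """Convert an absolute frame count back to "HH:MM:SS:FF" at the given rate."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def retime(tc: str, src_fps: int, dst_fps: int) -> str:
    """Re-express a cue time at a new frame rate, keeping its real-time position."""
    frames = timecode_to_frames(tc, src_fps)
    return frames_to_timecode(round(frames * dst_fps / src_fps), dst_fps)
```

For example, a cue at frame 12 of second 1 in 24 fps material lands on frame 15 at 30 fps, since both represent the same half-second offset. Drop-frame rates such as 29.97 fps require additional frame-skipping logic not shown here.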
Because Trance is a cloud-based application, the upgrades are available to Trance users immediately.