Enco Shows Cost-Effective, Automated Captioning Of Live Video

The enCaption4 suite uses Enco’s latest speech recognition engine to closely inspect and transcribe audio in near real-time.
At 2018 IBC, Enco Systems is introducing the latest generation of its specialized software for automated captioning of live video.
The new Enco enCaption4 is a hardware/software solution that automatically generates captions for live or recorded video. Using enCaption4, content creators can provide real-time, live captioning to their hearing-impaired audience at any time, without advance notice and without the high cost of live captioners or signers.
The enCaption4 suite uses Enco’s latest enhanced, speaker-independent, neural network-based speech recognition engine to closely inspect and transcribe your audio in near real-time (typically 3 seconds). Available in a number of languages, enCaption4 works with any audio stream, whether recorded or live at your facility. This puts the user in control of the captioning process and makes it available even in the most demanding situations.
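The near-real-time behavior described above amounts to buffering incoming audio and transcribing it in short chunks. The sketch below illustrates that pattern in Python; the `recognize` function is a placeholder, not ENCO's proprietary engine, and the chunk length simply mirrors the ~3-second latency cited.

```python
# Minimal sketch of a chunked, near-real-time captioning loop.
# `recognize` is a stand-in for a speaker-independent ASR engine call.

CHUNK_SECONDS = 3  # mirrors the ~3-second latency cited for enCaption4

def recognize(audio_chunk):
    """Placeholder for a real speech recognition engine."""
    return f"[transcript of {len(audio_chunk)} samples]"

def caption_stream(audio_source, sample_rate=16000):
    """Buffer incoming samples and emit one caption line per chunk."""
    buffer = []
    chunk_size = sample_rate * CHUNK_SECONDS
    for sample in audio_source:
        buffer.append(sample)
        if len(buffer) >= chunk_size:
            yield recognize(buffer)
            buffer = []
    if buffer:  # flush any trailing audio at end of stream
        yield recognize(buffer)
```

Because the loop emits a caption as soon as each chunk fills, latency is bounded by the chunk length plus recognition time, which is why a fully automated system can caption unscheduled live events with no advance notice.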
The company said this is not an ENR/prompter-based system, nor a “re-speaking” system requiring live personnel to operate. It is a fully automated, speaker-independent, speech-recognition-based system that is available whenever you need it.
Linking enCaption4 to an electronic newsroom system allows it to automatically access current and historical script information to build a local dictionary, which lets the system improve its accuracy over time. It gets better every day. Users can also feed enCaption specific scripts for presentations, meetings or other productions.
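Conceptually, the local dictionary is a growing vocabulary mined from scripts, with names and other proper nouns the most valuable entries since a stock recognizer is most likely to miss them. The sketch below shows one simple way such a dictionary could be built; the function names and the heuristic are illustrative assumptions, not ENCO's actual newsroom interface.

```python
# Illustrative sketch: mining newsroom scripts for a local ASR dictionary.
import re
from collections import Counter

def update_dictionary(dictionary, script_text):
    """Add word frequencies from one script to a running local dictionary."""
    words = re.findall(r"[A-Za-z']+", script_text.lower())
    dictionary.update(words)
    return dictionary

def likely_proper_nouns(script_text, known_words):
    """Flag capitalized tokens absent from the known vocabulary as
    candidate dictionary entries (a crude proper-noun heuristic)."""
    tokens = re.findall(r"\b[A-Z][a-z]+\b", script_text)
    return sorted({t for t in tokens if t.lower() not in known_words})
```

Feeding every day's scripts through `update_dictionary` is what makes the system "get better every day": the more local names and topics it has seen, the more it can bias recognition toward them.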
For the first time, broadcasters, content producers and commercial AV facilities with an NDI infrastructure can add an automated captioning solution to their workflows. Once connected, enCaption will automatically generate captions from its NDI input stream and output an NDI signal with the captions keyed directly on top of the video. This simplifies the closed captioning workflow by eliminating the need for specialized encoding hardware.