CC and Subtitling 2018 NAB Show Report

The 2018 NAB Show will be noted historically for several reasons. Annual visitors attending the show with specific needs and budgets tend to be a bit myopic, and most TV stations already satisfy captioning requirements and regulations. Even so, new captioning technology makes the service better, faster, more effective and less expensive.

The FCC and most TV regulators across the world require captioning, including requiring that caption decoding be built into every TV set. The original objective of captioning for the hearing impaired remains unchanged.

However, some dialog is occasionally unintelligible to nearly everyone for one reason or another. Viewers are learning that captions can enhance the experience by supplementing the dialog, much like reading the lyrics while hearing a song. Captions help everyone understand what they are watching.

At the 2018 NAB Show, numerous companies displayed their captioning solutions and services. The following recaps captioning highlights at the show by manufacturer, in alphabetical order.

The ENCO Systems 2018 NAB Show exhibit featured the latest updates to ENCO’s enCaption4 system. The updates were further strengthened by ENCO’s new patent for automated captioning, significantly raising the bar for speed and accuracy in live captioning environments.

Last fall, ENCO acquired a U.S. patent specifically covering automated captioning using speech-to-text. Patent number 7047191B2 is entitled “Method and System for Providing Automated Captioning for AV Signals.” It relates directly to ENCO’s continued innovations for its enCaption series of automated captioning solutions.

ENCO’s popular enCaption3 solution was updated to enCaption4, which was shown at the NAB Show.

Key technical differentiators bring substantial improvements across accuracy, newsroom integration and speaker identification. enCaption4’s latest enhancements boost accuracy to 90% or higher, which far surpasses compliance requirements in live captioning. These accuracy improvements stem from enCaption4’s advanced speech-to-text engine, which leverages breakthroughs in machine learning to take a deep neural network approach to voice recognition.

New for NAB was enhanced Newsroom Computer System (NRCS) integration via MOS. This further improves accuracy by empowering enCaption4 to learn correct spellings from news scripts and rundowns prior to live newscasts, without manual intervention. Additionally, enCaption4’s multi-speaker identification eliminates crosstalk and other on-air confusion when multiple hosts and guests are contributing to live broadcasts. enCaption4 identifies speech from each speaker through dedicated microphones and processes each speaker separately.
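
Harvesting vocabulary from a rundown is easy to illustrate. The sketch below is a minimal, generic illustration of the concept, assuming rundown scripts arrive as MOS-style XML; the file name, the base dictionary and the final submission call are hypothetical and are not enCaption4’s actual interface.

```python
# Minimal sketch: harvest vocabulary from a newsroom rundown so a
# speech-to-text engine can learn names and spellings before air.
# The rundown file name, base dictionary and submit call are
# illustrative assumptions, not enCaption4's actual interface.
import re
import xml.etree.ElementTree as ET

def extract_vocabulary(rundown_xml_path, base_dictionary):
    """Collect words from rundown scripts that a stock dictionary lacks."""
    new_terms = set()
    for element in ET.parse(rundown_xml_path).iter():
        if element.text:
            for word in re.findall(r"[A-Za-z][A-Za-z'-]+", element.text):
                if word.lower() not in base_dictionary:
                    new_terms.add(word)   # likely a proper noun or local term
    return sorted(new_terms)

# Usage ahead of a newscast (names below are hypothetical):
# custom_terms = extract_vocabulary("rundown_6pm.xml", base_dictionary)
# captioner.add_vocabulary(custom_terms)
```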

The flexibility of enCaption4’s deployment options within automated production workflows also ensures cost-efficiency. With support for more than 20 languages, ENCO positions enCaption4 as an affordable, flexible and versatile captioning solution for dynamic, high-pressure live production environments.

Also debuting at NAB was a file-based (non-live), cloud-based captioning service that guarantees 99% accuracy, with a transcriptionist quickly generating an accurate transcript of the audio. The resulting transcripts are made accessible from the cloud within two to eight hours, depending upon how quickly the captioned content is needed to achieve compliance.

Prior to the show opening, ENCO announced that enCaption4 had joined the growing NDI video-over-IP ecosystem. The enCaption4 systems on display in the exhibit included the new NDI capability.

EEG’s Lexi automatic captioning service is accessible through EEG encoders such as the iCap HD492 SDI encoder.

EEG joined more than 50 manufacturers at the IP Showcase at NAB 2018, organized around the new SMPTE ST 2110 standard for video over IP. EEG showed solutions designed for completely native ST 2110 live captioning, as well as transitional solutions in which the EEG HD492 SDI encoder co-originates SDI VANC and ST 2110-40 streams based on SDI program audio.

On display in the EEG exhibit was the latest enhancement to EEG's cloud-hosted automatic live captioning platform, Lexi. Announced earlier this year, newly developed "Topic Models" empower the system to recognize topics, immerse itself in distinctive vocabulary, and observe context through the absorption of relevant web data unique to each implementation. This advancement enables Lexi to perform in real time with a degree of accuracy that reaches beyond previous speech-to-text systems.

As part of Lexi’s setup process, users are now invited to select one of EEG’s developed topic models. Alternatively, users can generate their own custom model by supplying Lexi with any combination of reference URLs or other bulk text data specific to their subject matter or locale. As it absorbs the data, Lexi observes the context of new words and names to recognize where and how they are typically used.
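
The general idea behind a custom topic model can be sketched in a few lines. The example below simply pulls user-supplied reference pages and builds a word-frequency profile that could bias recognition toward the subject matter; Lexi’s real training process is proprietary, so treat this only as an illustration of the “absorb relevant web data” step, with a hypothetical URL.

```python
# Generic illustration of building a "topic profile" from reference URLs:
# fetch each page, strip markup crudely, and count word frequencies so the
# most common uncommon terms can be promoted into the recognition vocabulary.
# This is not Lexi's implementation; it only sketches the idea.
import re
import urllib.request
from collections import Counter

def build_topic_profile(urls):
    counts = Counter()
    for url in urls:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="ignore")
        text = re.sub(r"<[^>]+>", " ", html)   # crude tag stripping
        counts.update(w.lower() for w in re.findall(r"[A-Za-z][A-Za-z'-]+", text))
    return counts

# profile = build_topic_profile(["https://example.com/local-sports"])
# profile.most_common(50) would suggest candidate vocabulary entries.
```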

The Evertz Caption Conductor is an end-to-end, all IP closed captioning system.

Evertz launched its Caption Conductor closed captioning solution at the 2018 NAB Show. Caption Conductor is a sophisticated closed captioning system that addresses the critical requirements of broadcasters looking for an all IP real-time closed captioning solution.

The Caption Conductor closed captioning solution is an end-to-end all IP system. It features an intuitive and simple to use web interface for closed captioning operators. The powerful web interface enables operators to easily select and work on upcoming captioning tasks.

The Caption Conductor system leverages Evertz’ comprehensive closed captioning products, including the 7825CCE-AUD-3G closed caption encoder and the closed caption encoding capabilities of Evertz’ OvertureRT-LIVE integrated playout solution.

Caption Conductor’s flexible IP infrastructure can be deployed onsite within a broadcaster’s facility or in public datacenter environments such as Amazon Web Services. Caption Conductor’s cloud-based architecture offers numerous operational advantages, including a reduced hardware footprint and lower capital expenditures, as well as the ability to rapidly scale up captioning capacity to meet surges in demand.
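
To make the scale-up point concrete, the sketch below shows one generic way captioning capacity could be raised ahead of a big live event when encoders run in an EC2 Auto Scaling group. The group name and the boto3 approach are assumptions for illustration, not Evertz’ own tooling.

```python
# Hedged sketch of the scale-up idea: if caption-encoding workers ran in an
# EC2 Auto Scaling group, capacity could be raised before a big live event
# and dropped afterwards. The group name is hypothetical; requires boto3 and
# AWS credentials. This is not Evertz tooling.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def set_caption_capacity(instances: int) -> None:
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="caption-conductor-workers",   # hypothetical name
        DesiredCapacity=instances,
        HonorCooldown=False,
    )

# set_caption_capacity(8)   # scale up ahead of an election-night surge
# set_caption_capacity(2)   # return to baseline afterwards
```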

2018 NAB Show announcements from Link Electronics and IYUNO were both recently covered in The Broadcast Bridge.

Isis Subtitle File QC was demonstrated in the Starfish exhibit.

Isis Subtitle File QC was demonstrated in the Starfish exhibit.

Starfish Technologies launched a major software revamp of its Advantage Video Description product range, which has gained universal praise during its initial rollout to Starfish customers.

At the NAB Show, Starfish featured several products including Subtitle File QC. The Isis Subtitle File QC Service automatically processes industry-standard file formats including EBU .stl and .pac files. It runs as a Windows service and will automatically process any subtitle or caption file that appears in the ‘Source’ watch folder. The Subtitle File Compare service was also demonstrated, as was Subtitle Routing, a TransCast subtitle exchange system for routing RS232 or IP subtitle signals.
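
A watch-folder service like the one described is conceptually simple. The sketch below is a minimal, generic illustration of the pattern, not Starfish code: it polls a hypothetical source folder, hands any .stl or .pac file to a placeholder QC step, and moves the file out once handled.

```python
# Generic watch-folder sketch (not Starfish code): poll a source folder,
# run a placeholder QC step on any .stl or .pac file found, then move the
# file to a processed folder so it is handled only once.
import shutil
import time
from pathlib import Path

SOURCE = Path("C:/SubtitleQC/Source")        # hypothetical watch folder
PROCESSED = Path("C:/SubtitleQC/Processed")

def run_qc(subtitle_file: Path) -> None:
    print(f"QC check on {subtitle_file.name}")   # stand-in for real checks

def watch(poll_seconds: int = 5) -> None:
    PROCESSED.mkdir(parents=True, exist_ok=True)
    while True:
        for candidate in SOURCE.iterdir():
            if candidate.suffix.lower() in {".stl", ".pac"}:
                run_qc(candidate)
                shutil.move(str(candidate), str(PROCESSED / candidate.name))
        time.sleep(poll_seconds)

# watch()   # loops until interrupted, much like a Windows service would
```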

TransCast DVB Live was also on display. The real-time system extracts Teletext subtitles from an SDI input video source and provides a DVB subtitle stream output, via UDP or ASI.

Live VBI subtitle decoding and burn-in was addressed with the EnVision Live system. It decodes VBI closed caption information present on the SDI video input signal and converts it in real time to burnt-in subtitles on the SDI video output. The system is particularly useful for low-cost regionalisation of TV channels.
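
For readers unfamiliar with the term, “burnt-in” simply means the text is rendered permanently into the picture. The offline sketch below shows the concept using ffmpeg’s subtitles filter, assuming the captions have already been decoded to an SRT file and that ffmpeg is installed; EnVision Live does the equivalent in real time on SDI.

```python
# Offline illustration of "burnt-in" subtitles: render an SRT file into the
# picture with ffmpeg's subtitles filter. Assumes ffmpeg is installed and the
# captions were already decoded to SRT; EnVision Live performs the real-time
# SDI equivalent on the video path.
import subprocess

def burn_in(video_in: str, srt_file: str, video_out: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", video_in,
            "-vf", f"subtitles={srt_file}",   # draw the caption text into the frames
            "-c:a", "copy",                   # leave the audio untouched
            video_out,
        ],
        check=True,
    )

# burn_in("regional_feed.mp4", "captions.srt", "regional_feed_subbed.mp4")
```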

Telestream integrated its caption insertion capability into its Lightspeed Live Stream.

Telestream introduced a unique ‘one-box’ solution for closed caption encoding to multiple live streams. Telestream’s new integrated live caption encoding solution is now part of Lightspeed Live Stream.

Sophisticated 608/708 caption insertion capability is part of the latest version of Lightspeed Live Stream, Telestream’s multiscreen live encoding and packaging-at-scale solution for broadcasters, content aggregators and live event production companies. The integration helps avoid the cost and complexity of going to baseband when inserting digital captions. Its unique approach to live captioning maintains the integrity of content with zero restreaming. It also supports HDR.

Nearly all IP caption insertion systems are cloud-based, requiring content to be moved to the cloud, captioned there, then re-encoded and re-streamed for delivery. This introduces significant opportunity for human and mechanical error, ultimately impacting the quality of the final content. In contrast, Telestream’s live captioning solution with Lightspeed Live Stream eliminates restreaming, instead encoding the live captions within the original stream and ensuring the best possible quality.

In addition to the new live captioning capabilities, users benefit from a true enterprise-class live streaming, encoding and packaging system. For multiscreen live encoding and packaging at scale, this advanced solution allows users to add time delay to the stream for synchronization purposes. Lightspeed Live Stream can also route captured recordings to Telestream’s Vantage Media Processing Platform for further enhancement.

A 3Play interactive plugin for a website allows visitors to search the spoken audio and jump to any point in the video by clicking a word in the transcript.

3Play Media showed several solutions for the first time at NAB. One solution is the “Expiring Editing Link.” It allows 3Play users to give non-3Play users access to specific files without having to share their 3Play username and password. Non-3Play users can edit and review a specific transcript or translation for a defined timeframe, and they are given access to an editing interface displaying the chosen file, and nothing else. The Expiring Editing Link works for all caption, transcription, and translation files.
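
3Play has not published how its links are built, but a time-limited, tamper-proof link is a well-understood pattern. The sketch below, using a hypothetical URL and signing key, shows one generic way such a link can be generated and validated.

```python
# Generic expiring-link sketch (not 3Play's implementation): the URL carries
# the file ID and an expiry timestamp, signed with a server-side secret so
# neither can be altered. The URL and key are hypothetical.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # hypothetical signing key

def make_link(file_id: str, valid_hours: int) -> str:
    expires = int(time.time()) + valid_hours * 3600
    payload = f"{file_id}:{expires}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://example.com/edit?file={file_id}&exp={expires}&sig={signature}"

def link_is_valid(file_id: str, expires: int, signature: str) -> bool:
    payload = f"{file_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and time.time() < expires

# print(make_link("transcript-1234", valid_hours=48))
```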

In response to the FCC’s requirements to caption online video clips and montages taken from full-length programming that previously appeared on television, the company announced a 2-hour turnaround for short files and video clips, which it says is the fastest turnaround in the industry. This short-turnaround option allows content creators to get clips captioned and online as quickly as possible.

3Play also announced a seven-step workflow that runs from creating an account to downloading finished captions and transcripts.

The company also showed Audio Description, a solution that uses synthesized speech to voice descriptions and provides three voice options and three speed options. Extended audio description pauses the video to allow for longer descriptions in locations where there is not enough space to sufficiently describe the visuals.
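
A tiny worked example makes the extended-description behavior clearer: if the natural gap in dialog is shorter than the recorded description, the player must hold the video for the difference. The numbers and function below are purely illustrative.

```python
# Illustrative arithmetic only: how long must the video pause so an extended
# description fits when the natural gap in dialog is too short?
def pause_needed(gap_seconds: float, description_seconds: float) -> float:
    """Return the extra hold time required for the description to finish."""
    return max(0.0, description_seconds - gap_seconds)

# A 7-second description against a 3-second gap in dialog:
# pause_needed(3.0, 7.0)  ->  4.0 seconds of video hold
```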

The company confirmed its products can be integrated with Brightcove, Ensemble Video, DigitalChalk, Facebook, Kaltura, KnowledgeVision, Limelight, Liveclicker, MediaAMP, MediaPlatform, Ooyala, thePlatform, Vimeo, Viostream, Wistia, and YouTube. It also confirmed it is compatible with Flowplayer, HTML5, JW Player, and Video.js. In addition, 3Play Media is an official iTunes partner.

VoiceInteraction recently shared that ABC affiliate WSIL-TV has been using AUDIMUS.MEDIA automatic real-time captioning for more than a year.

VoiceInteraction showed its solution for live automated closed captioning: AUDIMUS.MEDIA. Its latest release sets a new standard in the automatic captioning ecosystem, as it now includes multi-language modules to help crack the language barrier faced by video consumers and content distributors.

An automatic live translation module can produce a text stream in a new language, making the content accessible to wider audiences. The live-dubbing module, which creates an extra audio track produced by a foreign-language synthetic voice, further helps media companies instantly reach worldwide viewers. Multi-language speech recognition produces captions for all speakers in a live show, not limited to a spoken language defined in advance.

Built on an extensive machine learning backbone, AUDIMUS.MEDIA updates its vocabulary daily to make sure that unusual names and terms are covered in breaking news. The knowledge base can be further enriched with local names as well as custom word pronunciations. Moreover, an expletive dictionary is applied to filter out inappropriate expressions.
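
The expletive dictionary is the most straightforward of these ideas to illustrate. The sketch below masks block-listed words before caption text would reach air; the word list and masking style are assumptions, not VoiceInteraction’s actual dictionary.

```python
# Sketch of an expletive filter: words on a block list are masked before the
# caption text reaches air. The list and masking style are assumptions, not
# VoiceInteraction's actual dictionary.
import re

EXPLETIVES = {"darn", "heck"}   # stand-in entries for a real block list

def filter_caption(text: str) -> str:
    def mask(match: re.Match) -> str:
        word = match.group(0)
        if word.lower() in EXPLETIVES:
            return word[0] + "*" * (len(word) - 1)
        return word
    return re.sub(r"[A-Za-z']+", mask, text)

# filter_caption("Well, heck, that was close")  ->  "Well, h***, that was close"
```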

To cope with the increasing number of video distribution platforms and legal requirements for online and offline video captioning, AUDIMUS.MEDIA can deliver its captions to several platforms simultaneously through closed caption encoders such as those from Link Electronics, Evertz, EEG and DoCaption, as well as the Wowza Streaming Engine. It works with any encoder/CDN able to receive video with CEA-608 captions muxed by software.
