Live IP Video Closed Captioning Moves To Amazon Cloud

EEG Alta.
Using AWS Cloud Digital Interface (CDI), EEG has expanded its Alta IP caption and subtitle encoder to support real-time captioning of uncompressed live video over IP.
“One of the promises of the SMPTE 2110 standard is that you can deliver video, audio, and ancillary data separately, and that’s important for a device like a closed captioning encoder that doesn’t need to be connected to an ultra high resolution video source to operate, which could save bandwidth,” said Bill McLaughlin, CTO, EEG Video. “However, technology is changing so quickly and SMPTE 2110 is not a public cloud technology. Using AWS CDI with Alta, we can offer the best of both worlds – the benefits of SMPTE 2110 and the scalable resources of AWS.”
The largest closed captioning and subtitle delivery network in the world, EEG’s iCap provides round-the-clock access to real-time captioners and is hosted on AWS. Connecting IP workflows to the iCap Network, Alta harnesses AWS CDI to ensure reliable, low-latency transport of uncompressed live video. Captioning through Alta can be done via an iCap human typist or through AI, and broadcasters often use a combination of the two approaches based on budget and project requirements.
“Fifteen years ago, it made sense to invest in a data center, because it was easier to predict what customers were going to want and need long term,” said McLaughlin. “Today, the live media industry is changing far too quickly to be able to make sizable investments like that with confidence, so many companies are now software-driven, which is made possible by the cloud. Using AWS, independent software vendors like us can more easily take our specialties, such as captioning, to the public cloud, and with functionality provided by technology like AWS CDI, integrate without concerns about network drivers or hardware-related issues, and that’s very helpful for building applications.”
A typical customer setup with Alta includes a central hub from a common vendor, such as Evertz, deployed from an AWS CloudFormation template or wherever the customer’s overall broadcast infrastructure is hosted. The hub handles video routing, and it is also where clipping, resolution transformation and similar tasks are completed. When a feed needs captioning, it is sent to an Amazon Elastic Compute Cloud (EC2) instance hosting EEG channels, where it is ingested and captions are added, either by a human captioner or by automated methods. From there, the captioned feed is re-ingested into the central hub for recording or other applications. The feed can also be sent directly to AWS Elemental MediaLive for encoding and repurposing, and AWS Elemental MediaPackage can then be used to create an HLS stream and generate a playback link for use on different player sites. Wherever the video is hosted, that HLS URL can be used to output closed captions on a web player.
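As a rough illustration of the final packaging step described above, the sketch below uses the AWS SDK for Python (boto3) to create a MediaPackage channel and an HLS origin endpoint, then print the playback URL a web player would load. The channel and endpoint IDs, region and segment settings are hypothetical examples for illustration, not EEG’s actual configuration.

```python
# Minimal sketch, assuming a MediaLive output feeds a MediaPackage channel.
# IDs, region and settings below are illustrative assumptions, not EEG's setup.
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-east-1")

# Channel that receives the captioned feed (e.g., from a MediaLive output group).
channel = mediapackage.create_channel(
    Id="captioned-live-channel",
    Description="Captioned live feed from an Alta/EEG workflow",
)

# In practice, channel["HlsIngest"]["IngestEndpoints"] would be wired into the
# MediaLive output group so the captioned feed flows into MediaPackage.

# HLS origin endpoint; players read the embedded CEA-608/708 captions.
endpoint = mediapackage.create_origin_endpoint(
    ChannelId=channel["Id"],
    Id="captioned-live-hls",
    HlsPackage={
        "SegmentDurationSeconds": 6,
        "PlaylistWindowSeconds": 60,
    },
)

# This URL is what a web player loads to show the video with closed captions.
print("HLS playback URL:", endpoint["Url"])
```

The endpoint URL returned here is the kind of link that can then be embedded wherever the video is hosted, with the player exposing the closed captions carried in the stream.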
In integrating AWS CDI with Alta, the goal for EEG Video was not to fundamentally change its operational workflow around closed captioning, but rather to save customers money while broadening capabilities. EEG Video’s use of the cloud is also helping it globalize what has traditionally been a regionalized market.
McLaughlin concluded, “AWS is allowing us to do what we’ve always done but in the cloud. This allows us to offer a spin up/spin down opex model with the polish deserving of a major broadcast as many of our customers navigate business challenges driving them toward maximizing flexibility in their workflows.”