AI is much more than just a passing buzzword; it will be a crucial driver of media technology spending in 2018 and beyond as companies seek to further automate their operations and build direct relationships with consumers – as the recent string of acquisitions demonstrates. According to IABM data, most technology users plan to deploy AI in content management, distribution and delivery. They will continue to invest in AI during 2018 to become more efficient and better understand their customers, driving loyalty and revenues.
In part one of this article, first published in the Journal of the IABM, a number of IABM members tell us how they are currently deploying AI in their product and service offerings and the benefits this is delivering to their customers. They also look forward to how AI will play an increasing role in the broadcast and media industry over the coming years. From the responses we received, AI is being brought to bear on practically every aspect of the media workflow already, and it’s set to go wider and deeper with every passing day.
Automating manual tasks to drive efficiency
At the content capture/creation end of the media chain, Wazee Digital has partnered with Veritone to integrate the latter’s proprietary platform of advanced cognitive engines and applications. “This partnership allows Wazee Digital Core customers to leverage Veritone’s AI technology for automated metadata extraction and analysis, including speech-to-text transcription, face recognition, translation, object recognition, content moderation, and optical character recognition,” says Andy Hurt, SVP of marketing and business development at Wazee Digital. The company already has two implementations underway with global media brands, though for commercial reasons, they can’t be named. One of these taps AI to “streamline metadata extraction through automated facial recognition and transcription,” says Hurt. “The second implementation is with one of the largest portfolios of cable networks. By implementing an AI strategy, this company can automate legal requirements that normally would have been highly manual and labor-intensive.”
IPV is also leveraging AI for both speech-to-text and generating metadata from image recognition. “We’re able to integrate AI-powered speech-to-text functionality into our Curator system to let users automatically create subtitles – saving a huge amount of resource deployment. This functionality can also be rolled out into any kind of post-production workflow and used for the production of subtitles for any sized screen,” says Nigel Booth, EVP business development and marketing at IPV. “In a similar vein, we’re also able to easily integrate AI image recognition functionality for the intelligent addition of metadata. This is especially useful when ingesting a lot of content into an asset management system at once. Using AI, metadata can be added based on the contents of the video without needing to use valuable resources to review media and add tags or descriptions manually. One of the key short-term benefits for our education clients is compliance with accessibility legislation. They’re able to meet requirements without needing to deploy any more expensive resources than necessary.”
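To make the tagging workflow concrete, here is a minimal sketch of AI-assisted metadata enrichment at ingest, in the spirit of what Wazee Digital and IPV describe. The `recognise_labels` stub stands in for a real recognition service; its output format, the confidence threshold and the metadata record shape are all invented for illustration and are not either vendor’s actual API.

```python
# Minimal sketch of AI-assisted metadata tagging at ingest.
# recognise_labels() is a stand-in for a real image-recognition
# service; here it just returns canned label/confidence pairs.

def recognise_labels(frame_id: str) -> list[dict]:
    """Hypothetical recognition call for one frame."""
    return [
        {"label": "stadium", "confidence": 0.94},
        {"label": "crowd", "confidence": 0.88},
        {"label": "umbrella", "confidence": 0.41},  # too uncertain to keep
    ]

def ingest_asset(asset_id: str, frame_ids: list[str],
                 min_confidence: float = 0.6) -> dict:
    """Build a metadata record, keeping only confident AI tags."""
    tags = set()
    for frame_id in frame_ids:
        for hit in recognise_labels(frame_id):
            if hit["confidence"] >= min_confidence:
                tags.add(hit["label"])
    return {"asset_id": asset_id, "tags": sorted(tags)}

record = ingest_asset("match-0142", ["f001", "f002"])
print(record)  # low-confidence labels are dropped automatically
```

The confidence threshold is the operator’s lever: set it high for compliance-critical tags, lower for discovery-oriented search metadata.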
Wazee Digital’s Andy Hurt sees enormous untapped potential coming online through AI. “The Veritone Platform will make it possible for Wazee Digital customers to unlock previously unknown moments within their vast motion archives in a much more cost-effective and timely manner than manually tagging film and video assets. Historically, harvesting metadata has been a manual and grueling task. Without accurate and robust technical and descriptive metadata, a media asset is not easily searchable or discoverable for remonetization, so its value plunges. By leveraging the Veritone Platform within Wazee Digital Core, much of the tedious metadata curation can now be done through automation and machine learning.”
EVS’s Johan Vounckx, SVP, innovation & technology, sees AI as opening up new workflow possibilities rather than just automating existing processes. “What we’re focusing on is creating workflows with intelligent production assistants that let human operators do a better job,” he says. IPV’s Nigel Booth agrees: “AI is taking automation to the next step – it can do much more. The goal of automation is to create output based on fixed, inflexible rules but cognitive services allow so much more. AI functions can intelligently review media and in our case, create something tailored and specific. We don’t just want to make automatic workflows, we want to make them smart – independent from unnecessary manual processes.”
EVS’s Xeebra video refereeing system uses machine learning technology.
AI now playing in sports
With the eye-watering price of sports rights, broadcasters are looking for anything that can reduce their production costs while attracting and retaining the maximum number of viewers. Remote IP production is increasingly attractive on the cost-saving side of the equation, but, according to Tedial, AI can contribute on both counts. The company is launching SMARTLIVE, an automated live sports solution, at NAB. Jay Batista, GM of US operations at Tedial, explains: “Before any action happens, SMARTLIVE ingests a live data feed and automatically creates an event inside its metadata engine, automatically building the corresponding log sheets, the player grids and a schedule of the event for feed capture. All these preparations are linked and organised by collections, so an entire season of sports events can be prepared automatically in advance. During live events, SMARTLIVE leverages AI to automatically annotate the incoming camera feeds and apply automated highlights creation to feed social media platforms based on event, personnel or multi-game highlights, all hands free!
“SMARTLIVE is 100 per cent compatible with PAM (Production Asset Management) providers such as SAM and EVS, which makes it the perfect tool to orchestrate all business processes on top of an existing PAM, mixing historical media with live feeds … as a learning machine, it can then manage the end-to-end media lifecycle, increasing fan engagement, thereby driving content to higher profitability,” adds Batista.
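The preparation step Batista describes – turning a sports data feed into log sheets, player grids and a capture schedule before a ball is kicked – can be pictured roughly as follows. The feed schema and field names below are invented for illustration and do not reflect Tedial’s actual data model.

```python
# Rough sketch: derive capture-ready structures from a live-data feed.
# The feed layout below is invented purely for illustration.

feed = {
    "event": "League Final",
    "kickoff": "2018-05-26T19:45:00Z",
    "teams": {
        "home": {"name": "Reds", "players": ["Kane", "Ona", "Diaz"]},
        "away": {"name": "Blues", "players": ["Lee", "Mori", "Pavel"]},
    },
}

def prepare_event(feed: dict) -> dict:
    """Auto-build the log sheet, player grid and capture schedule."""
    player_grid = {
        side: team["players"] for side, team in feed["teams"].items()
    }
    return {
        "log_sheet": {"event": feed["event"], "entries": []},
        "player_grid": player_grid,
        "schedule": [{"action": "start_capture", "at": feed["kickoff"]}],
    }

event = prepare_event(feed)
```

Because each event is derived mechanically from the feed, a whole season of fixtures can be prepared by looping the same function over a season’s worth of feeds – which is the point of the “collections” Batista mentions.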
EVS has been a leader in sports production technology for many years, and it too has been working on an AI platform for some time to enhance its products. “Several technology demonstrations have been made using this platform – the first of which is set to be integrated within our Xeebra video refereeing tool,” says EVS’ Johan Vounckx. “We’re able to use AI to intelligently calibrate the field of play in the system, something that if done manually, is a very time-consuming job. But done automatically, the AI allows for operators to easily place graphics on to the pitch. With this in place for video refereeing in soccer, the system can now tell a referee as soon as a player goes offside. Integrating AI into Xeebra speeds up processes, while allowing users to be more precise. These are the two most important things for video refereeing, and will also benefit users when deploying AI in other production workflows.”
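The calibration idea behind this kind of video refereeing can be sketched as fitting a map from image pixels to pitch coordinates from known landmarks, then comparing a player’s pitch position to the offside line. In Xeebra the calibration is learned automatically; in this toy version the landmark correspondences are hand-picked, and the numbers are illustrative only.

```python
# Toy field calibration: fit an affine map pitch = A @ image + t from
# three landmark correspondences, then test a player against the
# offside line. All coordinates here are made up for illustration.

def _solve3(m, rhs):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = rhs[r]
        out.append(det(mi) / d)
    return out

def fit_affine(image_pts, pitch_pts):
    """Fit the affine map from three (image -> pitch) point pairs."""
    m = [[u, v, 1.0] for u, v in image_pts]
    ax = _solve3(m, [p[0] for p in pitch_pts])
    ay = _solve3(m, [p[1] for p in pitch_pts])
    return lambda u, v: (ax[0] * u + ax[1] * v + ax[2],
                         ay[0] * u + ay[1] * v + ay[2])

to_pitch = fit_affine([(0, 0), (100, 0), (0, 100)],   # pixels
                      [(0, 0), (50, 0), (0, 30)])     # metres
player_x, _ = to_pitch(40, 50)  # -> (20.0, 15.0) on the pitch
offside_line_x = 18.0
is_offside = player_x > offside_line_x
```

A real system would use a projective (homography) model rather than an affine one and calibrate it from learned pitch features, but the pipeline – calibrate once, then map every detection into pitch space – is the same.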
Brightcove also sees great potential for AI in sports. “One specific ML/AI application we envision could automate the segmentation of content and help edit video where a human has traditionally been involved,” says Matt Smith, VP and principal media evangelist at Brightcove. “For example, in a tennis match, ML could be trained to interpret increases and decreases in crowd noise and reaction with graphical elements and facial reactions of the players. The beginning and end of a game or match could be appended and segmented using these ML advances. Additional sports can use similar methodology to segment plays and games as well. This approach can help organizations reduce the number of personnel currently required to sit in front of a monitor and use a browser-based editing product to edit the clips.”
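A toy version of the crowd-noise cue Smith describes: treat sustained spikes in per-second audio energy as candidate highlight segments (the end of a point or game). The energy values and threshold below are made up; a trained model would replace the fixed threshold.

```python
# Segment a match by crowd noise: contiguous runs of high audio
# energy are returned as (start, end) index pairs. Toy data only.

def find_segments(energy, threshold=0.7):
    """Return (start, end) index pairs where energy stays above threshold."""
    segments, start = [], None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i                       # a loud run begins
        elif e < threshold and start is not None:
            segments.append((start, i - 1))  # the run just ended
            start = None
    if start is not None:                    # run reaches end of clip
        segments.append((start, len(energy) - 1))
    return segments

crowd = [0.2, 0.3, 0.9, 0.95, 0.4, 0.2, 0.8, 0.85, 0.9, 0.3]
print(find_segments(crowd))  # [(2, 3), (6, 8)]
```

Each returned index pair would then be padded by a few seconds and handed to the clipping tool, replacing the operator who today marks those in and out points by hand.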
Brightcove uses AI to provide more efficient encoding using context-aware processing.
Enhanced quality control is useful at any stage in the content chain, and Interra Systems is exploiting AI for automatic QC and content classification in its Baton QC solution – on show at NAB. “The AI-based capability we are adding to Baton will help operators classify their content and be aware of exactly what is being viewed and heard. For example, it will help operators know if the content has any violence or explicit scenes. This kind of identification has traditionally been done manually, which is error-prone, time-consuming and expensive. Using an AI approach, identification can be largely automated,” explains Anupama Anantharaman, VP, product marketing and business development at Interra Systems. “Automating this process will make it very efficient and less resource-intensive, thereby reducing costs.”
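A hedged sketch of what such automated classification might look like downstream of the model: per-scene classifier scores rolled up into a compliance report. The categories, scores and threshold are illustrative inventions, not Baton’s actual output.

```python
# Roll per-scene classifier scores up into a compliance report.
# SCENE_SCORES stands in for the output of a trained content
# classifier; values and categories are invented for illustration.

SCENE_SCORES = [
    {"violence": 0.05, "explicit": 0.01},
    {"violence": 0.82, "explicit": 0.02},  # one flagged scene is enough
    {"violence": 0.10, "explicit": 0.03},
]

def classify(scene_scores, flag_at=0.5):
    """Flag any category whose score exceeds the threshold in any scene."""
    report = {}
    for category in scene_scores[0]:
        peak = max(s[category] for s in scene_scores)
        report[category] = {"peak": peak, "flagged": peak >= flag_at}
    return report

report = classify(SCENE_SCORES)
```

Keeping the peak score alongside the boolean flag lets a human reviewer triage borderline content instead of re-watching everything, which is where the cost saving comes from.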
AI at the sharp end
AI is also proving its worth at the delivery end of the media chain. Conviva offers a Video AI Platform for OTT publishers and broadcasters. “Today, publishers spend a considerable amount of time and effort to configure and monitor manual alerts to detect service delivery issues,” says Ed Haslam, CTO at Conviva. “This often requires large operational teams who, while crucial to the business, are frequently unable to find or configure every possible alert. Furthermore, these teams must spend valuable hours diagnosing the root cause of each and every issue. From our work with the world’s largest publishers over the past ten years, we recognized the need for a solution addressing these pain points, leading to the development of Video AI Alerts.
“Video AI Alerts automatically detects and diagnoses service delivery issues, enabling publishers to deliver a consistently higher quality of experience while also better managing operational costs. Their viewers, in turn, experience streaming video more seamlessly with less buffering and fewer interruptions, similar to traditional broadcast television delivery,” Haslam adds. “HBO is one example of how our customers are reducing the time and expense associated with detecting and fixing streaming delivery issues on the internet using AI. Here is a quote from Vikrant Kelkar, site reliability lead at HBO: ‘The automatic configuration and diagnostic root cause analysis data delivered with every alert has proved incredibly effective for HBO GO and HBO NOW. In one case, a stream had been misconfigured and without the full stream URL reporting with each alert, we would not have been able to see nor diagnose the cause of this issue. With Conviva’s Video AI Alerts, we were able to catch the issue right away.’”
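The core shift Haslam describes – from hand-tuned static thresholds to alerts learned from the data itself – can be illustrated with a simple baseline-and-deviation detector on a quality-of-experience metric. This is a generic anomaly-detection sketch on toy data, not Conviva’s algorithm.

```python
# Simplified automatic alerting: learn a trailing baseline for a QoE
# metric (here, buffering ratio) and flag samples that deviate far
# from it, instead of maintaining hand-configured static thresholds.

import statistics

def detect_anomalies(samples, window=5, sigmas=3.0):
    """Flag sample indices that sit far outside the trailing baseline."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard flat history
        if abs(samples[i] - mean) > sigmas * stdev:
            alerts.append(i)
    return alerts

buffering_ratio = [0.010, 0.012, 0.011, 0.009, 0.010,
                   0.011, 0.010, 0.250, 0.012, 0.010]
print(detect_anomalies(buffering_ratio))  # index 7 is the incident
```

A production system would additionally segment the metric by CDN, device and stream (so the faulty stream URL surfaces with the alert, as in the HBO anecdote), but the alerting principle is the same.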
The Conviva AI platform detects and diagnoses service delivery issues.
Imagine Communications is also exploiting AI in its playout and networking solutions. “Imagine Communications’ Zenium platform enables the company’s playout and networking solutions to leverage AI-based platforms and workflows. Imagine conducted a demonstration, for example, at last year’s IBC show, that showcased Zenium’s ability to work with cognitive computing platforms from IBM to employ machine learning to create closed-caption text on the fly,” says Glodina Lostanlen, CMO at Imagine Communications. “The benefit to customers is that they are able to seamlessly and quickly incorporate cognitive capabilities into their broadcast, playout and distribution workflows. The component-based nature of microservices design principles allows discrete functions to be added or incorporated into workflows and operations without disruption, allowing media companies to essentially trade out one function for another or incorporate new capabilities through a simple software procedure … The same building-block approach can also be used to seamlessly add the algorithms that constitute the AI application to a workflow.”
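The “building-block” idea Lostanlen describes can be shown in miniature: if a playout chain is a set of named, swappable steps, dropping in an AI captioner touches one entry while the rest of the chain is untouched. Everything below – the step names and the toy frame dict – is invented for illustration and is not Zenium’s API.

```python
# Miniature of a component-based playout chain: each stage is a named,
# swappable function. Requires Python 3.9+ for the dict "|" operator.

def decode(frame):       return frame | {"decoded": True}
def caption_stub(frame): return frame | {"captions": None}
def caption_ai(frame):   return frame | {"captions": "auto-generated"}
def encode(frame):       return frame | {"encoded": True}

chain = {"decode": decode, "caption": caption_stub, "encode": encode}

def run(chain, frame):
    """Push a frame through every stage of the chain in order."""
    for step in chain.values():
        frame = step(frame)
    return frame

chain["caption"] = caption_ai  # swap one component; no other change
out = run(chain, {"id": 7})
```

That one-line swap is the software analogue of the “simple software procedure” quoted above: the cognitive function is just another interchangeable stage in the workflow.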
As with most other companies quoted in this article, Imagine Communications finds itself bound by confidentiality in discussing actual projects, but in late 2017, Imagine issued a press release highlighting work that it was doing with Sky Italia in the AI realm. “Imagine disclosed that the European operator’s compression operations were being augmented with the addition of artificial intelligence capabilities,” says Glodina Lostanlen. “The customizations, made possible through Zenium, were designed to enable Sky Italia to automatically optimize the quality of content and reduce costs through the insertion of machine learning directly into the data flow, bringing new and unique functionality to an existing product. Among the benefits of the customization project cited by Sky Italia was the ability to streamline its service chain, eliminating a separate and external function by incorporating it into an existing workflow.”
“Artificial Intelligence (AI) and Machine Learning (ML) are very important to the evolution and maturation of video and the associated data therein,” says Brightcove’s Matt Smith. “Specifically, we have deployed an ML technology, Context Aware Encoding, that applies complex algorithms to the analysis of VOD source files, coupled with user-defined audience data (the platforms that will consume the content), and a discrete ladder of renditions is created based on this data and the calculations. The benefit of Context Aware Encoding is a significant reduction in the cost to deliver and store the video renditions. Early customers have seen a variety of savings across their libraries, driven largely by the type of content being processed. The more complex the content (action in frame), the shallower the savings. Conversely, less complex content (like a news anchor in frame) drives significant savings. Across the board, our customers are seeing savings of between 25% and 50% on storage and delivery.” Backing that claim up, Smith continues: “Young Hollywood, one of the leading producers and distributors of celebrity-driven digital media, initially configured the technology for two of its brands – Young Hollywood and Young Hollywood TV. In the early innings of using CAE, the brands have seen reductions of up to 35% in bandwidth and 23% in storage.”
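The idea behind per-title, context-aware encoding can be reduced to a simple decision: measure how complex a title is, then pick a rendition ladder to match, so a static news shot never pays action-movie bitrates. The complexity scores, resolutions, bitrates and cut-offs below are invented for illustration and are not Brightcove’s actual values.

```python
# Sketch of context-aware rendition selection: map a measured
# content-complexity score (0-1) to one of several bitrate ladders.
# All numbers are illustrative, not real encoder settings.

LADDERS = {
    "low":  [(1920, 2500), (1280, 1400), (640, 500)],   # (width, kbps)
    "mid":  [(1920, 4000), (1280, 2200), (640, 800)],
    "high": [(1920, 6000), (1280, 3500), (640, 1200)],
}

def pick_ladder(complexity: float):
    """Choose a bitrate ladder from a 0-1 content-complexity score."""
    if complexity < 0.35:
        return LADDERS["low"]    # e.g. a news anchor in frame
    if complexity < 0.7:
        return LADDERS["mid"]
    return LADDERS["high"]       # dense motion needs the full rates

news_anchor = pick_ladder(0.2)   # simple scene -> cheapest ladder
action_clip = pick_ladder(0.85)  # complex scene -> full-rate ladder
```

This also explains the savings pattern Smith reports: complex titles land on the high ladder anyway (shallow savings), while simple titles drop to far cheaper renditions, which is where the 25-50% storage and delivery reductions come from.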
The article continues in part 2.