AI is being applied to solve the most difficult challenges in media production and distribution by “learning” from a large set of data.
As studio content creation and live production grow more demanding and complex, producers are finding that a reliable, fast and efficient assistant is invaluable… even if that assistant is actually a series of artificial intelligence (AI) and machine learning software algorithms. At the upcoming 2018 NAB Show, a multitude of broadcast equipment vendors will be touting AI features and capabilities in their products and systems to help users perform tasks that would otherwise not be humanly possible.
Often implemented as neural networks and machine learning, AI is being applied to solve the most difficult challenges in media production and distribution by “learning” from a large set of data. The technology can help an editor find a single clip, in mere seconds, among storage arrays holding petabytes of information. It can automatically calibrate to a studio environment or field of play and instruct devices to react accordingly. And it can execute AI-powered image framing, highlights creation and graphics insertion, and even serve as a virtual program director.
Indeed, AI solutions promise to simplify and enhance everything from systems to processes, in turn cutting production costs, speeding up content syndication and significantly reducing the hours required for some of the most labor-intensive tasks. In some cases a full production crew will no longer be necessary to produce a live multi-camera production. This approach, called at-home or REMI (remote integration model) production, is being implemented more and more every day for many of the largest sporting events around the world.
Engineers and marketers are now looking to bring these streamlined capabilities to mid-level projects—where budgets and resources are limited—by enabling systems to replicate human decision-making, make it faster and more accurate, and adjust naturally to unpredictable occurrences during a live production.
“It’s clear that AI will prove to be very helpful for a wide variety of production and post tasks that heretofore are labor intensive and take time,” said James Stellpflug, VP Product Marketing at EVS. “Recent advances in neural network technology, and more specifically the ability to execute complex networks in real-time, have opened the door for introducing a new kind of creative automation to the broadcast industry.”
EVS engineers have been hard at work building different types of intelligence into the company’s portfolio of live production products (servers, routers and production switchers). The idea, they said, is not to replace people with machines, but to help humans do their jobs better (e.g., faster and more efficiently). As a result, setup time is significantly reduced and operators can, for example, easily place graphics onto a calibrated field with the highest level of precision to aid their decision-making process.
At the NAB Show, EVS will show a new version of its Xeebra referee system that features AI integration to enable users to automatically calibrate the field of play – something that’s time-consuming if done manually.
A robotics company called MRMC (owned by Nikon) is introducing its Polycam Chat solution, which uses AI to simplify and augment smaller studio environments while minimizing production costs. It’s designed to work with MRMC’s AFC100 robotic camera system and uses face detection in combination with limb recognition for a high degree of accuracy and stability. Polycam Chat automates camera operation for up to four presenters and guests in one studio and can easily keep a talking head stable within the frame.
Another robotics company, Telemetrics, has added AI to a new roving pedestal so it can avoid physical obstructions, and has built Automatic Shot Correction technology into its RCCP remote camera control panels to augment studio automation systems. The AI-powered reFrame technology helps users of automated news studios overcome unpredictable occurrences in the studio—like the talent slightly moving out of frame or an ill-positioned over-the-shoulder graphic or two-shot—and make quick adjustments automatically, on the fly.
“Operators of automated studios don't want the hassle of having to keep their eye on every camera while attending to other parts of the newscast,” said Michael Cuomo, vice president of Telemetrics. “ReFrame allows them that security that pre-planned shots are all trimmed correctly, each and every time.”
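The kind of automatic shot correction described above can be sketched as a simple proportional control rule: measure how far the detected talent has drifted from the planned frame position and nudge pan/tilt accordingly. This is a hypothetical illustration only, not Telemetrics’ algorithm:

```python
def shot_correction(face_center, target=(0.5, 0.45), deadband=0.05, gain=0.4):
    """Return (pan, tilt) corrections in normalized units.

    face_center: detected face position in the frame, (x, y) in [0, 1].
    target: where the shot plan wants the face to sit.
    deadband: ignore small drift so the camera does not hunt.
    gain: fraction of the error corrected per control step.
    """
    dx = face_center[0] - target[0]
    dy = face_center[1] - target[1]
    pan = gain * dx if abs(dx) > deadband else 0.0
    tilt = gain * dy if abs(dy) > deadband else 0.0
    return pan, tilt

# Talent drifted to the right of the planned position: nudge pan to re-center.
print(shot_correction((0.68, 0.50)))
```

The deadband is the important design choice: without it, a system chasing every pixel of drift would produce the kind of constant micro-moves that make automated shots look unstable.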
At the NAB Show an online video processing company called Bitmovin will demonstrate AI-powered encoding, claiming to dramatically speed up processing and enable service providers to deliver significant improvements in video quality. This AI-powered encoding technology works by continuously learning the parameters used in previous encodes, so that it can apply AI-optimized settings to every new video file.
By enriching its containerized encoding software (which enables video to be split into chunks for more efficient encoding) with machine learning capabilities, Bitmovin is able to achieve both faster processing times and significantly higher quality with no increase in bandwidth.
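The “learn from previous encodes” idea can be illustrated with a toy nearest-neighbor lookup: seed a new title’s encoder settings from the most similar content already encoded. This is purely a hypothetical sketch (the feature names, history and settings below are invented); Bitmovin’s actual model is proprietary.

```python
import math

# Toy history of past encodes: content features -> settings that hit the
# quality target. (Invented values for illustration.)
history = [
    ({"motion": 0.9, "detail": 0.7}, {"crf": 21, "preset": "slow"}),
    ({"motion": 0.2, "detail": 0.3}, {"crf": 27, "preset": "medium"}),
    ({"motion": 0.5, "detail": 0.9}, {"crf": 23, "preset": "slow"}),
]

def predict_settings(features):
    """Pick settings from the most similar previously encoded title."""
    def dist(past):
        return math.dist(
            [features["motion"], features["detail"]],
            [past["motion"], past["detail"]],
        )
    best_features, best_settings = min(history, key=lambda h: dist(h[0]))
    return best_settings

# A high-motion sports clip inherits settings learned from similar content.
print(predict_settings({"motion": 0.85, "detail": 0.6}))
```

Every completed encode adds another data point to the history, which is what makes the system “continuously learn” rather than rely on fixed per-genre presets.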
“Bandwidth should never hold back operators from delivering the best possible quality experiences,” said Stefan Lederer, CEO and co-founder at Bitmovin, adding that its AI-enabled approach allows each video to be handled far more efficiently while delivering significant quality improvements. “Artificial intelligence is a step-change in encoding, allowing operators to significantly improve the visual quality of streams, eliminate buffering and improve consumer satisfaction.”
Tedial, a supplier of media asset management systems, will introduce Smart Live at the NAB Show, a live event support tool that leverages AI algorithms to increase the number of highlights created automatically, thus reducing production costs and boosting revenues for production companies. The company will demonstrate Smart Live’s automatic highlight creation feature and its integration with AI engines.
“By tightly integrating with AI tools, Smart Live can generate an increased number of highlights during or after an event, and deliver to a very targeted audience increasing the potential for significant growth in fan engagement,” said Jerome Wauthoz, vice president of products for Tedial.
Before any action, Smart Live ingests a data feed and automatically creates an event inside its metadata engine. Simultaneously Smart Live automatically creates the corresponding log sheets, the player grids and a schedule of the event. All these preparations are linked and organized by collections, so an entire season of sports events can be prepared automatically in advance.
During events, live data is ingested and the system can be configured to automatically create clips based on actions, keywords or logged occurrences. Smart Live automatically pushes content to AI engines, facilitating fast and reliable video and audio recognition to generate additional locator data and annotate media proxies. The system can also automatically publish clips or push content to social media platforms. An internal metadata engine can be configured to create an automatic metadata ingest process, addressing even the most demanding and complex sports workflows.
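The keyword-triggered clipping step can be sketched in a few lines: scan logged occurrences from the data feed and mark clip in/out points around each trigger. The trigger list and pre/post-roll values below are invented for illustration, not Smart Live’s actual configuration:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    label: str
    mark_in: float   # seconds on the event timeline
    mark_out: float

# Keywords that trigger a clip, with (pre-roll, post-roll) in seconds.
# (Hypothetical trigger list; a real system makes this configurable.)
TRIGGERS = {"goal": (20, 10), "penalty": (15, 10), "red card": (10, 8)}

def clips_from_feed(feed):
    """Turn logged occurrences from a live data feed into clip candidates."""
    clips = []
    for timestamp, text in feed:
        for keyword, (pre, post) in TRIGGERS.items():
            if keyword in text.lower():
                clips.append(Clip(text, max(0, timestamp - pre), timestamp + post))
    return clips

feed = [(312.0, "GOAL - #9 header from the corner"), (1840.5, "Yellow card #4")]
for clip in clips_from_feed(feed):
    print(clip)
```

In a production system the clips would then be pushed to AI engines for enrichment (speech-to-text, logo and face recognition) before publication, as the article describes.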
Another vendor, Mobile Viewpoint, a supplier of IP contribution solutions, will unveil NewsPilot, which uses AI to automate the low-cost delivery of content from remote locations. It can be used in studio environments or in the field to completely automate news production without the need for a camera crew or a director. For example, smaller broadcasters or independent reporters and producers could use NewsPilot for live field news and event reporting, or in-studio news and interview production.
NewsPilot consists of three PTZ (pan/tilt/zoom) cameras and Mobile Viewpoint’s Automated Studio control box, which incorporates Mobile Viewpoint’s AutoPointer and Virtual Director software. It also includes CameraLink, a robotic arm which can move a 3kg PTZ camera much like a traditional dolly arrangement, offering the same camera control normally associated with high quality news productions.
Virtual Director mimics a human director by using an AI engine that analyses audio signals (such as microphone levels), video images, and other sources such as autocue and rundown scripts, to identify which camera to switch to and control. Based on inputs from Virtual Director and 3D sensors, AutoPointer automatically points cameras and controls shots to track and focus on TV presenters.
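As a rough illustration of this kind of decision logic, here is a toy camera-selection rule driven only by microphone levels, with hysteresis so brief interjections don’t cause rapid cutting. It is a hypothetical sketch, not Mobile Viewpoint’s implementation, which as described above also weighs video analysis, autocue and rundown data:

```python
def pick_camera(mic_levels, current, threshold_db=6.0):
    """Choose which camera to switch to, based on mic levels.

    mic_levels maps camera id -> level in dB of the presenter it frames.
    A cut only happens if another mic beats the current camera's mic by
    threshold_db, which acts as hysteresis against rapid back-and-forth cuts.
    (Toy decision rule for illustration only.)
    """
    loudest = max(mic_levels, key=mic_levels.get)
    if loudest != current and mic_levels[loudest] - mic_levels[current] >= threshold_db:
        return loudest
    return current

# Guest on camera 2 starts talking much louder than the host on camera 1,
# so the virtual director cuts to camera 2.
print(pick_camera({"cam1": -30.0, "cam2": -18.0, "cam3": -45.0}, current="cam1"))
```

The threshold is what separates a usable automatic director from an unwatchable one: without it, crosstalk between presenters would trigger a cut on nearly every frame.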
“To meet the viewer expectations of instant news and coverage of anything that moves, broadcasters and brand owners are pressured to accelerate their output. This is rarely possible with traditional live broadcast set-ups that are expensive and resource-intensive,” said Michel Bais, CEO at Mobile Viewpoint. “Made possible by our AI algorithms and capabilities, NewsPilot and IQ Sports Producer are the perfect antidote, removing the complexity and much of the resource required for live broadcasting, and bringing greater flexibility to live event production.”
Veritone, an early adopter of AI technology, has developed a suite of cloud-based software tools it calls aiWARE that uses a proprietary, machine-learning orchestration layer (“Conductor”). Serving as a search engine aggregator, the software not only employs multiple AI engines at once, but it also chooses the best-available engine or engines spread out across the globe.
For example, with natural language processing, aiWARE can predict the accuracy of each transcription engine based on the characteristics of the media being processed. Conductor then automatically selects the best engine to process that file. The newest version of Conductor under development can identify the best engine for each portion of a file, applying multiple engines when needed to fill accuracy gaps.
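A toy version of that engine-selection step might look like the following, where per-engine accuracy profiles (entirely invented here) are matched against the traits detected in a media file. Veritone’s Conductor is of course far more sophisticated:

```python
# Hypothetical accuracy profiles: how well each transcription engine has
# scored on media with given characteristics (e.g., from past validated jobs).
PROFILES = {
    "engine_a": {"telephone": 0.72, "broadcast": 0.93, "noisy": 0.60},
    "engine_b": {"telephone": 0.88, "broadcast": 0.85, "noisy": 0.70},
    "engine_c": {"telephone": 0.75, "broadcast": 0.80, "noisy": 0.82},
}

def pick_engine(media_traits):
    """Predict the best engine: highest mean expected accuracy over the
    traits detected in this file."""
    def expected(engine):
        scores = PROFILES[engine]
        return sum(scores[t] for t in media_traits) / len(media_traits)
    return max(PROFILES, key=expected)

# A clean broadcast interview routes to the engine strongest on that profile;
# a noisy phone recording routes elsewhere.
print(pick_engine(["broadcast"]))
print(pick_engine(["telephone", "noisy"]))
```

The per-segment refinement the article mentions would apply the same scoring to individual portions of a file, stitching together results from whichever engine is strongest on each segment.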
“AI, as most people know it, is actually artificial narrow intelligence [ANI], which represents a class of AI technology designed to perform a specific task,” said Drew Hilles, senior vice president of Veritone. “This narrow approach is expected to dominate the AI market in the coming years.”
Most people are extremely familiar with this type of technology and use it regularly without even realizing it: Google Translate, for example, or geolocation searches via Apple Maps. Hilles said these commonly used resources involve a type of machine learning to complete a single-problem task like “translate this.”
“This type of technology is called a cognitive engine, and translation is just one of many possibilities,” he said.
At the NAB Show, Veritone and storage systems provider Quantum will jointly show “aiWARE for Xcellis,” which leverages Quantum’s StorNext file system and its range of Xcellis storage solutions (cloud, LTO tape, SSD and HDD spinning disk). StorNext serves as a database for a storage library that can be searched quickly, although the larger the library, the slower the search. The combination of the two will enable users to apply AI to on-premises stored content that previously could not be leveraged for this purpose, and to add new content for analysis as the data is captured.
As broadcasters and production companies look to maximize their available resources, AI is playing an increasingly important role. It looks to be the best production assistant a crew could ask for.
[Editor’s Note: The NAB Show will host a number of AI-related events, including a half-day program entitled “Get Ready for Machine Learning and Artificial Intelligence,” which takes place Tuesday, April 10, from 9 a.m. to 12 p.m. at the Las Vegas Convention Center. It will include six sessions that highlight the various ways machine intelligence is impacting content creation.
The panels will explore how machine intelligence can increase productivity, efficiencies and creativity in production planning, animation, visual effects, post-production and localization. Attendees will learn the current capabilities of neural network-based tools while also seeing the potential of these innovations to alter jobs, workflows and the nature of content itself.]