Media Distillery is among a new breed of vendors promoting automated object- and voice-based metadata generation.
Metadata is well known to hold the keys to good user experiences by making content readily searchable and enabling more compelling and relevant recommendations, but it has been held back by limited depth and the laborious manual work needed to produce it.
Metadata therefore has the potential for significant competitive differentiation, helping legacy broadcasters and operators more fully exploit the one advantage they often have over the big Internet players: their content, which especially in the case of TV shows and live sports has scope for exploitation beyond the existing footprint via OTT distribution over the Internet. Unfortunately, that video is often described only at a basic level of detail, typically little beyond genre and title, and today's metadata is often inconsistent and reliant on diverse, frequently incompatible third-party sources.
These deficits have spawned a new breed of metadata specialist taking advantage of more advanced tools under the banners of Artificial Intelligence (AI) and Machine Learning (ML). Although overused and overhyped, these terms loosely apply to software-based methods that encapsulate various forms of human expertise, enabling tasks to be automated and performed much faster. In the case of ML, there is also some capability to improve through continued exposure to relevant data sets, which can, for example, enable better recommendations in the light of feedback from users.
Automatic Detail Identification
Media Distillery, based in Amsterdam, Netherlands, is one such vendor. It has developed software designed to extract detailed metadata at the scene or frame level from video in real time. This includes identification of people via facial recognition applied to the relevant part of a frame, as well as logos, objects such as buildings, subtitles, and words or phrases within the audio stream.
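The output of such analysis is essentially a stream of timestamped, typed labels. The source does not describe Media Distillery's actual data model, but a minimal sketch of what frame-level detections and a label index might look like (schema, field names, and thresholds are all illustrative assumptions) is:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One frame-level metadata record (hypothetical schema)."""
    timestamp: float   # seconds from the start of the asset
    kind: str          # e.g. "face", "logo", "object", "speech"
    label: str         # e.g. a person's name or a spoken phrase
    confidence: float  # detector confidence, 0.0-1.0

def index_by_label(detections, min_confidence=0.8):
    """Build a label -> [timestamps] index, dropping low-confidence hits."""
    index = {}
    for d in detections:
        if d.confidence >= min_confidence:
            index.setdefault(d.label, []).append(d.timestamp)
    return index
```

An index like this is what makes the asset searchable at a finer grain than title and genre: each person, logo or phrase maps directly to the moments where it appears.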
Speaking at the recent OTT World Summit in London, Media Distillery CEO Roland Sars argued that such metadata can take recommendations to a new level of detail in keeping with emerging content consumption trends, especially among Millennials. The technology can not only identify titles on the basis of finer-grained search data, but also perform “intra” searches within whole content items such as movies or TV shows to extract clips of particular interest. “Current recommendations are often based on collaborative filtering, but with this metadata we can make these more relevant by probing more deeply what consumers are really watching. They may prefer to receive short clips of interest rather than whole programs. Current metadata is too limited for this, but we feed much richer metadata,” said Sars.
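The “intra” search Sars describes amounts to turning timestamped hits for a label into playable clip windows. As a hedged sketch (not Media Distillery's implementation; padding and merge behaviour are assumptions), given an index mapping labels to sorted timestamps, clip extraction could look like:

```python
def find_clips(index, query, clip_padding=15.0):
    """Return (start, end) windows around each timestamp where the query
    label appears. `index` maps labels to sorted timestamp lists, as a
    frame-level analyser might produce; `clip_padding` seconds are added
    on either side, and overlapping windows are merged into one clip."""
    clips = []
    for t in index.get(query, []):
        start = max(0.0, t - clip_padding)
        end = t + clip_padding
        if clips and start <= clips[-1][1]:
            # Overlaps the previous window: extend it instead of splitting.
            clips[-1] = (clips[-1][0], end)
        else:
            clips.append((start, end))
    return clips
```

For a sports asset where “goal” is detected at 100s, 110s and 500s, this yields two clips rather than three, since the first two windows overlap.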
Media Distillery CEO Roland Sars argues that granular metadata will revolutionize the way people watch as well as find content.
Such detailed metadata can also improve the viewing experience in ways beyond better recommendations. Sars pointed out that 75% of linear programs do not start on time, but that Media Distillery’s technology can locate the correct point by recognizing visual or audio cues. “We can point to the right start or end time without bias to channels,” said Sars. “This is becoming more important with the trend towards replay watching away from linear.”
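Conceptually, finding the true start means scanning the detection timeline for the first known opening cue (a title card, a theme-music fingerprint) and measuring its offset from the scheduled EPG boundary. A minimal sketch under those assumptions (the cue labels and record shape are illustrative, not Media Distillery's API):

```python
def true_start_offset(detections, opening_cues, epg_start=0.0):
    """Estimate how late a program actually starts.

    `detections` is a list of (timestamp, label) pairs from a frame-level
    analyser; `opening_cues` is the set of labels that mark a program's
    opening (e.g. a title card). Returns the offset in seconds relative
    to the scheduled EPG start, or None if no cue was found.
    """
    for timestamp, label in sorted(detections):
        if label in opening_cues:
            return timestamp - epg_start
    return None  # fall back to the EPG boundary
```

A replay service could then trim the recording to start at the returned offset instead of the padded EPG window.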
Another vendor applying AI tools to metadata creation is Finland’s Valossa, which has focused on generating more descriptive tags and categories to improve navigation as well as search. Like Media Distillery, Valossa emphasizes the importance of automating metadata generation, given that manual inspection is becoming far too time consuming and expensive as the volume of available content proliferates. Valossa is also promoting its metadata for more effective ad targeting, since there is scope for matching an ad not just to a broad genre but precisely to what is showing within a program at a given time. Ads can be matched to scenes or spoken words within the content.
A longer-term goal is to make video search as granular as Google's text search, enabling any content, including user-generated material, to be located almost instantly from a relevant search key. Valossa is heading in this direction with a search engine currently under development, building on its existing scene-level metadata to allow content discovery based on natural language.
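The text-search analogy is apt because the underlying machinery can be the same: an inverted index from words to the scenes whose transcripts or tags contain them. A minimal sketch (deliberately simplified; Valossa's engine is not described in the source, and real systems add stemming, synonyms and ranking models):

```python
def build_inverted_index(scenes):
    """Map each token to the set of scene ids whose transcript contains it."""
    index = {}
    for scene_id, transcript in scenes.items():
        for token in set(transcript.lower().split()):
            index.setdefault(token, set()).add(scene_id)
    return index

def search(index, query):
    """Rank scene ids by how many query tokens they contain."""
    scores = {}
    for token in query.lower().split():
        for scene_id in index.get(token, ()):
            scores[scene_id] = scores.get(scene_id, 0) + 1
    return sorted(scores, key=lambda s: (-scores[s], s))
```

Once scenes rather than whole assets are the indexed unit, a natural-language query can land the viewer at a specific moment instead of a title page.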
One problem these systems do not address on their own is the integration of metadata from multiple sources. In fact, this could get worse with greater granularity in the absence of common standards at this level. As UK-based video systems integrator and consultancy Piksel argues consistently, metadata management is often compromised by inconsistencies across different sources at content ingest, which reduces the effectiveness of discovery and recommendation. Given the growing reliance on metadata, Piksel now suggests that broadcasters and distributors should consider replacing third-party sources and instead create their own unique metadata using tools that it or its rivals can provide.
This is not feasible for many smaller service providers, so the likes of Piksel also offer services to create bespoke packages that work on metadata at the scene level, using visual identification and natural language processing of closed captions, as well as speech recognition.
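The ingest-time inconsistency Piksel describes is often as mundane as different feeds naming the same field differently. A hedged sketch of normalizing such records onto one canonical schema (the alias table and field names are invented for illustration):

```python
# Hypothetical field aliases seen across third-party metadata feeds.
FIELD_ALIASES = {
    "title": {"title", "name", "programme_title"},
    "genre": {"genre", "category", "genres"},
    "synopsis": {"synopsis", "description", "summary"},
}

def normalize_record(record):
    """Map one source's record onto a canonical schema.

    Field names are matched case-insensitively against the alias table;
    the first value found wins for each canonical field. Fields with no
    recognized alias are simply absent from the result.
    """
    out = {}
    for canonical, aliases in FIELD_ALIASES.items():
        for key, value in record.items():
            if key.lower() in aliases:
                out[canonical] = value
                break
    return out
```

Without a normalization layer like this at ingest, the same program can look like two different assets to a recommendation engine, which is exactly the failure mode Piksel warns about.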