Ultra HD Forum Announces Watermarking API

The Ultra HD Forum has confirmed that it will launch its first API for forensic watermarking before the end of 2020.

This announcement coincided with the release of the Forum’s latest guidelines for members, version 2.4.

The Watermarking API for Encoder Integration is a vendor-agnostic API intended to stimulate distribution of premium UHD content, especially over the internet. While forensic watermarking technology is available from several vendors, integrating it with encoders can be challenging and an impediment for content providers seeking to distribute online using Adaptive Bitrate Streaming.

Watermark preprocessing is designed to reduce the computational load downstream by performing the generic steps around the transcoding stage. The first step duplicates each segment and imperceptibly marks the copies, creating two variants, A and B, of the same segment. These variants are perceptually identical but carry different bits of information. The second step, known as the embedding stage, generates a unique sequence of A and B variant segments for each session or “instance” of the video, which is then translated into the chosen manifest format for delivery, such as a DASH MPD or an Apple HLS playlist.
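
To make the two-step flow concrete, here is a minimal sketch in Python of the embedding idea described above: a repeatable per-session A/B sequence is derived and then mapped to variant segment names that a packager could reference in a manifest. The function names, the HMAC-based derivation and the segment naming template are illustrative assumptions, not part of the Ultra HD Forum API.

```python
# Minimal sketch of the A/B embedding idea described above. This is not the
# Ultra HD Forum's actual API; all names and conventions here are hypothetical.
import hashlib
import hmac

def ab_sequence(session_id: str, secret: bytes, num_segments: int) -> str:
    """Derive a repeatable per-session sequence of 'A'/'B' variant choices."""
    bits = []
    counter = 0
    while len(bits) < num_segments:
        digest = hmac.new(secret, f"{session_id}:{counter}".encode(), hashlib.sha256).digest()
        for byte in digest:
            for shift in range(8):
                bits.append("A" if (byte >> shift) & 1 else "B")
                if len(bits) == num_segments:
                    return "".join(bits)
        counter += 1
    return "".join(bits)

def segment_urls(sequence: str, template: str = "seg_{index:05d}_{variant}.m4s"):
    """Translate the A/B sequence into per-segment names for a manifest."""
    return [template.format(index=i, variant=v) for i, v in enumerate(sequence)]

if __name__ == "__main__":
    seq = ab_sequence("session-42", b"demo-secret", 12)
    print(seq)                    # deterministic 12-symbol A/B pattern for this session
    print(segment_urls(seq)[:3])  # first three variant segment names
```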

At the receiving end, the process is reversed to recover the marks, which then enables the source device of a given video stream to be identified. This process is agnostic to the actual watermarking technology involved, notable vendors of which include Verimatrix, Kudelski’s Nagra, Irdeto and ContentArmor.
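
The recovery side can be pictured in the same simplified model. The sketch below is again hypothetical, working on abstract A/B labels rather than on the video signal as a real detector would: it compares an observed sequence, possibly with unreadable segments, against the sequences issued to known sessions and returns the closest match.

```python
# Hedged sketch of the recovery step: match the A/B pattern observed in a
# leaked stream against the sequences issued to known sessions.
def best_matching_session(observed: str, issued: dict[str, str]) -> tuple[str, float]:
    """observed may contain '?' where a segment's variant could not be read."""
    def score(candidate: str) -> float:
        usable = [(o, c) for o, c in zip(observed, candidate) if o != "?"]
        if not usable:
            return 0.0
        return sum(o == c for o, c in usable) / len(usable)

    return max(((sid, score(seq)) for sid, seq in issued.items()), key=lambda x: x[1])

issued = {"session-41": "ABABABABABAB", "session-42": "ABBABAABABBA"}
print(best_matching_session("ABB?BAAB?BBA", issued))  # ('session-42', 1.0)
```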

Choice of technology hinges on various factors, including efficiency and resistance to attacks on the watermarking system itself. Given that watermarking has emerged as the only technology reliable and robust enough to enable source identification of illicitly redistributed streams over the internet, it is not surprising that pirates have devoted considerable resources to attempts at disabling the marks.

One method commonly employed is the collusion attack, where several watermarked streams of the same content are combined, mixing segments from different sessions in an attempt to wash out the marks and obtain content that can no longer be traced to a single source.
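
Staying with the simplified A/B-label model from the earlier sketches, the toy example below shows the effect a collusion attack is after: when two sessions interleave their segments, the resulting pattern no longer matches either issued sequence cleanly. The sequences and the mixing rule are made up for illustration.

```python
# Toy illustration of the collusion idea, using the same simplified
# A/B-label model as the earlier sketches. Both sequences are made up.
seq_1 = "ABABABABABAB"   # hypothetical sequence issued to session 1
seq_2 = "ABBABAABABBA"   # hypothetical sequence issued to session 2

# Colluders alternate which stream each segment is taken from.
colluded = "".join(a if i % 2 == 0 else b
                   for i, (a, b) in enumerate(zip(seq_1, seq_2)))

def agreement(x: str, y: str) -> float:
    """Fraction of segment positions where two sequences carry the same variant."""
    return sum(p == q for p, q in zip(x, y)) / len(x)

print(colluded)                    # 'ABAAAAABABAA'
print(agreement(colluded, seq_1))  # 0.75
print(agreement(colluded, seq_2))  # 0.75: neither session is cleanly implicated
```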

The other common method that has proved difficult to counter is the rotation attack, which targets the synchronization needed for successful watermark extraction by changing the relative positions of pixels in a watermarked video frame.
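
A rough, vendor-agnostic calculation illustrates why even a small rotation is disruptive: pixels far from the centre of the frame are displaced by several pixel widths, which breaks the spatial alignment a naive, pixel-aligned extractor relies on.

```python
# Back-of-the-envelope look at the rotation attack: for a 1920x1080 frame
# rotated by 0.5 degrees about its centre, pixels near the corners move by
# many pixel widths, defeating extraction that assumes fixed pixel positions.
import math

def corner_displacement(width: int, height: int, degrees: float) -> float:
    radius = math.hypot(width / 2, height / 2)  # centre-to-corner distance
    theta = math.radians(degrees)
    return 2 * radius * math.sin(theta / 2)     # chord length of the arc

print(round(corner_displacement(1920, 1080, 0.5), 1))  # roughly 9.6 pixels
```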

Watermarking technology vendors have developed various counters to these attacks, including building redundancy into the marks so that not all elements need to be recovered for source identification. Naturally, broadcasters, content owners and video service providers require some independent verification that acceptable levels of robustness against piracy have been achieved.
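
One very simple form of such redundancy can be sketched as repetition coding over the A/B labels: each payload symbol is spread across several segments and recovered by majority vote over whatever copies survive. Real systems typically use stronger error-correcting codes; the snippet below is only the idea in miniature and assumes the same abstract model as the earlier sketches.

```python
# Hedged sketch of redundancy via repetition coding: the payload can be
# recovered even when several segments are unreadable ('?').
from collections import Counter

def encode(payload: str, repeats: int = 3) -> str:
    return "".join(symbol * repeats for symbol in payload)

def decode(observed: str, repeats: int = 3) -> str:
    out = []
    for i in range(0, len(observed), repeats):
        votes = Counter(c for c in observed[i:i + repeats] if c in "AB")
        out.append(votes.most_common(1)[0][0] if votes else "?")
    return "".join(out)

marked = encode("ABBA")    # 'AAABBBBBBAAA'
damaged = "A?ABB?B?BA?A"   # four of the twelve segments unreadable
print(decode(damaged))     # 'ABBA': payload recovered despite the losses
```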

That has been provided by Cartesian, a media and telecoms consultancy, with ContentArmor becoming the first forensic watermarking vendor to gain certification in January 2018. The robustness tests were defined by Cartesian in collaboration with content owners, including the major movie studios, although more recently providers of premium sports content have come to prominence as advocates for resilient watermarking to combat live stream piracy. The role of watermarking is different in live sports because the value of the content is very high but transient, decaying quickly as events take place, so infringing streams have to be identified and taken down within minutes to minimize business damage.

While watermarking preprocessing has been welcomed by the video industry, there has been some regret over the choice of terminology, since it courts confusion with A/B testing, a totally unrelated concept. This was conceded by ContentArmor’s Gwenaël Doërr, who is an active member of the Ultra HD Forum’s Security Working Group.

A/B testing, also sometimes referred to as ABX testing or the double-blind A-B-C triple-stimulus hidden reference test procedure, is used for perceptual evaluation of video quality. As Doërr noted, “the observer is first presented with a version A and version B of the same content. The observer is then randomly presented several times with either A or B and asked to assess whether it is A or B. If a statistically consistent bias is observed, then the two versions A and B are considered to be perceptually different. This is a routine procedure to evaluate audio/video quality after lossy compression and/or watermarking, but is unrelated to A/B watermarking. One could argue that similar terminology adds to the confusion, but unfortunately A/B watermarking terminology is now a standard in the industry.”
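
The “statistically consistent bias” Doërr mentions is typically assessed with a simple one-sided binomial test against chance (p = 0.5). The sketch below shows the arithmetic; the trial counts are made up for illustration and are not drawn from any particular evaluation.

```python
# Sketch of the ABX significance check: count correct identifications and
# compute the probability of doing at least that well by pure guessing.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` successes out of `trials` under pure guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(14, 16), 4))  # ~0.0021: versions likely differ perceptibly
print(round(abx_p_value(9, 16), 4))   # ~0.4018: consistent with guessing
```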

The Ultra HD Forum’s pre-announcement of watermarking preprocessing coincided with the actual release of version 2.4 of its Guidelines. This includes more detail on live OTT distribution of UHD, as well as ‘objective measurement and analysis’ of the quality of HDR Tone Mapping.

The Forum noted that 35 member companies have contributed to this effort over its five years of existence. It also noted that the Forum’s online UHD service tracker, which now includes Next Generation Audio (NGA), tracks 190 commercial consumer-facing and B2B Ultra HD TV service offerings, representing a 46% CAGR (Compound Annual Growth Rate) for UHD services over five years.

“UHD service launches have continued to accelerate since 2013 when YouTube first streamed 4K publicly,” said Benjamin Schwarz, who maintains the Tracker for the Forum. “Despite the COVID-related slowdown, growth this year is already at double digits again.”
