Encoding Shines At Virtual IBC 2020

Had IBC 2020 taken place as usual, some of the liveliest discussions would have centered on encoding, after an eventful year leading up to the virtual event that occupied the same time slot.

Inevitably, one or two vendors that would have been there, such as AWS Elemental, seem to have taken more of a back seat this year despite being active in the field, while others, such as V-Nova, Harmonic and Synamedia, have been keen to peddle their wares, views, or both.

The stage for vigorous debate over the future of video coding was set by the resignation from MPEG of its co-founder Leonardo Chiariglione in June 2020, coinciding with restructuring that seemed to end the group’s existence as a distinct entity. However, the standardization work continues under separate working groups within ISO/IEC JTC 1/SC 29, part of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), so on paper nothing has changed, although the moves reflect shifting tectonic plates within the industry.

MPEG or ISO representatives would have been busy at IBC 2020 trying to reassure the industry that it was business as usual, and to some extent even Chiariglione would agree, despite having declared the organization dead in what may have been partly pique. He was quoted in an article on the IBC website by Broadcast Bridge contributor Adrian Pennington admitting that the MPEG standardization train was still on track and would remain so unless deliberately derailed.

He was alluding particularly to the three MPEG codecs coming out in 2020: Versatile Video Coding (VVC), Essential Video Coding (EVC), otherwise known as MPEG-5 Part 1, and Low Complexity Enhancement Video Coding (LCEVC), or MPEG-5 Part 2. The latter was developed by London-based V-Nova and can be seen as a rebranding of the company’s existing enhancement technology, but it also represents at least a temporary shift in encoding strategy towards enhancing legacy codecs, or at any rate delaying the need to upgrade to a new one.

Nevertheless, the fact that MPEG is launching three codecs in one year, when historically it took 10 years to bring out just one, has been seized on as evidence of confusion as well as disintegration. It certainly signals two realities. The first is that the current pace of video service evolution, with rapidly rising expectations over quality at a time when distribution has been migrating to often bandwidth-constrained streaming, calls for more rapid advances in compression technology. It is no longer good enough for compression efficiency to double only every 10 years, as happened during the progression first to MPEG-2, then on to MPEG-4 AVC, or H.264, and finally to HEVC, or H.265.

The second reality is that MPEG has been beset by internal divisions and tensions among its patent holders, which have always been there but have boiled over in recent years after being largely contained during the MPEG-2 and H.264 eras. Ironic as it might appear now, MPEG’s early success in collaboration with the ITU Telecommunication Standardization Sector (ITU-T), which coordinates standards development, was founded on its promise to end the emerging format wars over codecs. The broadcasting and pay TV industries at that time were keen to coalesce around a stable common format, and MPEG delivered that first with MPEG-1 and MPEG-2, and then with MPEG-4 AVC, or H.264.

However, MPEG was already fraying around the edges in the H.264 era as it struggled to cope with the incipient migration towards internet streaming. High patent costs, which according to Chiariglione fed patent holders collectively $1 billion a year in royalties during the MPEG-2 era through the late 1990s and well into the noughties, met increasing objections from the ever more powerful internet streaming lobby, initially led by Google. This led to the formation of the rival Alliance for Open Media (AOM), whose governing members include Amazon, Apple, Cisco, Google, Intel, Microsoft and Netflix. MPEG’s corresponding malaise and fragmentation have made it increasingly likely that AOM will assume the mantle of de facto setter of codec standards.

Meanwhile, VVC/H.266 has been finalized as MPEG’s next codec, in turn doubling the efficiency of HEVC, this time after considerably less than 10 years. But its impact has been blunted by arriving alongside EVC, a stripped-down version designed to match the supposedly royalty-free status of AOM’s AV1. Then LCEVC almost looks like an attempt to buy time and extend the life of HEVC, or even H.264, while VVC comes along.

Against this background come this year’s activities around the virtual IBC. The event does at least cut through MPEG’s internal divisions and its wars with AOM to signpost where encoding as a whole is going: essentially towards reducing complexity where possible while tuning the process to specific aspects of the content. The aim is to spare processing power by reducing complexity, and bandwidth by eliminating those aspects of a stream or file that have the least impact on human visual perception.

So we find Fraunhofer HHI showing optimized VVC encoding with reduced complexity at virtual IBC 2020, demonstrating significant bit-rate reductions over HEVC for content ranging from legacy High Definition (HD) to High Dynamic Range Ultra HD. Then we have California-based Bitmovin talking up the benefits of its per-title encoding for on-demand content, aiming to focus the bits actually delivered on that fairly narrow human perceptual range.

As Steve Geiger, Director of Solutions for Americas at Bitmovin, explained, the aim is to achieve a smooth curve matching bit rate to quality so that there are no sudden lurches in the experience when network bandwidth changes. Similarly, the aim is to achieve consistent perceptual quality across the media library, avoiding noticeable differences in quality at a given bit rate between one title and another. The process involves sampling a whole asset and then matching the different bit rates to resolutions to obtain that smooth curve.
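
The idea of matching bit rates to resolutions per title can be sketched in a few lines: run trial encodes of the asset at several (resolution, bitrate) points, measure perceptual quality for each, and keep only the best-scoring resolution at each bitrate, so quality rises smoothly with rate. This is an illustrative minimal sketch, not Bitmovin’s actual implementation; the function name, sample points and quality scores are all hypothetical.

```python
# Hypothetical per-title ladder selection: from trial encodes of one
# asset, keep the resolution that scored the best perceptual quality
# at each bitrate, yielding a smooth rate-quality curve.

def build_ladder(trial_encodes):
    """trial_encodes: list of (resolution, bitrate_kbps, quality)
    tuples from sample encodes of the whole asset. Returns one rung
    per bitrate: the resolution that scored best at that rate."""
    best = {}
    for res, rate, quality in trial_encodes:
        if rate not in best or quality > best[rate][1]:
            best[rate] = (res, quality)
    # Sort rungs by bitrate so quality rises with rate.
    return [(rate, res, q) for rate, (res, q) in sorted(best.items())]

# Illustrative trial encodes: note that at 1500 kbps the lower
# resolution actually scores better, so it wins that rung.
trials = [
    ("1080p", 6000, 96), ("1080p", 3000, 88), ("1080p", 1500, 74),
    ("720p",  3000, 90), ("720p",  1500, 82),
    ("540p",  1500, 84), ("540p",   800, 72),
]
for rate, res, q in build_ladder(trials):
    print(rate, res, q)
```

The key design point, visible in the sample data, is that a fixed one-size-fits-all ladder would have served 1080p at 1500 kbps even though, for this particular title, 540p looks better at that rate.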

Mux is another company evangelizing per-title encoding, with a similar approach. The company has pointed out that tailoring encoding to individual content can enable perceptual quality to be maintained while actually reducing the headline resolution, thereby offering greater scope for bit-rate reduction.

Then Synamedia, which boasts of being the world's largest independent video software provider after emerging from Cisco, is leveraging IBC to boost the credentials of its content-aware encoding. This is a similar idea, except that it can be applied to live content with the help of AI and machine learning techniques. The firm has shown how its content-aware encoding algorithms can incorporate program recurrence, program similarity and genre, taken from sources such as program guides and IMDb, Amazon’s online database of information on films, TV programs, home videos, video games and streaming content. These algorithms apply pattern matching to predict the quality or bit rate per program that optimizes the encoding. Machine learning then operates inside the encoding algorithms to minimize the number of bits used as far as possible while maintaining a given target video quality.
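
To make the pattern-matching step concrete, here is a deliberately simple sketch of the general idea, and emphatically not Synamedia’s actual algorithm: a new program’s metadata (genre, recurrence) is matched against historical encodes of similar programs, and the target bitrate is predicted from what those matches needed to hit the quality target. All names and numbers are hypothetical.

```python
# Illustrative content-aware bitrate prediction: match a program's
# metadata against historical encodes of similar programs and average
# the bitrate that met the quality target for those matches.

def predict_bitrate(program, history):
    """program: dict with 'genre' and 'recurring' keys.
    history: list of (metadata, achieved_bitrate_kbps) pairs from
    previously encoded programs. Falls back to the overall mean
    when no similar program is found."""
    matches = [rate for meta, rate in history
               if meta["genre"] == program["genre"]
               and meta["recurring"] == program["recurring"]]
    if not matches:
        matches = [rate for _, rate in history]
    return sum(matches) / len(matches)

history = [
    ({"genre": "news",  "recurring": True},  1800),
    ({"genre": "news",  "recurring": True},  2000),
    ({"genre": "sport", "recurring": False}, 6500),
]
print(predict_bitrate({"genre": "news", "recurring": True}, history))  # 1900.0
```

In a real system this lookup would be replaced by a trained model, and the prediction would only seed the encoder, with rate control adjusting to the actual pictures; the sketch just shows why recurring, predictable content such as news can safely be budgeted far fewer bits than live sport.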

Harmonic is another vendor espousing the benefits of live video encoding, with its Electra X Live Video Processor. This will be shown as part of the IBC 2020 edition of what has become its signature digital event, Live Connection. Among various offerings, Harmonic is showing enhancements to its live sports streaming portfolio, noting that demand has started to pick up as sports return to TV screens, even though most are still played behind closed doors without spectators.
