Encoding Shines At Virtual IBC 2020

Had IBC 2020 taken place as usual, some of the liveliest discussions would have centered on encoding, after an eventful year leading up to the virtual event that ran over the show's usual time slot.

Inevitably, one or two vendors that would have been there, such as AWS Elemental, seem to have taken more of a backseat this year despite being active in the field, while others, like V-Nova, Harmonic and Synamedia, have been keen to peddle their wares, views, or both.

The stage for vigorous debate over the future of video coding was set by the resignation from MPEG of its co-founder Leonardo Chiariglione in June 2020, coinciding with restructuring that seemed to end the group's existence as a distinct entity. However, the standardization work is continuing under separate working groups within the body known as ISO/IEC JTC 1/SC 29 under the International Organization for Standardization (ISO), so on paper nothing has changed, although the moves reflect shifting tectonic plates within the industry.

MPEG or ISO representatives would have been busy at IBC 2020 trying to reassure the industry that it was business as usual, and to some extent even Chiariglione would agree, despite having declared the organization dead in what may partly have been pique. He was quoted in an article on the IBC web site by Broadcast Bridge contributor Adrian Pennington, admitting that the MPEG standardization train was still on track and would remain so unless deliberately derailed.

He was alluding particularly to the three MPEG codecs coming out in 2020: Versatile Video Coding (VVC), Essential Video Coding (EVC), otherwise known as MPEG-5 Part 1, and Low Complexity Enhancement Video Coding (LCEVC), or MPEG-5 Part 2. The latter was developed by London-based V-Nova and can be seen as a rebranding, but it also represents at least a temporary shift in encoding strategy towards enhancing legacy codecs, or at any rate delaying the need to upgrade to a new one.

Nevertheless, the fact that MPEG is launching three codecs in one year, when historically it took 10 years to bring out just one, has been seized on as evidence of confusion as well as disintegration. It certainly signals two realities. The first is that the current pace of video service evolution, with rapidly rising quality expectations at a time when distribution has been migrating to often bandwidth-constrained streaming, calls for more rapid advances in compression technology. It is no longer good enough for compression efficiency to double only every 10 years, as happened during the progression first to MPEG-2, then to MPEG-4 AVC (H.264), and finally to HEVC (H.265).

The second reality is that MPEG has been beset by internal divisions and tensions between its patent holders, which have always been there but have boiled over in recent years after being largely contained during the MPEG-2 and H.264 eras. Ironic as it may appear now, MPEG's early success in collaboration with the ITU Telecommunication Standardization Sector (ITU-T), which coordinates the standards development, was founded on its promise to end what were emerging format wars over codecs. The broadcasting and pay TV industries at that time were keen to coalesce around a stable common format, and MPEG delivered that first with MPEG-1 and MPEG-2, then with MPEG-4 AVC, or H.264.

However, MPEG was already fraying around the edges in the H.264 era as it struggled to cope with the incipient migration towards internet streaming. High patent costs, which fed patent holders collectively $1 billion a year in royalties during the MPEG-2 era through the late 1990s and well into the noughties according to Chiariglione, met increasing objections from the ever more powerful internet streaming lobby, initially led by Google. This led to the formation of the rival Alliance for Open Media (AOM), whose governing members include Amazon, Cisco, Google, Intel, Microsoft, Apple and Netflix. MPEG's corresponding malaise and fragmentation has made it increasingly likely that AOM will assume the mantle of de facto setter of codec standards.

Meanwhile, VVC/H.266 has been finalized as MPEG's next codec, in turn roughly doubling the efficiency of HEVC, and this time after considerably less than 10 years. But its impact has been blunted by arriving alongside EVC, a stripped-down version designed to match the supposedly royalty-free status of AOM's AV1. LCEVC, then, almost looks like an attempt to buy time and extend the life of HEVC, or even H.264, until VVC takes hold.

Against this background come this year’s activities around the virtual IBC. The event does at least cut through the MPEG internal divisions and wars with AOM to signpost where encoding as a whole is going, essentially towards reduction of complexity where possible and yet tuning the process to specific aspects of the content. The aim is to spare both processing power by reducing complexity and bandwidth by eliminating those aspects of a stream or file that have least impact on human visual perception.

So we find Fraunhofer HHI showing optimized, reduced-complexity VVC encoding at virtual IBC 2020, demonstrating significant bit-rate reductions over HEVC for content ranging from legacy High Definition (HD) to High Dynamic Range (HDR) Ultra-HD. Then we have California-based Bitmovin talking up the benefits of its per-title encoding for on-demand content, aiming to focus the bits actually delivered on that fairly narrow human perceptual range.

As Steve Geiger, Director of Solutions for Americas at Bitmovin, explained, the aim is to achieve a smooth curve matching bit rate to quality so that there are no sudden lurches in the experience when network bandwidth changes. Similarly, the aim is to achieve a consistent perceptual quality across the media library, avoiding noticeable differences in quality at a given bit rate between one title and another. The process involves sampling a whole asset and then matching the different bit rates to resolution to obtain that smooth curve.
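The process Geiger describes can be sketched in a few lines. In this hypothetical example, we assume trial encodes of the whole asset at several (resolution, bitrate) pairs have already been scored with a perceptual quality metric such as VMAF; the ladder then keeps only the rungs that trace a monotonically improving rate-quality curve. All names and numbers here are illustrative, not Bitmovin's actual method.

```python
# Illustrative trial encodes of one title: each rung pairs a resolution and
# bitrate with an assumed perceptual quality score (e.g. VMAF, 0-100).
TRIAL_ENCODES = [
    {"resolution": "1920x1080", "bitrate_kbps": 4500, "quality": 95},
    {"resolution": "1280x720",  "bitrate_kbps": 3000, "quality": 93},
    {"resolution": "1920x1080", "bitrate_kbps": 3000, "quality": 91},
    {"resolution": "1280x720",  "bitrate_kbps": 1500, "quality": 88},
    {"resolution": "640x360",   "bitrate_kbps": 700,  "quality": 78},
]

def select_ladder(trial_encodes):
    """Keep only rungs whose quality strictly improves as bitrate rises,
    so the delivered curve has no lurches and no wasted bits."""
    rungs = sorted(trial_encodes, key=lambda r: r["bitrate_kbps"])
    ladder, best_quality = [], float("-inf")
    for rung in rungs:
        if rung["quality"] > best_quality:
            ladder.append(rung)
            best_quality = rung["quality"]
    return ladder

LADDER = select_ladder(TRIAL_ENCODES)
```

Note that in this made-up data the 1080p rung at 3000 kbps is dropped, because the 720p encode at the same bitrate scores higher: matching bitrate to the resolution that actually looks best is the point of sampling the whole asset first.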

Mux is another company evangelizing about per-title encoding, with a similar approach. The company has pointed out that catering for individual content can enable perceptual quality to be maintained while actually reducing the headline resolution, therefore offering greater scope for bit rate reduction.

Then Synamedia, which boasts of being the world's largest independent video software provider after emerging from Cisco, is leveraging IBC to boost the credentials of its content-aware encoding, a similar idea except that it can be applied to live content with the help of AI and machine learning techniques. The firm has shown how its content-aware encoding algorithms can incorporate program recurrence, program similarity and genre, taken from sources such as program guides and IMDb, Amazon's online database of information on films, TV programs, home videos, video games and streaming content. These algorithms apply pattern matching to predict the quality or bitrate per program that optimizes the encoding. Machine learning then operates inside the encoding algorithms to minimize the number of bits used while maintaining a given target video quality.
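The metadata-driven prediction step can be illustrated with a toy sketch. The genre table, numbers, and averaging rule below are hypothetical stand-ins for the machine-learned models the article describes, not Synamedia's implementation; they only show how recurrence and genre might feed a per-program bitrate target.

```python
# Assumed genre priors for bitrate targets, in kbps (illustrative values).
GENRE_BITRATE_KBPS = {
    "sports":    8000,  # fast motion, high complexity
    "news":      2500,  # mostly static studio shots
    "drama":     4000,
    "animation": 3000,  # large flat regions compress well
}

def predict_bitrate(genre, recurring=False, episode_history=None):
    """Predict a per-program bitrate target in kbps.

    For recurring programs (program recurrence, in the article's terms) we
    reuse measured bitrates from past episodes; otherwise we fall back to a
    genre prior, or a conservative default for unknown genres.
    """
    if recurring and episode_history:
        return sum(episode_history) / len(episode_history)
    return GENRE_BITRATE_KBPS.get(genre, 5000)
```

A one-off news bulletin would get the genre prior, while a daily news show with two past episodes measured at 2000 and 2400 kbps would be targeted at their average of 2200 kbps.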

Harmonic is another vendor espousing the benefits of live video encoding with its Electra X Live Video Processor. This will be shown as part of the IBC 2020 edition of what has become its signature digital event, Live Connection. Among various offerings, Harmonic is showing enhancements to its live sports streaming portfolio, noting that demand for these has started to pick up as sports return to TV screens, even though they are mostly still behind closed gates without spectators.
