Apple announced at WWDC 2017 that it has selected HEVC coding for its latest iOS and macOS software updates. To paraphrase, this is yuge!
With that decision, Apple delivers the industry access to 1 billion active iOS devices across the globe. Until now, OTT services have been limited to H.264 coding to reach the massive Apple device ecosystem. All that changes with this decision.
It’s all about the second screen
Home TV sets are increasingly only part of a viewer’s choice. Mobile phones, laptops and tablets now occupy about 50% of viewing time. Vendors are desperate to reach this audience, but have needed a uniform codec standard to do so. When Apple speaks, the world listens, and HEVC just became the de facto codec standard for second screen devices.
Streaming services and video distributors can now reach upwards of 1 billion active users around the globe with high quality video at bitrates up to 50% lower than H.264 requires. This is a huge breakthrough and a fantastic development for operators launching OTT services, where the second screen (mobile devices) will be pivotal to their success.
Consumer demand for high quality imagery has never been higher. And with 4K and UHD on the horizon for home TVs, viewers will expect similar presentations on their mobile and streaming devices. Prior to Apple’s announcement, video streaming service providers had been limited to an H.264 codec across this massive device ecosystem. Now they have another option.
Basically, services compete for eyeballs based on catalog and image quality. Sure, price matters, but few will choose to pay less for, well, less quality. No matter the business model, SVOD, TVOD, or AVOD, all products will be compared on these basic factors.
Many video experts understand the benefits of HEVC as a robust and ultra high-quality next generation codec. But with limited playback support outside the TV, questions about timing held back some deployments.
Apple plans to roll out iOS 11 and macOS High Sierra betas to developers this month, with public releases to users this fall. This is a fast timeframe.
With Apple’s announcement of 500 nit displays, 10-bit graphics support and 7th-generation Intel Kaby Lake processors in the new iMac line, Apple has drawn a line in the sand in terms of quality. Apple selected HEVC as the codec that will carry its video services such as iTunes into the future, along with third party services that take advantage of the built-in HEVC support.
The benefits of HEVC include higher-order entertainment experiences, including HDR, which other codecs may not be able to supply at similar bit rates. Although HDR is not yet a household term, the consumer television industry is already pushing it hard. And expect that emphasis to become even more obvious during the key TV set buying time, this holiday season.
Newer video compression and delivery systems easily support higher frame rates, yet content producers and distributors still rely on older grading protocols that target dynamic range standards set some 40 years ago for that era’s technology. Now that cinema distribution has moved to digital and CRT displays have been replaced by flat panels, imagery can be much better: viewers can enjoy a wider dynamic and color range. But this is only possible if the delivery systems support high resolution, high frame rate, and high dynamic range.
HEVC range extensions and new features were added to the standard in July 2014, enabling native HDR color profiles. The original profiles were limited to sample sizes of 8 and 10 bits, and chroma subsampling was limited to 4:2:0, which restricts chroma resolution relative to luminance by a factor of 4 to 1. The range extensions add support for bit depths of 12, 14, and 16 bits, along with lower chroma subsampling of 2:1 (4:2:2) and 1:1 (4:4:4).
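The subsampling ratios above can be sanity-checked with a short sketch. The helper function here is purely illustrative (not part of any HEVC API); it just counts chroma samples per 2×2 block of luma pixels for each format:

```python
# For a 2x2 block of luma pixels, each chroma component carries:
#   4:2:0 -> 1 sample  (halved horizontally and vertically)
#   4:2:2 -> 2 samples (halved horizontally only)
#   4:4:4 -> 4 samples (no subsampling)
# (chroma_samples_per_4_luma is a hypothetical helper for illustration.)

def chroma_samples_per_4_luma(fmt: str) -> int:
    """Chroma samples per component for every 4 luma samples."""
    return {"4:2:0": 1, "4:2:2": 2, "4:4:4": 4}[fmt]

for fmt in ("4:2:0", "4:2:2", "4:4:4"):
    ratio = 4 // chroma_samples_per_4_luma(fmt)
    print(f"{fmt}: luma-to-chroma resolution ratio {ratio}:1")
```

Running this reproduces the ratios quoted above: 4:1 for 4:2:0, 2:1 for 4:2:2, and 1:1 for 4:4:4.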
New technologies typically face the question: new is great, but what about legacy equipment? In the case of HDR, Dolby created Dolby Vision HDR. It comes in two versions: dual layer and single layer.
Single layer Dolby Vision HDR was introduced later as an alternative to the original dual-layer approach, in part to address competitive (and less expensive) single layer technologies. The solution relies on one HDR video stream multiplexed with metadata. Single layer encoding is not compatible with legacy displays, but it is cost-effective for new deployments.
The dual layer Dolby Vision encoding relies on a dual layer HDR architecture: a base layer containing video data constrained to current specifications, combined with an optional enhancement layer carrying a supplementary signal along with the Dolby Vision metadata, hence “dual layer”. While this solution ensures backwards compatibility, it demands a new workflow that supports both channels at once. Nor is it a minor point that dual layer Dolby Vision requires two decoders on the client device.
Dolby Vision adds approximately 20% to 30% additional bitrate load for the higher dynamic range data. If the service provider is using AVC, this may be a challenge.
While AVC compression remains a successful coding scheme for HD, HEVC is even more efficient: it can compress a typical HD signal with about 30% greater efficiency than AVC. One coding company says that an HEVC encoder can process an HD, dual-layer HDR signal at a bitrate equal to or less than that needed for a non-HDR HD stream encoded with AVC.
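That claim can be checked with back-of-envelope arithmetic using the figures quoted above: HEVC needs roughly 30% less bitrate than AVC for HD, while Dolby Vision adds roughly 20% to 30%. The 5 Mbps AVC baseline below is an illustrative assumption, not a figure from the vendors:

```python
# Back-of-envelope bitrate comparison. The 5 Mbps non-HDR HD AVC baseline
# is an assumed, illustrative figure; the 30% efficiency gain and 20-30%
# Dolby Vision overhead are the ranges quoted in the text.

AVC_HD_BITRATE_MBPS = 5.0       # assumed non-HDR HD baseline (illustrative)
HEVC_EFFICIENCY_GAIN = 0.30     # HEVC needs ~30% less bitrate than AVC
DOLBY_VISION_OVERHEAD = 0.30    # worst-case HDR overhead (20-30% range)

hevc_hd = AVC_HD_BITRATE_MBPS * (1 - HEVC_EFFICIENCY_GAIN)
hevc_hd_hdr = hevc_hd * (1 + DOLBY_VISION_OVERHEAD)

print(f"AVC HD (no HDR):        {AVC_HD_BITRATE_MBPS:.2f} Mbps")
print(f"HEVC HD (no HDR):       {hevc_hd:.2f} Mbps")
print(f"HEVC HD + Dolby Vision: {hevc_hd_hdr:.2f} Mbps")
```

Even with the worst-case 30% HDR overhead, the HEVC HDR stream lands at 4.55 Mbps, under the 5 Mbps AVC non-HDR baseline, which is consistent with the vendor claim.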
So what does all this mean for the streaming industry? You now have a single coding standard.