Haivision Acquires LightFlow AI Content Aware Encoding Technology From Epic Labs

Haivision, a pioneer of low-latency video streaming, has acquired the media optimization business LightFlow Media Technologies from Spanish video Quality of Experience software company Epic Labs.

This acquisition adds machine learning algorithms for content-aware encoding to Haivision’s armory, with a view to reducing latency further by making optimal use of network bandwidth. But as well as optimizing video contribution, distribution, and delivery for low-latency live or VoD feeds, Haivision also plans to exploit the Epic Labs technology for content indexing and object detection, which will have applications in search, recommendation, and discovery.

More fundamentally for Montreal, Canada-based Haivision, the LightFlow technology suite will help accelerate the company’s cloud strategy of creating an ecosystem of modular video streaming and management technologies. Notable here is that the LightFlow team led the development of the DASH.js implementation that includes low-latency CMAF (Common Media Application Format) support, which reinforces Haivision’s market position with the SRT (Secure Reliable Transport) open source protocol designed to cut latency. CMAF matters here because it has emerged as a unifying underlying framework for the HTTP adaptive bit rate streaming widely used for online video delivery.

Before CMAF, content distributors had to encode and store the same video twice to reach the most popular devices, because Apple used its HTTP Live Streaming (HLS) protocol while Microsoft and most other platforms had converged around Dynamic Adaptive Streaming over HTTP (DASH). The two operate on similar principles: content is streamed in chunks, typically two to 10 seconds in duration, with the same video encoded at multiple bit rates to cater for varying network conditions and device playback capabilities.
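To make the adaptive bit rate principle concrete, here is a minimal sketch in Python of how a player might pick a rendition from an encoding ladder for each chunk it fetches; the ladder values, safety margin and function name are illustrative assumptions rather than anything specified by HLS, DASH or Haivision.

# Minimal, illustrative sketch of adaptive bit rate selection.
# The ladder, the 0.8 safety margin and pick_rendition() are hypothetical.
LADDER_KBPS = [400, 800, 1600, 3000, 6000]  # the same video encoded at multiple bit rates

def pick_rendition(measured_throughput_kbps, margin=0.8):
    """Return the highest bit rate that fits within a safety margin of the
    measured network throughput, falling back to the lowest rendition."""
    budget = measured_throughput_kbps * margin
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else LADDER_KBPS[0]

# A player would re-run this for every two-to-10-second chunk it requests.
print(pick_rendition(2500))  # picks 1600 kbps when 2.5 Mbps is available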

But the two differ in how they package the streams and their chunks, as specified in the somewhat confusingly named container format. The word “container” makes sense, but these are not really formats so much as boxes that house the video and audio streams. They vary in what they are able to contain: Apple HLS uses the MPEG transport stream (.ts) container, while DASH uses the more widely adopted but incompatible fragmented .mp4.

However, Microsoft and Apple surprised some in the industry by finally burying the hatchet and agreeing to support CMAF as a specification that allows fragmented .mp4 containers to be referenced by both HLS and DASH. That means content owners and broadcasters that adopt CMAF no longer need to encode and store the same video twice.
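As a rough illustration of why that matters, the Python sketch below (with hypothetical file names and a deliberately abbreviated HLS playlist) shows a single set of CMAF fragmented MP4 segments being referenced both from an HLS media playlist and from the data that would feed a DASH manifest, so the video only has to be encoded and stored once.

# Illustrative only: one set of CMAF (fragmented MP4) segments referenced by
# both delivery formats. File names and segment durations are hypothetical.
init_segment = "video_720p_init.mp4"
media_segments = [f"video_720p_{i:05d}.m4s" for i in range(1, 4)]

# HLS media playlist referencing the fragmented MP4 segments (protocol version 7+).
hls_playlist = "\n".join(
    ["#EXTM3U", "#EXT-X-VERSION:7", "#EXT-X-TARGETDURATION:4",
     f'#EXT-X-MAP:URI="{init_segment}"']
    + [line for seg in media_segments for line in ("#EXTINF:4.0,", seg)]
    + ["#EXT-X-ENDLIST"]
)

# The same files would be referenced from a DASH MPD, e.g. via a SegmentList.
dash_segment_refs = {"initialization": init_segment, "media": media_segments}

print(hls_playlist)
print(dash_segment_refs)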

Another significant aspect of Haivision’s acquisition of LightFlow is that it gives an airing to an alternative technology for perceptual video quality recognition and enhancement. To date, much of the running has been made by Netflix with Video Multimethod Assessment Fusion (VMAF), used more for on-demand content enhancement, and by SSIMWave’s SSIM (Structural Similarity Index Measure), which works better for live. The underlying idea of SSIM is that neighboring pixels, both in space (within a frame) and in time (between adjacent frames), are related and provide a framework for assessing the changes in structure that make the greatest impact on the human eye. The SSIM index is then calculated by considering various windows of each frame as a whole rather than just isolated pixels, using a mathematical formula engineered to yield fractional scores in the range 0 to 1 representing the degree of degradation from the original source.
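For reference, the widely published form of the SSIM index for two aligned windows x (from the source) and y (from the degraded video) is, in LaTeX notation:

\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

where \mu_x and \mu_y are the local mean luminances, \sigma_x^2 and \sigma_y^2 the local variances, \sigma_{xy} the covariance between the two windows, and C_1, C_2 small constants that stabilize the division. A per-frame score is typically the mean of SSIM over all windows, which is what yields the 0 to 1 scale described above.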

SSIMWave has since enhanced SSIM with machine learning to create SSIMPlus, adopting a more expressive 0-100 scale whose scores are matched linearly against human subjective tests. One innovation putting SSIMPlus ahead of VMAF is adaptation to the viewing device, with the ability to compare video quality as objectively as possible across different resolutions and formats. It could determine, for example, that a given video looks excellent, with a much higher rating, on say a smartphone, while looking much poorer on a large 4K resolution (3840x2160) TV.

LightFlow has adopted an approach that seems similar, on the surface at least, incorporating machine learning algorithms that predict the perceptual quality that will result when a given video is encoded and then played, on the basis of final bit rate, screen resolution and, presumably, frame rate.
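LightFlow’s actual models have not been published, so purely to illustrate the idea, the toy Python sketch below maps bit rate, display resolution and frame rate to a 0-100 quality estimate using an invented logarithmic relationship between quality and bits per pixel; the functional form, coefficients and function name are assumptions for illustration only.

# Toy illustration of predicting a perceptual quality score from encoding and
# playback parameters. The formula and constants are invented for this sketch
# and are not LightFlow's proprietary models.
import math

def predict_quality(bitrate_kbps, width, height, fps):
    """Return a rough 0-100 quality estimate that grows with the log of the
    bits available per pixel per frame, clamped to the scale."""
    bits_per_pixel = (bitrate_kbps * 1000.0) / (width * height * fps)
    score = 55 + 15 * math.log2(bits_per_pixel / 0.05)
    return max(0.0, min(100.0, score))

# The same 3 Mbps encode judged for a 720p phone screen versus a 4K TV:
print(round(predict_quality(3000, 1280, 720, 30), 1))   # higher score
print(round(predict_quality(3000, 3840, 2160, 30), 1))  # much lower score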

With significant momentum behind SSIMPlus, Epic was facing an uphill battle to gain traction for LightFlow. SSIMPlus had already won endorsements from prominent quarters, including the world’s biggest CDN (Content Delivery Network) provider, Akamai, which is using it as the basis for its work towards an industry standard for measuring perceptual video streaming quality.

But Haivision also has momentum, and that was a major factor in Epic’s decision to sell LightFlow. Epic believed that, allied to the SRT protocol, which is itself gaining ground rapidly, its perceptual video technology had a better chance of living on and becoming a major player in the field under Haivision’s control.

Indeed, the LightFlow team will continue as a distinct offshore unit of Haivision in Madrid, complementing the latter’s other R&D centers in Portland, Chicago, Austin, and Hamburg.
