Haivision, a pioneer of low latency video streaming, has acquired the media optimization business LightFlow Media Technologies from Spanish video Quality of Experience software company Epic Labs.
This acquisition adds machine learning algorithms to Haivision’s armory, for content-aware encoding with a view to reducing latency further by making optimal use of network bandwidth. But as well as optimizing video contribution, distribution, and delivery for low-latency live or VoD feeds, Haivision also plans to exploit the Epic Labs technology in content indexing and object detection, which will have applications in search, recommendation and discovery.
More fundamentally for Montreal-based Haivision, the LightFlow technology suite will help accelerate the company's cloud strategy of creating an ecosystem of modular video streaming and management technologies. Notably, the LightFlow team led the DASH.js implementation that includes low-latency CMAF (Common Media Application Format) support, which reinforces Haivision's market position with SRT (Secure Reliable Transport), its open source protocol designed to cut latency. CMAF matters here because it has emerged as a unifying underlying framework for the HTTP adaptive bit rate streaming widely used in online video delivery.
Before CMAF, content distributors had to encode and store the same video twice to reach the most popular devices, because Apple used its own HTTP Live Streaming (HLS) protocol while Microsoft and most other platforms had converged on Dynamic Adaptive Streaming over HTTP (DASH). The two operate on similar principles: content is streamed in chunks, typically two to 10 seconds long, with the same video encoded at multiple bit rates to cater for varying network conditions and device playback capabilities.
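The adaptive bit rate principle described above can be sketched in a few lines. This is a simplified illustration, not the logic of any real player, and the ladder values are hypothetical: the player measures its throughput and picks the highest rendition the connection can sustain.

```python
# Illustrative sketch of adaptive bit rate (ABR) rendition selection.
# The bit rate ladder below is hypothetical, not from any real service.

LADDER = [  # (bit rate in bits per second, resolution)
    (400_000, "426x240"),
    (1_200_000, "854x480"),
    (3_000_000, "1280x720"),
    (6_000_000, "1920x1080"),
]

def pick_rendition(measured_bps, safety=0.8):
    """Return the highest-bit-rate rung the connection can sustain,
    keeping a safety margin against throughput fluctuation."""
    budget = measured_bps * safety
    best = LADDER[0]  # always fall back to the lowest rung
    for bitrate, resolution in LADDER:
        if bitrate <= budget:
            best = (bitrate, resolution)
    return best

print(pick_rendition(5_000_000))  # a ~5 Mbps connection
```

Real players re-run a decision like this for every chunk, which is why the same video must exist at every rung of the ladder.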
But the two differ in how they package the streams and their chunks, as specified by the somewhat confusingly named container format. Containers are not really formats so much as boxes that house the video and audio streams, and they vary in what they can contain: Apple HLS traditionally used the MPEG transport stream (.ts) container, while DASH uses the more widely adopted but incompatible .mp4.
However, Microsoft and Apple surprised some in the industry by finally burying the hatchet and agreeing to support CMAF, a specification that allows fragmented .mp4 containers to be referenced by both HLS and DASH. Content owners and broadcasters that adopt CMAF therefore no longer need to encode and store the same video twice.
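Concretely, a single set of CMAF fragments can be referenced from both an HLS playlist and a DASH manifest. The snippets below are purely illustrative, with hypothetical file names, to show the single-encode idea: both manifests point at the same `init.mp4` and `.m4s` fragments.

```
# HLS media playlist (illustrative), referencing CMAF fragments
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:4
#EXT-X-MAP:URI="init.mp4"
#EXTINF:4.0,
seg1.m4s
#EXTINF:4.0,
seg2.m4s

<!-- DASH MPD excerpt (illustrative), referencing the same fragments -->
<Representation id="720p" bandwidth="3000000" width="1280" height="720">
  <SegmentTemplate initialization="init.mp4" media="seg$Number$.m4s"
                   duration="4" timescale="1" startNumber="1"/>
</Representation>
```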
Another significant aspect of Haivision's acquisition of LightFlow is that it gives air to an alternative technology for perceptual video quality recognition and enhancement. To date, much of the running has been made by Netflix's Video Multimethod Assessment Fusion (VMAF), geared more toward on-demand content enhancement, and SSIMWave's SSIM (Structural Similarity Index Method), which works better for live. The underlying idea of SSIM is that neighboring pixels, both in space (within a frame) and in time (between adjacent frames), are related, and that changes in structure make the greatest impact on the human eye. The SSIM index is then calculated over windows of each frame taken as a whole rather than isolated pixels, using a mathematical formula engineered to yield fractional scores in the range 0 to 1 representing the degree of degradation from the original source.
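The per-window calculation described above can be written out directly. This is a minimal sketch of the standard SSIM formula, using the commonly quoted stabilizing constants (K1 = 0.01, K2 = 0.03 for 8-bit video); production implementations slide a weighted window across the frame and average the per-window scores.

```python
# Minimal per-window SSIM, comparing a degraded window y against a
# reference window x. Returns 1.0 when the windows are identical.
import numpy as np

def ssim_window(x, y, dynamic_range=255.0):
    c1 = (0.01 * dynamic_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * dynamic_range) ** 2  # stabilizes the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )
```

Comparing a window against itself yields exactly 1.0, and any degradation (noise, blocking, blur) pulls the score down toward 0.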
SSIMWave has recently enhanced SSIM with machine learning, adopting a more expressive 0-100 scale calibrated linearly against human subjective tests. One innovation putting the resulting SSIMPlus ahead of VMAF is adaptation to the viewing device, with the ability to compare video quality as objectively as possible across different resolutions and formats. It could determine that a given video looks excellent, with a much higher rating, on say a smartphone while being much poorer on a large 4K resolution (3840x2160) TV.
LightFlow has adopted an approach that seems similar, on the surface at least, incorporating machine learning algorithms that anticipate the perceptual quality that will result once a given video is encoded and played, on the basis of final bit rate, screen resolution and presumably frame rate.
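LightFlow's actual models are proprietary, so the following is only a toy illustration of the general idea: learn a mapping from encoding parameters to a perceptual quality score, then predict the quality a planned encode would achieve before spending the compute on it. All training values here are synthetic.

```python
# Toy illustration (synthetic data, hypothetical scores): fit a simple
# model mapping (bit rate, pixel count) to a 0-100 quality score.
import numpy as np

# Hypothetical samples: (bit rate in Mbps, megapixels) -> quality score
X = np.array([[0.4, 0.1], [1.2, 0.4], [3.0, 0.9], [6.0, 2.1], [12.0, 8.3]])
y = np.array([35.0, 55.0, 72.0, 84.0, 90.0])

# Use log bit rate as a feature: quality gains saturate as bit rate
# grows, so a log term is a common first approximation.
features = np.column_stack([np.log(X[:, 0]), X[:, 1], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)

def predict_quality(mbps, megapixels):
    """Predicted perceptual quality for a planned encode."""
    return float(np.array([np.log(mbps), megapixels, 1.0]) @ coef)
```

A real system would train on far richer features (content complexity, codec, frame rate, viewing device) and a nonlinear model, but the workflow is the same: predict quality first, then choose the cheapest encoding ladder that meets the target.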
With significant momentum behind SSIMPlus, Epic was facing an uphill battle to gain traction for LightFlow. SSIMPlus had won significant endorsements from various prominent authorities, including the world’s biggest CDN (Content Delivery Network) provider Akamai, which is using it as the basis for its work towards an industry standard for measuring perceptual video streaming quality.
But Haivision also has momentum, and that was a major factor in Epic's decision to sell LightFlow. Epic believed that, allied to the rapidly growing SRT protocol, its perceptual video technology had a better chance of living on and becoming a major player in the field under Haivision's control.
Indeed, the LightFlow team will continue as a distinct offshore unit of Haivision in Madrid, complementing the latter’s other R&D centers in Portland, Chicago, Austin, and Hamburg.