Part 1 of this series described how network-side QoE (Quality of Experience) measurement is fundamental to proactively assuring the quality of OTT services. At its core, network-side measurement acts as an early warning system: degradation in network QoS correlates with, and typically precedes, degradation in the QoE the viewer actually experiences. This article considers the two types of network monitoring available to us, the relative priorities for the points of measurement, and how the video platforms contributing to OTT services are evolving to support OTT quality at scale.
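To make the early-warning idea concrete, the sketch below shows one way network-side QoS samples could be screened for conditions known to predict poor viewer QoE. The metric names, thresholds, and `QosSample`/`qoe_risk` helpers are illustrative assumptions for this article, not part of any real monitoring product.

```python
# Hypothetical sketch: flag network-side QoS conditions that predict QoE problems.
# All names and threshold values here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class QosSample:
    """One network-side measurement for a delivery path."""
    throughput_mbps: float   # measured delivery throughput
    packet_loss_pct: float   # packet loss over the sample window
    rtt_ms: float            # round-trip time to the delivery edge

def qoe_risk(sample: QosSample,
             min_throughput_mbps: float = 5.0,
             max_loss_pct: float = 1.0,
             max_rtt_ms: float = 100.0) -> list:
    """Return the QoS conditions that act as early warnings for degraded QoE."""
    warnings = []
    if sample.throughput_mbps < min_throughput_mbps:
        warnings.append("throughput below top ABR profile requirement")
    if sample.packet_loss_pct > max_loss_pct:
        warnings.append("packet loss likely to cause re-buffering")
    if sample.rtt_ms > max_rtt_ms:
        warnings.append("latency likely to slow segment fetches")
    return warnings

sample = QosSample(throughput_mbps=3.2, packet_loss_pct=0.4, rtt_ms=85.0)
print(qoe_risk(sample))  # -> ['throughput below top ABR profile requirement']
```

In practice the thresholds would be derived from the service's ABR ladder and from observed correlation between these QoS metrics and client-side QoE reports, rather than hard-coded as here.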
Broadcasting video and audio has developed rapidly in recent years, moving from one-way, send-and-forget transmission to the interactive OTT and VOD models. The inherent bi-directional capabilities of IP networks have given viewers a wealth of new interactive viewing possibilities.
The features we love about OTT services – such as combined linear and on-demand content, multi-device viewing mobility, tailored viewing experiences, and in some cases higher resolutions – are driving their rapid uptake.
The power and flexibility of cloud computing are being felt by broadcasters throughout the world. Scaling delivers incredible resources, and the levels of resilience available from international public cloud vendors are truly eye-watering. It is difficult to see how any broadcaster would run out of computing power or storage, even with 4K and 8K infrastructures.