CDNs are much more than high-speed links between ISPs: they form a complete ecosystem of storage and processing. In this article we look at the different workloads for Live and VOD to better understand how CDNs operate.
A typical content delivery mix for an OTT service from a household-name Broadcaster that delivers both Live and VOD is about 80-90% VOD and 10-20% Live. The CDN workloads in each scenario are different, with important ramifications for the technology.
For Live content, large audiences watch the same content at the same time. Inside the CDN, pressure is placed on server memory, CPU and network ports to sustain the stream and its bit-rate. Because a Cache operates as a fan-out mechanism for the same live stream – i.e., one 5 Mbps HLS stream in, 1,000 x 5 Mbps HLS streams out – the job is to sustain egress with low latency to every single endpoint.
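The fan-out arithmetic above is worth making explicit: the egress a live cache must sustain scales linearly with concurrent viewers. A minimal sketch, using the article's example figures (the function name is illustrative, not from any CDN API):

```python
def egress_gbps(stream_mbps: float, viewers: int) -> float:
    """Egress bandwidth a live cache must sustain when fanning out
    one ingress stream to many concurrent viewers (Mbps -> Gbps)."""
    return stream_mbps * viewers / 1000

# One 5 Mbps HLS stream in, 1,000 identical streams out:
# 5 Gbps of egress for only 5 Mbps of ingress.
print(egress_gbps(5, 1000))
```

The same calculation shows why network ports, rather than storage, become the bottleneck for Live: doubling the audience doubles the egress requirement while the cached content itself does not grow.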
On the other hand, the VOD workload is driven by individual consumers watching varied content streamed at different times. Pressure is placed on Storage, CPU and streaming algorithms at the Origin and CDN levels. Not only must the stream of a single VOD file be sustained, this has to be achieved for potentially hundreds or thousands of discrete files delivered to unique endpoints.
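Because VOD demand follows a long tail, an edge cache can only hold the hottest titles and must fetch the rest from the Origin. A toy model of this behaviour, assuming simple least-recently-used (LRU) eviction (real CDN cache policies are more sophisticated):

```python
from collections import OrderedDict

class LRUCache:
    """Toy VOD edge cache: holds the hottest titles up to capacity;
    long-tail requests miss and fall through to the Origin."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def request(self, title: str) -> bool:
        if title in self.store:
            self.store.move_to_end(title)   # refresh recency
            self.hits += 1
            return True                     # served from the Edge
        self.misses += 1                    # fetched from the Origin
        self.store[title] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently-used title
        return False
```

Running a skewed request pattern through this model shows why VOD stresses Storage and Origin capacity: every long-tail miss is a discrete file pulled through the whole delivery chain.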
Edge computing has been a hot topic over the last few years as OTT video and general use of public cloud services have expanded. For OTT, edge computing can be defined as processing video into its final delivery format at the Edge device. Specifically, this involves delivering from the Origin to the Edge in a mezzanine format, like CMAF, and then processing at the Edge to the client device in the required package, like HLS, DASH or MSS. But where is the Edge?
In theory, the Edge should be as close to the consumer as possible. In a perfect world it would be possible for every consumer device to function as an Edge device. While peer-to-peer networking offers some potential here, at least for Live use cases (the subject of another article), there isn’t a viable consumer-device-level solution for VOD.
One step back from the consumer is the telco’s access network, with its street cabinets, mobile masts, and the original telephone exchanges. While these may become edge locations in future, today the volume of locations and the volume of video do not justify the expense. In addition, access networks generally aim to be data-agnostic, with video as just one form of data. Over time this may change as we move to more advanced forms of video delivery like virtual reality and holographic video.
The next step away from the consumer is the ISP’s core network. This is the first opportunity for a CDN to become ISP-specific, and because the centralized core network serves all data delivery needs, offloading traffic where possible makes sense for the ISP. This is the current focus for Edge Cache placement for the largest OTT Operators. However, too many caches in the core network can create unwanted technical and operational complexity for the ISP. Even so, given the disproportionate share of total traffic that video represents, the trend towards ISP-based edge caching is strong.
This is why the Edge, for the most part, is currently deployed in Internet Exchange locations – the “meet me room” for the ISPs and the OTT Operators (via the CDN Service Providers). Even so, processing at the Edge is not the norm. But it’s coming, because it’s based on the principle of pull-system efficiency. Edge computing will help reduce bandwidth requirements between the Origin and the Edge, but it will put more pressure on the Edge to take on an eighth function: managing just-in-time packaging and encryption. The business trade-off for edge computing is between Network Cost, CDN Cost and Server Cost. In the end, each OTT network topology will be evaluated on a case-by-case basis to find the optimal approach.
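That three-way trade-off can be framed as a simple break-even comparison: shipping one mezzanine format (e.g. CMAF) instead of several packaged formats shrinks Origin-to-Edge traffic, at the price of extra edge server cost. A hypothetical model (all parameter names and figures are illustrative, not industry benchmarks):

```python
def edge_packaging_saving(origin_to_edge_gb: float,
                          mezzanine_ratio: float,
                          network_cost_per_gb: float,
                          extra_server_cost: float) -> float:
    """Hypothetical monthly model. mezzanine_ratio is the fraction of
    today's Origin-to-Edge traffic that remains after switching to a
    single mezzanine format. Positive result favours edge packaging."""
    bandwidth_saving = origin_to_edge_gb * (1 - mezzanine_ratio) * network_cost_per_gb
    return bandwidth_saving - extra_server_cost

# e.g. 100 TB/month, traffic reduced to 40%, $0.02/GB transit,
# $800/month of extra edge compute:
print(edge_packaging_saving(100_000, 0.4, 0.02, 800))
```

The point of the sketch is the structure, not the numbers: the Network Cost term scales with traffic volume while the Server Cost term is largely fixed, which is why the answer differs per topology.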
The Appearance of the iCDN
CDNs are transitioning from being a series of interconnected computers to a series of interconnections. The difference may sound subtle, but it is fundamental. It means that instead of using centralized brains with “dumb” pipes, we will use distributed brains with “actively engaged” pipes. Greater interconnectedness will create greater intelligence, i.e. the iCDN.
Today the CDN can be described as the HOV (High Occupancy Vehicle) lane of the multi-lane internet – it gives a faster, less congested route than the unmanaged public internet. As OTT traffic grows, we think about building more and bigger HOV lanes, with “exit ramps” closer and closer to the consumer. This is the natural progression of the pull-system.
These HOV lanes are supported by multiple access network expansions, including telco fibre-to-the-home roll-outs, CableLabs’ 10G programme in the cable industry, and 5G in the mobile industry. These major infrastructure changes will improve customer experience of OTT video, but also pave the way for enhanced video experiences that place new pressures on the network in a continuous cycle of using the capacity that we have available to us. These developments take many years to become widespread reality.
In this context, OTT Operators need to think differently, as their traffic demands outstrip network supply. Deploying Edge Caches closer to the consumer is important, but as noted there are challenges. Leading CDN businesses have recognized this issue and are focused on making more intelligent use of network resources to reduce dependency on network supply. So, what is being done to create these iCDNs?
First, leading CDNs are interconnecting all their Cache servers, and then distributing content caching and processing across them. This reduces the traditional dependence on an Edge Cluster or POP, while utilizing performance intelligence gathered from all parts of the CDN infrastructure. This more sophisticated approach is superseding the traditional CDN “acquirer server” method and the more recent hierarchical architectures of Edge Cache clusters that refer back to Intermediate Cache clusters.
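One well-known technique for distributing content across a fully interconnected set of caches, rather than pinning it to a fixed cluster, is consistent hashing. This is a generic sketch of the idea, not a description of any specific vendor's implementation:

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Map content keys to cache servers so that adding or removing a
    server remaps only a small fraction of the content, instead of
    invalidating every cache's working set."""
    def __init__(self, servers, vnodes: int = 100):
        # Each server gets many virtual points on the ring for balance.
        self.ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, content_key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect(self.points, self._hash(content_key)) % len(self.points)
        return self.ring[idx][1]
```

With a scheme like this, any cache in the interconnected mesh can deterministically locate the server responsible for a given title, which is what reduces the dependence on a fixed Edge Cluster or POP hierarchy.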
Second, leading CDNs are using performance data from all 5 service domains – the 4 QoS domains of hardware, software, network and stream, and the QoE domain of the client – in order to create a complete customer-centric view. This view becomes even more important as CDNs expand their workload for bigger audiences. All CDNs monitor software, hardware and streams, focusing on the metrics that are directly under their control. Some CDNs go beyond this to add 3rd party network data into a unified QoS view. But only the most advanced CDNs combine QoE and QoS for a complete view of performance. These iCDNs are the next-generation of CDN platforms.
Third, the iCDNs are taking data from the 5 domains and applying machine-learning and artificial intelligence in real-time in order to predict quality issues and take proactive actions to avert them. This proactive approach based on all available information – which must be filtered to avoid data overload and slow decision-making – is the hallmark of the intelligent CDNs and will be how OTT Operators assure QoE as their audiences grow.
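The proactive principle can be illustrated with the simplest possible predictor: a moving average over a QoE metric with an alerting floor. Real iCDNs use far richer models across all 5 domains; the metric, window and threshold here are arbitrary choices for illustration:

```python
from collections import deque

class RebufferPredictor:
    """Toy early-warning detector: flags when a client QoE metric
    (e.g. playback buffer level in seconds) trends below a floor,
    so the CDN can act (e.g. reroute the stream) before a rebuffer."""
    def __init__(self, window: int = 5, floor_seconds: float = 2.0):
        self.samples = deque(maxlen=window)
        self.floor = floor_seconds

    def observe(self, buffer_seconds: float) -> bool:
        self.samples.append(buffer_seconds)
        avg = sum(self.samples) / len(self.samples)
        return avg < self.floor  # True => act now, before the stall
```

The windowed average is a stand-in for the filtering the article mentions: acting on every raw sample would cause data overload and noisy decisions, while acting on a smoothed trend catches genuine degradation early.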
Figure 2: Distributed Intelligence within CDNs is the future…and already available from leading CDNs.
The roadmap for CDNs is not just about supporting pull-system principles with more powerful edge caches placed ever closer to the consumer, although this remains fundamentally important because it addresses the traffic volume challenge. It is also about making the edge caches more interconnected and more intelligent in order to be more proactive, which addresses the efficiency challenge.
OTT Operators should therefore look at their CDN strategy in terms of the combination of placing caches in the optimal locations (i.e. peering point or ISP) with the optimal business model (i.e. public, private or hybrid) and ensuring that the CDNs make maximum use of relevant data for intelligent, real-time stream routing. This will make a significant difference to both customer satisfaction and business profitability as video traffic grows.