Demand for bandwidth is growing at a remarkable rate. Demand is so high that Equinix's Global Interconnection Index (GXI) Volume 4 forecasts a 45% CAGR specifically for interconnection bandwidth from 2019 to 2023. This would take global interconnection bandwidth, used specifically by businesses that need guaranteed throughput capacity, from 5,000 Tbps in 2020 to 16,000 Tbps in 2023.
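As a quick sanity check, the two figures are consistent with the forecast: growing from 5,000 Tbps in 2020 to 16,000 Tbps in 2023 implies a compound annual growth rate close to the quoted 45%.

```python
# Sanity-check the GXI figures: 5,000 Tbps (2020) to 16,000 Tbps (2023)
# spans three years of compounding.
start_tbps, end_tbps, years = 5_000, 16_000, 3

cagr = (end_tbps / start_tbps) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 47%, in line with the ~45% forecast
```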
What is driving this? Mainly Telcos on one side and Cloud & IT Service Providers on the other, which together represent just over 50% of the total capacity requirement. Both are scaling out to support the burgeoning digital economy. Content & Digital Media is expected to consume 10% of the 16,000 Tbps of global interconnection capacity, mostly through CDN service providers.
But interconnection is only one of the content delivery models used by D2C (Direct to Consumer) OTT operators. This article reviews how Internet Exchange Providers support the fast-growing OTT industry with multiple connectivity models.
The Internet Exchange Provider
Internet Exchange Providers play a central role in the growth of OTT video services.
They provide three specific ingredients essential to OTT video delivery. First, they operate data centers that house the network edge points of service providers and network operators. Second, they run an interconnect platform that delivers peering and interconnection services between those parties. Third, they provide network services for managing network expansion, network assurance, and data security. Each is fundamental to the success of OTT video services in the years ahead.
The Data Center
Internet services are far more than best-effort; they are treated as critical national infrastructure. Telcos have run carrier-class data centers with five to six "9s" of availability for many years already. Internet Exchange Providers (IXPs) operate to the same standards, given the importance of their facilities to the delivery of internet-based services.
The capacity of a data center is defined by the space and power of the facility. A very large facility today can offer over 70,000 sq ft of space and in excess of 20 megawatts of power. Internal connection bandwidth in the largest data centers can reach many terabits per second and beyond.
IXP customers require geo-redundancy and geo-expansion. IXPs cater for this through their site strategies, which combine capacity expansions in existing sites with builds in nearby sites. Interconnectivity naturally creates dense points of presence as optical network circuits converge on a single place, which makes IXP building strategy a primary factor in expanding connectivity.
There are two grades of connectivity: public, known as "peering", and private, known as "interconnection" or "Private Network Interconnection (PNI)".
The Peering Network, the internet exchange capacity for general use, typically ranges from 1GbE up to 400GbE for a single user. CDN service providers treat peering as a valuable way to connect with smaller ISP network operators; to reach larger ISPs, they interconnect. The very largest CDNs, often the private networks of the biggest content providers such as Netflix, YouTube, Facebook and Amazon, use peering or interconnection for the smaller ISPs and deploy private edge caches inside the largest ISPs.
The Interconnection Network is a private network connection between two participants. A typical threshold at which a service provider will move to an interconnection model with a network operator is around 1-2Gbps of traffic. As noted above, this area of the industry is growing very quickly. To accommodate fast and frequent capacity growth, IXPs provide API-driven, automated interconnection via a central service.
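To make the idea of API-driven interconnection concrete, here is a purely hypothetical sketch of what a provisioning request might look like. The endpoint concept, field names and identifiers are invented for illustration; real IXP platforms define their own schemas.

```python
# Hypothetical interconnection order (all field names and values are
# invented for illustration; real IXP APIs have their own schemas).
import json

order = {
    "service": "virtual-cross-connect",
    "a_side": {"port": "DC5-PORT-0142", "vlan": 1201},
    "z_side": {"participant": "ExampleISP", "asn": 64502},
    "bandwidth_mbps": 2000,  # around the 1-2 Gbps interconnection threshold
}

payload = json.dumps(order, indent=2)
print(payload)  # a JSON body like this would be submitted to the IXP's portal or API
```

The point is that capacity upgrades become a machine-to-machine transaction rather than a manual cross-connect order, which is what allows interconnection to scale at the pace described above.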
Alongside this growth in throughput, security has taken on renewed urgency. Starting in 2017, efforts were accelerated to resolve one of the longstanding weaknesses of the internet: the independent validation of IP address ownership. This problem has been an issue for OTT streaming services, creating a weak spot through which content can be pirated and services disrupted. To validate IP addresses in a centralized manner, IXPs provide a Route Server service which redistributes routes and verifies that they match the cryptographically signed records held securely by the regional internet registries. IXP members can peer with the IXP-hosted Route Server instead of maintaining individual peering relationships with all other IXP members, which creates efficiency for an ever-larger set of network interconnections. CDNs carrying OTT video streams routinely use IXP Route Server security services, and Internet Service Providers are also now validating their IP address ranges and those of their peers through the same centralized model.
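The validation logic described above can be sketched in a few lines. This is an illustrative model, in the spirit of RPKI route origin validation, not a real route server implementation; the record entries are example values using documentation prefixes and private AS numbers.

```python
# Illustrative sketch of route origin validation: an announcement is
# checked against published records of (authorised prefix, maximum
# announced prefix length, origin AS). Example data only.
from ipaddress import ip_network

roas = [
    (ip_network("203.0.113.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_as: int) -> str:
    """Classify an announcement as valid, invalid, or not-found."""
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in roas:
        if net.subnet_of(roa_net):
            covered = True  # some record covers this address space
            if net.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    # covered but failing the checks means a likely hijack or misconfiguration
    return "invalid" if covered else "not-found"

print(validate("203.0.113.0/24", 64500))  # valid
print(validate("203.0.113.0/24", 64999))  # invalid (wrong origin AS)
print(validate("192.0.2.0/24", 64500))    # not-found (no covering record)
```

A route server applying a policy like this can drop "invalid" announcements centrally, so every member peering with it benefits without running its own validation against each peer.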
IXP capacity is growing rapidly to keep pace with the demand. Consumers pull content and content providers make it available. Network operators expand core and access network bandwidth, and IXPs expand the interconnection points to support the demand on all sides. From the early 1990s to around 2007, internet usage grew quickly but did not exceed 100 exabytes delivered per year. From 2007 to 2016, total data delivered grew tenfold to reach about 1,000 exabytes. Cisco forecasts that 3,500 exabytes will be delivered in 2023, which complements the faster-than-average 4x growth in interconnection capacity forecast by Equinix. But by 2023 we will still only be near the beginning of the Media Industry's transformation towards a fully OTT-centric delivery model.
While network bandwidth continuously grows to meet demand, network contention is a perennial problem. To avoid contention, network operators must build capacity ahead of demand. If this does not happen, traffic must be prioritized to limit the impact of contention. As a rule, peering points and internet exchanges are less susceptible to capacity shortfalls because they are built for peak aggregate traffic, with headroom to manage serious facility outages. But the fundamental multi-tenancy of telecoms networks, CDNs and service provider networks means that capacity can be oversubscribed.
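Oversubscription is easy to illustrate with a toy calculation (all figures below are invented for the example): a shared port is sold to tenants whose individual peaks sum to more than the port's capacity, and contention only appears when those peaks coincide.

```python
# Toy multi-tenancy example: a shared 100 Gbps port whose tenants'
# individual peak demands sum to more than the port capacity.
port_capacity_gbps = 100
tenant_peaks_gbps = [40, 35, 30, 25]  # hypothetical per-tenant peaks

oversubscription = sum(tenant_peaks_gbps) / port_capacity_gbps
print(f"Oversubscription ratio: {oversubscription:.2f}:1")  # 1.30:1

# Contention only bites if every tenant peaks at the same moment:
shortfall = max(0, sum(tenant_peaks_gbps) - port_capacity_gbps)
print(f"Worst-case simultaneous shortfall: {shortfall} Gbps")  # 30 Gbps
```

Operators bet that the worst case rarely happens, which is exactly the assumption that events like record-breaking live streams put under stress.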
Individual networks can suffer capacity shortages when overbooked or when demand reaches unexpected levels. This typically happens only at peak time, since outside peak hours the network's full capacity is generally available. In 2020, the world's shift to homeworking, homeschooling and daytime video streaming tested network robustness, because networks are dimensioned for evening peak use. At the Internet Exchange level, individual service providers can reach maximum capacity on their network ports. But CDNs, IXPs and ISPs generally build enough headroom into their capacity to handle the loss of an entire facility, thereby minimizing the chance of capacity shortages.
In the end, however, we are in an accelerating growth phase for internet traffic, so the chance of capacity shortages in a multi-tenant service provider environment is an ever-present risk. While service providers and network operators aim to build ahead of the demand curve, the demand is coming around that curve very quickly. That is why we routinely see headlines about streaming records being broken, with associated customer experience impacts as network capacity buckles under the pressure. This will continue as long as the internet is a pull system with growing consumer consumption and expanding service provider inputs.
Figure 2: Modes of connectivity as OTT service throughput grows. Mixing modes for a single service provider is normal.
OTT operators are among the biggest consumers of internet bandwidth, with their specialist use case of video streaming. One of the ways the biggest OTT operators avoid contention is to deploy private, dedicated capacity inside ISP networks. IXPs observe that this trend of deploying capacity inside ISP networks, to get even closer to the consumer, is growing, driven by the largest video streamers. It is part of a broader move towards distributing services closer to users, as the internet's fundamental pull system progresses towards its ultimate state of caching and processing ever closer to the consumer.
For a network operator the benefit of deploying caches deeper in the network is the saving in backhaul bandwidth. Opencaching and 5G both support this business objective.
Opencaching is focused on standardizing edge cache communication protocols so that ISPs can utilize their own infrastructure for multiple tenants or individual tenants. This could ultimately enable the placement of standardized edge caches into many local telephone exchanges and potentially even street cabinets. 5G places smaller cellular masts closer to consumers, creating infrastructure that can ultimately be used for content processing, storage and streaming. This continuous push towards the consumer is critical to achieve maximum total throughput for minimal additional investment in the core network.
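The backhaul saving that motivates both Opencaching and 5G edge deployment can be shown with a back-of-envelope calculation (the demand figure and hit ratio below are assumptions for illustration):

```python
# Illustrative numbers: how an edge cache's hit ratio translates into
# backhaul bandwidth saved for a network operator.
peak_streaming_gbps = 800  # hypothetical peak video demand in one region
cache_hit_ratio = 0.90     # fraction of requests served from the edge cache

backhaul_gbps = peak_streaming_gbps * (1 - cache_hit_ratio)
saved_gbps = peak_streaming_gbps - backhaul_gbps
print(f"Backhaul needed: {backhaul_gbps:.0f} Gbps, saved: {saved_gbps:.0f} Gbps")
```

With a high hit ratio, only the cache-miss traffic crosses the core network, which is why pushing caches towards exchanges and street cabinets maximizes total throughput for minimal core investment.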
As a result, network operators are starting to work more as infrastructure providers with IaaS (Infrastructure as a Service) offerings. IXPs now have the opportunity to deploy their own network management services on top of network operator infrastructure. And network operators themselves can deploy their own network services on top of other network operators’ infrastructure.
While network operators can expand their networks and deploy processing and storage capacity ever deeper in their networks, the IXPs will maintain a critical position of consolidated interconnection between many service providers and many network operators. There are various scenarios where OTT Operators will not want to (or will not be able to) deploy cache capacity inside ISP networks, so peering and interconnection will continue to be valid options for the long-term. In the end, the IXP’s many-to-many relationship position is unique and will persist and grow in order to provide secure, resilient and scalable networking services over the years to come.
Considerations For OTT Operators
Live streaming events are attracting larger and larger audiences. Individual OTT Operators are normally on top of their patterns and levels of viewership, particularly for live events.
As audience sizes grow, OTT Operators will increasingly use advanced forms of caching to improve and maintain quality of experience for their audiences, particularly for live events that drive the biggest audiences and the biggest commercial returns.
Because of the risk of unexpected multi-tenant capacity shortages and their effect on customers' QoE (Quality of Experience), the largest OTT Operators are building their own CDNs, which provide dedicated capacity, the lowest possible contention, and full access to the big data sets needed for service and consumer analysis. For ultimate control, an OTT Operator needs a private environment.
IXPs see that video streaming from caches deployed inside ISPs is growing year on year, led by the major video service providers like Netflix, YouTube and Facebook. These three digital giants, plus Amazon, drive about 50% of peak evening internet traffic between them in countries where they are present. The next wave of streaming giants are the national and international broadcasters, whose OTT services are becoming increasingly strategic and are the focal point for future investment in programming, advertising and customer engagement. When prime-time TV audiences move to OTT platforms – let's imagine a concurrent audience of 20 million in a country with a population of 60 million – then the national household-name broadcasters will represent a significant proportion of total internet usage and rely even more heavily on carrier-class Internet Exchange infrastructure.
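The scale of that scenario is worth making explicit. Taking the article's 20 million concurrent viewers and assuming an average stream bitrate of 5 Mbps (the bitrate is an assumption, not a figure from the article), the aggregate demand is enormous:

```python
# Rough aggregate-throughput estimate for the prime-time OTT scenario.
viewers = 20_000_000      # concurrent audience from the scenario above
avg_bitrate_mbps = 5      # assumed average stream bitrate, not a sourced figure

total_tbps = viewers * avg_bitrate_mbps / 1_000_000  # Mbps -> Tbps
print(f"Aggregate demand: {total_tbps:.0f} Tbps")  # 100 Tbps
```

A single national broadcaster generating on the order of 100 Tbps at peak would, by itself, be a material fraction of today's global interconnection bandwidth, which underlines why these services lean so heavily on IXP infrastructure.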