The Streaming Tsunami: Part 6 - A National Blueprint For Video Streaming Delivery (Part 2)

The shift from DTT to OTT-centric delivery and full-scale streaming is set to drive a potential 10x increase in required capacity over current peak streaming demand. In this two-part article we use the UK as a model to present a theoretical new streaming infrastructure based on a unified edge network.

This article is Part 2 of a 2-part series and contains references that are best read in the context of Part 1.

If we follow the scenario that we build Edge streaming capacity for best possible regional-level viewing, which would future-proof the capacity, then in the UK we would build about 660 Tbps of capacity that could be shared across all content providers and all ISPs. In the USA we would need 3,300 Tbps, in Germany 830 Tbps, in Argentina 450 Tbps, and in Finland 55 Tbps.
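The per-country figures above follow a simple rule: national Edge capacity is roughly the whole population streaming concurrently at 10 Mbps each. A minimal sketch of that arithmetic, using rough population figures that are my own assumptions (the article's numbers are rounded, so results will not match to the digit):

```python
# Rough 2023 population figures in millions (assumed, for illustration only).
POPULATION_MILLIONS = {
    "UK": 67, "USA": 330, "Germany": 83, "Argentina": 45, "Finland": 5.5,
}

STREAM_MBPS = 10  # per-viewer bitrate assumed throughout the article


def edge_capacity_tbps(population_millions: float,
                       stream_mbps: float = STREAM_MBPS) -> float:
    """Capacity in Tbps = concurrent viewers x per-viewer bitrate.

    population_millions * 1e6 viewers * stream_mbps Mbps, converted to Tbps.
    """
    return population_millions * 1e6 * stream_mbps / 1e6


for country, pop in POPULATION_MILLIONS.items():
    print(f"{country}: ~{edge_capacity_tbps(pop):.0f} Tbps")
```

With these assumed populations the sketch reproduces the article's figures: ~670 Tbps for the UK (rounded to 660 in the text), 3,300 for the USA, 830 for Germany, 450 for Argentina, and 55 for Finland.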

660 Tbps is enough capacity to deliver 10 Mbps streams per viewer for 66 million concurrent viewers. In a unicast-centric delivery model 660 Tbps means 13,000 1RU servers which, if deployed across 1000 BT Openreach PoPs, means 13 servers per PoP. Add a 1RU network switch and a 1RU firewall server, and we have 15RU per PoP, or about one-third of a standard rack. For a platform that could stream at 10 Mbps to everyone in the country, or at 30 Mbps to the prime-time audience of about one-third of the population, that’s not a big and scary footprint.

And if we have servers that stream at 100 Gbps then we reduce the server count to about 6,600 servers. Or if the network topology means that it is more efficient to deploy Video Edge servers in 200 Exchanges instead of 1000 Exchanges, then we reduce the number of physical locations that need to be managed by a factor of five. Additionally, if server use can be balanced out in a way that extends working life from a typical 5 years to 7-8 years, then we can reduce the carbon footprint associated with server manufacturing and deployment.
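The server-count arithmetic above can be sketched as follows. The ~50 Gbps and 100 Gbps per-server throughputs are the figures implied by the text, and the 1000-PoP Openreach footprint comes from the article; exact counts differ slightly from the article's rounded numbers:

```python
import math

TOTAL_TBPS = 660  # UK-wide Edge capacity target from the article


def servers_needed(per_server_gbps: float, total_tbps: float = TOTAL_TBPS) -> int:
    """Number of servers needed to deliver total_tbps at a given per-server rate."""
    return math.ceil(total_tbps * 1000 / per_server_gbps)


servers_50g = servers_needed(50)     # 13,200 -> the article rounds to 13,000
servers_100g = servers_needed(100)   # 6,600 servers at 100 Gbps each

# Per-PoP footprint: ~13 streaming servers plus a 1RU switch and 1RU firewall.
per_pop = servers_50g / 1000         # ~13 servers per PoP
rack_units_per_pop = 13 + 1 + 1      # 15RU, about one-third of a standard rack
```

The point of the sketch is only that the footprint scales linearly with per-server throughput: doubling per-server capacity halves the server count without changing the per-PoP rack budget.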

There are various ways to design the deployment to optimize system-wide cost and efficiency, but the basic requirement is that it can deliver consistently good quality video at consistently low latency to daily prime-time audiences and occasional super-large audiences, and it can do this while being highly efficient and cost-effective to operate and maintain. Keeping the system simple by reusing internet protocols and implementing standard servers that are de-tuned, energy-optimized, and deployed close to the consumers can achieve these objectives.

So, how much capacity and how many servers (roughly) would need to be deployed in some of the UK’s locations like Greater London, Greater Manchester & Liverpool, and, as an example of a smaller urban area, Daventry? Note that this same approach could be applied to other population centers around the world with broadband infrastructure.

A first assumption to make is how many Exchange buildings are likely to be present in each location (an “Exchange building” is an original telephony building, housing telecommunications infrastructure). Without an exact and confirmed BT Openreach map, we can simply apply a proportional reduction in Exchange count based on the planned total reduction from 5600 exchanges to 1000 exchanges (i.e., 18% of current buildings will remain).
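The proportional-reduction assumption above can be expressed directly. A hedged sketch, in which the per-locality exchange counts are hypothetical placeholders rather than confirmed BT Openreach figures:

```python
# Proportional-reduction estimate: apply the national consolidation ratio
# (5600 -> 1000 exchanges, i.e. ~18% retained) uniformly to each locality.
CURRENT_EXCHANGES = 5600
PLANNED_EXCHANGES = 1000
RETENTION = PLANNED_EXCHANGES / CURRENT_EXCHANGES  # ~0.179


def surviving_exchanges(local_count: int) -> int:
    """Estimate how many of a locality's exchanges remain after consolidation.

    Floors the result at 1 so even a small town keeps one serving exchange.
    """
    return max(1, round(local_count * RETENTION))


# Hypothetical local counts, for illustration only:
for area, count in {"Greater London": 200,
                    "Greater Manchester & Liverpool": 60,
                    "Daventry": 2}.items():
    print(f"{area}: {surviving_exchanges(count)} of {count} exchanges remain")
```

A uniform ratio is obviously a simplification; real consolidation plans would weight by population density and fiber topology, but it is sufficient for the rough sizing exercise here.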

This approach seems to make good sense in terms of total infrastructure deployed for video streaming at full-scale. But there are three big hurdles to overcome.

First is how to deploy the servers this far inside the ISPs’ current networks. Each ISP has its own policies for on-net server deployment. Finding the most efficient video delivery model requires a partnership between Streamers and ISPs, based on a shared vision of full-scale streaming delivery that achieves broadcast-grade reach, reliability and QoE for consumers while significantly reducing the load on current ISP core network infrastructure. At present, initiatives like the Fair Share debate in the EU take a different approach, seeking extra investment in network capacity to support already large, and growing, streaming traffic levels. But while that moves money from one party to another, it does not describe how to deliver an ultra-efficient and sustainable network for full-scale video streaming.

The second point is how to manage the CDN service to drive it towards this ultra-efficient model. Leaving it open to standard market forces may not yield the target results quickly. How much will we overbuild capacity and duplicate effort in a race to provide this service to the fast-growing Streamers? A prerequisite to achieving efficiency is for the major streamers to work together on their delivery capacity requirements – given that streamers share audiences at different times of the day/week/month, how can they reuse the same capacity between them, as they do today in DTT broadcast networks?

It is worth considering that aggregating streamer traffic towards the ISPs would optimize the capacity deployments, because the primary driver of capacity is total audience size, which is determined independently of any individual streamer, individual ISP, or CDN working in the market. So, as broadband market shares ebb and flow, as total audience size adjusts from hour to hour, as new viewing formats enter the market, and as the traditional broadcast infrastructure is largely replaced by streaming video infrastructure, an industry-level unified Edge platform can be right-sized and right-featured to fit exactly what the market requires. And while this may sound like a heavily regulated, monopolistic, and even nationalistic approach, which could reasonably be argued to go against the normally efficient principles of free market competition, the point is that the final solution should be focused on sustainability and efficiency, however that is best achieved.

The third point is how to ensure the platform is secure. Redundancy is one element of providing secure infrastructure, which was highlighted in the previous article. We can probably obtain sufficient redundancy within the overall system by deploying capacity that enables reaching each person in the country, which would enable delivery to continue even if significant elements of the platform became unavailable. But at full-scale streaming, the network will need a world-class ability to withstand cyber-attacks. The video delivery network will need to be treated like a piece of critical national infrastructure, just like existing telco networks and broadcast networks. The Video Edge Network Provider must focus heavily on this specific aspect of platform management on behalf of all its media customers and their governmental, societal, and commercial stakeholders.

Partnering For Full-scale Streaming

Fragmentation can often be the enemy of efficiency because it duplicates effort and can leave resources idle. Deep efficiency requires industry-level partnership with the most demanding shared objectives. The media industry and the telecommunications industry can work together to define the deployment model that works best for everyone and reaches an environmentally and economically sustainable operating model more quickly. To repeat a point made in Part 1 of this article, the roughly 80% of internet traffic that is video can be treated differently from the other types of data in the remaining 20%. Some businesses already operate in this media/telecommunications middle ground, providing specialized network services to the media industry, aggregating content delivery and ensuring all the content reaches consumers. Some of these network services already include the terrestrial broadcast networks, which will need to be carefully phased out over time as DTT shifts to OTT. These service providers know what full-scale video delivery looks like, and some are adapting quickly to a world where video delivery changes from a 90:10 DTT:OTT split into an OTT-centric model. They can manage the transition pathway for the different technology platforms to decrease in DTT and increase in OTT, specifically to ensure video can be efficiently and effectively managed.

These service providers have the capability and business interests to build out our full-scale streaming platforms of the future. Under the hood of their services there is opportunity for new technologies to innovate, combining CDNs, peer-to-peer networking, multicast, WebRTC, DVB-I / ATSC 3.0, and even AI and blockchain. All of these technologies may have a role in different ways in different environments, but the complexity can be abstracted and managed to make things more efficient for both the media industry and the telecommunications industry.

When we talk about full-scale streaming, hopefully we can agree on a shared vision of an optimally efficient network, that provides us with all our forms of video in the most environmentally and economically sustainable manner possible.

How Long Will This Take?

This might sound like a very long road ahead, but building out a full-scale network will not take as long as you might think. This is not the “design, manufacture, build from scratch” process of a highly complicated new system. This is a new deployment of proven software and hardware technologies (i.e., CDN technologies) in real estate that is largely already in place (i.e., telco networks and “Exchange buildings”).

The main part of the deployment time is the network architecture planning to ensure the connectivity between the distributed Edges and from Edges to Origins is correctly defined, and this requires aligning the vested interests in the media and telecommunications markets to reach a shared vision. Even with this planning process, the deployment is not a big-bang moment in time. The Edge capacity can and should expand over time.

As streaming audiences shift from 10% of total video viewership to 20%, 50%, 80% and maybe 100%, the network can be expanded accordingly. Aligning Video Edge network expansion with telco fiber roll-outs and broadcaster streaming plans will create a clear roadmap towards full-scale streaming.

With the right shared vision and political will, it is conceivable that the next 5-10 years will bear witness to this industry transformation towards a Video Edge Blueprint that meets all our objectives.
