Viewpoint: Secure Edge Cache: Optimizing the Network and Reducing Latency

The efficient distribution of content, especially video, on the web with the best performance and highest quality of experience requires a large number of servers deployed as close as possible to end-users. Consequently, Content Providers (CPs) and third parties have built large networks of content distribution servers, also known as content delivery networks (CDNs).

Today, CDN owners partner with Internet Service Providers (ISPs) to jointly deliver content in the most efficient manner. This includes localizing a substantial amount of their traffic, which allows assets to be retrieved from a cache closer to the end-user, resulting in faster downloads and delivery times. Localizing traffic also helps ISPs lower the cost of serving the traffic demanded by the CP’s subscribers, translating into substantial savings in transit and transport bandwidth. Indeed, if the requested content is already in the local cache and is considered “fresh”, it is served directly to the end-user, improving the user experience and saving bandwidth.

To effectively localize traffic, the CP asks ISPs to deploy a certain number of the CP’s proprietary servers inside their networks; these servers provide caching together with other optimizations. Each ISP works closely with the CP to carefully map where these servers should sit in the network, ensuring a well-targeted deployment that substantially enhances performance.

Caches allow an HTTP origin server to offload the responsibility for delivering certain content. The cache hit rates vary depending on the number of end-users served by the cache, the unique consumption patterns of end-users, and the size and type of the cache. It’s been reported that “between 70-90% of CP cacheable traffic can be served from the deployed CP’s cache infrastructure” [1].
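
Those hit rates translate directly into transit savings. As a rough, purely illustrative calculation (the 60% cacheable share is an assumption for the sake of the example; only the 70-90% hit-rate range comes from [1]):

```python
# Assumed, for illustration only: the share of the ISP's total traffic
# that is CP cacheable traffic.
cacheable_share = 0.60

# Hit-rate range reported in [1]: 70-90% of cacheable traffic served
# from the deployed cache infrastructure.
for hit_rate in (0.70, 0.90):
    offloaded = cacheable_share * hit_rate
    print(f"hit rate {hit_rate:.0%}: {offloaded:.0%} of total traffic served locally")
```

Under these assumptions, the local cache keeps roughly 42-54% of the ISP's total traffic off its transit and transport links.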

However, there is a major drawback to existing solutions for content distribution: an origin is required to yield control over its content to the CDNs, allowing them to see and modify the content that they distribute. In some cases, expediency can dictate that the CDN be given control over the entire origin. As a result, in the past three years, the larger CPs have built their own CDNs as a way to overcome this problem. In doing so, they have caused a proliferation of third-party proprietary cache boxes within the ISPs. This proliferation has become so extensive that ISPs’ spending on the third-party boxes deployed in their networks has far exceeded their savings in transit and transport bandwidth.

Ericsson is an active member of the Internet Engineering Task Force (IETF), a large open international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. Within the IETF, Ericsson is recommending a solution to the proprietary nature of caches in ISP networks that also ensures the privacy and protection of the content stored there.

Ericsson, together with other companies, is proposing to the IETF a new architecture for distributing content via a third-party CDN with a stronger level of security and privacy for the end user while reducing the security privileges of the CDN compared with current practice.   

The proposed architecture allows an origin server to delegate the responsibility for delivering the payload of an HTTP response (the content item) to a third party in a way that prevents the third party from modifying the content. In this solution, the content is also encrypted, which prevents the third party from “seeing” or learning anything about it.

An origin server can use this architecture to take advantage of CDNs where security concerns might otherwise have prevented their use. This makes it relevant even for types of content previously deemed too sensitive for third-party distribution.

The Ericsson proposed architecture consists of three basic elements:

  1. A delegation component
  2. Integrity attributes
  3. Confidentiality protection

Content Delegation

The out-of-band content encoding [2] provides the basis for delegation of content distribution.

  • The client’s request to the origin server includes a value of "out-of-band" in the Accept-Encoding HTTP header field, indicating a willingness to use the secure content delegation mechanism. A new BC header field (defined in [5]) indicates that the client is connected to a proxy cache that it is willing to use for out-of-band requests.
  • In place of the complete response, the origin only provides response header fields and an out-of-band content encoding.
  • The server populates the proxy cache or CDN with the resource to be served, encrypted and integrity protected.
  • The out-of-band content encoding directs the client to retrieve content from the cache or CDN. The URL used to acquire a resource from the CDN is unrelated to the URL of the original resource. This allows an origin server to hide from the CDN provider the relationship between content in the CDN and the original resource that was requested by the client.
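
The exchange above can be sketched as follows. The host names, cache key, and JSON payload shape here are illustrative assumptions; the actual out-of-band payload format is defined in [2]:

```python
import json

# Hypothetical in-memory stand-in for the blind cache holding the
# encrypted, integrity-protected copy under an unrelated cache key.
CACHE = {"/oob/4ad1f": b"<encrypted, integrity-protected payload>"}

def origin_handle(request_headers):
    """Origin answers an out-of-band-capable request with a reference
    to the cached copy instead of the content itself."""
    accepts_oob = "out-of-band" in request_headers.get("Accept-Encoding", "")
    if not accepts_oob or "BC" not in request_headers:
        return {"headers": {}, "body": b"<full content served inline>"}
    return {
        "headers": {"Content-Encoding": "out-of-band"},
        # The cache URL is deliberately unrelated to the original resource URL.
        "body": json.dumps([{"URI": "/oob/4ad1f"}]).encode(),
    }

def client_fetch():
    """Client advertises out-of-band support and names its proxy cache."""
    resp = origin_handle({
        "Accept-Encoding": "out-of-band",
        "BC": "cache.isp.example",  # proxy cache the client is willing to use
    })
    if resp["headers"].get("Content-Encoding") == "out-of-band":
        ref = json.loads(resp["body"])[0]["URI"]
        return CACHE[ref]  # retrieve the payload from the blind cache
    return resp["body"]
```

Because the origin never reveals the mapping between `/oob/4ad1f` and the original resource, the cache learns nothing about which resource the client asked for.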

Content Integrity

Content integrity is crucial to ensuring that content cannot be improperly modified by the CDN.

Several options are available for authenticating content provided by the CDN [3]. Content that requires only integrity protection can be safely distributed by a third-party CDN using this solution.
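
As one illustration of how a client could detect tampering, the sketch below chains SHA-256 digests over content records so that a single root digest, delivered over the client's secure channel to the origin, covers the whole item. This is a simplified stand-in for the integrity mechanisms discussed in [3], not the exact encoding:

```python
import hashlib

def chain_digest(records):
    """Compute a root digest over content records by hashing each record
    together with the digest of everything after it (last record first).
    Simplified illustration; not the exact encoding from [3]."""
    digest = b""
    for rec in reversed(records):
        digest = hashlib.sha256(rec + digest).digest()
    return digest

def verify(records, root):
    """Accept the CDN-served copy only if it matches the origin's digest."""
    return chain_digest(records) == root

records = [b"chunk-0", b"chunk-1", b"chunk-2"]
root = chain_digest(records)   # origin conveys this over the secure channel
assert verify(records, root)                                 # intact copy
assert not verify([b"chunk-0", b"EVIL", b"chunk-2"], root)   # tampering detected
```

Chaining the digests record-by-record, rather than hashing the whole item at once, lets a client start validating (and rejecting) content incrementally as records arrive.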

Confidentiality Protection

Confidentiality protection limits the ability of the delegated server to learn what the content holds.

Confidentiality for content is provided by applying an encryption content encoding [4] to the content before that content is provided to a CDN. It is worth highlighting that the proposed solution only permits this for content that is also protected by access controls on the origin server; otherwise, the CDN could discover the real resources at the origin simply by pretending to be a client and querying the origin itself.
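
To illustrate the flow, the toy sketch below XORs the content with a hash-derived keystream before handing it to the CDN. This is deliberately not real cryptography: the actual proposal uses the aes128gcm encryption content encoding [4], which also authenticates the ciphertext. The point is only that the CDN stores and serves bytes it cannot read:

```python
import hashlib
import secrets

def toy_keystream_xor(key, nonce, data):
    """TOY cipher for illustration only. Real deployments use the
    aes128gcm content encoding [4]; do not use this for anything real."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, out))

key, nonce = secrets.token_bytes(16), secrets.token_bytes(12)
plaintext = b"sensitive content item"
ciphertext = toy_keystream_xor(key, nonce, plaintext)  # what the CDN stores
assert ciphertext != plaintext                         # CDN cannot read it
assert toy_keystream_xor(key, nonce, ciphertext) == plaintext  # client decrypts
```

The key is conveyed from the origin to the client over their existing secure connection, so the delegated server only ever handles ciphertext.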

[1] Google Global Cache (GGC), checked 2015-04-29.

[2] J. Reschke, S. Loreto, “'Out-Of-Band' Content Coding for HTTP”.

[3] M. Thomson, G. Eriksson, C. Holmberg, “An Architecture for Secure Content Delegation using HTTP”.

[4] M. Thomson, “Encrypted Content-Encoding for HTTP”.

[5] M. Thomson, G. Eriksson, C. Holmberg, “Caching Secure HTTP Content using Blind Caches”.
