Viewpoint: Secure Edge Cache: Optimizing the Network and Reducing Latency

Distributing content, especially video, on the web with the best performance and highest quality of experience requires a large number of servers deployed as close as possible to end-users. Consequently, Content Providers (CP) and third parties have built large networks of content distribution servers, also known as content delivery networks (CDNs).

Today, CDN owners partner with Internet Service Providers (ISPs) to jointly deliver content in the most efficient manner. This includes localizing a substantial amount of their traffic, which allows assets to be retrieved from a cache closer to the end-user, resulting in faster downloads and delivery times. Localizing traffic also helps ISPs lower the cost of serving the traffic demanded by the CP’s subscribers, translating into substantial savings in transit and transport bandwidth. Indeed, if the requested content is already in the local cache and is still considered “fresh”, it is served directly to the end-user, improving the user experience and saving bandwidth.
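To make the freshness rule concrete, the short Python sketch below checks whether a cached object can still be served locally, using only a Cache-Control max-age value and the time the object has spent in the cache. It is an illustrative simplification with hypothetical helper names: real HTTP caches also honour the Age and Expires header fields, revalidation and other directives.

    import time

    def is_fresh(stored_at: float, max_age: int) -> bool:
        """Return True if a cached response is still 'fresh', i.e. its age
        (seconds since it was stored) is below the max-age directive."""
        age = time.time() - stored_at
        return age < max_age

    # An object cached 90 seconds ago with "Cache-Control: max-age=300" can be
    # served straight from the local cache, with no request to the origin.
    print(is_fresh(stored_at=time.time() - 90, max_age=300))  # True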

To effectively localize traffic, the CP asks the ISPs to deploy a number of the CP’s proprietary servers inside their networks; these servers provide caching functionality together with other optimizations. The ISPs work closely with the CP to carefully map where in the network to deploy these servers, ensuring a well-targeted deployment that substantially enhances performance.

Caches allow an HTTP origin server to offload the responsibility for delivering certain content. The cache hit rates vary depending on the number of end-users served by the cache, the unique consumption patterns of end-users, and the size and type of the cache. It’s been reported that “between 70-90% of CP cacheable traffic can be served from the deployed CP’s cache infrastructure” [1].

However, existing solutions for content distribution have a major drawback: an origin is required to yield control over its content to the CDNs, allowing them to see and modify the content they distribute. In some cases, expediency can dictate that the CDN be given control over the entire origin. As a result, in the past three years the larger CPs have built their own CDNs to overcome this problem, and in doing so have caused a proliferation of third-party proprietary cache boxes within ISP networks. This proliferation has grown to the point where the ISPs’ spending on those third-party boxes far exceeds their savings in transit and transport bandwidth.

Ericsson is an active member of the Internet Engineering Task Force (IETF), a large, open, international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. Within the IETF, Ericsson is recommending a solution that addresses the proprietary nature of caches in ISP networks while ensuring the privacy and protection of the content stored there.

Ericsson, together with other companies, is proposing to the IETF a new architecture for distributing content via a third-party CDN with a stronger level of security and privacy for the end user while reducing the security privileges of the CDN compared with current practice.   

The proposed architecture allows an origin server to delegate the responsibility for delivering the payload of an HTTP response (the content item) to a third party in a way that prevents the third party from modifying the content. In this solution, the content is also encrypted, which prevents the third party from “seeing” or learning about it.

An origin server can use this proposed architecture to take advantage of CDNs in cases where security concerns might otherwise have prevented their use. This also makes third-party distribution viable for types of content that were previously deemed too sensitive for it.

The Ericsson proposed architecture consists of three basic elements:

  1. A delegation component
  2. Integrity attributes
  3. Confidentiality protection

Content Delegation

The out-of-band content encoding [2] provides the basis for delegation of content distribution.

  • The client makes a request to the origin server that includes the value "out-of-band" in the Accept-Encoding HTTP header field, indicating its willingness to use the secure content delegation mechanism. A new BC header field (defined in [5]) indicates that the client is connected to a proxy cache that it is willing to use for out-of-band requests.
  • In place of the complete response, the origin provides only the response header fields and a body using the out-of-band content encoding.
  • The origin server populates the proxy cache or CDN with the resource to be served, encrypted and integrity protected.
  • The out-of-band content encoding directs the client to retrieve the content from the cache or CDN. The URL used to acquire a resource from the CDN is unrelated to the URL of the original resource, which allows an origin server to hide from the CDN provider the relationship between the content it caches and the original resource requested by the client. A simplified sketch of this exchange follows the list.
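The following Python sketch walks through that exchange using plain data structures and no network I/O. The header values, the JSON layout of the out-of-band descriptor and the secondary URL are simplified placeholders chosen for illustration; the exact formats are defined in [2] and [5].

    import json

    # Step 1: client request, signalling support for the mechanism and the
    # blind cache it is willing to use (illustrative values).
    request_headers = {
        "Accept-Encoding": "out-of-band",
        "BC": "https://blind-cache.isp.example",
    }

    # Step 2: the origin answers with header fields only; the body is an
    # out-of-band descriptor pointing at the encrypted copy held by the cache.
    origin_response = {
        "status": 200,
        "headers": {"Content-Encoding": "out-of-band, aesgcm"},
        "body": json.dumps([{"sr": ["https://blind-cache.isp.example/ml4fNgh3"]}]),
    }

    # Step 3: the client extracts the secondary URL(s), fetches the opaque
    # ciphertext from the cache, then verifies and decrypts it with material
    # obtained from the origin. The secondary URL bears no relation to the
    # URL of the original resource.
    secondary_urls = [url for entry in json.loads(origin_response["body"])
                      for url in entry["sr"]]
    print(secondary_urls)  # ['https://blind-cache.isp.example/ml4fNgh3']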

Content Integrity

Content integrity is crucial to ensuring that content cannot be improperly modified by the CDN.

Several options are available for authenticating content provided by the CDN [3]. Content that requires only integrity protection can be safely distributed by a third-party CDN using this solution.
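As a simplified illustration of the integrity check, the sketch below compares the SHA-256 digest of the bytes served by the CDN against a digest the origin has communicated to the client over their authenticated connection. The draft [3] builds on a record-based integrity content encoding rather than a single whole-body digest, so this is a stand-in for the idea, not the defined mechanism.

    import hashlib

    def verify_content(cdn_bytes: bytes, expected_digest: str) -> bool:
        """Accept the CDN's copy only if its SHA-256 digest matches the digest
        supplied by the origin; any modification by the CDN changes the digest."""
        return hashlib.sha256(cdn_bytes).hexdigest() == expected_digest

    payload = b"example video segment"
    expected = hashlib.sha256(payload).hexdigest()    # provided by the origin

    print(verify_content(payload, expected))          # True: untouched copy
    print(verify_content(payload + b"x", expected))   # False: tampered copy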

Confidentiality Protection

Confidentiality protection limits the ability of the delegated server to learn what the content holds.

Confidentiality for content is provided by applying an encryption content encoding [4] to the content before that content is provided to a CDN. It is worth highlighting that the proposed solution only places content on the CDN if it is protected by access controls on the origin server; this prevents the CDN from discovering the real resources at the origin by pretending to be a client and querying the origin directly.
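The sketch below shows the general idea: the content is encrypted with AES-GCM before it is pushed to the CDN, so the cache only ever stores ciphertext, while the key is shared between origin and client. It uses the third-party "cryptography" Python package and does not reproduce the record structure or header parameters of the encrypted content coding defined in [4].

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Key material is exchanged between origin and client; the CDN never sees it.
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)

    plaintext = b"sensitive content item"
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # stored on the CDN

    # Only a client holding the key can recover the content; AES-GCM also
    # authenticates it, so undetected modification by the CDN is not possible.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext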

[1] Google Global Cache (GGC), https://peering.google.com/about/ggc.html, accessed 2015-04-29

[2] J. Reschke, S. Loreto, "'Out-Of-Band' Content Coding for HTTP", https://tools.ietf.org/html/draft-reschke-http-oob-encoding-04

[3] M. Thomson, G. Eriksson, C. Holmberg, "An Architecture for Secure Content Delegation using HTTP", https://tools.ietf.org/html/draft-thomson-http-scd-00

[4] M. Thomson, "Encrypted Content-Encoding for HTTP", https://tools.ietf.org/html/draft-ietf-httpbis-encryption-encoding-01

[5] M. Thomson, G. Eriksson, C. Holmberg, "Caching Secure HTTP Content using Blind Caches", https://tools.ietf.org/html/draft-thomson-http-bc-00
