Viewpoint: Secure Edge Cache: Optimizing the Network and Reducing Latency

The efficient distribution of content, especially video, on the web with the best performance and highest quality of experience requires a large number of servers to be deployed as close as possible to end-users. Consequently, Content Providers (CPs) and third parties have built large networks of content distribution servers, also known as content delivery networks (CDNs).

Today, CDN owners partner with Internet Service Providers (ISPs) to jointly deliver content in the most efficient manner. This includes localizing a substantial amount of their traffic, which allows for the retrieval of assets from a cache closer to the end-user, resulting in faster downloads and delivery times. Localizing traffic also helps ISPs lower the cost of serving traffic demanded by the CP’s subscribers, translating into substantial savings in transit and transport bandwidth. Indeed, if the content requested is already in the local cache and is considered “fresh”, then it will be served directly to the end-user, resulting in improved user experience and bandwidth saving.
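The freshness check mentioned above can be sketched in a few lines. This is a simplified illustration of the Cache-Control max-age rule from standard HTTP caching (RFC 7234), not a complete implementation; a real cache also honors directives such as no-store, s-maxage, and heuristic freshness:

```python
def is_fresh(response_age_seconds: int, cache_control: str) -> bool:
    """Return True if a cached response is still fresh according to its
    Cache-Control: max-age directive (a simplified RFC 7234 check)."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return response_age_seconds < max_age
    return False  # no max-age present: treat as not fresh in this sketch

# A response cached 120 seconds ago with max-age=3600 is still fresh,
# so the ISP cache can serve it locally without touching the origin:
print(is_fresh(120, "public, max-age=3600"))   # True
print(is_fresh(4000, "public, max-age=3600"))  # False
```

When `is_fresh` returns True, the cache answers the request itself, which is exactly where the transit and transport bandwidth savings come from.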

To effectively localize traffic, the CP asks the ISPs to deploy inside its network a certain number of the CP’s proprietary servers that provide caching functionalities together with other optimizations. The ISPs work closely with the CP to carefully map where to deploy these servers in the network to ensure a well-targeted deployment which substantially enhances performance.

Caches allow an HTTP origin server to offload the responsibility for delivering certain content. The cache hit rates vary depending on the number of end-users served by the cache, the unique consumption patterns of end-users, and the size and type of the cache. It’s been reported that “between 70-90% of CP cacheable traffic can be served from the deployed CP’s cache infrastructure” [1].

However, existing solutions for content distribution have a major drawback: an origin is required to yield control over its content to the CDNs, allowing them to see and modify the content they distribute. In some cases, expediency can dictate that the CDN be given control over the entire origin. As a result, over the past three years the larger CPs have built their own CDNs to overcome this problem. In doing so, they have caused a proliferation of third-party proprietary cache boxes within ISP networks. This proliferation has become so extensive that ISPs' spending on these third-party boxes has far exceeded their savings in transit and transport bandwidth.

As an active member of the Internet Engineering Task Force (IETF), a large open international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and its smooth operation, Ericsson is recommending a solution to the proprietary nature of caches in ISP networks that also ensures the privacy and protection of the content stored there.

Ericsson, together with other companies, is proposing to the IETF a new architecture for distributing content via a third-party CDN with a stronger level of security and privacy for the end user while reducing the security privileges of the CDN compared with current practice.   

The proposed architecture allows an origin server to delegate the responsibility for delivering the payload of an HTTP response (the content item) to a third party in a way that makes the third party unable to modify the content. In this solution, the content is also encrypted, which prevents the third party from "seeing" or learning anything about it.

An origin server can use this proposed architecture to take advantage of CDNs where concerns about security might otherwise have prevented their use in the past. This is also relevant for types of content that were previously deemed too sensitive for third-party distribution.

The Ericsson proposed architecture consists of three basic elements:

  1. A delegation component
  2. Integrity attributes
  3. Confidentiality protection

Content Delegation

The out-of-band content encoding [2] provides the basis for delegation of content distribution.

  • A request is made to the origin server that includes the value "out-of-band" in the Accept-Encoding HTTP header field, indicating a willingness to use the secure content delegation mechanism. A new BC header field (defined in [5]) indicates that the client is connected to a proxy cache that it is willing to use for out-of-band requests.
  • In place of the complete response, the origin provides only the response header fields and an out-of-band content encoding.
  • The server populates the proxy cache or CDN with the resource to be served, encrypted and integrity protected.
  • The out-of-band content encoding directs the client to retrieve the content from the cache or CDN. The URL used to acquire a resource from the CDN is unrelated to the URL of the original resource. This allows an origin server to hide from the CDN provider the relationship between content in the CDN and the original resource that was requested by the client.
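The exchange in the steps above can be sketched as follows. This is a toy illustration: the Accept-Encoding value and the BC header come from [2] and [5], but the host names, the secondary URL, and the dictionary shapes are hypothetical stand-ins, not the actual wire format:

```python
def client_request() -> dict:
    """The client's request to the origin (illustrative, not wire format)."""
    return {
        "method": "GET",
        "url": "https://origin.example/video/clip.mp4",
        "headers": {
            "Accept-Encoding": "out-of-band",   # willing to use delegation [2]
            "BC": "proxy.isp.example",          # blind cache the client can reach [5]
        },
    }

def origin_response(request: dict) -> dict:
    """Instead of the payload, the origin returns header fields plus an
    out-of-band body pointing at an unrelated URL on the cache/CDN."""
    assert "out-of-band" in request["headers"]["Accept-Encoding"]
    return {
        "status": 200,
        "headers": {"Content-Encoding": "out-of-band"},
        # The secondary URL deliberately reveals nothing about the original
        # resource, so the CDN cannot link the two.
        "body": {"secondary_url": "https://proxy.isp.example/c/7f3a9b"},
    }

resp = origin_response(client_request())
print(resp["body"]["secondary_url"])  # the client fetches the content here
```

Note that the opaque path in the secondary URL is what hides the relationship between the cached object and the original resource from the CDN.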

Content Integrity

Content integrity is crucial to ensuring that content cannot be improperly modified by the CDN.

Several options are available for authenticating content provided by the CDN [3]. Content that requires only integrity protection can be safely distributed by a third-party CDN using this solution.
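One common way to authenticate delegated content is a cryptographic digest: the origin sends an integrity attribute alongside the out-of-band response, and the client recomputes it over whatever the CDN delivered. The sketch below uses a base64-encoded SHA-256 digest as a plausible shape for such an attribute; the exact attributes and encodings defined in [3] may differ:

```python
import base64
import hashlib

def digest(content: bytes) -> str:
    """Integrity attribute for a content item: base64(SHA-256(content)).
    Illustrative; the concrete format in [3] may differ."""
    return base64.b64encode(hashlib.sha256(content).digest()).decode()

def verify(content_from_cdn: bytes, expected: str) -> bool:
    """The client recomputes the digest over what the CDN delivered and
    rejects the response if it does not match the origin's value."""
    return digest(content_from_cdn) == expected

original = b"segment-0001"
attr = digest(original)          # sent by the origin, out of the CDN's reach
print(verify(original, attr))    # True: payload untampered
print(verify(b"tampered", attr)) # False: CDN modified the payload
```

Because the attribute travels on the (authenticated) origin connection rather than through the CDN, the CDN cannot forge a matching digest for modified content.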

Confidentiality Protection

Confidentiality protection limits the ability of the delegated server to learn what the content contains.

Confidentiality for content is provided by applying an encryption content encoding [4] to the content before that content is provided to a CDN. It is worth highlighting that the proposed solution requires content placed on the CDN to be protected by access controls on the origin server; this prevents the CDN from discovering the real resources at the origin by posing as a client and querying the origin.
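The effect of the encryption content encoding can be illustrated with a toy stream cipher. To keep this sketch dependency-free it derives a keystream from SHA-256 and XORs it with the plaintext; the real encoding defined in [4] uses AES-128-GCM, an authenticated cipher, so do not use this construction for actual protection:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic keystream from the key (illustration only; the real
    content encoding in [4] uses AES-128-GCM)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

key = b"shared-between-origin-and-client"  # never given to the CDN
content = b"premium video segment"
stored_on_cdn = encrypt(key, content)      # the CDN only ever sees this
assert stored_on_cdn != content
print(decrypt(key, stored_on_cdn))         # b'premium video segment'
```

The key is conveyed from origin to client over the origin connection, so the CDN holds only opaque ciphertext and can neither read nor meaningfully substitute the content.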

[1] Google Global Cache (GGC), checked 2015-04-29

[2] J. Reschke, S. Loreto, "'Out-Of-Band' Content Coding for HTTP"

[3] M. Thomson, G. Eriksson, C. Holmberg, "An Architecture for Secure Content Delegation using HTTP"

[4] M. Thomson, "Encrypted Content-Encoding for HTTP"

[5] M. Thomson, G. Eriksson, C. Holmberg, "Caching Secure HTTP Content using Blind Caches"
