Hybrid CDN - Part 1

Broadcasting video and audio has developed rapidly in recent years, from send-and-forget transmission to the full-duplex OTT and VOD models. The inherent bi-directional capabilities of IP networks have opened up a wealth of new interactive viewing possibilities.



Even though the reliability of the internet can occasionally be challenging, discerning viewers make no allowances for the variability of internet delivery and demand the same quality of service and experience they get from traditional broadcasting.

The internet was never designed to carry the amount of data a modern streaming service offers. Video and audio are relentless in their thirst for data capacity, especially with mobile device delivery where multiple streams are offered within a package to take into consideration the differing environmental delivery conditions. Capacity requirements soon ramp up as more services are offered by broadcasters, and that’s before we start looking at the interactive nature of OTT.

Hyper Text Transfer Protocol (HTTP) is the fundamental streaming mechanism used in OTT delivery: it is the common format for portable device players and internet browsers, and it is ubiquitous across the internet for all types of data exchange. Consequently, the whole OTT delivery chain relies on the internet carrying HTTP reliably.

Delivering streamed video and audio to viewers is more than just taking the video and audio data and sending it across a network. Multicasting is constantly being researched and developed but has yet to find its way onto the public internet. Multicasting can be provided in private networks and is regularly used within broadcast infrastructures; however, technical limitations restrict its use on the public internet, so instead each program stream is delivered as a separate unicast copy to every user.

Each viewer’s device requires a direct logical connection to the server providing the program stream. As the number of viewers increases, so does the demand on these servers. A centralized system is inefficient and often impractical; instead, a distributed server mechanism provides the optimum solution. This results in “edge-servers” being deployed as viewer demand increases.

The edge-server forms part of a complex interaction between the broadcaster, the internet and the viewer. More infrastructure is required, such as transcoders and playout servers, especially as we start to look at the differences between VOD and OTT, and it is this interaction that lays the groundwork for the Content Delivery Network (CDN).

The backbone of the internet is provided by a collection of ISPs and intermediaries. They provide infrastructure to help distribute IP datagrams, especially for HTTP systems, but they tend to leave the tuning for streaming to others. A CDN is one method of tuning the internet to deliver streamed programming, and generally relies on adding storage, packaging and edge servers to the network to facilitate better VOD and live OTT delivery to viewers.

Both private and public CDNs are available, each with its own advantages and disadvantages. Public CDNs are a generalized solution, but private CDNs deliver specific services to broadcasters, including increased granularity of monitoring and tighter tolerances for data delivery.

The internet delivery mechanisms for VOD and OTT have ballooned enormously over the past few years, and it can be difficult to keep up with the technological advances and why we use them. Understanding the “why” is often the starting point to understanding how complex systems work, who uses them and when.

These articles introduce the concept of CDNs and explain why we need them. They then go on to discuss both public and private CDNs, and how a hybrid approach adds value for both broadcasters and viewers.


Content Delivery Networks (CDNs) are gaining popularity as broadcasters move to the OTT method of distribution. But what are CDNs? Who operates them? And how does the hybrid model benefit us? In this Essential Guide, we uncover the challenge hybrid CDNs solve and the practical applications of making them work.

Three fundamental concepts change as broadcasters adopt the OTT method of distribution: we no longer “own” or have complete control of the distribution medium, the network is a one-to-many mesh configuration, and the data path is bi-directional.

Broadcasters transmitting television programs can be sure that when a signal leaves their transmitter, it will reach the viewer. Unless somebody has erected a skyscraper between the viewer and the transmitter, the television pictures and sound will be reliably received.

But to make OTT systems work reliably and efficiently, we must be much more aware of the deeper underlying network capabilities of all the systems between the broadcaster and the viewer.

TCP and Latency

By design, IP is a non-guaranteed delivery mechanism. That is, when we send an IP datagram from a server into the network, we can only say with some probability that it will be received by the viewer’s device. To provide a level of guarantee and be sure the viewer sees the broadcast, we must add TCP (Transmission Control Protocol). However, TCP adds latency, an inevitable consequence of its flow control and retransmission methods.

OTT has some further challenges, as broadcasters no longer send just one format for each service. Three viewing formats dominate the OTT market: Android, Apple and Microsoft. These are designed to optimize the viewing experience for mobile devices, where WiFi network conditions may change rapidly. This isn’t just due to location; congestion can also be caused by more mobile users entering a location, reducing the overall availability of data.

ABR (Adaptive Bit Rate) distribution overcomes congestion to a large extent. To achieve this, a service is encoded as multiple streams with varying data rates, and for an SD or HD service this may consist of 4-6 streams. As well as providing differing data rates to meet the changing conditions of the network, this bouquet of streams delivers different screen sizes and even frame rates.
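
As an illustration, the short Python sketch below models a hypothetical six-step ABR ladder for an HD service and the simple selection logic a player might apply; the bit rates, resolutions and headroom figure are illustrative assumptions, not a published encoding profile.

# Hypothetical ABR ladder for one HD service (all values are illustrative only).
ABR_LADDER = [
    {"name": "1080p", "bitrate_kbps": 5800, "resolution": (1920, 1080)},
    {"name": "720p",  "bitrate_kbps": 3500, "resolution": (1280, 720)},
    {"name": "540p",  "bitrate_kbps": 2000, "resolution": (960, 540)},
    {"name": "432p",  "bitrate_kbps": 1200, "resolution": (768, 432)},
    {"name": "360p",  "bitrate_kbps": 700,  "resolution": (640, 360)},
    {"name": "270p",  "bitrate_kbps": 400,  "resolution": (480, 270)},
]

def select_rendition(measured_throughput_kbps, headroom=0.8):
    # Pick the highest rendition that fits inside the measured throughput,
    # keeping some headroom for sudden changes in network conditions.
    budget = measured_throughput_kbps * headroom
    for rendition in ABR_LADDER:          # ladder is ordered highest bit rate first
        if rendition["bitrate_kbps"] <= budget:
            return rendition
    return ABR_LADDER[-1]                 # fall back to the lowest rendition

print(select_rendition(2500)["name"])     # a congested link of ~2.5 Mbps drops to "540p"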

Increasing Streams

Vendor-specific formats further increase the number of streams, as a complete bouquet is needed for each of the three main mobile device viewing vendors. Manifest files are also needed and must be reliably delivered to tell the viewing device which streams are available and where they can be found.

This all soon mounts up: a service that started as just one stream can grow to 18 streams (six ABR renditions packaged for each of the three vendor formats), plus the associated manifest and description files. From a broadcasting perspective this can be daunting; the challenge of providing a one-to-many distribution system over the internet is difficult enough, but when we increase the number of streams to 18 per service, life becomes incredibly challenging.
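
A quick sketch of that arithmetic, assuming the six-step ladder above and mapping the three vendor ecosystems to their usual packaging formats (an assumption made purely for illustration):

# Illustrative only: count the streams one service needs once an assumed
# six-step ABR ladder is packaged separately for each vendor format.
ladder_kbps = [5800, 3500, 2000, 1200, 700, 400]   # assumed bit rates from the sketch above
packaging_formats = ["HLS", "DASH", "Smooth"]      # roughly the Apple, Android and Microsoft ecosystems

streams = [(fmt, rate) for fmt in packaging_formats for rate in ladder_kbps]
manifests = len(packaging_formats)                 # at least one master manifest per packaging format

print(len(streams))   # -> 18 media streams
print(manifests)      # -> 3 manifests describing where those streams can be found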

The broadcaster also has to divide the streams into smaller pieces of data to improve distribution throughout the network. Referred to as chunking, or segment sizing, the stream is separated into segments because the design works on an HTTP request-and-retry principle. DASH typically requires three four-second segments (adding 12 seconds of latency) and HLS three ten-second segments (adding 30 seconds of latency) for the receiver to achieve lock and synchronization. This results in a trade-off: the CPU prefers to process larger segments, but to achieve ultra-low latency the segment size has to be dramatically reduced so that video players can lock to the stream quickly.
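
A minimal sketch of that latency trade-off, assuming the player must buffer three segments before playback can lock, as the figures above suggest:

def startup_latency_s(segment_duration_s, segments_buffered=3):
    # Latency added while the player buffers enough segments to lock to the stream.
    return segment_duration_s * segments_buffered

print(startup_latency_s(4))    # DASH-style 4 s segments  -> 12 s
print(startup_latency_s(10))   # HLS-style 10 s segments  -> 30 s
print(startup_latency_s(1))    # 1 s segments -> 3 s, at the cost of far more HTTP requests per minute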

In terms of operation, topology, and technology, OTT distribution is worlds apart from traditional RF broadcast and one of the major challenges to be overcome is dealing with who owns the network.

OTT delivery fundamentally differs from traditional RF broadcast in that the viewer’s device requests video and audio in the form of data, and the broadcast server responds by sending the requested information. Segmentation and packaging of the stream help achieve this, as the mobile device requests the next segment in the sequence, keeping the device’s memory buffer full for smooth video playback and distortion-free audio. The packaging and storage processes are split between the Origin (where ingest, recording, storage, packaging and encryption take place) and the cache servers (either Intermediate or Edge Cache). VOD content can therefore be cached and distributed more efficiently as multiple files streamed from the edge, rather than streamed through the whole network, while live programs are held in the Edge Cache in fast storage or memory so that each bit-rate of the live stream can be served should multiple requests arrive from viewers.
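
The sketch below shows, in very simplified form, how an edge cache might satisfy segment requests: the first viewer’s request is fetched from the Origin and held, and every later request for the same bit-rate and segment is served from the edge. The server name, URL layout and function names are hypothetical.

import urllib.request

ORIGIN = "https://origin.example.com"   # hypothetical origin server
cache = {}                              # in-memory segment cache held at the edge

def get_segment(service, bitrate_kbps, segment_number):
    # Serve from the edge cache if we already hold this segment; otherwise
    # fetch it once from the Origin so only the first request crosses the network.
    key = (service, bitrate_kbps, segment_number)
    if key not in cache:
        url = f"{ORIGIN}/{service}/{bitrate_kbps}/segment_{segment_number}.ts"
        with urllib.request.urlopen(url) as response:
            cache[key] = response.read()
    return cache[key]

# A viewer's player would request segments in sequence, e.g.:
# get_segment("news-channel", 2000, 1017)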
Telcos have been providing managed networks to television stations for nearly as long as there have been television stations. Analogue video, audio and SDI distribution services have all been available to us, but these bespoke services attracted a hefty price tag. The beauty of IP distribution, specifically OTT over the internet, is that the distribution costs to broadcasters are orders of magnitude lower. But the price we pay for spending less is a reduction in control.

Internet Limitations

When broadcasters started experimenting with OTT, they soon realized that the internet could not withstand the amount of traffic being streamed across it. Although the public internet may appear to be “free”, at some point somebody, somewhere, must provide the infrastructure. To a large extent this fell to the major Internet Service Providers (ISPs).

To understand public CDNs it’s worth considering the ISP business model. Essentially, ISPs make their money by providing a bidirectional data pipe to our homes. They also provide bolt-on services such as television and film channels, but the bulk of their revenue comes from providing the data service to homes. There is really no incentive for them to boost the data connection between two POPs (Points of Presence), only for the public CDNs to flood it again with even more streaming traffic.

Furthermore, a CDN builds on a managed network, whether private or public. The CDN fundamentally consists of the origin server, storage and edge-servers; these components, combined with the network, make up the CDN. It’s entirely possible for a CDN provider to work with a network provider without necessarily owning the network.

A public CDN shares resources and doesn’t guarantee bandwidth or latency. It may improve distribution compared to normal internet delivery, but the components that make up the CDN are shared amongst several users; in the case of television, this would be several broadcasters.

Dedicated Premiums

When broadcasters leased SDI circuits from Telcos, they paid a high premium for guaranteed bandwidth and latency. Telcos were able to deliver on this as they costed exclusivity into the service. However, the business model for public CDN providers does not guarantee exclusivity; in fact, they actively promote sharing of the data circuit across many clients. By taking advantage of the statistical distribution of traffic across those clients, public CDN providers can only quote an average data rate and latency.

But the devil is in the detail, and averages often mask underlying peaks. For example, the latency may average 10ms but peak at 100ms, or even more, and this could have disastrous consequences for the reliability of the service. In the new connected world, broadcasters are fighting growing competition from other service providers. They know all too well how easy it is for a viewer to switch to another channel should their service fail.
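
A simple illustration of why the average can be misleading, using made-up sample values chosen to match the figures above:

# Synthetic latency samples: 99 healthy measurements and one congested peak.
samples_ms = [9] * 99 + [100]

average_ms = sum(samples_ms) / len(samples_ms)   # ~9.9 ms: looks comfortably within spec
peak_ms = max(samples_ms)                        # 100 ms: the value the viewer actually notices

print(f"average {average_ms:.1f} ms, peak {peak_ms} ms")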

Due to the complexity of the internet, it’s often difficult for broadcasters to determine where a problem has occurred and who is responsible for it. This is not necessarily a fault of the public CDN or ISP providers; it is simply a consequence of how the system works. One of the benefits of OTT distribution is that broadcasters are taking advantage of a system that can be shared. But with sharing comes compromise, and compromise is exactly what broadcasters accept when they use a public CDN.

Although viewers have come to expect the flexibility of watching their favourite programs on the device of their choice, they also expect the same quality of service traditional RF broadcast mediums have always provided. From the viewer’s perspective, they don’t really care how the signal reaches their mobile device; they just want to watch what they want, when they want, and how they want. Viewers’ expectations for live events are even higher, as nobody wants to miss the winning goal in a premier game.
