OTT’s Unique Storage Requirements

Central storage systems for OTT are unique in the world of media storage. They combine a set of requirements which no other media use case must handle.

Most OTT services combine live and on-demand content, especially broadcaster OTT services that incorporate linear channels, live programming, and VOD libraries. We start by considering the different storage requirements of Live vs. VOD.

Live-Only

Storage for Live-only is all about small, high-performance storage environments for both central storage and CDN storage. In a live-only environment, storage is largely the responsibility of the Edge Cache in the CDN, which enables low-latency time-shifted viewing (i.e., live-pause, rewind, restart and lookback). Most live-only OTT services have a small number of channels or streams and therefore a relatively small amount of content to cache, which results in a relatively small storage requirement.

To illustrate:

CDN Storage: If an OTT Operator gives its customers a 4-hour time-shifted viewing window on a single ABR stream (let’s assume 7 bit-rate profiles, totalling 22Mbps), then it will require 40GB of storage (22Mbps / 8 bits in a byte * 60 seconds * 60 minutes * 4 hours) in a single Cache location. This grows to 120GB when considering the unique files required for the 3 primary package types of HLS, DASH, and MSS. That is not a lot of storage for such an important time-shifted user experience. If we extend the lookback window to a typical 7 days, then this grows to 1.7TB per package type. Still small, given a single HDD now holds up to 16TB. Even a month of lookback is only 7TB of storage per package type. The CDN needs to distribute this content across all Edge Cache locations. So, if you have 20 Edge Caches (which would make you a large OTT Operator), your business model should justify the 140TB of distributed storage. And for every additional channel with the same bit-rates and time-shifted offering, you would add the same amount of Edge Cache storage.
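To make the arithmetic above easy to re-run for different channels, the short Python sketch below reproduces the same calculation. The 22Mbps ladder, the window lengths and the 3 package types are the example values from this article; the function itself is purely illustrative, not part of any particular product.

```python
# Illustrative sizing helper for the time-shift cache figures quoted above.

def timeshift_cache_gb(total_abr_mbps: float, window_hours: float, package_types: int = 1) -> float:
    """Edge Cache storage in GB for one channel's time-shift window."""
    bytes_per_second = total_abr_mbps / 8 * 1_000_000        # Mbps -> bytes per second
    total_bytes = bytes_per_second * window_hours * 3600      # whole lookback window
    return total_bytes * package_types / 1_000_000_000        # bytes -> GB

if __name__ == "__main__":
    print(timeshift_cache_gb(22, 4))                 # ~40 GB: 4-hour window, one package type
    print(timeshift_cache_gb(22, 4, 3))              # ~119 GB: HLS + DASH + MSS
    print(timeshift_cache_gb(22, 24 * 7) / 1000)     # ~1.7 TB: 7-day lookback, one package type
    print(timeshift_cache_gb(22, 24 * 30) / 1000)    # ~7 TB: 30-day lookback, one package type
```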

Central Storage: The central storage records each live stream so it can deliver any video segment to any CDN Edge Cache when required. Today, many OTT streams are encoded and packaged before being recorded and stored, which means the content is multiplied by the number of package types. Some solutions offer the ability to encode, then record and store, before packaging and streaming. This can significantly reduce the amount of central storage required, depending on the number of package types offered. In the 22Mbps per channel scenario, the central storage would either be 7TB (storing content packaged in 3 formats) or 2.33TB (storing un-packaged content) per channel. This choice makes a real difference for larger multi-channel / multi-stream operators.
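A quick check of the packaged vs. un-packaged figures above, assuming the 3 package types already mentioned:

```python
# With 3 package types, un-packaged central storage is one third of the packaged figure.
PACKAGE_TYPES = 3            # HLS, DASH, MSS
packaged_tb = 7.0            # the article's per-channel packaged figure
unpackaged_tb = packaged_tb / PACKAGE_TYPES
print(f"Un-packaged: {unpackaged_tb:.2f} TB per channel")   # ~2.33 TB
```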

VOD-Only

Storage for VOD-only is all about large, scalable, secure, and cost-effective central storage plus intelligent caching of the most popular content within the CDN. The central storage holds a large content library. Leading OTT Operators store multiple petabytes, depending on resolutions, formats, and whether packaged or un-packaged files are stored. Storing un-packaged files is a big benefit for larger libraries. The CDN then stores the content as it is streamed on-demand to the consumer, so the next consumer request for exactly the same content can be served from the CDN instead of calling back to the central storage. In VOD use cases the storage must ingest a large number of transcoded files (JIT-Transcoding is not recommended because the cost of the processing power needed to meet customer latency expectations far exceeds the relatively lower cost of storage), store them securely for the long term, and stream hundreds or thousands of unique files simultaneously to the audience. Central Storage ingress and egress are therefore subjected to a very different workload compared to the Live use case.

To illustrate:

Small VOD library: If a VOD library has 5,000 titles with an average length of 60 minutes, then the ABR-transcoded content (let’s assume 22Mbps again) will total 50TB. If packaged into 3 formats before storing, this becomes 150TB. If there are 20 Edge Caches and 10% of the titles are cached for subsequent low latency delivery, then each Edge Cache needs 15TB, which means 300TB (15TB * 20 Caches) of total cache storage.

Larger VOD libraries can reach many times greater than 5,000 titles. A library with 100,000 titles (i.e., more typical of broadcaster VOD) and an average asset length of 60 minutes would require 3PB. Across 20 Edge Caches, using the 10% figure, this would mean 6PB of cache storage (300TB * 20 Caches). To manage costs, a leaner approach is typically used, which requires either extensive intermediate caching or greater utilization of bandwidth between the central storage and the edge caches. But the magnitude of the investment is clear. It is worth highlighting that these figures do not take into account storing multiple copies for redundancy or the typical 20% capacity overhead required by the storage software.
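Both VOD examples, plus the redundancy and overhead factors the headline figures exclude, can be sketched the same way. The 22Mbps ladder, 3 package types, 60-minute average length, 20 Edge Caches and 10% cache ratio come from the article; the two-copy redundancy at the end is an assumption added purely to illustrate the excluded factors.

```python
# Illustrative VOD library and Edge Cache sizing for the examples above.

def vod_library_tb(titles: int, avg_minutes: float, total_abr_mbps: float,
                   package_types: int = 3) -> float:
    """Packaged library size in TB."""
    tb_per_title = total_abr_mbps / 8 * avg_minutes * 60 / 1_000_000   # MB -> TB per title
    return titles * tb_per_title * package_types

def edge_cache_total_tb(library_tb: float, cache_ratio: float, edge_caches: int) -> float:
    """Total Edge Cache storage if every cache holds cache_ratio of the library."""
    return library_tb * cache_ratio * edge_caches

small = vod_library_tb(5_000, 60, 22)         # ~150 TB packaged
large = vod_library_tb(100_000, 60, 22)       # ~3,000 TB (~3 PB) packaged
print(edge_cache_total_tb(small, 0.10, 20))   # ~300 TB of cache storage
print(edge_cache_total_tb(large, 0.10, 20))   # ~6,000 TB (~6 PB) of cache storage

# The headline figures exclude resilience; e.g. two stored copies plus ~20% software overhead:
print(large * 2 * 1.2)                        # ~7,100 TB of installed central capacity
```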

Figure 1: Central Storage & Cache Storage work together in OTT service delivery.


Storage For The OTT Use Case

Most OTT services with combined Live and VOD content observe about 10-20% of their consumption on live content and 80-90% on VOD content. As live streaming grows, some OTT services are reporting 30% Live and 70% VOD. These types of OTT services therefore focus on a strong VOD library (which generally means a large library to offer compelling consumer choice) and a strong live content delivery capability (which generally means sufficient Cache storage to provide customer-satisfying time-shifted viewing options).

The combination of fast file ingest, live recording, time-shifted viewing features, and simultaneous streaming of live streams and potentially thousands of unique VOD streams, plus long-term archiving, is a unique workload for OTT storage to manage. Traditional editing, playout and archive operations do not combine these requirements. In essence, an OTT central storage system must be very large, very fast, and cost-effective. It’s a tough ask.

Building Object Storage For OTT

In recent years, there has been a visible move by media businesses towards using disk-based, software-defined object storage for video use cases. Object storage is built for unstructured data, and particularly for larger datasets. At its core, object storage provides a highly scalable, highly resilient platform for long-term storage. In the Media industry it is increasingly common to see new Archive storage environments select object storage for its strong data protection, scale-out and cloud-integration capabilities; AWS S3, Ceph and proprietary object storage technologies are all expanding their presence. But object storage is not typically associated with high-performance storage.
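As a rough illustration of what object storage means for video in practice, the sketch below writes a packaged segment into an S3-compatible object store and reads it back, the way an Origin might serve a cache miss. It uses boto3 against a hypothetical endpoint; the bucket, key and file names are invented for the example.

```python
# Minimal sketch: store and retrieve a packaged segment on an S3-compatible object store
# (AWS S3, Ceph via its S3 gateway, etc.). Endpoint, bucket and keys are illustrative.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",   # hypothetical on-prem or cloud endpoint
)

BUCKET = "ott-central-storage"   # hypothetical bucket name

# Ingest: store one packaged HLS segment as an object.
s3.upload_file(
    Filename="channel1/hls/segment_000123.ts",
    Bucket=BUCKET,
    Key="live/channel1/hls/segment_000123.ts",
)

# Origin fill: a CDN Edge Cache miss pulls the same object back on demand.
s3.download_file(
    Bucket=BUCKET,
    Key="live/channel1/hls/segment_000123.ts",
    Filename="/var/cache/origin/segment_000123.ts",
)
```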

However, there is a part of the Media industry that has dealt with this “large + fast” challenge for years already. MVPDs (Multichannel Video Programming Distributors) have been streaming thousands of VOD files and hundreds of live channels to millions of consumers for almost two decades, since the introduction of VOD in the cable TV and IPTV industry. MVPDs built their own private CDNs and deployed managed set-top boxes with on-board storage. During the first half of the 2010s, MVPD requirements for cloudDVR or networkPVR started to grow. Their aim was to remove expensive hard drives from set-top boxes to achieve a significant cost saving and concurrently remove a leading cause of customer dissatisfaction (as anyone who has ever lost or replaced a set-top box holding tens of hours of recorded content can attest). Instead, they wanted to leverage their network connectivity and content caching to centralize content storage and streamline their customer premises equipment infrastructure.

MVPDs asked for something new: multi-petabyte archive-capable storage, including cloudDVR capacity, that could record every live stream, ingest thousands of new files, and deliver time-shifted TV and thousands of VOD files simultaneously to millions of individual consumers. In some countries, such as the USA and Germany, MVPDs needed enough cloudDVR capacity to store unique copies of every consumer recording to comply with copyright law. While this requirement adds many petabytes of capacity to a central storage system, the cost saving versus putting drives in millions of set-top boxes still makes it the right investment.
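A hedged back-of-the-envelope calculation shows why unique-copy cloudDVR pushes central storage into the petabyte range. The subscriber count, recorded hours and recording bit-rate below are illustrative assumptions, not figures from this article.

```python
# Illustrative unique-copy cloudDVR capacity estimate (all inputs are assumptions).

def cloud_dvr_pb(subscribers: int, avg_recorded_hours: float, record_mbps: float) -> float:
    """Total capacity in PB when every consumer recording is stored as a unique copy."""
    tb_per_subscriber = record_mbps / 8 * avg_recorded_hours * 3600 / 1_000_000  # MB -> TB
    return subscribers * tb_per_subscriber / 1000                                 # TB -> PB

# e.g. 500,000 subscribers, 20 hours recorded each, 8Mbps per recording:
print(cloud_dvr_pb(500_000, 20, 8))   # ~36 PB of unique-copy recordings
```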

Figure 2: OTT Central Storage performs many storage tasks simultaneously and at scale.


Leading storage businesses responded with software-defined hybrid object storage. It was the only storage technology with a chance of meeting the requirements for size and cost. The biggest challenge was to make it fast, but huge steps have been made with video-centric protocols and algorithms, plus tight integration with Origin platforms for managing ingest, recording and streaming. The leading solutions have focused on automatic tiering between high-performance flash storage and HDD storage, so that a cost-effective multi-petabyte system can stream any file on-demand. That tiering is becoming more finely grained as storage hardware technology evolves, and the latest innovations involve usage-based data placement and geographically dispersed data tiers. The leading vendors remain focused on the storage software that manages the data across the drive types, storing it safely and enabling low-latency on-demand streaming.
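The placement logic behind such tiering can be illustrated with a simple, vendor-neutral sketch: recently or frequently requested objects sit on flash for low-latency streaming, everything else sits on HDD. The thresholds and tier names are assumptions for the example only, not any vendor's actual algorithm.

```python
# Illustrative usage-based tiering decision between a flash tier and an HDD tier.
import time

FLASH_TIER, HDD_TIER = "flash", "hdd"
HOT_WINDOW_SECONDS = 4 * 3600    # objects requested within this window stay on flash
HOT_REQUEST_THRESHOLD = 25       # objects requested this often recently stay on flash

def choose_tier(last_access_ts: float, recent_requests: int) -> str:
    """Place hot objects on flash for low-latency streaming; cold objects on HDD."""
    recently_used = (time.time() - last_access_ts) < HOT_WINDOW_SECONDS
    frequently_used = recent_requests >= HOT_REQUEST_THRESHOLD
    return FLASH_TIER if (recently_used or frequently_used) else HDD_TIER

# A segment last streamed 10 minutes ago stays on flash:
print(choose_tier(time.time() - 600, recent_requests=3))           # -> "flash"
# A library title untouched for a week with little demand sits on HDD:
print(choose_tier(time.time() - 7 * 86400, recent_requests=1))     # -> "hdd"
```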

OTT Operators have very similar requirements. While most are not replacing set-top box drives with a cloudDVR offering, nor streaming hundreds of live channels, nor delivering VOD assets from almost every single content provider in a geographical market, they are placing the same pressures on the central storage. They are ingesting, recording, streaming and protecting their growing content libraries. And as Broadcasters deploy and expand their OTT services, the same mentality they have thrived on for decades persists – as household-name rightsholders delivering live content, they cannot fail. Black to air or rebuffering isn’t an option.

Central storage is a key contributor to achieving this goal. It is the protector of the content library and of the network-connected Live and VOD viewing experience. If content is not in the central storage when a CDN needs it, then the customer won’t see it. At that point the OTT pull system has failed. As this is not an option for OTT Operators, the central storage needs to handle the pressure and do it cost-effectively. It’s what hybrid object storage is built for.
