The Streaming Tsunami: Part 8 - Finding New VOD Efficiencies

As demand continues to grow, the eternal search for ever greater efficiency in the processing, publishing, and storage of ready-to-stream VOD files remains a core development goal.

The phrase “media factory” has been around in the media industry for well over a decade, springing to life during the early days of SVOD services. The concept of a media factory is about producing VOD files, on time and to specification, for delivery to a long list of publishing platforms. With the growth of VOD consumption and the shift in the underlying technologies used for VOD preparation and delivery, each broadcaster’s VOD operations group focuses on continuously optimizing the efficiency of producing video files for distribution.

While costs are a driver of VOD efficiency decisions, media businesses are also finding new ways to monetize these valuable assets. FAST is a perfect example. With FAST channels, VOD assets are stitched together to produce themed “channels”, and live video can be inserted to expand the service. The output looks like linear TV with a mix of pre-recorded and live content, but FAST is generally more tightly themed and more granular in its audience segmentation. In addition, because FAST is delivered over the top, there is more flexibility from a technology and cost perspective.

The OTT delivery model for this VOD library creates an agility benefit. Media businesses can quickly and easily identify how popular different types of content really are – from a single VOD asset to a FAST channel. Business decisions can be made faster. To use a term from the manufacturing industry, the media business’s valuable content inventory can be turned more quickly and more easily in the OTT streaming environment. More and faster inventory turns are generally a good thing.

This dynamic becomes more important when VOD dominates a broadcaster’s streaming consumption. About 70% of the total content delivered by big broadcasters on their streaming platforms is VOD. Live and scheduled TV programs are still important, drawing in big audiences for short periods of time and driving very significant revenues for the streamer. VOD and live video delivery therefore have different dynamics to manage, and on the VOD side, improving the efficiency of the media factory is always a high priority.

So how do we make VOD more efficient in a world of OTT, FAST, and Cloud? And what does it mean for a broadcaster to deliver through a pay-TV aggregation platform versus the broadcaster’s own direct-to-consumer streaming service?

To Cloud Or Not To Cloud, That Is The Question

Today a large and growing number of broadcasters are working in cloud platforms for their video creation, storage, editing, compliance, processing, and origination. The cloud has provided much-needed operational flexibility and accessibility to content. Broadcasters that moved to the cloud early generally focused on using it for peaky, unpredictable workloads as well as remote collaboration. From the beginning of cloud in the media industry, VOD file processing and delivery was a good use of cloud resources because it allowed files to be delivered to very urgent deadlines without being constrained by fixed on-premise hardware capacity. But media factories can fall into the same trap as a physical factory that struggles to meet all its deadlines cost-efficiently because of a tendency to “expedite” or “prioritize” specific orders, which upsets the workflow and decreases the overall efficiency of the factory. Expediting the production and delivery of VOD files to meet very tight deadlines, while possible with elastic cloud-based resources, generally inflates costs.

At ITVX in the UK, Mark Ison, Director of Engineering, is looking at efficiency optimizations for VOD workflows as a top priority in the year ahead. “VOD workload is largely deterministic during a year, with a few spikes for major new program releases,” states Mark. “Yet the preparation and publication of VOD assets is often being managed on infrastructure that was intended for non-deterministic workloads that require extra technical and operational flexibility. The processes of transcode, package, publish, and store are the focus areas of our attention in our drive for VOD efficiency improvements. We must decide if these processes are most efficiently and cost-effectively run in the cloud, or in another environment.”

Gravity & Cloud Lock-in

A common reason for the continued use of cloud resources for VOD processing workloads is the center of gravity of media: in other words, media should generally be processed where it is stored. A VOD streamer continuously decides between storing content in the cloud or in a CDN, and shifting content between the two for the best cost efficiency is a never-ending game.
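To make that trade-off concrete, here is a deliberately crude sketch in Python of the kind of per-title comparison a streamer might run. All prices, file sizes, and fetch counts are hypothetical assumptions for illustration, not real vendor rates.

```python
# Deliberately crude, per-title placement comparison.
# All prices and sizes are illustrative assumptions, not real vendor rates.

def cost_cloud_origin(size_gb, origin_fetches_per_month,
                      storage_per_gb=0.023, egress_per_gb=0.09):
    """Keep the rendition set in cloud object storage; pay egress
    each time the CDN has to pull the title back from origin."""
    return size_gb * storage_per_gb + origin_fetches_per_month * size_gb * egress_per_gb

def cost_cdn_storage(size_gb, cdn_storage_per_gb=0.06):
    """Pre-position the rendition set on CDN storage; pay a higher
    storage rate but avoid repeated origin egress."""
    return size_gb * cdn_storage_per_gb

title_size_gb = 40  # full ABR rendition set for one title

for fetches in (0, 1, 5, 50):  # how often the CDN re-fetches the title each month
    cloud = cost_cloud_origin(title_size_gb, fetches)
    cdn = cost_cdn_storage(title_size_gb)
    better = "pre-position on CDN" if cdn < cloud else "leave in cloud storage"
    print(f"{fetches:>3} origin fetches/month: cloud ${cloud:.2f} vs CDN ${cdn:.2f} -> {better}")
```

The crossover point shifts title by title as popularity changes, which is why this placement decision is continuous rather than a one-off.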

Figure 1: Considerations for making VOD workflows more efficient and cost-effective.


Moving content in and out of a cloud storage location for processing can be time-consuming and expensive for major users of storage. Cloud storage “lock-in” is an important topic, so much so that the UK’s competition regulator, the Competition & Markets Authority (CMA), announced in early October 2023 that they would launch a probe focusing on three areas of concern related to public cloud services: egress fees, interoperability, and software licensing. Specifically for egress fees, the CMA have listed possible remedies they will investigate, including capping egress charges, preventing them altogether, and increasing the visibility of fees. The overall objective of the CMA’s review is to help cloud users better control their spend on cloud services.

Even though egress costs may change to allow VOD workloads to be more easily migrated between different public clouds, or between public and private cloud environments, it is considered highly unlikely that all workloads will move out of the public cloud. Centralized and multi-input processes like editing and compliance are well-served by resilient, accessible cloud infrastructure. Archive storage, which requires high levels of security and resilience at the lowest possible long-term cost, is another good use case for cloud infrastructure. That leaves the processing, publishing, and storage of ready-to-stream VOD files, which is exactly what ITVX are investigating.

Executing VOD Workflows More Efficiently

Operationally, these functions are normally supported by workflow automation tools. But while automating the processes should be a natural efficiency gain, the underlying use of hardware and software resources for processing and storing video is where costs can differ significantly between private cloud and public cloud environments. The Swedish broadcaster SVT has developed its own VOD workflow solution called Encore, built from various open-source components. The purpose of Encore is to help SVT publish all their content on time, in great quality, and with high levels of device compatibility. SVT run their processes on a private, on-premise cloud that is capacity-constrained because it lacks “infinite” public-cloud elasticity, so Encore has been developed with a particular focus on intelligently prioritizing file processing jobs to meet deadlines. As noted earlier, prioritization in any factory can have unforeseen consequences for other jobs, which can result in continuous re-prioritization and a lack of smooth output delivery. This normally means higher costs or missed delivery deadlines. But in a fixed-capacity, fixed-cost cloud environment, intelligent job prioritization can deliver big benefits. Conversely, in a flexible-capacity, flexible-cost cloud environment it is critical to control variable pay-as-you-go costs while working to meet file delivery deadlines.
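To illustrate the general idea only (this is a minimal sketch, not Encore’s actual code or API), deadline-aware prioritization on a fixed pool of transcode slots can be expressed as an earliest-deadline-first queue. All job names, durations, and deadlines below are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class TranscodeJob:
    deadline: float                          # publish deadline in hours from now; the sort key
    duration: float = field(compare=False)   # estimated processing time in hours
    title: str = field(compare=False)

def schedule(jobs, slots):
    """Earliest-deadline-first scheduling on a fixed number of transcode slots.
    Returns (title, start, finish, late?) tuples."""
    queue = list(jobs)
    heapq.heapify(queue)                     # ordered by deadline
    slot_free_at = [0.0] * slots             # when each slot next becomes free
    plan = []
    while queue:
        job = heapq.heappop(queue)
        slot = min(range(slots), key=lambda s: slot_free_at[s])
        start = slot_free_at[slot]
        finish = start + job.duration
        slot_free_at[slot] = finish
        plan.append((job.title, start, finish, finish > job.deadline))
    return plan

jobs = [
    TranscodeJob(deadline=2.0, duration=1.5, title="drama-ep1"),
    TranscodeJob(deadline=6.0, duration=3.0, title="film-4k"),
    TranscodeJob(deadline=3.0, duration=1.0, title="news-bulletin"),
]
for title, start, finish, late in schedule(jobs, slots=2):
    print(f"{title}: {start:.1f}h -> {finish:.1f}h {'LATE' if late else 'on time'}")
```

A production scheduler would also re-plan continuously as estimates change and new jobs arrive, which is exactly where the knock-on effects of prioritization described above show up.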

The ideal scenario is resource flexibility with excellent cost control. In this area, newer workflow solutions, envisaged for many years by industry thought-leaders, are blending workflow automation with intelligent planning and simulation capabilities to help media operations optimize resource usage and costs while meeting their deadlines. With detailed costs attached to workflow functions, these tools plan ahead to identify the most efficient way to complete their tasks. In some cases, this may mean significantly increasing processing capacity for a short period to benefit from available lower-cost resources, rather than level-loading the work across the day when capacity costs more. In other cases, it may mean capping the number of resource instances so the work completes on budget. These tools also enable efficiency optimization of entire operations, simulating the impact of resource changes on the ability of the media operation to meet all its deadlines on budget. Media factories of the future, whether they use private or public cloud platforms, will benefit from this next generation of efficiency-improving tools.
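As a hedged illustration of that planning step, the sketch below compares two hypothetical plans for a day’s transcode backlog: bursting into a short window of discounted capacity versus level-loading across the full deadline window. All rates, job sizes, and instance limits are assumptions, not real cloud prices.

```python
# Hypothetical plan comparison for one day's VOD transcode backlog.
# All rates, sizes, and limits are illustrative assumptions only.

JOB_HOURS = 120       # total compute-hours of transcoding to finish today
DEADLINE_HOURS = 20   # latest acceptable completion time

def plan_cost(rate_per_hour, window_hours, max_instances):
    """Return (feasible, cost) for finishing JOB_HOURS within window_hours
    using at most max_instances parallel instances at rate_per_hour."""
    needed = -(-JOB_HOURS // window_hours)   # ceiling division: instances required
    feasible = needed <= max_instances
    return feasible, JOB_HOURS * rate_per_hour if feasible else float("inf")

# Option A: burst into a short discounted window (e.g. off-peak or spot capacity)
burst = plan_cost(rate_per_hour=0.03, window_hours=6, max_instances=40)
# Option B: level-load across the whole deadline window at standard on-demand rates
level = plan_cost(rate_per_hour=0.10, window_hours=DEADLINE_HOURS, max_instances=10)

for name, (ok, cost) in (("burst", burst), ("level-load", level)):
    status = "meets deadline" if ok else "cannot meet deadline"
    print(f"{name:>10}: {status}, estimated ${cost:,.2f}")
```

A real planning tool would run many such comparisons across the whole job mix, and simulate what happens to deadlines when capacity or pricing assumptions change.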

Collaborating with other broadcasters on best practice is often constrained by competitive concerns, and video workflow efficiency is one of the subjects that can be a source of cost advantage. But Mark Ison is encouraged by initiatives like Encore that open up new ways to collaborate with industry peers. “SVT did a helpful thing by open-sourcing their Encore solution. ITV are encouraged by initiatives such as SVT Encore bringing more technology to the open-source space that can be applied at scale, and we’re excited to be working with SVT to explore the applicability of their Encore initiative within ITV’s on-demand streaming ecosystem.”

Standardizing & Simplifying VOD Processing

Customer experience of media services is mostly about the content, its discoverability, its personalization, and the quality of video playback. The latter is a basic building block of a satisfying viewer experience and is always a top priority for broadcasters.

For video playback, the transcoding function is the first point at which the customer experience is defined by a Streamer. A set of video and audio profiles is defined for viewers to access, based on quality and cost parameters, covering bitrate, resolution, GOP size, ABR ladders, and framerates. Given the importance of video playback to overall customer satisfaction with streaming services, these transcoding choices are critical to success.
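As a purely illustrative example, an ABR ladder is often expressed as a simple list of rendition definitions that the transcoder iterates over. The values below are hypothetical, not any broadcaster’s actual ladder.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rendition:
    width: int
    height: int
    bitrate_kbps: int    # target video bitrate
    framerate: float
    gop_seconds: float   # GOP length; aligned across renditions so ABR switches land on keyframes

# Hypothetical H.264 VOD ladder; real ladders are tuned per content type and device reach.
ABR_LADDER = [
    Rendition(1920, 1080, 5000, 25.0, 2.0),
    Rendition(1280,  720, 3000, 25.0, 2.0),
    Rendition( 960,  540, 1800, 25.0, 2.0),
    Rendition( 640,  360,  800, 25.0, 2.0),
    Rendition( 416,  234,  300, 25.0, 2.0),
]

AUDIO_PROFILES = [{"codec": "aac", "bitrate_kbps": 128, "channels": 2}]

for r in ABR_LADDER:
    print(f"{r.width}x{r.height} @ {r.bitrate_kbps} kbps, {r.framerate} fps, GOP {r.gop_seconds}s")
```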

At the same time, VOD transcoding technology is relatively commoditized compared with 10-20 years ago. Specialist transcoding operations departments used to oversee this specific function, working with an array of powerful, on-premise transcoders. Then workflow automation solutions made transcoding many files into many versions more operationally efficient. Then cloud-based SaaS transcoding simplified the process further by providing managed infrastructure and managed software.

Next, given that transcoding profiles for live and VOD services exist to supply the devices that will play the video, there is an opportunity to standardize profiles by device type for the best video playback quality and the best transcoding workflow efficiency. This would further optimize transcoding workloads for Streamers, and could even optimize transcoding across at least two of the major video distribution layers in the industry.
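One hedged sketch of what such standardization could look like in practice: a shared catalogue of rendition sets keyed by device class, so every distribution path reuses the same profiles rather than bespoke per-platform variants. The device classes and profile names below are hypothetical.

```python
# Illustrative only: a shared catalogue of standardized rendition sets keyed by device class.
# Every distribution path that serves these device classes reuses the same profiles.

STANDARD_PROFILES = {
    "tv-4k":  ["hevc-2160p-15000", "hevc-1440p-9000", "hevc-1080p-5800"],
    "tv-hd":  ["avc-1080p-5000", "avc-720p-3000", "avc-540p-1800"],
    "mobile": ["avc-720p-2400", "avc-540p-1400", "avc-360p-700"],
    "web":    ["avc-1080p-4500", "avc-720p-2800", "avc-432p-1100"],
}

def renditions_for(device_classes):
    """Union of standardized renditions needed to serve a set of device classes;
    each rendition is transcoded once and reused wherever those devices appear."""
    needed = set()
    for device in device_classes:
        needed.update(STANDARD_PROFILES[device])
    return sorted(needed)

# A title destined for a D2C app and an aggregator platform that publish the same device classes
print(renditions_for(["tv-hd", "mobile", "web"]))
```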

Let’s consider three main industry layers – Studios, Content Providers, and Pay-TV Aggregators. Studios produce the original content and provide it to the Content Provider brands. Content Provider brands operate channels and VOD libraries that monetize the content through various models, including content syndication, advertising, subscriptions, and taxpayer funding. Pay-TV Aggregators provide packages of Content Provider channels and VOD content to consumers, often bundled alongside other technology services like broadband or mobile.

As content moves between these industry layers, transcoding converts it into the format required by the distributing organization. For example, a single Content Provider prepares its content in different formats to be received by Pay-TV Aggregators like Netflix, Comcast and Sky, while also preparing content for its own D2C (direct-to-consumer) streaming service. The Pay-TV Aggregators receive content from many Content Providers and transcode it into the formats their own platforms support. Essentially, there are media transcoding factories in every layer of the media value chain.

The integration of D2C streaming apps into Pay-TV Aggregator platforms is reducing the amount of total VOD transcoding needed, because video playback for a VOD file on a D2C streaming app is the responsibility of the Content Provider and uses the Content Provider’s transcoding profile. In this case, the Content Provider transcodes once for each profile, and the Pay-TV Aggregator does not re-transcode the VOD file. This is a small but meaningful efficiency improvement in D2C VOD delivery. As noted by the Greening of Streaming initiative, reducing compute usage (transcoding is a compute-intensive process) will have a bigger impact on energy consumption in the media industry than reducing storage or bandwidth usage. In addition, there is now an opportunity for another efficiency gain: collaborating to standardize Content Provider profiles across consumer device types.
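A minimal sketch of the “transcode once” principle, assuming (hypothetically) that both parties have agreed a standardized profile set: renditions that already match are passed through untouched, and only genuinely missing renditions would need any further compute.

```python
# Illustrative sketch of a pass-through decision at an ingest step; all profile names are hypothetical.
# If the renditions delivered by the Content Provider already match the agreed standardized set,
# they are passed through rather than re-transcoded.

ACCEPTED_PROFILES = {"avc-1080p-5000", "avc-720p-3000", "avc-540p-1800"}

def ingest_plan(delivered_renditions):
    """Return which renditions can be passed through and which would still need transcoding."""
    delivered = set(delivered_renditions)
    passthrough = delivered & ACCEPTED_PROFILES
    to_transcode = ACCEPTED_PROFILES - delivered
    return sorted(passthrough), sorted(to_transcode)

ok, missing = ingest_plan(["avc-1080p-5000", "avc-720p-3000", "avc-540p-1800"])
print("pass through:", ok)           # everything matches: no re-transcode, no extra compute
print("still to transcode:", missing)
```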

Collaborating For New VOD Efficiencies

While VOD accounts for the lion’s share of viewing, delivering live video is considered the most challenging task due to concurrent audience sizes, sensitivity to latency and quality, and the business impact of failing to deliver a live video signal. Many broadcaster D2C streaming apps are focused on the live experience, emphasizing linear and live programming, but is this how content will be consumed in the future? It seems likely that the must-see nature of live content, and what it means for fulfilling a public service broadcasting mandate as well as for commercial factors like advertising revenues, will keep live content pre-eminent for public service broadcasters. But VOD’s larger share of the hours published and viewed – including pre-recorded VOD content in the growing list of FAST channels – and its more consistent and predictable consumption patterns mean that each broadcaster’s media factory efficiency program will be a priority year after year.

“The industry needs to continue to work together, ideally across competing groups, to serve viewers effectively,” Mark concludes. “Our transition to D2C video services involves preparing and delivering VOD and live video at scale across various and changing compute, storage, and network environments. We need the Public Service Broadcasters working with Studios and Internet Service Providers to holistically bring services together across the various technology platforms to more efficiently serve our viewers.”
