Heading Into the Clouds With Next-Gen Media Processing Headends

Since the start of the millennium, TV and video services have changed enormously. Along with the changes to the content itself, the infrastructure used to create, process and deliver that content has also changed. However, the rate of transformation is about to increase significantly, with radical changes to multiple facets happening concurrently. The first digital TV services were Standard Definition (SD) and encoded as MPEG-2. Since then, there has been a major shift towards HD, mostly in MPEG-4 AVC, and now the early stages of Ultra-High Definition (UHD) using the latest compression standard: High Efficiency Video Coding (HEVC). Already some SD services are starting to be discontinued and, where SD is still needed, down-conversion from HD is becoming the norm. As each major technology shift is expensive – in terms of content creation, production and consumer devices – it makes sense to take steps only where a combination of changes occurring together delivers meaningful value.

Tony Jones is VP technology and architecture, Media Solutions, Ericsson

UHD is a good example of this. It has been clear that more pixels alone bring some benefit to the consumer’s quality of experience, at least on large screens, but in isolation the costs of UHD throughout the delivery chain are hard to justify. On the other hand, it is clear that High Dynamic Range (HDR), along with Wide Color Gamut (WCG), will have a significant impact and can be used with either HD or UHD resolutions. HDR and WCG need the whole end-to-end delivery chain (from camera to TV screen and everything in between that processes or combines sources) to know how to handle video that can represent a wider range of levels. The impact is significant, both in terms of the visual impact for the end viewer and for all processing stages.

Combined, UHD, HDR and WCG provide a compelling visual enhancement package that consumers will undoubtedly value. Some delivery systems will find the bandwidth needed for UHD too costly, in which case HD versions of HDR signals will still provide a material step in perceived quality. At the same time, the expectation of accessing any content, anywhere, on any device means that TV services and video content must be offered in yet more formats. There is now a commercial and competitive need to feed everything from very large (over 100″ diagonal) UHD HDR TVs all the way down to small (5″ diagonal) mobile screens over connections that must operate at very low bit rates.

Ericsson ConsumerLab TV and Media research shows that consumer demand for higher video quality is real and is considered a vitally important service-specific feature, regardless of age or preferences for scheduled linear TV or on-demand services. It seems clear that UHD, HDR and WCG are likely to fulfill this desire and therefore likely to prove popular with consumers. Even with the latest video encoding standards, such as HEVC, UHD requires relatively high bit rates, adding further to the pressure on bandwidth in the last mile connection. When combined with the acceleration in consumption driven by media viewing on hand-held devices, bandwidth management and efficient encoding are likely to increase in importance.

So, the question becomes: how does this demand for more and better content match with changes to technology in production and media processing headends?

Headends are transforming into data centers.

Using UHD to drive change in the use of interfaces

Firstly, let’s consider connectivity of the signals themselves. SD and HD signals in production environments have to date used a format unique to the TV industry: the Serial Digital Interface (SDI). The first version provided 270 Mb/s, which was sufficient for SD, but when HD video emerged a higher bit rate was needed: 1.5 Gb/s. This required a major change of infrastructure, with replacement of all equipment that handled baseband video. HD-SDI was sufficient for up to 1080i at 29.97 fps, but not enough to support 1080p (i.e. full-frame progressive) at 59.94 fps, so 3G-SDI was added to the list, operating at 3 Gb/s. UHD (specifically 2160p at up to 59.94 fps) requires another major step in rates: 12 Gb/s. Early connections use 4 x 3 Gb/s links in parallel, which enables the use of existing equipment, but it is clearly not a good solution for the longer term.

There is an opportunity to use UHD as a driver of change that will transform the interfaces in use. Production equipment will need to be replaced anyway, so rather than adopting another TV-industry-specific interface, the time may have come to migrate to more widely used physical interfaces. This might have seemed a difficult step (12 Gb/s won’t fit inside 10 Gb/s Ethernet), however the advent of 25 Gb/s Ethernet - which is likely to become prevalent in data center servers - opens the opportunity to carry baseband video over 25G interfaces. It also offers an easier migration path should yet higher data rates be needed (for example, if frame rates double).
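The scale of these rate steps is easy to see with a rough back-of-the-envelope calculation. The sketch below (assuming 10-bit 4:2:2 sampling, i.e. 20 bits per pixel on average, and ignoring blanking and ancillary data) shows why 2160p at 59.94 fps overflows a 10 Gb/s Ethernet link but fits comfortably within 25 Gb/s:

```python
# Rough active-picture bit rates for common formats, assuming
# 10-bit 4:2:2 sampling (20 bits per pixel on average). SDI link
# rates are higher because they also carry blanking and ancillary data.

BITS_PER_PIXEL = 20  # 10-bit luma + 10-bit average chroma (4:2:2)

def active_rate_gbps(width, height, fps):
    """Active-picture payload rate in Gb/s (excludes blanking)."""
    return width * height * fps * BITS_PER_PIXEL / 1e9

formats = {
    "1080i/29.97 (HD-SDI, 1.485 Gb/s link)":  (1920, 1080, 29.97),
    "1080p/59.94 (3G-SDI, 2.97 Gb/s link)":   (1920, 1080, 59.94),
    "2160p/59.94 (12G-SDI, 11.88 Gb/s link)": (3840, 2160, 59.94),
}

for name, (w, h, fps) in formats.items():
    print(f"{name}: ~{active_rate_gbps(w, h, fps):.2f} Gb/s active payload")

# 2160p/59.94 needs close to 10 Gb/s of picture data alone - beyond
# a 10 Gb/s Ethernet link once blanking and packet overheads are
# added, but comfortably within 25 Gb/s.
```
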

Of course, the fact that the interfaces have the bandwidth to carry the baseband data does not guarantee that the servers will have the processing power to handle the higher frame rates. However, at least the physical interface can be defined in a future-compatible manner this time. Moving to Ethernet transport is not quite so straightforward, since live production requires extremely accurate timing control in order to mix sources. Nevertheless, it is achievable with more recent time synchronization technology (which is also needed by other industries). One particular advantage of this approach is that the data format is not directly tied to the line rate of the physical connection, allowing easier co-existence of different formats (and even mixed compressed/uncompressed formats).
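In practice, the time synchronization technology in question is typically the Precision Time Protocol (PTP, IEEE 1588), profiled for professional media by SMPTE ST 2059. A minimal sketch of the two-way exchange at its heart, with illustrative timestamp values:

```python
# Minimal sketch of PTP's delay request-response mechanism,
# which lets a slave clock compute its offset from the master
# over an ordinary packet network.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    Two-way time transfer, as used by PTP:
      t1: master sends Sync          (master clock)
      t2: slave receives Sync        (slave clock)
      t3: slave sends Delay_Req      (slave clock)
      t4: master receives Delay_Req  (master clock)
    Assumes the network path delay is symmetric.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Illustrative timestamps: the slave clock runs 500 us ahead of the
# master and the one-way path delay is 100 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=600e-6, t3=1000e-6, t4=600e-6)
print(f"offset = {offset * 1e6:.0f} us, delay = {delay * 1e6:.0f} us")
```

With an accurate shared clock, every device can time-stamp and align video flows for mixing, independent of the physical link carrying them.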

Responding to a more complex delivery and processing path

As a result of more formats and delivery mechanisms, the flow of content to the consumer, whether live or on-demand, now follows a more complex delivery and processing path. This in turn has driven a need to consider operational complexity and once again there is potential from the IT industry, in particular from data center infrastructure and cloud technology, to make this significantly more efficient and provide advanced insights into the operation itself.

The term ‘cloud’ is used widely, but it can mean very different things to different people, so it is important to be clear about its specific meaning in a given context. Many people equate ‘cloud’ with ‘public cloud’ – i.e. an operation located in and managed by a third party. However, a ‘private cloud’ – i.e. one managed by the operator and connected to the operator’s own private network – can be very attractive for organizations that also own the primary delivery path, be it satellite, cable or telco. On the other hand, if the primary means of delivering the output is over the internet via CDNs (i.e. ‘pure OTT’), a ‘public cloud’ offering may be beneficial, since public clouds invariably have good connectivity to third-party CDN providers’ networks. In both cases, however, the key attraction is taking advantage of the technology that underpins the data centers, whether private or public.

Some media applications are in fact very similar to normal web applications and so map in a straightforward manner to data centers using cloud native applications: i.e. those designed for that environment. Cloud platforms are typically built around transaction-based interactions using HTTP APIs. These are typically stateless, meaning that a different server could respond to the next request with no impact on functionality. Media functions such as acquiring program guides fall exactly into this style; other media functions, however, are very different in nature and need more consideration. For example, live media is a continuous flow of data with very high availability expectations – the ETSI Network Functions Virtualisation specifications provide a definition of service availability, and live television expectations are in line with the availability required by emergency services such as the police or fire services. It should therefore be no surprise that, in some cases, an additional application-level method is appropriate to achieve better availability than data centers natively offer. Media applications that perform real-time data flow processing must consider not only whether the data flow reaches the next stage, but also the latency and jitter of its delivery.
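As a sketch of what ‘stateless’ means in practice, the hypothetical program-guide handler below keeps no per-session state: everything it needs arrives with the request or sits in shared backing storage, so a load balancer can send each request to any interchangeable instance. The store contents and function names are illustrative, not a real product API.

```python
# Hypothetical EPG (program guide) lookup written as a stateless,
# HTTP-style handler: no state survives between calls, so any
# server instance can answer any request.

EPG_STORE = {  # stands in for a shared database or object store
    "channel-1": [{"start": "20:00", "title": "Evening News"}],
    "channel-2": [{"start": "21:00", "title": "Live Football"}],
}

def handle_epg_request(channel_id, store=EPG_STORE):
    """Return the listings for one channel, HTTP-style."""
    listings = store.get(channel_id)
    if listings is None:
        return {"status": 404, "body": "unknown channel"}
    return {"status": 200, "body": listings}

print(handle_epg_request("channel-1"))
```

A continuous media flow breaks this model: the next packet of video must go to the instance that holds the stream context, which is why flow-processing functions need the extra placement and availability care described below.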

So once again, there are a few considerations beyond standard web application needs. Fortunately, these can be resolved without losing the operational efficiency and flexibility gained by having cloud native applications managed by data center application management. The key to successful cloud media application deployment is ensuring correct placement of media functions: in terms of resource availability (e.g. SSDs), and in terms of their relationship to other instances of the same function and/or to the media functions that precede or follow them.

The more advanced container orchestration systems, such as Kubernetes, now recognize the need for some diversity in the categories of application requirements. For example, newly introduced means to support persistent volumes are likely to be useful for Video on Demand (VoD) storage, since the amount of data stored can be very large.
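As an illustration, a VoD store might claim such a persistent volume with a Kubernetes manifest along these lines (the name, storage class and size here are hypothetical, not from any real deployment):

```yaml
# Illustrative PersistentVolumeClaim for a VoD asset store.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vod-asset-store
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # hypothetical SSD-backed storage class
  resources:
    requests:
      storage: 10Ti            # VoD libraries can be very large
```

The claim decouples the media application from the storage implementation: the orchestrator binds it to whatever backing volume the data center provides.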

Leveraging the benefits of complementary monitoring capabilities

Data center deployment models bring with them the ability to have more standardized monitoring for the applications and the infrastructure underneath, which has good potential to help diagnose problems that might occur. However, in addition to that, there still needs to be a media service operational view, to help understand behavior of media data flows, which is more complex with a multitude of output formats and delivery paths for every media asset or channel.

These two monitoring capabilities are complementary and support different operational needs. In addition, the move to cloud native applications creates opportunities to standardize and simplify the deployment of software updates, allowing faster delivery of enhancements to the media applications. This enables operators and broadcasters to further evolve and differentiate their service offerings and rapidly deliver them to their customers.
