Esports - A New Prescription For Broadcasting: Part 3 - Considering The Cloud

Esports is demonstrating how agile mindsets can provide flexible and scalable solutions within relatively short timescales. But as more software solutions become viable, esports is taking advantage of the cloud and its offerings.



This article was first published as part of Core Insights - Esports - A New Prescription For Broadcasting.

One of the advantages of esports is that much of its transmission takes place over the internet. Twitch and YouTube are just two of the platforms providing streaming services that allow virtually anybody to set up their own channel.

It seems logical, therefore, to keep as much processing in the cloud as possible, but the business model adopted by the public cloud provider defines where the costs apply. Some service providers discount their compute resource but charge for both ingress and egress. Others keep ingress charges low and bump up egress costs.
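The effect of these pricing models on a streaming-heavy workload can be sketched with a simple cost calculation. All figures below are illustrative, not real provider prices, and the `monthly_cost` function is a hypothetical model:

```python
# Hypothetical comparison of two cloud pricing models for the same
# streaming workload. All prices are illustrative, not real quotes.

def monthly_cost(compute_per_hour, ingress_per_gb, egress_per_gb,
                 hours, ingress_gb, egress_gb):
    """Total monthly bill under a simple compute + transfer model."""
    return (compute_per_hour * hours
            + ingress_per_gb * ingress_gb
            + egress_per_gb * egress_gb)

# Provider A: discounted compute, but charges both directions of transfer.
cost_a = monthly_cost(0.30, 0.02, 0.02, hours=720,
                      ingress_gb=5000, egress_gb=20000)
# Provider B: pricier compute, free ingress, higher egress.
cost_b = monthly_cost(0.45, 0.00, 0.05, hours=720,
                      ingress_gb=5000, egress_gb=20000)

print(f"Provider A: ${cost_a:,.2f}")
print(f"Provider B: ${cost_b:,.2f}")
```

For a distribution-heavy service, egress volume dominates, so the "cheap compute" provider can win even with transfer charged in both directions.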

High Speed Data Availability
Service providers often provide links between their data centers to speed up distribution, resulting in high-speed data highways between the processing resources distributed around the world. These are a natural fit for esports production companies, as they use facilities already available to IT without having to employ custom distribution circuits.

Although latency and bandwidth are not generally guaranteed, the esports community takes a more pragmatic approach to these challenges. Principally, packet loss is assumed and methods to work with it are adopted. TCP guarantees delivery, but at the expense of latency. Round Trip Times can become excessive if a particularly lossy network is in use or a switch is dropping packets due to egress congestion.
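The interaction between RTT and loss can be quantified with the well-known Mathis approximation, which bounds steady-state TCP throughput at roughly MSS / (RTT × √p). A quick sketch shows why a lossy long-distance path throttles TCP regardless of link capacity:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(p)).
    A rough upper bound on steady-state TCP throughput in bits/s."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# 1460-byte segments, 80 ms transatlantic RTT, 0.1% packet loss:
print(f"{tcp_throughput_bps(1460, 0.080, 0.001) / 1e6:.1f} Mbit/s")
```

Around 4.6 Mbit/s on a path that might physically carry gigabits, which is why loss-tolerant protocols matter for contribution feeds.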

SRT and RIST are alternatives to TCP-based protocols, as they have been developed to deal with challenges specific to broadcast television. In the IT world, ARQ (Automatic Repeat reQuest) derived protocols are often used to maintain reliable throughput.

ARQ Challenges
ARQ describes several different packet-loss detection and recovery strategies that seek to improve data throughput in IP networks. TCP is a form of ARQ and uses a Go-Back-N style sliding window to guarantee delivery. In essence, the receiver sends acknowledgements back to the sender, telling it to re-send any lost packets or send the next window of packets. But if a receiver is not aware it has missed a window of packets, because it didn't know they had been sent, the sender will time out waiting for a response from the receiver. This greatly affects latency and data throughput.
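The Go-Back-N penalty can be made concrete with a small simulation. This is a deliberately simplified sketch, not a faithful TCP model: when a packet is lost mid-window, the receiver discards everything after the gap, and the sender must go back and re-send from the lost packet onwards.

```python
# Minimal Go-Back-N sketch. The wasted re-transmissions after a single
# loss are where the latency and throughput penalty comes from.

def go_back_n(n_packets, window, lost):
    """Count transmissions needed to deliver n_packets in order.
    `lost` is a set of (sequence, attempt) pairs dropped by the network."""
    base = 0                         # oldest unacknowledged sequence number
    sends = 0
    attempt = [0] * n_packets
    while base < n_packets:
        gap = None
        # The sender transmits its whole window before learning of loss.
        for seq in range(base, min(base + window, n_packets)):
            sends += 1
            if (seq, attempt[seq]) in lost and gap is None:
                gap = seq            # receiver discards everything after this
            attempt[seq] += 1
        # Cumulative ACK slides the window up to the first missing packet.
        base = gap if gap is not None else min(base + window, n_packets)
    return sends

# Ten packets, window of four, packet 2 lost on its first attempt:
print(go_back_n(10, 4, {(2, 0)}))   # 12 sends: two transmissions wasted
```

One lost packet costs two extra transmissions here; on a long-RTT path each recovery round also costs a full round trip, compounding the delay.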

SRT and RIST add forward error correction and their own ARQ strategies to improve the efficiency of streaming video and audio. TCP has been established for over thirty years and carries a lot of backwards-compatibility constraints, so it has proved difficult to develop the protocol for specialized applications such as streaming video and audio.

By effectively starting again, protocols such as SRT and RIST have been able to address many of the challenges seen with TCP and provide lower latency and high data throughput links.
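Forward error correction is one reason these protocols can avoid retransmission round trips. As a toy illustration (not the actual FEC schemes used by SRT or RIST), a single XOR parity packet per group lets a receiver rebuild any one lost packet locally:

```python
# Toy XOR parity FEC: one parity packet per group lets the receiver
# rebuild any single lost packet without waiting a round trip for a
# re-send. Illustrative only -- real SRT/RIST FEC is more sophisticated.

def xor_packets(packets):
    """XOR a list of equal-length packets together byte by byte."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_packets(group)          # sent alongside the media packets

# Packet 2 is lost in transit; XOR the survivors with the parity packet.
received = [group[0], group[1], group[3]]
recovered = xor_packets(received + [parity])
print(recovered == group[2])         # True
```

The trade-off is bandwidth overhead instead of retransmission latency, which suits live video far better than TCP's wait-and-resend model.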

It’s fair to say that SRT and RIST will never achieve the low latencies of ST 2110, but the quality of network required to make ST 2110 reliable is orders of magnitude higher than that of a network using TCP, SRT, or RIST. There is a simple trade-off between latency and cost.

Latency has attracted a lot of bad publicity in recent years, especially in the OTT domain where delays of 30 seconds are not uncommon. However, some latency is inevitable and a physical certainty. We shouldn’t just be asking how to make latency as low as possible, but instead ask how much is acceptable for the application we’re working with. As esports networks are showing, some latency is acceptable and to be expected.

Acceptable Latency
As we deliver more to cloud services, the key is to understand how much latency is acceptable. We don’t just have to contend with network delays, but also processing delays and the influence buffers have on the overall design. Buffers are inevitable, and converting between synchronous and asynchronous systems demands their adoption.
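That synchronous/asynchronous conversion is exactly what a jitter buffer does: frames arrive from the network with variable delay but are released on a fixed output clock. A minimal sketch (the `JitterBuffer` class is hypothetical, for illustration):

```python
# Sketch of a fixed-depth jitter buffer: frames arrive asynchronously
# and possibly out of order, but are played out on a fixed clock.
# The buffering depth IS the added latency -- that's the trade-off.

import heapq

class JitterBuffer:
    def __init__(self, depth):
        self.depth = depth       # frames to hold before playout starts
        self.heap = []           # min-heap ordered by sequence number

    def push(self, seq, frame):
        heapq.heappush(self.heap, (seq, frame))

    def pop(self):
        """Called once per output tick; returns None until primed."""
        if len(self.heap) < self.depth:
            return None          # still buffering: this is the latency cost
        return heapq.heappop(self.heap)[1]

buf = JitterBuffer(depth=3)
out = []
# Frames arrive out of order on the asynchronous network side...
for seq, frame in [(1, "B"), (0, "A"), (3, "D"), (2, "C"), (4, "E")]:
    buf.push(seq, frame)
    out.append(buf.pop())        # ...and leave in order on the clocked side
print(out)                       # [None, None, 'A', 'B', 'C']
```

A deeper buffer absorbs more network jitter but adds more fixed delay, which is why "how much latency is acceptable" is a design input, not an afterthought.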

Esports engineers wouldn’t think in terms of keeping latency low for all services; instead, they would adopt a method of prioritization. It might be perfectly acceptable for the home viewer to have a thirty-second delay on their OTT feed, just as long as it is consistent. In the studio, latencies would have to be tightened to a few hundred milliseconds.

Reducing video latency to a few milliseconds is going to put incredible strain on the network and will probably increase its overhead through the procurement of more switches and interconnections. But what benefits will this extra cost and complexity achieve? Would somebody operating a production switcher notice the difference between a cut delay of 20ms and 200ms? They probably would, but would it make a massive difference to this type of production? Probably not. Humans adapt, and as long as the latency is consistent, those operating the equipment will adapt with it.

Combining Formats
This is where the esports thought processes are potentially offering broadcasters a lifeline. IP offers us the opportunity to mix and match our solutions as we’re no longer tied to the static SDI and AES transport systems. Some video feeds may not benefit from a 20ms delay, so why try to impose this on them?

Keeping signal processing in the cloud is a utopian dream for anybody who’s ever worked in a technology field as the massive amount of compute and storage resource available is breathtaking. It might not be infinite, but it would be really difficult for any broadcaster to use more than is available to them.

Controlling processes such as vision switching and audio mixing may at first appear to be a challenging job. One solution is to use the traditional broadcast control panels found in broadcast facilities throughout the world. The actual processing of the video and audio streams was separated from the physical control interface years ago, so this is easily achievable.

Remoting Control
Production switchers and sound consoles often have a network link to the signal processing engine, giving the potential to connect them to cloud services. This is indeed possible, but an alternative is to write software for a specific job. Production switchers and sound consoles look complicated, and they are, as they’re designed to do a multitude of jobs. If instead a generic human interface were written in software to perform a specific task for that production, the whole design would be much less complex and easier to use.
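The idea of a task-specific control surface can be sketched in a few lines. The `CloudSwitcher` class below is hypothetical, standing in for whatever control API a cloud vision-mixing service exposes; the point is that the "panel" for a given show only needs the operations that show uses:

```python
# Sketch of a task-specific control surface. CloudSwitcher is a
# hypothetical stand-in for a cloud vision-mixing service's control API:
# instead of replicating a full production switcher, the interface
# exposes only the sources and operations this production needs.

class CloudSwitcher:
    def __init__(self, sources):
        self.sources = sources
        self.program = sources[0]    # first source on program at start

    def cut(self, source):
        """Cut the program output to a named source."""
        if source not in self.sources:
            raise ValueError(f"unknown source: {source}")
        self.program = source
        return self.program

# The whole "panel" for this show is three named buttons.
switcher = CloudSwitcher(["game-feed", "caster-cam", "replay"])
print(switcher.cut("caster-cam"))    # caster-cam
```

A web front end driving this kind of API gives operators one button per job, which is far easier to learn than a general-purpose switcher surface.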

Writing custom software to achieve this kind of automation is much easier with modern languages and web frameworks, such as those available for Rust. These frameworks hide complexity from the developer, reducing the need for repetitive programming. This allows the developer to concentrate on solving the challenges of the production as opposed to getting bogged down in the technology.

Esports technologists and engineers are looking at production from the perspective of IT. And by applying many of the principles of reducing complexity they’re providing easy to operate customized systems that allow the production teams to get on with making compelling programs and ultimately enhance the immersive viewing experience.
