Esports is demonstrating how agile mindsets can provide flexible and scalable solutions within relatively short timescales. But as more software solutions become viable, esports is taking advantage of the cloud and its offerings.
One of the advantages of esports is that much of its transmission takes place over the internet. Twitch and YouTube are just two of the platforms that allow virtually anybody to set up their own streaming service.
It seems logical, therefore, to keep as much processing in the cloud as possible, because the business model adopted by the public cloud provider defines where the costs apply. Some service providers discount their compute resource but charge for ingress and egress; others keep ingress charges low and bump up egress costs.
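To make the cost trade-off concrete, a back-of-envelope comparison can be sketched as follows. The per-gigabyte prices here are illustrative assumptions, not any provider's actual rates:

```python
# Rough egress-cost comparison for a live contribution feed.
# Prices per GB are illustrative assumptions, not real provider rates.
def egress_cost_usd(bitrate_mbps, hours, price_per_gb):
    # Mbps -> MB/s -> MB -> GB, then multiply by the per-GB price
    gb = bitrate_mbps / 8 * 3600 * hours / 1000
    return gb * price_per_gb

# A 20 Mbps stream running for 4 hours moves 36 GB:
cheap_egress  = egress_cost_usd(20, 4, 0.05)  # provider with low egress fees
pricey_egress = egress_cost_usd(20, 4, 0.12)  # provider with higher egress fees
```

Even at these small rates the difference compounds quickly across multiple feeds and events, which is why the provider's charging model matters as much as its headline compute price.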
High Speed Data Availability
Service providers often provide links between their data centers to speed up distribution, resulting in high-speed data highways between processing resources distributed around the world. These are a natural fit for esports production companies, as they use facilities already available to IT without having to employ custom distribution circuits.
Although latency and bandwidth are not generally guaranteed, the esports community takes a more pragmatic approach to these challenges. Principally, packet loss is assumed, and methods to work with it are used. TCP is a protocol that guarantees delivery, but at the expense of latency. Round trip times can be excessive if a particularly lossy network is being used or a switch is dropping packets due to egress congestion.
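The interaction between loss, round trip time, and TCP throughput can be estimated with the well-known Mathis model, which bounds steady-state TCP throughput by segment size, RTT, and loss probability. A minimal sketch, with illustrative input values:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis model: an approximate upper bound on steady-state
    TCP throughput of MSS / (RTT * sqrt(p)), returned in bits/s."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# 1460-byte segments, 80 ms RTT, 0.1% packet loss:
bound = tcp_throughput_bps(1460, 0.080, 0.001)  # roughly 4.6 Mbps
```

The square-root dependence on loss is the key point: even a fraction of a percent of packet loss on a long-RTT path can cap a TCP stream well below what the raw link could carry, which is exactly the situation streaming protocols are designed around.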
SRT and RIST are alternatives to TCP-based protocols, as they have been developed to deal with challenges specific to broadcast television. But in the IT world, ARQ (Automatic Repeat reQuest) derived protocols are often used to maintain accurate throughput.
ARQ describes several different packet-loss detection and recovery strategies that seek to improve data throughput in IP networks. TCP implements a form of ARQ and uses the Go-Back-N mode to guarantee delivery. In essence, the receiver sends a message back to the sender telling it to re-send any lost packets or to send the next window of packets. But if the receiver is not aware it has missed a window of packets, because it didn’t know they had been sent, then the sender will time out waiting for a response from the receiver. This greatly affects latency and data throughput.
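The Go-Back-N behaviour described above can be sketched as a toy simulation. This is a deliberate simplification: it assumes the sender learns of a loss immediately and that retransmissions succeed, whereas a real sender would also waste the in-flight packets sent after the loss:

```python
def go_back_n_send(num_packets, window, lost):
    """Toy Go-Back-N sender: on a lost packet, go back and resend
    from that sequence number onward. Returns total transmissions.
    Simplified: the sender reacts to a loss immediately, and each
    retransmission is assumed to succeed."""
    lost = set(lost)
    base, sends = 0, 0
    while base < num_packets:
        end = min(base + window, num_packets)
        for seq in range(base, end):
            sends += 1
            if seq in lost:
                lost.discard(seq)  # the retransmission will get through
                break
        else:
            base = end          # whole window acknowledged, slide forward
            continue
        base = seq              # go back: resend from the lost packet
    return sends

# 6 packets, window of 3, packet 2 lost once: 7 sends instead of 6
cost_with_loss = go_back_n_send(6, 3, [2])
```

Even in this idealized model a single loss forces extra transmissions; with real timeouts the penalty is paid in waiting time as well, which is the latency cost the article refers to.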
SRT and RIST add forward error correction and their own ARQ strategies to improve the efficiency of streaming video and audio. TCP has been established for over thirty years and carries a lot of backwards-compatibility baggage, so it has proved difficult to develop the protocol for specialized applications such as streaming video and audio.
By effectively starting again, protocols such as SRT and RIST have been able to address many of the challenges seen with TCP and provide lower latency and high data throughput links.
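The forward error correction idea can be illustrated with the simplest possible scheme: one XOR parity packet per group, which lets the receiver rebuild any single lost packet without a retransmission round trip. Real SRT/RIST FEC schemes are more sophisticated; this is just a sketch of the principle:

```python
def xor_parity(packets):
    """Build one XOR parity packet protecting a group of equal-length
    packets: any single lost packet can be rebuilt from the rest."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(survivors, parity):
    """Rebuild the single missing packet from the survivors + parity."""
    missing = parity
    for p in survivors:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = [b"pkt0", b"pkt1", b"pkt2"]
par = xor_parity(group)
# Lose packet 1 in transit; rebuild it locally, no round trip needed:
rebuilt = recover([group[0], group[2]], par)  # == b"pkt1"
```

The trade-off is constant bandwidth overhead (the parity packets) in exchange for avoiding the RTT-bound retransmission delay, which is why FEC suits low-latency links where ARQ alone would be too slow.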
It’s fair to say that SRT and RIST will never achieve the low latencies of ST 2110, but the quality of network required to make ST 2110 reliable is orders of magnitude higher than that needed by a network using TCP, SRT, or RIST. There is a simple trade-off between latency and cost.
Latency has attracted a lot of bad publicity in recent years especially in the OTT domain where 30 seconds of delay are not uncommon. However, some latency is inevitable and a physical certainty. We shouldn’t just be asking the question of how to make latency as low as possible, but instead ask how much is enough for the application we’re working with. As esports networks are showing, latency is acceptable and to be expected.
As we deliver more to cloud services, the key is to understand how much latency is acceptable. We don’t have to contend just with network delays but also with processing delays and the influence buffers have within the overall design. Buffers are inevitable, and converting between synchronous and asynchronous systems demands their use.
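One way to reason about this is a latency budget: listing every stage in the chain and summing its contribution. The stage figures below are assumptions for the sake of the example, not measurements from a real system:

```python
# Illustrative end-to-end latency budget for a cloud production chain.
# Every figure here is an assumed example value.
budget_ms = {
    "camera encode":        8,
    "contribution (SRT)":  45,   # includes ARQ retransmission margin
    "cloud processing":    20,
    "frame sync buffer":   17,   # roughly one frame at 60 fps
    "distribution encode": 30,
}
total_ms = sum(budget_ms.values())  # 120 ms end to end
```

Laying the budget out this way makes it obvious where effort to reduce latency would actually pay off, and where a stage is already a small fraction of the total.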
Esports engineers wouldn’t think in terms of keeping latency low for all services; instead, they would adopt a method of prioritization. It might be perfectly acceptable for the home viewer to have a thirty-second delay on their OTT feed, just as long as it is consistent. In the studio, the latencies would have to be tightened up to a few hundred milliseconds.
Reducing video latency to a few milliseconds is going to put incredible strain on the network and probably end up increasing its overhead through the procurement of more switches and interconnections. But what benefits will this extra cost and complexity achieve? Would somebody operating a production switcher notice the difference between a cut delay of 20 ms and 200 ms? They probably would, but would it make a massive difference to this type of production? Probably not. Humans adapt, and as long as the latency is consistent, those operating the equipment will adapt with it.
This is where the esports thought processes are potentially offering broadcasters a lifeline. IP offers us the opportunity to mix and match our solutions, as we’re no longer tied to the static SDI and AES transport systems. Some video feeds may not benefit from a 20 ms delay, so why try to impose it on them?
Keeping signal processing in the cloud is a utopian dream for anybody who’s ever worked in a technology field as the massive amount of compute and storage resource available is breathtaking. It might not be infinite, but it would be really difficult for any broadcaster to use more than is available to them.
Controlling processes such as vision switching and audio mixing may at first appear to be a challenging job. One solution is to use the traditional broadcast control panels found in broadcast facilities throughout the world. The actual processing of the video and audio streams was removed from the physical control interface years ago, so this is easily achievable.
Production switchers and sound consoles often have a network link to the signal processing engine giving the potential to connect them to cloud services. This is indeed possible, but another alternative is to write software for a specific job. Production switchers and sound consoles look complicated and are, as they’re designed to do a multitude of jobs. If instead a generic human interface was written in software that performs a specific task for that production, the whole design would be much less complex and easier to use.
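A task-specific software panel can be surprisingly small. The sketch below sends a "cut" command as a JSON message over UDP to a cloud switcher; the endpoint, port, and message format are all invented for illustration, since real switchers each define their own control APIs:

```python
# Hypothetical sketch of a task-specific control surface: a single
# "cut" action sent as JSON over UDP. The host, port, and message
# schema are assumptions for illustration, not a real switcher API.
import json
import socket

def send_cut(host, port, source_id):
    """Build and transmit a cut command; returns the raw message."""
    msg = json.dumps({"cmd": "cut", "source": source_id}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))
    return msg

# A panel for this production only exposes the sources it uses, e.g.:
# send_cut("switcher.example.internal", 9000, "cam-2")
```

Because the interface only exposes the handful of actions this production needs, the operator sees a few labelled buttons rather than the full complexity of a general-purpose switcher surface.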
Writing custom software to achieve this kind of automation is much easier with modern web frameworks and languages such as Rust. These tools hide much of the complexity from the developer, reducing the need for repetitive programming and allowing them to concentrate on solving the challenges of the production rather than getting bogged down in the technology.
Esports technologists and engineers are looking at production from the perspective of IT. And by applying many of its principles for reducing complexity, they’re providing easy-to-operate customized systems that allow production teams to get on with making compelling programs and ultimately enhance the immersive viewing experience.