VX – The Software Gateway To Compute Infrastructure

We talk to Andy Rayner, CTO at Appear, about their new VX software and the vital next evolutionary steps for remote contribution and content distribution in software-first, private or public ‘cloud’ infrastructure.

Andy Rayner, CTO, Appear.
Appear’s X Platform media processing hardware has been hugely successful in live sports production infrastructure. There are many reasons for this, but the headline is that the X Platform provides extremely dense encoding/decoding – up to 96 HEVC/AVC/J2K/JXS channels in a 2RU X20 – with very low power consumption, low heat production and an impressive carbon footprint compared to COTS equivalents. They also have a reputation for reliability, ease of use and first-rate support. Part of the puzzle also includes a web control and configuration GUI for the units, and full NMOS remote control capability. What is less apparent at first glance is a custom implementation of SRT that achieves significantly greater reliability and efficiency than other products.
Against this backdrop Appear have announced the release of a new software platform called ‘VX’, and there is more to it than first meets the eye. Here we discuss the next phase of the Appear roadmap with Andy Rayner, CTO at Appear.
Broadcast Bridge: So what is VX all about?
Andy Rayner, CTO, Appear: “Most of Appear’s business is in the facilitation of the infrastructure between the acquisition location and the production location. All the technology that’s needed to get stuff from wherever the acquisition location is to wherever the production is being done, and for primary distribution of the finished program as it’s emanated towards the head end, etc.”
“That’s where our business has been really coming to fruition in the last few years. What we’re now looking at, and this is really driven by our customers, is the content production stage. A lot of our customers have said we really love the way you do the compression, the way you do the protection mechanisms, everything you do for the transport of the media flows, from acquisition to processing etc. Could you create a complementary software receiver/transmitter and the requisite transcoding & flow manipulation for the production processing side?”
“If you look at the live production workflow diagram below, with content processing – i.e. the content production bit – in the middle, we’ve got what I describe as format processing on the edges and production processing in the middle. I also sometimes refer to this as static processing on the edge and dynamic processing in the middle. By static processing I mean stuff that you set up to do a conversion process that’s fixed for the duration of an event, and the production processing is the dynamic stuff: the graphics, the replay, the vision switching, the audio mixing, all of those bits. Our focus with the initial VX release is on those static elements on the edge: receiving the incoming feeds, healing them, monitoring them, doing all the observability stuff. Then later this year, with a subsequent software release, transcoding them, or just decoding them into whatever other formats are needed within the ecosystem, such that people can then use their production tools of choice in the middle.”
Broadcast Bridge: So VX is part of a vision for software-first, data center compute powered production infrastructure?
Andy: “Yes. For almost the last hundred years of television it’s been an almost entirely linear workflow. It’s been a linear pipe, whether that’s been an analog 405-line pipe, an analog SD, a digital full HD, or an Ultra High Definition pipe. We have perpetuated that pipe concept of linear transit right up to and including 2110. But I think there is an inevitable move towards the production tools being software based. The transition is going to be slower in some places than others, but my belief is most of those production tools are becoming more and more software based. Regardless of whether they’re hosted in people’s own facilities or in the public cloud, the fundamental thing about compute is that it’s not linear. To thrive, compute needs to work asynchronously. We need to embrace that, because what we did with 2110 – and I put my hands up because I was one of the main authors of 2110-21 – was to try to constrain compute to work in a fully linear manner with all the timing models, etc. This was and is absolutely valid for what we are using ST2110 for, but not great for intra-compute.”
“So if we want compute to work efficiently, and in its most optimal way, we need to abandon the concept of this linear hosepipe of data. We need to move to asynchronous, timestamped clumps of data that compute can actually mash, as it does best, in a bursty, non-synchronous way. This is where we’re heading.”
“Obviously at some point before the viewer watches it at the other end, it has to be reconstituted to a time-linear flow. There is debate going on about this, because maybe the endgame is that it won’t become linear until literally the device it is consumed on. Similarly, there are questions over the linear bit at the front end. At the moment we’re considering cameras still having linear outputs, either 2110 or HD-SDI, traversing parts of the system before they are ingested as non-linear feeds into compute. But maybe in the future even cameras will have non-linear, bursty, compute-native interfaces on them.”
“This whole model of 100 years of linear television is, I think, really significantly changing. Everything we’re looking at in the way we’ve architected VX, and the way the industry is going with the G-C-C-G [Ground-Cloud-Cloud-Ground] project in the VSF and now the Media eXchange Layer (MXL) project, is all about embracing the compute-native asynchronous exchange of media flows.”
“One big related topic, which is one of my pet themes, is the timing. As soon as you go asynchronous, the time stamping and the origination time of each of those clumps of media data become completely essential.”
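To make the idea concrete, here is a minimal sketch, in Python, of one way such asynchronous, timestamped clumps of media might be handled in software. It is an illustration written for this article, not VX or MXL code: the Grain name, the nanosecond origination timestamp and the simple watermark-based reordering are all assumptions made for the example.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Grain:
        """One asynchronous 'clump' of media data (illustrative, not a real
        VX/MXL type). The origination timestamp travels with the payload:
        once flows are non-linear it is the only way to restore order."""
        origination_ts: int                  # e.g. PTP-derived nanoseconds at capture
        payload: bytes = field(compare=False, default=b"")

    class Reorderer:
        """Accepts grains in whatever bursty order compute emits them and
        releases them in origination-time order for the final linear stage."""
        def __init__(self) -> None:
            self._heap: list[Grain] = []

        def push(self, grain: Grain) -> None:
            heapq.heappush(self._heap, grain)

        def pop_ready(self, now_ts: int, safety_margin_ns: int):
            # Release grains old enough that nothing earlier can still arrive.
            # The fixed margin is a simple watermark; real systems tune this.
            while self._heap and self._heap[0].origination_ts <= now_ts - safety_margin_ns:
                yield heapq.heappop(self._heap)

Processing stages can then consume and emit grains at whatever rate suits the compute; only the final hop back to a viewer-facing linear flow pays the reordering cost.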
Broadcast Bridge: MXL is a new project undertaken by the EBU, the Linux Foundation, NABA and a collection of vendors and broadcasters to develop the Media eXchange Layer (MXL) as an open-source code package that standardizes how media processing functions running in virtualized environments can share and exchange data with each other. Appear are involved in the project, which we went on to discuss, but that will have to be the subject of a future article! Read more about MXL here.
Broadcast Bridge: Can we dig a little deeper into what the operational benefits are of the initial release of VX and why it is needed?
Andy: “The initial VX release answers our most pressing customer demands for what I would call an edge gateway function in software. That gateway function is all about healing, observability and routing, because it’s about receiving the streams and doing the most comprehensive RTP merging, ARQ, all of those things about healing streams.”
“Why we need healing is about integrity of connectivity. One thing that’s become really apparent is there’s a big difference between telco-grade connectivity and public cloud-based connectivity. I spent many years working for BT (and there are many other good telcos around the world!), and they basically provide what I call quasi-error-free infrastructure. Apart from the occasional random bit error due to statistical probability, when you buy a 100 gig leased line from BT or any other telco operator, you pretty well know it’s going to behave perfectly, barring a component failure. That’s the inherent nature of what they do.”
“If you look at what public cloud resource providers offer, many of them now sell global connectivity capability. The big difference with cloud providers is that most of the traffic they serve – 99 point whatever percent – is TCP, and they do not engineer their infrastructure to be contention-free or loss-free. If you buy a 100 gig leased line from BT, you get 100 gigs of end-to-end throughput all the time. When you go into public cloud resource provision, it’s inherently contended at the packet level, even if you’ve nominally got the bandwidth you need. So there is inherently packet loss happening all the time under the lid.”
“In the TCP world, if you’re losing one in every 5,000 packets and it’s being retransmitted, it’s kind of negligible. It really doesn’t matter. It’s invisible. Nobody ever notices. If that starts happening with UDP traffic for broadcast, that’s actually a significant impairment to the flow. So you have to put in protection mechanisms: 2022-7 RTP merge, ARQ such as RIST or SRT (others are available!), or FEC. Those are your three options – it’s either RTP merge, it’s ARQ, or it’s FEC. If you’re running on public compute, you have to run at least one of those tools to give you integrity, because it is inherently lossy. That’s a big difference. I think most end users, broadcasters and content creators do not understand the difference in integrity between native telco connectivity provision and public cloud connectivity provision.”
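As an illustration of the first of those three options, the sketch below captures the core idea of a 2022-7 style RTP merge: the same stream is sent twice over diverse paths and, at the receiver, the first copy of each sequence number wins. This is a simplified Python sketch written for this article, not Appear’s implementation; a production merge also manages buffering windows, path failure detection and far more careful state.

    class RtpMerger:
        """Seamless-protection merge in the spirit of SMPTE ST 2022-7
        (simplified illustration only)."""
        WINDOW = 1 << 15                     # half the 16-bit RTP sequence space

        def __init__(self) -> None:
            self._seen: set[int] = set()     # sequence numbers already forwarded
            self._newest: int | None = None

        def offer(self, seq: int, packet: bytes) -> bytes | None:
            """Call for every packet from either path. Returns the packet if
            it should be forwarded, or None if the other path won the race."""
            if seq in self._seen:
                return None
            self._seen.add(seq)
            # Wrap-aware tracking of the newest sequence number.
            if self._newest is None or ((seq - self._newest) & 0xFFFF) < self.WINDOW:
                self._newest = seq
            # Forget numbers far behind the newest so state stays bounded
            # and 16-bit sequence numbers can safely wrap and be reused.
            self._seen = {s for s in self._seen
                          if ((self._newest - s) & 0xFFFF) < self.WINDOW}
            return packet

Fed from both network paths through a single offer() call, the receiver rides through uncorrelated loss on either path, at the cost of carrying every packet twice.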
“So going back to our first VX features, the point is providing that vital integrity functionality on the edge. Just to be clear, we’re not trying to move away from a hardware appliance for the acquisition stage. Obviously the first functions that we are realizing in software are entirely complementary to our X Platform, because currently all of that decoding and healing functionality is done by the X hardware. It is about mirroring that in software, offering additional capabilities and creating a more versatile ecosystem. We have a few key focuses, like making sure we stay laser-focused on latency optimization.”
“Then obviously one of the biggest challenges people have once they get stuff into compute is actually those basic things of replicating and routing streams to get them where you need them. The first VX release doesn’t yet cover any of the transcoding elements – the decompression, the compression, the manipulation of streams – that will come later in the year. Once we’ve done all of that inherent healing and routing manipulation on the edge, the next VX release, which is already in development and will follow very closely, will provide two approaches to transcoding. One is the likely initial use case of transcoding from whatever format is coming in into the compression format that’s needed for the next point in the existing production chain.”
“The other scenario is what I hope will become the de facto approach: decoding from whatever compression the content arrives in down to uncompressed, and using something like MXL as the way of exchanging that media-flow data, uncompressed, in a compatible and open manner between the different vendor software toolkits that will actually comprise the production chain. In many ways VX is a first vital step towards broadcasters being able to use best-of-breed applications, exchanging data entirely within an open-format media exchange layer in a microservices-based compute environment – without the need for 2110 or any other inherently compute-inefficient I/O layer.”
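To illustrate that second scenario, here is a deliberately simple Python sketch of the exchange pattern being described. The in-process queue is a hypothetical stand-in for whatever shared-memory mechanism MXL actually defines (its real API is not reproduced here); the point is that once frames are uncompressed and timestamped, tools from different vendors can hand them off without any transport or codec layer in between.

    import queue

    # Hypothetical stand-in for an open media exchange layer. A plain
    # in-process queue is used purely to show the pattern; the real MXL
    # project defines its own interfaces.
    exchange: "queue.Queue[tuple[int, bytes]]" = queue.Queue(maxsize=8)

    def decode_stage(compressed_feed) -> None:
        """Static/edge stage: turn whatever compression arrived into an
        open uncompressed representation, published with its timestamp."""
        for ts, compressed in compressed_feed:
            frame = bytes(compressed)        # placeholder decode; a codec SDK in reality
            exchange.put((ts, frame))

    def production_stage(n_frames: int) -> None:
        """Dynamic stage, potentially another vendor's tool: consumes the
        same open frames with no knowledge of how they were transported."""
        for _ in range(n_frames):
            ts, frame = exchange.get()
            print(f"frame originated at {ts}: {len(frame)} bytes")

    # Tiny synthetic feed to show the handoff end to end.
    feed = [(1_000 + i, b"\x00" * 16) for i in range(3)]
    decode_stage(feed)
    production_stage(len(feed))

The design point is the decoupling: the decode stage neither knows nor cares which production tools consume its frames, which is exactly the property an open exchange layer is meant to standardize.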