Vendor Content.
Backhaul Solutions Engineering

Here we add a vendor perspective to our ongoing discussion of Live Sports Production through an interview with David Walker, VP Solutions Engineering EMEA at Appear, whose team has supported some of the largest global sports productions in recent years.

David Walker, VP Solutions Engineering EMEA, Appear.
The Appear X Platform series of media processing and gateway hardware has been deployed around the world in the most demanding applications, from many of the very highest-profile sporting events and groundbreaking global esports to fully remote production infrastructure. As the man at the sharp end, leading the team responsible for supporting the broadcasters, production service providers and system integrators who deploy these systems, David Walker has unique insight into backhaul infrastructure requirements. Here is just some of what he had to say.
The Broadcast Bridge: What kind of projects have you been supporting over the last 24 months?
David Walker, VP Solutions Engineering EMEA, Appear: “We have been doing a lot of live contribution, especially for larger-scale tier 1 events. A perfect example of the scale of these projects and our role in them is a major multi-venue live sports event in Paris that we did during 2024. We had fully redundant systems at work in four separate live event or stadium venues simultaneously. At each venue there were typically 20 to 30 cameras, which went through an on-site SDI router and were fed into an Appear X Platform chassis, configured in a redundancy model that would then send all the streams back to a single central IBC. Each path we did was capable of delivering up to 300 channels.”
“We are also very involved in more typical tier 1 events, where we see a lot of the production being done on site in OBs. Sometimes it’s partially done on site and passed over to the studios, for example for sanitization of feeds. Certain advertisers can’t be shown in certain regions, so back at the production facility software applications replace the advertising hoardings to make it friendly for global audiences.”
“But we’re finding we sit nicely in the workflow whether it’s tier 1 or tier 2. We are heavily involved in tier 2 for fully remote production. We’ve also been getting involved with VAR, where we work with partners providing the encoders for low-latency HEVC streams for the video assistant referees in various sports. We are not currently so involved in tier 3, but I think that will change with the release of the compact, cost-efficient X5.”
The Broadcast Bridge: Do you and your team get directly involved with projects?
“For the biggest multi-venue event we had a specialist team of people working with the rights holder and their production partners, and we had a dedicated member of my team on site. I manage the solutions engineering team for EMEA at Appear. Over the months leading up to the event one of my team worked closely with stakeholders, participating in technical planning calls. Immediately before the event we visited them to do a health check on the racks. During the multi-week event we had two support staff on a call rotation purely for this event. We did as much hands-on stuff as we could without actually physically sitting next to the units.”
The Broadcast Bridge: Why do you think Appear’s X Platform has been the product of choice for so many high-profile installations and events recently?
“I think the obvious answer is density. We can do 96 HEVC channels with full redundancy in a 2RU X20. If you compare that with some of the COTS-based equipment out in the market, that’s very dense.”
“I also think power consumption is a major factor, especially in OBs. Generally, we use five times less power than a COTS piece of hardware of the same size and we produce a lot less heat. Rising energy prices mean that saving on power consumption and air conditioning matters.”
“Going beyond power consumption, I think the rest of our green credentials are also a factor. For UHD we produce 32 times less CO2 per channel. All our equipment, including the circuit boards, is manufactured at our main facility in Norway, and the facility is entirely hydro-electric powered. Carbon footprint also combines with cost benefits: our equipment is comparatively lightweight. Many of our partners ship a lot of equipment nationally and internationally, and one of our European partners recently told me that they are paying around €750 per kilo to ship it globally. If you can take a flight case where you can do everything in 2RU, you are saving multiple kilos compared to a server.”
Configuration & Control
The Broadcast Bridge: Can we run through where your processors sit within typical configurations?
“We do AC and we do DC, so we can sit in the truck. We can sit in a flight case. We also do inter-facility workflows where we are in the data center, using our density to take things from one location, or multiple locations, and pass them on to one or many locations. We’ve got a satellite modulator so we can send a particular stream straight into the cloud if needed. We also sit at the other end of the managed network, so we can take the signals and decode back to SDI etc. We can handle a wide range of functions across the whole range of different workflows. Appear can do all of these things because the units are modular and FPGA based. We’re incredibly flexible.”
The Broadcast Bridge: So the processor hardware is modular?
“Technically our units are a frame that has twelve slots in the back and two in the front. The two slots in the front are for management. The twelve slots at the back can be whatever you want to put in there from our portfolio of FPGA processing and connectivity modules. It can be 2110, SDI, ASI, satellite demod, or a scrambling card. You can pick and choose how you configure all the units. If they go into a relatively fixed position like a truck, there’ll be a static configuration. If they go into a flight case, they’ll be configured for that particular event, and all our cards are hot-swappable.”
The Broadcast Bridge: Presumably that is all software controllable?
“We have a web user interface for configuration and control. From an operational perspective, with 2110 we are fully NMOS compliant for remote control, so we can be controlled by most third-party control suites. We also partner with DataMiner, so we’re able to be monitored and controlled by them.”
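For readers less familiar with NMOS, the control traffic David describes is standardized by AMWA IS-04 (discovery and registration) and IS-05 (connection management). The short Python sketch below shows the general shape of those calls; the registry address, host name and IDs are hypothetical placeholders, and it is not a description of Appear’s or DataMiner’s own implementations.

    # Minimal sketch of NMOS-based discovery and connection control.
    # The registry URL, receiver host and UUIDs are placeholders.
    import requests

    REGISTRY = "http://nmos-registry.example.net:8080"   # hypothetical IS-04 registry
    QUERY_VERSION = "v1.3"
    CONN_VERSION = "v1.1"

    def list_senders():
        """Query the IS-04 registry for all registered senders."""
        r = requests.get(f"{REGISTRY}/x-nmos/query/{QUERY_VERSION}/senders")
        r.raise_for_status()
        return r.json()

    def connect_receiver(receiver_host, receiver_id, sender_id):
        """Stage and immediately activate a sender-to-receiver connection via IS-05."""
        url = (f"http://{receiver_host}/x-nmos/connection/{CONN_VERSION}"
               f"/single/receivers/{receiver_id}/staged")
        body = {
            "sender_id": sender_id,
            "master_enable": True,
            "activation": {"mode": "activate_immediate"},
        }
        r = requests.patch(url, json=body)
        r.raise_for_status()
        return r.json()

    if __name__ == "__main__":
        for sender in list_senders():
            print(sender["id"], sender.get("label", ""))
        # connect_receiver("gateway.example.net", "<receiver-uuid>", "<sender-uuid>")

In practice a broadcast controller issues these requests on the operator’s behalf; the sketch simply makes the underlying API traffic concrete.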
Typical Workflows
The Broadcast Bridge: Throughout this series we are discussing the different workflows being adopted. Can we discuss what balance you are seeing between traditional full on-site OB, full REMI/remote production, and hybrids of the two?
“For most tier 1 events, I’m seeing a lot of the production being done on site. If you’re doing a full on-site OB live production, where everything’s there and you are producing a stream in the camp, you have more bandwidth and you can focus more on quality. You can send one or two streams: a house broadcast stream, and an international stream with no audio on it, just the event graphics, which can go to other host broadcasters to add their own audio.”
“Sometimes it’s getting partially done on site and then passed over to the studios, for example for sanitization of feeds. Certain advertisers can’t be shown in certain regions, so those studio feeds might go back to the production facilities, where software systems replace the advertising hoardings to make the content suitable for global audiences.”
“The venue contribution infrastructure is still typically SDI. We work with a lot of tier 1 sports broadcasters and production people doing live event contribution and, talking generally, we see tier 1 camera contribution via SDI. If you go down to tier 2 there will be fewer cameras, but usually still SDI. If you go down another tier it will probably be one or two cameras doing NDI into the cloud. But, for upcoming live sports events in EMEA, we are definitely seeing a lot of interest in moving away from SDI to 2110 at the venue. Generally we’re seeing a lot of people using 2110 within remote production facilities.”
“With the multi-venue global sports event in Paris, that was SDI coming in. We had N+N redundancy. That was eight rack units configured in redundant pairs: four encoders and four backups. There were also fully loaded cold backup units, as you would expect for events of that nature. We were doing AVC compression in this case for compatibility, and then that was firing transport streams out: single SPTSs (Single Program Transport Streams) across a managed network back to their multiplex, and then they were using the signals as they needed them. So effectively a large-scale REMI workflow.”
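To make the “single SPTS across a managed network” idea concrete, here is a rough Python sketch that drives ffmpeg to encode a feed to AVC, wrap it as a single-program transport stream and push it over SRT to a primary and a backup destination, loosely mirroring the redundant-pair model described above. The source file, addresses, ports and bitrates are hypothetical placeholders; this is a generic open-source illustration, not how the Appear X Platform implements it.

    # Rough sketch: encode one feed to AVC, wrap it as an SPTS and push it over SRT
    # to a primary and a backup destination. All addresses, ports, bitrates and the
    # source file are hypothetical placeholders.
    import subprocess

    PRIMARY = "srt://203.0.113.10:7001?mode=caller&latency=120000"   # latency in microseconds
    BACKUP  = "srt://203.0.113.11:7001?mode=caller&latency=120000"

    def spts_command(source, destination, video_bitrate="20M"):
        """Build an ffmpeg command that produces a single-program transport stream."""
        return [
            "ffmpeg", "-re", "-i", source,
            "-c:v", "libx264", "-b:v", video_bitrate,   # AVC, as used at the Paris event
            "-c:a", "aac", "-b:a", "192k",
            "-f", "mpegts",                             # one programme per stream = SPTS
            destination,
        ]

    if __name__ == "__main__":
        # One encode per path; running a pair loosely mirrors the N+N model above.
        procs = [subprocess.Popen(spts_command("camera01.mxf", dest))
                 for dest in (PRIMARY, BACKUP)]
        for p in procs:
            p.wait()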
“With the full REMI work we are seeing, it is typically a compressed backhaul workflow. It is challenging to move uncompressed ST 2110 around remote networks because you could be moving well over 12 gigabits a second, and sometimes that isn’t particularly doable because it requires private networks. The SDI streams come into an encoder and are encoded to TR-07 [JPEG XS]. We see a lot of people wrapping it in SRT, but it can be RIST. That is then sent via two diverse network paths using SMPTE 2022-7, so that’s two bit-identical video streams going down two diverse paths with recovery at the far end.”
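The principle behind the SMPTE ST 2022-7 protection David mentions is that identical packets travel down two diverse paths and the receiver rebuilds one clean stream, taking whichever copy of each packet arrives first. The toy Python sketch below illustrates only that de-duplication logic, using plain UDP sockets, made-up port numbers and a made-up two-byte sequence header; real 2022-7 operates on RTP with strict timing and buffering rules, so treat this as a conceptual illustration rather than a compliant implementation.

    # Conceptual illustration of SMPTE 2022-7 style seamless protection:
    # identical packets arrive on two paths and the receiver keeps the first
    # copy of each sequence number, dropping the duplicate from the other path.
    # Toy example only - ports and the 2-byte sequence header are made up.
    import select
    import socket
    import struct

    PATH_A_PORT = 5004   # hypothetical receive ports for the two diverse paths
    PATH_B_PORT = 5006

    def open_path(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        s.setblocking(False)
        return s

    def seamless_receive(handle_payload):
        """Merge two duplicate packet streams into one, de-duplicating by sequence number."""
        paths = [open_path(PATH_A_PORT), open_path(PATH_B_PORT)]
        seen = set()  # recently seen sequence numbers
        while True:
            ready, _, _ = select.select(paths, [], [], 1.0)
            for sock in ready:
                packet, _addr = sock.recvfrom(2048)
                if len(packet) < 2:
                    continue
                seq = struct.unpack("!H", packet[:2])[0]   # toy header: 2-byte sequence
                if seq in seen:
                    continue               # duplicate from the other path - drop it
                seen.add(seq)
                if len(seen) > 4096:       # crude window so the set does not grow forever
                    seen.clear()
                handle_payload(packet[2:])

    if __name__ == "__main__":
        seamless_receive(lambda payload: print(f"got {len(payload)} bytes"))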
“In other workflow scenarios you’ve also got people that are running solely in the cloud. Their mindset is that they are going to fire it out to a CDN at some point anyway, so there is logic to keeping it in the cloud to do audio, graphics, instant replay etc. with cloud-based applications.”
The Broadcast Bridge: Do you mean a public cloud hyperscaler, or do you mean a private data center?
“The public cloud is just someone else’s data center with very good connectivity into it. I’ve worked quite heavily on prem and in the cloud in my career, so I’ve seen both sides of it. I don’t want to seem either pro cloud or anti cloud. A lot of people have gone through transformational processes where they moved traditional workflows that were on prem into the cloud, and we’ve also seen people bring them back on prem. There are multiple workflows, and it depends on how far along that journey the broadcaster is. Some have gone to full production in the cloud, so they’re just taking signals in, doing everything in the cloud and then passing it back on premises. Some are doing maybe 65% of the workflow in their private data center, in their studios. It all depends on the broadcaster and on the event.”