Acquisition Global Viewpoint – March 2023

Metaverse Leading The Way

Improving the immersive viewing experience has been at the core of all broadcast innovation since the first radio broadcasts in the early 1900s. And as we progress on our IP journey, is the Metaverse going to take this experience to even greater levels?

IP is not only delivering greater scalability, flexibility, and resilience, but is also empowering broadcasters to learn from seemingly unrelated industries. Medicine and finance are often cited as two of these industries because their researchers work extensively on image processing and on reducing network latency, but there is also a massive amount of research and development going into the underlying technology of the Metaverse.

It will be some time yet before we reach the scientific utopia of machines reading our brain waves to generate virtual worlds that are fed back to us. But what is happening is that vendors looking towards the future are already investing massive amounts of resource in building the Metaverse infrastructure, and this is something broadcasters can benefit from now.

With all the technology that fills the everyday lives of broadcasters, it’s sometimes easy to lose focus on the problem we’re trying to solve, which is primarily to improve the immersive experience for the viewer. ML, networks, GPUs, CPUs, and storage are all areas where Metaverse researchers are making massive gains. The scene data rendered to image displays demands far more library storage, and there is constant pressure to deliver faster retrieval speeds. Latency is also being addressed, as the immersive experience relies on very fast rendering of scenes and rapid response to human input.

One example of Metaverse technology delivering for broadcasters is LED walls and virtual production. Metaverse designers often use a scene description framework called Universal Scene Description (USD) to create files that describe virtual worlds in a human-readable, text-like syntax. The USD architecture effectively separates the design and rendering processes so that the target image does not constrain the scene description. Instead, the rendering system can be hosted local to the display so that massive image maps do not need to be distributed across the internet.
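To make this concrete, here is a minimal sketch of authoring such a scene description using the OpenUSD Python bindings (the pxr module, installable as the usd-core package). The file name and scene contents are illustrative assumptions, not a production virtual set; the point is that the result is a plain .usda text file that any USD-aware renderer local to the LED wall can open and render.

```python
# Minimal USD scene authoring sketch using the OpenUSD Python bindings.
# Assumes usd-core is installed (pip install usd-core).
from pxr import Usd, UsdGeom, Gf

# Create a new stage backed by a human-readable .usda text file.
stage = Usd.Stage.CreateNew("virtual_set.usda")

# A root transform to hold the whole virtual set.
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# A simple piece of set dressing: a half-metre sphere, two metres up.
ball = UsdGeom.Sphere.Define(stage, "/World/Ball")
ball.GetRadiusAttr().Set(0.5)
UsdGeom.XformCommonAPI(ball.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 2.0, 0.0))

# A camera prim: the renderer resolves the viewport from this, so the
# scene description itself never fixes a target image.
UsdGeom.Camera.Define(stage, "/World/Camera")

# Write the scene description to disk as plain text.
stage.GetRootLayer().Save()
```

Opening virtual_set.usda in a text editor shows the same prims as readable text, which is why USD scene files can be versioned, diffed, and edited much like source code while the heavy rendering stays local to the display.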

Once a scene has been designed, the rendering process defines the target image as well as the viewport. Furthermore, the scene is described in three dimensions so that ray tracing and shading can be delivered in real time, allowing the viewport to move into and around the scene. For an LED wall application, the movement of the actors can be tracked and fed into the rendering engine so that the contents of the viewport move in sympathy with them. In other words, the actors can move into and around the scene, and this is massive for any broadcaster working in virtual production.
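As a sketch of how tracking data might drive that viewport, the fragment below writes time-sampled transforms onto the scene’s camera; read_tracker_pose is a hypothetical stand-in for a real tracking feed, and the frame count and rate are assumptions. In a live virtual production the pose would be streamed straight to the render engine rather than saved to a file; time-sampling it here simply keeps the example self-contained.

```python
# Illustrative sketch: feeding tracked positions into the scene camera.
# read_tracker_pose() is a hypothetical stand-in for a real tracking feed.
from pxr import Usd, UsdGeom, Gf

def read_tracker_pose(frame):
    # Placeholder: pretend the tracked camera drifts along the x axis.
    return Gf.Vec3d(0.01 * frame, 1.7, 4.0)

stage = Usd.Stage.Open("virtual_set.usda")
camera = UsdGeom.Camera.Get(stage, "/World/Camera")
xform_api = UsdGeom.XformCommonAPI(camera.GetPrim())

# Record one time-sampled translation per frame; a USD-aware render
# engine behind the LED wall replays these to move the viewport in
# sympathy with the tracked talent or camera.
for frame in range(1, 241):  # e.g. ten seconds at 24 fps
    pose = read_tracker_pose(frame)
    xform_api.SetTranslate(pose, Usd.TimeCode(frame))

stage.GetRootLayer().Save()
```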

Broadcasters are continuing to benefit from many industries as they progress on their IP journey, and the technology underpinning the foundations of the Metaverse is here today, adding to the growing benefits of adopting IP broadcast infrastructures.