IP and COTS infrastructure designs are giving us the opportunity to think about broadcast systems in an entirely different manner. Although broadcast engineers have been designing studio facilities to be flexible since the earliest days of television, the addition of IP and COTS takes this to a new level, allowing us to continually reallocate infrastructure components to make the best use of expensive resources.
High-speed networks further abstract the processing equipment from the point of use giving even greater flexibility. We have the option of centralizing core infrastructure components, distributing them, or providing a combination of the two. In a true IP environment, we do not have to be concerned with SDI signal loss and cable length restrictions as we can keep our video, audio and metadata essence in IP through standards such as SMPTE’s ST2110.
HPC (High Performance Computing) has championed the way for IP broadcast installations. This is yet another example of how broadcasting is benefiting from the gains in other industries, allowing us to ride on the crest of the wave of their innovation. It would have been almost unheard of to pass live 4K video and audio through a computer server even five years ago, but the speed with which HPC has advanced for machine learning and finance now allows us to use servers for this without hardware modification.
Ethernet speeds now exceed the requirements of HD and 4K uncompressed video. More surprisingly, the equipment needed to make these networks operate is readily available from industry vendors. It is not available from consumer outlets, but it is available from multiple vendors who also provide high-specification service level agreements intended to keep systems working in the most demanding of applications.
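A quick back-of-envelope calculation shows why this is true. The sketch below computes the active-picture bitrate of uncompressed video (assuming 10-bit 4:2:2 sampling, an average of 20 bits per pixel, and ignoring the few percent of RTP/IP header overhead a real ST2110 stream adds) and compares it against common Ethernet link speeds:

```python
# Back-of-envelope check that uncompressed video fits in common Ethernet links.
# Assumes 10-bit 4:2:2 sampling (20 bits per pixel on average) and ignores
# RTP/IP packet overhead, which adds only a few percent for ST 2110-20.

def video_bitrate_gbps(width, height, fps, bits_per_pixel=20):
    """Active-picture bitrate in Gb/s for uncompressed video."""
    return width * height * fps * bits_per_pixel / 1e9

formats = {
    "1080p50": video_bitrate_gbps(1920, 1080, 50),
    "2160p50": video_bitrate_gbps(3840, 2160, 50),
}
for name, gbps in formats.items():
    print(f"{name}: {gbps:.2f} Gb/s")
```

1080p50 comes to roughly 2.07 Gb/s and 2160p50 roughly 8.29 Gb/s, so a single 10GbE port carries one uncompressed UHD stream, and the 40Gb/s and 100Gb/s links now available carry several.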
Once broadcasters move to IP, they start to see how flexible and scalable IP and COTS infrastructures really are. Flexible licensing gives even greater choice, as it opens up a plethora of opportunities through the pay-as-you-go model.
Studios are rarely used for one show. Sets, lighting, camera and sound configurations are regularly changed to make the best use of this incredibly expensive resource, and the same demand for versatility is now finding its way into the rest of the broadcast infrastructure. We can no longer design a playout system for a fixed number of channels or a technical area to work with a limited number of shows. Instead, there is a massive demand to make all technical areas even more flexible.
Software licensing combined with COTS infrastructures delivers this level of flexibility. Software can be pre-installed on servers and enabled through license keys that can be procured for any length of time the vendor agrees to. These truly dynamic systems deliver the capability the modern broadcaster is looking for not only through the infrastructure design, but also through the combinations of OPEX and CAPEX options.
Public cloud systems are entirely OPEX, but broadcasters may want to keep some hardware of their own, in which case the combination of CAPEX and OPEX is available. There are many options for the broadcaster to choose from allowing them to best meet their own requirements.
IP and COTS are about more than just saving money; they are about leveraging the flexibility and scalability they deliver.
Adoption of IP has taught us that the benefits go beyond distributing video and audio over a new transport stream: it also brings completely new working practices that deliver flexibility and make broadcast infrastructures future proof.
Broadcast stations have always had an element of flexibility about them. Studios are really just working spaces where we can control the temperature, humidity, acoustics and lighting. They were designed with flexibility in mind from the outset, as scenery could be regularly changed, the number of cameras could vary, and the method with which audio was recorded could be tuned to the production.
Flexibility is nothing new to broadcasting, and most systems engineers would make a facility as future proof as possible to keep the studio active for as long as possible. Tie lines, routing matrices and wall boxes all demonstrate a commitment to flexibility from the beginning. However, the flexibility of a traditional broadcast facility was limited by the restrictive nature of SDI, AES and MADI.
With IP, we have a common transport stream allowing video, audio and metadata to be simultaneously transferred over IP networks. In all probability the IP traffic will be carried over Ethernet, on either copper or fiber physical links.
This opens up a whole new world of opportunities, as each of the principal elementary streams, that is video, audio, and metadata, can transfer over the same network and cable.
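A minimal sketch makes this separation concrete. In ST2110, each essence becomes an independent RTP flow, typically on its own multicast group: ST 2110-20 defines the video payload, ST 2110-30 the audio and ST 2110-40 the ancillary data. The multicast addresses and ports below are hypothetical, chosen purely for illustration:

```python
# Illustrative model of how ST 2110 splits one source into independent
# essence flows, each on its own multicast group. Addresses and ports
# are hypothetical examples, not values from any standard.
from dataclasses import dataclass

@dataclass
class EssenceFlow:
    standard: str     # the ST 2110 part defining the payload format
    essence: str
    mcast_addr: str   # hypothetical multicast group
    udp_port: int

camera_1 = [
    EssenceFlow("ST 2110-20", "video", "239.10.1.1", 50000),
    EssenceFlow("ST 2110-30", "audio", "239.10.1.2", 50010),
    EssenceFlow("ST 2110-40", "ancillary data", "239.10.1.3", 50020),
]

# A receiver subscribes only to the flows it needs: an audio mixer, for
# example, can join the audio group without ever receiving video packets.
audio_only = [f for f in camera_1 if f.essence == "audio"]
```

The design choice this illustrates is the opposite of SDI embedding: rather than extracting audio from a combined signal, a device simply never subscribes to the essences it does not need.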
This has further led to the democratizing of processing hardware. Instead of requiring broadcast-specific hardware solutions, we can now look to COTS systems based on high-speed servers, network infrastructures and storage systems to provide most, if not all, of our video, audio and metadata processing and storage needs.
Advances in HPC (High Performance Computing) have demonstrated that the speed with which data can be processed now exceeds broadcast requirements, especially for 4K/UHD systems. Infrastructure latencies are incredibly low, and multi-processor servers are taking data processing speeds to new levels.
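To put that claim in perspective, the sketch below compares the raw data rate of one UHD stream against server memory bandwidth. The 100 GB/s figure is a deliberately conservative assumption for a current multi-channel DDR4/DDR5 server; real machines often exceed it:

```python
# Rough sanity check: the raw data rate of a UHD stream is tiny next to a
# modern server's memory bandwidth (assumed conservatively at ~100 GB/s
# for a current multi-channel DDR4/DDR5 system).

def stream_gbytes_per_sec(width, height, fps, bytes_per_pixel=2.5):
    """Data rate in GB/s; 2.5 bytes/pixel approximates 10-bit 4:2:2."""
    return width * height * fps * bytes_per_pixel / 1e9

uhd = stream_gbytes_per_sec(3840, 2160, 50)
server_mem_bw = 100.0  # GB/s, assumed for illustration
print(f"One 2160p50 stream: {uhd:.2f} GB/s "
      f"({uhd / server_mem_bw:.1%} of assumed memory bandwidth)")
```

A single 2160p50 stream works out to roughly 1 GB/s, around one percent of the assumed memory bandwidth, which is why one server can host many simultaneous streams and still have headroom for processing.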
Figure 1 – IP infrastructures allow us to physically separate the multiviewer application and server away from the display device. Here, the IP to HDMI converter is close to the Multiviewer but the server can be some distance away, potentially in a separate building or even the cloud.
Ingress And Egress Capacity
Although many broadcasters are currently focusing on on-prem datacenter designs, public cloud systems are now starting to appear. The biggest challenge for cloud is ingress and egress capacity and the associated costs. However, with OTT gaining more prominence and the progress made in remote operation, the need to move vast amounts of data between on-prem and off-prem datacenters is rapidly diminishing.
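The scale of the egress problem is easy to quantify. The sketch below works out how many gigabytes an hour of uncompressed HD contribution represents; the price per gigabyte is a labeled placeholder, since real cloud egress pricing varies by provider, region and volume tier:

```python
# Why egress matters: the sheer volume of one uncompressed HD feed.
# The $/GB figure is hypothetical, for illustration only - real cloud
# egress pricing varies by provider, region and volume tier.

HOUR = 3600                                # seconds
hd_bitrate_gbps = 2.07                     # uncompressed 1080p50, 10-bit 4:2:2
gb_per_hour = hd_bitrate_gbps * HOUR / 8   # gigabytes moved per hour

egress_price_per_gb = 0.05                 # hypothetical $/GB
cost_per_hour = gb_per_hour * egress_price_per_gb
print(f"{gb_per_hour:,.0f} GB/hour -> ${cost_per_hour:,.2f}/hour per stream")
```

At over 900 GB per hour per stream, it is clear why cloud designs keep processing close to the content, or compress before transfer, rather than shuttling uncompressed essence in and out of the datacenter.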
Essentially, IP allows us to take flexibility, scalability and resilience to new levels. Combined with these infrastructures, the advent of SMPTE’s ST2110 has the potential to deliver unprecedented opportunities. ST2110 effectively abstracts away the underlying transport stream layer from the video, audio and metadata essence. This is the first time in the history of television this method has been available to broadcasters for real-time operations.
Although SDI, AES and MADI have served the broadcast industry well over the past thirty years and will continue to provide solutions for some applications, they are relatively static methods of operation and leave little scope for flexibility, resilience and scalability. In part this is by design, as all three were engineered to be highly reliable at the expense of flexibility from the start. The embedded clocks, CRCs and limited supported formats were geared more to reliability than flexibility.
IP, from the outset, was designed to be a flexible packet-switched mechanism. Although the protocol is hardware and transport agnostic, many broadcasters use Ethernet as the underlying transport. Ethernet itself has developed massively in recent years, and bandwidths of 40Gb/s and 100Gb/s are now available.
Multiviewers now present unparalleled flexibility. Studios, control rooms and viewing suites all use multiviewers, and flat panel displays further expand their capability thanks to their shallow depth and versatility. There’s no need for SDI, as the HDMI 2.0 specification supports 4:4:4 color sampling. At 4K the color channels are limited to 8 bits each, which probably isn’t good enough for grading but is more than adequate for confidence and quality monitoring.
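It is worth checking that 4K60 at 4:4:4 and 8 bits really does fit within HDMI 2.0. The sketch below uses the commonly cited 4400 x 2250 total frame timing (active picture plus blanking, giving the familiar 594 MHz pixel clock) and HDMI 2.0's 18 Gb/s TMDS limit, of which 8b/10b coding leaves roughly 14.4 Gb/s for pixel data:

```python
# Check that 2160p60 at 4:4:4, 8 bits per channel, fits in HDMI 2.0.
# HDMI 2.0's maximum TMDS rate is 18 Gb/s; 8b/10b coding leaves roughly
# 14.4 Gb/s for pixel data. Total timing for 2160p60 is commonly
# 4400 x 2250 pixels per frame, including blanking.

pixel_clock = 4400 * 2250 * 60        # pixels per second, incl. blanking
bits_per_pixel = 3 * 8                # 4:4:4 sampling, 8 bits per channel
required_gbps = pixel_clock * bits_per_pixel / 1e9
available_gbps = 14.4                 # after 8b/10b coding overhead
print(f"Required: {required_gbps:.3f} Gb/s vs ~{available_gbps} Gb/s available")
```

The answer, about 14.26 Gb/s against roughly 14.4 Gb/s available, shows the format fits, but only just, which is exactly why HDMI 2.0 cannot also carry 10-bit 4:4:4 at this resolution and frame rate.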
IP to HDMI converters are readily available at relatively low cost, so an ST2110 stream can easily be converted to HDMI for the flat panel display. Many other versions of IP to HDMI converters are available for ST2022 and for compressed video and audio, allowing IP to be taken right to the back of the display.
The true power of the flexibility of multiviewers appears when we consider the COTS solutions available. In the past, if you wanted to reconfigure or expand your SDI multiviewer, you would probably have to buy additional cards, or even a new frame if you had exhausted its capacity. All of this is expensive, time consuming and very inflexible, especially by today’s standards.
Very High Shelf
Admittedly, the COTS servers needed to process multiple streams of ST2110 or ST2022 video and the associated audio are very high-end and stretch the technical abilities of these devices. They are off-the-shelf in terms of being readily available from industry suppliers; it’s just that the shelf is very high. That is, the equipment is designed for high-availability industrial applications and the costs reflect that.
Having such massive amounts of processing power and data throughput available on-prem or off-prem gives us outstanding flexibility. Not only can we run different applications on the servers, but we also have the choice of which software we run on which server due to software licensing.
Also, the individual software applications can be easily configured, either through the Ethernet interface or through pre-stored files. As the servers do not have any vendor-specific hardware, such as SDI input and output cards, the video, audio and metadata can be easily processed on any server with sufficient resources.