SDI and IP differ fundamentally in their approach to data transport: SDI is circuit switched, whereas IP is packet switched. This presents interesting challenges as we start to consider what it means to route IP signals.
In traditional broadcast SDI, AES and analogue systems, we relied on crosspoint matrices to provide one-to-many signal routing to connect inputs and outputs together. Crosspoint routing was reliable, but it was restricted in its operation.
Expanding a crosspoint is a challenge. In the case of a 512 x 512 SDI frame, once all 512 inputs or all 512 outputs were in use, the router was at full capacity. Expanding it required a great deal of compromise and risked routing blocking. The two options were either to completely replace the router with a bigger one, or to add a second router and connect the two together with tie-lines.
SDI Router Limitations
Replacing the router is clearly expensive and very disruptive for the broadcast facility. The whole router has to be disconnected, the new one installed, and then all the signals reconnected. Anybody who has tried to remove hundreds of SDI coaxial cables from a densely populated backplane knows this is a complex and time-consuming task.
This lack of expansion capability usually leads to routers being heavily over-specified during the initial design phase, increasing capital expenditure. It’s almost impossible to plan capacity years ahead, making the concept of future-proofing SDI infrastructures incredibly difficult. Although broadcast facilities have coped with this philosophy in the past, the need for flexibility and scalability has encouraged them to look at IP.
One of the benefits of transitioning to IP is that we can ride on the crest of the wave of innovation from the IT industry. Switch vendors have been working on flexible and scalable designs since the first devices were conceived back in the 1980s. By definition, packet switched networks are dynamic in nature as each datagram has its own source and destination address; consequently, flexible and scalable infrastructures are a given.
One of the parameters that defines an ethernet switch is its backplane bandwidth. The backplane bandwidth is the capacity of the fabric that connects the ports on the line cards for routing. In a non-blocking design, there is sufficient bandwidth on the backplane to route all the inputs and outputs of every port simultaneously. For example, a switch with thirty-two 400Gbps ports will have a backplane bandwidth of 12.8Tbps (12,800Gbps), that is 32 x 400G.
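The non-blocking calculation above is simple enough to sketch in a few lines; the figures below repeat the 32 x 400G example from the text:

```python
def backplane_bandwidth_gbps(ports: int, port_speed_gbps: int) -> int:
    """Minimum fabric capacity for a non-blocking switch: every
    port must be able to send and receive at full line rate."""
    return ports * port_speed_gbps

# The 32 x 400G example from the text: 12,800Gbps, i.e. 12.8Tbps.
print(backplane_bandwidth_gbps(32, 400))  # 12800
```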
Figure 1 – Expanding SDI routers is deceptively difficult. Not only do we have to embark on massive infrastructure costs when expansion is required, but we inadvertently create signal blocking. This diagram shows how the tie-lines limit input and output connectivity. If all the tie-lines are being used, it would be very difficult to route camera-1 from studio-1 to the production switcher in studio-2.
IP does not use physical distribution amplifiers to duplicate signals but instead uses multicasting. A fixed range of IPv4 Class D addresses (224.0.0.0 to 239.255.255.255) is reserved for multicasting. Every source device providing a stream is assigned a multicast address so that its stream can be made available throughout the network. For example, if the primary video output from studio-1, camera-1 is assigned multicast address 239.1.1.1, then any receiver in the network can opt to receive it.
Receiver devices include production switchers, multiviewers, monitors, video disk recorders, etc. Making sure these devices receive the multicast streams they need is what signal routing means in the broadcast application of IP. To achieve this, two methods are available to us: IGMP (Internet Group Management Protocol) and SDN (Software Defined Networking).
The whole point of IGMP is that the switch will only forward multicast packets to the receiver devices that need them. For example, if the cameras for studio-1 are on ports 1 and 2 of the switch, and the production switcher is on port 5, then the video multicast streams arriving on ports 1 and 2 will be forwarded to port 5. If the multiviewer is on port 6 and does not need to receive the camera video streams, none of the multicast streams from ports 1 and 2 will be forwarded to port 6.
IGMP Control Efficiency
IGMP is the traditional control method for making multicast streams available to receiver devices, and it operates by the receivers opting in to specific multicast streams. A querier, typically the router or switch itself, runs the IGMP protocol on the network, and every device can signal to it. When the multiviewer wants studio-1’s production switcher program output, it sends an IGMP membership report (a “join”) for that stream’s multicast group, and the switch, using IGMP snooping, forwards the multicast packets to the relevant port. If multiple devices request the same video stream, the switch duplicates the IP datagrams and forwards them to each of the relevant ports.
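At the operating-system level, a receiver’s opt-in is a single socket option that causes the host to emit the IGMP membership report. The sketch below shows a hypothetical receiver joining an illustrative multicast group; the address and port are assumptions for the example, not taken from any real allocation:

```python
import socket
import struct

# Hypothetical multicast group carrying studio-1, camera-1 video.
GROUP = "239.1.1.1"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the OS send an IGMP membership report; an
# IGMP-snooping switch then forwards the stream to this port only.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # the join requires a multicast-capable network interface
```

Leaving the group (`IP_DROP_MEMBERSHIP`) is the mirror image, and it is the join/leave round trip that introduces the latency discussed below.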
Although IGMP creates an efficient system, as only the ports with receivers that require a multicast stream carry it, its major drawback is the noticeable delay between a receiver initiating the multicast “join” and the stream being forwarded and becoming available to the downstream device.
IGMP Control Latency
A software management tool is needed to keep a record of multicast stream allocation, as there could easily be thousands, or even tens of thousands, of video and audio streams in a broadcast network. Maintaining a spreadsheet mapping video streams to IP multicast addresses is simply not viable. The management software can then send an IGMP request to join a required stream on behalf of the operator.
Before a receiver can join the multicast, the software management tool must establish if there is enough bandwidth on the link the receiver is connected to. For example, if the sound control room loudspeakers are connected to the router through a 1Gbps ethernet connection on port 9 and the sound console is on port 10, the management software will need to establish if enough bandwidth is available from port 10 to port 9 on the router. This can be achieved by interrogating the router’s API, but the software then runs the risk of becoming vendor-specific, resulting in scalability limitations.
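Such an admission check boils down to simple bookkeeping before the join is issued; a minimal sketch, where the port numbers and stream bandwidths are illustrative assumptions following the 1Gbps example above:

```python
# Per-port link capacity in Mbps (1Gbps ports, per the example above).
LINK_CAPACITY_MBPS = {9: 1000, 10: 1000}
allocated_mbps = {9: 0, 10: 0}

def can_admit(port: int, stream_mbps: float) -> bool:
    """Would admitting this stream stay within the link's capacity?"""
    return allocated_mbps[port] + stream_mbps <= LINK_CAPACITY_MBPS[port]

def admit(port: int, stream_mbps: float) -> bool:
    """Reserve bandwidth for the stream; refuse it if the link is full."""
    if not can_admit(port, stream_mbps):
        return False
    allocated_mbps[port] += stream_mbps
    return True

# A compressed audio stream is light; admitting it to port 9 succeeds.
print(admit(9, 5))     # True
# An uncompressed HD video stream (~1.5Gbps) would not fit on a 1G link.
print(admit(9, 1500))  # False
```

An SDN controller performs essentially this check, but against its live model of every link in the fabric rather than a static table.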
Figure 2 – Leaf-spine switching topology provides both resilience and scalability. Each device, such as cameras, microphones, and sound consoles, is attached to one of the leafs, and the connection to the spine facilitates routing to other leafs. To maintain the optimum network design, each studio should have its own leaf. For example, if studio-1 used LEAF-1, then all the cameras, the production switcher, and the multiviewer for studio-1 would share the same non-blocking switch; this would reduce the overall network traffic but still provide the option of routing the devices to other studios.
The second method of routing control is to use SDN (software defined networking). As the broadcast industry continues to embrace IT technologies and working practices, the adoption of SDN and software defined infrastructures is becoming more commonplace.
SDN is a development of the software manager and IGMP server architecture as it has the ability to interface directly to the router, or multiple routers, facilitating some subtle but very important additions.
In SDI routers, we are familiar with switching between sources to give near-instantaneous video changes on a monitor. Due to the inherent delay in IGMP, switching latencies of several seconds can be experienced when joining and leaving multicast streams. For example, if a monitor is switching between the video outputs of cameras 1 and 2, the monitor must leave the multicast feed of camera-1 before joining the multicast feed of camera-2. This is a feature of IGMP.
To avoid these latencies, the SDN controller can establish camera-2’s multicast feed and forward its IP packets to the monitor before the transition, then remove camera-1’s feed once the switch is complete. Although this speeds up the switch at the monitor, double the data bandwidth is required on the link during the transition, as the two camera multicast streams are simultaneously active. SDN can manage this bandwidth allocation.
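This make-before-break sequence can be sketched as follows; the `join` and `leave` callables are hypothetical stand-ins for whatever API the controller uses to program the switch:

```python
def switch_source(receiver_port, old_group, new_group, join, leave):
    """Make-before-break: join the new multicast group first, so both
    streams briefly co-exist on the link, then leave the old group."""
    join(receiver_port, new_group)   # link now carries both streams (2x bandwidth)
    leave(receiver_port, old_group)  # old stream torn down; back to 1x

# Record the order of operations with simple stand-in callbacks.
events = []
switch_source(5, "239.1.1.1", "239.1.1.2",
              join=lambda port, group: events.append(("join", group)),
              leave=lambda port, group: events.append(("leave", group)))
print(events)  # the new group is joined before the old one is left
```

The ordering is the whole point: because the join precedes the leave, the monitor never sees black, at the cost of the temporary double bandwidth the controller must budget for.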
As we transition to IP, there’s a lot of legacy SDI equipment still in use. Even in a greenfield site, not all equipment is IP-enabled; some devices will need a form of SDI interface, and there may well be SDI routers in the infrastructure. SDN provides a method of controlling all the routers, whether IP or SDI, to give users a consistent interface.
SDN enables a higher view of the network where much of the low-level functionality is abstracted away from the user to allow them to focus on their work. This includes maintaining an inventory of all the connected devices and their attributes including video and audio codec types, and compression bit rates, etc.
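The inventory the SDN maintains can be as simple as a keyed record per device; the field names and values below are purely illustrative assumptions, not taken from any specific product:

```python
# Hypothetical SDN inventory entries showing the kind of attributes
# tracked per device: type, multicast address, codec, bit rate.
inventory = {
    "studio1-cam1": {
        "type": "camera",
        "multicast": "239.1.1.1",
        "codec": "uncompressed video",
        "bitrate_mbps": 1500,
    },
    "studio1-multiviewer": {
        "type": "multiviewer",
        "subscribed": ["239.1.1.1"],
    },
}

# With the inventory in hand, the controller can answer questions without
# touching the network, e.g. the bandwidth a device's subscriptions consume:
def subscribed_bandwidth_mbps(device: str) -> int:
    groups = inventory[device].get("subscribed", [])
    return sum(d["bitrate_mbps"] for d in inventory.values()
               if d.get("multicast") in groups)

print(subscribed_bandwidth_mbps("studio1-multiviewer"))  # 1500
```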
Routing signals in IP infrastructures is not as straightforward as it is in the more familiar SDI environments, but the potential gains from software management systems based on an SDN approach are substantial. Flexibility and scalability are built in from the beginning, and we only have to deliver the system we need now, not the one we think we may need in ten years’ time, which invariably changes anyway.