A Brief History of IP - Putting It All Together

Building reliable, flexible IP networks requires an understanding of the infrastructure components and the interoperability of the systems that run on them, especially when working in fast-paced, dynamic studios. Protocol interfacing is relatively straightforward, but as we investigate application-level connectivity further, systems become more interesting.

Simplistically, layer-2 Ethernet switches forward data faster than layer-3 IP routers, which leads to reduced packet jitter and delay. But layer-3 IP routers provide greater flexibility, especially when distributing to other districts or cities via telcos (telecommunication providers).

Excessive packet jitter and delay cause problems with signal reconstruction at the receiver. The standard method of removing jitter is to use a buffer; each packet is written to memory as it arrives, then read out in sequence and at a constant rate.

However, if a packet arrives too late or with too much jitter, it will not be available to the decoder engine at the appropriate time. A packet that arrives too early suffers a similar fate, as the buffer may have no free space to hold it.

The receive buffer has only a finite length. If it’s too long, excessive decoding delays occur, resulting in lip-sync errors. If it’s too short, packets quickly become out of date, resulting in splats and pops in the audio, and frame freezes and break-up in the video.
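
To make the trade-off concrete, here is a minimal de-jitter buffer sketch in Python. The class and method names are illustrative only; real receivers add playout clocks, loss concealment, and adaptive buffer depth:

```python
# Minimal de-jitter buffer sketch (hypothetical names). Packets arrive out
# of order and with variable delay; they are written into slots indexed by
# sequence number and read out at a fixed cadence by the decoder.

from typing import Optional

class JitterBuffer:
    def __init__(self, depth: int):
        self.depth = depth                 # buffer length in packets
        self.slots: dict[int, bytes] = {}  # sequence number -> payload
        self.next_read = 0                 # next sequence number due out

    def write(self, seq: int, payload: bytes) -> None:
        # Discard packets that are already too old (late) or too far
        # ahead of the playout point (early) to fit in the buffer.
        if self.next_read <= seq < self.next_read + self.depth:
            self.slots[seq] = payload

    def read(self) -> Optional[bytes]:
        # Called on the playout clock. Returns None for a late or lost
        # packet, which the decoder must conceal (e.g. repeat or mute).
        payload = self.slots.pop(self.next_read, None)
        self.next_read += 1
        return payload
```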

Administration Challenges

Although IP routing provides a great deal of flexibility, it also poses some interesting administrative challenges. When a Windows laptop connects to a network, a great deal of configuration goes on between the laptop and the network, unseen by the user. The process used in IT networks is called Dynamic Host Configuration Protocol (DHCP), and it provides the laptop with an on-demand IP address.

The network’s DHCP server keeps a pool of IP addresses in its database, and each time a computer connects to the network, the server allocates one of them to it. When the computer disconnects, the allotted IP address is returned to the free pool so it can be allocated to the next device that connects to the network.

Without DHCP, system administrators would need to issue IP addresses manually. This is relatively straightforward with fixed devices such as desktop PCs or rack servers. However, the task becomes more complex with portable devices, which may connect to and disconnect from a network many times throughout the day – each time requiring the system administrator to issue a new IP address. Clearly this is unworkable.
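
The pool mechanics can be sketched in a few lines of Python. This is a toy illustration of the lease idea only, not the actual DHCP protocol, which runs as a UDP exchange of DISCOVER, OFFER, REQUEST and ACK messages:

```python
# Toy address-lease pool illustrating the DHCP idea (not the real protocol).
from ipaddress import IPv4Address

class LeasePool:
    def __init__(self, first: str, last: str):
        start, end = int(IPv4Address(first)), int(IPv4Address(last))
        self.free = [IPv4Address(a) for a in range(start, end + 1)]
        self.leases: dict[str, IPv4Address] = {}  # MAC address -> IP

    def request(self, mac: str) -> IPv4Address:
        # Re-issue the same address to a returning client where possible.
        if mac in self.leases:
            return self.leases[mac]
        addr = self.free.pop(0)  # raises IndexError if the pool is empty
        self.leases[mac] = addr
        return addr

    def release(self, mac: str) -> None:
        # Return the address to the free pool for re-allocation.
        addr = self.leases.pop(mac, None)
        if addr is not None:
            self.free.append(addr)

pool = LeasePool("192.168.10.10", "192.168.10.20")
print(pool.request("aa:bb:cc:dd:ee:ff"))  # 192.168.10.10
```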

Diagram 1 – To reduce the administrative nightmare of allocating IP addresses, DHCP is used.

Broadcast television and radio face similar challenges, as each host device must have an allocated, unique IP address. Allocating the same address to two devices gives rise to a phenomenon called IP-ghosting, which causes all kinds of problems within a network; systems assume, and specify, that all devices must have unique IP addresses.

IP-Ghosting Must Be Avoided

But there is no mechanism within the IP protocol itself to stop IP-ghosting from occurring; it is the responsibility of the system administrators and engineers configuring a network to make sure all devices have unique IP addresses within a domain.
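
A simple audit script can catch such clashes before they cause on-air problems. The sketch below uses a hypothetical inventory format and flags any address configured on more than one device:

```python
# Flag any IP address assigned to more than one host in a device inventory.
# The inventory format (device name -> configured address) is hypothetical.
from collections import defaultdict

def find_duplicates(inventory: dict[str, str]) -> dict[str, list[str]]:
    by_addr: defaultdict[str, list[str]] = defaultdict(list)
    for device, addr in inventory.items():
        by_addr[addr].append(device)
    return {addr: devs for addr, devs in by_addr.items() if len(devs) > 1}

# Example: two devices accidentally share 192.168.10.5.
clashes = find_duplicates({
    "console-1": "192.168.10.4",
    "blade-1": "192.168.10.5",
    "blade-2": "192.168.10.5",
})
print(clashes)  # {'192.168.10.5': ['blade-1', 'blade-2']}
```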

Automated alternatives are available, such as DHCP or the plug-and-play method offered by Wheatstone to automatically configure its BLADEs – distributed IP audio processing modules that provide an array of audio services including mixing, level control and equalization.

Dedicated servers have an advantage over PC servers as they have dedicated hardware that can process audio in real-time with very little delay. Digital Signal Processors (DSPs) with localized near-chip memory and streamlined, pipelined data flows process audio within just a few samples, resulting in very low processing delays.

Scalability and Flexibility

Distributed processing empowers scalability, but traditionally broadcast infrastructures were rigidly designed for peak demand to allow for the worst-case use-case. It’s almost impossible to predict requirements years ahead, so systems tended to be over-specified and over-designed, resulting in unnecessarily inflated costs and complexity.

It is possible to use PC rack or cloud servers to provide similar functions to dedicated audio processors, but their data throughput is inherently slower due to the buffering required. Computer servers based on PC architectures are designed for generic data processing, whereas DSPs specialize in the short-loop, high-bandwidth processing found in audio algorithms such as filters and gain control.
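
The kind of inner loop DSPs are optimized for can be shown with a short Python sketch of a one-pole low-pass filter with gain. The arithmetic is essentially one multiply-accumulate per sample; a DSP runs this from near-chip memory with only a few samples of delay, whereas on a PC the same arithmetic is cheap but the samples queue in I/O buffers on either side of it:

```python
# Sketch of the short-loop, per-sample processing DSPs excel at:
# a one-pole low-pass filter followed by a gain stage.

def one_pole_lowpass(samples: list[float], alpha: float, gain: float) -> list[float]:
    out = []
    state = 0.0
    for x in samples:
        # y[n] = y[n-1] + alpha * (x[n] - y[n-1])  -- one multiply-accumulate
        state += alpha * (x - state)
        out.append(gain * state)
    return out

# Smooth a noisy step input with a gentle filter and unity gain.
print(one_pole_lowpass([0.0, 1.0, 1.0, 1.0], alpha=0.5, gain=1.0))
```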

Generic PCs Compromise Speed

PC architectures are designed to facilitate many tasks, and use round-robin task switching to achieve the appearance of parallel processing. Generic operating systems provide subsystems that interface to computer screens, keyboards, and network interface devices; these are all relatively slow and can cause blocking within the processor, especially if audio and video samples are being stored on disk drives.

To overcome these issues, PC architectures adopt buffering strategies. Each slow device stores its data in a memory buffer, which is read in when the operating system allows. Programmers have little or no control over these processes, so data input and output take place at the discretion of the operating system. Although data is processed in real-time, throughput is delayed and can be greatly compromised.
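
A back-of-envelope sketch shows how quickly these buffers add up. The stage names and sizes below are illustrative assumptions, not figures for any particular operating system:

```python
# Each buffer of N audio frames adds N / sample_rate seconds of latency,
# and the OS may chain several such buffers between the NIC and the
# application. Stage names and sizes here are purely illustrative.

def buffer_delay_ms(frames: int, sample_rate: int) -> float:
    return 1000.0 * frames / sample_rate

stages = {"NIC ring": 256, "OS mixer": 512, "app buffer": 1024}
total = sum(buffer_delay_ms(n, 48_000) for n in stages.values())
print(f"total buffering delay: {total:.1f} ms")  # ~37.3 ms at 48 kHz
```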

Diagram 2 – The PC’s network interface card causes IP packet jitter due to its use of buffers.

Dedicated hardware systems such as BLADEs have hardware specifically designed to process audio with bespoke operating systems dedicated to maintaining fast data processing and throughput.

Auto Back-Up

Each BLADE uses a common protocol to detect similar devices on a network, so the possibility of IP-ghosting is greatly reduced and multi-studio configurations can be easily established. Sound consoles detect BLADEs within a network and determine their configuration. For example, the BLADE in studio one might have microphones connected, while the BLADE in studio two has MADI interfaces.

Localized configuration databases in each BLADE store not only their own configurations, but also those of other compliant devices on the network. If a new BLADE is connected to the network, the other devices will recognize it, send it their own databases, and request its configuration in return. Using distributed computing in this way means there is no single point of failure for the configuration database, and the system automatically backs itself up.
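
The idea can be sketched as follows. The join logic is hypothetical and not Wheatstone's actual protocol, but it shows how full replication removes the single point of failure:

```python
# Sketch of a fully replicated configuration database (hypothetical logic).
# Every node mirrors every node's configuration; a newcomer announces
# itself, receives the existing databases, and is recorded by its peers.

class Node:
    def __init__(self, name: str, config: dict):
        self.name = name
        self.db = {name: config}  # each node holds the full database

    def join(self, network: list["Node"]) -> None:
        for peer in network:
            peer.db.update(self.db)  # peers learn the newcomer's config
            self.db.update(peer.db)  # newcomer learns everyone else's
        network.append(self)

# With three nodes joined, each holds all three configurations, so any
# single node can fail without losing the database.
net: list[Node] = []
for name in ("studio-1", "studio-2", "studio-3"):
    Node(name, {"inputs": "mic" if name == "studio-1" else "MADI"}).join(net)
print(sorted(net[0].db))  # ['studio-1', 'studio-2', 'studio-3']
```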

Building an IP network is only part of making a system work reliably. Interoperability plays a huge role in integration, and anybody installing a system should consider this from the outset. Dedicated vendor-specific solutions solve many of these problems and continue to build on the major benefits of scalability and redundancy in IP systems.
