Understanding the basics of IP Networking, Part 2

In this look at the potential use of IT solutions in broadcast applications, John Watkinson turns to key issues of bandwidth, latency and compression.

IT packet switches are superficially like broadcast routers in that stuff comes in and stuff goes out, but that's as far as it goes. Broadcast routers know something about broadcast signals; IT switches wouldn't know a broadcast signal from a hole in the ground. IT-based networks have to deliver data of all kinds, and they do that by being totally agnostic about what the data represent. That is a key reason IT equipment is less expensive than broadcast equipment: it comes down to market size and economies of scale. The broadcast market is not the driver for the way IT equipment is built. It never has been and never will be. IT equipment is what it is, and if we intend to use it in broadcast applications we have to take it as we find it and discover ways to work around its peculiarities.

That’s not necessarily a bad thing. The IT market decided that the majority of hard drives would be of a certain physical size. Then someone had the bright idea of assembling them into arrays which offered distinct advantages.

Broadcasters follow, not lead, the IT industry.

IT networks require the data to be fitted into packets of a fixed size, so from a content standpoint they all look the same to the network. That way, anybody's packet can follow anybody else's packet down a cable and, provided the packets are labelled and numbered, people only get the packets they were expecting and nothing else.
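To make the idea concrete, here is a minimal sketch in Python; the names `Packet`, `packetize` and `demux` are invented for illustration. A byte stream is cut into fixed-size payloads, each carrying a stream label and a sequence number, and a receiver keeps only the packets bearing the label it expects.

```python
from dataclasses import dataclass

PAYLOAD_SIZE = 188  # fixed payload size, an assumption (MPEG-TS happens to use 188-byte packets)

@dataclass
class Packet:
    stream_id: int   # the label: which stream this packet belongs to
    seq: int         # sequence number, so the receiver can spot loss or reordering
    payload: bytes   # always PAYLOAD_SIZE bytes, padded if necessary

def packetize(stream_id: int, data: bytes) -> list[Packet]:
    """Cut a byte stream into fixed-size, labelled, numbered packets."""
    packets = []
    for seq, off in enumerate(range(0, len(data), PAYLOAD_SIZE)):
        chunk = data[off:off + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        packets.append(Packet(stream_id, seq, chunk))
    return packets

def demux(wire: list[Packet], wanted_id: int) -> bytes:
    """Keep only the packets we were expecting and nothing else."""
    mine = sorted((p for p in wire if p.stream_id == wanted_id), key=lambda p: p.seq)
    return b"".join(p.payload for p in mine)

# Anybody's packet can follow anybody else's down the same cable:
wire = packetize(1, b"audio for studio A") + packetize(2, b"audio for studio B")
print(demux(wire, 1)[:18])  # b'audio for studio A'
```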

Any given link can only send one packet at a time, so packets belonging to everyone else have to wait. Packet multiplexing, which is by definition subject to interruptions, is inconsistent with the constant bit rate required by digital audio and can only be made to work using buffer memory. The buffer at the sending end fills up, and the buffer at the receiving end keeps outputting data whilst somebody else's packets are being sent. This only works if both buffers are kept about half full, so they can tolerate the greatest swing of packet arrival rate in either direction. The bigger the buffers, the more irregularity can be absorbed before death by egg timer occurs. But the presence of the buffers causes delay, or latency. It's a real swings-and-roundabouts issue: where the lowest latency is required, small packets and small buffers are needed.
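A toy simulation, in arbitrary units and with an invented `ticks_until_underrun` function, shows why the receiving buffer starts half full and why bigger buffers buy tolerance at the price of latency:

```python
import random

def ticks_until_underrun(capacity: int, seed: int = 7) -> int:
    """Constant-rate output fed by bursty packet arrivals.

    The buffer starts half full so it can absorb both bursts and gaps;
    its latency is roughly capacity/2 ticks. Returns the tick at which
    it underruns, or -1 if it survives the whole simulated window.
    """
    random.seed(seed)
    level = capacity // 2
    for tick in range(100_000):
        # On average one 8-unit packet arrives every 8 ticks, but the
        # timing is random: other people's packets are using the link.
        if random.random() < 1 / 8:
            level = min(capacity, level + 8)   # a full buffer would drop data
        level -= 1                             # the output drains steadily
        if level < 0:
            return tick                        # death by egg timer
    return -1

for cap in (8, 16, 64, 256):
    print(f"capacity {cap:3d}: underrun at tick {ticks_until_underrun(cap)}")
```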

Audio data differ from generic data in that audio data need to be reproduced with a precisely defined time axis. If the sampling rate is wrong, the audio pitch changes. If the sampling clock jitters, the quality deteriorates. IT knows nothing about this. If IT equipment is used to deliver audio data, the next question has to be: how is the correct sampling-rate clock to be recreated at the destination? MPEG Transport Streams have that technology: they can recreate a remote clock using Program Clock Reference (PCR) signals. Unless the audio data are transferred in non-real time to a storage device, something like that is required in an audio network.
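The principle can be sketched as follows, with simplified PCR-style stamps; a real PCR is a 42-bit field, and a real receiver steers a voltage-controlled oscillator through a low-pass filter rather than computing one ratio. The sender stamps packets with snapshots of its 27 MHz clock, and the receiver compares elapsed stamp counts with its own elapsed time.

```python
def estimate_sender_rate(pcr_values, arrival_times):
    """Estimate the sender's clock rate from PCR-style stamps.

    pcr_values    : snapshots of the sender's 27 MHz counter
    arrival_times : receiver's local clock (seconds) at each arrival

    Elapsed sender ticks divided by elapsed local time gives the
    sender's rate as seen here; a real receiver low-pass filters this
    (network jitter!) and trims a local oscillator to match.
    """
    dticks = pcr_values[-1] - pcr_values[0]
    dt = arrival_times[-1] - arrival_times[0]
    return dticks / dt

# Toy numbers: a sender whose clock runs 30 ppm fast.
true_rate = 27_000_000 * (1 + 30e-6)
arrivals = [i * 0.1 for i in range(11)]            # a stamp every 100 ms
pcrs = [round(t * true_rate) for t in arrivals]
print(f"estimated sender clock: {estimate_sender_rate(pcrs, arrivals):,.1f} Hz")
```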

Multiplexing allows several signals to share a single data stream and then be properly separated at the destination.

Another vital point to grasp is that a multiplexed data stream has finite bandwidth. Even if the multiplexing is ideal, the bandwidth of the data stream is reduced by the need to send addresses, labels and error-checking codes. The bandwidth that is left has to be shared between the different people hoping to send data. In that sense it resembles a freeway. During the Super Bowl you will see no traffic at all except the odd patrol car; on a sunny weekend it will be jammed with people going to the beach. So IT networks are statistical, which means that under some circumstances they may choke.
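A back-of-envelope sketch makes the overhead tangible. The header sizes below are the standard ones for RTP/UDP/IPv4 over Ethernet; the payload packing is an assumption, loosely in the style of AES67 uncompressed audio transport.

```python
# Per-packet cost on the wire for RTP/UDP/IPv4 over Ethernet (bytes):
ETH_ON_WIRE = 14 + 4 + 8 + 12     # header + FCS + preamble + inter-frame gap
IP, UDP, RTP = 20, 8, 12
OVERHEAD = ETH_ON_WIRE + IP + UDP + RTP   # 78 bytes before any audio is sent

# Assumed payload: 1 ms of 8 channels of 24-bit, 48 kHz audio.
payload = 48 * 8 * 3              # 1152 bytes of audio per packet
efficiency = payload / (payload + OVERHEAD)
print(f"payload efficiency: {efficiency:.1%}")           # ~93.7%

# How many such streams fit a gigabit link if we refuse to run it hot?
packet_bits = (payload + OVERHEAD) * 8
stream_bps = packet_bits * 1000                          # 1000 packets/s each
print(f"streams at 75% load: {int(0.75 * 1e9 // stream_bps)}")
```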

Clearly that is not acceptable for a broadcast installation. If you go off air, that's a disaster. If production is stopped, you have people who are being paid to sit around. Steps have to be taken to make sure there is always capacity available so that those packets are never held up. If it's important enough, your network has to be completely under your control, so you can decide what information is sent through it and when, so it never chokes. That also allows the best level of security, which is another word IT doesn't understand. Another approach is Quality of Service (QoS). With QoS, all packets are not equal: packets about rusty old pick-up trucks are held up while packets in a black limo with motorcycle outriders sweep by.
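One common form of QoS, strict-priority queuing, can be sketched in a few lines; the class and method names here are invented for illustration.

```python
from collections import deque

class PriorityPort:
    """One output port with strict-priority queues (0 = limo, highest)."""

    def __init__(self, levels: int = 3):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, packet: str, priority: int) -> None:
        self.queues[priority].append(packet)

    def dequeue(self):
        # Always serve the most important non-empty queue first;
        # lower-priority traffic only moves when the limo lane is empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

port = PriorityPort()
port.enqueue("rusty pick-up #1", 2)
port.enqueue("black limo", 0)
port.enqueue("rusty pick-up #2", 2)
print([port.dequeue() for _ in range(3)])
# ['black limo', 'rusty pick-up #1', 'rusty pick-up #2']
```

Note that strict priority can starve the lower queues entirely, which is exactly the behaviour described above, for better or worse.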

Today's compression technology allows for smaller transport streams, yet enables virtually indistinguishable outputs.

The amount of bandwidth required can be reduced by using compression, but that too raises important issues. Firstly, compression works by prediction: the decoder tries to predict what some attribute of the audio waveform will look like. If something novel comes along, that prediction will fail. However, the encoder also contains a decoder, so the encoder knows exactly how the decoder will fail and can send correction data. The decoder adds the correction to its failed prediction and out pops the audio. If the prediction error is sent in its entirety, the decoded signal is identical to the original and the result is lossless.
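Here is a minimal sketch of the scheme using the simplest possible predictor, "the next sample equals the previous one"; sending the residual in full makes the decode bit-exact.

```python
def encode(samples: list[int]) -> list[int]:
    """Transmit only the prediction error (residual).

    The encoder contains the decoder's predictor, so it knows exactly
    how the decoder's guess will fail and sends the correction.
    """
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)   # how wrong the prediction was
        prev = s
    return residuals

def decode(residuals: list[int]) -> list[int]:
    """Add each correction to the failed prediction; out pops the audio."""
    prev = 0
    out = []
    for r in residuals:
        prev = prev + r
        out.append(prev)
    return out

audio = [0, 3, 7, 8, 8, 6, 2]
sent = encode(audio)
print(sent)                       # residuals are small: cheaper to code
assert decode(sent) == audio      # full residual sent => lossless, bit-exact
```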

In practice, lossless compression does not achieve a very high compression factor. Instead, not all of the prediction error is sent, so the decoded signal is not an exact replica of the original: this is lossy compression. One of the early tenets of digital audio was that generation loss could be eliminated by cloning the data. Lossy compression brings us right back to the analogue days of generation loss. Every time a signal passes through a lossy codec, it gets a little worse. This is progress?
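A toy demonstration of generation loss, modelling the lossy codec as nothing more than coarse quantisation and assuming a small level trim between passes (as routinely happens in production), shows the error creeping up each time:

```python
import math

def lossy_roundtrip(samples, step=16):
    """A crude stand-in for a lossy codec: throw away fine detail
    by quantising each sample to a coarse step."""
    return [step * round(s / step) for s in samples]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

original = [1000 * math.sin(0.1 * n) for n in range(1000)]
signal = original
for generation in range(1, 6):
    gain = 1 + 0.02 * generation          # some processing between passes,
    signal = [s * gain for s in signal]   # as in real production (level trims)
    signal = lossy_roundtrip(signal)      # requantisation lands on new values
    signal = [s / gain for s in signal]   # undo the trim to compare fairly
    print(f"generation {generation}: RMS error {rms_error(original, signal):6.1f}")
```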

Another problem that rears its head with compression is that it requires extraordinarily good loudspeakers to be used for monitoring. The reason is that cheap loudspeakers act like lossy compressors in that they remove some of the information in the signal. Traditionally, the production process was performed with high-quality speakers and the result was then losslessly stored and delivered. It was not necessary to have quality speakers to check the router, because it was enough to know the signal was there. With compression in the network the signal might be impaired, and with cheap speakers no one will hear it until it is too late.

It should be obvious that the better the encoder and decoder are at predicting, the better the compression factor that can be achieved. However, prediction requires the system to look ahead, and as we lack clairvoyance software, the look-ahead has to be done by delaying the signal. It follows that high compression factors go hand in hand with high latency. Compression and real time are mutually exclusive. Another point that needs to be made is that the saving on IT router cost from using compression may well be eclipsed by the cost of all the codecs. It's important to look at the big picture.
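The arithmetic is straightforward. With illustrative, assumed figures for frame size and look-ahead at a 48 kHz sampling rate:

```python
SAMPLE_RATE = 48_000  # Hz

def algorithmic_delay_ms(frame_samples: int, lookahead_frames: int) -> float:
    """Delay contributed by framing plus look-ahead alone, before any
    network or buffering delay is added on top."""
    return 1000 * frame_samples * (1 + lookahead_frames) / SAMPLE_RATE

# Assumed, illustrative figures: a low-latency codec with tiny frames and
# no look-ahead versus a high-compression codec that looks far ahead.
print(f"low latency : {algorithmic_delay_ms(120, 0):6.1f} ms")   #   2.5 ms
print(f"high ratio  : {algorithmic_delay_ms(2048, 3):6.1f} ms")  # 170.7 ms
```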

IT equipment may be less expensive than a single-purpose broadcast solution, but it's a mixed blessing, and there is no single solution to everyone's problems. Factors affecting the choice include the physical size of the network, the signal-damage risks it runs, and the probability of failure that is acceptable. Audio network designers also need to consider the level of security required. Is real-time operation needed, and if not, how much latency is acceptable? Finally, what sound quality is required, and does a sampling clock need to be reconstructed remotely? These are some of the key questions that need to be answered well before purchase decisions are made.
