Understanding the basics of IP Networking, Part 1

Like everything else, audio equipment is increasingly adopting IT solutions in the hope of achieving some combination of economy, resilience, flexibility and sound quality. John Watkinson argues that the best way to obtain the desired result is to make informed decisions based on an understanding of the technology.

Let’s go right back to the meaning of words. A net is used to catch fish and it has the useful characteristic that if one of the strands gets cut, the whole thing doesn’t fall apart. Before fish were available in supermarkets, that kind of resilience was important to survival. The term redundant is also used to describe things that keep going if something goes wrong. All modern cars have dual braking systems and airliners have several engines. A lot of routers used in TV stations have dual power supplies; computers have battery back-up and so on.

In today’s media facilities IP routers are as common as patch bays used to be, but much more versatile.

The whole of IT has its roots in the military which then spun off commercial and consumer products, typically losing elements of security along the way and adding things like windows and gates that were invariably left wide open. Not surprisingly, computer networks also have military roots. The goal was to create a communication system that was resilient enough to withstand the odd mushroom cloud. Only a genuine network with physical redundancy could meet that requirement. In the case of damage, the information could be re-routed and still arrive at its destination. The military refer to information as intelligence, even though we are not fooled, and this sloppy use of terminology has also spun off into commerce. As a result the meaning of the word “network” is no longer clear.

A lot of people casually refer to a network as something that allows anything to communicate with anything else. Although that is obviously a feature of networks, it does not define a network. Take, for example, the AES/EBU router, its predecessor the analog router or even the more humble patch bay. Those clearly allow anything to be connected to anything else, but they are not networks, because the essential resilience, the elimination of the single point of failure, is not there.

Vampire taps are no longer used as they are unreliable and today we have better connectivity solutions.

The original Ethernet used a single cable that could have taps connected to it essentially anywhere a node or port was needed. These were often called vampire taps because they had a sharp point that would penetrate the yellow 10BASE5 cable to make the connection. Although Ethernet behaves logically like a network, in that any port can communicate with any other port, it is not physically a network as it has no resilience.

Probably the first place to start when considering a network is to determine how much resilience is needed. Generally speaking, the larger the area involved, the more likely something is to get broken and the more resilience needs to be built in. Another factor is how critical the network is. If it is used for offline production, it is not as critical as a playout control room, where a failure could put a broadcaster off the air.

That introduces the subject of topology, which is a sixty-four-dollar word for how stuff is joined up. Now the buzzwords start coming thick and fast. The only topology that will withstand the odd mushroom cloud, or more mundane threats like backhoes, is the mesh. Clearly this is just another word for a net, of the fishing variety, that has lots of interconnections and lots of (or at least more than one) different ways of getting from Joe to Moe, A to B or from the Pentagon to the Aberdeen Proving Ground, as the case may be. Everything else has some of the features of a network, but lacks the bomb-proof bit.

This simple Ethernet bus connects all devices via a single conductor. It is not a reliable method of connectivity.

A bus, from the Latin omnibus, is a system where the devices tap into a conductor so that they are all electrically in parallel. The first Ethernet was a bus system. An electronic fault in a node could drag down the whole system. The daisy-chain system is one in which every device has two connectors and the signal goes through one device to get to another. A fault in a node could split the chain into two halves. Buses and daisy chains are prone to single-point failures. Some allow the ends of the bus or chain to be joined to make a ring, which can survive some failures.

A star or radial system is basically similar to the traditional broadcast router. Every device is connected to a central unit known as a hub. If the hub goes down, you are toast. Some devices have more than two connections so they can be used to form tree structures. A tree in which some of the branches are joined up is called a spanning tree.
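The difference between these topologies can be made concrete with a small sketch. The code below is purely illustrative: it models each topology as a list of links and tests whether the nodes stay connected when any single link is cut, which is the resilience the article is describing.

```python
from collections import deque

def connected(nodes, links):
    """Breadth-first search: True if every node is reachable from the first."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == set(nodes)

def survives_any_single_cut(nodes, links):
    """A topology is resilient if it stays connected after any one link fails."""
    return all(connected(nodes, links[:i] + links[i + 1:])
               for i in range(len(links)))

nodes = ["A", "B", "C", "D"]
daisy_chain = [("A", "B"), ("B", "C"), ("C", "D")]
ring = daisy_chain + [("D", "A")]   # join the ends of the chain

print(survives_any_single_cut(nodes, daisy_chain))  # False
print(survives_any_single_cut(nodes, ring))         # True
```

Joining the two ends of the daisy chain into a ring is enough to survive any single cut; a mesh simply adds more such alternative paths.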

When building IT networks, avoid single points of failure such as this hub.

Clearly networks that are based on IT are digital, so an audio network is going to be transmitting digital audio. All binary data look the same. Print out some binary data and all you get is a ton of ones and zeros. You have no clue whether those data are pixels that depict Paris Hilton taking a bath or Vladimir Putin’s shopping list: half a dozen eggs, some dog food, a liter of milk and a regiment of tanks; that sort of everyday stuff.

Audio data are also a ton of ones and zeros, so we have an immediate problem: we need to identify what our data are so we don’t mix up text, images and audio. That is the purpose of metadata: data about data. Audio data differ from generic data in that audio samples are only meaningful if they are presented to the listener in an unbroken sequence at a correct and stable sampling rate. That’s the definition of streaming. Egg timers are not allowed.
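To see what "data about data" means in practice, here is a toy framing scheme, invented for illustration and not any real standard: each message carries a small metadata header describing the payload, so a receiver can tell audio bytes from text or pixels.

```python
import json
import struct

def frame(payload, kind, sample_rate=None):
    """Prefix a payload with a length-delimited JSON metadata header."""
    meta = {"kind": kind, "length": len(payload)}
    if sample_rate is not None:
        meta["sample_rate"] = sample_rate   # only meaningful for audio
    header = json.dumps(meta).encode()
    # 2-byte big-endian header length, then the header, then the payload.
    return struct.pack(">H", len(header)) + header + payload

def parse(message):
    """Recover the metadata and the payload from a framed message."""
    (hlen,) = struct.unpack(">H", message[:2])
    meta = json.loads(message[2:2 + hlen])
    payload = message[2 + hlen:2 + hlen + meta["length"]]
    return meta, payload

msg = frame(b"\x00\x01\x02\x03", kind="audio/pcm", sample_rate=48000)
meta, payload = parse(msg)
print(meta["kind"], meta["sample_rate"])  # audio/pcm 48000
```

Without the header, the four payload bytes could be anything; with it, the receiver knows they are PCM audio and at what rate to play them.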

This requirement introduces more buzz words such as latency. Latency is the delay a system causes to the information. Clearly in audio the requirement for constant sampling rate means that whatever the overall latency is, it had better be constant. Where the latency is very small, a system is described as working in real time. On a strict interpretation, the only real time audio we have is analog. All analog-to-digital convertors and digital-to-analog convertors cause delay, primarily because practically all modern units use oversampling and the necessary signal processing takes time. In live sound applications latency matters a great deal. Fortunately convertor delay isn’t great, so provided we don’t do anything dumb we can make digital live sound systems. In other applications latency may matter less or not at all.
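How a system turns variable delay into the constant latency audio needs can be sketched in a few lines. The numbers below are hypothetical: packets arrive after varying network delays, but the receiver holds each one in a buffer until a fixed playout deadline, so every packet experiences the same total latency.

```python
PLAYOUT_DELAY = 50  # ms; must exceed the worst-case network delay

# (send_ms, arrive_ms) pairs: the network delay varies from packet to packet.
arrivals = [(0, 12), (20, 31), (40, 48), (60, 95)]

for sent, arrived in arrivals:
    playout = sent + PLAYOUT_DELAY   # fixed deadline relative to sending
    wait = playout - arrived         # time the packet sits in the buffer
    print(f"sent {sent:3} ms, arrived {arrived:3} ms, "
          f"buffered {wait:2} ms, played at {playout:3} ms")
```

Every packet is played exactly 50 ms after it was sent, whatever the network did to it on the way; the price of that constancy is the added latency of the buffer.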

All networks are shared resources. Many users want to ship stuff from place to place. Clearly any particular signal path can only send one bit at a time. The solution is multiplexing. User data are assembled into packets that always have the same size so the actual transport mechanism of the data stream doesn’t care who a packet belongs to. Multiple users are handled by sending their packets in turn. This brings about a requirement for buffering, which causes further latency. We can talk about that in part two.
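The packet-in-turn idea above can be sketched as a round-robin scheduler. This is a simplified illustration, not a model of any particular protocol: each user's byte stream is chopped into equal-sized cells, and the shared link carries one cell from each user in turn.

```python
CELL = 4  # bytes per packet (tiny, purely for illustration)

def packetize(user, data):
    """Chop a user's byte stream into fixed-size cells tagged with the owner."""
    return [(user, data[i:i + CELL]) for i in range(0, len(data), CELL)]

streams = {
    "audio": packetize("audio", b"AAAAAAAAAAAA"),  # 12 bytes -> 3 cells
    "text":  packetize("text",  b"TTTTTTTT"),      # 8 bytes  -> 2 cells
}

# Round-robin multiplexer: take one packet from each stream in turn.
wire = []
while any(streams.values()):
    for user, queue in streams.items():
        if queue:
            wire.append(queue.pop(0))

print([user for user, _ in wire])
# ['audio', 'text', 'audio', 'text', 'audio']
```

Because every cell is the same size, the transport does not need to know or care whose data it is carrying; the demultiplexer at the far end simply sorts the cells back into per-user queues by their tag.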

Part two in this series will be published in January 2015.

Comments:

Hi,
Concerning: “Clearly in audio the requirement for constant sampling rate means that whatever the overall latency is, it had better be constant”
The latency in a mesh network can hardly ever be constant, because one cannot know in advance which path the signals will take. Modern systems measure latency and compensate by buffering signals to ensure that they arrive at the same time. If this buffering system is considered part of the network, then it could be said to have constant latency, but to the best of my knowledge the buffering used to compensate for variable latency is not always part of the network infrastructure.

December 27th 2014 @ 12:47 by Christopher Walker

Great perspective and a great article. Thank you, Broadcast Bridge, for getting John Watkinson to write for you.

March 18th 2015 @ 11:27 by Josh Gordon
