Broadcast and media systems are moving to embrace an IP infrastructure.
Like everything else, audio equipment is increasingly adopting IT solutions in the hope of achieving some combination of economy, resilience, flexibility and sound quality. John Watkinson argues that the best way to obtain the desired result is to make informed decisions based on an understanding of the technology.
Let’s go right back to the meaning of words. A net is used to catch fish and it has the useful characteristic that if one of the strands gets cut, the whole thing doesn’t fall apart. Before fish were available in supermarkets, that kind of resilience was important to survival. The term redundant is also used to describe things that keep going if something goes wrong. All modern cars have dual braking systems and airliners have several engines. A lot of routers used in TV stations have dual power supplies; computers have battery back-up and so on.
In today’s media facilities IP routers are as common as patch bays used to be, but much more versatile.
The whole of IT has its roots in the military, which then spun off commercial and consumer products, typically losing elements of security along the way and adding things like windows and gates that were invariably left wide open. Not surprisingly, computer networks also have military roots. The goal was to create a communication system that was resilient enough to withstand the odd mushroom cloud. Only a genuine network with physical redundancy could meet that requirement. In the case of damage, the information could be re-routed and still arrive at its destination. The military refer to information as intelligence, even though we are not fooled, and this sloppy use of terminology has also spun off into commerce. As a result the meaning of the word “network” is no longer clear.
A lot of people casually refer to a network as something that allows anything to communicate with anything else. Although that is obviously a feature of networks, it does not define a network. Take, for example, the AES/EBU router, its predecessor the analog router or even the more humble patch bay. Those clearly allow anything to be connected to anything else, but they are not networks because the essential resilience, the elimination of the single point of failure, is not there.
Vampire taps are no longer used as they are unreliable and today we have better connectivity solutions.
The original Ethernet used a single cable that could have taps connected to it essentially anywhere a node or port was needed. These were often called vampire taps because they had a sharp point that would penetrate the yellow 10BASE5 cable to make the connection. Although Ethernet behaves logically like a network, in that any port can communicate with any other port, it is not physically a network as it has no resilience.
Probably the first place to start when considering a network is to determine how much resilience is needed. Generally speaking, the larger the area involved, the more likely something is to get broken and the more resilience needs to be built in. Another factor is how critical the network is. If it is used for offline production, it’s not as critical as a playout control room, where a failure could put a broadcaster off the air.
That introduces the subject of topology, which is a sixty-four-dollar word for how stuff is joined up. Now the buzzwords start coming thick and fast. The only topology that will withstand the odd mushroom cloud, or more mundane threats like backhoes, is the mesh. Clearly this is just another word for a net, of the fishing variety, that has lots of interconnections and lots of (or at least more than one) different ways of getting from Joe to Moe, A to B or from the Pentagon to the Aberdeen Proving Grounds, as the case may be. Everything else has some of the features of a network, but lacks the bomb-proof bit.
This simple Ethernet bus connects all devices via a single conductor. It is not a reliable method of connectivity.
A bus, from the Latin omnibus, is a system where the devices tap in to a conductor so that they are all electrically in parallel. The first Ethernet was a bus system. An electronic fault in a node could drag down the whole system. The daisy-chain system is one in which every device has two connectors and the signal goes through one device to get to another. A fault in a node could split the chain into two halves. Buses and daisy chains are prone to single-point failures. Some allow the ends of the bus or chain to be joined to make a ring, which can survive some failures.
A star or radial system is basically similar to the traditional broadcast router. Every device is connected to a central unit known as a hub. If the hub goes down, you are toast. Some devices have more than two connections so they can be used to form tree structures. A tree in which some of the branches are joined up is called a spanning tree.
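The trade-off between these topologies can be sketched in a few lines of code. The graphs below are toy examples (the node names and link choices are illustrative, not from the article): a breadth-first search checks whether all surviving nodes can still reach one another after a failure. The star dies with its hub; the mesh routes around a lost node.

```python
from collections import defaultdict, deque

def connected(edges, nodes, removed=frozenset()):
    """BFS connectivity check over an undirected graph, skipping removed nodes."""
    adj = defaultdict(set)
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].add(b)
            adj[b].add(a)
    live = [n for n in nodes if n not in removed]
    if not live:
        return True
    seen, queue = {live[0]}, deque([live[0]])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return len(seen) == len(live)

nodes = ["A", "B", "C", "D"]
star = [("hub", n) for n in nodes]                              # every device hangs off one hub
mesh = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]  # redundant paths

print(connected(star, nodes + ["hub"], removed={"hub"}))  # False: hub down, star dies
print(connected(mesh, nodes, removed={"C"}))              # True: mesh reroutes around C
```

The mesh survives because more than one path exists between most pairs of nodes, which is exactly the bomb-proof property the text describes.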
When building IT networks, avoid single points of failure such as this hub.
Clearly networks that are based on IT are digital, so an audio network is going to be transmitting digital audio. All binary data look the same. Print out some binary data and all you get is a ton of ones and zeros. You have no clue as to whether these data are pixels that depict Paris Hilton taking a bath or Vladimir Putin’s shopping list: half a dozen eggs, some dog food, a liter of milk, and a regiment of tanks; that sort of everyday stuff.
Audio data are also a ton of ones and zeros, so we have an immediate problem: we need to identify what our data are so we don’t mix up text, images and audio. That is the purpose of metadata: data about data. Audio data differ from generic data in that audio samples are only meaningful if they are presented in an unbroken sequence to the listener at a correct and stable sampling rate. That’s the definition of streaming. Egg timers are not allowed.
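A minimal sketch shows how metadata turns anonymous bits into identifiable audio. The header layout here is purely hypothetical (the magic value, field names and sizes are invented for illustration, not drawn from any real standard): a four-byte tag identifies the payload, followed by a sample-rate code and a channel count.

```python
import struct

# Hypothetical 8-byte header: 4-byte magic identifying the payload as audio,
# a 2-byte sample-rate code (kHz) and a 2-byte channel count. The layout is
# illustrative only, not any real broadcast standard.
MAGIC_AUDIO = b"AUD0"

def wrap(payload, rate_khz, channels):
    """Prepend metadata so a receiver knows what the ones and zeros mean."""
    return struct.pack(">4sHH", MAGIC_AUDIO, rate_khz, channels) + payload

def unwrap(packet):
    """Reject anything that isn't tagged as audio, then return the fields."""
    magic, rate_khz, channels = struct.unpack(">4sHH", packet[:8])
    if magic != MAGIC_AUDIO:
        raise ValueError("not audio data")
    return rate_khz, channels, packet[8:]

rate, ch, samples = unwrap(wrap(b"\x00\x01" * 4, 48, 2))
```

Without the header, the receiver has no way to tell these bytes from a fragment of text or an image.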
This requirement introduces more buzzwords, such as latency. Latency is the delay a system causes to the information. Clearly in audio the requirement for constant sampling rate means that whatever the overall latency is, it had better be constant. Where the latency is very small, a system is described as working in real time. On a strict interpretation, the only real-time audio we have is analog. All analog-to-digital convertors and digital-to-analog convertors cause delay, primarily because practically all modern units use oversampling and the necessary signal processing takes time. In live sound applications latency matters a great deal. Fortunately convertor delay isn’t great, so provided we don’t do anything dumb we can make digital live sound systems. In other applications latency may matter less or not at all.
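The signal-processing delay mentioned above can be put in numbers. A linear-phase FIR filter, the kind commonly used in oversampling convertors, delays the signal by (N − 1)/2 samples, where N is the number of filter taps. The tap counts below are illustrative, not measurements from any real convertor.

```python
# Group delay of a symmetric (linear-phase) FIR filter is (N - 1) / 2 samples.
# Tap counts here are illustrative, not figures from any real converter.
def fir_delay_ms(taps, fs_hz):
    """Filter group delay in milliseconds at sampling rate fs_hz."""
    return (taps - 1) / 2 / fs_hz * 1000.0

adc = fir_delay_ms(64, 48_000)   # hypothetical 64-tap decimation filter at 48 kHz
dac = fir_delay_ms(64, 48_000)   # matching interpolation filter on the way out
round_trip = adc + dac           # conversion delay alone, before any network latency

print(round(round_trip, 2))      # about 1.31 ms
```

At roughly a millisecond, conversion delay is small enough for live sound, which is why the text says digital live systems are workable provided nothing dumb is added on top.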
All networks are shared resources. Many users want to ship stuff from place to place. Clearly any particular signal path can only send one bit at a time. The solution is multiplexing. User data are assembled into packets that always have the same size so the actual transport mechanism of the data stream doesn’t care who a packet belongs to. Multiple users are handled by sending their packets in turn. This brings about a requirement for buffering, which causes further latency. We can talk about that in part two.
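The take-packets-in-turn idea is round-robin scheduling, and it can be sketched directly. The user names and buffered packets below are invented for illustration; each user's packets sit in a buffer, and the multiplexer drains one packet per user per pass, interleaving everyone's traffic onto the shared link.

```python
from collections import deque

def round_robin_mux(queues):
    """Take one fixed-size packet per user in turn until all buffers drain."""
    wire = []
    while any(queues.values()):
        for user, buffer in queues.items():
            if buffer:
                wire.append((user, buffer.popleft()))
    return wire

# Hypothetical per-user buffers awaiting transmission.
buffers = {
    "audio": deque(["a1", "a2", "a3"]),
    "video": deque(["v1"]),
    "text":  deque(["t1", "t2"]),
}

for user, packet in round_robin_mux(buffers):
    print(user, packet)   # interleaved order: a1, v1, t1, a2, t2, a3
```

Because packets from different users are interleaved, each receiver must buffer enough of them to reassemble an unbroken stream, and that buffering is the extra latency the next part will cover.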
Part two in this series will be published in January, 2015.