Understanding IP Broadcast Production Networks: Part 1 - Basic Principles Of IP

This is the first of a series of 14 articles that re-visit the basic building blocks of knowledge required to understand how IP networks actually work in the context of broadcast production systems.

Much of the content in this series was originally published by The Broadcast Bridge way back in 2016. Back then IP for broadcast production was new and everybody was evaluating its value and its practicality. A mere seven years on and IP is everywhere. The versatility and scalability it brings has moved it into the mainstream and established it as the central nervous system of broadcast. Most new facilities are IP native, many broadcasters are running hybrid SDI-IP systems, and there are very few engineers for whom IP is not a key part of daily life.

The Broadcast Bridge has published hundreds of articles on IP and continues to push the boundaries, helping broadcasters evaluate the next generation of infrastructure - where IP is enabling cloud and microservices to create the next step change in our technology and workflow. Throughout the evolution of IP for broadcast this series of articles has continued to draw significant traffic from search engines and our own site search. This series has become a reference work for engineers and operators who need to understand the fundamental principles and technology of IP.

The content has been updated and edited to reflect the changing times. We hope it continues to serve the industry well.

The Basic Principles Of IP

Network timing requirements, differing working practices and protocols, and data integrity can all hinder communication between broadcast and IT engineers.

Timing in broadcast is tightly defined and requires a thorough understanding of legacy television systems. IT engineers use asynchronous, full duplex systems, expect network failure, and use protocols that effectively slow transmission to make sure data has been accurately delivered. Broadcasters use synchronous, unidirectional connectivity and assume the network is as robust and reliable as SDI.

In this series, we look at networks from a broadcast engineer’s point of view, giving broadcast engineers a better understanding of core IT concepts and enabling them to communicate with colleagues in the “IT department”.

To fully understand IT networks, we must understand the problem we are trying to solve: a network is needed to allow users to exchange data predictably, reliably and securely, and to allow one computer to control another. This is true of PCs, servers, IP cameras, production switchers and control panels, and the more secure and reliable a system is, the more complex it becomes.

A network must be resilient, fast, and reliable to give the best user experience. To explain the roles of routers and switches we start with a basic network of four PCs and two servers connected in a simple IP over Ethernet network using CAT5.

Ethernet has three forms of physical interface: coaxial, twisted pair and fiber optic. All three carry the same type of packets of data but differ in their duplex: twisted pair and fiber optic provide separate transmit and receive paths, so they can send and receive data at the same time, whereas coaxial is a shared medium and cannot. Fiber optic supports the highest transmission speeds.

Few computers use coaxial connectivity as twisted pair is cheaper and more robust. Fiber optic tends to be reserved for high bandwidth switch and router connection due to its higher cost and fragility.

Figure 1 - SIMPLE HUB NETWORK – A datagram sent from C1 to S2 will be re-sent to all computers and servers on the network potentially causing security and congestion issues.

A hub with twisted pair infrastructure (CAT5) could be used in a simple network. The hub is like a distribution amplifier, mapping one-to-many transmit and receive pairs. The hub has no intelligence and will repeat a packet received on one port to all of its other ports.

In a hub network, security becomes a problem as all users would be able to see data being exchanged between each other’s computers and servers. For example, all users would receive transactions associated with the finance server.

Computer network cards receive all datagrams on the connected network and will usually discard those not intended for them. With the right software, it’s easy to decode the datagram and view restricted and sensitive financial transactions. This is true of all the systems running on any of the servers.
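The flooding behaviour described above can be sketched in a few lines of Python. This is a minimal illustrative model, not a real networking API: the `Hub` and `Device` classes and the MAC addresses are invented for the example.

```python
# Minimal sketch of hub behaviour: every frame received on one port is
# repeated to all other ports, so every attached device sees all traffic.

class Device:
    def __init__(self, name, mac):
        self.name, self.mac = name, mac
        self.seen = []                          # every frame the NIC receives

    def receive(self, frame):
        self.seen.append(frame)                 # the NIC sees the frame...
        if frame["dst"] == self.mac:            # ...but only keeps payloads
            return frame["payload"]             # addressed to it
        return None                             # others are usually discarded

class Hub:
    def __init__(self):
        self.ports = []

    def attach(self, device):
        self.ports.append(device)

    def forward(self, frame, ingress):
        # No intelligence: repeat to every port except the one it came in on.
        for device in self.ports:
            if device is not ingress:
                device.receive(frame)

hub = Hub()
c1 = Device("C1", "aa:aa:aa:aa:aa:01")
c2 = Device("C2", "aa:aa:aa:aa:aa:02")
s2 = Device("S2", "aa:aa:aa:aa:aa:99")
for d in (c1, c2, s2):
    hub.attach(d)

hub.forward({"dst": s2.mac, "payload": "invoice data"}, ingress=c1)
# C2's NIC received the frame even though it was addressed to S2 - with
# the card in promiscuous mode that payload is readable.
print(len(c2.seen), len(s2.seen))  # 1 1
```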

Lost packets of data occur as network traffic increases and the physical switch and router links become saturated, and this is further exacerbated by micro-bursts of data that can overflow egress buffers. Protocols such as TCP (Transmission Control Protocol) can remedy lost packets, but do so at the expense of increased and variable latency. This is one of the reasons standards such as SMPTE ST 2110 use UDP (User Datagram Protocol): it operates as a fire-and-forget transmission system, resulting in predictable and low latency. However, when using UDP, lost packets cannot be easily recovered.
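UDP’s fire-and-forget behaviour is easy to see with Python’s standard socket library. The sketch below sends one datagram over the loopback interface; note that `sendto()` returns as soon as the datagram is handed to the network stack.

```python
# Sketch: UDP "fire-and-forget" on the loopback interface.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() returns immediately: there is no acknowledgement,
# retransmission or ordering, which is why latency is low and
# predictable - and why lost packets stay lost.
tx.sendto(b"essence packet", ("127.0.0.1", port))

data, addr = rx.recvfrom(2048)
print(data)  # b'essence packet'
tx.close()
rx.close()
```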

Figure 2 - If C1 & C2 both want to send data they would wait for the first available space on the transmission line, potentially sending at the same time and corrupting their data.

Ethernet is a packet switched system: each PC monitors the transmit line and waits for a gap so it can send its own packet. Although packets have a maximum size, the times at which they are sent are random across all the connected computers on the network. Another computer may be listening at the same time waiting for the same gap, and two or more computers could try to access the transmit pair simultaneously, resulting in a collision, packet loss, and a slow response for the user.
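The collision-and-retry behaviour can be illustrated with a toy simulation, loosely modelled on Ethernet’s random backoff. This is a simplified sketch for intuition only; real CSMA/CD timing is considerably more involved.

```python
# Toy collision/backoff simulation: two stations try to transmit in
# random time slots; picking the same slot is a collision and both
# back off into a window that doubles with every attempt.
import random

random.seed(1)

def transmit(n_stations=2, max_attempts=16):
    """Return the attempt number on which a frame finally gets through."""
    for attempt in range(1, max_attempts + 1):
        window = 2 ** min(attempt, 10)
        slots = [random.randrange(window) for _ in range(n_stations)]
        if len(set(slots)) == n_stations:   # everyone picked a different slot
            return attempt
        # same slot chosen: collision, loop round and back off again
    return max_attempts

attempts = [transmit() for _ in range(1000)]
print(sum(attempts) / len(attempts))  # average attempts per successful frame
```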

A network router or switch protects against collisions and congestion, and this is one of the reasons routers and switches are used; other reasons are to provide resilience and security. Switches forward packets at the Ethernet frame level (layer 2) and routers route packets at the Internet Protocol level (layer 3).

In the OSI seven-layer model, IP packets are encapsulated by layer 2 Ethernet frames. This might seem like an unnecessary overhead; however, the IP protocol is independent of the transmission network and abstracts the data away from the hardware limitations of Ethernet. It’s entirely possible, during the lifetime of an IP packet, that it will be routed over non-Ethernet networks such as ATM (asynchronous transfer mode) or WiFi. With IP, we need not be too concerned with the medium the data is travelling on.
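The encapsulation described above can be sketched by hand with Python’s `struct` module. The field values below are illustrative, and the IPv4 checksum is left at zero for brevity.

```python
# Sketch of layer-2 encapsulation: an IPv4 packet carried as the payload
# of an Ethernet II frame. Addresses and values are illustrative only.
import socket
import struct

def ipv4_header(src, dst, payload_len, ttl=64):
    # 20-byte IPv4 header; checksum left at zero for brevity
    return struct.pack("!BBHHHBBH4s4s",
                       (4 << 4) | 5,            # version=4, IHL=5 words
                       0,                        # DSCP/ECN
                       20 + payload_len,         # total length
                       0, 0,                     # identification, flags/frag
                       ttl, 17,                  # TTL, protocol 17 = UDP
                       0,                        # header checksum (omitted)
                       socket.inet_aton(src),
                       socket.inet_aton(dst))

def ethernet_frame(dst_mac, src_mac, ip_packet):
    # Ethernet II: dst MAC, src MAC, EtherType 0x0800 = IPv4, then payload
    return dst_mac + src_mac + struct.pack("!H", 0x0800) + ip_packet

payload = b"hello"
packet = ipv4_header("192.168.1.10", "192.168.1.99", len(payload)) + payload
frame = ethernet_frame(bytes.fromhex("aaaaaaaaaa99"),
                       bytes.fromhex("aaaaaaaaaa01"), packet)
print(len(frame))  # 14-byte Ethernet header + 20-byte IP header + 5 = 39
```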

Each Ethernet card in a PC or IP camera has its own unique hard-coded address called the media access control (MAC) address. Each camera can be configured with a unique IP address, so a faulty camera can be replaced by a new one using the same IP address. The MAC address will have changed, but the address resolution protocol (ARP) in the routers detects this and the routers reconfigure themselves.
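A toy version of this ARP behaviour might look like the following; the table, function name and addresses are invented for the illustration.

```python
# Toy ARP-style table: IP address -> MAC address. When a replacement
# camera answers for the same IP with a new MAC, the entry is refreshed,
# so upstream devices keep working without manual reconfiguration.
arp_table = {}

def arp_learn(ip, mac):
    if arp_table.get(ip) != mac:
        arp_table[ip] = mac          # new or changed binding is recorded

arp_learn("10.0.0.20", "aa:bb:cc:00:00:01")   # original camera
arp_learn("10.0.0.20", "aa:bb:cc:00:00:02")   # replacement, same IP
print(arp_table["10.0.0.20"])  # aa:bb:cc:00:00:02
```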

Managed Ethernet switches provide a better solution but have limited capability. The switch holds the MAC address of each computer connected to its ports and will send only the traffic intended for the associated computer, thus reducing network traffic on each connection. Switches are faster as there is less information to process in the Ethernet frame header compared to an IP header; for example, there is no “time to live” value to be updated. For these reasons Ethernet switches tend to be used in fixed high-speed applications such as core network switches and top-of-rack topologies.
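The difference between a hub and a switch comes down to the MAC table. A minimal sketch of a switch that learns which MAC address sits on which port (many switches learn this automatically rather than being configured), with invented names, might look like this:

```python
# Sketch of a learning Ethernet switch: the MAC table maps addresses to
# ports; known destinations are forwarded to one port only, unknown
# destinations are flooded like a hub until the address is learned.
class Switch:
    def __init__(self, n_ports):
        self.n_ports = n_ports
        self.mac_table = {}          # MAC address -> port number

    def handle(self, frame, in_port):
        self.mac_table[frame["src"]] = in_port     # learn the sender's port
        out = self.mac_table.get(frame["dst"])
        if out is not None:
            return [out]                           # unicast: one port only
        # destination unknown: flood to every port except the ingress
        return [p for p in range(self.n_ports) if p != in_port]

sw = Switch(4)
# First frame from C1 (port 0) to S2: S2 is unknown, so flood.
print(sw.handle({"src": "C1", "dst": "S2"}, in_port=0))  # [1, 2, 3]
# S2 replies from port 3: both ends are now learned.
sw.handle({"src": "S2", "dst": "C1"}, in_port=3)
print(sw.handle({"src": "C1", "dst": "S2"}, in_port=0))  # [3]
```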

Figure 3 - SWITCH OR ROUTER NETWORK – a datagram sent from C1 to S2 will only be sent to S2, improving security and data speeds.

IP addressing schemes offer greater flexibility and allow administrators to specify their own IP numbering schemes. Security is improved as routers can be configured to make sure finance transactions only go to authorized computers, and IP cameras only send their pictures to monitors and production switchers in the studio. Tools such as ping can be blocked to stop hackers detecting computers and attacking them.
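Python’s standard `ipaddress` module makes this kind of subnet separation easy to demonstrate. The addressing scheme below is illustrative only.

```python
# Sketch: an administrator-chosen scheme with separate subnets for
# finance and production traffic. A router ACL can enforce the split.
import ipaddress

finance = ipaddress.ip_network("192.168.10.0/24")
studio = ipaddress.ip_network("192.168.20.0/24")

camera = ipaddress.ip_address("192.168.20.31")
print(camera in studio)    # True  - the camera sits in the studio subnet
print(camera in finance)   # False - it has no business on the finance one
```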

Automatic routing algorithms provide resilience by detecting a broken link and sending the data via a different route. Multi-path links between studios and outside broadcasts can use different types of media, such as fiber optic and satellite. Users are unaware that routers have switched to a different path when a link breaks.
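Failover to an alternative path can be sketched as a preference-ordered route list; the route and link names are invented for the example.

```python
# Toy failover: routes to a destination are tried in preference order;
# if the primary link is marked down, traffic moves to the backup path
# without the user doing anything.
routes = {
    "studio-B": [
        {"via": "fiber-link", "up": True},       # preferred path
        {"via": "satellite-link", "up": True},   # backup path
    ],
}

def next_hop(dest):
    for route in routes[dest]:
        if route["up"]:
            return route["via"]
    raise RuntimeError("destination unreachable")

print(next_hop("studio-B"))          # fiber-link
routes["studio-B"][0]["up"] = False  # the fiber is cut
print(next_hop("studio-B"))          # satellite-link
```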

Even in a simple network, routers and switches improve network speeds and security, and routers become essential when resilience is needed.
