Understanding IP Broadcast Production Networks: Part 9 - Ethernet

How Ethernet has evolved to combat congestion and how speeds have increased through the decades.

In the 1970s Ethernet was in its infancy, competing with two other proprietary networking systems: token ring and token bus. These systems were backed by heavy industry, and the concept of each employee having a computer on their desk wasn’t yet considered.

Token ring allowed computers to be connected to each other with coaxial cable in a ring topology. An empty packet of data held the electronic token and was passed between each computer, and only when a computer received the token could it send data. Various timeout protocols existed to stop a computer from hogging the token.

Token bus uses the same token-passing protocol as token ring; the two differ only in that token bus can have an open-ended network connection, whereas token ring must have the LAN connected in a continuous loop. With token ring, each computer on the network must know the addresses of the previous and next computers within the LAN to allow it to pass the token.

Token systems could be unreliable, especially if a computer failed or was switched off. Adding extra computers to the network was difficult, as it stopped all computers on the LAN from communicating. Furthermore, they were highly inefficient, as each computer had to receive, process, and retransmit the empty token packet even when it had no data to send.

Ethernet was adopted by the IEEE in 1980 and given the project number 802. Although Ethernet originally used coax as its LAN cable, it soon moved to twisted pair, providing a cheaper, more flexible alternative with full-duplex operation. Coax could either send or receive data but could not do both at the same time. Hardware buffers within the Network Interface Card (NIC) allowed the computer to preload memory at CPU speeds; the NIC would then send the data at Ethernet line speeds, making the whole operation significantly faster.

Ethernet was originally developed with a shared bus system in mind. All computers on the LAN would be either listening, sending, or receiving data. To send data, a computer would detect other transmissions, and if it found another communication was taking place it would wait.

During transmission, the network interface of each computer would continue to listen for other traffic on the coax, and if two or more computers started sending data, they would all detect this and stop their transmissions. An algorithm in each NIC randomly held back the retransmission, so statistically one NIC would transmit ahead of the others, preventing an endless cycle of repeated collisions and keeping the system stable.

This collision detection is called “Carrier Sense Multiple Access with Collision Detect”, or CSMA/CD. It was adopted by the IEEE as 802.3 and became a fully published standard by 1985. CSMA/CD is used on twisted pair networks as multiple computers can be connected through a hub. Hubs essentially connect all the transmit lines together (through buffer circuits) and all the receive lines together, so CSMA/CD was still required.
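The random hold-back the NIC applies after a collision can be sketched as truncated binary exponential backoff, the scheme classic 802.3 describes. This is a minimal illustrative sketch, not NIC firmware: it assumes the standard contention slot of 512 bit times, a backoff range that doubles per collision up to a cap of 1023 slots, and a give-up limit of 16 attempts.

```python
import random

SLOT_TIME_BITS = 512  # one contention slot in classic 802.3 is 512 bit times

def backoff_slots(attempt: int) -> int:
    """Pick a random backoff delay (in slot times) after the Nth collision.

    Truncated binary exponential backoff: after collision number `attempt`,
    wait a random whole number of slots in [0, 2**min(attempt, 10) - 1].
    After 16 failed attempts the frame is abandoned.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)  # range is capped at 1023 slots
    return random.randint(0, 2 ** k - 1)
```

After the first collision each NIC waits either 0 or 1 slots; the range doubles with every further collision, so two colliding NICs rapidly become likely to pick different delays and one wins the wire.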

With one cable or transmission system, packet collisions would increase as the number of computers increased and communicated more. More collisions result in reduced efficiency and hence lower data rates. Layer 2 switches solved this problem as they greatly reduced the number of devices on each segment and could allow a point-to-point connection between the computer and port of the switch.

Intelligent switches took advantage of input buffering and would decide when to schedule the sending of a packet and avoid collision altogether.

Layer 2 switches can be connected through bridges to keep maintenance easy and reduce complex cable runs.

Figure 1 - Point-to-point requirements for non CSMA/CD CAT8.


Each Ethernet network interface card has its own unique Media Access Control address programmed during manufacture. Ethernet assumes that the manufacturers of the NIC have been assigned a MAC address from the IEEE’s registration authority and that it has been programmed properly. A NIC will only respond to two types of received messages: its own MAC address in the destination address header, and the broadcast address in the same part of the header.

The broadcast address is always “ff-ff-ff-ff-ff-ff” and is used by protocols such as address resolution protocol (ARP) to resolve an IP address to a MAC address. The downside of this is that when one computer sends out a broadcast message every single device on the network will receive it and must process it. This is inefficient and a waste of valuable bandwidth. The solution to this is to split networks physically using bridges, or VLANs, which coincide with subnets on IP networks.
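The NIC’s filtering rule described above can be sketched in a few lines. This is an illustrative model of the behaviour (a real NIC does this comparison in hardware, and promiscuous mode bypasses it entirely); the function name is our own.

```python
BROADCAST = "ff-ff-ff-ff-ff-ff"

def nic_accepts(own_mac: str, dest_mac: str) -> bool:
    """Model of NIC address filtering: a frame is passed up the stack
    only if its destination MAC is the NIC's own address or the
    broadcast address ff-ff-ff-ff-ff-ff."""
    dest = dest_mac.lower()
    return dest == own_mac.lower() or dest == BROADCAST
```

An ARP request carries the broadcast destination, so every NIC on the segment accepts it and hands it to the host for processing; a unicast reply is accepted only by the one NIC it names, which is exactly why excessive broadcast traffic wastes every machine’s time.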

Ethernet was designed to work with many different protocols, IP is only one of them, and it is possible to send IP packets and ARCNET packets at different speeds on the same Ethernet network.

As Ethernet evolved a series of standards were published (5GBASE-T, 10GBASE-T etc) that defined the bandwidth supported by new and improved cable types (CAT5, CAT6, CAT8) etc.

In recent years, speeds of Ethernet networks have increased significantly.

In the 1980s network speeds ran at 100Kbps on CAT1 cabling; CAT5 provided 250Mbps per pair, and CAT8 now gives us 40Gbps by splitting the data over multiple pairs within a single cable or using fiber. However, only distances of 30m are achievable over CAT8 cable.

Figure 2 - Ethernet frame layout.

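The frame layout in Figure 2 is simple enough to pick apart by hand. As a hedged sketch, assuming an Ethernet II frame (6-byte destination MAC, 6-byte source MAC, 2-byte big-endian EtherType, then payload, with the 4-byte FCS already stripped by the NIC), the header can be parsed like this; the helper names are our own:

```python
import struct

def parse_header(frame: bytes):
    """Split an Ethernet II frame into destination MAC, source MAC,
    EtherType, and payload. Layout: 6 + 6 + 2 header bytes; the 4-byte
    FCS is assumed already removed by the receiving NIC."""
    dest, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dest, src, ethertype, frame[14:]

def fmt_mac(raw: bytes) -> str:
    """Render 6 raw bytes as the usual hyphenated MAC notation."""
    return "-".join(f"{b:02x}" for b in raw)
```

Feeding it a broadcast ARP request, for example, yields a destination of ff-ff-ff-ff-ff-ff and an EtherType of 0x0806 (ARP), tying together the addressing and frame-format discussion above.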

The most important aspect of 40GBASE-T is that CSMA/CD has been dropped from the IEEE 802 specification and there is no backwards compatibility with CAT6 and earlier. This is fine for cameras as the output of the camera will connect directly to the high-speed non-blocking layer 2 switch. However, we can’t mix other CAT6 devices within the CAT8 segment as they will require CSMA/CD to work correctly.

As speeds increase to 40Gbps and 100Gbps the RJ45 Ethernet connector has been succeeded by SFP and CXP connectors. The small form-factor pluggable (SFP) is a hot pluggable sub-module allowing connection for fiber or copper. CXP combines multiple copper pairs to provide 100Gbps with twelve 10Gbps pairs, or three 40Gbps links.

Networking multiple 2160p120 cameras would need a seriously fast layer 2 switch to process all the 24Gbps network streams being sent to it. This would be a switch worthy of a Tier-4 datacenter and certainly wouldn’t be Commercial-Off-The-Shelf (COTS).
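The 24Gbps per-camera figure can be sanity-checked with back-of-envelope arithmetic. As an assumption on our part, this takes an uncompressed 10-bit 4:2:2 stream carried over the full SDI-style 2160-line raster of 4400 × 2250 samples (active picture plus blanking, as used by 12G-SDI at 2160p60, doubled here to 120 frames per second):

```python
# Back-of-envelope bit rate for one uncompressed 2160p120 stream, 10-bit 4:2:2.
# Assumes the full SDI raster of 4400 x 2250 (3840 x 2160 active plus blanking).
TOTAL_W, TOTAL_H = 4400, 2250   # samples per line, lines per frame (incl. blanking)
FPS = 120
BITS_PER_PIXEL = 20             # 4:2:2 10-bit: 10 bits luma + 10 bits chroma per pixel

bits_per_second = TOTAL_W * TOTAL_H * FPS * BITS_PER_PIXEL
print(f"{bits_per_second / 1e9:.2f} Gbps")  # prints 23.76 Gbps, i.e. roughly 24Gbps
```

A switch aggregating even a handful of such streams is into the hundreds of gigabits per second of non-blocking capacity, which is why this sits in datacenter rather than consumer territory.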
