Understanding IP Broadcast Production Networks: Part 9 - Ethernet
How Ethernet has evolved to combat congestion and how speeds have increased through the decades.
All 14 articles in this series are now available in our free 62 page eBook ‘Understanding IP Broadcast Production Networks’ – download it HERE.
In the 1970s Ethernet was in its infancy, competing with two other proprietary networking systems: token ring and token bus. These systems were backed by heavy industry, and the concept of each employee having a computer on their desk hadn't yet been considered.
Token ring allowed computers to be connected to each other in a ring topology. A special empty frame, the token, was passed from computer to computer, and only the computer holding the token could send data. Various timeout protocols existed to stop a computer hogging the token.
Token bus used the same token-passing protocol; the two differed only in that token bus could run over an open-ended bus, whereas token ring required the LAN to be connected in a continuous loop. Because token bus has no physical ring, each computer on the network had to know the addresses of the previous and next computers in the logical ring so it could pass the token on.
Token systems could be unreliable, especially if a computer failed or was switched off. Adding extra computers to the network was disruptive, as it stopped all computers on the LAN from communicating while the ring was reconfigured. Furthermore, they were highly inefficient, as every computer had to receive, process, and retransmit the empty token frame even if it had no data to send.
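The token-passing scheme described above can be sketched as a short simulation. This is purely illustrative: the station names, queued frames, and the one-frame-per-token-hold rule are assumptions for the sketch, not the real 802.4/802.5 MAC behavior.

```python
# Illustrative token-ring simulation: the token circulates the ring in order,
# and a station may transmit one queued frame only while it holds the token.
# Every station handles the token on every pass, even with nothing to send -
# the inefficiency noted above.

def run_token_ring(stations, rounds):
    """stations: list of (name, queue_of_frames); returns the transmission log."""
    log = []
    for _ in range(rounds):          # each round = one full circulation of the token
        for name, queue in stations:  # token visits every station in ring order
            if queue:                 # transmit only while holding the token
                log.append((name, queue.pop(0)))
    return log

stations = [("A", ["frame1"]), ("B", []), ("C", ["frame2", "frame3"])]
print(run_token_ring(stations, 2))
# → [('A', 'frame1'), ('C', 'frame2'), ('C', 'frame3')]
```

Station B receives and forwards the token on every pass despite having no traffic, which is exactly the wasted work the paragraph above describes.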
Ethernet was adopted by the IEEE in 1980 and given the project number 802. Although Ethernet originally used coax as its LAN cable, it soon moved to twisted pair, providing a cheaper, more flexible alternative with full duplex operation. Coax could either send or receive data, but not both at the same time. Hardware buffers within the Network Interface Card (NIC) allowed the computer to preload memory at CPU speeds; the NIC would then send the data at Ethernet line speed, making the whole operation significantly faster.
Ethernet was originally developed with a shared bus system in mind. All computers on the LAN would be either listening, sending, or receiving data. Before sending, a computer would listen for other transmissions, and if it found another communication in progress it would wait.
During transmission, the network interface of each computer would continue to listen for other traffic on the coax; if two or more computers started sending data at once, they would all detect the collision and stop their transmissions. An algorithm in each NIC then held back retransmission by a random delay, so statistically one NIC would transmit ahead of the others, preventing an endless cycle of repeated collisions and keeping the system stable.
This collision detection is called “Carrier Sense Multiple Access with Collision Detect”, or CSMA/CD; it was adopted by the IEEE as 802.3 and became a fully published standard in 1985. CSMA/CD was also used on twisted pair networks where multiple computers were connected through a hub. A hub essentially connects all the transmit lines together (through buffer circuits) and all the receive lines together, so collision detection was still required.
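The random hold-back CSMA/CD uses is a truncated binary exponential backoff. A minimal sketch, assuming the classic 10Mbps slot time of 51.2µs (512 bit times); the function name is our own:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10Mbps Ethernet: 512 bit times

def backoff_delay(attempt):
    """Delay before retransmission after the n-th successive collision.

    The NIC waits a random number of slot times drawn from
    0 .. 2^min(n, 10) - 1, so the range doubles with each collision
    and the chance of two NICs picking the same slot shrinks rapidly.
    """
    k = min(attempt, 10)                  # exponent is capped ("truncated") at 10
    slots = random.randint(0, 2 ** k - 1) # statistically separates the contenders
    return slots * SLOT_TIME_US
```

After the first collision a NIC waits 0 or 1 slot; after the third, anywhere from 0 to 7 slots, which is why repeated collisions between the same pair of NICs quickly resolve rather than racing forever.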
With one shared cable or transmission system, packet collisions increase as more computers are connected and communicate more often. More collisions mean reduced efficiency and hence lower effective data rates. Layer 2 switches solved this problem: they greatly reduced the number of devices on each segment and provided a point-to-point connection between each computer and a port on the switch.
Intelligent switches took advantage of input buffering and would decide when to schedule the sending of a packet, avoiding collisions altogether.
Layer 2 switches can be connected through bridges to keep maintenance easy and reduce complex cable runs.
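The way a layer 2 switch directs traffic to individual ports can be sketched as a simple MAC learning table. This is a sketch of the general technique, not any vendor's implementation; the port numbers and shortened MAC addresses are hypothetical:

```python
# Sketch of layer 2 forwarding: learn the source address on ingress,
# forward known unicast to a single port, flood everything else.

class Layer2Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number, learned from traffic

    def receive(self, src_mac, dst_mac, in_port):
        """Returns the list of ports the frame is forwarded out of."""
        # Learn: the source address is reachable via the port it arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination needs only its own port...
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # ...while an unknown destination is flooded to all other ports.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Layer2Switch(4)
print(sw.receive("aa:aa", "bb:bb", 0))  # bb:bb unknown → flood: [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", 2))  # aa:aa learned → [0] only
```

Once both ends have been learned, traffic between them occupies only their two ports, which is how switching removes the shared-segment collisions described above.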
Each Ethernet network interface card has its own unique Media Access Control (MAC) address programmed during manufacture. Ethernet assumes that the manufacturer of the NIC has been assigned a MAC address block by the IEEE’s registration authority and has programmed it properly. Under normal operation a NIC will only respond to two types of received frame: those with its own MAC address in the destination address field, and those with the broadcast address in the same field.
The broadcast address is always “ff-ff-ff-ff-ff-ff” and is used by protocols such as the address resolution protocol (ARP) to resolve an IP address to a MAC address. The downside is that when one computer sends a broadcast message, every device on the network receives it and must process it. This is inefficient and wastes valuable bandwidth. The solution is to split networks into smaller broadcast domains using routers or VLANs, which usually coincide with subnets on IP networks.
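The destination-address filtering described above amounts to a two-way check in the NIC hardware, which can be sketched in a few lines. The MAC addresses here are made up for illustration:

```python
# Sketch of NIC destination filtering: accept frames addressed to this NIC
# or to the all-ones broadcast address; silently drop everything else.

BROADCAST = "ff-ff-ff-ff-ff-ff"

def nic_accepts(frame_dst, my_mac):
    """True if the NIC should pass this frame up to the host."""
    return frame_dst.lower() in (my_mac.lower(), BROADCAST)

print(nic_accepts("ff-ff-ff-ff-ff-ff", "00-1a-2b-3c-4d-5e"))  # True - e.g. an ARP request
print(nic_accepts("00-1a-2b-3c-4d-99", "00-1a-2b-3c-4d-5e"))  # False - dropped in hardware
```

The second case is the efficiency win: frames for other hosts never interrupt the CPU. The broadcast case is the cost: every host on the broadcast domain must process an ARP request meant to find just one of them.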
Ethernet was designed to carry many different protocols; IP is only one of them, and it is possible to carry IP packets alongside other protocols, such as IPX, on the same Ethernet network.
As Ethernet evolved, a series of standards was published (5GBASE-T, 10GBASE-T, etc.) defining the bandwidth supported by new and improved cable types (CAT5, CAT6, CAT8, etc.).
In recent years, speeds of Ethernet networks have increased significantly.
In the 1980s network speeds ran at 100Kbps over CAT1 cabling; CAT5 provided 100Mbps, with 1Gbps following over CAT5e, and CAT8 now gives us 40Gbps by splitting the data over the four pairs within a single cable. However, only distances of 30m are achievable over CAT8 cable.
The most important aspect of 40GBASE-T is that CSMA/CD has been dropped from the IEEE 802.3 specification, as half-duplex operation disappeared from the standards at the higher speeds, and there is no backwards compatibility with CAT6 and earlier cabling. This is fine for cameras, as the output of the camera connects directly to a high-speed non-blocking layer 2 switch. However, we can’t mix slower CAT6 devices within the CAT8 segment, as they cannot operate at these line rates.
As speeds increase to 40Gbps and 100Gbps, the RJ45 Ethernet connector has given way to pluggable modules such as SFP+, QSFP and CXP. The small form-factor pluggable (SFP) is a hot-pluggable sub-module allowing connection over fiber or copper. CXP combines multiple copper lanes to provide 100Gbps as twelve 10Gbps lanes, or three 40Gbps links.
Networking multiple 2160p120 cameras would need a seriously fast layer 2 switch to process all of the roughly 24Gbps streams being sent to it. This would be a switch worthy of a Tier-4 datacenter, and it certainly wouldn’t be Commercial-Off-The-Shelf (COTS).
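A back-of-envelope calculation shows why, using the ~24Gbps per-stream figure above; the camera count here is illustrative, not from the article:

```python
# Aggregate switch load for uncompressed 2160p120 camera feeds.
# 24Gbps per stream is the figure quoted above; 8 cameras is an assumption.
stream_gbps = 24
cameras = 8
aggregate = stream_gbps * cameras
print(f"{cameras} cameras x {stream_gbps}Gbps = {aggregate}Gbps into the switch")
# → 8 cameras x 24Gbps = 192Gbps into the switch
```

Even a modest eight-camera production already demands nearly 200Gbps of non-blocking switching capacity before any monitoring or multiviewer feeds are added.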