In the last article we looked at VLANs and the problem they’re trying to solve. In this article we continue the theme of looking at a network from a broadcast engineer’s point of view, so they can better communicate with the IT department, and take a deeper look at Ethernet.
In the 1970s Ethernet was in its infancy and competing with two other proprietary networking systems: token ring and token bus. All of these systems were backed by heavy industry, and the concept of each employee having a computer on their desk hadn’t yet been considered.
Token ring allowed computers to be connected to each other with coaxial cable in a ring topology. An empty packet of data carried the electronic token and was passed from computer to computer, and only the computer holding the token could send data. Various timeout protocols existed to stop a computer from hogging the token.
Token Ring Unreliable
Token bus uses the same token-passing protocol as token ring; the two differ only in topology. Token bus can have an open-ended network connection, whereas token ring must have the LAN connected in a continuous loop. With token ring, each computer on the network has to know the addresses of the previous and next computers in the LAN so it can pass the token on.
Token systems could be unreliable, especially if a computer failed or was switched off. Adding extra computers to the network was difficult, as it stopped all computers on the LAN from communicating while the change was made. Furthermore, they were highly inefficient, as every computer had to receive, process and retransmit the empty token packet even when it had no data to send.
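The per-station overhead described above can be seen in a toy simulation of token passing. This is a hypothetical sketch, not a real implementation of the token ring protocol: station names, the ring order and the frame queues are all illustrative, and the `hops` counter stands in for the receive/process/retransmit work each station must do even when idle.

```python
# Toy simulation of token passing on a ring. Every station must handle
# the token on every circuit, even when it has nothing to send.
from collections import deque

stations = {
    "A": deque(["frame-1"]),   # A has one frame queued
    "B": deque(),              # B and C are idle but must still relay
    "C": deque(),
}
ring_order = ["A", "B", "C"]

def circulate(rounds):
    hops = 0          # count of receive/process/retransmit operations
    sent = []
    for _ in range(rounds):
        for name in ring_order:       # the token visits every station in turn
            hops += 1
            queue = stations[name]
            if queue:                 # only the token holder may transmit
                sent.append((name, queue.popleft()))
    return hops, sent

hops, sent = circulate(2)
print(hops, sent)   # 6 token-handling hops for a single frame actually sent
```

Two circuits of a three-station ring cost six token-handling operations to deliver one frame; the idle stations do almost all of the work.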
Ethernet was adopted by the IEEE in 1980 and given the project number 802. Although Ethernet originally used coax as its LAN cable, it soon moved to twisted pair, providing a cheaper, more flexible alternative with full-duplex operation. Coax could either send or receive data, but not both at the same time. Hardware buffers within the network interface card (NIC) allowed the computer to preload memory at CPU speeds; the NIC would then send the data at Ethernet line speed, making the whole operation significantly faster.
Ethernet was originally developed with a shared bus system in mind. All computers on the LAN would be either listening, sending or receiving data. To send data, a computer would first listen for other transmissions, and if it found another communication taking place it would wait.
During transmission, the network interface of each computer continued to listen for other traffic on the coax; if two or more computers started sending data at the same time, they would all detect the collision and stop their transmissions. An algorithm in each NIC held back the retransmission by a random delay so that, statistically, one NIC would transmit ahead of the others, preventing an endless race condition and keeping the system stable.
This collision detection is called “carrier sense multiple access with collision detect”, or CSMA/CD; it was adopted by the IEEE as 802.3 and became a full published standard in 1985. CSMA/CD is also used on twisted-pair networks where multiple computers are connected together through a hub. A hub essentially connects all of the transmit lines together (through buffer circuits) and all of the receive lines together, so CSMA/CD is still required.
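The random hold-back described above is specified in IEEE 802.3 as truncated binary exponential backoff: after the n-th collision, a NIC waits a random whole number of slot times drawn from 0 to 2^k − 1, where k is capped at 10, and gives up after 16 attempts. A minimal sketch of that rule (the slot time shown is the classic 10Mbps value):

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16     # 802.3 abandons the frame after 16 collisions

def backoff_delay(collision_count):
    """Truncated binary exponential backoff per IEEE 802.3.

    After the n-th collision, wait a random whole number of slot times
    chosen uniformly from [0, 2^k - 1], where k = min(n, 10).
    """
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)
    slots = random.randrange(2 ** k)   # 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US

print(backoff_delay(1))   # either 0 or one slot time (51.2us)
print(backoff_delay(3))   # between 0 and 7 slot times
```

Because the range doubles with every collision, two colliding NICs quickly pick different delays, which is what breaks the tie and keeps the shared segment stable.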
With one cable or transmission system, packet collisions increased as the number of computers grew and they communicated more. More collisions mean reduced efficiency and hence lower data rates. Layer 2 switches solved this problem: they greatly reduced the number of devices on each segment and allowed a point-to-point connection between the computer and a port of the switch. Intelligent switches took advantage of input buffering to decide when to schedule the sending of a packet, avoiding collisions altogether.
Layer 2 switches can be connected together through bridges to keep maintenance easy and reduce complex cable runs.
Each Ethernet network interface card has its own unique media access control (MAC) address programmed during manufacture. Ethernet assumes that the manufacturer of the NIC has been assigned a MAC address block by the IEEE’s registration authority and has programmed it properly. A NIC will only respond to two types of received message: its own MAC address in the destination address field of the header, and the broadcast address in the same field.
The broadcast address is always “ff-ff-ff-ff-ff-ff” and is used by protocols such as address resolution protocol (ARP) to resolve an IP address to a MAC address. The downside of this is that when one computer sends out a broadcast message, every single device on the network receives it and has to process it. This is inefficient and a waste of valuable bandwidth. The solution is to split networks physically using bridges, or logically using VLANs, which coincide with subnets on IP networks.
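The destination-address filter a NIC applies can be stated in a couple of lines. This is a deliberately simplified sketch of the two cases described above (own MAC or broadcast); real NICs also support multicast filtering and promiscuous mode, which are ignored here, and the MAC value is illustrative.

```python
# Simplified NIC receive filter: accept frames addressed to our own MAC
# or to the all-ones broadcast address, and ignore everything else.
BROADCAST = "ff-ff-ff-ff-ff-ff"

def nic_accepts(own_mac, dst_mac):
    return dst_mac.lower() in (own_mac.lower(), BROADCAST)

my_mac = "00-1b-44-11-3a-b7"
print(nic_accepts(my_mac, BROADCAST))            # True: e.g. an ARP request
print(nic_accepts(my_mac, "00-1b-44-11-3a-b7"))  # True: addressed to us
print(nic_accepts(my_mac, "00-1b-44-11-3a-b8"))  # False: someone else's frame
```

The third case is the important one for efficiency: unicast traffic for other hosts is discarded in the NIC hardware, whereas every broadcast must be passed up to the CPU, which is why large broadcast domains waste processing time on every machine.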
Ethernet was designed to carry many different protocols, of which IP is only one, and it is possible to send IP datagrams and ARCNET packets at different speeds on the same Ethernet network.
In recent years, the speeds of Ethernet networks have increased significantly. In the 1980s, networks ran at 100Kbps on CAT1 cabling; CAT5 provided 100Mbps, rising to 1Gbps by running 250Mbps over each of its four pairs, and CAT8 now gives us 40Gbps by splitting the data over multiple pairs within a single cable, with fiber available for higher speeds and longer runs. Over CAT8 copper, however, only distances of 30m are achievable.
The most important aspect of 40GBASE-T is that CSMA/CD has been dropped from the IEEE 802.3 specification and there is no backwards compatibility with CAT6 and earlier. This is fine for cameras, as the output of the camera will connect directly to a high-speed, non-blocking layer 2 switch. However, we can’t mix other CAT6 devices within the CAT8 segment, as they require CSMA/CD to work correctly.
As speeds increase to 40Gbps and 100Gbps, the RJ45 Ethernet connector has been succeeded by SFP and CXP connectors. The small form-factor pluggable (SFP) is a hot-pluggable sub-module allowing connection over fiber or copper. CXP combines multiple copper pairs to provide 100Gbps using twelve 10Gbps lanes, or three 40Gbps links.
To network 2160p120 cameras we would need a seriously fast layer 2 switch to process all of the 24Gbps network streams being sent to it. This would be a switch worthy of a Tier-4 datacenter and certainly wouldn’t be commercial off-the-shelf.
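A back-of-envelope calculation shows where a per-camera figure of roughly 24Gbps comes from. The assumptions here are 10-bit 4:2:2 sampling and a 4400×2250 total raster for 2160-line video; exact rates depend on the video format and transport standard, so treat this as a sanity check rather than a specification.

```python
# Rough bandwidth estimate for uncompressed 2160p120 video,
# assuming 10-bit 4:2:2 sampling (20 bits per pixel on average:
# 10 for luma plus 10 shared between the two chroma samples).
width, height, fps = 3840, 2160, 120
bits_per_pixel = 20

active_bps = width * height * fps * bits_per_pixel
print(f"active video:  {active_bps / 1e9:.1f} Gbps")   # 19.9 Gbps

# Including horizontal and vertical blanking, the full raster for
# 2160-line video is 4400 x 2250 samples, which brings the line
# rate up toward the ~24Gbps carried on the network.
full_raster_bps = 4400 * 2250 * fps * bits_per_pixel
print(f"with blanking: {full_raster_bps / 1e9:.2f} Gbps")  # 23.76 Gbps
```

Even without blanking, a single camera approaches 20Gbps, so a modest multi-camera studio quickly demands switch backplanes measured in terabits per second.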