In the last article in this series we looked at how KVM improves control, reliability, security and integration for multiple devices and cloud systems. In this article, we look at how latency is addressed so that users have the best quality of experience possible.
If the user is willing to accept a visually lossless video feed, then the Mac can be moved to the datacenter too. Not only does this deliver a better working environment for the user, it also provides a superior physical environment for the Mac itself, further improving its reliability.
Protecting media assets is often a priority for broadcasters and production facilities. Their high value cannot be overstated, especially when working with yet-to-be-released movies and episodic series.
Moving all the hardware from the user’s workstation to the datacenter not only provides an improved working environment but also improves security. Datacenters can be built to very high security standards for both network and people access. Having computers stored under desks makes them vulnerable to theft and tampering. Moving them to the datacenter improves security considerably.
Centralized control of the KVM system allows system administrators to control access to each workstation’s KVM receiver, adding further layers of security.
IT Engineer Pool
A major advantage of this system is that the KVM transmitter and receiver work over an IP network. Although some specific routing and QoS configuration may be required on the network links to achieve optimal video latency and keyboard/mouse responsiveness, the network’s core switching infrastructure is readily available and well understood by network engineers.
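To illustrate the kind of QoS configuration involved, the sketch below marks a socket’s traffic with a DSCP value so that QoS-aware switches can prioritize it. This is a minimal, hypothetical example, not the configuration of any particular KVM product; the EF (Expedited Forwarding) class is an illustrative choice for low-latency control traffic.

```python
import socket

# DSCP 46 (EF, Expedited Forwarding) sits in the upper six bits of the
# IP TOS byte, so it is shifted left by two before being applied.
EF_TOS = 46 << 2  # == 0xB8

# Hypothetical KVM control socket; marking it lets QoS-aware switches
# place its packets in a priority queue.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
```

The switches along the path must be configured to trust and act on the DSCP marking; the end host can only request the treatment.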
One of the benefits of moving to IP is that broadcasters can take advantage of IT innovation and working practices. Network design and operation generally follows well-understood connectivity and configurations. Companies such as Cisco provide training programs with certification such as the CCNA (Cisco Certified Network Associate). This teaches a methodology for the design and support of IP networks that covers the vast majority of network requirements.
Using standard network topologies allows KVM users to take advantage of this standardization, giving them access to a greater pool of already-trained network engineers. Although earlier KVM systems may have used dedicated routers specifically to switch KVM signals and video feeds, the adoption of IP both reduces costs and greatly simplifies support.
Although packet switched networks such as IP deliver greater flexibility than their circuit switched counterparts, we must be careful about keeping latency under control.
We should put latency into context. For general keyboard and mouse operations, latencies below roughly 100-150ms are difficult for us to perceive, and 150ms should be achievable in a well-designed and configured private network. The critical point is that we adapt our hand-eye coordination relatively quickly. Consequently, it’s better to focus on realistic and achievable latencies than to continuously chase unrealistically low ones.
Resilience is usually built into enterprise networks, giving an unexpected benefit for users of KVM over IP. Resilient networks are self-healing and provide greater reliability for users. As the KVM system sits on top of the IP network, any failures are automatically rectified without user intervention. The KVM monitoring system may notice there has been a network change and even log it, but the user will be unaware of these changes most of the time.
IP is a best-effort delivery system by design and does not guarantee the reception of a packet at the destination device. Although this may seem a weakness of the original IP design, it turns out to be one of its greatest strengths. By keeping the packetized data structure simple, it is possible to transport the IP datagram over many different types of transport stream. Broadcasters generally use Ethernet as the transport stream, and IP packets reside within the Ethernet data frame.
As an IP datagram progresses on its journey it will often encounter many different types of transport stream, such as WiFi or fiber. The simplicity of the IP datagram allows it to switch easily between different hardware interfaces.
TCP is a protocol that adds a layer of data validation to guarantee delivery of the underlying data, in this case, the keyboard and mouse information. As with all things engineering, there is always a compromise. Although TCP guarantees delivery, the price we pay for this is increased latency.
TCP adds a layer of data delivery guarantee to a network at the potential expense of increased and indeterminate latency. Using UDP in enterprise networks reduces latency to improve video and audio distribution.
Every key press, mouse click and mouse movement is essential to the user. To maintain high user confidence, all of the control information must be reliably exchanged between the user’s workstation and the servers in the equipment room with the minimum possible latency. Even the delay of a single mouse click or a lagging mouse drag can cause uncertainty for users.
To avoid flooding a network with data, TCP uses congestion control to moderate the number of IP packets it sends at a time, and network switches use buffers to reduce the chance of oversubscription within a link. However, one of the strategies used to avoid sending too much data into a link is to drop packets, forcing the TCP sender to back off its transmission rate. The sender detects that packets have been dropped and resends them; hopefully, on this occasion there is sufficient capacity on the link and the IP packets can be sent unhindered.
The packet-drop method is effective, and most of the time the user is unaware it is occurring. The major challenge, however, is that latency can increase and become indeterminate. It’s perfectly acceptable to send system messages, including keyboard and USB device information, using TCP, as they are less time-critical than video and we need to ensure all messages arrive at devices even in the face of packet loss in the network.
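The back-off behaviour described above can be sketched with a toy AIMD (additive increase, multiplicative decrease) model. This is an illustrative simplification of TCP congestion control, not the algorithm any real TCP stack runs verbatim:

```python
# Toy AIMD sketch: the congestion window (cwnd) grows by one segment per
# round trip, and halves when a packet drop signals congestion.
def aimd(cwnd: float, loss: bool) -> float:
    return cwnd / 2 if loss else cwnd + 1

cwnd = 10.0
trace = []
# Two clear round trips, one with a drop, then one clear round trip.
for loss in [False, False, True, False]:
    cwnd = aimd(cwnd, loss)
    trace.append(cwnd)
# trace == [11.0, 12.0, 6.0, 7.0]
```

The halving on loss is exactly why TCP latency becomes indeterminate under congestion: the sending rate, and therefore the queueing delay, depends on the network’s recent drop history.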
Typically, keyboard and USB devices (such as flash drives) need reliable exchange and therefore use TCP. This also guarantees that keyboard messages, and specifically key operations, arrive at the devices in order, so the devices do not become confused about the state of particular keys. As these messages are short, they utilize minimal network bandwidth.
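The in-order guarantee can be demonstrated over a loopback TCP connection. The event format below is invented for illustration and is not a real KVM protocol; the point is that the receiver can never see a key-up before its matching key-down:

```python
import socket
import threading

# Illustrative key events: order matters, or key state becomes confused.
events = [b"KEYDOWN:A\n", b"KEYUP:A\n", b"KEYDOWN:B\n", b"KEYUP:B\n"]

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []

def receiver():
    conn, _ = server.accept()
    buf = b""
    while len(received) < len(events):
        buf += conn.recv(1024)
        # TCP is a byte stream, so reassemble newline-framed events.
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            received.append(line + b"\n")
    conn.close()

t = threading.Thread(target=receiver)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
for ev in events:
    client.sendall(ev)
client.close()
t.join()

assert received == events  # TCP delivered every event, in order
```

Note the framing step: TCP guarantees ordered bytes, not ordered messages, so a real protocol still needs delimiters or length prefixes.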
UDP (User Datagram Protocol) is another protocol within the IP stack; it provides further addressing information for the IP packet (the port number) but retains the send-and-forget policy. The assumption is that enterprise networks are very reliable and it’s unlikely that UDP packets will be lost.
As UDP has no congestion control function, dropping its packets would not cause the sender to back off, so switches do not use packet drops to throttle it. Switches can even be configured to prioritize UDP packets over TCP when links are becoming busy.
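The send-and-forget model is visible at the socket level. In this sketch (the address and payload are illustrative), the sender hands a datagram to the network stack and returns immediately; there is no connection, acknowledgement, or retransmission state:

```python
import socket

# UDP: no connection setup, no delivery confirmation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Illustrative payload and destination; sendto() returns as soon as the
# datagram is handed to the local stack, whether or not anyone receives it.
payload = b"video-frame-fragment"
sent = sock.sendto(payload, ("127.0.0.1", 50000))
# 'sent' is the number of bytes accepted for transmission, nothing more.
```

This absence of feedback is precisely what keeps UDP latency low and deterministic: the sender never pauses waiting for the network.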
UDP is often used to distribute high-bandwidth video and audio, except where protection against lossy links such as WiFi is needed. IGMP (Internet Group Management Protocol) is often used to multicast video streams so that many different receivers can “tap” into the video distribution feed. This requires network switches to support IGMP and to prune multicast streams so that only the devices that have requested a stream receive it. This improves network efficiency and reduces the risk of congestion.
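From a receiver’s point of view, “tapping” into a feed means joining a multicast group; the join triggers the IGMP membership report that IGMP-aware switches use for pruning. The group address and port below are illustrative, not a real feed, and the join is guarded because it requires a multicast-capable route on the host:

```python
import socket
import struct

# Illustrative multicast group (239.0.0.0/8 is administratively scoped)
# and an arbitrary port; not a real video feed.
GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Group address + interface address (0.0.0.0 = default interface),
# packed into the struct ip_mreq layout the kernel expects.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # This setsockopt causes the host to send an IGMP membership report.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # no multicast-capable route on this host
```

After the join, the switch forwards the group’s packets only to ports with interested members, which is the pruning behaviour the article describes.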
KVM is proving its worth in broadcast and media facilities, especially when we consider all the benefits IP solutions deliver. Security, resilience and improved operational environments combine to further improve flexibility for the broadcast and media facility, as well as the people using the workstations.