Understanding IP Broadcast Production Networks: Part 13 - Quality Of Service

How QoS introduces a degree of control over packet prioritization to improve streaming over asynchronous networks.

Broadcast engineers are accustomed to point-to-point, reliable, guaranteed-bandwidth circuits. An SDI cable will guarantee 270Mbps for SD and 1.485Gbps for HD. Digital audio circuits will guarantee 2.5Mbps for uncompressed, 48kHz-sampled, 24-bit stereo.

Computer networks do not offer this level of guarantee: they are shared, packet-switched systems that use best-effort delivery. A stream of video or audio is divided into packets of up to 1,500 octets, so that each IP packet fits inside an Ethernet frame. Each packet is streamed in sequence at the video or audio data rate and sent through the network to its destination on a best-effort basis.
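To get a feel for the scale involved, the sketch below estimates how many packets per second a stream generates. The 1,460-octet payload is an illustrative assumption (the remainder of the 1,500-octet packet being taken by IP/UDP/RTP headers); real payload sizes vary by protocol.

```python
# Rough packet-rate estimate for streaming uncompressed media over IP.
# Payload size of 1,460 octets per 1,500-octet packet is illustrative.

def packets_per_second(stream_bps: float, payload_octets: int = 1460) -> float:
    """Number of packets needed each second to carry a stream."""
    return stream_bps / (payload_octets * 8)

# 270 Mbps SD-SDI stream -> roughly 23,000 packets every second
print(round(packets_per_second(270e6)))      # 23116
# 1.485 Gbps HD-SDI stream -> over 127,000 packets every second
print(round(packets_per_second(1.485e9)))    # 127140
```

At these rates, even brief congestion at a single router can affect thousands of packets.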

Routers and switches vary enormously in their sophistication and data handling speeds. The routing protocols they support and their ability to provide services such as multicasting also vary. Consequently, if a packet is sent from a camera to a production switcher through a network, we cannot guarantee, or even predict, how long it will take to get there, whether it will get there at all, or whether it will arrive in sequence.

Each packet carries a sequence number in its transport header (RTP, for example), enabling the receiver to re-order the packets within its input buffer. Packets can be received out of sequence if a route is temporarily interrupted and they are sent by a different path, resulting in some taking longer than others. The receiver can re-order them, but packets that take too long to arrive may be dropped because they fall outside the input buffer window.

Input buffers are usually of a fixed size and are used to re-time and re-order packets, which adds delay to video and audio streaming. If the buffer is too big, there will be an unacceptable delay, possibly even seconds. If it is too small, a disproportionate number of packets will be dropped because their arrival falls outside the buffer window.
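The trade-off is easy to quantify: the worst-case delay a buffer can add is simply its depth divided by the packet rate. The figures below are illustrative, using the rough packet rate of a 270 Mbps SD stream.

```python
def buffer_delay_ms(buffer_packets: int, packets_per_second: float) -> float:
    """Worst-case re-timing delay added by a fixed-size input buffer."""
    return buffer_packets / packets_per_second * 1000

# For the ~23,000 packets/s of a 270 Mbps SD stream (illustrative):
print(round(buffer_delay_ms(100, 23000), 2))   # 4.35 ms -- small buffer
print(round(buffer_delay_ms(50000, 23000)))    # 2174 ms -- over two seconds
```

A buffer of a few hundred packets keeps latency low but tolerates little jitter; tens of thousands of packets absorb almost any jitter but add seconds of delay.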

To meet the demands of video and audio streaming, the IT industry has adopted Quality of Service (QoS), adding an element of control over packet delivery. Another term used is rate shaping. Essentially, QoS helps us distribute a synchronous stream over an asynchronous network.

When IP and TCP were originally designed, there was no consideration or provision for synchronous services such as streamed audio and video.

Two strategies are available to help us reliably stream audio and video over an IT network: extra provision and packet prioritization.

Extra provision means providing more bandwidth than we need. If a streamed audio service requires 2.5Mbps, extra provisioning would allocate 5Mbps or even 10Mbps. Clearly this is wasteful of bandwidth and doesn't scale efficiently; the bandwidth requirements and switching speeds become absurd when we start looking at HD and UHD video.

Figure 1 - When packets are queued, the router's algorithm must decide which packet to output next. The video buffer is full, resulting in new packets being dropped.

Packet prioritization takes advantage of information inside the IP header. The Type of Service (ToS) field, since redefined as the Differentiated Services field carrying a Differentiated Services Code Point (DSCP), belongs to the packet prioritization model known as Differentiated Services (DiffServ). Routers use this information to help determine forwarding priority.
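An application can request a DSCP marking on its outgoing packets through a standard socket option. The sketch below marks a UDP socket with EF (Expedited Forwarding, DSCP 46), a class commonly used for real-time media; the DSCP occupies the upper six bits of the old ToS octet, hence the two-bit shift. Whether routers actually honour the marking depends entirely on network policy.

```python
import socket

# Mark outgoing UDP packets with a DSCP value (sketch).
# EF (Expedited Forwarding) = DSCP 46; shifted left 2 bits into the
# old ToS octet position. Honouring it is up to the network.
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

# sock.sendto(b"media payload", ("198.51.100.7", 5004))  # hypothetical target
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
```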

If many streams are being switched to one port, the router is left to decide which packets take priority over others. This might be a round-robin strategy, or first come first served. Higher-end routers use buffers to temporarily store packets as they enter the device. Algorithms within the router interrogate the packets, extract information such as the DSCP value, and decide each packet's priority when sending to the next hop in the network.
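The behaviour shown in Figure 1 can be sketched as a toy strict-priority scheduler: one bounded FIFO queue per traffic class, highest class served first, and a packet arriving at a full queue is tail-dropped. Real routers use far richer algorithms (weighted round-robin, weighted fair queuing); this is purely illustrative.

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority output scheduler: one bounded FIFO queue per
    traffic class; the highest class is served first, and a packet
    arriving at a full queue is dropped. Illustrative only."""

    def __init__(self, classes=(0, 1, 2), depth: int = 4):
        self.queues = {c: deque() for c in classes}
        self.depth = depth
        self.dropped = 0

    def enqueue(self, traffic_class: int, packet) -> bool:
        q = self.queues[traffic_class]
        if len(q) >= self.depth:      # buffer full: tail drop
            self.dropped += 1
            return False
        q.append(packet)
        return True

    def dequeue(self):
        # Always serve the highest-priority non-empty queue first
        for c in sorted(self.queues, reverse=True):
            if self.queues[c]:
                return self.queues[c].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue(0, "web")
sched.enqueue(2, "video")
print(sched.dequeue())   # 'video' -- high-priority class goes out first
```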

Queuing and prioritization introduce delay and packet loss within a network. If a buffer becomes overloaded, it drops packets. Variations in delay give rise to jitter, both long and short term. Solving these problems is the essence of QoS.
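Jitter can be measured as a running, smoothed estimate of how much packet transit times vary, in the style of RTP's interarrival-jitter calculation: the absolute difference between successive transit times, filtered with a 1/16 gain so single outliers do not dominate. The transit values below are illustrative.

```python
def update_jitter(jitter: float, transit_prev: float, transit_now: float) -> float:
    """Running interarrival-jitter estimate (RTP, RFC 3550 style):
    smooth the absolute change in transit time with a 1/16 gain."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Transit times (ms) for successive packets -- illustrative values
transits = [20.0, 21.5, 19.8, 25.0, 20.2]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
print(round(jitter, 3))   # 0.775
```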

Packet prioritization relies on routers within a network all agreeing on a prioritization strategy. From a broadcasting point of view, video and audio should take priority over HTTP and other traffic. Within closed private networks this may be possible, however, once we move into public networks the prioritization becomes less predictable. Some network providers might not even provide QoS or bit rate shaping, potentially resulting in a complete mess.

Packet prioritization relies on switches and routers performing deep analysis of each packet to extract the necessary DSCP values, and sometimes even inspecting the stream itself to determine whether it is audio or video, resulting in delays and bottlenecks. This is further compounded with encrypted packets, as the router cannot decode the payload and so cannot determine whether it is switching video or audio.

Multi-Protocol Label Switching (MPLS) is designed to overcome these problems. MPLS is provided by a network supplier and is largely transparent to the end user. As a packet enters the MPLS network, the ingress router adds a label to it; all subsequent routers within the provider's network use this label to prioritize and route the packet.
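The label the ingress router pushes is a compact 32-bit entry, which is part of why label switching is so fast: subsequent routers read one word rather than parsing the packet. The sketch below packs one entry using the standard layout (20-bit label, 3-bit Traffic Class carrying the QoS marking, 1-bit bottom-of-stack flag, 8-bit TTL); the label and class values are illustrative.

```python
def encode_mpls_entry(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout):
    20-bit label | 3-bit Traffic Class | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return word.to_bytes(4, "big")

# Hypothetical label 1,000 with high-priority traffic class 5:
entry = encode_mpls_entry(label=1000, tc=5, bottom=True, ttl=64)
print(entry.hex())   # 003e8b40
```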

Figure 2 - MPLS service providers abstract away the network and guarantee a Quality of Service for video and audio streaming.

Routing efficiency is improved because switches use the label for prioritization and routing without interrogating the packet further, improving throughput and reducing complexity. As QoS is an intrinsic part of MPLS, it forms a fundamental part of the routing method instead of being an unwelcome add-on.

Interoperability between network providers is maintained: to be part of an MPLS system, all providers must agree on the same QoS strategies to guarantee streaming of video and audio.

MPLS can run over diverse types of layer 2 connection, including Ethernet, ATM and DSL. Combined with multi-vendor interoperability, this makes MPLS extremely flexible and well suited to broadcasting, especially when backhauling cameras and microphones from remote outside broadcasts.

As IP grows within broadcast facilities QoS will become a fundamental consideration, and decisions on whether to use protocols such as MPLS will need to be taken at an early stage, especially when choosing network providers who may or may not be able to provide MPLS.
