Internet Contribution For Broadcasters: Pt. 3 - Why Carriage Matters To Live Internet Video Delivery

On the internet, congestion and latency are added at the points at which carriers connect to each other. Understanding this will help you design a better quality video service, says Bernhard Pusch, Head of Global Internet Strategy at Telstra Corporation.



This article was first published as part of Core Insights - Internet Contribution For Broadcasters.

Using the internet to distribute live content has exploded. Internet delivery is cast as a less expensive means of transporting more content to consumers to watch on the device of their choice. Already well in train before 2020, the shift went into overdrive over the past year as the need to connect events at venues using remote links surged. More and more major sports franchises are moving direct to consumer over the internet (OTT), with broadcasters supplementing their over-the-air programme delivery with live streaming services.

The key issue with live video is the requirement (especially in professional environments) to ensure there is an uninterrupted feed. The lower cost of using unmanaged internet networks is often traded against the quality of the end product. Some broadcasters are still holding out against a wholesale shift of live linear content to the internet in the belief that private lines are superior.

And it is hard to argue with that when you see the buffering, delay and jitter which are all too common on live sports streams.

Typically, when running video point to point over a private network the broadcaster has control end to end. There is no need to checksum packets and recover data lost in transit, as there is on an IP network. There are decades' worth of confidence that the signal will be received on time, intact and in sync.

But the nature of internet architecture means you can never be 100 percent sure. On the internet, depending on which routes and providers you use, there are multiple sources of potential congestion. These are what typically cause problems with live video.

To mitigate this, you can add systems and protocols to the stream to ensure integrity on reception. This adds delay, which is exacerbated where the route between source and destination is a lengthy one. Even half a second of latency is enough to cause concern for broadcasters, especially if they compare the best efforts of the internet to a satellite relay.
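As a rough illustration of why a long route hurts, here is a minimal Python sketch of the latency budget an ARQ-style protocol such as SRT needs. The 4x round-trip multiplier is a common rule of thumb rather than a figure from this article, and the route RTTs are purely illustrative: a lost packet can only be repaired if the receiver buffers enough media for the loss report to reach the sender and for the retransmission to travel back.

```python
# Rough ARQ latency budget: a lost packet can only be repaired if the receiver
# buffers enough media for the loss report to reach the sender and for the
# retransmission to travel back. A common rule of thumb for ARQ-style
# protocols such as SRT is a receive latency of several round-trip times.

def arq_latency_budget_ms(rtt_ms: float, rtt_multiplier: float = 4.0) -> float:
    """Minimum receive-buffer latency needed to repair lost packets in flight."""
    return rtt_ms * rtt_multiplier

# Illustrative round-trip times, not measured figures.
for route, rtt_ms in [("same city", 5), ("intra-region", 40), ("trans-Pacific detour", 180)]:
    print(f"{route:22s} RTT {rtt_ms:4d} ms -> latency budget ~{arq_latency_budget_ms(rtt_ms):5.0f} ms")
```

On a short route the budget stays well under the half-second mark; push the feed across an ocean and back and the same rule of thumb lands you squarely in the territory broadcasters worry about.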

That’s why, when you design video over IP networks, it’s important to understand how the internet works and where congestion and latency are introduced.

For most people the internet has the appearance of one homogeneous cloud in which packets transit from one end to the other.

But the more deeply you peer into the topology, the more you understand the complexities at play. For instance, certain carriers only interconnect at particular points in the network. There are also carriers that overbook their service.

Understanding Congestion
Digging into this further, you will find that the larger, more professional carriers in the internet space, when selling 1Gbit/s of internet capacity to a customer, will allocate 1Gbit/s of capacity in their network to carry that customer's traffic. Some lower cost, lower speed operators, on the other hand, might overbook by, for example, selling 2Gbit/s of customer capacity against 1Gbit/s of network capacity. This leads to traffic congestion.
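The arithmetic behind that is simple, and the short Python sketch below makes it explicit. The figures (a 2:1 overbooking ratio and 60 percent of sold capacity busy at peak) are hypothetical examples, not measurements.

```python
# Illustrative overbooking arithmetic (figures are hypothetical): a carrier that
# sells more access capacity than it provisions in its core network is fine
# until enough customers transmit at once, at which point traffic queues up.

def core_utilisation(sold_gbps: float, provisioned_gbps: float, active_fraction: float) -> float:
    """Fraction of provisioned core capacity in use for a given share of active traffic."""
    return (sold_gbps * active_fraction) / provisioned_gbps

# 1:1 provisioning versus 2:1 overbooking, with 60% of sold capacity busy at peak.
for label, sold_gbps, core_gbps in [("1:1 carrier", 1.0, 1.0), ("2:1 overbooked", 2.0, 1.0)]:
    u = core_utilisation(sold_gbps, core_gbps, active_fraction=0.6)
    print(f"{label}: {u:.0%} of core capacity used", "-> congestion" if u > 1 else "-> headroom")
```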

Congestion and latency rarely occur within a carrier’s network. The bottleneck is between carriers, at the points at which traffic is handed over to another carrier. Congestion is further complicated at these interconnect (peering) points by issues unrelated to technology and everything to do with local competition and political rivalry.

In some regions and especially in APAC, carriers will only interconnect with each other at certain points, sometimes at considerable distance from their home market.

Understanding The Politics Of The Internet
Korean carriers, for example, will connect in Japan. Taiwanese carriers might connect in Japan and Hong Kong. In some cases, carriers will only interconnect on the West Coast of the U.S. Their primary purpose is to keep rival international carriers out of their home networks and to force OTTs to pay to connect locally to their networks (thereby providing the best performance).

None of this is obvious to the casual observer, but it has major implications downstream. You might think you have a signal going the shortest route from A to B when in fact it is yo-yoing from A to C to D and only then to B, and will consequently take much longer than you expect.
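A back-of-the-envelope calculation shows how much a detour can cost. The sketch below uses the rough figure of 200 km per millisecond for light in fibre; the route distances are purely illustrative, not real carrier paths.

```python
# Back-of-the-envelope propagation delay: light in fibre covers roughly 200 km
# per millisecond, so every extra 1,000 km of detour adds about 5 ms each way.
# The distances below are purely illustrative, not real carrier routes.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # roughly two-thirds of the speed of light

def one_way_delay_ms(path_km: float) -> float:
    return path_km / SPEED_IN_FIBRE_KM_PER_MS

direct_km = 1_500                 # hypothetical direct A-to-B route
detour_km = 1_500 + 16_000        # same endpoints, hairpinning via a distant peering point

print(f"direct : {one_way_delay_ms(direct_km):5.1f} ms one way")
print(f"detour : {one_way_delay_ms(detour_km):5.1f} ms one way, before any queuing or retransmission")
```

And that is propagation alone; add queuing at congested peering points and the retransmission budget discussed earlier and the gap widens further.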

In the case of the major online content providers like Netflix or Amazon, the likelihood is that their consumers will enjoy a good experience at home. The reason is that Netflix or Amazon servers are almost certain to be connected to the same carrier that brings consumers their local home broadband. Internet Service Providers are going to want to make sure that Netflix and Amazon subscribers don’t receive a poor experience since the comeback is likely to be on them. It is the service provider not the content owner who gets the customer service call or reputational damage on social media when content goes down.

In the case of live media it is the broadcaster, not the broadband provider, who will get the blame. What’s more, content provision of live events is often not in-country but international, with feeds bouncing between points of presence, and with different carriers connecting at different and multiple points, each adding potential latency.

Knowing where the interconnect points are and understanding the politics of the internet helps you design networks and set up the right interconnections between the fixed network and the internet to give you optimal delivery.

When building networks for delivery over the internet, it is important to bear in mind exactly where the source and destination are, and to combine that with knowledge of carrier politics and congestion points so they can be avoided. A simple probing step, sketched below, is one way to sanity-check the candidates.
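The Python sketch below is a minimal example of that kind of check. The gateway hostnames are hypothetical, and a TCP handshake time is only a coarse proxy for the round-trip time and congestion a media stream would actually experience, but measuring it from the real source location is a cheap first pass before committing a feed to a route.

```python
# Minimal path-probing sketch: time a TCP handshake to each candidate ingest
# point and prefer the one with the lowest, most stable figure. Hostnames are
# hypothetical and handshake time is only a rough proxy for stream behaviour.

import socket
import statistics
import time
from typing import Optional

CANDIDATE_GATEWAYS = [                      # hypothetical ingest points
    ("gateway-tokyo.example.net", 443),
    ("gateway-hongkong.example.net", 443),
    ("gateway-uswest.example.net", 443),
]

def tcp_connect_ms(host: str, port: int, timeout: float = 2.0) -> Optional[float]:
    """Time a single TCP handshake in milliseconds, or None if it fails."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def probe(host: str, port: int, samples: int = 5):
    """Return (median, spread) of handshake times, or None if unreachable."""
    times = [t for _ in range(samples) if (t := tcp_connect_ms(host, port)) is not None]
    if not times:
        return None
    return statistics.median(times), max(times) - min(times)

if __name__ == "__main__":
    for host, port in CANDIDATE_GATEWAYS:
        result = probe(host, port)
        if result is None:
            print(f"{host}: unreachable")
        else:
            print(f"{host}: median {result[0]:.1f} ms, spread {result[1]:.1f} ms")
```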

With careful design and an understanding of how the internet works, these issues can be largely mitigated (although not completely removed), enabling substantially cheaper solutions to be implemented for delivering video with acceptable performance.

All carriers are not the same. Some broadcasters may not care if a carrier drops 20 percent of packets or if the latency is on the high side, because they are getting a cheaper service. Other carriers will always give the best quality service. That’s something to bear in mind.

Telstra Internet Delivery Network
The Telstra Internet Delivery Network provides a gateway from ‘on-network’ media rights-holders to ‘off-network’ media buyers, using Telstra’s high-capacity Internet peering arrangements and cloud infrastructure.

Each gateway is located strategically in-region, where Telstra provides its own global Internet access and peering arrangements with Tier 1 telcos and content providers. The Telstra Internet Gateway encapsulates the contribution feeds and sends them via Telstra’s own Global Internet Direct access to the broadcasters. Managed by broadcast media and technology experts at Telstra Broadcast Services, the gateways give takers the option to select appropriate transport protocols. Examples include Zixi, SRT, RTMP, UDP/RTP with FEC, RIST, and HLS.
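To make the role of those protocols concrete, here is a minimal sketch (not a production receiver, and the port number is an arbitrary example): plain UDP/RTP at least carries a sequence number, so a receiver can count gaps, but it takes FEC (UDP/RTP with FEC), retransmission (SRT, RIST, Zixi) or buffering (HLS) to actually repair or hide the loss.

```python
# Minimal sketch of detecting packet loss on an unmanaged path by watching RTP
# sequence numbers. Detection is the easy part; the transport protocols listed
# above exist to repair or conceal the loss. Not a production receiver.

import socket
import struct

LISTEN_ADDR = ("0.0.0.0", 5004)   # hypothetical RTP contribution port

def rtp_sequence(packet: bytes) -> int:
    """Sequence number lives in bytes 2-3 of the fixed 12-byte RTP header."""
    return struct.unpack("!H", packet[2:4])[0]

def watch_for_loss() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    expected = None
    lost = 0
    while True:
        packet, _ = sock.recvfrom(2048)
        if len(packet) < 12:
            continue                                   # too short to be RTP
        seq = rtp_sequence(packet)
        if expected is not None and seq != expected:
            lost += (seq - expected) % 65536           # gap suggests packets dropped in transit
            print(f"sequence gap detected, packets missing so far: {lost}")
        expected = (seq + 1) % 65536

if __name__ == "__main__":
    watch_for_loss()
```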

The result is a fully managed end-to-end network with the flexibility to handle large-scale media operations and to accommodate unpredictable latency, network jitter, packet loss and network congestion.

Internet delivery is then only the last-mile connection to the customer. Telstra has chosen this approach because, as a major Tier 1 telco, it has high-capacity Internet peering arrangements with local telcos and content providers. By leveraging its direct peering links, Telstra can semi-manage the streams via its Internet backbones and monitor them so they take non-congested paths, providing a secure streaming experience.
