From SDI Baseband to IP Routing

While the Television industry has been well served by SDI Baseband technology, including routers, coax cable and BNC connectors, new IP-based video/audio infrastructure and workflows promise far more flexible and cost-effective solutions in the future. However, the new IP-based world must still interoperate with and build upon existing Baseband technology. This Lawo / Arista Networks White Paper explores a practical technology solution: a Software Defined Video IP Network using Lawo's Encapsulators, Gateways and Edge Processors, integrated with enterprise-level, high-performance IP switching from Arista. The combination promises to advance and automate television workflows to meet the near- and long-term demands of production, contribution, distribution and content delivery… and do it cost effectively.

The migration of the Broadcast world to IP infrastructure is definitely under way as evidenced by early adoption in some very large and influential television facilities already using IP Network architectures.

The establishment of IP Video Routers and IP Networks in television over the next several years requires substantial effort by Broadcasters and broadcast equipment suppliers, similar to the effort we saw in the transformation from analog to digital television, which took over a decade to be fully realized. This new paradigm shift merges the disciplines of Broadcast and IT, requiring cross-functional skills and knowledge that are sure to keep the industry busy for years to come.

The move away from proprietary and bespoke broadcast technology will eventually change the approach for many types of productions, creating new workflows which are difficult to envision today. But the migration from SDI Baseband to IP Video Routing/Networking is ESSENTIAL and TIMELY.


Destination-timed switching should work without any additional overhead within the IP (routing) switcher, as all sources are always present. The same would be true within a production switcher, because when mixing signals both have to be present anyway. Please give an example of where the theoretical double bandwidth would actually occur.

May 22nd 2015 @ 23:00 by Christopher Walker

Reply from Lawo:

There is a little misunderstanding in how we route signals in an IP environment.

Unlike a production switcher, where the input signals are always present, in an output edge device you do not have all potential 1000 or more signals present.

Let's say you would like to clean-switch from feed A to feed B. You can either perform a destination-timed switch, with both signals A and B already available at the device,
or, as we describe in the white paper, change within the stream.

The second option saves bandwidth dramatically, as you only need to stream one feed at a time from the switch to the edge device.

With destination-timed switching you need both signals (6 Gb/s in a 3G-SDI environment), which means a maximum of one destination can be fed from the switch over a 10GbE connection.
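The arithmetic behind that limit can be sketched as follows. The rates are nominal (one 3G-SDI signal treated as 3 Gb/s); the actual on-wire rate of an encapsulated stream differs slightly, but the conclusion is the same:

```python
# Sketch: destinations per 10GbE link, destination-timed vs. in-stream
# (source-timed) clean switching. Nominal 3G-SDI payload rates assumed.

LINK_GBPS = 10.0     # 10GbE link from the IP switch to the edge device
SIGNAL_GBPS = 3.0    # one uncompressed 3G-SDI signal, nominal

# Destination-timed: both feeds (A and B) must reach the edge device
# simultaneously so it can time the switch itself.
dest_timed_per_dest = 2 * SIGNAL_GBPS            # 6.0 Gb/s per destination

# In-stream switching: only the currently selected feed is streamed.
in_stream_per_dest = SIGNAL_GBPS                 # 3.0 Gb/s per destination

print(int(LINK_GBPS // dest_timed_per_dest))     # 1 destination per link
print(int(LINK_GBPS // in_stream_per_dest))      # 3 destinations per link
```

So a single 10GbE edge connection supports only one destination-timed clean switch, but three destinations when the switch happens within the stream.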

Also, in an IP environment we will switch all signals required for production into the production device, whether that is the audio mixing console or the video switcher.

This clean switch is certainly done within the switcher itself, with all effects such as DVE etc.

May 29th 2015 @ 11:15 by David Austerberry Editor

Reply from Walker:
If I understand you correctly, there is no advantage in the vision mixer for source-based switching.
The functionality of a routing switcher is such that there is a connection from each device. In the IP world an RJ45 connector is plugged into a switch. If that switch is connected to a WAN or LAN, there may be a fibre link at 10, 40 or 100 Gb/s going to another switch. If we hang a multiviewer on the destination switch, then all signals have to go across the inter-switch link, unless we combine the multiviewer output at the source; but that would be a constraint, as the sources could be coming from different switches.
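A rough sketch of that inter-switch-link load, assuming nominal 3G-SDI rates and an illustrative 16-input multiviewer head (both figures are assumptions, not from the white paper):

```python
# Sketch: inter-switch link (ISL) load when a multiviewer hangs off the
# destination switch and all of its sources live behind the far switch.

SIGNAL_GBPS = 3.0     # one uncompressed 3G-SDI signal, nominal
MOSAIC_INPUTS = 16    # e.g. a 4x4 multiviewer head (illustrative)

isl_load_gbps = MOSAIC_INPUTS * SIGNAL_GBPS
print(isl_load_gbps)  # 48.0 Gb/s - already beyond a single 40 Gb/s ISL
```

The example illustrates the point in the comment: pulling every monitored source across the inter-switch link quickly exhausts even a 40 Gb/s trunk, which is why combining the mosaic near the sources is attractive despite the constraint that sources may sit on different switches.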

I do not know what the cost overhead will be for implementing source-based switching, and whether it would be justified on the few occasions where it actually does save bandwidth.

In my opinion the reluctance to use highly compressed signals for live production is unfounded. What goes out to the consumer is highly compressed anyway. Record a mezzanine file at the signal source for later upload, and output a live signal that meets the requirements for end-user consumption after processing. This will make live production infrastructure more resilient and less expensive, and still leave the option to post-produce for other outlets.

May 30th 2015 @ 17:38 by Christopher Walker

@Christopher Walker
As a man with an IPTV background, dealing mostly with compressed signals on a daily basis, I cannot agree with you. Artifacts and fidelity loss occurring due to compression are additive: the more you compress, the more “clouds” and “squares” you get, and the signal gets uglier and uglier. Yes, the signal going to the end user is heavily compressed. But for IPTV, in the case of an SDI feed it is compressed once; in the case of a satellite feed, a minimum of twice: once for the uplink (VBR) and once for distribution (CBR), and there is a big difference, especially on fast scenes in CBR. So compressing the signal several more times at the production stage would just add insult to injury.

September 25th 2015 @ 07:24 by David Jashi

Hi David,
I agree with you, concatenation is a real problem. What I was talking about was the preference for I-frame-only compression at the camera vs. VBR long GOP. This is a leftover from the days when the compute power to decode at the recorded frame rate (real time) was not available. At the same bit rate, VBR long GOP (with adequate motion vector analysis) is always going to give you better quality, at the price of increased latency. In a shared-bandwidth IP infrastructure all redundant information should be removed at the source. Of course, I am talking about 50 to 250 Mb/s streams, not the 10 Mb/s or less used in IPTV.
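The bit-budget trade-off behind that claim can be sketched with toy numbers (the GOP length and the P-frame cost ratio below are illustrative assumptions, not a real encoder model):

```python
# Toy comparison at the same average bitrate: I-frame-only coding vs.
# long GOP. With long GOP, most frames are cheaper predicted frames,
# which frees bits for the intra frames that anchor quality.

BITRATE_MBPS = 100.0
FPS = 50.0
budget_per_frame = BITRATE_MBPS / FPS   # 2.0 Mb for EVERY frame if I-only

GOP = 12             # 1 I frame followed by 11 predicted frames (assumed)
P_COST_RATIO = 0.3   # assume a P frame needs ~30% of an I frame's bits

# Same total budget over one GOP: i * (1 + (GOP-1) * ratio) = GOP * budget
i_budget = GOP * budget_per_frame / (1 + (GOP - 1) * P_COST_RATIO)

print(round(budget_per_frame, 2))  # 2.0  Mb per I frame, I-only coding
print(round(i_budget, 2))          # 5.58 Mb per I frame with long GOP
```

Under these assumed ratios, each intra frame in the long-GOP stream gets nearly three times the bits of its I-frame-only counterpart at the same average bitrate, which is the quality advantage being described (paid for with latency).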
The only problem I see is the variable latency as scene complexity changes. In other words, buffering at the encoder side could get very large if every frame has different content. The solution I prefer is intelligent scene creation. As long as we are not doing medical imaging or evidence video, it does not really matter if the frame viewed is an exact copy of what actually happened, as long as the audience accepts it as such.
The point is that IP infrastructure is shared bandwidth and baseband SDI is not. This should be taken into account as we make the transition to an all IP plant.

September 25th 2015 @ 17:22 by Christopher Walker

Well, if the money is right, you can afford to make dedicated point-to-point IP links. I did so in a couple of headend projects, where all traffic from the IRD farm had to go through one big transcoder: I just plugged two crossover cables into the GigE ports on them and forgot about their existence.
As for high-bitrate streams: yes, you are right, only a pixel-hunter like me would notice anything, and only because I know where and when to look.
About intelligent scenes: do you mean MPEG-4 Part 11, and maybe Part 16 and such? I haven’t seen any implementations of those yet. Do you have any information on whether anyone has done something commercially available using them?

September 27th 2015 @ 20:15 by David Jashi
