IP Video: It's More Than One Thing, And It Needs More Than One Solution. Part 3

What does IP Video really mean, why is there such hysteria, and where is it all going? At annual industry trade shows, like IBC and NAB, there is often a single ‘buzzword’ which floats to the top of conversations between vendors, broadcasters and consultants. Over the last year or so, that word has been ‘IP Video’, and with each day that passes, IP Video hysteria grows. In this article we will try to rationalise what IP Video actually means in practice, and ask how it will start to play a role in the day-to-day lives of broadcasters. We will question the ongoing debate around which IP Video protocol is going to become “the standard”, and we will conclude that we in fact need more than one protocol to realise the opportunities offered by working with video in an IP world. Continue reading part 3.

Continued from part 2


Alongside this rush to hardware there is a contrasting, more pragmatic approach: skip the layers of existing standards and seek a purely practical solution for getting a frame of video from one device to another.

Enter Network Device Interface (NDI). This has been designed with a number of important prerequisites in mind, and offers a practical solution for IP Video which can work on existing 1Gbit network infrastructure, or even Wi-Fi, without the need for dedicated hardware.

NDI is offered as a licence-free technology with an easy-to-implement API, and tools based on NDI have already been demonstrated on a variety of platforms. NDI can be comfortably implemented in software, with demands on compute so low that applications for iPhone and iPad are already available - something which would be impossible using SMPTE 2022–6 or ASPEN.

Designing a system that can be built on commodity gigabit networking infrastructure is a key requirement. The cost benefits alone of standard gigabit Ethernet over copper, compared with fibre-based 10 gigabit networks, make it a clear and obvious goal.

In order to use gigabit networking for HD video, compression is essential, and NDI includes a high quality wavelet codec as part of the solution, yielding a typical data rate of 50-100 Mb/s per stream. Unlike H.264, the NDI codec does not carry any licence fees, and it is designed for very low latency. It is also extremely efficient: a typical modern desktop computer can easily compress HD at full frame rates on a single CPU core, providing the multi-channel scalability needed for more ambitious systems with multi-core CPUs. NDI uses TCP to move entire compressed video frames in one piece between software-based systems, without the need for dedicated hardware to compress, decompress, slice, dice and interface with 10 gigabit networks.
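The gigabit argument is easy to sanity-check with a little arithmetic. In this Python sketch, the 1.485 Gb/s figure is the standard uncompressed HD-SDI line rate (SMPTE 292M), and the per-stream budget takes the top of the 50-100 Mb/s range quoted above:

```python
# Back-of-envelope bandwidth check for the gigabit argument.
GIGABIT_LINK = 1000    # Mb/s, nominal gigabit Ethernet line rate
HD_SDI = 1485          # Mb/s, uncompressed HD-SDI (SMPTE 292M)
NDI_STREAM = 100       # Mb/s, per-stream NDI budget (worst case)

# A single uncompressed HD feed already exceeds the link capacity.
uncompressed_fits = HD_SDI <= GIGABIT_LINK   # False

# With compression, roughly ten streams fit in theory;
# real-world headroom for protocol overhead reduces that somewhat.
ndi_streams = GIGABIT_LINK // NDI_STREAM     # 10
```

This is why uncompressed approaches such as SMPTE 2022–6 push systems towards 10 gigabit fibre, while a compressed stream leaves room for multiple channels on commodity copper.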

NDI was developed for software applications to exchange video

Which Protocol is most suitable for each application?

We have outlined three different IP Video solutions, and there are others that we have not mentioned. Much of the industry discussion has revolved around ‘which is best’, searching for a single IP Video protocol ‘to rule them all’. In fact, broadcast almost certainly needs more than one IP Video protocol, and we need to start thinking of IP Video as a family of protocols rather than a single one. To illustrate this, we will look at four fundamental connections found throughout the broadcast chain:

1) Hardware to Hardware (e.g. camera to router or router to monitor, long distance carriage of SDI over fibre)

2) Hardware to Software (e.g. camera to ingest software)

3) Software to Hardware (e.g. playout software to router)

4) Software to Software (e.g. CG, DDR, cell phone camera: anything interfacing with a software-based vision mixer, and many future directions for integrated workflows)

Use the appropriate protocol for each application

We have four different junctions, each of which might traditionally have used SDI in some way, or in the case of 4) Software to Software, it might have required SDI hardware adaptors on both ends simply to get from one piece of software to another.

As devices traditionally viewed as ‘hardware’ evolve, more and more of these components will actually behave like ‘software’. For example, a network camera ceases to be a purely SDI device and starts to become a piece of IP Video software capable of so much more. Vision mixers are gradually shifting from expensive hardware-based devices to more software-driven systems. Even video monitors have a changing future: rather than routing one signal to a monitor, let the monitor pick up the one it wants from the network.

Whilst Software to Software systems may be a minority today, it is clear where the future leads, so it is worth considering the optimum IP Video solution for interconnected software. As described in the earlier sections, SMPTE 2022–6 and ASPEN both employ very high data rates and very complex bit streams, and as such are not suitable for a pure software-to-software exchange of video frames. In contrast, NDI (which was specifically designed for software-to-software communication) provides a low-overhead, manageable-bandwidth and ultimately *simple* mechanism to pass video and audio from one piece of software to another.
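To make the software-to-software case concrete, here is a minimal, hypothetical Python sketch of moving whole compressed frames over TCP with a length prefix. This is not NDI's actual wire format; it only illustrates the general pattern of passing complete frames between two applications with no dedicated hardware in the path:

```python
import socket
import struct


def send_frame(sock: socket.socket, frame: bytes) -> None:
    # TCP is a byte stream, so framing must be layered on top:
    # prefix each compressed frame with its length so the receiver
    # knows where one frame ends and the next begins.
    sock.sendall(struct.pack("!I", len(frame)) + frame)


def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Loop until exactly n bytes arrive; recv() may return partial data.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf


def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)


# Demo over a local socket pair standing in for two applications.
a, b = socket.socketpair()
send_frame(a, b"compressed-frame-bytes")
received = recv_frame(b)
a.close()
b.close()
```

Because TCP guarantees ordered, reliable delivery, the receiving software always gets complete frames in sequence, with none of the packet-level reassembly and timing recovery that an RTP-based approach such as SMPTE 2022–6 demands.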

Conversely, in pure Hardware to Hardware systems which are already built around an underlying SDI architecture, a solution like SMPTE 2022–6 or ASPEN is clearly the more appropriate choice.
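The two clear-cut junctions can be summarised in a small, purely illustrative Python mapping. The two mixed hardware/software junctions are deliberately left open, since the argument here is that each should be evaluated on its own merits:

```python
# Illustrative summary of the argument above, not a formal recommendation.
BEST_FIT = {
    "hardware-to-hardware": ("SMPTE 2022-6", "ASPEN"),
    "hardware-to-software": ("evaluate per workflow",),
    "software-to-hardware": ("evaluate per workflow",),
    "software-to-software": ("NDI",),
}


def candidates(junction: str) -> tuple:
    """Return the candidate IP Video protocols for a given junction type."""
    return BEST_FIT[junction]
```

Thinking of the problem as a lookup per junction, rather than a single global answer, is the core of the "family of protocols" view.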

The ultimate conclusion is that we almost certainly need more than one IP Video standard, and we should not be looking for just one solution. We need to evaluate the four different interface classes between software and hardware and consider the best fit for each. In doing so we can create the most efficient and practical approach to the adoption of IP Video, and maybe somewhere along the line we can start to see the real benefits and opportunities it promises.

Mark Gilbert is CTO at Gallery SIENNA. Simon Haywood is an independent broadcast consultant at Tamed Technology
