Hardware Infrastructure Global Viewpoint – October 2019

The Myths Of Zero Packet Loss

When discussing IP, I often hear pundits advising that networks must be “zero packet loss”. This is more of an aspiration than a reality, and I would ask why so many accept such a contradictory assertion without question.

SDI is the de facto specification that IP is often compared to. Whether we’re looking at signal distribution, reliability, or resilience, engineers often speak passionately about the reliability of SDI. It’s fair to say it is incredibly reliable, and interoperability is built into the core of the specification, but I’m not convinced it is as error-free as so many would like to think.

The possibility of data loss is an inevitable consequence of moving information from one place to another. Whether we use a transport such as SDI or Ethernet, or transfer the information to a disk drive, there is always going to be a probability of data loss, even if that probability is incredibly small.
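To put some numbers on “incredibly small”, here is a minimal back-of-the-envelope sketch in Python. The bit rate and bit error ratio are assumed example figures rather than measurements of any real link, but they show how even a vanishingly small per-bit probability accumulates into a non-zero expected error count over a single day.

# Assumed example figures: roughly a 3G-SDI serial rate and a BER that
# would normally be considered excellent. Neither is a measurement.
bit_rate_bps = 2.97e9        # assumed serial bit rate, approximately 3G-SDI
ber = 1e-12                  # assumed bit error ratio for a healthy link
seconds_per_day = 86_400

bits_per_day = bit_rate_bps * seconds_per_day
expected_errors = bits_per_day * ber

print(f"bits per day: {bits_per_day:.2e}")
print(f"expected errored bits per day at BER {ber}: {expected_errors:.0f}")
# Around 257 errored bits per day -- a tiny fraction of the total, but never zero.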

The universe is so complex we cannot possibly hope to understand it in its entirety. Random events and unpredictable behaviors all conspire against us. Consequently, there will always be uncertainty, and this is beautifully described by the probability distribution function. I accept that, as we move to the extremes of the tails of the Gaussian curve, the probability of error becomes extremely small, but there is still the possibility of an error. We cannot just ignore this truth because it’s inconvenient!
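As a small illustration of just how quickly those tails shrink while never quite reaching zero, the following sketch evaluates the two-sided Gaussian tail probability at a few multiples of the standard deviation using Python’s math.erfc.

import math

# Two-sided Gaussian tail: P(|X - mu| > k * sigma) = erfc(k / sqrt(2))
for k in (3, 4, 5, 6, 7):
    p_tail = math.erfc(k / math.sqrt(2))
    print(f"beyond {k} sigma: {p_tail:.2e}")
# At 7 sigma the probability is about 2.6e-12 -- extremely small, yet still not zero.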

SDI data loss generally manifests itself as large green disturbances on the video monitor, or stuttered freeze-frames if there is a synchronizer in the path. To help with diagnosis, the CRC error check can be employed.
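For readers who have never looked inside it, the idea behind a line CRC is sketched below: compute a checksum over the payload at the sender, recompute it at the receiver, and treat any mismatch as an error. The generator polynomial used here is the CRC-18 often quoted for HD-SDI line CRCs (x^18 + x^5 + x^4 + 1), but this is a simplified illustration in Python, not a bit-exact SMPTE implementation.

def crc18(bits, poly=0x00031):
    """Bit-serial CRC over a sequence of 0/1 bits (illustrative only).

    The generator corresponds to x^18 + x^5 + x^4 + 1; the x^18 term is
    implicit in the 18-bit register. Not a bit-exact SMPTE check.
    """
    crc = 0
    for bit in bits:
        feedback = ((crc >> 17) & 1) ^ bit    # compare register MSB with incoming bit
        crc = (crc << 1) & 0x3FFFF            # shift the 18-bit register
        if feedback:
            crc ^= poly                       # divide by the generator polynomial
    return crc

line_payload = [1, 0, 1, 1, 0, 0, 1, 0] * 8   # stand-in for one line's active samples
transmitted_crc = crc18(line_payload)

corrupted = list(line_payload)
corrupted[13] ^= 1                            # a single bit flipped in transit
assert crc18(corrupted) != transmitted_crc    # the recomputed CRC no longer matches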

The CRC counter is a great tool for determining the source of errors, but I cannot remember ever religiously monitoring it for every SDI signal in a station, whether that was a camera output, a production switcher input, or a transmission feed. There was probably more data loss going on in SDI than I would care to consider, as the CRC was only checked in an operational environment when a much more serious problem occurred.

With this in mind, dare we suggest, or even accept, that SDI is not perfect, and that there are many systematic and random errors in an SDI network? We are simply not aware of them because we cannot see every lost pixel. Even an entire line of disturbance is difficult to detect with the human eye. If we accept this, then why are we even attempting to strive for zero packet loss in IP and Ethernet networks? Apart from being impossible to achieve, is the comparison to SDI really justified in this context?

Transitioning to IP is a fantastic opportunity for broadcasters, and especially for engineers. Not only do we have the freedom to question existing workflows and practices, but we are also encouraged to dig deep into our own minds and question our established thinking. I believe this era in broadcasting offers more opportunity than we have seen for many years. It’s a fantastic time to be asking the question “why?”, and we should use it to cast aside our confirmation bias.

I will start the ball rolling and assert that achieving zero packet loss is impossible!