Software Infrastructure Global Viewpoint – October 2022
Cloud Variable Frame Rate

One of the challenges we face with cloud computing, and IP in general, is that neither natively lends itself well to tight timing constraints such as those found in video and audio sampling. But is there an alternative?
IP and cloud offer us the opportunity to do things differently. This isn’t just about changing workflows or improving flexibility by spinning up new server instances; it also gives us the scope and freedom to challenge how we have solved technical problems in the past.
Fundamentally, the human visual system is analog, and as far as we understand, the human brain doesn’t sample images from our eyes in the way a camera scans a scene. Television is itself an illusion: there are no moving pictures, just a sequence of still images played back quickly enough to create the impression of motion.
Cloud and IP systems are optimized for bursty data so that they can take advantage of statistical multiplexing. Whether in network routing, processor scheduling, or memory access, great efficiency savings can be achieved by bursting data.
Sampled systems do exist, such as those built on Real Time Operating Systems, but they only excel when the data being processed is delivered continuously. For example, an aircraft continuously determines its airspeed by sampling airflow sensors and knows the position of its ailerons at all times. In most human-interactive systems, however, events are generated at random.
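To make the contrast concrete, here is a minimal Python sketch, using illustrative numbers of my own choosing, of the two regimes: a time-invariant sampling loop of the kind an RTOS task might run, against randomly arriving events modelled as a Poisson process.

```python
import random

SAMPLE_PERIOD_MS = 10  # illustrative 100 Hz sensor loop

def sampled_system(duration_ms):
    """Time-invariant sampling: a reading every tick, changed or not."""
    return list(range(0, duration_ms, SAMPLE_PERIOD_MS))

def event_system(duration_ms, mean_interval_ms=10):
    """Event-driven arrivals: random gaps drawn from a Poisson process."""
    t, events = 0.0, []
    while True:
        t += random.expovariate(1.0 / mean_interval_ms)
        if t >= duration_ms:
            break
        events.append(round(t, 1))
    return events

print("sampled readings:", sampled_system(100))  # perfectly regular
print("event arrivals:  ", event_system(100))    # irregular and bursty
```

The first list is perfectly predictable, so the system can be provisioned exactly; the second is irregular by nature, which is precisely the kind of load IP and cloud infrastructure is built around.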
The central limit theorem (CLT) establishes that in many situations, when independent random variables are summed, their normalized sum tends towards a normal distribution. This is one of the reasons bursty systems can take advantage of statistical multiplexing to optimize delivery and processing. In a system driven by random events, such as users accessing a web page, provisioning worst-case network capacity for every user would be a massive waste of resources, and probably impossible to achieve. Hence IP and cloud systems are based on statistical multiplexing of resources, bandwidth, and processing.
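As a rough illustration of the saving, the following Python sketch, again with arbitrary example figures of my own, sums the bandwidth demand of many independent bursty users and compares worst-case per-user provisioning against a statistically multiplexed capacity that covers almost every trial.

```python
import random

N_USERS = 1_000     # illustrative figures only
PEAK_MBPS = 25.0    # worst-case demand of a single user
MEAN_MBPS = 2.0     # long-run average demand of a single user
TRIALS = 2_000

def user_demand():
    """A user is idle most of the time and occasionally bursts to peak."""
    return PEAK_MBPS if random.random() < MEAN_MBPS / PEAK_MBPS else 0.0

# Sample the aggregate demand many times; by the CLT, the sum of many
# independent users concentrates tightly around N_USERS * MEAN_MBPS.
aggregates = sorted(
    sum(user_demand() for _ in range(N_USERS)) for _ in range(TRIALS)
)
p999 = aggregates[int(0.999 * TRIALS)]  # capacity covering 99.9% of trials

print(f"worst-case provisioning:  {N_USERS * PEAK_MBPS:>9,.0f} Mb/s")
print(f"statistical (99.9th pct): {p999:>9,.0f} Mb/s")
```

With these example numbers the statistically provisioned capacity comes out at roughly a tenth of the worst-case figure, and the gap only widens as user counts grow, exactly as the CLT predicts.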
This is completely alien to traditional broadcast operations. SDI and AES represent anything but random variables: because we use time-invariant sampled video and audio, we always design for peak resource and bandwidth.
As I see it, we have one of two options: we fight the internet, IP, and cloud, forcing them to bend to our will and work with sampled video and audio, or we change how we work to exploit the massive potential of IP and cloud. One way of achieving this is by dropping the time-invariant constraint we continue to impose. This isn’t the 1950s; we don’t need to provide currents to energize scanning coils that deflect electron beams, or worry about blowing up the driver circuits by generating too much back EMF.
I’m not saying this is easy; it certainly isn’t. But I do sense that a lot of software-native people, that is, those driving IP, cloud, and internet innovation, are pulling their hair out when they look at how television works, and having reflected on their views, I think they have a good point. So, if we’re going to throw away the rule book, let’s do it properly! Audio isn’t exempt either. Imagine how much easier latency would be to manage, and how much more efficient video and audio processing would be, if we could relax our attitude to time and use variable sampling. This would create bursty data from the start.
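To give a flavour of what variable sampling might look like, here is a minimal sketch, entirely my own toy model rather than any established scheme, in which a frame is emitted only when accumulated scene change crosses a threshold: a static scene generates almost no data, while a busy scene bursts.

```python
import random

CHANGE_THRESHOLD = 0.1  # illustrative: emit a frame only on this much change

def scene_activity(t):
    """Toy model: the scene is static, then briefly busy."""
    return 0.8 if 40 <= t < 60 else 0.02

def variable_rate_sampler(duration_ticks):
    frames = []
    accumulated = 0.0
    for t in range(duration_ticks):
        accumulated += scene_activity(t) * random.random()
        if accumulated >= CHANGE_THRESHOLD:
            frames.append(t)      # emit a frame stamped with its time
            accumulated = 0.0     # reset the change accumulator
    return frames

frames = variable_rate_sampler(100)
print(f"{len(frames)} frames emitted over 100 ticks")
print("frame times:", frames)  # clustered in the busy interval: bursty data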