The Mechanical Turk was a chess-playing automaton exhibited around Europe from 1770 to 1854. Image public domain.
In the transition to IP, do you know what the product you might be buying really is? Is everything as it seems? Or will you draw back a curtain to find that nothing has changed, realize that the medicine is snake oil, or discover that you have been tricked and locked into an expensive proprietary platform - even if the product uses media industry standards? What questions should you ask?
What is your organization trying to achieve with a transition to IP? For example, using the internet rather than a phone line, it is possible to replicate fax machines like-for-like, yet I choose to communicate using e-mail, VoIP, instant messaging, web conferencing, social networks, cloud drives etc. Is a transition to IP - Internet Protocol - a like-for-like replacement of an existing signal-based infrastructure, or is it to benefit from business transformations enabled by IT - Information Technology? Is the purpose to update a content production facility for the opportunities and challenges of reaching a tech-savvy audience, or to update some aging plant? Riding the IT core means building - using software - on the same commodity infrastructure that Mi-goog-azon-book-er have used to democratize many industry sectors, including knowledge, communication, information, community, shopping, and media & entertainment. What questions should you be asking to determine what you need, when you need it, and whether what you buy is IP or IT?
I am a computer scientist. I’ve always looked at the technology from the perspective of ‘How can I use software to make a computer really sing?’. For example, if I gave my friends from the Victorian era - who have missed out on the history of broadcasting - the internet, a data centre and a smartphone, how would they build an efficient, data-driven content production and delivery infrastructure? What benefits and limitations would they see in some of the proposed solutions on offer, free from the legacy of signal-based thinking? Would a concept of broadcasting using media-specific standards emerge? Ultimately, do the users of this infrastructure care what lies under the hood or do they want to get on and use it, just like buying a ticket and getting on a train or plane?
If you are purchasing today and have to get on air, many media-specialist products are already available that offer pragmatic, IP-based solutions. To help characterize what you are trying to achieve in the longer term and analyse what is on offer, read on to arm yourself with some questions. This is followed by some steam-powered observations from my Victorian friends!
The Mechanical Turk was a chess-playing automaton exhibited around Europe from 1770 to 1854. Visitors were shown into a darkened room where a robot-like figure sitting behind a large desk appeared to play chess against them automatically. The hoax was exposed in 1820: drawing back the curtains under the desk revealed the lever-operating human opponent. What IP solutions appear to be something new but are a legacy solution dressed up to look like IT?
Replacing a signal, analogue or digital, by tunnelling it through network packets - appearing bit-for-bit the same at both ends - changes little. Arguably, the process itself adds complexity - like the levers of The Turk - as it becomes necessary to engineer around issues of transporting signals as packets and defend against any other network traffic. To detect this kind of solution:
- Does the switch and/or router have to be made by a media-specialist manufacturer rather than an IT vendor?
- Are packets and connections being managed as if point-to-point on a single cable, at data-link layer 2 in the OSI model?
- Is switching and routing taking place using software defined networking (SDN) rather than in application software?
- Has an existing product been wrapped in a layer of IP but is otherwise unchanged?
- Does the solution embed and de-embed data inside signals, such as ancillary data, where a simple and direct API using REST and JSON would be better?
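To illustrate the last point, here is a minimal sketch of per-frame metadata carried as a JSON document suitable for a plain HTTP/REST API, rather than embedded as ancillary data inside a signal. The field names and values are hypothetical, chosen only to show the shape of such a payload:

```python
import json

# Hypothetical per-frame metadata that would otherwise be embedded as
# ancillary data packets inside an SDI signal.
frame_metadata = {
    "source_id": "camera-1",          # assumed identifier scheme
    "timecode": "10:00:00:12",        # SMPTE-style HH:MM:SS:FF
    "frame_rate": {"numerator": 25, "denominator": 1},
    "captions": [{"line": 1, "text": "Hello, world"}],
}

# Serialise to JSON for transport over a plain HTTP request - any
# client in any language can read it back without a de-embedder.
payload = json.dumps(frame_metadata)
decoded = json.loads(payload)
print(decoded["captions"][0]["text"])  # prints "Hello, world"
```

Any general-purpose tool - a browser, a log aggregator, a test script - can read this directly, which is the point of preferring a simple API over embedding.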
Modern post-production, editing and file-based workflows demonstrate that compressed and uncompressed media - video and audio - can be processed on commodity computing infrastructure - CPU, GPU and storage. Even on a laptop, this processing is often several times faster than real time and can be at resolutions of 4K and above. Many low-cost expansion peripherals support the use of media-specialist signals - such as SDI and ASI - via standard IT interfaces - PCI, USB and Thunderbolt. Ten gigabit networking is becoming more commonplace and affordable. Modern software engineering is test-driven and delivers reliable software, e.g. the apps distributed through app stores.
Snake oil refers to any product with questionable or unverifiable quality or benefit. Given that professional media processing with commodity computing infrastructure is proven, are all media-specialist products providing a true benefit? Have all claims about reliability, performance, scale and hardware density in favour of media-specialist products been fairly demonstrated and measured against well-implemented software-only counterparts? Here are some questions to ask:
- Is a special codec or proprietary protocol required to facilitate the transition to IP? What’s wrong with the standard codecs and protocols that we already have?
- If a ten gigabit network is not sufficient, could the solution be light video compression, or simply adding another network card to the same machine?
- Modern software tooling, e.g. asynchronous programming languages and test & delivery automation, simplifies the writing of reliable and performant software that fully utilizes all CPU and/or GPU processing cores. The expertise to use these tools is hard to come by. Does your supplier have it?
- Have claims about reliability, energy efficiency and performance been measured?
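A back-of-the-envelope check of the ten gigabit question above, assuming an uncompressed 1080p25, 10-bit 4:2:2 stream and ignoring packet overheads:

```python
# Uncompressed 1080p25, 10-bit 4:2:2: 10 bits of luma per pixel, plus
# Cb and Cr at half horizontal resolution (5 + 5 bits per pixel average).
width, height, fps = 1920, 1080, 25
bits_per_pixel = 10 + 5 + 5            # Y + Cb/2 + Cr/2
bits_per_frame = width * height * bits_per_pixel
stream_gbps = bits_per_frame * fps / 1e9
print(f"{stream_gbps:.2f} Gbit/s per real-time stream")  # ~1.04

# Headroom on a 10 Gbit/s link, before protocol overheads:
print(f"{10 / stream_gbps:.1f}x faster than real time")  # ~9.6x
```

So even uncompressed HD leaves several-times-real-time headroom on commodity ten gigabit networking, which is the baseline against which specialist claims should be measured.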
Sleight of hand?
It looks like IT, smells like IT, but is it actually IT? IT has a language that includes terms such as: containers, cloud, virtualized, apps, app stores, dev-ops, continuous delivery, open source, APIs, data model, web-based, scale-up, scale-out, agile, software-defined, bandwidth etc. Each of these terms is connected to the others and has a specific meaning for the standard IT platform. Change the meaning of one or two terms and you have diverged from THE platform. At what cost? Does the use of a term in the description of a media-specialist product convey the standard meaning? Or are media-specific wheels being reinvented? Or are you being locked in to something proprietary and expensive?
Here are some things to watch out for:
- What value is a single-vendor app store providing to its users? Why can’t I use an IT platform-specific app store (e.g. Apple, Google, Microsoft)?
- Is a specialist media processing expansion card or other form of dedicated hardware platform really required? What is the knock-on operational cost?
- Can you audit and secure a significant deviation from the Mi-goog-azon-book-er platform that is required by a media-specialist product for use with existing enterprise IT? For example, non-standard operating system kernels, or an application writing directly to network card RAM buffers?
- Internet transport (TCP/IP) and web security (HTTPS) are the result of 40+ years of ongoing research and optimization. Are any aspects of a media-specialist product overlooking, repeating or ignoring this work?
My Victorian friends have dived straight in at the internet and missed the history of broadcasting. They have made a few observations and this is what they have to say:
- “Our experiments show that, by tuning the bandwidth-delay product, moving files over 10Gbps networks via FTP, HTTP and HTTPS can successfully saturate the network, which for HD uncompressed video is several times faster than real time. Although latency per frame increases by adding more streams in parallel, so does overall bandwidth usage and throughput.
- The same cannot be said for UDP/RTP, where a CPU limit is hit and packets start to be lost. Perhaps different levels of investment are being made in optimizing various protocols for the x86 platform?
- At its maximum speed in a vacuum, light travels about 30cm (11.8in) in a nanosecond ... and so do network packets! Content production infrastructure needs to work like clockwork, but if you don’t own that infrastructure, you don’t know the signal propagation delays. You had better use a data-driven model for time, like the JT-NM reference architecture.
- Companies working with big data are processing that data using streams - reactive streams - just like adaptive bitrate media streams. Could we exploit some of that technology, monitoring, measuring and dynamically responding to current resource usage with back pressure?
- The Internet of Things is wiring together devices and building mixed virtual and physical infrastructure. Could we use those ideas to build software-defined content production infrastructure?
- Where is the value in creating the physical infrastructure or selling software? Mi-goog-azon-book-er have built web-based infrastructure by sharing open-source libraries that they monetize in creative ways by providing high-level services.
- The internet allows me to deliver a personalized experience to a viewer or group of similar viewers. Could we democratize content production by having a means for creating and delivering professional quality content in and for the web, spinning up infrastructure on demand?”
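The bandwidth-delay product and propagation-delay observations above can be checked with a few lines of arithmetic. The 10ms round-trip time and 1000km fibre run are illustrative figures, not from the text:

```python
# Bandwidth-delay product: the amount of data "in flight" that a TCP
# window must cover to keep a link saturated. Figures are illustrative.
link_bps = 10e9                 # 10 Gbit/s link
rtt_s = 0.010                   # 10 ms round-trip time (assumed)
bdp_bytes = link_bps * rtt_s / 8
print(f"window needed: {bdp_bytes / 1e6:.1f} MB")   # 12.5 MB

# Propagation delay alone: light covers ~30 cm per nanosecond in a
# vacuum; in fibre it is roughly a third slower (~20 cm/ns), so a
# 1000 km run adds about 5 ms each way before any switching delay.
fibre_m_per_s = 2e8
one_way_delay_ms = 1_000_000 / fibre_m_per_s * 1000
print(f"{one_way_delay_ms:.0f} ms per 1000 km")     # 5 ms
```

Default TCP window sizes are far smaller than 12.5 MB, which is why saturating a long fat network requires the tuning the Victorians describe.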
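The back-pressure idea can also be sketched with a bounded queue - a toy illustration in Python’s asyncio, not a full reactive-streams implementation:

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # A bounded queue applies back pressure: when the consumer falls
    # behind, "await queue.put()" suspends the producer instead of
    # letting frames pile up in memory.
    for frame_number in range(10):
        await queue.put(frame_number)
    await queue.put(None)               # sentinel: end of stream

async def consumer(queue: asyncio.Queue) -> list:
    received = []
    while (frame := await queue.get()) is not None:
        received.append(frame)
    return received

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=3)  # small buffer
    _, received = await asyncio.gather(producer(queue), consumer(queue))
    return received

print(asyncio.run(main()))   # all ten frames arrive, in order
```

The same pattern - measure downstream capacity, slow the upstream - is what reactive-streams libraries and adaptive bitrate players do at scale.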
Dr Richard Cartwright is CTO and founder, Streampunk Media Ltd