The computer industry has shown the way to a more efficient video production future.
After a year like 2020, predicting the future is scary business. However, there are several leading-edge technologies, many borrowed from the IT and consumer-facing industries, that look certain to make a significant impact on video production and broadcasting in 2021. Here are some, in no particular order, that will see continued implementation and streamline production and distribution workflows. These new tools have already begun to alter the way video is produced and distributed, helping the industry move forward and media businesses grow, and that’s certain to continue in new and exciting ways.
Large media organizations are streamlining their internal operations by migrating to remote signal processing in the cloud. It’s been happening for the past two years and will continue to get better and more reliable. Using the cloud saves time and money in equipment investment while also reducing inefficient or unnecessarily redundant workflows. In an era dominated by the COVID-19 pandemic, remote operations have also provided a technical foundation that has helped bring back sports to fans sitting at home while also keeping production crews safe.
A cloud-based technical workflow has proven to be extremely helpful to media organizations looking to scale up new interactive video services quickly. Global channels that used to take months to launch are now online in days. Virtualization has also facilitated the automation of many mundane tasks and enabled studios to be more efficient in how they use their existing on-premise hardware systems.
There’s also much to be said for cloud elasticity and its ability to absorb spikes in processing demand. We live in a world of unpredictable consumer demand, so companies have to be ready and able to pivot their services quickly to meet it.
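As a toy illustration of that elasticity, here is a minimal Python sketch of the scale-up/scale-down decision. The capacity figure and the `instances_needed` helper are hypothetical; a real deployment would rely on the cloud provider’s autoscaler rather than hand-rolled logic like this:

```python
# Minimal sketch of elastic scaling for cloud transcoders.
# The capacity numbers below are invented for illustration.

VIEWERS_PER_INSTANCE = 5_000  # assumed capacity of one transcoder instance
MIN_INSTANCES = 2             # keep a warm pool for reliability

def instances_needed(concurrent_viewers: int) -> int:
    """Size the transcoder pool to the current audience."""
    # Round up so a partial instance's worth of viewers still gets capacity.
    needed = -(-concurrent_viewers // VIEWERS_PER_INSTANCE)
    return max(MIN_INSTANCES, needed)

# A demand spike (e.g. a live sports event) triggers a quick scale-up,
# and the pool shrinks again once the audience leaves.
for viewers in (1_000, 80_000, 250_000, 9_000):
    print(viewers, "viewers ->", instances_needed(viewers), "instances")
```

The point of the sketch is the elasticity itself: capacity follows the audience curve instead of being provisioned for the worst case all year round.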
While consumers see 5G as their savior for better video reception on their cellphones, professional video production will also benefit immensely. We’re talking about low-latency connections and guaranteed bandwidth: 5G delivers speeds 10 to 20 times faster than current 4G networks. And while 4G allows mobile users to receive SD and HD video, 5G will allow more users to simultaneously receive higher-quality video.
In addition, due to the network architecture of 5G, the location of client devices can be triangulated with significantly higher accuracy than on current 4G networks. Location-based applications that require high precision will operate more effectively in 5G environments than they can today. In this way, news directors can track crews in the field and route them closer to a news event using GPS applications.
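The crew-tracking idea boils down to a distance calculation over GPS coordinates. This sketch uses the standard haversine great-circle formula; the crew names and coordinates are made up for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS coordinates."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Which field crew is closest to a breaking story? (coordinates are illustrative)
story = (40.7580, -73.9855)               # Times Square
crews = {"crew_a": (40.7484, -73.9857),   # near the Empire State Building
         "crew_b": (40.6892, -74.0445)}   # near the Statue of Liberty
closest = min(crews, key=lambda c: haversine_km(*crews[c], *story))
print(closest)  # crew_a
```

The higher positioning accuracy of 5G improves the inputs to a calculation like this; the math itself is unchanged.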
Another application for 5G in live programming is in providing a high-capacity wireless link in areas where wired infrastructure is unavailable. Today, coverage of live events such as cycling, marathons, or golf that occur over large or remote areas is problematic. Camera placement is often determined by the availability of wired connections that provide the reliable connections necessary for broadcast. This limits the number of options available and hinders a producer’s ability to create a high-quality broadcast.
The technology also potentially allows greater flexibility for high-quality broadcasts. Placement of broadcast cameras can be determined by optimal viewing rather than accessibility, providing producers with greater coverage and more options for effective storytelling. 5G will also support data-intensive resolutions such as 4K (and eventually 8K) that will be the future of broadcast video.
Machine learning, a subset of artificial intelligence, is a technology that is now being applied almost everywhere there’s an opportunity to improve or automate a process – including live video. There are many opportunities for machine learning applications in live video scenarios of all sizes, including large multi-camera events, smaller single-camera livestreams, and even lectures at educational institutions.
Many different kinds of software programs can adopt machine learning, such as video production apps, video animation tools, and encoding software within live production systems.
For example, ML algorithms can streamline the virtual set process by automatically adding digital elements (or removing physical elements) based on detected visuals, such as shapes, depth of field, and static or dynamic images.
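To make the idea concrete, here is a deliberately simplified sketch of detecting a visual (a green backdrop) and swapping in digital content. A fixed threshold stands in for what production systems do with trained models; the pixel values and the `is_backdrop` helper are invented for illustration:

```python
# Toy virtual-set sketch: classify pixels as backdrop, then composite
# a digital background in their place. Real systems use learned models,
# not fixed thresholds like this.

def is_backdrop(pixel, threshold=100):
    """Treat strongly green pixels as green-screen backdrop."""
    r, g, b = pixel
    return g > 2 * max(r, b) and g > threshold

def composite(frame, background):
    """Swap backdrop pixels for the digital background, keep the rest."""
    return [[bg if is_backdrop(px) else px
             for px, bg in zip(row, bg_row)]
            for row, bg_row in zip(frame, background)]

frame = [[(0, 240, 0), (200, 180, 160)],   # backdrop, presenter skin tone
         [(10, 250, 5), (30, 30, 30)]]     # backdrop, dark jacket
virtual_set = [[(50, 50, 200)] * 2] * 2    # a blue digital set
print(composite(frame, virtual_set))       # backdrop replaced, presenter kept
```

An ML-driven system replaces the hand-written `is_backdrop` rule with a model that has learned which shapes, depths, and regions belong to the set and which belong to the talent.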
Machine learning can also be applied to automatically aggregate social media comments into a stream, allow presenters to respond to comments live, and automatically route replies to the appropriate social platform. It can also simplify the addition of dynamic content to a live stream, such as a Twitter hashtag conversation or a news feed discussion. The code can detect and learn relevant keywords across specific digital media channels and dynamically include the content into the livestream.
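A minimal sketch of that aggregate-and-route idea might look like the following. The platform names, comments, and fixed keyword list are illustrative stand-ins for what a trained relevance model would detect:

```python
# Sketch of aggregating social comments into one stream and remembering
# each comment's origin so replies can be routed back to the right platform.
# The keyword list stands in for learned relevance detection.

comments = [
    {"platform": "twitter", "text": "Loving this stream #GameDay"},
    {"platform": "youtube", "text": "What camera is this shot on?"},
    {"platform": "twitter", "text": "Score update please! #GameDay"},
    {"platform": "youtube", "text": "first!!"},
]

KEYWORDS = {"#gameday", "score", "camera"}

def relevant(comment):
    """Keep comments that mention a tracked keyword."""
    words = comment["text"].lower().split()
    return any(w.strip("?!.,") in KEYWORDS for w in words)

# Aggregate relevant comments into one on-screen stream...
stream = [c for c in comments if relevant(c)]
# ...and record each origin so a presenter's reply goes back to that platform.
replies_to = [c["platform"] for c in stream]
print(len(stream), replies_to)  # 3 ['twitter', 'youtube', 'twitter']
```

In a live production system the learned keyword model would also evolve during the broadcast as the conversation shifts.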
The technology has also been used to automatically create highlight reels within minutes during live sporting events, a job traditionally done by a dedicated operator. With proper training, computers can learn the best shots to take, based on past successful productions, and automatically go to those shots when certain learned prompts are recognized during the event. Don’t expect the NFL Super Bowl to be done this way, but for hundreds of “B”- and “C”-level college and high school telecasts, this holds immense promise and could ensure that more types of sports get covered on TV or the Internet.
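The shot-selection idea can be sketched as a simple threshold over per-second “excitement” scores, which here stand in for the output of a trained model (crowd noise, scoreboard changes, rapid motion, and so on). The scores and the `highlight_segments` helper are invented for illustration:

```python
# Sketch of automatic highlight selection: find the time ranges where a
# model's excitement score stays high, and clip those for the reel.

scores = [0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.8, 0.85, 0.9, 0.2]  # one value/second

def highlight_segments(scores, threshold=0.7):
    """Return (start, end) second ranges where excitement stays high."""
    segments, start = [], None
    for t, s in enumerate(scores):
        if s >= threshold and start is None:
            start = t                      # a highlight begins
        elif s < threshold and start is not None:
            segments.append((start, t))    # the highlight ends
            start = None
    if start is not None:
        segments.append((start, len(scores)))
    return segments

print(highlight_segments(scores))  # [(2, 4), (6, 9)]
```

Everything hard lives in producing good scores, which is exactly what the system learns from past successful productions.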
100 Gb/s Networks
These days many productions are being completed over a distributed infrastructure. The need to work more remotely requires more bandwidth, especially for the highest quality media feeds transported over a robust, resilient, optical network for mission-critical contribution performance. Enter 100 Gigabit Ethernet (100GbE) IP switching.
There was a time, not too long ago, when 100 Gbps data rates were only considered for IT data centers moving large amounts of financial and military data. With the growth of media and the urgent need for remotely controlled production infrastructures, 100 Gb/s is no longer a far-off dream for content distribution system engineers; it has become a slowly but steadily emerging contribution reality that meets the capacity needs of today’s bandwidth-hungry media industry.
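Some back-of-the-envelope arithmetic shows why 100 Gb/s matters for contribution. The rates below are nominal uncompressed 4:2:2 10-bit video with no audio, ancillary data, or transport overhead, so treat the results as rough upper bounds rather than a link budget:

```python
# Rough capacity check: how many uncompressed contribution feeds fit
# in a 100 Gb/s pipe? (4:2:2 10-bit averages ~20 bits per pixel.)

def video_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

hd = video_gbps(1920, 1080, 60)    # ~2.5 Gb/s (why HD fits in 3G-SDI)
uhd = video_gbps(3840, 2160, 60)   # ~10 Gb/s (why UHD needs 12G-SDI)

link = 100  # Gb/s
print(f"HD 1080p60:  {hd:.1f} Gb/s -> ~{int(link // hd)} feeds per 100GbE")
print(f"UHD 2160p60: {uhd:.1f} Gb/s -> ~{int(link // uhd)} feeds per 100GbE")
```

Even a handful of uncompressed UHD cameras saturates smaller links, which is why remote production at scale keeps pushing operators toward 100GbE.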
Indeed, 100 Gbps deployments will become more common in 2021, with some of the biggest infrastructure providers (like AT&T, Verizon and others) rolling out 100 Gb/s topologies.
The global pandemic has further accelerated the need to deploy bandwidth intensive solutions now, not down the road, especially for sending the highest quality video and other media signals from remote destinations such as high profile sporting and entertainment venues to distant production/broadcast studios. This trend will continue in 2021, as the demand for more resources will remain high.
Deep Learning/Neural Networks
There are many applications in video (and movie) production where “deep learning” has been applied, now that the media industry has a better understanding of what deep learning actually means.
Basically, deep learning is a subset of machine learning in which artificial neural networks, algorithms loosely inspired by the neurons of the human brain, learn from large amounts of data. A neural network is a type of machine learning model that, via a set of algorithms, allows a computer to learn and improve upon the task at hand. When we make these artificial brains more complex, stacking them many layers deep, we call that deep learning.
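The “learn from data” loop can be shown at its absolute smallest: a single artificial neuron nudging its one weight by trial and error, which is the same loop deep networks run across millions of weights and many layers. The data, learning rate, and rule being learned are all invented for illustration:

```python
# Minimal learning-from-data sketch: one neuron, one weight, trained
# by repeated guess-and-correct (gradient descent on squared error).

# Training examples for a simple hidden rule (target = 2 * input).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the neuron's single weight, starting from a guess
lr = 0.05  # learning rate: how big each correction step is

for epoch in range(200):
    for x, target in data:
        prediction = w * x           # forward pass
        error = prediction - target  # how wrong was the guess?
        w -= lr * error * x          # nudge the weight to reduce the error

print(round(w, 3))  # 2.0: the neuron has learned the rule from the data
```

A deep network is this idea at scale: many such weights arranged in layers, corrected together, which is what lets it map all the pixels of one frame to all the pixels of another.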
Many are calling deep learning the new frontier for the video industry, as it allows video professionals to do things automatically that would have taken weeks of work in the past, as well as complete tasks that wouldn't have been possible at all. For example, it allows a computer to take all the pixels in a frame of video and output something equally complex, such as all the pixels in a new altered frame of video. It may be shown frames with unwanted grain as input, and have its output compared to clean frames. By trial and error, it learns how to remove the grain from the input. As more and more images are passed through it, it can learn how to do the same thing for images that it was never shown.
Deep learning can also be used to match generated speech with human speech, so text-to-speech programs sound more natural. In a similar task, it is used by translation companies to teach computers how to translate from one language to another.
The use of all of these technologies (and others) has only scratched the surface of what is possible, and that will only improve in 2021. As with any new technology, the video industry and tech experts must come together to develop standards for how tomorrow’s new normal might look. However, with the right approach, they will all take film and television production and distribution to a whole new level.
If 2020 taught us anything, it’s how to adapt when traditional methods don’t work. The industry has done that and more, but don’t expect production and distribution methods to drastically change overnight. While larger productions will likely revert to tried-and-true methods as soon as they’re able, the skills and efficiencies gained this year provide a good jumping-off point for the new year and a whole lot of new video services.
The future looks bright… as long as your viewing device is turned on.