Taking graphics to the next level

As graphics technology improves, it becomes increasingly difficult to tell the difference between real and software-generated imagery.

Graphic elements serve many purposes. The line between content and promotion has become so blurred that consumers confuse the two, and that is by design. Viewers tend to lower their critical filter for content; the trick is to keep it lowered for advertising and promotion as well.

One thing we can do is analyze the graphics in content (the structure of graphical elements is quite different from that of real-world objects) and apply similar templates (colors, fonts, transparency, etc.) to our advertising or promotional material. This works well for sporting events, but it raises interesting intellectual-property questions when advertising borrows a show's music and graphics ("Who Wants to Be a Millionaire") to sell soap.
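As a minimal sketch of the template-matching idea, not any vendor's actual pipeline, extracting a show's dominant colors so a promo template can reuse them could look like this. The frame data and function name are hypothetical:

```python
from collections import Counter

def dominant_palette(pixels, n=3, step=32):
    """Quantize RGB pixels to a coarse grid and return the n most common colors."""
    quantized = [tuple((c // step) * step for c in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(n)]

# Hypothetical frame sample: mostly team-blue pixels with white accents
frame = [(10, 20, 200)] * 70 + [(250, 250, 250)] * 20 + [(200, 30, 30)] * 10
print(dominant_palette(frame, n=2))  # the two colors a promo template would inherit
```

A real system would sample frames from the decoded video and likely also extract fonts and transparency settings; the principle of deriving promo styling from content stays the same.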

On-air graphics engines are the key to this kind of functionality. Combining data mining with real-time pixel manipulation, these systems have changed our expectations about how news and sports should be presented. The results can be found in all types of non-drama programming, from dancing to surviving.

In the last few years a whole ecosystem has grown up around these systems. Companies specializing in template design build elements to match a show's or station's needs. Modular framework applications allow logic to be connected and applied to real-time data streams.

Social network filtering algorithms allow tweets and likes to be quantified and graphically represented. What added value do these systems offer? In the background is a script-driven 3D renderer backed by a database and a set of SQL statements taking advantage of open APIs to existing data sources. The size of the market has brought many vendors into the field, so while "roll your own" may be an option, today it just does not make sense. Applications range from the simple graphics any station would use in the daily lineup to the Emmy-nominated real-time interactive game show "Web vs Promi".
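The quantifying step is simpler than it sounds. As an illustrative sketch (the poll totals and function name are invented, not any social platform's API), raw engagement counts only need to be scaled into values the renderer can draw:

```python
def bar_heights(counts, max_px=400):
    """Scale raw engagement counts to pixel heights for an on-screen bar chart."""
    peak = max(counts.values()) or 1
    return {name: round(value / peak * max_px) for name, value in counts.items()}

# Hypothetical poll totals pulled from a social media API
votes = {"Team A": 1500, "Team B": 600}
print(bar_heights(votes))  # heights the 3D renderer would animate to
```

In production the counts would arrive as a live stream and the renderer would tween between successive heights, but the data-to-pixels mapping is this small.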

The graphics engine software provided with most channel-in-a-box systems can provide all the functions required for a normal programming day, but getting the most out of it requires talent and knowledge. You may be better off outsourcing the design and programming tasks to one of the many firms providing these services.

The hardware behind all this is, in most cases, an off-the-shelf graphics card from either AMD or Nvidia, so all manufacturers start from the same silicon. These GPUs vary in price from $200 to around $1K, a small percentage of the total price of a channel-in-a-box solution.

There are two parts to developing state-of-the-art graphics: programming and hardware. Real-time photorealistic rendering at 4K 120fps remains somewhere in the future, but the hardware is getting better every day, and gaming engines are driving the development.

Lucasfilm, according to chief technology strategy officer Kim Libreri, has begun utilizing video game technology for feature production. "We think that computer graphics are going to be so realistic in real time computer graphics that, over the next decade, we'll start to be able to take the post out of post-production; where you'll leave a movie set and the shot is pretty much complete," Libreri said. (BAFTA 2013) Broadcast graphics has always been about "real time", currently 30 fps at 1920x1080, so the question becomes: what is possible, and how can we integrate it into our programming?
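The "real time" constraint can be put into numbers. A short back-of-the-envelope sketch (plain arithmetic, no vendor specifics) shows why 4K at 120fps is so much harder than today's HD at 30fps:

```python
def frame_budget_ms(fps):
    """Milliseconds the renderer has to finish one frame."""
    return 1000 / fps

def pixels_per_second(width, height, fps):
    """Raw pixel throughput a format demands."""
    return width * height * fps

hd_today = pixels_per_second(1920, 1080, 30)     # current broadcast real time
uhd_future = pixels_per_second(3840, 2160, 120)  # the 4K 120fps target

print(f"Budget at 30 fps:  {frame_budget_ms(30):.1f} ms per frame")
print(f"Budget at 120 fps: {frame_budget_ms(120):.1f} ms per frame")
print(f"4K@120 needs {uhd_future / hd_today:.0f}x the pixel throughput of HD@30")
```

Sixteen times the pixels in a quarter of the time per frame: that gap is what GPU generations and game-engine optimizations are steadily closing.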

Replacing the actor with software
Before you conclude that actors cannot be replaced, take a look at this video. You'll be hard pressed to tell that's not a real live human head. The image was rendered on an NVIDIA GTX Titan card and posted on the Maximum Resolution Games View YouTube channel, which specializes in high-res imagery and has plenty of examples of software looking remarkably realistic.

The ability to create realistic avatars opens up many possibilities, but these are not yet off-the-shelf solutions. Mastering the engineering challenges implicit in a new programming paradigm, i.e. real-time interaction with user-generated content and live avatars, will be rewarded with market success.

Broadcasters have the unique advantage of being able to reach a large audience simultaneously in a cost-effective manner. Giving viewers the possibility to interact will require simultaneous participation, thus playing to broadcast's strengths. Creating the parameters of a live event without the overhead is good for everybody in the broadcast value chain.

Looking down the road for the next year or so, Thomas Molden, a player in automated graphics from day one, when Computersports presented the first working virtual studio, sees some trends to keep an eye on. "Second and third screens are going to be major users of data-generated graphics. The individual user profiles available will enhance the first screen experience."

These social interactions make broadcasting a must if the viewer wants to be part of the action. Thomas also sees lots of opportunities in using the intelligence at the display device. “Generating graphic overlays at the display will allow for individualization even on the first screen.” Would you send a clean feed or reserve certain parts of the picture for additional information?
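As a minimal sketch of what overlay generation at the display device involves (the function and pixel values are illustrative, not any STB's actual API), individualization comes down to a per-pixel alpha blend of a graphic onto the clean programme feed:

```python
def composite(bg, fg, alpha):
    """Blend an overlay pixel (fg) onto the clean-feed pixel (bg); alpha in 0..1."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# A grey programme pixel under a half-transparent yellow ticker graphic
print(composite((120, 120, 120), (255, 255, 0), 0.5))
```

Sending the clean feed plus a separate graphics stream lets each display make this blend per viewer, which is exactly what makes first-screen individualization possible.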

Thomas suggests a third option. "Screens get bigger and support higher resolutions, think 4K. These displays have little or no content in native resolution. Why not use the extra resolution and screen space for additional information instead of just blowing up the HD feed?" New televisions and STBs have the required intelligence on board, so this sounds like a good way to give early adopters some added value. The lines between "content" and "graphics" are going to disappear!
