Taking graphics to the next level

As graphics technology gets better, it becomes more difficult to tell the difference between real and software-generated imagery.

Graphic elements serve many purposes. The line between content and promotion has become so blurred that consumers confuse the two, and that is by design. Viewers tend to lower their critical filter for content; the trick is to keep it lowered for advertising and promotion as well.

One thing we can do is analyze the graphics in content (the structure of graphical elements is quite different from that of real-world objects) and apply similar templates (colors, fonts, transparency, etc.) to our advertising or promotional material. This works well for sporting events, but it will be interesting to see where intellectual-property rights intersect advertising when we use similar music and graphics (“Who Wants To Be A Millionaire”) to sell our soap.

On-air graphics engines are the key to this kind of functionality. Combining data mining with real-time pixel manipulation, these systems have changed our expectations of how news and sports should be presented. The results can be found in all types of non-drama programming, from dancing to surviving.

In the last few years a whole ecosystem has grown up around these systems. Companies specializing in template design build elements to match a show’s or station’s needs. Modular framework applications allow logic to be connected and applied to real-time data streams.

Social network filtering algorithms allow tweets and likes to be quantified and graphically represented. What added value do these systems offer? In the background is a script-driven 3D renderer backed by a database, a set of SQL statements, and open APIs to existing data sources. The size of the market has brought many vendors into the field, so while “roll your own” may be an option, today it simply does not make sense. Applications range from the simple graphics any station would use in the daily lineup to the Emmy-nominated real-time interactive game show “Web vs Promi”.
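The database-plus-renderer pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the `mentions` table, the `lower_third_bar` template name, and the normalized `bar_fill` parameter are assumptions rather than any vendor’s actual API. The idea is simply that an SQL query quantifies social mentions and the result is mapped onto a template dictionary an on-air renderer could consume.

```python
import json
import sqlite3

def mention_score(db, hashtag):
    """Count mentions of a hashtag; a real system would query a social API."""
    cur = db.execute("SELECT COUNT(*) FROM mentions WHERE tag = ?", (hashtag,))
    return cur.fetchone()[0]

def build_overlay(hashtag, score, max_score=100):
    """Fill a (hypothetical) template the on-air graphics engine renders."""
    return {
        "template": "lower_third_bar",          # assumed template name
        "label": hashtag,
        "bar_fill": min(score / max_score, 1.0),  # normalized 0..1 bar length
    }

if __name__ == "__main__":
    # Stand-in for the live data source: an in-memory mentions table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE mentions (tag TEXT)")
    db.executemany("INSERT INTO mentions VALUES (?)",
                   [("#gameshow",)] * 42 + [("#other",)] * 7)
    overlay = build_overlay("#gameshow", mention_score(db, "#gameshow"))
    print(json.dumps(overlay))
```

A production system would replace the SQLite stand-in with the station’s data feeds and push the resulting dictionary to the renderer over whatever control protocol the graphics engine exposes.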

The graphics engine software provided with most channel-in-a-box systems can provide all the functions required for a normal programming day, but getting the most out of it requires talent and knowledge. You may be better off outsourcing the design and programming tasks to one of the many firms providing these services.

The hardware behind all this is, in most cases, an off-the-shelf graphics card from either AMD or Nvidia, so all manufacturers start from the same point. These GPUs vary in price from $200 to around $1,000, a small percentage of the total price of a channel-in-a-box solution.

There are two parts to developing state-of-the-art graphics: programming and hardware. Real-time photorealistic rendering at 4K/120fps remains somewhere in the future, but the hardware is getting better every day, and gaming engines are driving the development.

According to chief technology strategy officer Kim Libreri, Lucasfilm has begun using video game technology for feature production. "We think that computer graphics are going to be so realistic in real time computer graphics that, over the next decade, we'll start to be able to take the post out of post-production; where you'll leave a movie set and the shot is pretty much complete," Libreri said (BAFTA 2013). Broadcast graphics has always been about “real time”, currently 30fps at 1920x1080, so the question becomes: what is possible, and how can we integrate it into our programming?

Replacing the actor with software
Before you dismiss the idea that actors could be replaced, take a look at this video. You’ll be hard-pressed to tell that it is not a real, live human head. The demo runs on an NVIDIA GTX Titan card and was published on the Maximum Resolution Games View YouTube channel, which specializes in high-resolution imagery and has plenty of examples of software looking remarkably realistic.

The ability to create realistic avatars opens up many possibilities, but these are not technically off-the-shelf solutions. Mastering the engineering challenges implicit in a new programming paradigm, i.e. real-time interaction with user-generated content and live avatars, will be rewarded with market success.

Broadcasters have the unique advantage of being able to reach a large audience simultaneously in a cost-effective manner. Giving viewers the chance to interact will require simultaneous participation, thus playing to broadcast’s strengths. Creating the feel of a live event without the overhead is good for everybody in the broadcast value chain.

Looking down the road for the next year or so, Thomas Molden, a player in automated graphics from day one, when Computersports presented the first working virtual studio, sees some trends to keep an eye on. “Second and third screens are going to be major users of data-generated graphics. The individual user profiles available will enhance the first-screen experience.”

These social interactions make broadcast a must if the viewer wants to be part of the action. Thomas also sees plenty of opportunity in using the intelligence built into the display device. “Generating graphic overlays at the display will allow for individualization even on the first screen.” Would you send a clean feed, or reserve certain parts of the picture for additional information?

Thomas suggests a third option. “Screens get bigger and support higher resolutions; think 4K. These displays have little or no content in native resolution. Why not use the extra resolution and screen space for additional information instead of just blowing up the HD feed?” New televisions and STBs have the required intelligence on board, so this sounds like a good way to give early adopters some added value. The lines between “content” and “graphics” are going to disappear!
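The arithmetic behind that third option is simple enough to sketch. Assuming, purely for illustration, a 3840x2160 panel showing a 1920x1080 feed pixel-for-pixel in the center, the leftover border tiles neatly into four rectangles the display’s own intelligence could fill with data-driven graphics:

```python
# Illustrative layout math for the "don't just blow up the HD feed" idea.
# All dimensions and names are assumptions, not a real device API.

UHD = (3840, 2160)
HD = (1920, 1080)

def layout(panel=UHD, feed=HD):
    """Center the feed unscaled; return (feed_rect, overlay_rects).

    Rectangles are (x, y, width, height) in panel pixels. The four
    overlay rectangles tile the border without overlapping.
    """
    fx = (panel[0] - feed[0]) // 2
    fy = (panel[1] - feed[1]) // 2
    feed_rect = (fx, fy, feed[0], feed[1])
    overlays = [
        (0, 0, fx, panel[1]),                                   # left column
        (fx + feed[0], 0, panel[0] - fx - feed[0], panel[1]),   # right column
        (fx, 0, feed[0], fy),                                   # top strip
        (fx, fy + feed[1], feed[0], panel[1] - fy - feed[1]),   # bottom strip
    ]
    return feed_rect, overlays

if __name__ == "__main__":
    feed, panels = layout()
    print("feed:", feed)
    for rect in panels:
        print("overlay region:", rect)
```

In this sketch the centered HD feed lands at (960, 540), leaving two 960-pixel columns and two 540-pixel strips for tickers, stats, or personalized information.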
