Dynamic templates could allow graphics to be accurate and appropriate for a variety of screens.
Aesthetically pleasing 3D titles and graphics are integral to providing the wow factor that keeps today’s broadcast viewers glued to the screen. These visual elements—including 3D and 2D titles, animated graphics and real-time data-driven overlays—provide the vital contextual information that enables viewers to fully appreciate and understand the events they are watching.
The eye candy factor goes beyond live coverage of sporting events, breaking news, business news, elections and special events coverage to include commercial spots and promos. And it also transcends the traditional television medium to encompass streaming media, video on demand, social media and other online distribution.
The reality is that today’s video consumers want to watch content on any platform using their preferred display device—ranging from small mobile phones and tablets to big-screen HDTVs and 4K-resolution UHDTV sets.
EXPANDING VIEWER OPTIONS
In the U.S. alone, 50 million people now view video content on their mobile phones, and more than 15% of all online video hours globally are watched on tablets and smartphones, according to a 2015 BI Intelligence report. And that trend will only continue. In fact, the same research shows that video consumption on smaller phone screens is growing faster than video consumption on larger mobile devices, such as tablets.
This means that, rather than preparing titling and graphics for the unique specifications of a single medium—TV—content creators and broadcasters must now quickly optimize the titling and graphics in their shows and ads to suit the myriad of output resolutions, screen sizes and aspect ratios today’s savvy, mobile viewers require, without compromising the viewing experience.
Whether for live television, streaming media or video on demand, the prospect of creating rich, animated graphics and titles—that look like they were designed for whatever platform or device the viewer is using—now presents significant creative and technical challenges for broadcasters and content creators.
We’re reaching the point as an industry where it’s no longer cost-efficient, practical or profitable to have people manually recreate, modify and manage all of the titling and graphics variations needed for all the different platforms, channels and devices. If a live show airs titles and graphics with incorrect spellings, timing, positioning or other problems, viewers are more forgiving because they understand it’s live. But they’re not as forgiving when mistakes occur in what should be a finished video.
We believe that our industry needs to adopt a platform that can automate the entire file-based titling and graphics workflow in order to generate all the required iterations—without manual intervention—while maintaining quality control. As mentioned earlier, on-screen visual elements can encompass full-screen graphics and bulleted text, lower third supers, scoreboards and leaderboards, tickers, crawls, animated logos, data-driven charts, and more.
To be viable, this automated workflow would need to support key industry interoperability standards, such as BXF, IMF and DPP. And it would have to dovetail with the familiar, widely used titling and graphics creation tools, such as Photoshop and After Effects, since creatives don’t want to have to learn a new way of working. It should also support Avid and other nonlinear editing systems, as well as the live broadcast graphics and production systems that industry professionals regularly use.
TRANSFORMING THE WORKFLOW
Automation is certainly not new to the broadcast industry; automating the playout chain is now commonplace. And media and entertainment companies have automated systems to handle transcoding and other process-intensive tasks at their facilities. But the scale and scope of the versioning process for titling and graphics—across editing, automated production and playout—requires something bigger and broader.
It calls for a quantum leap to a unified workflow with powerful, intelligent engines that can process the workload according to whatever work orders, instructions or parameters have been set up for the job. The processing might involve changing the size, resolution, colors, textures, tags, and on-screen positioning, as well as other individualized requirements for playout to the target medium.
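As a rough sketch of what such a job description might look like, consider a hypothetical work-order record that an automation engine could apply per target rendition. The field names and the 5%/15% safe-area figures below are illustrative assumptions, not from any shipping product or standard:

```python
from dataclasses import dataclass

# Hypothetical work order for one target rendition.
# All field names and ratios are illustrative assumptions.
@dataclass
class WorkOrder:
    target: str          # e.g. "mobile", "hd", "uhd"
    width: int           # output width in pixels
    height: int          # output height in pixels
    title_scale: float   # relative title size for this screen class

def position_lower_third(order: WorkOrder) -> dict:
    """Compute a lower-third bounding box for the target rendition."""
    margin = int(order.height * 0.05)                       # assumed 5% safe margin
    box_height = int(order.height * 0.15 * order.title_scale)
    return {
        "x": margin,
        "y": order.height - margin - box_height,
        "width": order.width - 2 * margin,
        "height": box_height,
    }

hd = WorkOrder("hd", 1920, 1080, 1.0)
mobile = WorkOrder("mobile", 640, 360, 1.4)   # larger relative titles on phones
print(position_lower_third(hd))
print(position_lower_third(mobile))
```

The point of the sketch is that the designer never touches the second rendition: the same rule set, driven by different parameters, repositions and rescales the graphic for each screen.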
If it became the industry standard for broadcast and post-production entities to use tools and systems supported by this type of automated platform, then titling and graphics assets could flow between them, and users could modify those assets without first having to recreate them from scratch.
Since many of today’s editing, graphics and broadcast systems lack a common platform through which they can communicate, or don’t support the same content creation and delivery capabilities, they end up as inefficient graphics silos. Recreating titles and graphics from scratch is labor- and time-intensive, especially when editors and artists work in silos of different systems that don’t share the exact same fonts, looks and style toolbox.
A better approach is for graphics designers, editors and content creators to compose the aesthetic look and style of the titling and graphics just once, at the start of the project, with no further hands-on involvement whenever versions and variations need to be generated. This removes most opportunities for human error, and the graphical design and branding remain consistent, even as the assets shuttle between separate design, broadcast and post entities.
Remember having to edit the tag at the end of a national car spot with the name and location of individual dealers? With dozens of spots in a campaign, the workload multiplies: a custom tag for each spot, times every version needed to meet delivery format requirements. It’s a mundane job you wouldn’t wish on anyone. But imagine the contact information for all those car dealerships is now provided in a spreadsheet or a structured format like XML. An automated workflow engine could then create the titles and insert them into the proper places within the media streams, quickly and accurately, with no human intervention.
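The dealer-tag scenario above can be sketched in a few lines. The XML element names and the sample records here are invented for illustration; a real workflow would consume whatever schema the campaign data arrives in:

```python
import xml.etree.ElementTree as ET

# Hypothetical dealer data exported from a spreadsheet; tag names are assumed.
DEALERS_XML = """
<dealers>
  <dealer><name>Smith Motors</name><city>Austin</city><phone>555-0101</phone></dealer>
  <dealer><name>Lakeview Auto</name><city>Chicago</city><phone>555-0102</phone></dealer>
</dealers>
"""

def dealer_tags(xml_text: str, template: str) -> list:
    """Fill a title template once per dealer record."""
    root = ET.fromstring(xml_text)
    tags = []
    for dealer in root.iter("dealer"):
        fields = {child.tag: child.text for child in dealer}
        tags.append(template.format(**fields))
    return tags

for line in dealer_tags(DEALERS_XML, "{name}, {city}. Call {phone}"):
    print(line)
```

Once the title text is generated this way, each string would feed the same styled template, so every tag carries identical branding regardless of how many dealers the campaign covers.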
Programs and spots may also need new versions that correctly represent the translated text in multiple foreign languages—including complex ones like Hebrew, Arabic, Hindi and Tamil—or customization for different geographical regions or markets. The campaign might need to change existing text to reflect new dates, times or other terms.
When people must do the work by hand, this exponential versioning becomes overwhelming or downright impossible. Or we make unsatisfactory compromises, such as living with rudimentary or badly executed titling and graphics that don’t fit or fill the screen properly. Or we just give up altogether and have no graphics or titles at all.
Graphics templates are not a new concept; live graphics systems often use them as the basis for generating data-driven graphics on the fly. However, at the playout stage this becomes another type of silo. Ideally, template designs should be authored and tested in a way that lets collaborators access and use them from the familiar user interface of their preferred editing, graphics and production systems, again without having to recreate them from scratch.
Facilitating this new level of efficiency introduces a set of complex questions. For example, how can we design, author and test templates so that they work across multiple output formats, from small mobile screens to large, high-resolution displays? How can we maintain design balance and readability? And how do we automate the way that data, such as a temperature reading from an RSS feed, populates that template in a uniform way across the entire content creation and production workflow?
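The data-population question can be illustrated in miniature. This sketch simulates an RSS weather item being poured into a single reusable template; in production the feed would be fetched over HTTP, and the element names here are assumptions:

```python
import xml.etree.ElementTree as ET
from string import Template

# Simulated RSS item; a real workflow would fetch this from a live feed.
RSS_ITEM = "<item><title>Downtown</title><description>72F</description></item>"

# One template definition, reused unchanged across design, editing and playout.
WEATHER_TEMPLATE = Template("$location: $temp")

def populate(rss_xml: str) -> str:
    """Map feed fields into the shared on-screen template."""
    item = ET.fromstring(rss_xml)
    return WEATHER_TEMPLATE.substitute(
        location=item.findtext("title"),
        temp=item.findtext("description"),
    )

print(populate(RSS_ITEM))  # Downtown: 72F
```

Because the mapping from feed fields to template slots lives in one place, every system in the chain renders the same text from the same data, which is exactly the uniformity the workflow requires.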
To solve these problems, we need smarter templates that can adapt themselves to automation on the fly while supporting the technical requirements of design, editing and playout.
This automation solution would need to understand automatic positioning of data within live streams, avoiding the burned-in graphics and titles already present in the media. It would also need to know intrinsically how long to leave a particular title or graphic on screen, since longer text takes longer to read.
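The hold-time rule lends itself to a simple heuristic sketch. The reading rate and the minimum/maximum clamps below are assumptions for illustration, not an industry standard:

```python
# Rough heuristic: hold time scales with word count, clamped to
# broadcast-friendly bounds. All constants are illustrative assumptions.
WORDS_PER_SECOND = 3.0
MIN_HOLD, MAX_HOLD = 2.0, 8.0

def hold_time(text: str) -> float:
    """Seconds to keep a title on screen, based on its word count."""
    seconds = len(text.split()) / WORDS_PER_SECOND
    return max(MIN_HOLD, min(MAX_HOLD, round(seconds, 1)))

print(hold_time("Goal!"))  # a one-word title gets the minimum hold
print(hold_time("Severe weather warning in effect for the metro area until 9 PM"))
```

A production system would refine this with per-language reading rates and house-style timing rules, but the principle stands: duration becomes a computed property of the content rather than a manual decision.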
Automating these types of complex scenarios requires technology that employs deeper-level analysis, artificial intelligence (AI) and advanced automation. Telestream and NewBlue are among the companies actively seeking solutions that make it easier and more cost-efficient to automate titling and graphics in the multiplatform content distribution era.
Scott Matics, Director of Product Planning for Telestream
Todor Fay, CEO and Co-founder of NewBlue