The Need for a Replacement Timecode Standard

The SMPTE timecode standard has served us well for almost half a century. But the equipment it was designed for is history, and soon it will be too. The industry needs a replacement.

If there is one tool that distinguishes professional video production from amateur, it is SMPTE timecode, officially defined in the SMPTE ST 12M suite of cooperating standards.

The concept of timecode was originally developed in 1967 by EECO (Electronic Engineering Company of Santa Ana, CA) as an “hours:minutes:seconds:frames” numbering system, first used on 2-inch quadruplex videotape. EECO made the intellectual property public, and in the March 1970 issue of the Journal of the SMPTE it was proposed as an industry standard. Then, on April 2, 1975, SMPTE timecode was approved by the American National Standards Institute.

But now, half a century later, Peter Symes of Symes TV Consulting said in his SMPTE 2018 presentation that the original timecode is in need of revision.

“SMPTE ST 12M is a wonderful 50 year old standard, but it has a number of deficiencies,” Symes began, “and back in 2007 Hans Hoffman of the EBU, and member of SMPTE, and I set up a Task Force to look into it.”

SMPTE timecode has outlived those who first proposed it.

The Task Force reported that a replacement for the existing timecode had to support all known frame rates and still be attractive to other industries.

“At the time there were two competing proposals for a timecode replacement, but they could not be reconciled,” Symes said. “So we set some requirements, including multiple media rates (both low and high), a time span that exceeded 24 hours, and retro compatibility with SMPTE ST 12.”

Above all, they wanted the new version to have a “Digital Birth Certificate” (DBC), which meant including a precise time of acquisition, a persistent identifier specifying which camera created the recording or stream, and the media rate.
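As a rough illustration of that idea (the field names below are hypothetical, not taken from any SMPTE document), a DBC carrying those three items might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DigitalBirthCertificate:
    """Sketch of a DBC: a precise acquisition time, a persistent
    source identifier, and the media rate. Field names are
    illustrative, not drawn from any SMPTE specification."""
    acquisition_time: datetime      # precise time of acquisition
    source_id: str                  # persistent identifier for the camera or stream
    media_rate: tuple[int, int]     # frame rate as a rational, e.g. (30000, 1001)

dbc = DigitalBirthCertificate(
    acquisition_time=datetime(2018, 10, 22, 14, 30, 5, tzinfo=timezone.utc),
    source_id="camera-0042",        # hypothetical persistent identifier
    media_rate=(30000, 1001),       # 29.97 fps expressed exactly
)
print(round(dbc.media_rate[0] / dbc.media_rate[1], 2))  # → 29.97
```

Expressing the media rate as a rational rather than a float keeps rates such as 29.97 and 23.976 exact, which matters when labels must stay consistent across long time spans.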

A drafting group was formed, chaired by John Willkie of Luxio, to define the overall data structure, the content of each data field, and the rules for parsing the data to extract the information.

The group designed the project to be as extensible as possible, in case new suggestions were added to the proposal.

John Willkie gets drafted to structure the data

“Eventually, getting tired of calling the goal of this project the ‘thingy’, we settled on the ‘TLX Project’ for ‘Time Label eXtensible’,” Symes said, “and came up with the idea of a tiered structure of profiles so that people could know what items are required in the DBC and what are not.”

They tried to leave the profile as flexible as possible, and Symes was clear that this is still a project in process.

Thanks to KLV's interoperability, it has also been adopted by the Motion Imagery Standards Board.

“A profile is not terribly difficult to define,” he said. “You just say ‘this is mandatory’ and ‘this is optional’, set ‘if this, then that’ rules, and set the maximum size. Then let people work with it.”
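A minimal sketch of such a profile and its validation rules, with invented field names, conditions, and limits (the actual ST 2120 profiles may differ):

```python
# Hypothetical profile: which label items are mandatory, which are
# optional, one "if this, then that" rule, and a maximum size.
PROFILE = {
    "mandatory": {"acquisition_time", "source_id", "media_rate"},
    "optional": {"user_data"},
    "max_size_bytes": 256,
}

def validate(label: dict) -> list[str]:
    """Return a list of profile violations for a candidate time label."""
    errors = []
    for field in PROFILE["mandatory"]:
        if field not in label:
            errors.append(f"missing mandatory field: {field}")
    allowed = PROFILE["mandatory"] | PROFILE["optional"]
    for field in label:
        if field not in allowed:
            errors.append(f"unknown field: {field}")
    # "if this, then that": if user_data is present, it must be short
    if "user_data" in label and len(str(label["user_data"])) > 64:
        errors.append("user_data exceeds 64 characters")
    if len(repr(label).encode()) > PROFILE["max_size_bytes"]:
        errors.append("label exceeds profile maximum size")
    return errors

print(validate({"acquisition_time": "2018-10-22T14:30:05Z",
                "source_id": "camera-0042",
                "media_rate": (30000, 1001)}))  # → []
```

The tiered-profile idea then amounts to publishing several such rule sets, so a receiver knows exactly which DBC items it can rely on at each tier.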

The TLX structure will be encoded as KLV (Key-Length-Value), a data encoding standard often used to embed metadata in video feeds.
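The general KLV pattern (standardized in SMPTE ST 336) is a 16-byte universal label key, a BER-encoded length, then the value. A minimal encoder sketch, using a hypothetical, unregistered key:

```python
def encode_klv(key: bytes, value: bytes) -> bytes:
    """Encode one KLV triplet: 16-byte key, BER length, value."""
    assert len(key) == 16, "SMPTE universal labels are 16 bytes"
    n = len(value)
    if n < 128:
        # BER short form: a single length byte
        length = bytes([n])
    else:
        # BER long form: 0x80 | number of length bytes, then big-endian length
        raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(raw)]) + raw
    return key + length + value

# Hypothetical 16-byte key (not a registered SMPTE label)
key = bytes(range(16))
packet = encode_klv(key, b"\x01\x02\x03")
print(packet.hex())  # 16 key bytes, then 03, then 010203
```

Because any KLV-aware parser can read the length and skip a key it does not recognize, new TLX items can be added later without breaking existing receivers, which is what makes the format attractive for an extensible label.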

Much work remains for all of us

“We are also looking at other profiles; in fact, this TLX project is still very fluid,” Symes said. “So if you want to get involved, you are heartily invited. We are working on the document suite, which is being called ST 2120.”

“And, if you have a better sentence than the second one in this slide, feel free to suggest it.”

His main goal was to invite maximum participation.

“If you have ideas, or even better ideas than we’ve come up with, please join us,” he said.

