MAM is Dead. Long Live Media Logistics—Part 3

In the third and final part of BroadcastBridge’s MAM feature we contend that MAM as we’ve known it is dead, and that today’s broadcasters and content delivery firms want a media logistics solution encompassing all ingest, production, distribution and archive, with rich metadata including rights. If so, are the tools in most MAMs appropriate for ‘orchestrating’ all of these assets?

Here are the comments of Tony Taylor, CEO of TMD.

TT: Many MAM solutions have been designed around a siloed approach. This has been typical of the way software has been developed in the broadcast industry for many years. Even now I find it incredible when I hear some of the stories of MAM implementations that have taken no account of joining up the business of media across organisations.

That joining up has to start with the metadata. The successful media businesses are those who realise the value of the metadata which exists alongside the content, and implement a MAM solution that uses it to the fullest extent possible.

There can be no argument that the future will be around file-based workflows in data centre environments. This depends upon metadata: acting on it, reacting to it and enriching it as it passes between and through facilities. The protection and enrichment of metadata has always been at the heart of any asset management system worth the name, and today it is the only logical place to put the workflow orchestration layer.

If workflow orchestration is about drawing on and adding to metadata, why would you even consider putting orchestration in a separate system? It has to be in the system which is charged with holding the metadata.
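The idea that orchestration should live where the metadata lives can be sketched in a few lines. This is a hypothetical illustration, not TMD's implementation: the field names, rules and the `orchestrate_step` function are invented; the point is that a single step can act on metadata, react to it, and enrich it in place without cross-system synchronisation.

```python
# Hypothetical sketch: an orchestration step that both reads and
# enriches metadata. Because orchestration and metadata sit in one
# system, the decision and its audit trail are written back to the
# same record the decision was made from.

def orchestrate_step(asset: dict) -> dict:
    """Decide the next process for an asset from its metadata,
    then record that decision back into the same metadata."""
    if asset.get("resolution") == "SD" and asset.get("target") == "HD broadcast":
        asset["next_process"] = "upconvert"
    elif asset.get("codec") not in asset.get("accepted_codecs", []):
        asset["next_process"] = "transcode"
    else:
        asset["next_process"] = "deliver"
    # Enrichment: the workflow history becomes part of the metadata.
    asset.setdefault("history", []).append(asset["next_process"])
    return asset

asset = {"resolution": "SD", "target": "HD broadcast",
         "codec": "IMX", "accepted_codecs": ["XDCAM HD"]}
print(orchestrate_step(asset)["next_process"])  # upconvert
```

Had orchestration sat in a separate system, the same step would need to fetch the metadata, decide, and then push the enrichment back, with all the failure modes that round trip implies.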

Content preparation and delivery firms are required to deliver assets to an ever-increasing variety of platforms. How have manufacturers helped content companies gear up for life in a multi-platform world?

TT: You have to think in terms of layers. At the bottom is the hardware: the servers, the encoders and transcoders, and the content delivery networks. Above that is a control layer, which tells the hardware what to do with each piece of content.

Above that is the business layer. This is where executives look at the economics of the operation and make commercial decisions. In a modern media enterprise, these executives should be able to make decisions based on purely commercial considerations, not what the technology allows them to do.

The middle layer is the asset and workflow management. Its rich metadata captures all the information on the content: what rights are available; when and where it can be shown; what content needs to take priority through the encode farms and more. Most important, the asset and workflow management system should both be controlling the hardware at the bottom, and reporting and responding to the business systems above it.

Put simply, a CEO should be able to look at one screen – familiar to him or her because it is in the enterprise management layer – and make a decision to, say, put a particular programme on iTunes. That decision should pass automatically to the workflow management system which will draw on the technical metadata to determine precisely which processes are required, and implement them at the right time, again fully automatically.
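The chain described above — a commercial decision passing automatically down to the workflow layer, which consults technical metadata to determine the required processes — can be sketched as follows. Everything here is illustrative: the platform profile, field names and the `plan_delivery` function are assumptions, not a real delivery specification.

```python
# Hypothetical sketch: a business decision ("put this programme on
# iTunes") resolved into concrete technical processes by consulting
# the asset's technical metadata. Profiles and fields are invented.

PLATFORM_PROFILES = {
    "iTunes": {"codec": "H.264", "container": "MP4", "captions": True},
}

def plan_delivery(decision: dict, tech_metadata: dict) -> list:
    """Return the ordered processes needed to fulfil a commercial
    decision, derived entirely from metadata."""
    profile = PLATFORM_PROFILES[decision["platform"]]
    steps = []
    if tech_metadata["codec"] != profile["codec"]:
        steps.append("transcode to " + profile["codec"])
    if profile["captions"] and not tech_metadata.get("captions"):
        steps.append("generate captions")
    steps.append("package as " + profile["container"])
    steps.append("deliver to " + decision["platform"])
    return steps

print(plan_delivery({"platform": "iTunes"},
                    {"codec": "ProRes", "captions": False}))
# ['transcode to H.264', 'generate captions', 'package as MP4',
#  'deliver to iTunes']
```

The executive only supplies the first argument; the workflow layer supplies the second from the asset record, which is the separation of business and technical concerns the layered model is after.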

What are the tools to create, deliver and store files and metadata for broadcast, VoD, mobile and web in one workflow?

TT: The very simple answer to that is a rich metadata schema. If the asset and workflow management system knows all there is to know about the content, from rights to resolution, then it can command whatever other equipment is around to make all these things happen.
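As a rough illustration of what "knowing all there is to know about the content, from rights to resolution" might look like in a single schema, consider the hypothetical record below. The class, fields and method are invented for this sketch; real broadcast schemas (EBUCore-style, for instance) are far richer.

```python
# Hypothetical sketch of a rich metadata record that spans both
# rights and technical detail, so one schema can answer both
# commercial and technical questions. All names are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AssetMetadata:
    title: str
    codec: str
    resolution: str
    rights_territories: List[str] = field(default_factory=list)
    platforms_cleared: List[str] = field(default_factory=list)

    def can_deliver_to(self, platform: str, territory: str) -> bool:
        """A rights question answered from the same record that
        holds the technical description."""
        return (platform in self.platforms_cleared
                and territory in self.rights_territories)

asset = AssetMetadata("Documentary", "XDCAM HD", "1080i",
                      rights_territories=["UK", "IE"],
                      platforms_cleared=["VoD", "broadcast"])
print(asset.can_deliver_to("VoD", "UK"))  # True
```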

It is, frankly, ridiculous for the media industry to think about multi-platform delivery in anything other than a single workflow environment. Conceptually, you are delivering your content to your audience. It is one concept, so how can it be anything other than one workflow environment?

There are many tools that exist to achieve this, from editors to transcoders. But the primary tool to ensure efficient automated media business process management is content intelligence, relying on the metadata. There is no need to compromise if you use the intelligence inherently encapsulated in the metadata and content.

How important is the ability to integrate tools from a range of vendors?

TT: Broadcast engineers have always chosen best of breed solutions: the right set of functionality and performance for a specific installation. Do we really think anyone wants to change that?

However, as we move into the IT-centric and increasingly the cloud era, we have to find ways to maintain and simplify that choice. One of the biggest challenges is scaling services up and down to cater for peaks and troughs in volumes as well as introducing new technologies and services. At TMD we have designed, integrated and implemented a platform called UMS – unified media services – which is a simple approach to service-oriented architectures that enables broadcast and media organisations to cost effectively integrate third-party technologies.

There is of course the FIMS standard as a good open foundation, but this does not answer all of the needs of the current broadcast customer. So UMS provides a service bus to support integrations, which includes FIMS, proprietary APIs and other methods to decouple the technology from the operations, allowing users to choose best of breed hardware yet still operate it from automated, metadata-driven workflow orchestration.
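The decoupling idea behind a service bus of this kind can be sketched with a simple adapter pattern. To be clear, this is not UMS or the FIMS API: the classes and method names below are invented; the point is only that the workflow layer targets an abstract service, and per-vendor adapters (FIMS-style or proprietary) sit behind it.

```python
# Hypothetical sketch of service-bus decoupling: orchestration calls
# an abstract "transcode" service; adapters translate to vendor
# APIs. All class and method names here are invented.

class TranscodeService:
    """Abstract interface the workflow layer targets."""
    def transcode(self, asset_id: str, profile: str) -> str:
        raise NotImplementedError

class VendorAAdapter(TranscodeService):
    def transcode(self, asset_id: str, profile: str) -> str:
        # A real adapter would call Vendor A's proprietary API here.
        return f"vendorA job for {asset_id} ({profile})"

class FimsStyleAdapter(TranscodeService):
    def transcode(self, asset_id: str, profile: str) -> str:
        # A real adapter would issue a FIMS-style job request here.
        return f"FIMS job for {asset_id} ({profile})"

def run_workflow(bus: TranscodeService) -> str:
    # The workflow is unchanged when the vendor behind the bus changes.
    return bus.transcode("ASSET-001", "H.264 web")

print(run_workflow(VendorAAdapter()))
print(run_workflow(FimsStyleAdapter()))
```

Swapping best-of-breed hardware then means writing one new adapter, not rewriting the metadata-driven workflows that sit above the bus.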

Is it best to adopt a single system or opt for a modular workflow?

TT: It is best to implement a system that fulfils the real commercial needs of the media company. In some cases that can be done in a one-stop shop solution. In most cases, I suspect, it will best be served by components from a number of top vendors, brought together under a metadata-driven environment. Either way, the question should never be “who do I buy this from?” but “what do I need to make money?”. It has to be looked at from the business perspective and not simply the technology preference of an engineering or IT department.

TMD's Tony Taylor
