DIY Asset Management

Is it possible to build one’s own asset management solution? Before you begin, let’s consider the challenges.

I was sitting at an asset management conference listening to panelists speak about their “grow your own” asset management systems and their lack of metadata schemas. They all spoke of the grandeur of metadata, its importance and the role it plays in archiving. But while the panelists represented a few of the larger programmers, with significant budgets to “roll their own”, the majority of the audience didn’t have the budget or resources to build their own solutions.

When it came to metadata, the presenters gave reasons why they didn’t believe a standard would help, and those reasons stood out to me. One was that there are too many different use cases. Another was that each distribution entity – not platform – had different requirements. These issues combined to create an unmet challenge. But then, not all mobile is created equal, and neither is OTT, OTA or cable/satellite/IPTV.

When I expressed my surprise to one of the event organizers, his comment was, “it’s still the Wild Wild West.” I made similar comments to some vendors in the exhibit area. One was a storage guy, the other a MAM exhibitor. Both agreed, and said they too continued to be challenged and disappointed with the use of metadata.

Metadata challenges

In one of my projects, the programmer was building their mobile app and we were discussing the metadata requirements for each carrier. That is when I learned the carriers all require different metadata.

The client then had to decide: build a master metadata schema, with profiles for each carrier or delivery platform, and simply extract the metadata each profile requires? Or maintain separate databases and populate each one with the metadata for a specific carrier?

The separate-database approach means the upstream systems must map across multiple databases, rather than feed one database and use a downstream script to pull the metadata for each individual carrier. It was a tough decision, but one master database prevailed. That way, as new requirements arose, they only had to build a new profile, not a new database and mapping schema.
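To make the master-schema idea concrete, here is a minimal sketch in Python. The field names, carrier names and sample values are all invented for illustration, not taken from any real carrier specification: one master metadata record, plus per-carrier profiles that select and rename only the fields each carrier requires.

```python
# Hypothetical sketch: one master metadata record, with per-carrier
# "profiles" that select and rename only the fields that carrier needs.
# All field and carrier names here are invented for illustration.

master_record = {
    "title": "Season Finale",
    "duration_sec": 2640,
    "rating": "TV-PG",
    "synopsis": "The team faces its toughest match yet.",
    "air_date": "2024-05-01",
}

# Each profile maps a carrier's field name to the master field it comes from.
profiles = {
    "carrier_a": {"Title": "title", "RunTime": "duration_sec", "Rating": "rating"},
    "carrier_b": {"name": "title", "description": "synopsis", "premiere": "air_date"},
}

def extract(profile_name: str, record: dict) -> dict:
    """Pull only the fields a carrier's profile asks for, under its names."""
    return {dst: record[src] for dst, src in profiles[profile_name].items()}

print(extract("carrier_a", master_record))
# {'Title': 'Season Finale', 'RunTime': 2640, 'Rating': 'TV-PG'}
```

Adding a new carrier then means adding one profile entry; the master record and the extraction logic stay untouched.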

Back to DIY

The sports guys are looking for new ways to retrieve statistics and generate player tracking data, and even ball tracking, for game analysis, monetization and content retention. But somehow, figuring out where and how to save metadata in a way that makes it searchable, findable and retrievable is as elusive as the Loch Ness Monster or Sasquatch.

Of course we cannot have a discussion about metadata without using the term semantic. That seems to be the buzzword du jour when other terms don’t fit.  

Back in the latter part of the last century, the concepts of fuzzy math and fuzzy logic first appeared. The evolution of the fuzzification of mathematical concepts can be broken down into three stages:

  1. Straightforward fuzzification during the sixties and seventies,
  2. The explosion of the possible choices in the generalization process during the eighties,
  3. The standardization, axiomatization and L-fuzzification in the nineties.

Fuzzification – now there’s a term I can put my arms around. So let’s bring it forward and pose a question - Does the fuzzification of yesterday’s logic equal the semantic logic of today? The issue expressed mathematically:

Fuzzification = Semantic OR Fuzzification ≠ Semantic

Semantics, as defined, focuses on the relationship between words, phrases, signs, and symbols and what they stand for.

A semantic search provides suggestions for things you didn’t ask for, based on what the search engine guessed you really meant. So instead of the axiom that a “computer does what you ask it to do, not what you want it to do”, a semantic engine attempts to infer what you really wanted to do and recommends what it thinks you really meant.

Well, that is certainly a little fuzzy. Both concepts attempt to apply some reasoning to the request and assume the requester wasn’t quite sure what they were asking. The computer then provides a broader selection of answers, including matching content where the matching criteria might be a little thin.
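As a rough illustration of that “fuzzy” broadening, here is a small sketch using only the Python standard library. The catalog titles, the threshold and the scoring choice (difflib’s similarity ratio) are all assumptions for the example, not how any particular search engine actually works.

```python
# A minimal sketch of "fuzzy" search matching. Instead of exact keyword
# matching, each catalog entry gets a similarity score in [0, 1], and
# anything above a threshold is returned, even when the match criteria
# are a little thin. Titles and threshold are invented for illustration.

from difflib import SequenceMatcher

catalog = ["Championship Highlights", "Player Tracking Special", "Season Finale"]

def fuzzy_search(query, items, threshold=0.5):
    """Return items whose similarity to the query clears the threshold,
    best matches first."""
    scored = [(SequenceMatcher(None, query.lower(), item.lower()).ratio(), item)
              for item in items]
    return [item for score, item in sorted(scored, reverse=True)
            if score >= threshold]

print(fuzzy_search("champinship highlite", catalog))
```

A misspelled query like “champinship highlite” still surfaces “Championship Highlights”, because its score clears the threshold even though no keyword matches exactly.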

What does this have to do with asset management?

I find it strange that there aren’t acceptable asset management products in all sizes, shapes and price ranges so we can get past the home-grown solutions. Archive, search, browse, manage and retrieve are the core functions. Even a semantic engine can’t find an asset if it’s not tagged or indexed. There is also the DAM vs. MAM conversation: does one product serve all masters – content acquisition, delivery and archive vs. content creation? No matter, because both approaches need to be integrated and share metadata.

There are obvious practical reasons to buy rather than build a custom solution. They include maintenance, scaling, ongoing development, and the risk of losing the code – or of the person who wrote the code taking another job. It is also likely more cost effective to buy a solution and then work with the vendor to configure it for one’s particular needs.

There are other issues, too. What happens when the next codec or container comes out? Monetizing content through rich metadata is a growing business, but is it practical to have a full-time development team building and maintaining the system? It may be difficult to convince management that you now need a full-time media manager just to run and update it.

It also seems silly that, even today, it is still difficult to get people to enter metadata on their content before they save it.

Discussions on asset management should instead be focused on finding clever ways to get metadata entered and associated to the content, standardizing metadata within the organization and using technology to create profiles to meet the specifications of each distribution channel.
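One way to nudge that metadata entry along is to make the save operation itself refuse content whose metadata is incomplete. A minimal sketch, assuming an invented set of required fields and a stand-in for the real storage call:

```python
# Hypothetical sketch: enforce an organization's metadata standard at
# save time, so content can't enter the archive untagged. The required
# field names here are invented for illustration.

REQUIRED_FIELDS = {"title", "rights_holder", "air_date", "keywords"}

def validate_metadata(metadata):
    """Return the required fields that are missing or empty, sorted."""
    return sorted(f for f in REQUIRED_FIELDS if not metadata.get(f))

def save_asset(metadata):
    """Refuse to archive content until its metadata is complete."""
    missing = validate_metadata(metadata)
    if missing:
        raise ValueError(f"Cannot archive: missing metadata {missing}")
    return "archived"  # stand-in for the real storage call

print(save_asset({"title": "Final", "rights_holder": "ACME",
                  "air_date": "2024-05-01", "keywords": ["sports"]}))
# archived
```

The same validation hook is a natural place to apply per-channel profiles, so each distribution channel’s specification is checked before delivery rather than after.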

Is a content delivery platform’s secret sauce based on what metadata the programmer sent? Or is there some advantage in how clever they can be with recommendation engines and fuzzy logic to deduce what their customer really wanted?

Listening to the librarians and media managers talk about retention policies and chasing metadata seems odd, given that the same discussion is about maintaining petabytes of content.

My recommendation is to avoid DIY altogether and find an asset management product that can actually meet your requirements. You’ll sleep better.

Editor’s Note: Gary Olson has a book on IP technology, “Planning and Designing the IP Broadcast Facility – A New Puzzle to Solve”, which is available at bookstores and online.
