The Future of Media Archives: Managing Media Across Time and Space

Media storage is not just about meeting today’s needs for size and speed. It’s also about being able to access that content tomorrow, or 20 years from now.

Ever since video was first stored on hard disk drives some three decades ago, media operations have wanted faster, bigger, and cheaper storage. Fortunately, we’ve finally reached a point in the evolution of storage where faster, larger, and more cost-effective solutions are available.

Fast — With all the talk about 4K and 8K resolutions, high dynamic range, high frame rate, etc., demand for speed is clearly at a new high. Fortunately, storage arrays and controllers have no problem providing the gigabytes per second of throughput for multiple streams of the heaviest of these uncompressed rates.

Big — The concept of big storage got a whole lot bigger a few years ago when social media overtook professional and broadcast media as the ultimate warehouse for our cultural heritage. The capacity of broadcast and studio media repositories is now small compared to the tens of petabytes employed at photo- and video-sharing sites, from Flickr to Facebook to YouTube. So when it comes to supporting large production and distribution environments, “big” is no longer an issue.

Cheap — A terabyte costs less than 1/1000th what it did 15 years ago. And it will only get cheaper.

Clearly fast, big, and cheap are no longer the problems. So what is on everyone’s storage wish list today?

Storage Management: The New Cost Conundrum

These days, instead of worrying about the price per terabyte, the big expense on the radar is the cost of storing the media over time and making sure it’s accessible and usable indefinitely. In other words, it’s not just storing it today, but keeping it for tomorrow—especially in the case of broadcasters and studios, for whom “tomorrow” could mean a decade or even a century from now. Consider this: “Star Wars” opened almost 40 years ago. The original “20,000 Leagues under the Sea” movie was released 100 years ago. How can we affordably manage for that kind of longevity?

The actual location of stored content should be invisible to users. And, storage needs to be more than just a place to manage content during the creation phase. A proper solution needs to provide accessibility—forever.

Object Stores: The Key to Affordable Long-Term Storage Management

Enter object stores. Unlike file systems, object stores group files and their metadata into objects that can be coherently accessed by different media asset management systems, file-delivery systems, and other production and distribution applications. They are designed to support multiple storage sites connected by a wide area network. As content ages, policies can automatically move content to tape or the cloud, eliminating the need to manually move or delete thousands or millions of files.

Some object stores are specifically designed to manage content through time and space — where time is measured in decades and hardware-platform transitions, and space is measured in the distance between cities and continents.

Across the next century, we will want to store our content on various hardware platforms — some yet to be invented. The ease of migrating content from one hardware platform to the next is one of the true values of an object store, and migrating content is the key to ensuring cost-effective long-term storage and accessibility.

An object store brings the data-management portion of the equation down into the storage level, where costs can be driven out. In this way, small, specialized MAM providers needn’t burden their cost structures by writing custom code to support data management, mobility, and migration technologies.

Here’s how it works: Storage vendors sell object stores into a variety of vertical industries and cloud-storage service providers. These storage vendors can afford to build robust APIs and have good reason to standardize these APIs across vendor platforms. Amazon’s Simple Storage Service (S3) has become the dominant API set. By making calls to this API set, MAM vendors in turn can initiate complex data management, movement, and migration — such as ensuring content is being moved to another site, deleted when appropriate, copied when needed, and migrated to new hardware when the time comes. No longer do they have to write specific code for each storage vendor.
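For a concrete sense of what such an API call looks like, here is an S3-style lifecycle rule that transitions aging content to colder storage classes and eventually expires it. The bucket prefix and day counts are hypothetical; the structure follows the shape that boto3’s `put_bucket_lifecycle_configuration` call accepts:

```python
# An S3-style lifecycle configuration that moves aging content to colder
# tiers and eventually deletes it. The rule ID, prefix, and day counts are
# hypothetical. A MAM system would hand this dict to boto3 via
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...)
# rather than writing vendor-specific data-movement code.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-finished-masters",
            "Filter": {"Prefix": "masters/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 365, "StorageClass": "GLACIER"},     # deep archive
            ],
            "Expiration": {"Days": 3650},  # delete after ten years
        }
    ]
}
```

Because the API is standardized, the same rule works against any S3-compatible object store, regardless of whose hardware sits underneath.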

Object storage allows media companies to cost-effectively employ various storage tiers through the years, such as lower-cost drives, future flash offerings, cloud storage, tape, and whatever comes next. With object storage, companies can manage all flavors of storage without the huge data-migration headaches that plague many global repository initiatives today.

The Future Topology

Over the next five years, a topology will evolve that combines a fast, thin production-storage tier with a large, slower object-storage tier behind it.

The first tier requires speed to support video production, processing, and delivery. For that reason, the dominant Tier 1 storage structure will continue to be either SAN or NAS, depending on a variety of workflow and network variables. This tier will most likely be made up of flash (solid-state drives) within the next couple of years. Media asset management systems will have the functionality to send content to the object store. From there, the object store will apply policies for data distribution across the multisite repository and prioritize content access as the assets age.

The second tier — the object store — will be extremely resilient, with embedded lifecycle-management functionality ruled by policies that govern long-term management, movement, and migration of content. Importantly, the functionality includes periodic checking and rebuilding of content as the individual drives age and fail.
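A minimal sketch of that scrubbing idea: verify stored content against the checksums recorded when each object was written, and flag anything that no longer matches as a candidate for rebuild from a replica. The object IDs and the rebuild step are hypothetical; real object stores do this continuously across drives and sites.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content fingerprint recorded when an object is first written."""
    return hashlib.sha256(data).hexdigest()

def scrub(objects: dict, recorded: dict) -> list:
    """Return IDs of objects whose current bytes no longer match their
    recorded checksum -- candidates for rebuild from a healthy replica."""
    damaged = []
    for obj_id, data in objects.items():
        if sha256_of(data) != recorded.get(obj_id):
            damaged.append(obj_id)
    return damaged
```

Run periodically, this kind of check catches silent bit rot on aging drives long before a user ever requests the damaged asset.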

By marrying a production tier with an extremely resilient, flexible, and scalable object store, media enterprises can securely share content from the object-store tier among multiple sites. This new storage paradigm — one platform that resides in multiple sites — allows mobility and resiliency across geographies, migration across the storage tiers of today and tomorrow, and data security across time.

Whether you are producing on three continents or distributing across six, policy-based workflows, not file-based workflows, will move your content where it needs to go next. Operations that value speed more than long-term resiliency will be fine using file systems for years to come. But for those that need both speed and the ability to manage content throughout space and time, the future is now. The move to an object-store-enabled future is already underway.

Jason Danielson, Media and Entertainment Solution Marketing Manager, NetApp.
