Policy-based workflows, not file-based workflows, will move content where it needs to go next.
Media storage is not just about meeting today’s size and speed needs. It’s also about being able to access that content tomorrow, or 20 years from now.
Ever since video was first stored on hard disk drives some three decades ago, media operations have wanted faster, bigger, and cheaper storage. Fortunately, we have finally reached a point in the evolution of storage where fast, large, and cost-effective solutions are available.
Fast — With all the talk about 4K and 8K resolutions, high dynamic range, high frame rate, etc., demand for speed is clearly at a new high. Fortunately, storage arrays and controllers have no problem providing the gigabytes per second of throughput for multiple streams of the heaviest of these uncompressed rates.
Big — The concept of big storage got a whole lot bigger a few years ago when social media overtook professional and broadcast media as the ultimate warehouse for our cultural heritage. The capacity of broadcast and studio media repositories is now small compared to the tens of petabytes employed at photo- and video-sharing sites, from Flickr to Facebook to YouTube. So when it comes to supporting large production and distribution environments, “big” is no longer an issue.
Cheap — A terabyte costs less than 1/1000th what it did 15 years ago. And it will only get cheaper.
Clearly fast, big, and cheap are no longer the problems. So what is on everyone’s storage wish list today?
Storage Management: The New Cost Conundrum
These days, instead of worrying about the price per terabyte, the big expense on the radar is the cost of storing media over time and making sure it’s accessible and usable indefinitely. In other words, it’s not just storing it today, but keeping it for tomorrow—especially in the case of broadcasters and studios, for whom “tomorrow” could mean a decade or even a century from now. Consider this: “Star Wars” opened almost 40 years ago. The original “20,000 Leagues Under the Sea” movie was released 100 years ago. How can we affordably manage content for that kind of longevity?
The actual location of stored content should be invisible to users. And, storage needs to be more than just a place to manage content during the creation phase. A proper solution needs to provide accessibility—forever.
Object Stores: The Key to Affordable Long-Term Storage Management
Enter object stores. Unlike file systems, object stores group files and their metadata into objects that can be coherently accessed by different media asset management systems, file-delivery systems, and other production and distribution applications. They are designed to support multiple storage sites connected by a wide area network. As content ages, policies can automatically move content to tape or the cloud, eliminating the need to manually move or delete thousands or millions of files.
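The idea behind those aging policies can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s implementation: the object pairs content with metadata, and a policy inspects the metadata to decide which tier the content belongs in. The field names, tier labels, and day thresholds are all assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MediaObject:
    """A content item plus the metadata an object store keeps with it."""
    key: str            # e.g. "news/2015/flood.mxf" (illustrative)
    size_bytes: int
    created: datetime
    tier: str = "disk"  # disk | cloud | tape (illustrative tier names)

def apply_age_policy(obj: MediaObject, now: datetime) -> MediaObject:
    """Move content to cheaper tiers as it ages (thresholds are illustrative)."""
    age = now - obj.created
    if age > timedelta(days=365):
        obj.tier = "tape"    # deep archive after a year
    elif age > timedelta(days=90):
        obj.tier = "cloud"   # nearline after 90 days
    return obj

# A two-year-old clip is automatically destined for tape:
clip = MediaObject("news/2015/flood.mxf", 12_884_901_888,
                   datetime(2014, 1, 1, tzinfo=timezone.utc))
apply_age_policy(clip, datetime(2016, 1, 1, tzinfo=timezone.utc))
print(clip.tier)  # tape
```

Run against millions of objects on a schedule, a rule like this is what replaces the manual moving and deleting of files described above.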
Some object stores are specifically designed to manage content through time and space — where time is measured in decades and hardware-platform transitions, and space is measured in the distance between cities and continents.
Across the next century, we will want to store our content on various hardware platforms — some yet to be invented. The ease of migrating content from one hardware platform to the next is one of the true values of an object store, and migrating content is the key to ensuring cost-effective long-term storage and accessibility.
An object store brings the data-management portion of the equation down into the storage level, where costs can be driven out. In this way, small, specialized MAM providers needn’t burden their cost structures by writing custom code to support data management, mobility, and migration technologies.
Here’s how it works: Storage vendors sell object stores into a variety of vertical industries and cloud-storage service providers. These storage vendors can afford to build robust APIs and have good reason to standardize these APIs across vendor platforms. Amazon’s Simple Storage Service (S3) has become the dominant API set. By making calls to this API set, MAM vendors in turn can initiate complex data management, movement, and migration — such as ensuring content is being moved to another site, deleted when appropriate, copied when needed, and migrated to new hardware when the time comes. No longer do they have to write specific code for each storage vendor.
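To make that concrete, here is what such a policy looks like expressed against the S3 API. The rules below are a plain S3 lifecycle configuration: one call hands ongoing movement and deletion to the store itself. The bucket name, prefixes, and day counts are illustrative assumptions, not part of any particular MAM product.

```python
# A standard S3 lifecycle configuration: the MAM sets it once, and the
# object store handles the ongoing data movement. (Prefixes, day counts,
# and the bucket name below are illustrative assumptions.)
lifecycle_rules = {
    "Rules": [
        {
            "ID": "age-out-finished-masters",
            "Filter": {"Prefix": "masters/"},
            "Status": "Enabled",
            # After 90 days, transition masters to a colder storage class.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        },
        {
            "ID": "expire-proxies",
            "Filter": {"Prefix": "proxies/"},
            "Status": "Enabled",
            # Delete low-res proxies after two years.
            "Expiration": {"Days": 730},
        },
    ]
}

# With boto3 (the AWS SDK for Python), a single call applies the policy:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="media-repository",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_rules)
```

Because S3-compatible stores from many vendors accept this same configuration, the MAM vendor writes this once rather than per storage platform.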
Object storage allows media companies to cost-effectively employ various storage tiers through the years, such as lower-cost drives, future flash offerings, cloud storage, tape, and whatever comes next. With object storage, companies can manage all flavors of storage without the huge data-migration headaches that plague many global repository initiatives today.
The Future Topology
Over the next five years, a topology will evolve that combines a fast, thin production-storage tier with a large, slower object-storage tier behind it.
The first tier requires speed to support video production, processing, and delivery. For that reason, the dominant Tier 1 storage structure will continue to be either SAN or NAS, depending on a variety of workflow and network variables. This tier will most likely be made up of flash (solid state drives) within the next couple of years. Media asset management systems will have the functionality to send content to the object store. From there, the object store will apply policies for data distribution across the multisite repository and prioritize content access as the assets age.
The second tier — the object store — will be extremely resilient, with embedded lifecycle-management functionality ruled by policies that govern long-term management, movement, and migration of content. Importantly, the functionality includes periodic checking and rebuilding of content as the individual drives age and fail.
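The periodic checking mentioned above is often called a fixity audit. A minimal sketch of the idea, assuming a simple manifest of checksums recorded at ingest (real object stores do this internally, typically rebuilding flagged objects from replicas or erasure-coded parity):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(manifest: dict, root: Path) -> list:
    """Return keys whose on-disk bytes no longer match the ingest checksum.

    Anything returned here is a candidate for rebuilding from a replica.
    """
    return [key for key, digest in manifest.items()
            if sha256_of(root / key) != digest]

# Example: record a checksum at ingest, then verify later.
root = Path(".")
(root / "clip.bin").write_bytes(b"uncompressed video payload")
manifest = {"clip.bin": sha256_of(root / "clip.bin")}
print(audit(manifest, root))  # an empty list means nothing needs rebuilding
```

It is this kind of continuous self-auditing, running for decades as individual drives fail underneath it, that lets an object store promise content survival rather than merely content storage.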
By marrying a production tier with an extremely resilient, flexible, and scalable object store, media enterprises can securely share content from the object-store tier among multiple sites. This new storage paradigm — one platform that resides in multiple sites — allows mobility and resiliency across geographies, migration across the storage tiers of today and tomorrow, and data security across time.
Whether you are producing on three continents or distributing across six, policy-based workflows, not file-based workflows, will move your content where it needs to go next. Operations that value speed more than long-term resiliency will be fine using file systems for years to come. But for those that need both speed and the ability to manage content throughout space and time, the future is now. The move to an object store-enabled future is already underway.
Jason Danielson, Media and Entertainment Solution Marketing Manager, NetApp.
Never trust the adhesive holding tape to the hub of a 40 year-old ¾-inch videocassette.