The Changing Face of Media

The adoption of tapeless digital workflow is becoming customary in the modern world of media and entertainment. Unlocking creativity, improving production quality, obtaining better control of digital assets, and surpassing viewer expectations without attendant cost increases are powerful factors that influence the adoption of file-based production. HD and 4K are now required capabilities, along with delivery to broadband, mobile, and DVD/Blu-ray formats. Optimizing a tapeless digital workflow requires careful consideration of many variables. After the production planning and workflow design are done, after the cameras, codecs, and edit tools have been chosen, after the workflow is documented and the file naming and metadata conventions have been determined, what then?

Choosing the Right Storage

It is important to make the right decision when it comes to finding the most suitable shared-access storage platform. The appropriate platform will set the stage for fast, efficient processing of high quality content. The UK’s Digital Production Partnership advises that, “The safe, affordable, simple storage and retrieval of file-based video assets is the single greatest need of any producer heading into the tapeless future.”

In order to meet the needs of digital broadcast workflows, a storage platform must deliver some basic requirements. First, and most obviously, it should be file-based, since both production and post teams work with files; paradoxically, many IT-oriented storage providers sell block storage (a proverbial square peg in a round hole). While block storage offers good throughput, it often struggles to scale and grow with the environment. This brings the risk that the storage system, an integral part of the operation, becomes a bottleneck. Storage solutions for digital broadcast need to deliver the best available performance and the flexibility to add capacity as needs grow. The best solution will scale throughput, performance, and capacity linearly, regardless of the number of active files or their size. Even some high-performance storage systems cannot scale performance because they have limited I/O bandwidth; only scale-out storage systems offer parallel architectures that keep scaling as the demands placed on the storage system increase.

Secondly, high performance is needed for mixed workloads. The emergence of advanced parallel-access protocols dramatically improves the aggregate performance of production teams in file-based workflows, compared to traditional NAS protocols.

Tracking digital assets via metadata is one of the new concepts emerging with file-based workflows. Metadata is typically used to mark up and describe video footage within a media asset manager so that the right content can be found easily and versioning of assets can be tracked. A shared-access storage platform maintains its own metadata catalogue of all file-based content, which is especially important in large repositories. Relative to video files, metadata files tend to be small, and large numbers of small files place very different demands on the storage solution than streaming video does. Panasas takes a hybrid SSD-SATA architecture approach that addresses these challenges, leveraging SSD storage to accelerate small file and metadata access.
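To make the idea of a metadata catalogue concrete, here is a minimal sketch in Python of a tag-indexed clip catalogue. The `Clip` record, the field names, and the example file paths are all hypothetical illustrations, not any real asset manager's schema; a production catalogue would live in a database, not in memory.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """Hypothetical record describing one piece of file-based content."""
    path: str
    duration_s: float
    tags: set = field(default_factory=set)
    version: int = 1

class Catalogue:
    """Toy metadata catalogue: clips indexed by tag for fast lookup."""

    def __init__(self):
        self._clips = {}    # path -> Clip
        self._by_tag = {}   # tag  -> set of paths carrying that tag

    def add(self, clip):
        self._clips[clip.path] = clip
        for tag in clip.tags:
            self._by_tag.setdefault(tag, set()).add(clip.path)

    def find(self, tag):
        """Return every clip carrying the given tag."""
        return [self._clips[p] for p in self._by_tag.get(tag, ())]

catalogue = Catalogue()
catalogue.add(Clip("/media/ep01/sc04_tk02.mxf", 42.0, {"interview", "ep01"}))
catalogue.add(Clip("/media/ep01/sc07_tk01.mxf", 18.5, {"b-roll", "ep01"}))

print(sorted(c.path for c in catalogue.find("ep01")))
```

The point of the sketch is the access pattern: a catalogue lookup touches many small records rather than one large stream, which is exactly the workload the hybrid SSD tier is meant to accelerate.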

Finally, the right storage solution must have a reliable, effective method of protecting digital content. Storage arrays based on legacy hardware RAID designs should be avoided: these protect data at the block level and are slow to rebuild data protection on today's multi-terabyte hard drives, exposing customers to data loss in the event of multiple hard drive failures.
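The rebuild problem above can be illustrated with the simplest form of block-level protection, a RAID-4-style XOR parity stripe. This is a toy model, not any vendor's implementation: reconstructing a lost drive requires reading every block of every surviving drive, which is why rebuild times grow with drive capacity and widen the window for a second failure.

```python
import os

# Three data "drives" plus one XOR parity "drive" (RAID-4-style stripe).
data = [os.urandom(16) for _ in range(3)]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

# Simulate losing drive 1: rebuild it from the survivors plus parity.
# Note the rebuild must read ALL surviving blocks -- on multi-terabyte
# drives this full scan is what makes block-level rebuilds so slow.
survivors = [data[0], data[2], parity]
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(*survivors))

assert rebuilt == data[1]
```

Object- or file-aware protection schemes avoid part of this cost by rebuilding only the data actually stored, rather than every raw block on the failed drive.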

The optimal storage solution should also be capable of “banking” completed and catalogued programs, serving as an archive of footage available for reuse and content for sale. Storing older, less frequently used material is commonly referred to as “disk archiving” in the IT world, and it requires the storage solution to scale capacity to terabytes or even petabytes. A storage solution that is easy to manage should also be a requirement. A simple, intuitive management interface means no wasted time setting up, tuning, or fixing the foundation of the workflow. Finally, the ideal storage platform will be a springboard for fast content delivery.

Parallel Access for Distributed Computing

Most storage systems (especially those feeding render farms) channel all data through a single access point before processing it. Inevitably, as the amount of content and data grows, more data gets forced through that funnel. Panasas believes data should flow directly and scale in parallel: every Panasas client gains access to distributed storage resources. Instead of forcing a workstation’s data through one door, the parallel protocol opens the floodgates of I/O by allowing clients to access all storage components directly, without the hotspots from client congestion or concentrated load typical of traditional NFS and SMB protocols.
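The funnel-versus-parallel contrast can be sketched as a toy timing model. The code below is purely illustrative: `read_chunk` is a hypothetical stand-in for fetching one stripe from one storage node, with a fixed simulated latency, not a real storage protocol.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def read_chunk(node, offset):
    """Stand-in for fetching one stripe of a file from one storage node."""
    time.sleep(0.05)  # simulated per-request I/O latency
    return f"node{node}:chunk@{offset}"

offsets = range(8)

# Funnel path: every request is serialized through one access point.
t0 = time.perf_counter()
serial = [read_chunk(0, o) for o in offsets]
serial_t = time.perf_counter() - t0

# Parallel path: the client issues requests to all nodes at once,
# so stripes spread across nodes are fetched concurrently.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(lambda o: read_chunk(o % 4, o), offsets))
parallel_t = time.perf_counter() - t0

assert parallel_t < serial_t
```

With eight 50 ms requests, the serial path takes roughly eight times the per-request latency while the parallel path approaches a single request's latency, which is the intuition behind striping data across nodes and letting clients reach them all at once.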
