The Changing Face of Media

The adoption of tapeless, file-based digital workflows is becoming customary in the modern world of media and entertainment. Unlocking creativity, improving production quality, gaining better control of digital assets, and surpassing viewer expectations without attendant cost increases are powerful factors driving the adoption of file-based production. HD and 4K are now required capabilities, alongside delivering content in broadband, mobile, and DVD/Blu-ray formats. Getting the most out of a tapeless digital workflow requires careful consideration of many variables. After the production planning and workflow design are done, after the cameras, codecs, and edit tools have been chosen, after the workflow is documented and the file naming and metadata conventions have been determined, what then?

Choosing the Right Storage

Choosing the most suitable shared-access storage platform is a decision worth getting right. The appropriate platform sets the stage for fast, efficient processing of high-quality content. The UK’s Digital Production Partnership advises that, “The safe, affordable, simple storage and retrieval of file-based video assets is the single greatest need of any producer heading into the tapeless future.”

To meet the needs of digital broadcast workflows, a storage platform must satisfy some basic requirements. First, it should be file-based, since both production and post-production teams work with files; paradoxically, many IT-oriented storage providers sell block storage – the proverbial square peg in a round hole. While block storage offers good throughput, it often struggles to scale and grow with the environment, bringing the risk that the storage system, an integral part of the operation, becomes a bottleneck. Storage solutions for digital broadcast need to deliver the best available performance and the flexibility to add capacity as needs grow. The best solution will scale throughput, performance, and capacity linearly, regardless of the number of active files or their size. Even some high-performance storage systems cannot scale performance because their I/O bandwidth is limited; only scale-out storage systems offer parallel architectures that keep scaling as the demands placed on the storage system increase.
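The difference between the two approaches can be pictured with a little arithmetic. The sketch below is a toy model, not a benchmark of any product; the controller ceiling and per-node rates are illustrative assumptions.

```python
# A minimal, hypothetical model of aggregate throughput as capacity is
# added to two storage architectures. All numbers are illustrative
# assumptions, not measurements of any specific product.

SCALE_UP_CONTROLLER_LIMIT_GBPS = 6.0   # assumed fixed dual-controller ceiling
PER_NODE_THROUGHPUT_GBPS = 1.5         # assumed bandwidth each scale-out node adds

def scale_up_throughput(num_shelves: int) -> float:
    """Adding disk shelves behind a fixed controller pair adds capacity,
    but throughput saturates at the controllers' I/O limit."""
    return min(num_shelves * 2.0, SCALE_UP_CONTROLLER_LIMIT_GBPS)

def scale_out_throughput(num_nodes: int) -> float:
    """In a scale-out parallel architecture, each node contributes both
    capacity and bandwidth, so aggregate throughput grows linearly."""
    return num_nodes * PER_NODE_THROUGHPUT_GBPS

for n in (2, 4, 8, 16):
    print(f"{n:>2} units: scale-up {scale_up_throughput(n):4.1f} GB/s | "
          f"scale-out {scale_out_throughput(n):5.1f} GB/s")
```

Under these assumptions the scale-up array stalls at 6 GB/s no matter how many shelves are added, while the scale-out cluster keeps climbing with every node.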

Second, high performance is needed for mixed workloads. Advanced parallel-access protocols dramatically improve the aggregate performance of production teams in file-based workflows compared to traditional NAS protocols, which funnel every request through a single server.
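As a rough illustration of what a mixed workload looks like from the client side, the sketch below reads a handful of media and metadata files concurrently rather than one at a time. The mount point, file names, and worker count are hypothetical; the point is that concurrent requests to a parallel backend can be serviced by different storage nodes instead of queuing behind one NAS head.

```python
# A minimal sketch of concurrent file access from a client, assuming the
# files live on a shared filesystem mounted at /mnt/media (a hypothetical
# path). With a parallel-access protocol, these requests can land on
# different storage nodes simultaneously.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

MOUNT = Path("/mnt/media")  # hypothetical mount point
files = [MOUNT / "clip_a.mxf", MOUNT / "clip_b.mxf", MOUNT / "project.xml"]

def read_file(path: Path) -> int:
    """Read one file and return its size; a stand-in for real media I/O."""
    return len(path.read_bytes())

# Issue the reads concurrently; a parallel storage backend can satisfy
# them from separate nodes, so aggregate throughput scales with workers.
with ThreadPoolExecutor(max_workers=len(files)) as pool:
    sizes = list(pool.map(read_file, files))

print(f"read {len(sizes)} files, {sum(sizes)} bytes total")
```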

Tracking digital assets via metadata is one of the new concepts emerging with file-based workflows. Metadata is typically used to mark up and describe video footage within a media asset manager so that the right content can be found easily and versions of assets can be tracked. A shared-access storage platform also maintains its own metadata catalogue of all file-based content, which is especially important in large repositories. Relative to video files, metadata files tend to be small, and their access patterns put demands on the storage solution that large-file streaming does not. Panasas takes a hybrid SSD-SATA architecture approach that addresses these challenges, leveraging SSD storage to accelerate small-file and metadata access.
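The hybrid idea can be pictured as a simple placement policy. The sketch below is not Panasas code; the size threshold, pool names, and suffix list are assumptions made purely for illustration.

```python
# A minimal sketch of a hybrid SSD/SATA placement policy: small files and
# metadata go to flash, large video essence goes to high-capacity disk.
# The 64 KB threshold and pool names are illustrative assumptions, not
# the actual policy of any shipping product.

SMALL_FILE_THRESHOLD = 64 * 1024  # bytes; assumed cutoff for "small"

def choose_pool(filename: str, size_bytes: int) -> str:
    """Pick a storage pool for a file based on its size and type."""
    metadata_suffixes = (".xml", ".aaf", ".edl", ".json")
    if size_bytes < SMALL_FILE_THRESHOLD or filename.endswith(metadata_suffixes):
        return "ssd-pool"    # low-latency flash for metadata and small files
    return "sata-pool"       # high-capacity disk for large video essence

print(choose_pool("clip_0001.mxf", 12_000_000_000))  # -> sata-pool
print(choose_pool("clip_0001.xml", 4_096))           # -> ssd-pool
```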

Finally, the right storage solution must have a reliable, effective method of protecting digital content. Storage arrays based on legacy hardware RAID designs should be avoided: they protect data at the block level and are slow to rebuild protection on today’s multi-terabyte hard drives, exposing customers to data loss in the event of multiple hard drive failures.
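Back-of-the-envelope arithmetic shows why block-level rebuilds hurt at today’s drive sizes. The rebuild rates and drive counts below are rough assumptions, not measured figures for any particular system.

```python
# A rough, assumption-laden estimate of rebuild exposure. A block-level
# RAID rebuild must reconstruct an entire drive through one spare at a
# limited rate; a parallel, file-aware rebuild can spread the work across
# many drives at once. All rates here are illustrative guesses.

DRIVE_TB = 8                      # assumed drive size
BLOCK_REBUILD_MBPS = 100          # assumed single-spare rebuild rate
PER_DRIVE_REBUILD_MBPS = 50       # assumed per-drive rate in a parallel rebuild
PARALLEL_DRIVES = 40              # assumed drives sharing the rebuild work

drive_mb = DRIVE_TB * 1_000_000
block_hours = drive_mb / BLOCK_REBUILD_MBPS / 3600
parallel_hours = drive_mb / (PER_DRIVE_REBUILD_MBPS * PARALLEL_DRIVES) / 3600

print(f"block-level rebuild: {block_hours:5.1f} hours at risk")   # ~22.2
print(f"parallel rebuild:    {parallel_hours:5.1f} hours at risk") # ~ 1.1
```

Under these assumptions the window of vulnerability shrinks from roughly a day to about an hour, which is the practical difference between the two protection schemes when a second drive failure is the threat.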

The optimal storage solution should also be capable of “banking” completed and catalogued programs, serving as an archive of footage available for reuse and as a source of content for sale. Storing older, less frequently used material is commonly referred to as “disk archiving” in the IT world, and it requires the storage solution to scale capacity to terabytes or even petabytes. Ease of management should be another requirement: a simple, intuitive management interface means no time wasted setting up, tuning, or fixing the foundation of the workflow. Finally, the ideal storage platform will be a springboard for fast content delivery.

Parallel Access for Distributed Computing

Most storage systems, especially those serving render farms, channel all data through single access points before processing it. Inevitably, as the amount of content and data grows, more and more data gets forced through that funnel. Panasas believes data should flow directly and scale in parallel: every Panasas client gains access to distributed storage resources. Instead of forcing a workstation’s data through one door, we open the floodgates of data I/O by allowing clients to access all storage components via our parallel protocol, without the hotspots from client congestion or concentrated load typically found with traditional NFS and SMB protocols.
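Conceptually, the parallel data path works in two steps: a client first obtains a map of where a file’s stripes live, then fetches those stripes from the storage nodes directly and concurrently, with no single access point in the data path. The sketch below is a simplified, hypothetical model of that idea, not an implementation of Panasas’s actual protocol.

```python
# A simplified, hypothetical model of layout-based parallel access: a
# metadata service hands the client a stripe map, and the client reads
# stripes directly from each storage node in parallel. Not a real
# protocol implementation; the in-memory "cluster" stands in for nodes.
from concurrent.futures import ThreadPoolExecutor

# Pretend cluster: each "node" holds some stripes of the file.
NODES = {
    "node-1": {0: b"AAAA", 3: b"DDDD"},
    "node-2": {1: b"BBBB", 4: b"EEEE"},
    "node-3": {2: b"CCCC", 5: b"FFFF"},
}

def get_layout(filename: str) -> dict[int, str]:
    """Metadata service: map each stripe index to the node holding it."""
    return {idx: node for node, stripes in NODES.items() for idx in stripes}

def read_stripe(node: str, idx: int) -> bytes:
    """Data path: fetch one stripe directly from one storage node."""
    return NODES[node][idx]

layout = get_layout("clip_0001.mxf")
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    stripes = pool.map(lambda i: read_stripe(layout[i], i), sorted(layout))
print(b"".join(stripes))  # -> b'AAAABBBBCCCCDDDDEEEEFFFF'
```

Because each stripe request goes straight to the node that owns it, adding nodes adds doors rather than widening a single one, which is why this style of access avoids the congestion of a lone NFS or SMB server.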
