Nine Pitfalls Of Relying On FTP To Move Large Media Files

Broadcasters continue to adopt and take advantage of IT working practices as they transition to file-based workflows. However, some seemingly effective solutions are outdated, have not kept pace with advances in computing power, and cannot efficiently transfer large media files. FTP, for example, is tried and trusted, but its 1970s design philosophy has proven inadequate for large media file transfer.
Over the past thirty years, computing resources have grown beyond all recognition. Modern operating systems have kept pace with IT innovation and can support file sizes of 16TB. To meet growing consumer demand for a more immersive viewing experience, broadcast video data-rates have expanded exponentially, resulting in ever-increasing media file sizes. The FTP of the 1970s was simply never designed to efficiently transfer the large media files today's broadcasters rely on.
This white paper, provided by Signiant, investigates the shortcomings of FTP and explains why it is no longer a reliable method for moving broadcasters' large media files. HDR, 4K, and 8K formats all conspire against FTP, and with security playing a prominent role in broadcast infrastructure design, FTP is clearly showing its limitations.
With broadcasters looking to automate workflows and improve efficiencies wherever possible, the lack of an effective API to monitor and control the transfer of large media files further demonstrates the limitations of FTP. This white paper also discusses an alternative solution.
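To illustrate how little transfer control a plain FTP client exposes, the sketch below uses Python's built-in ftplib. The host name, credentials, and function names are hypothetical, chosen for illustration only. The per-block callback is essentially the only progress hook available; there is no built-in resume negotiation, checksum verification, or bandwidth management of the kind an automation API would need:

```python
from ftplib import FTP

class ByteCounter:
    """Accumulates the bytes seen by storbinary's per-block callback --
    roughly the only transfer 'monitoring' plain FTP offers."""
    def __init__(self):
        self.total = 0

    def __call__(self, block):
        self.total += len(block)

def upload_with_progress(host, user, password, local_path, remote_name):
    """Upload a file over plain FTP (hypothetical helper).

    Note what is missing for large media workflows: no resume of a
    partial transfer, no integrity check, and no rate control.
    """
    counter = ByteCounter()
    with FTP(host) as ftp:  # credentials are sent in cleartext
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f, callback=counter)
    return counter.total
```

A workflow engine calling this helper learns only a byte count after the fact; any retry, verification, or scheduling logic must be bolted on outside the protocol, which is the gap purpose-built transfer tools address.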
Broadcast engineers, technologists, software developers, and their managers will all benefit from downloading this white paper. Learn about the pitfalls of FTP and discover the solution.