Scality Announces 1 Terabit Per Second Performance

Scality's Scale-Out File System (SOFS) running in Azure has been measured at 1 terabit per second, the equivalent of downloading 50 HD movies every second.
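As a quick sanity check on that comparison: 1 terabit per second works out to 125 gigabytes per second, which matches the "50 movies" figure if each HD movie is about 2.5 GB. That per-movie size is our assumption; the article does not state it.

```python
# Back-of-the-envelope check of the "50 HD movies per second" claim.
# The 2.5 GB-per-movie figure is an assumption; the article does not state it.

throughput_bits_per_s = 1e12                             # 1 terabit per second
throughput_gb_per_s = throughput_bits_per_s / 8 / 1e9    # -> 125 GB/s

hd_movie_size_gb = 2.5                                   # assumed HD movie size
movies_per_second = throughput_gb_per_s / hd_movie_size_gb

print(f"{throughput_gb_per_s:.0f} GB/s is about {movies_per_second:.0f} HD movies per second")
# 125 GB/s is about 50 HD movies per second
```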

Scality claims this is a record speed for a cloud file system, and that the service costs up to one-tenth as much as other file services capable of delivering similar performance. According to the company, that is because most other cloud file services sit on virtual machines and can only use the capacity allocated to the local virtual machines on which they run. This local storage model is expensive and limits the ability to scale out.

“Scality SOFS leverages Azure Blob storage, giving customers access to limitless storage at only $0.0184 per GB per month. And, with Azure Storage Reserved Capacity, customers save over 30% off standard pricing,” says Giorgio Regni, CTO and co-founder of Scality. “The combination of customer interest in hybrid cloud or pure cloud use cases with some of Azure’s differentiated features, such as a single API for storage tiers and Azure Data Lake Storage (ADLS), enabled us to quickly deliver an integrated solution on top of Azure Blob storage.”
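To put those numbers in context, here is a rough monthly cost estimate at the quoted rate, with and without the Reserved Capacity discount. The 1 PB example volume and the flat 30% discount are illustrative assumptions; actual Azure pricing varies by region, tier and term.

```python
# Illustrative storage-cost estimate at the quoted Azure Blob rate.
# The 1 PB volume and flat 30% reserved-capacity discount are assumptions;
# real pricing varies by region, tier, and reservation term.

price_per_gb_month = 0.0184           # USD per GB per month, as quoted
capacity_gb = 1_000_000               # 1 PB expressed in GB (decimal)

standard_monthly = capacity_gb * price_per_gb_month
reserved_monthly = standard_monthly * (1 - 0.30)

print(f"1 PB standard:  ${standard_monthly:,.0f}/month")
print(f"1 PB reserved:  ${reserved_monthly:,.0f}/month")
# 1 PB standard:  $18,400/month
# 1 PB reserved:  $12,880/month
```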

SOFS is hosted in a customer’s Azure subscription and connects to the customer’s Azure Blob storage accounts. Any number of virtual machines can be spun up on demand to scale performance linearly, and SOFS tiers data across Azure Blob storage to optimize performance and cost. Leveraging low-cost virtual machines and limitless object storage allows SOFS running in Azure to offer blazing-fast performance at a low cost, scaling to hundreds of petabytes of data. And because the solution supports ADLS Gen2, data remains in native Azure format, fully accessible by any Azure service for use cases including data analytics, machine learning and more.
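Because the data stays in native Azure Blob format, a file written through SOFS should be readable directly with the standard Azure SDK. Here is a minimal sketch in Python; the container name and blob path are hypothetical, since the article does not describe SOFS's on-blob layout.

```python
# Minimal sketch: reading data written via SOFS directly from Azure Blob
# storage using the standard azure-storage-blob SDK (v12). The container
# and blob names below are hypothetical examples.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<your-connection-string>")
container = service.get_container_client("sofs-data")                 # assumed name
blob = container.get_blob_client("projects/render/frame-0001.exr")    # assumed path

data = blob.download_blob().readall()
print(f"Read {len(data)} bytes directly from Blob storage")
```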

SOFS supports on-premises applications with high aggregate performance requirements, such as massive-scale file systems (10 PB and 20 PB SMB shares), log ingest rates of 1 PB per day, healthcare-critical deployments for medical imaging, recording and broadcasting of hundreds of simultaneous high-definition channels, and long-term asset preservation in national libraries. All of these applications can now be deployed in the Azure cloud without modification.
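For a sense of scale, that 1 PB-per-day ingest figure translates to a sustained rate of roughly 93 gigabits per second, comfortably within the measured 1 Tbit/s aggregate:

```python
# Sustained throughput implied by the 1 PB/day ingest workload above.

petabyte_bits = 1e15 * 8              # 1 PB (decimal) in bits
seconds_per_day = 86_400

sustained_gbit_per_s = petabyte_bits / seconds_per_day / 1e9
print(f"1 PB/day is about {sustained_gbit_per_s:.0f} Gbit/s sustained")
# 1 PB/day is about 93 Gbit/s sustained
```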
