Scale Logic to Demonstrate the HyperFS Unified Storage File System at IBC

Scale Logic’s HyperFS addresses the challenges of shared SAN/block and scale-out NAS/file data storage — no matter the size or needs of the user. The file system can scale beyond 64 petabytes of storage and provides robust high availability for both data and metadata.

HyperFS lets small organizations handle expansion with on-the-fly capacity and performance scaling. It also serves enterprise-level organizations that need 24/7 access to billions of files, folders and objects.

HyperFS is certified with a wide variety of industry applications, provides file- and block-level access to a single file system, and supports simultaneous access to that file system from Windows, Macintosh and Linux clients. No matter the size of an organization or the specifics of its storage needs, HyperFS handles data storage with less hassle than any other file system.

It supports the project-management layer for Avid, Adobe and Final Cut Pro; supports the archive and backup layer for increased asset reliability; and is compatible with more than 200 media-centric applications.

In addition to supporting large-scale data sharing and storage, back-end Fibre Channel storage units can be directly attached over FC to suit different user requirements, providing standard FC-SAN integration and storage of data.

Featuring strong I/O performance, scalability, reliability and manageability, the HyperFS cluster file system can be widely used in information-processing fields including video post-production, digital media asset management, streaming media, exploration data analysis, remote-sensing information processing, scientific education and computation, simulation, networked digital video monitoring and cloud storage.

Dataflow for SAN and Scale-Out NAS using HyperFS

HyperFS for SAN Management
