On-line, near-line or archive: just how should data be stored?

Any discussion of media storage relies on four generic phrases: on-line, near-line, archive and off-line. Which storage technology is best suited to each task?

Let’s set some general definitions for our discussion of data storage. On-line means the information is immediately usable for any required purpose. The same data could be on-line for transcoding while simultaneously being off-line for editing. In other words, a storage term like “on-line” cannot be used unless an application is also specified.

Off-line has occupied some under-defined usage scenarios for the last 10 to 20 years. For me it has always meant, "Damn it, I have to wait for something to happen before I can do what I really wanted to do." Obviously, such interruptions in a workflow, to the extent that KPIs are not met, require the media to be made less "off-line".

Types of storage

Why do we need a third definition of storage usage scenarios? If something is "on-shelf", is it not simply further off-line? I would like to propose that we define the difference in terms of human interaction.

Off-line can become on-line as an automated process; on-shelf storage requires a person to fetch the media. The cheapest, and in some sense the most secure, storage is on the shelf, preferably kept in two geographically separated, disaster-protected locations. Deciding what to put where becomes easier if we know that all three kinds of storage are available. Just to be clear, these distinctions still have value even when provisioning from the cloud.
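One way to express that human-interaction distinction, as a minimal sketch: the three tiers differ in who (or what) performs the retrieval. The tier names follow the article; everything else here is illustrative:

```python
from enum import Enum, auto

class Retrieval(Enum):
    IMMEDIATE = auto()  # on-line: no waiting at all
    AUTOMATED = auto()  # off-line: a machine fetches it, no human involved
    MANUAL = auto()     # on-shelf: a person must locate and mount the media

TIERS = {
    "on-line": Retrieval.IMMEDIATE,
    "off-line": Retrieval.AUTOMATED,
    "on-shelf": Retrieval.MANUAL,
}

def needs_human(tier: str) -> bool:
    # On-shelf is distinguished from off-line by human interaction.
    return TIERS[tier] is Retrieval.MANUAL

print(needs_human("on-shelf"))  # True
print(needs_human("off-line"))  # False
```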

Localized storage typically consists of multiple HDs installed in rack mounts of various sizes and configurations. Shown here is a Facilis TX16 storage system.

Today’s storage systems are virtually all disk based. While solid-state drives are available, and we use RAM, both offer insufficient capacity and cost more per terabyte than rotating disk, especially for media projects. So what are the key differences between the types of storage, and how do we judge their performance?

The criteria for data storage are: Permanence, Availability, Scalability and Security. As an acronym, we get P.A.S.S.

  • Permanence means that data is never lost.
  • Availability means that the user/application requirements for access/performance are met.
  • Scalability defines the ease of meeting changing requirements.
  • Security defines the granularity and durability of access privileges.
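To make these criteria concrete, here is a minimal sketch of comparing a storage system's P.A.S.S. profile against an application's requirements. The 1-to-5 scores and the example profiles are invented for illustration; none of the figures come from the article:

```python
from dataclasses import dataclass

@dataclass
class PassProfile:
    """Illustrative 1-5 scores for each P.A.S.S. criterion."""
    permanence: int
    availability: int
    scalability: int
    security: int

    def meets(self, required: "PassProfile") -> bool:
        # Storage is suitable only if it meets or exceeds every criterion.
        return (self.permanence >= required.permanence and
                self.availability >= required.availability and
                self.scalability >= required.scalability and
                self.security >= required.security)

# Invented example profiles: a fast on-line system vs. on-shelf tape.
online_disk = PassProfile(permanence=3, availability=5, scalability=3, security=4)
shelf_tape = PassProfile(permanence=5, availability=1, scalability=5, security=5)
grading_needs = PassProfile(permanence=3, availability=5, scalability=2, security=3)

print(online_disk.meets(grading_needs))  # True: fast enough for grading
print(shelf_tape.meets(grading_needs))   # False: availability too low
```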

Providing storage with the best technologically available P.A.S.S., regardless of application, is going to be prohibitively expensive. Therefore we need to match the storage to the application. That means trading storage performance against cost, which in turn means we have to maintain multiple types of storage.

Cloud storage gives us the advantage of bespoke storage without the additional overhead. Generic cloud storage enables economies of scale where multiple clients are served from a single enterprise storage system. But you still need to ask: will the cloud provider at some point off-load your deep archive and ship it to Iron Mountain? And can you even afford to keep all of your projects on expensive, always-ready, on-line storage?

Let’s assume that we have determined what P.A.S.S. we need for each application used within our acquisition, post-production and distribution pipeline. How do we determine when to move the data to less expensive storage? Changing technology (lower prices) means that this decision is always open to reinterpretation. It all comes back to KPIs: if we can achieve the KPI while moving the data to less expensive storage, then do it!
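That KPI rule can be sketched as a simple decision function. The retrieval times and per-gigabyte costs below are hypothetical figures, chosen only to illustrate the logic:

```python
def should_migrate(retrieval_hours: float, kpi_deadline_hours: float,
                   cheaper_cost: float, current_cost: float) -> bool:
    """Move data to a cheaper tier only if retrieval still meets the KPI
    and the move actually saves money."""
    return retrieval_hours <= kpi_deadline_hours and cheaper_cost < current_cost

# Hypothetical figures: a tape library (4 h retrieval, $0.01/GB-month)
# versus on-line disk ($0.10/GB-month), with a 48 h KPI for post.
print(should_migrate(4, 48, 0.01, 0.10))     # True: migrate
print(should_migrate(120, 48, 0.005, 0.10))  # False: on-shelf misses the KPI
```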

Match workflows to KPIs

An automated workflow should make data migration between types of storage a transparent background process. This works because the task requirements are anticipated and built into the system.

Grading systems arguably have the highest availability requirements in the production process, but does the storage used for this application have to be mirrored? Do you have to simultaneously store all current projects? Or can you live with working on just one project on-line at a time? Tradeoffs can be made to reach a sufficient level of availability, redundancy and capacity. After all, the redundancy required for the safe operation of a nuclear power plant may be excessive for the production of the nightly news!

Diagram illustrates a typical broadcast workflow using Isilon technology. A modern storage platform allows users to adapt available storage to project needs—all without the complexity of becoming a storage infrastructure expert.

Correctly designed workflow management systems acquire the information needed to anticipate data access requirements and move data to where it will be needed in a timely manner. This can even include an automatic order to retrieve backups from Iron Mountain in advance so that the material is in place when needed for post. Today’s workflow management solutions are sophisticated enough to do most of this without human intervention.
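As a sketch of that anticipation: a workflow manager effectively works backwards from the moment material is needed, using each tier's retrieval lead time. The lead times below are assumptions for illustration, not vendor figures:

```python
from datetime import datetime, timedelta

# Assumed retrieval lead times per storage tier.
LEAD_TIMES = {
    "on-line": timedelta(0),
    "off-line": timedelta(hours=4),  # automated tape library
    "on-shelf": timedelta(days=3),   # manual retrieval from a vault
}

def latest_order_time(needed_at: datetime, tier: str) -> datetime:
    """Latest moment a retrieval order must be placed so the material
    is on-line when post-production needs it."""
    return needed_at - LEAD_TIMES[tier]

post_starts = datetime(2021, 6, 1, 9, 0)
print(latest_order_time(post_starts, "on-shelf"))  # 2021-05-29 09:00:00
```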

Storage costs continue to drop. But that doesn’t mean you should chase them. Shown here is a Samsung 1TB SSD, which today costs about $400. A 1TB HD may cost less than $50.

Exact pricing for each storage option is a moving target; however, the relationships between the options should remain essentially the same.

Off-line storage costs about one-third as much as on-line storage, and that is without geographic replication. Using LTO-6 with 3TB cassettes at 50 cents per tape per month makes physical archive storage cost about 1/100 of off-line costs. The latter comparison is, of course, unfair, as it does not include the cost of the tape itself or the additional cost of physical retrieval.
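The arithmetic behind that 1/100 figure can be checked with assumed numbers. The on-line price below is an illustration; the 1/3 ratio and the 50-cents-per-cassette figure are taken from the text:

```python
online_per_tb = 50.0                # assumed on-line cost, $/TB per month
offline_per_tb = online_per_tb / 3  # off-line is about 1/3 of on-line
shelf_per_tb = 0.50 / 3             # 50 cents/month for a 3 TB cassette

print(round(offline_per_tb / shelf_per_tb))  # 100: the 1/100 ratio
```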

However, the extreme discrepancy between automated retrieval taking hours and manual retrieval taking days leaves room for a new service offering 24-hour retrieval of on-shelf storage. When considering the viability of shelf storage, remember that Disney destroyed the 4K data used for the latest release of Snow White and keeps only the physical separations!
