At one time, it was traditional to complain about the things that science had promised us but failed to deliver. Until we actually have a jet pack that's practical, or at least one that lasts more than thirty seconds, we're free to turn our attention to some other things that seem perpetually just around the corner.
It's hard to prove a negative, but there are probably no more than two data tape formats currently in commercial manufacture. One of them is IBM's 3592 series, which is rarely found outside big IBM server installations, so for most people the options are LTO or LTO.
Nobody's ever been under any illusion that tape is ideal. It's a strip of plastic with microscopic magnetic patterns on it, which can be ruined by the magnet on the end of a focus puller's measuring tape. As such, during the twenty-year life of LTO, lots of people have tried to compete for the backup market. Holographic disc systems have been shown, particularly InPhase Technologies' 300GB discs, which popped up at NAB a good few years ago (eventually, Apple bought the technology, but nothing ever came of it).
Glass has also been proposed as a storage medium. Microsoft and Warner ran Project Silica, demonstrating in 2019 how the whole of 1978's Superman, some 75.6GB of data, could be archived on a single pane the size of a small bathroom tile. Earlier work by Professor Peter Kazansky at the University of Southampton, UK, demonstrated lasers firing very short, very high-intensity pulses of energy into blocks of glass, something that seems at least conceptually related to the Silica project. Impressive, though it's now 2021 and there isn't yet a USB-attached device implementing the process.
People have even proposed writing 2D barcodes onto film stock. One advantage is that it's easy to determine how the system works just by looking at it; the same thing can't be said for a tape format. Happily, LTO is a very competent thing, much-upgraded since its 2000 genesis. Its latest generation is the ninth, a roomy eighteen-terabyte tape achieving 400MB a second uncompressed. It's a good, well-engineered format and a big, reliable bucket for bits even in the context of uncompressed raw cameras working at 6K and beyond. The main downside is that it's often too expensive for the kind of individual freelancers who might benefit most, but it's more affordable in a business context.
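To put those LTO-9 numbers in perspective, here's a quick back-of-envelope sketch in Python of how long it would take to fill one tape at the rated uncompressed speed; the figures are the 18TB capacity and 400MB/s rate mentioned above, using decimal units throughout.

```python
# Back-of-envelope: time to fill an LTO-9 tape at its rated speed.
capacity_tb = 18       # native (uncompressed) capacity, terabytes
rate_mb_s = 400        # native transfer rate, megabytes per second

capacity_mb = capacity_tb * 1_000_000   # 18 TB expressed in MB (decimal)
seconds = capacity_mb / rate_mb_s
hours = seconds / 3600
print(f"Filling one tape takes about {hours:.1f} hours")  # about 12.5 hours
```

Roughly half a day per tape at full tilt, which is why tape backup tends to run unattended overnight rather than while anyone's waiting for it.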
Much as humanity would really, really like better batteries, there hasn't been a step change in the performance of what's available since lithium-ion emerged in the early 1990s. That hasn't stopped everyone with a copy of PowerPoint from trying to hook enthusiastic venture capitalists with the idea that a game-changing development in power storage is just around the corner, though. It's not uncommon for technology journalists to get a couple of breathlessly enthusiastic press releases a month on the subject, promoting ideas that seem perpetually five years from commercialization.
That's been the case for a decade or more, and so far, we haven't seen more than incremental improvements specifically in the capacity of batteries; most of the changes have been small optimizations of lithium-ion types. Lithium polymer designs use broadly similar physics but eschew the metal can, optimizing weight against sturdiness. Lithium iron phosphate batteries (note iron not ion) have a wider operating temperature range, longer life, and other advantages, but lower capacity for the volume.
There are other goals than sheer capacity; for instance, lithium iron phosphate batteries avoid the use of cobalt, which is often mined in places with a questionable approach to human rights. Despite the massive resources available from the world of electric cars, though, better batteries have been a steep hill to climb. Research on lithium-glass batteries by Maria Helena Braga and John B. Goodenough (the inventor of the lithium-ion battery) has been queried on the basis that it appears to violate basic principles, and although it was first published in 2016, we don't have glass-based batteries on our cameras or in our cars in 2021.
The solution might not involve lithium, which is itself in somewhat short supply, although other designs based around glass have been discussed, with one published as recently as 2020 in Scientific Reports. In the end, the vast potential reward of the electric car market is likely to push a lot of research and development money towards the issue, and the film and TV industry is likely to benefit from it.
Just as with batteries, forces outside film and TV have provoked a vast amount of research and development into low-energy lighting. Unlike batteries, all that work has borne fruit magnificently, showering us with LED-based designs that have hugely reduced the power consumption of both domestic and movie lights. This is exactly what was intended to happen, and it's therefore surprising that LED lighting doesn't always reduce costs. On the contrary; it can sometimes seem rather expensive.
The reason is that cost is not just about the power. Yes, studio power is expensive, and generator power is even more expensive once the generator has been rented, driven to site by someone with a license to operate heavy commercial vehicles, fueled, and parked in a specially-arranged place where parking usually isn't allowed. Further expenses come in cabling; a good example is a studio full of spacelights, tungsten-halogen versions of which nominally consume a hefty 6kW each and may be deployed by the dozen. LED equivalents typically consume 20% of that, saving vastly on power and the cabling, trucks and crew required to supply it.
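The scale of that saving is easy to sketch in Python using the figures above: a dozen tungsten spacelights at a nominal 6kW each, against LED equivalents drawing roughly 20% of that. (The dozen-unit rig and the 20% figure are the article's illustrative numbers, not a specification.)

```python
# Rough comparison of tungsten vs LED spacelight rigs, using the
# article's figures: 6 kW per tungsten unit, LED at ~20% of that draw.
units = 12                  # "deployed by the dozen"
tungsten_kw = 6.0           # nominal draw per tungsten-halogen spacelight
led_kw = tungsten_kw * 0.2  # LED equivalent, roughly 20% of tungsten

tungsten_total = units * tungsten_kw  # 72.0 kW for the whole rig
led_total = units * led_kw            # 14.4 kW for the LED rig
print(f"Tungsten: {tungsten_total:.1f} kW, LED: {led_total:.1f} kW")
print(f"Saving: {tungsten_total - led_total:.1f} kW")  # 57.6 kW saved
```

Nearly 60kW less demand means thinner cable, fewer distribution boxes, and possibly a smaller generator, which is where the real money is saved.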
If there's a problem – and there isn't, always – it's that LED lights are often much, much more complicated than their historic equivalents. Having all that control over color means that color has to be controlled; we can't rely on the physics of a glowing filament, so there are a lot of calculations involved. It's a new technology, so much of the hard work hasn't yet been amortized, and the underlying electronic components are improving rapidly, meaning development effort that's only a few years old can quickly drift out of relevance. As a result, LEDs have pushed some lighting budgets up rather than down.
That's not the case on every shoot, and it's also an issue that has already faded and is ultimately likely to be self-solving. LED lighting is only a decade and a half old, and by the standards of something like 35mm movie film, still in its infancy. In the meantime, it's often massively more convenient than the historical approaches, as well as more efficient.
It's easy to overlook the fact that CPU performance hasn't improved nearly as much in the last twenty years as it did in the previous twenty. Vector processors – GPUs – take up a lot of the slack, but they work best on jobs that can be broken down into a large number of individually simple tasks. Multi-core processors pack more power into the same space, but writing effective code for them is more difficult than it is for a single core.
That's not to say there haven't been big improvements in CPUs, but new ideas which actually improve per-core performance have been hard to come by.
The first widespread attempt to do better was arguably reduced instruction set computing, RISC, which emerged in the 1990s when it became clear that the writing was on the wall for ever-increasing clock speeds. RISC, as well as explicitly parallel instruction computing, EPIC, were new design concepts intended to allow CPU performance to break through clock-speed limits. The dubious history and recent end of Itanium suggest that EPIC might never have been able to do that, and while RISC works well in those low-power-consumption ARMs, it seems to have achieved lower power for a given performance, as opposed to higher performance per se.
If there's a way around the CPU roadblock, it's in multiple cores and better software engineering approaches to using them. Doing lots of individual tasks at once, then trying to usefully collect and organize the results of those tasks, is difficult, and mostly a problem software engineers are required to solve by hand. Automating that without trading performance for code-writing convenience is a target for a lot of primary computer science research. It'll be a process taking years, but new tools emerge all the time.
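The split-then-collect pattern described above can be sketched in a few lines of Python using only the standard library. The `process_frame` function here is a hypothetical stand-in for some expensive per-item job; the point is the shape: independent tasks fan out across cores, and the results are gathered and combined by hand.

```python
# A minimal sketch of fanning work out across cores and collecting the
# results, using Python's standard-library process pool.
from concurrent.futures import ProcessPoolExecutor

def process_frame(frame_number: int) -> int:
    # Stand-in for an expensive, independent per-frame task
    # (e.g. a filter pass on one frame of footage).
    return frame_number * frame_number

if __name__ == "__main__":
    frames = range(8)
    with ProcessPoolExecutor() as pool:
        # Each frame is processed independently, potentially in parallel.
        results = list(pool.map(process_frame, frames))
    # The "collect and organize" step the text describes, done by hand.
    total = sum(results)
    print(results, total)
```

The decomposition and the final gather are both written explicitly by the programmer, which is precisely the manual effort the research mentioned above is trying to automate.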
Much as this has been a discussion of potential not quite yet fulfilled, two things are encouraging about it. First is that these are well-defined demands, all of which are heavily resourced by organizations with a keen business interest in fulfilling them. The second encouraging thing is the number of film and television technologies which are not on this list, particularly things like digital camera equipment which has quickly risen to meet and exceed almost all the historic standards of 35mm film.
It's probably inevitable, in the end, that no matter what we have, we'll want more – and that's healthy.