It’s one thing to be confronted by a big pile of technology and to be confused by it. It’s another to know something about that technology and conclude that things could be a lot simpler than they are. That’s the reaction a lot of people have to color and brightness handling in modern cameras, which offer more than enough options to make things confusing. Clearly, we need standardization, but with each camera manufacturer keen to promote the benefits of its proprietary approach, it seems unlikely that we’ll one day find every camera outputting signals and files that can be treated identically.
Camera color and brightness standards like Sony’s S-Log and S-Gamut, or Arri’s Log C, specify two things: which red, which green, and which blue we’re going to use, and how much signal level represents how much light. The closest thing we’ve ever had to a universal standard in this regard is the famous Rec. 709, but it was designed around the HD cameras of the early 1990s. Modern cameras are vastly more capable, but the lack of any more capable standard has provoked manufacturers to create their own. Or, more to the point, many, many of their own.
In the simplest case that’s not really a big problem. It’s usually easy to tell a grading or compositing application to expect material shot on a certain camera with certain settings. Sometimes those settings are even stored in the files, so the application will be automatically aware of them (the fact that this is not always the case is a ringing indictment of the state of camera and post technology). This matters not just so the pictures look right, but also so the grading controls know how the pictures are represented and react as the colorist expects.
It works, but as soon as we start to involve on-set monitoring, more than one type of camera, and perhaps complications such as visual effects shots which use other standards, things can quickly get out of hand. Red, green and blue sounds simple until we realize how many different reds, greens and blues there are. So yes, it’s complex, and no, we probably can’t expect the camera manufacturers to universally adopt a single format. What we can do, perhaps, is to take what the camera manufacturers give us, and convert it all into a grand unified workspace, so it can all be handled consistently. That, in the broad strokes, is what ACES does.
Creating a system that can store any imaginable picture demands very capable technology. We can’t rely on real red, green and blue primaries anymore. Three primaries famously describe a triangle on a chromaticity diagram, but the total range of colors visible to humans isn’t a triangle. No three real colors can be combined to reproduce every color a human can see, so systems which implement the most common variant of ACES (designated ACES2065-1) use three primaries that only exist as mathematical constructs. It’s not the first system to do this – Kodak’s ProPhoto RGB colorspace uses primaries near green and blue that are not real-world colors – but it does mean that systems using ACES can represent quite literally any color visible to a human being.
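To see what “mathematical constructs” means in practice, the ACES2065-1 (“AP0”) primaries can be written down as CIE chromaticity coordinates. The values below are those published in SMPTE ST 2065-1; the “physically real” check is a deliberately crude illustration, since real colors always have a chromaticity y between 0 and 1:

```python
# Chromaticity coordinates (CIE x, y) of the ACES2065-1 ("AP0") primaries,
# as published in SMPTE ST 2065-1.
AP0 = {
    "red":   (0.7347, 0.2653),
    "green": (0.0000, 1.0000),   # outside the range of real colors
    "blue":  (0.0001, -0.0770),  # negative y: not a physically real color
    "white": (0.32168, 0.33767), # close to D60
}

for name, (x, y) in AP0.items():
    real = 0.0 < y < 1.0  # crude test: real colors have 0 < y < 1
    print(f"{name:5s}  x={x: .5f}  y={y: .5f}  physically real? {real}")
```

The green and blue “colors” fail that test, which is exactly the point: the triangle they span is big enough to enclose the entire region of visible colors.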
Brightness handling is also very capable, with each channel encoded using 16 bits. ACES is a scene-referred system, which means the stored levels refer (fairly) directly to the amount of light observed coming from the scene. Light levels are stored linearly, so very large ranges are needed, although the basic encoding is intuitive. A theoretically perfect 100% diffuse white object is encoded at RGB levels 1, 1, 1, while an 18% grey card is encoded at RGB 0.18, 0.18, 0.18. Most scenes will include things brighter than 1, 1, 1 because of light sources and bright reflections, but ACES can handle vast ranges of light - up to 30 stops in OpenEXR files.
Vendor and Post House Cooperation
Getting material into an ACES system requires the cooperation of the manufacturer (or, theoretically, a long-winded lab characterization process for each camera setup, though manufacturers have so far seemed amenable). It’s up to each manufacturer to specify a conversion from its proprietary format into ACES, which the standard describes as an input transform. Crucially, input transforms do not seek to make all cameras look the same; the manufacturers would never have tolerated that, which is why ACES encoding relates only fairly directly to the light that came from the scene.
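In outline, an input transform usually does two things: undo the camera’s log curve to recover scene-linear light, then convert the camera’s RGB primaries to ACES primaries with a 3x3 matrix. A schematic sketch, where the curve constants and matrix are illustrative placeholders and not any real manufacturer’s published transform:

```python
import math

# Hypothetical log-curve constants: code = A * log10(linear + B) + C
A, B, C = 0.25, 0.01, 0.6

CAMERA_TO_ACES = [          # hypothetical 3x3 primaries matrix;
    [0.85, 0.10, 0.05],     # each row sums to 1.0 so neutral greys
    [0.05, 0.90, 0.05],     # stay neutral after conversion
    [0.02, 0.08, 0.90],
]

def input_transform(log_rgb):
    """Camera log RGB -> scene-linear ACES RGB (schematic only)."""
    # 1D linearization: invert the log curve per channel
    lin = [10.0 ** ((c - C) / A) - B for c in log_rgb]
    # 3x3 matrix multiply, applied per pixel
    return [sum(m * v for m, v in zip(row, lin)) for row in CAMERA_TO_ACES]

# A neutral grey card encoded with the hypothetical curve...
grey_code = A * math.log10(0.18 + B) + C
print(input_transform([grey_code] * 3))  # ...comes back as ~0.18, 0.18, 0.18
```

Real input transforms follow this general shape, but the curve and matrix are precisely characterized for each camera and set of recording settings.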
Once the material is in ACES format, it can be worked on using tools designed for ACES material, without any concern over where that material came from. Everything handles the same way, and the behavior of things like color grading controls is completely consistent from shot to shot. While the material is being worked on, two further transforms are applied: a look management transform, which applies the creative look intended for the material and might be informed by what a colorist is doing, and an output transform, which prepares the material for display on a monitor.
That monitor might be a conventional Rec. 709 display, or perhaps, more likely in 2020, one implementing an HDR standard. The output transform is made of two parts: the reference rendering transform and the output device transform. The reference rendering transform takes the scene-referred picture and prepares it for display by imposing something similar to a film-style, S-shaped curve, creating a viewable image with a reasonable amount of contrast. The output of the reference rendering transform still contains more color and brightness information than most displays can handle, and the output device transform is responsible for the final preparation of the image for a particular type of monitor.
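The shape of that S-curve is easy to illustrate. The toy function below is emphatically not the real reference rendering transform, which is considerably more elaborate; it is just a sigmoid applied in log space around 18% grey, so deep shadows roll off toward black and extreme highlights roll off toward white while mid-tones keep their contrast:

```python
import math

def toy_s_curve(linear, contrast=1.4):
    """Toy scene-linear -> 0..1 display mapping (illustrative only)."""
    if linear <= 0.0:
        return 0.0
    stops = math.log2(linear / 0.18)            # stops above/below mid grey
    return 1.0 / (1.0 + math.exp(-contrast * stops))

for lin in (0.01, 0.18, 1.0, 8.0):
    print(f"linear {lin:5.2f} -> display {toy_s_curve(lin):.3f}")
```

However bright the scene-linear input gets, the output stays within the 0 to 1 range a display can reproduce, which is the essential job of any display rendering.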
Common Display Characteristics
So, in short, systems using ACES will take images represented using a manufacturer’s specified red, green and blue primaries, with that manufacturer’s idea of how much signal level represents how much brightness. Those images are converted into a huge, very capable internal format using a transformation supplied by the manufacturer, worked on, and then sent through other transformations to create images suitable for display or distribution.
Other ACES encodings exist, including two (ACEScc and ACEScct) intended to be more suitable for color correction environments, and one, ACESproxy, intended for use on set where SDI cables can’t carry full-size, 16-bit ACES data. Even the proxy encoding, though, represents a wider range of colors than the Rec. 2020 standard often associated with HDR distribution.
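These variants are small departures from the same idea. As an example, the main, pure-log segment of ACEScc (published in the Academy’s S-2014-003 specification) maps scene-linear values as sketched below; the specification’s separate toe segment for very small and negative values is omitted here for brevity:

```python
import math

# Main (pure-log) segment of ACEScc: linear -> (log2(x) + 9.72) / 17.52.
# Values below 2**-15 use a separate toe segment in the full specification.
def acescc_from_linear(lin):
    assert lin >= 2.0 ** -15, "below this, the spec uses a toe segment"
    return (math.log2(lin) + 9.72) / 17.52

print(round(acescc_from_linear(0.18), 4))  # 18% grey lands at about 0.41
```

Putting mid grey around 0.41 and compressing highlights logarithmically is what makes grading controls behave in the familiar, perceptually even way colorists expect.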
Whether ACES is actually used on any particular production often comes down to the comfort level of the camera department, the DIT and the post house, and whether they have any proprietary, in-house special sauce they’d rather use. Old habits die hard, and the idea of making unforced changes to a known-good approach may provoke frowns. There is certainly no sign of manufacturers turning their backs on proprietary brightness and color encodings, and there is some risk that ACES might be seen not so much as a unification, but simply as another point of incompatibility. It’s certainly likely that the world will need to continue supporting a huge range of color and brightness encodings for the foreseeable future.
Where it is used, though, ACES has been successful. It mainly finds application in feature filmmaking, though there’s nothing stopping it from being used in any other field. To address one common concern, it doesn’t require uncompressed files, since all the conversion work into the ACES world can just as easily be done on the fly from a compressed camera original. Still, many of the advantages of ACES – unification of multiple specialist camera types, visual effects integration and proxy workflows – are probably most useful on big movies. The value of ACES is not so much to simplify camera setup as to make for less complicated post workflows, though it certainly does both to an extent.
ACES is intended (among other things) to solve the world’s problems with the huge range of brightness and color encodings that various manufacturers have developed. Technically, it can certainly do that, and it is a very welcome attempt at standardization. In the end, standards are not so much about creating the perfect approach as about creating an approach that has a chance of being widely enough adopted to simplify the (massively overcomplex) status quo.
Being attached to a name like the Academy of Motion Picture Arts and Sciences doesn’t hurt. In the end, as so often, it’s not about which standard is best – it’s about who has the clout to make it stick, and it’s hard to imagine any organization which would lend the project greater legitimacy than the people behind the Oscars. It’d be nice to think that something like ACES can continue to introduce a bit more commonality to things beyond Hollywood feature films. Technically it could; often it does, but even though it was released years ago, ACES still feels a long way from universal adoption.