Emerging standards are making the best of existing pixels. Understand the principles of HDR, learn how to build workflows to simplify production, and deliver the highest quality HDR pictures possible.
Now that virtually anyone can afford a very high quality video camera, the democratization of video production is in full swing. With that come shortcuts adopted by newcomers that make the video quality dreadful, but become acceptable through frequent use. In a way, it is a dumbing down of video production.
Broadcasters are continuing to adopt and take advantage of IT working practices as they transition to file-based workflows. However, some seemingly effective solutions are outdated, have not kept pace with advances in computing power, and are unable to efficiently transfer large media files. FTP, for example, is tried and trusted but its 1970s design philosophy has proven inadequate for large media file transfer.
In a multi-disciplinary subject such as color space, it is hard to know where to start. John Watkinson argues that the starting point is less important than the destination.
As one who owned an RCA TK-76 video camera and one of the first Sony Betacams, I have long been acquainted with very heavy video gear. This, in fact, was a key reason I bought Apple’s new iPhone 11 Pro Max. I got it in time for a major video shoot and these are my initial experiences.
When, in May 2019, AMD announced their Ryzen Zen 2 architecture, beyond the amazing performance offered by the new Series 3000 microprocessors, they announced the new chips would support PCIe 4.0. Although I was pretty confident the step from 3.0 to 4.0 meant 2X greater bandwidth, I decided it was time to learn more about the PCIe bus.
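As a rough back-of-envelope sketch (not from the article itself), the 2X claim follows directly from the per-lane signaling rates: PCIe 3.0 runs at 8 GT/s and PCIe 4.0 at 16 GT/s, both with 128b/130b line encoding, so doubling the transfer rate doubles usable bandwidth:

```python
def pcie_throughput_gbs(gt_per_s: float, lanes: int = 1) -> float:
    """Approximate usable PCIe bandwidth in GB/s.

    128b/130b encoding delivers 128 payload bits per 130 line bits;
    dividing by 8 converts gigabits to gigabytes.
    """
    return gt_per_s * (128 / 130) * lanes / 8

gen3_x16 = pcie_throughput_gbs(8, lanes=16)   # ~15.8 GB/s
gen4_x16 = pcie_throughput_gbs(16, lanes=16)  # ~31.5 GB/s
print(f"PCIe 3.0 x16: {gen3_x16:.1f} GB/s, PCIe 4.0 x16: {gen4_x16:.1f} GB/s")
```

The encoding overhead cancels in the ratio, so a Gen 4 slot offers exactly twice the bandwidth of a Gen 3 slot at the same lane count.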
As High Dynamic Range (HDR) and Wide Color Gamut (e.g. BT.2020) are increasingly mandated by major industry players like Netflix and Amazon, DOPs in the broadcast realm are under intense pressure to get it right during original image capture. We all know (or learned the hard way) that the amount of detail required to produce an optimal HDR master cannot be recreated or effectively added downstream.
Most people are aware that any color can be mixed from red, green and blue light, and we make color pictures out of red, green and blue images. The relationship between modern color imaging and the human visual system was recently discussed by John Watkinson in his series on color. In this piece, we’re going to look at something that comes up often in modern film and TV technique: color gamuts. It’s a term that suffers a lot of misuse, but the basics are simple: a color image uses red, green and blue, and the gamut describes which red, which green, and which blue we’re using.
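To make "which red, which green, and which blue" concrete, here is a quick illustrative sketch (not from the article): each standard fixes its primaries as chromaticity coordinates in the CIE 1931 xy diagram, and the gamut is the triangle they enclose. Comparing the published BT.709 and BT.2020 primaries by triangle area shows how much larger the wide-gamut standard is:

```python
def triangle_area(primaries):
    """Area of the gamut triangle in CIE 1931 xy space (shoelace formula)."""
    (xr, yr), (xg, yg), (xb, yb) = primaries
    return abs(xr * (yg - yb) + xg * (yb - yr) + xb * (yr - yg)) / 2

# Published (x, y) chromaticities for the R, G, B primaries
BT709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

ratio = triangle_area(BT2020) / triangle_area(BT709)
print(f"BT.2020 gamut triangle is about {ratio:.1f}x the area of BT.709")
```

Same red, green and blue idea in both cases; the standards differ only in where those three points sit, which is exactly what "gamut" names.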
Live sports productions are the natural home for HDR. The increase in luminance latitude combined with extended color space delivers an immersive experience never before witnessed by the home viewer. But backwards compatibility must still be maintained for legacy SDR audiences.