In this tutorial on the science of lenses (Part 4), John Watkinson examines lens resolution and discusses how to determine the lens performance needed to get the maximum performance from your camera sensor.
We discuss the business case and technology challenges of using cinema cameras in live sports broadcast with Mark Chiolis of Mobile TV Group.
There has been an almost inevitable surge in TV production in the UK as the pandemic recedes. The way the sector has rapidly hit production capacity highlights some long-term issues with how the industry attracts and trains new talent.
Most people are aware that any color can be mixed from red, green and blue light, and we make color pictures out of red, green and blue images. The relationship between modern color imaging and the human visual system was recently discussed by John Watkinson in his series on color. In this piece, we’re going to look at something that comes up often in modern film and TV technique: color gamuts. It’s a term that suffers a lot of misuse, but the basics are simple: a color image uses red, green and blue, and the gamut describes which red, which green, and which blue we’re using.
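The idea that a gamut is simply "which red, which green, and which blue" can be made concrete: each primary is a point on the CIE 1931 xy chromaticity diagram, and the gamut is the triangle they span. The sketch below compares the published Rec.709 (HD) and Rec.2020 (UHD) primaries by triangle area; the area comparison is only a rough illustration of gamut size, not a perceptual measure.

```python
# A gamut is the triangle spanned by the chosen red, green and blue
# primaries on the CIE 1931 xy chromaticity diagram. The coordinates
# below are the published Rec.709 and Rec.2020 primary chromaticities.

def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

REC709 = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}
REC2020 = {"R": (0.708, 0.292), "G": (0.170, 0.797), "B": (0.131, 0.046)}

a709 = triangle_area(*REC709.values())
a2020 = triangle_area(*REC2020.values())
print(f"Rec.709 gamut area:  {a709:.4f}")
print(f"Rec.2020 gamut area: {a2020:.4f}")
print(f"Rec.2020 spans roughly {a2020 / a709:.2f}x the xy area of Rec.709")
```

Swapping in different primaries (DCI-P3, for instance) changes only the coordinate table, which is exactly the sense in which "gamut" names a choice of primaries rather than anything about the image content.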
All lenses suffer from various imperfections that reduce sharpness. However, even if the lens elements were ideal and caused no loss of resolution, a lens still could not focus light to a perfect point: diffraction at the aperture spreads every point into a spot of finite size.
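The size of that diffraction spot follows from standard optics: the first-null diameter of the Airy pattern is approximately 2.44 × wavelength × f-number. A short sketch makes the sensor-matching point concrete; the formula is standard, but treating the Airy diameter as a simple ceiling on useful pixel pitch is a deliberate simplification for illustration.

```python
# Even an aberration-free lens spreads a point into an Airy diffraction
# pattern. Its first-null diameter, ~2.44 * wavelength * f-number, sets a
# floor on spot size and so limits how fine a sensor pitch is useful.

WAVELENGTH_MM = 550e-6  # green light, 550 nm, expressed in millimetres

def airy_diameter_um(f_number, wavelength_mm=WAVELENGTH_MM):
    """Diameter of the Airy disk (to the first null) in micrometres."""
    return 2.44 * wavelength_mm * f_number * 1000.0

for n in (2.8, 5.6, 8, 16):
    print(f"f/{n}: Airy disk diameter ~ {airy_diameter_um(n):.1f} um")
```

At f/8 in green light the spot is already around 10.7 µm across, larger than the pixel pitch of many modern sensors, which is why stopping down past a certain point costs resolution rather than gaining it.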
Nobody’s risking much, in 2022, by assuming we’re living through the genesis of virtual production. There are enough high-profile productions happening to lend the technique some legitimacy, and while the surge in both interest and the provision of facilities makes it hard to say how demand and supply are matching up, activity is at fever pitch.
On one hand, film might seem dead. On the other, the last few months have seen a flurry of activity which is emblematic of a wider interest in the field that’s continued ever since the technology fell out of mainstream use. It’s been suggested that photochemical origination is settling into a new normal, and that’s not hard to substantiate.
Almost since photography has existed, people have pursued ways of modifying the picture after it’s been shot. The “dodge” and “burn” tools in Photoshop are widely understood as ways to make things brighter or darker, but it’s probably less widely understood that they refer to techniques for exposure control that date all the way back to the earliest days of darkroom image processing. Bring moving images into the mix and consistency becomes a big concern too. Individual still photographs might be part of a single exhibition, but they don’t have any concept of being cut together in a sequence.
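The darkroom techniques behind those tool names translate directly into digital terms: dodging holds back light so an area prints lighter, burning adds light so it prints darker, and both amount to a local exposure gain applied through a mask. The toy sketch below, with made-up pixel values and gain figures, is only an illustration of that principle, not any particular application's algorithm.

```python
# Darkroom dodging (less light, lighter print) and burning (more light,
# darker print) modelled as a local gain on 0-255 pixel values. The image
# row, mask and strength are illustrative assumptions.

def dodge_burn(pixel, mask, strength=0.5):
    """Lighten (mask > 0) or darken (mask < 0) one 0-255 pixel value."""
    adjusted = pixel * (1.0 + strength * mask)
    return max(0, min(255, round(adjusted)))

row = [40, 120, 200]
dodged = [dodge_burn(p, mask=1.0) for p in row]   # +50% gain, lighter
burned = [dodge_burn(p, mask=-1.0) for p in row]  # -50% gain, darker
print(dodged)  # brightest pixel clips at 255
print(burned)
```

The consistency problem the paragraph raises is visible even in this toy: apply a slightly different mask or strength to each frame of a sequence and the adjustment flickers from shot to shot, which is exactly what grading for moving images has to avoid.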