SMPTE Webcast On 4K

The Broadcast Bridge writer, John Watkinson, reviews the recent SMPTE webcast on 4K imagery, which was presented by Mark Schubin.

It was with keen anticipation that I logged in to see Mark Schubin talk about camera design for beyond HD applications, having made sure I had my time zones sorted out so I wouldn’t miss it.

I should make clear from the outset that I have known Mark for some time, and I must declare an interest: we have worked together, so my objectivity is right out the window. A kind of solidarity exists amongst beard wearers. Also, I rely on him to make my dress style appear conservative.

Nevertheless he and I have in common the unusual characteristic of attempting to remain within the laws of physics and within the operation of the human visual system when talking about imaging. So I wasn’t expecting any hype at this event and true to form, there wasn’t any.

This is titled a 4K image; however, it is only 3840 pixels wide. The small difference between this and a true 4K image will be missed by consumers.

Image basics

The presentation began with a quick summary of the evolution of TV pictures from SD through HD to UHD, along with a correct observation that the older formats were described by their number of active lines, whereas once video became digital the number of horizontal pixels became the metric; hence 4K refers to the number of pixels across the screen. Even there pitfalls await the unwary, because 4K actually means 4 × 1024 = 4096 pixels, whereas the UHD-1 format “only” has 3840.

Whilst 4K displays and cameras exist, Mark felt that the chances of 4K being widely broadcast any time soon were not that high, and the real issue is whether 4K camera technology could be used to improve HD productions. The answer is, in principle, yes. Lenses designed for 4K can improve lesser cameras, and 4K cameras make excellent oversampling cameras for progressive HD formats. Equally, 4K displays showing material up-converted from native HD formats mean that no pixel structure is going to be visible on the viewer’s screen.
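
To make the oversampling point concrete, here is a little sketch of my own, nothing Mark showed. It uses the crudest possible down-converter, a 2x2 box filter, purely for clarity; real converters use properly designed filters. Averaging each 2x2 block of UHD pixels into one HD pixel suppresses detail above the HD Nyquist limit and, as a bonus, halves the random noise.

```python
# A minimal sketch of UHD-to-HD oversampled down-conversion using a
# 2x2 box filter (an assumption for illustration; real converters use
# properly designed low-pass filters).
import numpy as np

def uhd_to_hd_box(frame: np.ndarray) -> np.ndarray:
    """Down-convert a 2160x3840 frame to 1080x1920 by 2x2 averaging."""
    h, w = frame.shape[:2]
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    # Group the pixels into 2x2 blocks and average each block.
    return frame.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

# A simulated noisy UHD luma frame: averaging four samples halves the
# noise standard deviation (sqrt(4) = 2), one benefit of oversampling.
uhd = np.random.normal(0.5, 0.1, (2160, 3840, 1))
hd = uhd_to_hd_box(uhd)
print(uhd.std(), hd.std())   # the HD noise is roughly half the UHD noise
```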

Where there might not be an improvement is if the limiting factor is the compression codec and bit rate used for delivery. If that is iffy, putting more detail into it will just raise the coding noise.

He then addressed the topic of motion portrayal, a subject dear to my own heart. The benefits of higher resolution are only obtained if the frame rate also rises. With frame rates as they are, the bandwidth might be better spent on wider colour gamut and higher dynamic range.

Comparison of camera imager sizes.

Mark then contrasted the main types of camera: those using multiple sensors and a beam splitter, and those using a single sensor with some form of Bayer filter. He made a good job of outlining, pretty fairly in my view, the pros and cons of each. The size of sensors and photosites and the effect on noise was also explored.

I thought I heard the noise described as coming from the light, and if that was indeed what was said, I don’t agree. Whilst there must be some noise in the light, else it would have infinite information capacity, much of the noise is generated in the sensor and the early stages of the camera electronics. The fact is that larger photosites produce less noise, because the greater area allows their self-noise to average out.
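
Since the distinction matters, here is a toy simulation of my own (not something from the webcast) with both noise sources present: Poisson shot noise, which really does come from the light, and a fixed read noise from the sensor electronics. Quadrupling the photosite area quadruples the photon count, so the shot-noise SNR doubles, and the fixed self-noise shrinks relative to the signal.

```python
# A toy model of photosite noise. The read-noise figure is an assumed
# value for illustration, not a measurement of any real sensor.
import numpy as np

rng = np.random.default_rng(1)
READ_NOISE_E = 5.0          # sensor self-noise in electrons RMS (assumed)

def snr_db(mean_photons: float, n: int = 100_000) -> float:
    """SNR of a photosite collecting `mean_photons` per exposure."""
    electrons = rng.poisson(mean_photons, n)                # shot noise
    electrons = electrons + rng.normal(0, READ_NOISE_E, n)  # self-noise
    return 20 * np.log10(electrons.mean() / electrons.std())

for photons in (100, 400, 1600):    # i.e. 1x, 4x and 16x photosite area
    print(f"{photons:5d} photons -> SNR {snr_db(photons):5.1f} dB")
```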

Getting more resolution and low noise requires larger sensors, but then the depth of field goes down, which as he pointed out, is not necessarily a bad thing.

Thus making photosites smaller to cram more pixels onto a given size of sensor could run into a noise problem. Fortunately CMOS is taking over from CCD, and it is inherently a lower-noise sensor technology. This is just as well, as HDR by definition requires a low-noise sensor. Maybe combining HDR and 4K is trying to have your cake and eat it?

Beam splitter cameras are relatively easy to understand since there’s a sensor for each primary, and each one can have the same pixel count and therefore the same resolution. This got a little more complex when the four-sensor camera, having two green chips, arrived. There’s a physical shift between the two greens, so a bit more resolution can be squeezed out by combining the sensor outputs.
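
The principle is easiest to see in one dimension, so here is a sketch of my own, not Mark’s: two sample grids offset by half a photosite pitch interleave into a single grid with half the pitch, doubling the sampling rate and therefore the Nyquist limit.

```python
# A one-dimensional toy of the dual-green idea: interleaving two
# half-pitch-shifted sample grids doubles the effective sampling rate.
import numpy as np

PITCH = 1.0                                # photosite pitch, arbitrary units
x_a = np.arange(0, 16, PITCH)              # sample positions, green sensor A
x_b = x_a + PITCH / 2                      # green sensor B, half a pitch over

combined = np.sort(np.concatenate([x_a, x_b]))
print("combined pitch:", np.diff(combined)[0])            # 0.5, i.e. halved
print("one-sensor Nyquist:", 1 / (2 * PITCH), "cycles/pitch")
print("combined Nyquist:  ", 1 / (2 * (PITCH / 2)), "cycles/pitch")
```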

Clearly in a single sensor camera you can’t get colour information for nothing. Half of the photosites are green, with the rest split between red and blue. Of course if you don’t know the difference between a photosite and a pixel, as a lot of marketing literature seems not to, the single chip camera seems to lose resolution. Once you know that it needs four photosites per pixel that problem goes away. The three chip camera needs three photosites per pixel, one in each chip, so it may seem more efficient, but then you have to pay for a beam splitter and for the cost of registering the three sensors in three dimensions.
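
The arithmetic is worth spelling out, so here is a back-of-envelope version of my own (the sensor size is just an example):

```python
# Photosite-versus-pixel bookkeeping. In a Bayer sensor each 2x2 tile
# of photosites (one red, two green, one blue) yields one full-colour
# pixel; a three-chip camera spends one photosite per chip per pixel.
bayer_tile = [["G", "R"],
              ["B", "G"]]                 # the classic Bayer 2x2 pattern

photosites_per_pixel_bayer = sum(len(row) for row in bayer_tile)   # 4
photosites_per_pixel_three_chip = 3       # one on each of the R, G, B chips

sensor_photosites = 3840 * 2160           # an example single-chip UHD sensor
full_colour_pixels = sensor_photosites // photosites_per_pixel_bayer
print(full_colour_pixels)                 # 2,073,600 = 1920 x 1080 pixels
```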

As he correctly noted, the resolution of red and blue in a Bayer sensor is less than that of green, and with a single anti-aliasing filter you can’t filter for both resolutions. What he might have added is that it really doesn’t matter, as aliasing isn’t much of a problem in the real world because of motion blur, and even less of a problem if oversampling is used.
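
For the curious, the numbers behind that remark look something like this. This is a simplified calculation of my own (assuming the usual Bayer geometry and an arbitrary pitch), not anything shown in the webcast: red and blue photosites sit on a grid with twice the pitch of the full sensor grid, so along each axis their Nyquist limit is half that of green, and a single optical low-pass filter can only be matched to one of the two cut-offs.

```python
# Nyquist limits in a Bayer sensor, under the usual simplification that
# green's effective linear sampling density along each axis is twice
# that of red or blue. The 5 um pitch is an assumed example figure.
PITCH_UM = 5.0                                   # photosite pitch (assumed)

nyquist_green = 1 / (2 * PITCH_UM)               # cycles per micron
nyquist_red_blue = 1 / (2 * (2 * PITCH_UM))      # red/blue pitch is doubled

print(f"green Nyquist:    {nyquist_green:.3f} cycles/um")
print(f"red/blue Nyquist: {nyquist_red_blue:.3f} cycles/um")
```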

So what is my overall take on the webinar? Well, I’m not the target audience, and I can’t pretend I am. However, being who I am, I can put my hand on my heart and say that, very minor quibbles aside, the technical content Mark Schubin presented here was a comprehensive, fair and accurate summary of the state of cameras, and that must be of considerable value for those trying to sift the facts from the marketing.

If I have one criticism, it is that Mark’s web cam was so poorly positioned that the audience was looking up his nose and also got a stunning view of the paint flaking off his ceiling. Gotta do better….

Webcast presenter Mark Schubin. Image courtesy IEEE.org
