Can Artificial Intelligence Create Low-Light Images Without Noise?

Noise is found in all imaging systems, but it becomes particularly challenging in low light. High ISO can be used to increase brightness, but it also amplifies noise. Post-processing can be applied, but it does not resolve the low signal-to-noise ratio due to low photon counts. Is artificial intelligence the answer to clean low-light images?
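The photon-count problem can be illustrated with a short simulation. Photon arrival follows Poisson statistics, so for a mean of N photons per pixel the noise is sqrt(N) and the signal-to-noise ratio is sqrt(N); raising ISO multiplies signal and noise by the same gain and cannot improve that ratio. This is a minimal sketch of that relationship, not anything from the researchers' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shot noise is Poisson-distributed: for a mean of N photons per pixel,
# the standard deviation is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
for mean_photons in (10, 100, 10_000):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{mean_photons:>6} photons -> SNR ~ {snr:.1f} (sqrt(N) = {mean_photons**0.5:.1f})")
```

Going from bright light (thousands of photons per pixel) to sub-lux conditions (tens of photons) therefore costs an order of magnitude in SNR before any processing is applied.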

Researchers at the University of Illinois Urbana–Champaign and Intel are working on the low-light issue and have developed a deep neural network that brightens ultra-low-light images without adding noise or other artifacts.

A new white paper says the neural network was trained on a “See-in-the-Dark” (SID) dataset of 5,094 raw short-exposure low-light images, each paired with a corresponding long-exposure reference. The result is a system that automatically brightens images at much higher quality than traditional processing options.

The dataset contains both indoor and outdoor images. The outdoor images were generally captured at night, under moonlight or street lighting. The illuminance at the camera in the outdoor scenes is generally between 0.2 and 5 lux.

The indoor images are even darker. They were captured in closed rooms with the regular lights turned off and with faint indirect illumination set up for this purpose. The illuminance at the camera in the indoor scenes is generally between 0.03 and 0.3 lux.

Alternative denoising, deblurring and enhancement techniques leave high levels of noise that the machine learning technique avoids. The researchers used images captured with a Fujifilm X-T2 and a Sony a7S II, and also demonstrated the system on images taken with an iPhone X and a Google Pixel 2 smartphone.

The researchers noted that fast, low-light imaging is a formidable challenge due to low photon counts and a low signal-to-noise ratio. Imaging in the dark, at video rates, in sub-lux conditions, was considered impractical with traditional signal processing techniques.

However, with the See-in-the-Dark (SID) dataset the team has developed a simple pipeline that improves upon traditional processing of low-light images. The presented pipeline is based on end-to-end training of a fully-convolutional network. Experiments demonstrate promising results, with successful noise suppression and correct color transformation on SID data.
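The front end of such a pipeline typically packs the raw Bayer mosaic into four color channels, subtracts the black level, and multiplies by the desired brightening ratio before the fully-convolutional network sees the data. The sketch below illustrates that preprocessing step; the function names and the black/white levels (typical of a 14-bit sensor) are illustrative assumptions, not the researchers' actual code:

```python
import numpy as np

def pack_bayer(raw, black_level=512, white_level=16383):
    """Pack an H x W Bayer mosaic into an H/2 x W/2 x 4 array
    (R, G, B, G), normalized to [0, 1]."""
    raw = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    raw = np.clip(raw, 0.0, 1.0)
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G
                     raw[1::2, 1::2],   # B
                     raw[1::2, 0::2]],  # G
                    axis=-1)

def preprocess(raw, exposure_ratio):
    """Scale the packed raw data by the desired brightening factor
    (e.g. 100x or 300x) before feeding it to the network."""
    return pack_bayer(raw) * exposure_ratio

# The network then maps this packed input directly to a full-resolution
# RGB image, learning denoising and color transformation end to end.
dark_frame = np.random.randint(512, 700, size=(8, 8), dtype=np.uint16)
x = preprocess(dark_frame, exposure_ratio=300)
print(x.shape)  # (4, 4, 4)
```

Because the network is trained end to end on short/long exposure pairs, there is no hand-tuned denoising or demosaicing stage; the amplification ratio is the only externally supplied parameter.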

“We expect future work to yield further improvements in image quality, for example by systematically optimizing the network architecture and training procedure,” the team wrote.

The researchers are using machine learning, a form of artificial intelligence (AI), to automatically enhance images in low light. Other companies are also exploring AI to repair damage or to enhance effects. Since all modern camera sensors are now used for both video and stills, AI could be a major future breakthrough for low-light videographers.

For the full white paper, go here.

Image caption: The structure of different image processing pipelines. In illustration A, from top to bottom: a traditional image processing pipeline, the L3 pipeline and a burst imaging pipeline. In illustration B, the researchers’ pipeline.
