Noise is found in all imaging systems, but it becomes particularly challenging in low light. High ISO can be used to increase brightness, but it also amplifies noise. Post-processing can be applied, but it does not resolve the low signal-to-noise ratio due to low photon counts. Is artificial intelligence the answer to clean low-light images?
Researchers at the University of Illinois Urbana–Champaign and Intel are working on the low-light issue and have developed a deep neural network that brightens ultra-low light images without adding noise and other artifacts.
A new white paper says the neural network was trained on a “See-in-the-Dark” (SID) dataset of 5,094 raw short-exposure low-light images, each paired with a corresponding long-exposure reference image. The result is a system that automatically brightens images at much higher quality than traditional processing options.
The dataset contains both indoor and outdoor images. The outdoor images were generally captured at night, under moonlight or street lighting. The illuminance at the camera in the outdoor scenes is generally between 0.2 and 5 lux.
The indoor images are even darker. They were captured in closed rooms with the regular lights turned off and with faint indirect illumination set up for this purpose. The illuminance at the camera in the indoor scenes is generally between 0.03 and 0.3 lux.
Alternative denoising, deblurring and enhancement techniques leave high levels of noise that are absent when using the machine learning technique. The researchers used images captured with a Fujifilm X-T2 and a Sony a7S II, and also demonstrated the system on images taken with an iPhone X and a Google Pixel 2 smartphone.
The researchers noted that fast, low-light imaging is a formidable challenge due to low photon counts and a low signal-to-noise ratio. Imaging in the dark, at video rates, in sub-lux conditions, was considered impractical with traditional signal processing techniques.
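The "low photon counts and low signal-to-noise ratio" problem follows directly from photon shot noise: photon arrivals are Poisson-distributed, so the SNR grows only as the square root of the photon count. A short simulation illustrates this (the photon counts chosen are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise_snr(mean_photons, n_samples=100_000):
    """Simulate Poisson photon arrivals and return the measured SNR (mean/std)."""
    counts = rng.poisson(mean_photons, size=n_samples)
    return counts.mean() / counts.std()

# SNR scales as sqrt(N): a 100x dimmer scene is ~10x noisier, not 100x,
# which is why sub-lux imaging is so hard to clean up after the fact.
for n in (10_000, 100, 1):
    print(f"{n:>6} photons/pixel -> SNR ~ {shot_noise_snr(n):.1f} "
          f"(theory: {np.sqrt(n):.1f})")
```

This is why simply amplifying the signal (high ISO or post-processing gain) cannot recover a clean image: the noise is baked into the capture itself.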
However, with the See-in-the-Dark (SID) dataset the team has developed a simple pipeline that improves upon traditional processing of low-light images. The presented pipeline is based on end-to-end training of a fully-convolutional network. Experiments demonstrate promising results, with successful noise suppression and correct color transformation on SID data.
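The front end of that pipeline, as described in the paper, packs the raw Bayer mosaic into four half-resolution channels, subtracts the black level, and scales by an amplification ratio (the exposure-time ratio between the long- and short-exposure frames) before the data enters the fully-convolutional network. A minimal sketch, with illustrative black/white levels and ratio (the exact values depend on the sensor and exposure pair):

```python
import numpy as np

def pack_bayer(raw, black_level=512, white_level=16383, ratio=100.0):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels,
    normalize away the black level, and scale by the amplification ratio.
    black_level/white_level/ratio here are illustrative assumptions."""
    im = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    im = np.clip(im, 0.0, 1.0)
    packed = np.stack([im[0::2, 0::2],   # R
                       im[0::2, 1::2],   # G
                       im[1::2, 0::2],   # G
                       im[1::2, 1::2]],  # B
                      axis=0)
    # Amplification ratio = long exposure time / short exposure time,
    # so the network's target brightness matches the reference frame.
    return packed * ratio

# Example: a fake 4x4 Bayer frame becomes a (4, 2, 2) tensor
# ready for a fully-convolutional network.
dark_frame = np.full((4, 4), 600, dtype=np.uint16)
print(pack_bayer(dark_frame).shape)  # (4, 2, 2)
```

The network then maps this packed, amplified tensor directly to a full-color output image, replacing the traditional demosaic/denoise/white-balance/tone-mapping chain end to end.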
“We expect future work to yield further improvements in image quality, for example by systematically optimizing the network architecture and training procedure,” the team wrote.
The researchers are using machine learning, a form of artificial intelligence (AI), to automatically enhance images in low light. Other companies are also exploring AI to repair damaged footage or to enhance effects. Since modern camera sensors are used for both video and stills, AI could be a major future breakthrough for low-light videographers.
For the full white paper, go here.
The structure of different image processing pipelines. In illustration A, from top to bottom: a traditional image processing pipeline, the L3 pipeline and a burst imaging pipeline. In illustration B, the researchers’ pipeline.