Deep Joint Demosaicking and Denoising
SIGGRAPH Asia 2016

Abstract

Demosaicking and denoising are the key first stages of the digital imaging pipeline, but together they constitute a severely ill-posed problem: three color values must be inferred per pixel from a single noisy measurement. Earlier methods rely on hand-crafted filters or priors and still exhibit disturbing visual artifacts in hard cases such as moiré or thin edges. We introduce a new data-driven approach to these challenges: we train a deep neural network on a large corpus of images instead of using hand-tuned filters. While deep learning has shown great success, its naive application using existing training datasets does not give satisfactory results for our problem because these datasets lack hard cases. To create a better training set, we present metrics to identify difficult patches and techniques for mining community photographs for such patches. Our experiments show that this network and training procedure outperform the state of the art on both noisy and noise-free data. Furthermore, our algorithm is an order of magnitude faster than the previous best-performing techniques.
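To make the problem setup concrete, the sketch below simulates the sensor measurement the network must invert: a full RGB image is reduced to a single noisy sample per pixel. This is an illustrative sketch only, not the paper's code; the RGGB layout and additive Gaussian noise are assumptions (real sensor noise is closer to Poisson-Gaussian).

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer mosaic: keep one color sample per pixel.

    rgb: float array of shape (H, W, 3) with even H and W.
    Returns a (H, W) single-channel mosaic.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def add_noise(mosaic, sigma=0.02, rng=None):
    """Corrupt the mosaic with Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    return np.clip(mosaic + rng.normal(0.0, sigma, mosaic.shape), 0.0, 1.0)
```

A joint demosaicking-and-denoising network takes the noisy single-channel mosaic (and, in the noisy setting, the noise level) as input and regresses the full three-channel RGB image.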

Acknowledgements

We thank the SIGGRAPH reviewers for their constructive comments. We gratefully acknowledge NVIDIA for the generous donation of a Tesla K40 GPU. Thanks to Eric Chan for his invaluable expertise and precious feedback. Thanks to Sebastian Nowozin, Felix Heide, and Jan Kautz for help with the comparisons. Thanks to Tiam Jaroensri for help with the hardware. This work was partially funded by a gift from Adobe.