Deep Bilateral Learning
for Real-Time Image Enhancement
SIGGRAPH 2017

Abstract

Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.
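The core of the pipeline described above is the slicing node: per-pixel affine color transforms are read out of a low-resolution bilateral grid by trilinear interpolation, using a guidance map (e.g. luminance) as the third grid coordinate, and then applied to the full-resolution image. The following NumPy sketch illustrates that step only; the grid shape, the choice of guide, and the function name are illustrative assumptions (in the paper the guide is learned and the slicing runs on the GPU), not the reference implementation.

```python
import numpy as np

def slice_and_apply(grid, image, guide):
    """Hedged sketch of bilateral-grid slicing + local affine application.

    grid:  (gh, gw, gd, 3, 4) array of affine color transforms (assumed layout)
    image: (H, W, 3) full-resolution input in [0, 1]
    guide: (H, W) guidance map in [0, 1], e.g. luminance
    """
    gh, gw, gd = grid.shape[:3]
    H, W = guide.shape

    # Continuous grid coordinates for every full-resolution pixel:
    # spatial position picks (gy, gx); the guide value picks the depth gz.
    ys = np.linspace(0, gh - 1, H)
    xs = np.linspace(0, gw - 1, W)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    gz = guide * (gd - 1)

    # Trilinearly interpolate a 3x4 affine matrix per pixel from the
    # 8 neighboring grid cells (weights sum to 1 at every pixel).
    A = np.zeros((H, W, 3, 4))
    for dy in (0, 1):
        for dx in (0, 1):
            for dz in (0, 1):
                yi = np.floor(gy).astype(int) + dy
                xi = np.floor(gx).astype(int) + dx
                zi = np.floor(gz).astype(int) + dz
                w = (np.maximum(0.0, 1.0 - np.abs(gy - yi))
                     * np.maximum(0.0, 1.0 - np.abs(gx - xi))
                     * np.maximum(0.0, 1.0 - np.abs(gz - zi)))
                yi = np.clip(yi, 0, gh - 1)
                xi = np.clip(xi, 0, gw - 1)
                zi = np.clip(zi, 0, gd - 1)
                A += w[..., None, None] * grid[yi, xi, zi]

    # Apply the sliced affine transform to each pixel's RGB value.
    return np.einsum("hwij,hwj->hwi", A[..., :3], image) + A[..., 3]
```

Because the guide enters as a grid coordinate, pixels with similar spatial position but different intensity read coefficients from different grid cells, which is what makes the upsampling edge-preserving.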

Downloads

The Android prototype is still a work in progress and has not yet been released. (updated August 5, 2017)

BibTeX

Erratum

The original version of the paper misreported the L*a*b* error numbers from [Hwang 2012] and [Yan 2016] in Table 3. This has been fixed.

Acknowledgements

We thank the SIGGRAPH reviewers for their constructive comments. Special thanks to Marc Levoy for his valuable feedback. This work was partially funded by Toyota.