
Learning to Synthesize Motion Blur

Tim Brooks, Jonathan T. Barron

Summary

Teaser figure: a pair of input unblurred images and the synthesized output motion blurred image.

Realistic motion blur is a problem that has interested me as an artist for many years. In fact, for my first project in my introductory computer science class during my freshman year, I made a crude attempt to render curved motion blur. Five years later, I worked with a teammate at Google AI to use machine learning to synthesize far more realistic motion blur, which is useful both as a cue in image understanding and as a visual effect in photography, cinematography, and computer graphics.


Abstract

We present a technique for synthesizing a motion blurred image from a pair of unblurred images captured in succession. To build this system we motivate and design a differentiable “line prediction” layer to be used as part of a neural network architecture, with which we can learn a system to regress from image pairs to motion blurred images that span the capture time of the input image pair. Training this model requires an abundance of data, and so we design and execute a strategy for using frame interpolation techniques to generate a large-scale synthetic dataset of motion blurred images and their respective inputs. We additionally capture a high quality test set of real motion blurred images, synthesized from slow motion videos, with which we evaluate our model against several baseline techniques that can be used to synthesize motion blur. Our model produces higher accuracy output than our baselines, and is several orders of magnitude faster than those baselines with competitive accuracy.
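The heart of the model is the line prediction layer: for each input frame, the network predicts a per-pixel line (a 2D offset) and a set of per-pixel sample weights, and the blurred output accumulates bilinear samples taken along each pixel's line, scaled by those weights. The sketch below is a minimal PyTorch rendering of that idea under stated assumptions; the function name, tensor shapes, and fixed sample count are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


def line_prediction_blur(image, offsets, weights):
    """Weighted accumulation of bilinear samples along a per-pixel line.

    image:   (B, C, H, W) input frame.
    offsets: (B, 2, H, W) per-pixel line endpoint (dx, dy), in pixels.
    weights: (B, N, H, W) per-pixel weights for the N samples along the line.
    Returns a (B, C, H, W) accumulation of the weighted samples.
    """
    b, c, h, w = image.shape
    n = weights.shape[1]

    # Base sampling grid in the normalized [-1, 1] coordinates used by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=image.device),
        torch.linspace(-1.0, 1.0, w, device=image.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)

    # Convert pixel-space offsets into normalized-coordinate offsets.
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                         device=image.device)
    offs = offsets.permute(0, 2, 3, 1) * scale  # (B, H, W, 2)

    out = torch.zeros_like(image)
    for i in range(n):
        frac = i / max(n - 1, 1)          # fractional position along the line
        grid = base + frac * offs         # shift each pixel along its own line
        sample = F.grid_sample(image, grid, mode="bilinear",
                               padding_mode="border", align_corners=True)
        out = out + weights[:, i:i + 1] * sample
    return out
```

In this sketch, the full blurred frame would be the sum of `line_prediction_blur` applied to each of the two input images with its own predicted offsets and weights, with the weights produced by the network and normalized so each output pixel is a convex combination of its samples.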


Video


Paper
