BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream
Abstract: Neural implicit representation of visual scenes has attracted considerable attention in recent computer vision and graphics research. Most prior methods focus on reconstructing a 3D scene representation from a set of images. In this work, we demonstrate that it is possible to recover a neural radiance field (NeRF) from a single blurry image and its corresponding event stream. We model the camera motion with a cubic B-Spline in SE(3) space. Both the blurry image and the brightness change within a time interval can then be synthesized from the 3D scene representation, given the 6-DoF poses interpolated from the cubic B-Spline. Our method jointly learns the implicit neural scene representation and recovers the camera motion by minimizing the differences between the synthesized data and the real measurements, without pre-computed camera poses from COLMAP. We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that we are able to render view-consistent latent sharp images from the learned NeRF and bring a blurry image to life in high quality. Code and data are available at https://github.com/wu-cvgl/BeNeRF.
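The cubic B-Spline camera trajectory mentioned in the abstract can be sketched with the cumulative B-spline formulation: a pose at normalized time u is obtained from four control poses by composing exponentials of weighted relative motions. The sketch below is a minimal illustration, not the paper's implementation; it assumes 4x4 homogeneous pose matrices and uses SciPy's generic matrix exp/log in place of closed-form se(3) maps.

```python
import numpy as np
from scipy.linalg import expm, logm

# Cumulative basis matrix for a uniform cubic B-spline,
# giving weights B~(u) = M_CUM @ [1, u, u^2, u^3]^T.
M_CUM = (1.0 / 6.0) * np.array([
    [6, 0,  0,  0],
    [5, 3, -3,  1],
    [1, 3,  3, -2],
    [0, 0,  0,  1],
], dtype=float)

def spline_pose(ctrl, u):
    """Interpolate a 6-DoF pose from 4 control poses, u in [0, 1].

    T(u) = T_0 * prod_{j=1..3} exp( B~_j(u) * log(T_{j-1}^{-1} T_j) )
    """
    b = M_CUM @ np.array([1.0, u, u * u, u ** 3])  # cumulative weights
    T = ctrl[0].copy()
    for j in range(1, 4):
        # Relative motion between consecutive control poses, in se(3).
        xi = logm(np.linalg.inv(ctrl[j - 1]) @ ctrl[j]).real
        T = T @ expm(b[j] * xi)
    return T

if __name__ == "__main__":
    # Toy trajectory: control poses translated along x.
    ctrl = []
    for k in range(4):
        T = np.eye(4)
        T[0, 3] = 0.1 * k
        ctrl.append(T)
    print(np.round(spline_pose(ctrl, 0.5), 4))
```

Interpolated poses are used to render the virtual sharp images and brightness changes during optimization; note that a cubic B-spline does not pass through its control poses exactly.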
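The two measurement models the abstract describes can also be sketched: a blurry image is the average of latent sharp images rendered along the trajectory during the exposure, and an event stream reports the accumulated log-intensity change between two timestamps. This is a hedged illustration under assumed interfaces; `render` is a hypothetical stand-in for NeRF volume rendering at a given pose, not an API from the paper's code.

```python
import numpy as np

def synthesize_blur(render, poses):
    """Physical blur model: average of sharp renders over the exposure."""
    return np.mean([render(T) for T in poses], axis=0)

def brightness_change(render, T1, T2, eps=1e-6):
    """Event model: difference of log intensities between two poses."""
    return np.log(render(T2) + eps) - np.log(render(T1) + eps)

if __name__ == "__main__":
    # Toy "renderer": intensity depends only on the pose's x-translation.
    render = lambda T: np.full((4, 4), 0.5 + 0.1 * T[0, 3])
    poses = [np.eye(4) for _ in range(5)]
    print(synthesize_blur(render, poses)[0, 0])
```

Minimizing the difference between these synthesized quantities and the captured blurry image and event measurements is what drives the joint optimization of the scene representation and the spline control poses.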